International Journal of Advanced Robotic Systems ARTICLE

A ToF-camera as a 3D Vision Sensor for Autonomous Mobile Robotics

Regular Paper

Sobers Lourdu Xavier Francis1*, Sreenatha G. Anavatti1, Matthew Garratt1 and Hyungbo Shim2

1 University of New South Wales, Canberra, Australia
2 Seoul National University, Seoul, Korea
* Corresponding author E-mail: [email protected]

Received 03 February 2015; Accepted 25 August 2015

DOI: 10.5772/61348

© 2015 Author(s). Licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The aim of this paper is to deploy a time-of-flight (ToF) based photonic mixer device (PMD) camera on an Autonomous Ground Vehicle (AGV) whose overall target is to traverse from one point to another in hazardous and hostile environments, employing obstacle avoidance without human intervention. The hypothesized approach of applying a ToF camera to an AGV is well suited to autonomous robotics because the ToF camera can provide three-dimensional (3D) information at a low computational cost. It is used to extract information about obstacles after calibration and ground testing, and is mounted on and integrated with the Pioneer mobile robot. The workspace is a two-dimensional (2D) world map divided into grid cells, where the collision-free path defined by the graph search algorithm is a sequence of cells the AGV can traverse to reach the target. PMD depth data are used to populate traversable areas and obstacles in a grid of cells of suitable size, and the camera data are converted into Cartesian coordinates for entry into the workspace grid map. A more optimal camera mounting angle is needed and is adopted by analysing discrepancies in the camera's performance, such as pixel detection, the detection rate, the maximum perceived distances, and infrared (IR) scattering with respect to the ground surface. This mounting angle is recommended to be half the vertical field-of-view (FoV) of the PMD camera. A series of still and moving tests are conducted on the AGV to verify correct sensor operation; these show that the postulated application of the ToF camera on the AGV is not straightforward. Finally, to stabilize the moving PMD camera and to detect obstacles, a feature-tracking algorithm and the scene-flow technique are implemented in a real-time experiment.

Keywords 3D ToF Camera, PMD Camera, Pioneer Mobile Robots, Path-planning

1. Introduction

As an AGV must be able to adequately sense its surroundings in order to operate in unknown environments and execute autonomous tasks, vision sensors provide the information required for it to perceive and avoid obstacles and so accomplish autonomous path-planning. Hence, the perception sensor becomes the key sensory device for perceiving the environment in intelligent mobile robots, and the perception objective depends on three basic system qualities, namely rapidity, compactness and robustness [1].

Over the last few decades, many different types of sensors [2] have been developed in the context of AGV path-planning to avoid obstacles [3], such as infrared sensors [4], ultrasonic sensors, sonar [5], LADAR [6], laser rangefinders [7], camera data fused with radar [8] and stereo cameras with a projector [9]. These sensors' data, along with data processing techniques, are used to update the positions and directions of the vertices of obstacles. However, such sensor systems cannot easily provide the necessary information about arbitrary surroundings.

As the world's attention increasingly focuses on automation in every field, extracting 3D information about an obstacle has become a topical and challenging task. As such, an appropriate sensor is required that can obtain 3D information while having small dimensions, low power consumption and real-time performance. The main limitation of a 2D camera is that, since it projects 3D information onto a 2D image plane, it cannot provide complete information about the scene; the processing of these images therefore depends on the viewpoint rather than on the actual information about the object. To overcome this drawback, the use of 3D information has emerged. In general, researchers use a setup consisting of a charge-coupled device (CCD) camera and a light projector to obtain a 3D image, such as the 3D visualization of rock in [9].

A 3D sensor based on photonic mixer device (PMD) technology is selected for our work, as it delivers range and intensity data at low computational cost together with a compact and economical design. This camera system delivers the absolute geometrical dimensions of obstacles without depending upon the object surface, distance, rotation or illumination; hence, it is rotation-, translation- and illumination-invariant [10]. Nowadays, RGBD cameras (e.g., Kinect, Asus Xtion, Carmine) are widely used in object recognition and mobile robotics applications; however, these RGBD cameras cannot operate in outdoor environments [11]. A PMD camera with a working range of 0.2 - 7 metres provides better depth precision than the Kinect (0.7 - 3.5 metres) and the Carmine (0.35 - 1.4 metres).

However, the PMD camera is constrained by its limited FoV, which makes it necessary to tilt the camera downwards to obtain a greater view of the ground. The specific mounting angle is explained in this paper; nonetheless, it was unexpected that light incident at different angles to the ground would result in significant receiver loss and distortion of the distance measurements due to scattering. The camera is mounted on the front of the robot through brackets that enable variable camera mounting angles at a static angle of tilt; this configuration is expected to give the best compromise between ground conception and straight-ahead conception as a function of the camera tilt angle ψ. Because the camera is mounted above the robot, closer obstacles need to be flagged so as to reduce the blind spot in front of the robot. Because of these considerations, a more optimal camera angle is adopted. To ensure that the top-most pixels are observed directly ahead of the robot, thereby giving an adequate conception of obstacles and maximizing the ground plane conception, various analyses of the camera performance are carried out in this paper.

Later, the parametric calibration of the PMD cameras is performed by obtaining the necessary camera parameters to derive the true information of the imaging scene. The imaging technology of the PMD camera is thereby better understood: the camera pixels provide a complete image of the scene, with each pixel detecting range data stored in a 2D array, which are utilized and interpreted in this paper to extract information about the surroundings. A few experiments are carried out to measure the camera's parameters and the distance errors with respect to each pixel.

To determine how the camera behaves in an environment similar to that claimed by the manufacturer's data sheet, white-surface testing and grass-surface testing are conducted with the PMD camera. This also provides a means of comparing the performance of the camera on a flat white surface with that on a flat grassy surface. Later, the camera data are synchronized with the instantaneous orientation and position of the platform (and thus the camera), which translates the Cartesian coordinates into grid squares. This reconstructs the ground region (the extremities that the camera can see) into grid cells of suitable size, which are input to the path-planning algorithm.

During the real-time experimentation, the grid-based Efficient D* Lite path-planning algorithm and the scene-flow technique were programmed on the Pioneer onboard computers using the OpenCV and OpenGL libraries. In order to compensate for the ego-motion of the PMD camera, which is aligned with the AGV coordinates, a feature detection algorithm using GoodFeaturesToTrack from the OpenCV library is adopted.
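
A rough illustration of this step, using the OpenCV Python bindings rather than the authors' onboard implementation, is sketched below. The frame variables, the parameter values and the use of pyramidal Lucas-Kanade tracking are our assumptions and are shown only to make the idea concrete.

```python
import cv2
import numpy as np

def estimate_ego_motion(prev_gray, curr_gray):
    """Track strong corners between two 8-bit intensity frames and return
    the median pixel shift, a crude proxy for the camera's ego-motion.
    All parameter values below are illustrative, not the authors' settings."""
    # Detect Shi-Tomasi corners ("good features to track") in the previous frame.
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                      qualityLevel=0.01, minDistance=5)
    if corners is None:
        return np.zeros(2)
    # Track the corners into the current frame with pyramidal Lucas-Kanade flow.
    tracked, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                  corners, None)
    good_old = corners[status.ravel() == 1].reshape(-1, 2)
    good_new = tracked[status.ravel() == 1].reshape(-1, 2)
    if len(good_old) == 0:
        return np.zeros(2)
    # The median displacement is robust to a few independently moving features.
    return np.median(good_new - good_old, axis=0)
```
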
The paper is organized as follows: a brief comparison of the 3D sensors and their fundamental principles is presented in Section 2. In Section 3, the calibration of the PMD cameras is performed by parametric calibration. Section 4 presents the manipulation of the PMD camera data. In Section 5, several standardized tests are devised to characterize the PMD camera, and Section 6 describes the experimental results.

2. ToF-based 3D Cameras - state-of-the-art

Nowadays, 3D data are required in the automation industries for analysing the visible space/environment, and the rapid acquisition of 3D data by a robotic system is required for navigation and control applications. New 3D cameras at affordable prices have been successfully developed using the ToF principle to resemble LIDAR scanners. In a ToF camera unit, a modulated light pulse is transmitted by the illumination source and the target distance is measured from the time taken by the pulse to reflect from the target back to the receiving unit.

PMD Technologies has developed 3D sensors using the ToF principle, which provide for a wide range of field applications with high integration and cost-effective production [12].

ToF cameras do not suffer from missing texture in the scene or bad lighting conditions, are computationally less expensive than stereo vision systems and - compared with laser scanners - have higher frame rates and more compact sensors, advantages which make them ideally suited for 3D perception and motion reconstruction [13]. The following advantages of ToF cameras are noted in the literature:

• Simplicity: compared with 3D vision systems, a ToF-based system is very simple and compact, consisting of no moving parts and having built-in illumination adjacent to its lens.

• Efficiency: only a small amount of processing power is required to extract distance information using a ToF camera.

• Speed: in contrast to laser scanners that move and measure point-by-point, ToF cameras measure a complete scene with one shot at up to 100 frames-per-second (fps), much faster than their laser alternatives.

In addition, ToF cameras have been applied in robotic applications for obstacle avoidance [14, 15]. There are many other applications in various fields [16] that have gained substantial research interest following the advent of the range sensor, such as robotic and machine vision for mobile robot search and rescue [17, 18], path-planning for manipulators [19], the acquisition of 3D scene geometry [20], 3D sensing for automated vehicle guidance and safety systems, wheelchair assistance [21], outdoor surveillance [22], simultaneous localization and mapping (SLAM) [23], map building [24], medical respiratory motion detection [25], robot navigation [26], semantic scene analysis [27], mixed/augmented reality [28], gesture recognition [29], markerless human motion tracking [30], human body tracking and activity recognition [31], 3D reconstruction [32], domestic cleaning tasks [33] and human-machine interaction [34, 35].

The authors in [36] used stereo cameras to identify the 3D orientation of an object, and ToF cameras have been used for 3D object scanning employing different approaches, such as passive image-based and super-resolution methods with probabilistic scan alignment [37]. The 3D range camera obtains range data and locates the positions of objects in its FoV [38], while active sensing technologies have been used as an active safety system for construction applications and accident avoidance [39].

The ToF camera has been used to detect obstacles of standard size, and it can be used for obstacle and travel-path detection for the blind and visually impaired by combining 3D range data with stereo audio feedback [21], using an algorithm that segments obstacles according to intensity and the 3D structure of the range data. Gesture recognition has been performed based on shape contexts and simple binary matching [40], whereby motion information is extracted by matching the difference in the range data of different image frames. The measurement of the shapes and deformations of metal objects and structures using ToF range information and heterodyne imaging is discussed in [41].

2.1 The Basic ToF Principle

A single pixel consists of a photo-sensitive element (e.g., a photo diode) which converts incoming light into current. The distance between the camera and an object is determined by the ToF principle: the time taken by the light to travel from the illumination unit to the receiver is directly proportional to the distance travelled by the light [42], with the delay time tD given by

    tD = 2 × Dobj / co

where Dobj is the object distance in metres and co is the velocity of light in m/s.

The pulse width t of the illumination determines the maximum range that the camera can handle, which can be determined by

    Dmax = (1/2) × co × t

The distance between the sensor and the object is half the total distance travelled by the radiation. The two distance measurement methods described by T. Kahlmann et al. [42] are pulse run-time measurement and phase-shift determination.

In the ToF camera, the distance between the camera and the obstacle is calculated from the autocorrelation function of the electrical and optical signals, which is analysed by a phase-shift algorithm (Figure 1). Using four samples A1, A2, A3 and A4, each shifted by 90 degrees, the phase of the received signal - which is proportional to the distance - can be calculated as follows [43].

Figure 1. Phase-shift distance measurement principle: optical sinusoidally modulated input signal, sampled with four sampling points per modulation period [11]

The phase shift, φ, is

    φ = arctan( (A1 - A3) / (A2 - A4) )    (1)

In addition to the phase shift of the signal, the signal strength of the received signal (amplitude), ar, and the offset of the samples, br, which represents the greyscale value of each pixel, can be determined by

    ar = sqrt( (A1 - A3)² + (A2 - A4)² ) / 2    (2)

    br = (A1 + A2 + A3 + A4) / 4    (3)

The depth data dr is calculated from the phase information φ as

    dr = ( co / (2 × fmod) ) × ( φ / 2π )    (4)

where fmod is the modulation frequency and co is the speed of light.
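
For clarity, the sketch below evaluates Equations (1)-(4) for one pixel in Python. The sample values are invented, and the 23 MHz modulation frequency of the cameras (Tables 1 and 2) is used as the default; this is our own illustration, not the manufacturer's processing chain.

```python
import math

C0 = 299_792_458.0  # speed of light [m/s]

def demodulate(a1, a2, a3, a4, f_mod=23e6):
    """Recover phase, amplitude, offset and depth from the four samples
    A1..A4 of the correlation function (Equations (1)-(4))."""
    phase = math.atan2(a1 - a3, a2 - a4)          # Eq. (1); atan2 keeps the quadrant
    amplitude = math.hypot(a1 - a3, a2 - a4) / 2  # Eq. (2); signal strength a_r
    offset = (a1 + a2 + a3 + a4) / 4              # Eq. (3); greyscale value b_r
    if phase < 0:                                 # map the phase into [0, 2*pi)
        phase += 2 * math.pi
    depth = C0 / (2 * f_mod) * phase / (2 * math.pi)  # Eq. (4); metres
    return phase, amplitude, offset, depth

# Example with four hypothetical samples from one pixel:
print(demodulate(1.10, 0.80, 0.30, 0.60))
```
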

The PMD CamCube 2.0 and PMD S3 cameras are shown in Figures 2 and 3 and their specifications are listed in Tables 1 and 2.

Figure 2. PMD[vision] CamCube 2.0 camera

Figure 3. PMD[vision] S3 camera

Table 1. PMD CamCube 2.0 specification
Integration time: 200 ms
Modulation frequency: 23 MHz
Wavelength: 13 m
Pixel resolution: 200[H] x 200[V]
FoV V/H [°]: 40/40
Internal illumination [nm]: 870

Table 2. PMD[vision] S3 specification
Integration time: 200 ms
Modulation frequency: 23 MHz
Pixel resolution: 64[H] x 48[V]
FoV H/V [°]: 40/30
Internal illumination [nm]: 850
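
As a side calculation derived from Equation (4), rather than stated by the authors: with the 23 MHz modulation frequency of Tables 1 and 2, the modulation wavelength is co/fmod = 3 × 10^8 / (23 × 10^6) ≈ 13 m (the "Wavelength" entry in Table 1), so the phase measurement becomes ambiguous beyond co/(2 × fmod) ≈ 6.5 m, which is consistent with the 0.2 - 7 metre working range quoted in the Introduction.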

3. Parametric Calibration

The PMD CamCube 2.0 and PMD S3 cameras, developed by PMD Technologies Inc. and based on the ToF principle, are used. The PMD CamCube 2.0 camera's receiver optics has 200 by 200 pixels with an FoV of 40(H)/40(V) degrees. The PMD S3 has an FoV of 40(H)/30(V) degrees with 64 horizontal and 48 vertical pixels, and is thus able to return 3,072 distance measurements in total.

In this section, the calibration of the PMD cameras is performed by a parametric calibration procedure that provides the necessary camera parameters. These parameters can be used to derive the true information of the imaging scene. The procedure also provides the translational vector and rotational matrix in 3D space for the camera, which can be used to find the true information about the coordinate positions of the camera and the imaging scenario.

Basically, the intrinsic parameters of the camera provide the transformation between the image coordinates and the pixel coordinates of the camera [44, 45]. The intrinsic matrix is given by

    [ fcam/Sx   0         Cx ]
    [ 0         fcam/Sy   Cy ]    (5)
    [ 0         0         1  ]

where fcam is the focal length of the camera, Sx and Sy are the pixel sizes of the camera in the x and y axes respectively, and Cx and Cy are the principal points of the sensor array. To obtain the camera parameters, an experimental setup is developed in which the calibration process of the PMD CamCube 2.0 and PMD S3 cameras is performed by capturing intensity and depth images of a chequerboard at different orientations, as shown in Figures 4 and 5. The chequerboard has black and white squares of known dimensions, printed on 21 cm x 29.7 cm A4 paper. OpenCV² is used to estimate the intrinsic parameters as well as the radial distortion, on a laptop running Debian 6.0.

Figure 4. Parametric calibration (corners: x=9, y=6; spacing x=y=27 mm)

Figure 5. Parametric calibration (corners: x=5, y=7; spacing x=y=40 mm)

² OpenCV (Open Source Computer Vision Library) is a library of programming functions mainly aimed at real-time computer vision, developed by Intel [46].
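
A minimal sketch of this chequerboard procedure with the OpenCV Python bindings is given below; the paper used OpenCV on a Debian laptop, but the exact code is not published. The 9 x 6 corner grid and 27 mm spacing correspond to the first chequerboard, while the image file names and the sub-pixel refinement step are our assumptions.

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)   # inner corners of the first chequerboard
SQUARE = 0.027     # corner spacing in metres (27 mm)

# 3D object points of the chequerboard corners in its own plane (z = 0).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points = [], []
for fname in glob.glob("pmd_intensity_*.png"):       # assumed file names
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (5, 5), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsic matrix (Eq. (5)) and distortion coefficients (cf. Eqs (8)-(9)),
# assuming at least one chequerboard image was found.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection error:", rms)
print("intrinsic matrix:\n", K)
print("distortion coefficients:", dist.ravel())
```
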

The calibration test is also conducted with three different chequerboards, with (9 x 6), (5 x 7) and (6 x 4) corners and corner spacings of 27 mm, 40 mm and 100 mm, respectively. The manufacturer's values for the PMD CamCube 2.0 camera are:

• pixel size, Sx = Sy = 45 μm
• aspect ratio = 1, skew = 0

• focal length, fcam = 12.8 mm

The manufacturer's values for the PMD S3 camera are:

• pixel size, Sx = Sy = 100 μm
• aspect ratio = 1.333, skew = 0
• focal length, fcam = 8.4 mm

The main parameters of the intrinsic matrix are the focal length, the principal point, the skew coefficient and the distortions. These define the optical, geometric and digital characteristics of the camera and are calculated using Equations 6 and 7.

    Icube2.0 = [ 2.8951e2    0            1.0297e2   ]
               [ 0           2.89268e2    1.016920e2 ]    (6)
               [ 0           0            1          ]

    IS3 = [ 83.88370514   0             31.98480415 ]
          [ 0             84.15943909   29.34420967 ]    (7)
          [ 0             0             1           ]

The distortion is caused mainly by the camera optics and is directly proportional to its focal length. The radial distortion and tangential distortion are the lens distortion effects introduced by real lenses. Generally, there are four distortion parameters, i.e., two radial and two tangential distortion coefficients, which are calculated as:

    Distortioncube2.0 = [ -0.4256   0.8573e-01   -1.3939e-03   2.0328e-03 ]    (8)

    DistortionS3 = [ -0.4394   1.0112   -4.4002e-03   -7.8725e-05 ]    (9)

Table 3. Average calibration results of a 200 x 200 PMD CamCube 2.0 camera
Intrinsic parameters | Calibration parameters
Principal point (Cx, Cy): 102.97, 101.69
Radial distortion: -0.4256, 0.0857
Tangential distortion: -0.001394, 0.002033

Table 4. Average calibration results of a PMD S3 camera
Intrinsic parameters | Calibration parameters
Radial distortion: -0.4394, 1.0112
Tangential distortion: -0.0044002, -7.8725e-05

By calibrating the camera, the intrinsic and extrinsic camera parameters are determined so that the geometric distortion of the images can be eliminated.
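
To illustrate how these calibrated values could be applied, the snippet below undistorts a CamCube 2.0 intensity frame using the intrinsic matrix of Equation (6) and the coefficients of Equation (8). Treating the four coefficients in OpenCV's (k1, k2, p1, p2) order is our assumption, and the file name is hypothetical.

```python
import cv2
import numpy as np

# Intrinsic matrix of the CamCube 2.0 from Equation (6).
K_cube = np.array([[289.51, 0.0, 102.97],
                   [0.0, 289.268, 101.692],
                   [0.0, 0.0, 1.0]])

# Distortion coefficients (k1, k2, p1, p2) from Equation (8).
dist_cube = np.array([-0.4256, 0.08573, -1.3939e-3, 2.0328e-3])

def undistort_intensity(img):
    """Remove the lens distortion from a 200 x 200 intensity frame."""
    return cv2.undistort(img, K_cube, dist_cube)

# Usage (hypothetical file name):
# frame = cv2.imread("camcube_frame.png", cv2.IMREAD_GRAYSCALE)
# corrected = undistort_intensity(frame)
```
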

Thus, the transformation between a point in the scene/world coordinate system Pwo and the image plane coordinate system Ocam is provided through a rotational matrix Rext and a translational vector Text. The joint rotation-translation matrix [Rext | Text] determines the extrinsic parameters of the camera.

Rotation matrix:

    Rext = [ -0.437390661   1.0   0.0   0.0 ]
           [ -4.42766333    0.0   1.0   0.0 ]    (10)
           [  0.35243243    0.0   0.0   1.0 ]

Translation vector:

    Text = [ 1.0   0.0   0.0    7.7065e-03 ]
           [ 0.0   1.0   0.0   -0.04777    ]    (11)
           [ 0.0   0.0   1.0    0.0074     ]
           [ 0.0   0.0   0.0    1.0        ]

    [ xcam ]                    [ xwo ]
    [ ycam ]  =  [Rext | Text]  [ ywo ]    (12)
    [ zcam ]                    [ zwo ]
                                [ 1   ]

The relationship between the scene/world point Pwo and its projection pcam can be written as

    Pcam = Icam [Rext | Text] Pwo    (13)

where (xwo, ywo, zwo) are the coordinates of a point Pwo in the scene/world coordinate system and (xcam, ycam, zcam) are the coordinates in the camera coordinate system.

4. Manipulation of PMD Camera Data

The camera's pixels provide a complete image of the scene, with each pixel detecting the intensity or the projection of the optical image, while the range data obtained from each pixel are stored in a 2D array. The camera also provides the signal strength of the illumination and intensity information, which can be used to determine the quality of the distance value (a low amplitude indicates low accuracy of the measured distance in a pixel). The coordinates of the object with respect to the PMD camera are obtained as a 2D matrix, with each element corresponding to a pixel. As the dimensions of the image frame depend upon its distance from the camera and the camera's field of view, an object's height and width can be calculated using its pixel elements [47].
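
Since a low amplitude flags an unreliable distance, a simple quality mask can be applied to the range array before it is used; the sketch below is our own illustration and the threshold value is arbitrary.

```python
import numpy as np

def mask_unreliable(distance, amplitude, min_amplitude=50.0):
    """Invalidate range readings whose amplitude is too low.
    `distance` and `amplitude` are 2D arrays of the same shape as the
    sensor (e.g., 200 x 200 for the CamCube 2.0); the threshold is an
    arbitrary illustrative value, not one from the paper."""
    cleaned = distance.astype(float).copy()
    cleaned[amplitude < min_amplitude] = np.nan   # drop low-confidence pixels
    return cleaned
```
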

Two different range frames of a rectangular object, i.e., Xd-near and Xd-far from the camera, are represented pictorially in Figure 6. The pixel size projected on the object is different for the two frames and decreases as the distance between the camera and the image frame increases, since the object's area projected on the near frame (ixn x iyn) is different from that (ixf x iyf) on the far frame. As can be seen, except for the projected pixel size of each pixel, all the other parameters (Equations (14)-(19)) increase with an increase in the distance of the object frame from the camera.

Figure 6. PMD camera's FoV and image frames, where:
Xd - distance between the camera and the image frame
ln, lf - total length of the near and far image frames
hn, hf - total height of the near and far image frames
ixn, ixf - length of the object in the near and far frames, respectively
iyn, iyf - height of the object in the near and far frames, respectively

Using geometry:

    l = tan(FoV(H)) × Xd    (14)

    h = tan(FoV(V)) × Xd    (15)

    Ph = l / No. of pixels(H)    (16)

    Pv = h / No. of pixels(V)    (17)

    iy = Ph × Ov    (18)

    ix = Pv × Oh    (19)

where l is the length of the image frame, h is the height of the image frame, Xd is the distance between the image frame and the camera, Ph is the projected pixel size of each pixel in the object along the horizontal axis, Pv is the projected pixel size of each pixel in the object along the vertical axis, Oh is the number of object-occupied pixels along the horizontal axis in the image frame, Ov is the number of object-occupied pixels along the vertical axis in the image frame, ix is the total length of the object in the image frame, and iy is the total height of the object in the image frame.
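
A worked version of Equations (14)-(19) for the CamCube 2.0 (40° x 40° FoV, 200 x 200 pixels) is sketched below; the pixel counts for the box are hypothetical, chosen only to show the order of magnitude of the result.

```python
import math

def object_size(x_d, o_h, o_v, fov_h=40.0, fov_v=40.0, n_h=200, n_v=200):
    """Estimate an object's length and height from the number of pixels it
    occupies, following Equations (14)-(19) exactly as given in the text.
    x_d : distance between camera and object frame [mm]
    o_h : object-occupied pixels along the horizontal axis
    o_v : object-occupied pixels along the vertical axis"""
    l = math.tan(math.radians(fov_h)) * x_d   # frame length,  Eq. (14)
    h = math.tan(math.radians(fov_v)) * x_d   # frame height,  Eq. (15)
    p_h = l / n_h                             # projected pixel size, Eq. (16)
    p_v = h / n_v                             # projected pixel size, Eq. (17)
    i_y = p_h * o_v                           # object height, Eq. (18)
    i_x = p_v * o_h                           # object length, Eq. (19)
    return i_x, i_y

# Hypothetical example: at 600 mm the box covers 62 x 32 pixels,
# giving roughly 156 mm x 81 mm, close to the 157 mm x 80 mm reference box.
length_mm, height_mm = object_size(600.0, 62, 32)
print(round(length_mm, 1), round(height_mm, 1))
```
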


To obtain accurate readings, an experimental setup is made as shown in Figure 7(a). The camera and test object are placed on a horizontal rail, as shown in Figure 7, and the test object is moved to different distances without altering the position of the camera. A rectangular box of known dimensions (157 mm (length) x 80 mm (height)) is used as the reference object, and its length and height at different distances are calculated using the readings from the camera, as previously discussed and illustrated in Figure 8. These are compared with the actual dimensions and the relative errors are calculated. The relative accuracies with respect to distance are shown in Figure 9, with the calculated lengths and heights of the test object plotted as blue and green lines, respectively. It can be seen that the relative error is less than ±3%.

Figure 7. Experimental setup for calibration

Figure 8. Experiment 1: rectangular box placed at different distances: (a) 350 mm, (b) 450 mm, (c) 600 mm, (d) 850 mm

Figure 9. Plot of distance vs relative error

4.1 Camera Mounting Angle

The 3D model of the Pioneer 3DX with the cameras mounted at a tilt angle of ψ = 0 is created in Google SketchUp, and the dimensions of the AGV and the camera's FoV extended to 7,000 mm are drawn to closely resemble the actual situation in order to assist the perception of the work (Figure 10).

Figure 10. 3D model depicting the P3DX AGV

As can be seen, for a camera orientation of ψ = 0, a large number of the captured data would not be needed to flag obstacles (being above the robot), and the blind spot in front of the robot could ideally be reduced. Because of these considerations, a more optimal camera angle was adopted; this angle was determined as half the vertical FoV of the PMD:

    ψ = 0.5 × FoV    (20)

Setting ψ in this way ensured that the top-most pixels were observed directly ahead of the robot, thereby providing adequate detection of obstacles while maximizing the ground plane conception. The sketch shown below illustrates the mounting concept, the grid (localized) and the cameras' FoV projected as understood.
Figure 10. 3D model depicting the P3DX AGV

Figure 11. Sketch of the configuration with the FoV (40 H × 40 V) and the camera range
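The geometry behind equation (20) can be checked with a few lines of trigonometry. The sketch below assumes the 40-degree vertical FoV indicated in Figure 11 and the CamCube mounting height quoted later in Section 5; it confirms that the upper FoV edge looks straight ahead and computes where the lower edge meets the ground. The numbers are indicative of the configuration in Figure 11, not measured results.

FoV_V = 40;                    % vertical FoV [deg], assumed (cf. Figure 11)
psi   = 0.5 * FoV_V;           % downward tilt from eq. (20): 20 deg
h     = 286.5;                 % camera height above the ground [mm] (CamCube 2.0 on the P3DX)

topEdge    = -psi + FoV_V/2;   %  0 deg: the top-most pixels look straight ahead
bottomEdge = -psi - FoV_V/2;   % -40 deg: the lower edge of the FoV
nearGround = h / tand(-bottomEdge);   % ground intersection of the lower FoV edge [mm]

fprintf('tilt = %g deg, top edge at %g deg, lower edge meets the ground %.0f mm ahead\n', ...
        psi, topEdge, nearGround);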

Figure 13. Pictorial representation of the co-ordinates

The following figures, produced in MATLAB, illustrate the projection of the individual pixels for a mounting angle of -15 degrees. The points on the plots show the projected positions of the pixels in 3D space before the ground plane is considered and after it has been used as a boundary. The blue points correspond to pixels above 0 degrees and the green points to pixels that fall on or below the ground plane.

Figure 12. Interpretation of PMD camera data into the occupancy grid

Following the creation of the 3D model depicting the developmental robot and its FoV, MATLAB's trigonometric functions for spherical co-ordinates were used to determine the specific placement of the ToF cameras' data points. The distance measurement returned by each pixel can be thought of as an individual measurement at a known elevation and azimuth, and can thus be projected to intersect the ground plane if it lies within the maximum range.

Figure 14. Non-truncated by the ground plane (-15 degrees)

The ToF camera data are converted into Cartesian coordinates for the grid mapping using the following transformation,

x = r × cos(θ − ψ) × cos(γ) (21)
y = r × cos(θ − ψ) × sin(γ) (22)
z = r × sin(θ − ψ) + h (23)

where x is the distance in front of the AGV [mm], y is the distance to the left of the AGV's midpoint [mm], z is the height above the AGV's ground level [mm], θ is the elevation from the camera midpoint [degrees], γ is the azimuth from the camera midpoint [degrees], ψ is the downwards camera tilt [degrees] and h is the camera height above the ground, measured from the centre of the receiver [mm].

Figure 15. Truncated by the ground plane (-15 degrees)
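A minimal MATLAB sketch of the conversion in equations (21)-(23) is given below. The per-pixel elevation and azimuth grids are derived from an assumed 40 × 40 degree FoV and a 64 × 48 pixel array, which stand in for the calibrated values, and a constant dummy range image is used in place of real PMD data; points whose projected height falls below the ground plane are truncated to it, as in Figure 15.

% Assumed sensor geometry, for illustration only
nPixH = 64; nPixV = 48;            % pixel array
FoV_H = 40; FoV_V = 40;            % field of view [deg]
psi   = 15;                        % downward camera tilt [deg], i.e. a -15 degree mounting angle
hCam  = 286.5;                     % camera height above the ground [mm]

% Per-pixel elevation (theta) and azimuth (gamma) from the camera midpoint [deg]
theta = linspace( FoV_V/2, -FoV_V/2, nPixV)';   % image rows, top to bottom
gamma = linspace(-FoV_H/2,  FoV_H/2, nPixH);    % image columns, left to right
[G, T] = meshgrid(gamma, theta);

r = 2000 * ones(nPixV, nPixH);     % range image [mm]; a dummy constant range for illustration

% Equations (21)-(23)
x = r .* cosd(T - psi) .* cosd(G);   % distance in front of the AGV [mm]
y = r .* cosd(T - psi) .* sind(G);   % distance left of the AGV midpoint [mm]
z = r .* sind(T - psi) + hCam;       % height above the AGV ground level [mm]

below = z < 0;                       % projected points that fall below the ground plane
z(below) = 0;                        % truncate to the ground, cf. Figure 15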

5. Experimentation and Analysis

A PMD CamCube 2.0 camera is mounted on the Pioneer 3DX mobile robot as depicted in Figure 16, whereas the PMD S3 camera is hinged on the Pioneer AT robot. These PMD cameras provide Cartesian coordinates expressed in metres, with a correction which compensates for the radial distortions of the optics [48]. The coordinate system is right-handed, with the Zcam coordinate increasing along the optical axis away from the camera, the Ycam coordinate increasing vertically upwards and the Xcam coordinate increasing horizontally to the left, all from the point of view of the camera (or of someone standing behind it). The origin of the coordinate system (0, 0, 0) is at the intersection of the optical axis and the front of the camera. As the cameras are statically fixed on top of the AGVs at a static tilt angle of ψ, their coordinates are aligned with those of the AGVs, which provide the +12 V power to run the cameras.

Figure 16. PMD mounted on the Pioneer robots: (a) Pioneer 3DX; (b) Pioneer 3AT

For these two mobile robots, the camera heights h were measured from the centre of the camera receiver to a level ground plane as hS3onP3AT = 347.9 mm and hCamcubeonP3DX = 286.50 mm.

In this section, several standardized tests are devised to characterize the PMD camera. The code that was written comprehensively analyses the camera mounting-angle sweep tests when the AGV is stationary. Code has not been written to handle the readings of the moving tests with and without objects; however, results for these scenarios were obtained by applying the raw functionality of the program. This section explains in depth how the still angle-sweep tests were conducted, as well as how the moving tests and object recognition tests were performed.

To obtain a greater view of the ground, the camera mounting angle was adjusted to point downwards. The specific angle of this mounting has already been explained; nonetheless, it was unexpected that the IR light being incident at such shallow angles to the ground plane would result in significant receiver loss and distortion of the distance measurements.

5.1. Stationary angle sweeps

To perform this type of testing, the PMD camera was mounted at a constant height such that the angle subtended by the camera's FoV to the ground could be varied. The ground plane is also flat and uniform (a flat grass field fits this criterion). This concept is illustrated in Figure 17: the camera is mounted on the AGV and a hinge-type bracket enables the mounting angle to be changed.

Figure 17. Sweep of mounting angles

In addition, a white surface was recreated using plain white paper and a sweep of the camera angles was conducted. Testing the camera on a white surface recreated the conditions outlined in the PMD S3 data sheet, whereby the white, grey and black surface error versus distance data were defined.

As testing involved capturing a 60-second exposure, the timing was synchronized via the implementation of a counting routine on the Pioneer. Ten capture sets were taken for a sweep of camera angles from 0 to -45 degrees. From this point, the spherical coordinate system data were converted to Cartesian coordinates for the purpose of plotting in 3D space. The captured frames were averaged over the 60-second duration and a comparison of the expected data with the actual data was plotted on the same axes. It was found that the averaged distance values obtained from the 60-second exposures were significantly different when considering both the number of data points detected on the ground and the detected distances of the data points. The plots below show the difference between the simulation distance measurements and the actual distance measurements for ψ = 0 and ψ = -15 degrees. In the following plots, the black and red points are the actual data while the green and blue points are simulated.

Figure 18. Isometric view comparing the simulation and actual PMD S3 data for a 0-degree mounting angle

Figure 19. Side view comparing the simulation and the actual PMD S3 data for a 0-degree mounting angle

As can be seen, only a fraction of the data points (black pixels) was detected for the 0-degree mounting angle. The fact that predominately black pixels were detected also indicates that the measurements occurred for less than 70% of the capture frames.

Building upon what was noticed for zero degrees, low-confidence pixels appear to flare above the ground plane. High-confidence pixels, conversely, can be seen to curve below the ground plane.

Figure 20. Isometric view comparing the simulation and actual PMD S3 data for a -15-degree mounting angle

Figure 21. Side view comparing the simulation and actual PMD S3 data for a -15-degree mounting angle

The trend revealed by these plots is consistent throughout the angles, and also for the same survey taken of a grass surface. From close inspection of the data returned by the ToF camera, it seems that the discrepancy in performance conforms to some form of a trend: the difference in the measurements follows a curve down along the x and y axes. Ordinarily, this would not be significant, but the data points vary by 100 mm brackets. This is significant for the AGV because it cannot traverse obstacles greater than 150 mm. If the AGV's normal conception of the ground plane consisted of such consistent curves (as shown), the accumulated error would compound and lead to improper functionality. Hence, from here, an in-depth process of analysis, characterization and devising of corrections was initiated.
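The averaging and detection bookkeeping for one capture set can be sketched as follows; frames is assumed to be an nPixV × nPixH × nFrames stack of range images recorded over the 60-second exposure, with NaN marking pixels that returned no measurement in a given frame, and the 70% threshold matches the colour coding of Figures 18-21.

% frames: nPixV x nPixH x nFrames range stack [mm]; NaN = no return (assumed format)
nFrames   = size(frames, 3);
valid     = ~isnan(frames);                 % per-frame validity of each pixel
detRate   = sum(valid, 3) / nFrames;        % fraction of frames in which each pixel returned a range
meanRange = mean(frames, 3, 'omitnan');     % 60-second averaged distance per pixel [mm]

highConf = detRate > 0.70;                  % high-confidence pixels (red/black in the plots)
fprintf('%.1f%% of pixels detected in more than 70%% of frames\n', 100*mean(highConf(:)));

The averaged ranges are then converted with equations (21)-(23) and plotted against the simulated ground intersection, as in Figures 18-21.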

5.2. Investigating the ToF performance discrepancy

The function of a ToF camera was researched in the early stages of this work; however, no risk was perceived from the point of view of the physics by which the camera functions. The error seen in the plots was attributed to a well-known phenomenon, namely 'light scattering'.

The key difference between indoor and outdoor ToF applications was that gaining a conception of the ground required the IR light to be incident at angles much closer to 0 degrees than to 90 degrees. This meant that the susceptibility to IR scattering needed to be considered. Essentially, the IR emitted from the ToF camera was subject to less volume-return to the receiver, thus tricking the device into thinking the distance measurements were further away. Fortunately, the difference in performance was uniform and predictable. Because of this, it was possible to develop a correction, thereby making the camera useful for obstacle avoidance.

5.3. Quantifying the ToF performance discrepancy

Several methods were used in the analysis of the camera's performance. Histograms were produced for sample pixels in the HP × VP grid to confirm that normally distributed errors were present in the camera sampling. Histograms were also applied to analyse the pixel detection, which was grouped into brackets. For most angles, it could be seen that a majority of the pixels were within the >90% detection bracket and a second large group was located in the <50% bracket. Unfortunately, the >70% detected pixels (high confidence) were seen to curve up along the extremities of the detected distance in front of the camera; this consideration was important in developing the correction.

Figure 22. Sample of the distance measurement histograms for various pixels (ψ = -15, grass surface)

Another method of holistic analysis was to plot the detection rate and the maximum perceived distances against the camera mounting angle. This also enabled a direct comparison of the performance of the camera on a grass surface with that on a white surface. These plots are shown in Figure 23.
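The per-pixel histograms of Figure 22 can be generated along the lines of the sketch below; frames is again assumed to be the range-image stack for one mounting angle, and the sampled pixel indices are arbitrary points of the HP × VP grid.

samples = [1 1; 16 16; 32 32; 48 48];          % sample pixels (row, column), chosen arbitrarily
for k = 1:size(samples, 1)
    d = squeeze(frames(samples(k,1), samples(k,2), :));   % distance samples for this pixel [mm]
    d = d(~isnan(d));                                     % keep only frames with a valid return
    subplot(2, 2, k);
    histogram(d);                                         % expected to be roughly normally distributed
    title(sprintf('Pixel (%d,%d)', samples(k,1), samples(k,2)));
    xlabel('Distance (mm)'); ylabel('Count');
end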

Figure 23. Pixel detection simulation and actual case

In both instances, it can be seen that the detection of the pixels is considerably less in reality. The white surface can be seen to have approximately 35% less pixel detection for any given ψ below 20 degrees; grass can be seen to have about 15% less. It is interesting to note that the grass performance is better in this measure. This is due to the intuitive fact that grass is not a smooth surface, and thus the incident IR light has a much greater chance of bouncing back off a surface oriented closer to perpendicular than it does off flat ground.

Similar to the previous plots, both drastically underperform when compared with the simulation's expectations. In the case of the better-performing grass, the maximum detected range plateau can be seen to be slightly less than 3,000 mm. These two methods of analysis were useful in quantifying the difference in terms of sheer data measurements.

Figure 24. Grass maximum detected range at each mounting angle

Following this type of quantifying analysis, the ideal camera mounting angles ψ for the PMD S3 on the P3AT and the CamCube 2.0 on the P3DX were postulated as -15 and -20 degrees respectively. Adopting these angles ensured that the cameras could provide an adequate conception of obstacles while maximizing the ground plane conception.
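The comparison underlying Figures 23 and 24 amounts to a loop over the saved capture sets, one per mounting angle, as sketched below; loadCaptureSet is a hypothetical helper standing in for however the recorded range stacks are loaded, and the 5-degree step is an assumption consistent with ten capture sets between 0 and -45 degrees.

angles = 0:-5:-45;                         % swept mounting angles [deg] (assumed step)
detPct = zeros(size(angles));
maxRng = zeros(size(angles));
for k = 1:numel(angles)
    frames    = loadCaptureSet(angles(k)); % hypothetical loader returning one range stack [mm]
    valid     = ~isnan(frames);
    detPct(k) = 100 * mean(valid(:));      % percentage of pixels returning a distance
    maxRng(k) = max(frames(:));            % maximum perceived distance; NaNs are ignored by max
end
figure; plot(angles, detPct, '-o'); xlabel('Camera angle (deg)'); ylabel('Pixels returning a distance (%)');
figure; plot(angles, maxRng, '-o'); xlabel('Camera angle (deg)'); ylabel('Maximum detected range (mm)');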

5.4. Grid-based ground surface reconstruction

The ground surface along the robots' traversed paths was devised in MATLAB and reconstructed using the captured depth frames. The Cartesian conversion of the data points scattered around the area which the robot traversed is obtained by synchronizing the orientation and position of the robot platform - and thus the camera - with the data captured from the PMD cameras. The grid produced for a mounting angle of -20 degrees and a grid size of 100 mm square is shown in Figure 25. The routine was coded to run for 60 seconds over the captured frames, to synchronize each frame to a position and orientation, and to distribute the Cartesian coordinates into the squares of the grid.

Figure 25. Surface reconstruction: grid heights perceived in the area encountered by the PMD CamCube camera at -20 degrees
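A compact way to distribute the pooled Cartesian points into 100 mm grid squares, as in Figure 25, is to bin them with accumarray. In the sketch below, x, y and z are assumed to be the pooled point coordinates [mm] in the ground frame after the pose synchronization described above, and keeping the maximum height per cell is an illustrative choice rather than the exact rule used in the experiments.

cell_mm = 100;                                      % grid resolution [mm]
col = floor(x(:) / cell_mm) + 1;                    % cell index from the forward distance (x >= 0 assumed)
row = floor((y(:) - min(y(:))) / cell_mm) + 1;      % cell index from the lateral distance, shifted positive
gridZ = accumarray([row col], z(:), [], @max, NaN); % per-cell maximum height [mm]; NaN where no returns
surf(gridZ);                                        % quick look at the reconstructed surface
xlabel('Forward cells'); ylabel('Lateral cells'); zlabel('Height (mm)');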

Figure 21. Side view comparing the simulation and actual PMD 0 0 0 0 0 800 1000 1200 900 950 1000 900 950 1000 950 1000 1050 800 900 1000 1100 about its surroundings and to guide it to achieve its S3 data for a -15-degree mounting angle. Pixel at(1,48) Pixel at(16,48) Pixel at(32,48) Pixel at(48,48) Pixel at(64,48) 80 80 60 60 80 0 task, and it is equipped with two shaft encoders to track 60 60 60 −45 −40 −35 −30 −25 −20 −15 −10 −5 0 40 40 Camera Angles (Degrees) θ 40 40 40 its position (X and Y ) and orientation . The by the ToF camera, it seems that the discrepancy in 20 20 rob rob veh 20 20 20 performance conforms to some form of a trend. The 0 0 0 0 0 experiments were performed without prior knowledge of 600 700 800 900 650 700 750 660 670 680 690 670 680 690 700 600 700 800 Figure 24. Grass maximum detected range at angle difference in the measurements can be seen to conform Figure 24. Grass maximum detected range at angle. the workspace, such as the location, velocity, orientation to a curve down along the x and y axes. Ordinarily, this Figure 22. Sample of the distance measurement histograms for various ψ = and number of obstacles. The range data from the PMD would not be significant but the data points vary from -15 grass surfaces Following this type of quantifying analysis, the ideal Figure 22. Sample of the distance measurement histograms for camera were utilised to detect and estimate the relative 100 mm brackets. This is significant for the AGV because various ψ =-15grasssurfaces. Similarcamera mounting to the previous angles ψ for plots, the PMD both S3 drastically on P3AT and under distances between the vehicle and the obstacles. As it cannot traverse obstacles greater than 150 mm. If the Another method of holistic analysis was to plot the performCamCube2.0 when cameras it comes on toP3DX comparing were postulated with the as simulation’s -15 and the camera senses the 3D coordinates of an obstacle in AGV’s normal conception of the ground plain consisted detection rate and maximum perceived distances against expectations.-20 degrees respectively. In the case Adopting of the better this performing ensured that grass, the a different frame sequences, it uses this information to detect of consistent curves (as shown), the compilation of error Anotherthe camera method mounting. of holistic This analysisalso enabled was a todirect plot compari‐ the maximumcameras could detected provide range an plateauadequate can conception be seen to of be obsta‐ slightly the obstacles using a scene flow. The ego-motion of the would compound and lead to improper functionality. detectionson of ratethe performance and maximum of the perceived camera distancesfrom a grass against surface lesscles, thanmaximizing 3,000 m. the ground These twoplane methods conception. of analysis were PMD camera mounted on the AGV can be estimated by Hence, from here, an in-depth process of analysis, thewith camera a white mounting. surface. These This plots also are enabledshown in aFigure direct 23. useful in quantifying the difference in terms of sheer data tracking features between subsequent images [49]. The characterization and devising corrections was initiated. comparison of the performance of the camera from a grass Comparison of Expected and Actual Percentage of pixels returning distance measurents 5.4.measurements. 
Grid-based5.4 Grid-based ground Ground surface Surface reconstruction Reconstruction ‘good features to track’ feature detection algorithm [50] 100 surface withComparison a white of Expected surface. and Actual Percentage These of pixels plots returning are distance shown measurents in 5.4. Grid-based ground surface reconstruction Simulation100 is used to stabilize the moving AGV by comparing two FigureActual23. Simulation TheFollowing ground surfacethis type from of quantifying the robots’ analysis, traversed the paths ideal 90 Actual The Theground ground surface surface from the from robots’ the traversed robots’ paths traversed was pathsconsecutive frames. 90 camera mounting angles ψ for the PMD S3 on P3AT wasdevised devisedwas in devisedMATLAB in MATLAB in and MATLAB was and reconstructed was and reconstructed was using reconstructed captured using using 80 and CamCube2.0 cameras on P3DX were postulated as The ego-motion compensation is a transformation from 80 captureddepth depth frames. frames. The Cartesian The Cartesian conversion conversion of data points of data www.intechopen.com : 9 -15 andcaptured -20 degrees depth respectively.frames. The Cartesian Adopting conversion this ensured of data A ToF-camera as a 3D Vision Sensor for Autonomous Mobile Robotics points scattered around the area which the robot traversed the image coordinates of a previous image to that of the 70 scatteredpoints around scattered the around area which the area the robotwhich traversed the robot is traversed 70 that the cameras could provide an adequate conception of current image so that the effect of the ego-motion of the is obtainedobtainedis obtained byby synchronizing by synchronizing the the orientation orientation the orientation and position and position and of position 60 obstacles, maximizing the ground plane conception. camera can be eliminated [257]. The feature pairs fit−1, fit, 60 of thethe robot robotof the platform platform robot platform - and and thus thus - and the the thus camera, camera the camera and - and the -the dataand data the data ( − ) where t frame 1 is the last image and t frame denotes the 50 capturedcapturedcaptured from from the fromthe PMD PMD the cameras. cameras. PMD cameras. The The grid grid The produced produced grid produced for fora a for a 50 current image, and the ego-motion of the camera can be mountingmounting angle angle of of -20 -20 degrees degrees and a a grid grid size size of of100 100 mm mm mounting angle of -20 degrees and a grid size of 100 mmestimated by using a transformation model. As such, we 40 40 squaresquare issquare shownis shown is shown in in FigureFigure in 25. Figure25 .The The routine25. routine The was routine wascoded coded wasto go coded to to

30 30 go forfor 60go seconds seconds for 60 until seconds until thethe capture until capture the frames, capture frames, to synchronize frames, to synchronize to each synchronize Percentage of Pixels returning a distance measurement (%) Percentage of Pixels returning a distance measurement (%) eachtoto a positioneach a position to and a position orientation, and orientation, and and orientation, to distribute and to and the distribute Cartesian to distribute the the 20 20 10 2015, Vol. No, No:2015 www.intechopen.com −45 −40 −35−45 −30−40 −25−35 −20−30 −15−25 −20−10 −15−5 −100 −5 0 CartesiancoordinatesCartesian coordinates into coordinates squares into of squares the into grid. squares of the ofgrid. the grid. Camera Angles (Degrees)Camera Angles (Degrees)

Figure 23. Pixel detection simulation and actual case Grid heights percievedGrid heights in area percieved encountered in area by encountered PMD Camcube by PMD 2.0 CameraCamcube at−20 2.0 Camera Degrees at−20 Degrees Figure 23. FigurePixel detection 23. Pixel simulation detection simulationand actual and case. actual case. In both instances, it can be seen that the detection of the In both instances,Inpixels both is instances, it considerably can be seen it canless that bein reality. seen the detection that The thewhite detection of surface the can of the pixels is considerablypixelsbe seen is considerablyto have less approximately in reality. less in The reality. 35% white less The surfacepixel white detection can surface for can be seen tobe haveany seen given approximately to ψ have below approximately 20 degrees; 35% less grass 35% pixel can less detectionbe pixelseen to detection have forabout any 15% given less.ψ below It is interesting 20 degrees; to note grass that can the be grass seen to for any given ψ below 20 degrees; grass can be seen to 8000 haveperformance about 15% is better less. Itin is this interesting measure. to This note is thatdue theto the grass 8000 have about 15% less. It is interesting to note that the grass 7000 500 7000 performanceintuitive fact that is better grass inis very this smooth measure. and Thisthus incident is due to IR the 0 6000 500 −500 performance is better in this measure. This is due to the 0 6000 intuitivelight has fact a much that greater grass is chance very smoothof bouncing and back thus off incident an −500 3000 5000 3000 intuitive fact that grass is very smooth and thus incident 2000 5000 IRobject light closer has a to much perpendicular greater chance than flat of bouncingground. back off an 4000 2000 IR light has a much greater chance of bouncing back off an 1000 4000 object closer to perpendicular than flat ground. 3000 Similar to the previous plots, both drastically under 1000 0 3000 object closer to perpendicular than flat ground. 2000 0 −1000 perform when it comes to comparing with the simulation’s Distance Left/Right of AGV (+/− mm) Distance from start location 2000 1000 Comparison of Expected and Actual Maximum detected range −1000 −2000 Distance Left/Right of AGV (+/− mm) Distance from start location expectations.6000 In the case of the better performing grass, a 1000 Comparison of ExpectedSimulation and Actual Maximum detected range −2000 −3000 0 6000 maximum detectedActual range plateau can be seen to be slightly Simulation −3000 0 lessActual than5000 3,000 m. These two methods of analysis were 5000 useful in quantifying the difference in terms of sheer data Figure 25. Surface reconstruction: grid heights perceived in the area encounteredFigure by 25.the PMDSurface CamCube reconstruction: camera at -20 degrees grid heights perceived in the measurements.4000 area encountered by the PMD CamCube camera at -20 degrees. 4000 Figure 25. Surface reconstruction: grid heights perceived in the area encountered by the PMD CamCube camera at -20 degrees. 3000 Sobers Lourdu Xavier Francis, Sreenatha G. Anavatti, Matthew Garratt and Hyunbgo Shim: 11 A ToF-camera as a 3D Vision Sensor for Autonomous Mobile Robotics 3000 Distance (mm) 6. Real-time Motion Testing 2000 Distance (mm) 6. 
6. Real-time Motion Testing

In the motion experiment, the AGV relies on the PMD camera - as a single vision sensor - to obtain information about its surroundings and to guide it to achieve its task, and it is equipped with two shaft encoders to track its position (Xrob, Yrob) and orientation θveh. The experiments were performed without prior knowledge of the workspace, such as the location, velocity, orientation and number of obstacles. The range data from the PMD camera were utilised to detect and estimate the relative distances between the vehicle and the obstacles. As the camera senses the 3D coordinates of an obstacle in different frame sequences, it uses this information to detect the obstacles using a scene flow. The ego-motion of the PMD camera mounted on the AGV can be estimated by tracking features between subsequent images [49]. The 'good features to track' feature detection algorithm [50] is used to stabilize the moving AGV by comparing two consecutive frames.

Ego-motion compensation is a transformation from the image coordinates of the previous image to those of the current image, so that the effect of the camera's ego-motion can be eliminated [49]. Given the feature pairs (f_i^(t-1), f_i^t), where frame (t-1) is the previous image and frame t is the current image, the ego-motion of the camera can be estimated using a transformation model; we simply apply linear regression to train its constants. The next step is to eliminate the bad features and refine the transformation model. The transformation model thus obtained is then applied to the whole image in order to eliminate the effect of the camera's ego-motion.
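As a concrete illustration, the ego-motion compensation described above can be prototyped with standard OpenCV building blocks: Shi and Tomasi's 'good features to track' corners are detected in the previous intensity frame, tracked into the current frame with pyramidal Lucas-Kanade optical flow, and a global 2D transformation is fitted to the surviving feature pairs. The sketch below is only an approximation of the authors' pipeline: it substitutes a RANSAC-refined partial affine fit for the plain linear regression plus bad-feature elimination described in the text, and the function and variable names are assumed for the example.

```python
import cv2

def estimate_ego_motion(prev_gray, curr_gray):
    """Estimate a 2x3 transform mapping the previous frame onto the current one."""
    # Shi-Tomasi 'good features to track' corners in the previous frame [50].
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=7)
    # Track the corners into the current frame with pyramidal Lucas-Kanade flow.
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None)
    ok = status.ravel() == 1
    good_prev = prev_pts[ok].reshape(-1, 2)
    good_curr = curr_pts[ok].reshape(-1, 2)

    # Fit a rotation + translation + scale model to the feature pairs; RANSAC
    # plays the role of eliminating the bad features before the final fit.
    model, _ = cv2.estimateAffinePartial2D(good_prev, good_curr,
                                           method=cv2.RANSAC)
    return model

def compensate_ego_motion(prev_frame, model):
    """Warp the whole previous image so that the camera's ego-motion is removed."""
    h, w = prev_frame.shape[:2]
    return cv2.warpAffine(prev_frame, model, (w, h))
```

Pixels that still disagree strongly between the warped previous frame and the current frame are then candidates for independently moving objects, which is broadly the role the scene flow plays in these experiments.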

The initial minimal path is calculated by the grid-based efficient D* Lite path-planning algorithm, while the PMD camera provides information on obstacles in real-time. When an obstacle is perceived within the camera's FoV, the AGV processes the sensor information and, if required, continually re-plans its path to avoid any collision until it reaches its final goal; a simplified sketch of this sense-and-re-plan loop is given after the figure captions below.

The goal is to plan a collision-free path for the AGV to reach its desired position by implementing the efficient D* Lite algorithm on the P3DX's onboard computer. The PMD camera is used as an exteroceptive sensor, with the scene flow running at a frame rate of 10 fps. All the experiments were carried out without any modifications to the P3DX controller's parameters.

In this experiment, the AGV is set to travel from an initial position (Xrob, Yrob) = (0, 0) to a goal position of (9.0 m, 0), with Xrob and Yrob given in metres. To perceive its surroundings, it obtains information from the sensor rather than using a priori information, and it successfully avoids the three static obstacles in its path, as shown in Figures 26 and 27.

Figure 26. Experiment (office): plot of the AGV's x-coordinates vs. y-coordinates (mm).

Figure 27. Experiment (office): indoor test with three static obstacles. (a) Sensing the first obstacle; (b) re-planning its path; (c) avoiding the second; (d) overcoming the third.
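The planner used in the experiments is the grid-based efficient D* Lite; to keep the illustration short, the sketch below substitutes a plain A* search that is simply re-run from the current pose whenever the PMD camera flags a cell on the remaining path as occupied. The dictionary-based grid, the unit step cost and the sense_obstacles/move_to callbacks are assumptions made for this example, not the authors' implementation.

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """Plain A* over a dict grid (cell -> 0 free / 1 occupied); stand-in for D* Lite."""
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    tie = count()
    frontier = [(h(start), next(tie), start)]
    g, parent = {start: 0}, {start: None}
    while frontier:
        _, _, cell = heapq.heappop(frontier)
        if cell == goal:                      # reconstruct the path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if grid.get(nxt, 1) == 1:         # outside the map or occupied
                continue
            new_g = g[cell] + 1
            if new_g < g.get(nxt, float("inf")):
                g[nxt], parent[nxt] = new_g, cell
                heapq.heappush(frontier, (new_g + h(nxt), next(tie), nxt))
    return None

def navigate(grid, start, goal, sense_obstacles, move_to):
    """Sense-and-re-plan loop: update the grid from the camera, re-plan if blocked."""
    pose, path = start, astar(grid, start, goal)
    while path and pose != goal:
        for cell in sense_obstacles(pose):    # cells the PMD camera reports as obstacles
            grid[cell] = 1
        if any(grid.get(c, 1) == 1 for c in path[path.index(pose):]):
            path = astar(grid, pose, goal)    # re-plan around the newly seen obstacle
            if not path:
                break                         # no collision-free path remains
        pose = path[path.index(pose) + 1]
        move_to(pose)
```

In the actual system the grid is populated from the PMD depth data described earlier and the start and goal correspond to (0, 0) and (9.0 m, 0) expressed in cell units; D* Lite differs from this sketch in that it repairs its previous search incrementally rather than planning from scratch.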

7. Conclusion

The optimal deployment of a ToF-based PMD camera on an AGV is presented in this paper, the overall mission of which is to traverse the AGV from one point to another in hazardous and hostile environments without human intervention. A ToF camera is used as the key sensory device for perceiving the operating environment, and its depth data are populated into a workspace grid map. An optimal camera mounting angle is adopted based on the analysis of the various cameras' performance discrepancies. A series of still and moving tests were carried out to verify correct sensor operation. Finally, in the real-time autonomous path-planning experiment, the AGV relied completely on its perception system to sense the operating environment and to avoid static obstacles as it traversed towards its goal. In future, real-time experiments will be conducted in dynamic environments.

8. Acknowledgements

The author would like to sincerely thank Mr. Daniel Salier for the MATLAB code.

9. References

[1] S. Patnaik. Robot Cognition and Navigation: An Experiment with Mobile Robots. Springer Berlin Heidelberg New York, 2007.
[2] J. Martinez Gomez, A. Fernandez Caballero, I. Garcia Varea, L. Rodriguez Ruiz, and C. Romero Gonzalez. A taxonomy of vision systems for ground mobile robots. International Journal of Advanced Robotic Systems, (1729-8806), July 2014.
[3] H. R. Everett. Sensors for Mobile Robots: Theory and Application. A. K. Peters, Ltd., Natick, MA, USA, 1995.
[4] Basel Fardi, Ullrich Schuenert, and Gerd Wanielik. Shape and motion-based pedestrian detection in infrared images: A multi sensor approach. In IEEE Intelligent Vehicles Symposium IV, pages 18–23, 2005.
[5] G. Sgouros, Papakonstantinous, and P. Tsanakas. Localized qualitative navigation for indoor environments. In IEEE International Conference on Robotics and Automation, pages 921–926, 1996.
[6] Bertozzi, Broggi, Cellario, Fascioli, Lombardi, and Porta. Artificial vision in road vehicles. In 28th IEEE Industrial Electronics Society Annual Conf., pages 1258–1271, 2002.
[7] K. Rebai, A. Benabderrahmane, O. Azouaoui, and N. Ouadah. Moving obstacles detection and tracking with laser range finder. In Advanced Robotics, 2009. ICAR 2009. International Conference on, pages 1–6, 2009.
[8] Bram Alefs, David Schreiber, and Markus Clabian. Hypothesis based vehicle detection for increased simplicity in multi sensor. In IEEE Intelligent Vehicles Symposium, pages 261–266, 2005.
[9] Mats Ahlskog. 3D vision. Master's thesis, Department of Computer Science and Electronics, Mälardalen University, 2007.
[10] S. Hussmann, T. Ringbeck, and B. Hagebeuker. A performance review of 3D ToF vision systems in comparison to stereo vision systems. In Stereo Vision (online book publication), pages 103–120, Vienna, Austria, 2008. I-Tech Education and Publishing.
[11] Stefan May. 3D Time-of-Flight Ranging for Robotic Perception in Dynamic Environments. PhD thesis, Institute for Computer Science, University of Osnabrück, Germany, 2009.
[12] T. Ringbeck and B. Hagebeuker. A 3D Time of Flight Camera for Object Detection. PMD Technologies GmbH, Am Eichenhang 50, 57076 Siegen, Germany, 2007.
[13] D. Droeschel et al. Robust ego-motion estimation with ToF cameras. In Proceedings of the 4th European Conference on Mobile Robots, Mlini/Dubrovnik, Croatia, September 2009.
[14] J. W. Weingarten, G. Gruener, and R. Siegwart. A state-of-the-art 3D sensor for robot navigation. In IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, volume 3, pages 2155–2160, 2004.
[15] Fang Yuan, A. Swadzba, R. Philippsen, O. Engin, M. Hanheide, and S. Wachsmuth. Laser-based navigation enhanced with 3D time-of-flight data. In Robotics and Automation, ICRA'09, pages 2844–2850, 2009.
[16] M. Hansard, S. Lee, O. Choi, and R. P. Horaud. Time of Flight Cameras: Principles, Methods, and Applications. SpringerBriefs in Computer Science. Springer, November 2012.
[17] M. Wiedemann, M. Sauer, F. Driewer, and K. Schilling. Analysis and characterization of the PMD camera for application in mobile robotics. In The 17th IFAC World Congress, Seoul, Korea, pages 13689–13694, 2008.
[18] Roger Bostelman, Tsai Hong, Raj Madhavan, and Brian Weiss. 3D range imaging for urban search and rescue robotics research. In Safety, Security and Rescue Robotics, Workshop, 2005 IEEE International, pages 164–169, 2005.
[19] M. Weyrich, P. Klein, M. Laurowski, and Y. Wang. Vision based defect detection on 3D objects and path planning for processing. In Proceedings of the 11th WSEAS international conference on robotics, control and manufacturing technology, and 11th WSEAS international conference on multimedia systems & signal processing, ROCOM'11/MUSP'11, pages 19–24, Stevens Point, Wisconsin, USA, 2011. World Scientific and Engineering Academy and Society (WSEAS).
[20] Benjamin Huhle, Philipp Jenke, and Wolfgang Strasser. On-the-fly scene acquisition with a handy multi-sensor system. International Journal of Intelligent Systems Technologies and Applications, 5(3/4):255–263, 2008.
[21] R. Bostelman, P. Russo, J. Albus, T. Hong, and R. Madhavan. Applications of a 3D range camera towards healthcare mobility aids. In IEEE International Conference on Networking, Sensing and Control (ICNSC '06), pages 416–421, August 2006.
[22] D. Falie and V. Buzuloiu. Wide range time of flight camera for outdoor surveillance. In Microwaves, Radar and Remote Sensing Symposium, 2008. MRRS 2008, pages 79–82, 2008.
[23] V. Castaneda, D. Mateus, and N. Navab. SLAM combining ToF and high-resolution cameras. In Applications of Computer Vision (WACV), 2011 IEEE Workshop on, pages 672–678, 2011.

[24] Sergio Almansa-Valverde, José Carlos Castillo, and Antonio Fernández-Caballero. Mobile robot map building from time-of-flight camera. Expert Syst. Appl., 39(10):8835–8843, 2012.
[25] Jochen Penne, Christian Schaller, Joachim Hornegger, and Torsten Kuwert. Robust real-time 3D respiratory motion detection using time-of-flight cameras. International Journal of Computer Assisted Radiology and Surgery, 3(5):427–431, 2008.
[26] S. May, B. Werner, H. Surmann, and K. Pervolz. 3D time-of-flight cameras for mobile robotics. In Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on, pages 790–795, 2006.
[27] Dirk Holz, Ruwen Schnabel, David Droeschel, Jörg Stückler, and Sven Behnke. Towards semantic scene analysis with time-of-flight cameras. In Javier Ruiz-del Solar, Eric Chown, and Paul G. Plöger, editors, RoboCup 2010: Robot Soccer World Cup XIV, volume 6556, pages 121–132, Berlin, Heidelberg, 2011. Springer-Verlag.
[28] R. Koch et al. MixIn3D: 3D mixed reality with ToF-camera. In Andreas Kolb and Reinhard Koch, editors, Dynamic 3D Imaging, volume 5742 of Lecture Notes in Computer Science, pages 126–141. Springer Berlin Heidelberg, 2009.
[29] M. B. Holte, T. B. Moeslund, and P. Fihl. View-invariant gesture recognition using 3D optical flow and harmonic motion context. Comput. Vis. Image Underst., 114(12):1353–1361, December 2010.
[30] L. A. Schwarz, A. Mkhitaryan, D. Mateus, and N. Navab. Estimating human 3D pose from time-of-flight images based on geodesic distances and optical flow. In Automatic Face Gesture Recognition and Workshops (FG 2011), 2011 IEEE International Conference on, pages 700–706, 2011.
[31] L. Schwarz, D. Mateus, V. Castaneda, and N. Navab. Manifold learning for ToF-based human body tracking and activity recognition. In Proceedings of the British Machine Vision Conference, pages 80.1–80.11. BMVA Press, 2010. doi:10.5244/C.24.80.
[32] P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox. RGB-D mapping using Kinect-style depth cameras for dense 3D modeling of indoor environments. 2012.
[33] Muhammad Attamimi, Takaya Araki, Tomoaki Nakamura, and Takayuki Nagai. Visual recognition system for cleaning tasks by humanoid robots. International Journal of Advanced Robotic Systems, 10(384), 2013.
[34] H. Du, T. Oggier, F. Lustenberger, and E. Charbon. A virtual keyboard based on true-3D optical ranging. In BMVC'05, 2005.
[35] S. Soutschek, J. Penne, J. Hornegger, and J. Kornhuber. 3D gesture-based scene navigation in medical imaging applications using time-of-flight cameras. In Computer Vision and Pattern Recognition Workshops, 2008. CVPRW. IEEE Computer Society Conference on, pages 1–6, 2008.
[36] T. Darrell, L. P. Morency, and A. Rahimi. Fast 3D model acquisition from stereo images. In First International Symposium on 3D Data Processing Visualization and Transmission, pages 172–176, November 2002.
[37] Y. Cui, S. Schuon, D. Chan, S. Thrun, and C. Theobalt. 3D shape scanning with a time-of-flight camera. In IEEE CVPR 10, pages 1173–1180, 2010.
[38] C. Distante, G. Diraco, and A. Leone. Active range imaging dataset for indoor surveillance. Annals of the BMVA, London, 3:1–16, December 2010.
[39] J. Teizer. 3D range imaging camera sensing for active safety in construction. In Sensors in Construction and Infrastructure Management, pages 103–117. ITcon, 2008.
[40] T. B. Moeslund and M. B. Holte. Gesture recognition using a range camera. Technical report, CVMT-07-01, February 2007.
[41] R. Conro et al. Shape and deformation measurement using heterodyne range imaging technology. Proceedings of 12th Asia-Pacific Conference on NDT, Auckland, New Zealand, November 2006.
[42] T. Kahlmann, F. Remondino, and H. Ingensand. Calibration for increased accuracy of the range imaging camera Swissranger. International Archives of the Photogrammetry, Remote Sensing, and Geoinformation Sciences, XXXVI(5):136–141, 2006.
[43] T. Möller, H. Kraft, J. Frey, M. Albrecht, and R. Lange. Robust 3D measurement with PMD sensors. PMDTechnologies GmbH, Am Eichenhang 50, D-57076 Siegen, Germany, 2005.
[44] Marvin Lindner and Andreas Kolb. Lateral and depth calibration of PMD-distance sensors. In Advances in Visual Computing, volume 4292 of Lecture Notes in Computer Science, pages 524–533. Springer Berlin Heidelberg, 2006.
[45] S. K. Ramanandan. 3D ToF Camera Calibration and Image Pre-processing. Master's thesis, Department of Electrical Engineering and Computer Science, University of Applied Sciences Bremen, Bremen, Germany, August 2011.
[46] http://en.wikipedia.org/wiki/OpenCV. Accessed in Aug 2011.
[47] S. L. X. Francis, S. G. Anavatti, and M. Garratt. Reconstructing the geometry of an object using 3D ToF camera. In Merging Fields of Computational Intelligence and Sensor Technology (CompSens), 2011 IEEE Workshop on, pages 13–17, April 2011.
[48] MESA Imaging. SR4000 Manual.

[49] Boyoon Jung and Gaurav S. Sukhatme. Detecting moving objects using a single camera on a mobile robot in an outdoor environment. In International Conference on Intelligent Autonomous Systems, pages 980–987, 2004.
[50] J. Shi and C. Tomasi. Good features to track. In Computer Vision and Pattern Recognition, 1994. Proceedings CVPR '94, 1994 IEEE Computer Society Conference on, pages 593–600, 1994.
