International Journal of Advanced Robotic Systems

ARTICLE

A ToF-camera as a 3D Vision Sensor for Autonomous Mobile Robotics

Regular Paper

Sobers Lourdu Xavier Francis1*, Sreenatha G. Anavatti1, Matthew Garratt1 and Hyungbo Shim2

1 University of New South Wales, Canberra, Australia
2 Seoul National University, Seoul, Korea
*Corresponding author(s) E-mail: [email protected]

Received 03 February 2015; Accepted 25 August 2015

DOI: 10.5772/61348

© 2015 Author(s). Licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The aim of this paper is to deploy a time-of-flight (ToF) based photonic mixer device (PMD) camera on an Autonomous Ground Vehicle (AGV) whose overall target is to traverse from one point to another in hazardous and hostile environments, employing obstacle avoidance without human intervention. The hypothesized approach of applying a ToF camera to an AGV is well suited to autonomous robotics because the ToF camera can provide three-dimensional (3D) information at a low computational cost. After calibration and ground testing, the camera is mounted on and integrated with the Pioneer mobile robot and is utilized to extract information about obstacles. The workspace is a two-dimensional (2D) world map divided into grid cells, where the collision-free path defined by the graph-search algorithm is a sequence of cells the AGV can traverse to reach the target. PMD depth data is used to populate traversable areas and obstacles in a grid of cells of suitable size. These camera data are converted into Cartesian coordinates for entry into the workspace grid map. A more optimal camera mounting angle is determined and adopted by analysing discrepancies in the camera's performance, such as pixel detection, the detection rate, the maximum perceived distances, and infrared (IR) scattering with respect to the ground surface. This mounting angle is recommended to be half the vertical field-of-view (FoV) of the PMD camera. A series of still and moving tests are conducted on the AGV to verify correct sensor operation, which show that the postulated application of the ToF camera on the AGV is not straightforward. Later, to stabilize the moving PMD camera and to detect obstacles, a feature-tracking algorithm and the scene-flow technique are implemented in a real-time experiment.

Keywords: 3D ToF Camera, PMD Camera, Pioneer Mobile Robots, Path-planning

1. Introduction

As an AGV must be able to adequately sense its surroundings in order to operate in unknown environments and execute autonomous tasks, vision sensors provide the necessary information required for it to perceive and avoid any obstacles and accomplish autonomous path-planning. Hence, the perception sensor becomes the key sensory device for perceiving the environment in intelligent mobile robots, and the perception objective depends on three basic system qualities, namely rapidity, compactness and robustness [1].
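The mounting-angle recommendation in the abstract — tilting the camera down by exactly half its vertical FoV — follows from simple ray geometry: at that tilt the topmost pixel row points at the horizon, so the camera looks as far ahead as possible while the rest of the FoV covers the ground. A minimal sketch of this geometry (the 40° FoV and 0.5 m mounting height are illustrative values, not parameters from this paper):

```python
import math

def ground_coverage(height_m, vfov_deg, tilt_deg):
    """Ground-plane footprint of a camera tilted below the horizontal.

    Returns (near, far): the distances at which the bottom and top rays
    of the vertical FoV intersect the ground; far is math.inf when the
    top ray points at or above the horizon.
    """
    half = vfov_deg / 2.0
    bottom = math.radians(tilt_deg + half)  # bottom ray, below horizontal
    top = math.radians(tilt_deg - half)     # top ray, below horizontal
    near = height_m / math.tan(bottom)      # start of visible ground
    far = height_m / math.tan(top) if top > 0 else math.inf
    return near, far

# Tilt = vfov/2 aims the top ray exactly at the horizon: unlimited
# look-ahead, with a blind spot of height/tan(vfov) in front of the robot.
near, far = ground_coverage(height_m=0.5, vfov_deg=40.0, tilt_deg=20.0)
```

With a larger tilt the far edge becomes finite (more ground, less look-ahead); with a smaller tilt part of the FoV is wasted above the horizon, which is why half the vertical FoV is the natural compromise.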
Over the last few decades, many different types of sensors [2] have been developed in the context of AGV path-planning to avoid obstacles [3], such as infrared sensors [4], ultrasonic sensors, sonar [5], LADAR [6], laser rangefinders [7], camera data fused with radar [8] and stereo cameras with a projector [9]. These sensors' data, along with data-processing techniques, are used to update the positions and directions of the vertices of obstacles. However, these sensor systems cannot readily provide the necessary information about arbitrary surroundings.

As the world's attention increasingly focuses on automation in every field, extracting 3D information about an obstacle has become a topical and challenging task. As such, an appropriate sensor is required to obtain 3D information with small dimensions, low power consumption and real-time performance. The main limitation of a 2D camera is that, as it is the projection of 3D information onto a 2D image plane, it cannot provide complete information about the entire scene. Thus, the processing of these images depends upon the viewpoint (rather than the actual information about the object). In order to overcome this drawback, the use of 3D information has emerged. In general, researchers use a setup consisting of a charge-coupled device (CCD) camera and a light projector in order to obtain a 3D image, such as that of the 3D visualization of rock [9].

A 3D sensor based on photonic mixer device (PMD) technology is selected for our work; it delivers range and intensity data at low computational cost in a compact and economical design with a high frame rate. This camera system delivers the absolute geometrical dimensions of obstacles without depending upon the object surface, distance, rotation or illumination. Hence, it is rotation-, translation- and illumination-invariant [10]. Nowadays, RGBD cameras (e.g., Kinect, Asus Xtion, Carmine) are widely used in object recognition and mobile robotics applications. However, these RGBD cameras cannot operate in outdoor environments [11].

To ensure that the topmost pixels observe directly ahead of the robot, thereby giving an adequate conception of obstacles and maximizing the ground-plane conception, various analyses of the camera's performance are carried out in this paper. Later, parametric calibration of the PMD camera is performed by obtaining the necessary camera parameters to derive the true information of the imaged scene. The imaging technology of the PMD camera is thereby better understood: the camera pixels provide a complete image of the scene, with each pixel detecting range data stored in a 2D array, which are utilized and interpreted in this paper to extract information about the surroundings. A few experiments are carried out to measure the camera's parameters and the distance errors with respect to each pixel.

To determine how the camera functions in an environment similar to that claimed by the manufacturer's data sheet, white-surface and grass-surface testing are conducted for the PMD camera. This also provides a means to compare the camera's performance on a flat white surface against a flat grassy surface. Later, the camera data are synchronized with the instantaneous orientation and position of the platform (and thus the camera), which translates the Cartesian coordinates into grid squares. This reconstructs the ground region (the extremities that the camera can see) into grid cells of suitable size, which are input to the path-planning algorithm.

During real-time experimentation, the grid-based Efficient D* Lite path-planning algorithm and the scene-flow technique were programmed on the Pioneer onboard computers using the OpenCV and OpenGL libraries. In order to compensate for the ego-motion of the PMD camera, which is aligned with the AGV coordinates, a feature-detection algorithm using GoodFeaturesToTrack from the OpenCV library is adopted.
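The pipeline described above — per-pixel range data converted to Cartesian coordinates and then binned into grid cells for the path-planner — can be sketched as follows. The per-pixel ray model, FoV values and cell size are illustrative assumptions, not the paper's calibration; a real implementation would also transform points by the camera's mounting height and tilt before thresholding:

```python
import numpy as np

def depth_to_grid(depth, hfov_deg, vfov_deg, cell_size=0.4,
                  grid_shape=(10, 10), max_height=0.15):
    """Bin a ToF range image into a 2D occupancy grid.

    depth: (rows, cols) radial distances in metres. Each pixel gets an
    azimuth/elevation from its position in the array, is converted to
    Cartesian (x forward, y left, z up) and dropped into a ground-plane
    cell; returns above max_height mark the cell as an obstacle.
    """
    rows, cols = depth.shape
    az = np.radians(np.linspace(hfov_deg / 2, -hfov_deg / 2, cols))
    el = np.radians(np.linspace(vfov_deg / 2, -vfov_deg / 2, rows))[:, None]
    x = depth * np.cos(el) * np.cos(az)   # forward
    y = depth * np.cos(el) * np.sin(az)   # left
    z = depth * np.sin(el)                # up, relative to the camera
    ix = np.floor(x / cell_size).astype(int)
    iy = np.floor(y / cell_size).astype(int) + grid_shape[1] // 2
    grid = np.zeros(grid_shape, dtype=np.uint8)
    ok = (ix >= 0) & (ix < grid_shape[0]) & (iy >= 0) & (iy < grid_shape[1])
    hit = ok & (z > max_height)           # crude obstacle test
    grid[ix[hit], iy[hit]] = 1
    return grid

# A flat wall 2 m ahead fills a band of cells in front of the robot.
grid = depth_to_grid(np.full((3, 3), 2.0), hfov_deg=30.0, vfov_deg=30.0)
```

Cells marked 1 are then treated as non-traversable by the graph-search planner; everything else observed below the height threshold can be marked as free ground in the same pass.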
A PMD camera with a working range of 0.2–7 metres provides better depth precision compared to the Kinect (0.7–3.5 metres) and Carmine (0.35–1.4 metres). However, the PMD camera is constrained by its limited FoV, hence the need to tilt the camera mounting downwards to obtain a greater view of the ground. The specific mounting angle is explained in this paper; nonetheless, it was unexpected that light incident at different angles to the ground would result in significant receiver loss and distortion of distance measurements due to scattering. The camera is mounted on the front of the robot through brackets that enable variable camera mounting angles at a static angle of tilt; it is perceived that this configuration enables the best compromise between the ground conception and the straight-ahead conception as a function of the camera tilt angle ψ. Due to being mounted above the robot, it is necessary to flag closer obstacles so as to reduce the blind spot in front of the robot. Because of these considerations, a more optimal camera angle is adopted.

The paper is organized as follows: a brief comparison of 3D sensors and their fundamental principles is presented in Section II. In Section III, the calibration of the PMD camera is performed by parametric calibration. Section IV presents the manipulation of the PMD camera data. In Section V, several standardized tests are devised to characterize the PMD camera, and Section VI describes the experimental results.

2. ToF-based 3D Cameras - state-of-the-art

Nowadays, 3D data are required in the automation industries for analysing the visible space/environment, and their rapid acquisition by a robotic system is required for navigation and control applications. New 3D cameras at affordable prices have been successfully developed using the ToF principle to resemble LIDAR scanners. In a ToF camera unit, a modulated light pulse is transmitted by the illumination source, and the target distance is measured from the time taken by the pulse to reflect from the target back to the receiving unit. PMD Technologies have developed 3D sensors using the ToF principle, which provide for a wide range of field applications with high integration and cost-effective production [12].

ToF cameras do not suffer from missing texture in the scene or bad lighting conditions, are computationally less […] intensity and the 3D structure of the range data. Gesture recognition has been performed based on shape contexts and simple binary matching [40], whereby motion information is extracted by matching the difference in the range data of different image frames. The measurement of shapes and deformations of metal objects and structures using ToF range information and heterodyne imaging is discussed in [41].
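The modulation principle described in Section 2 can be made concrete. PMD devices are continuous-wave ToF sensors: each pixel correlates the received light with the emitted modulation at four phase offsets, and the phase shift of the return encodes the round-trip time. The following is a sketch of the standard four-phase ("four-bucket") computation, not code from this paper; the 20 MHz modulation frequency is an assumption, chosen because it gives an unambiguous range of c/(2f) ≈ 7.5 m, consistent with the 0.2–7 m working range quoted earlier:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance(a0, a1, a2, a3, f_mod=20e6):
    """Distance from the four correlation samples of a CW ToF pixel.

    a0..a3 are the pixel's correlation values at modulation phase
    offsets of 0, 90, 180 and 270 degrees; the recovered phase shift
    is proportional to the round-trip time of the modulated light.
    """
    phase = math.atan2(a1 - a3, a0 - a2) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)  # half the round trip
```

Distances beyond c/(2·f_mod) wrap around to small values, which is one reason ToF cameras quote a bounded working range.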