
Downloaded from orbit.dtu.dk on: Sep 26, 2021

Mobile Robot Navigation in a Corridor Using Visual Odometry

Bayramoglu, Enis; Andersen, Nils Axel; Poulsen, Niels Kjølstad; Andersen, Jens Christian; Ravn, Ole

Published in: Proceedings of the 14th International Conference on Advanced Robotics

Publication date: 2009

Document Version: Publisher's PDF, also known as Version of record

Citation (APA): Bayramoglu, E., Andersen, N. A., Poulsen, N. K., Andersen, J. C., & Ravn, O. (2009). Mobile Robot Navigation in a Corridor Using Visual Odometry. In Proceedings of the 14th International Conference on Advanced Robotics (pp. 58). IEEE.

Mobile Robot Navigation in a Corridor Using Visual Odometry

Enis Bayramoğlu∗, Nils Axel Andersen∗, Niels Kjølstad Poulsen†, Jens Christian Andersen∗ and Ole Ravn∗
∗Department of Electrical Engineering, Automation and Control Group, Technical University of Denmark, Lyngby, Denmark
Emails: {eba, naa, jca, or}@elektro.dtu.dk
†Department of Informatics and Mathematical Modelling, Technical University of Denmark, Lyngby, Denmark
Email: [email protected]

Abstract— The incorporation of computer vision into mobile robot localization is studied in this work. It includes the generation of localization information from raw images and its fusion with the odometric pose estimation. The technique is then implemented on a small mobile robot operating in a corridor environment. A new segmented Hough transform with an improved way of discretization is used for image line extraction. The vanishing point concept is then incorporated to classify lines as well as to estimate the orientation. A method involving the iterative elimination of outliers is employed to find both the vanishing point and the camera position. The fusion between the vision-based pose estimation and the odometry is achieved with an extended Kalman filter. A distance-driven error model is used for the odometry, while a simple error model with constant noise is assumed for the vision. An extended Kalman filter as a parameter estimator is also applied to estimate the odometry parameters. Experimental results are included. The robustness and precision of the entire system are illustrated by performing simple navigation tasks.

I. INTRODUCTION

The field of robot vision is gaining prominence as its possibilities are explored. The importance of vision in humans compared to all other senses pays credit to that. Many tasks performed by humans today require vision, and their automation could become possible as the computer vision field develops. The most significant advantage of vision is its capability to acquire information even in very complex environments without interfering with the surroundings. This makes it a very flexible sense.

Mobile robotics, still in its infancy, is one area where flexibility towards the environment is most desired. The major challenge of mobile robotics, as the name suggests, is to navigate the robot to where it needs to be. In addition to the obvious requirement of primary motion capabilities, such as turning and moving backwards and forwards, the robot has to sense its possibly dynamic environment, determine its own location and generate a motion plan accordingly.

The project described in this article aims to contribute to both fields by applying computer vision to perform one of the most important tasks of mobile robotics: localization. The scope of this project includes the generation of this information from the images as well as its fusion with the wheel encoders. A small mobile robot operating in a corridor environment is then equipped with this pose estimator to perform simple navigation tasks as an indicator of overall performance.

The fields of computer vision and mobile robot localization have been studied extensively to date. The work of Kleeman [8] is a good example of mobile robot localization with multiple sensors, namely odometry and advanced sonar. Camera pose estimation, independent of robotics, is also a very active research field: Yu et al. [16] used the trifocal tensor with point features to estimate the path of the camera from a sequence of images, and Makadia [11] investigated camera pose estimation restricted to a plane. On the subject of mobile robot localization with vision, Andersen et al. [2] employed monocular vision to assist a laser scanner, Lin [9] used stereo vision, and Munguia and Grau [12] studied monocular vision directly.

Previous work investigating problems similar to this project should also be noted. Tada [15] uses monocular vision in a corridor and incorporates the vanishing point to follow the center, while Shi et al. [14] study navigation in a corridor using lines, but they are mainly interested in a safe region to travel in rather than in the pose, and follow a very different strategy. Guerra et al. [6] obtain incremental measurements from lines in environments unknown a priori, and they also carry out experiments in a corridor.

II. SOLUTION OUTLINE

In the assumed setup, the mobile robot has a single camera mounted on it without any ability to turn or move w.r.t. the robot itself. The active wheels of the robot also have encoders available for dead reckoning. The robot localization is performed in a corridor, where vision is used to estimate the robot orientation and its position across the width of the corridor. Dead reckoning, on the other hand, is used to keep track of both the orientation and the position across the width and along the depth of the corridor. The corridor dimensions are assumed to be known a priori.

The choice of the corridor as the working environment has two important reasons. First, the corridor is a common part of most domestic environments, and being able to navigate in it has potential on its own. Second, it has a very regular and simple structure, making it easier to start the development of a more general vision-based solution.

Dead reckoning is always applied to keep track of the pose estimate, with a growing error. Therefore, for each raw image taken, a prior estimate is available to aid the visual estimation. Initially, the robot is either started from a known location or a high uncertainty in the pose is assumed.

Visual estimation is performed using image lines as features. Lines are extracted using a form of segmented Hough transform. These lines are then classified w.r.t. direction using invariant environmental information and the prior estimate. The vanishing point corresponding to the direction along the corridor is also calculated robustly. The lines are then matched to each of the four lines lying along the corners of the corridor. Finally, the vanishing point is used to estimate the orientation, while the line matches are used to estimate the translation.

When the visual estimate is available, it is checked for consistency using the prior estimate and Bayesian hypothesis testing. If it passes the check, it is fused with the dead reckoning using an extended Kalman filter (EKF). The model for the dead reckoning is modified to allow for the estimation of its parameters along with the pose itself, resulting in an extended Kalman filter as a parameter estimator (EKFPE).

The processing of each image spans a few sampling periods of the wheel encoders. During this time, dead reckoning is continued and all the encoder output is recorded at the same time. When the visual estimate becomes available, it is used to refine the estimate at the time the image was taken.

…to those images. This segmentation can be shown to speed up the overall Hough transform proportionally to the square root of the number of segments. The performance is further increased by the extensive use of table lookups, made possible by the small size of each sub-image. The lines obtained from each sub-image are then traced across the sub-images.

The algorithm with all the performance optimizations resulted in a 15-60 times speed increase compared to the OpenCV implementation of the SHT. The Hough transform is also modified to keep track of the supporting pixels for each line through the use of look-up tables. Two line segments are combined if their combined supporting pixels describe a line precisely enough with respect to a threshold. As well as being a robust criterion for line combination, this provides the endpoints of lines robustly and without any need for further processing.

B. Vanishing Point Detection

If a number of lines are parallel in the 3D scene, their projections on the image all meet at a single point. This point is the so-called "vanishing point" specific to the direction of those lines. The vanishing point is a useful tool both for the detection of the 3D directions of image lines and for the calculation of the camera orientation w.r.t. that direction.

The vanishing point is expected to sit at a point where many lines mutually intersect, provided there are enough supporting lines. When all the intersection points between the image lines are calculated, a dense cluster is expected to form around the vanishing point. Using the available prior estimate, it is also possible to calculate an estimate for the vanishing point along with an uncertainty.
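The clustering idea above can be illustrated with a short sketch. This is not the authors' implementation, only a minimal variant under stated assumptions: lines are given in homogeneous form, all pairwise intersections are computed via cross products, and the points farthest from the running mean are iteratively trimmed as outliers. The function name, the trimming fraction and the iteration cap are all assumptions for illustration.

```python
import numpy as np

def vanishing_point(lines, max_iters=10, keep_frac=0.7):
    """Estimate a vanishing point from homogeneous image lines.

    lines: (N, 3) array; row (a, b, c) represents the line a*x + b*y + c = 0.
    The intersection of two lines is the cross product of their vectors.
    Intersection points far from the running mean are iteratively trimmed
    as outliers; the mean of the surviving cluster is returned.
    """
    points = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            p = np.cross(lines[i], lines[j])
            if abs(p[2]) > 1e-9:              # skip (near-)parallel pairs
                points.append(p[:2] / p[2])   # back to pixel coordinates
    points = np.array(points)
    for _ in range(max_iters):
        mean = points.mean(axis=0)
        dist = np.linalg.norm(points - mean, axis=1)
        keep = dist <= np.quantile(dist, keep_frac)
        if keep.all():
            break                             # cluster is stable
        points = points[keep]
    return points.mean(axis=0)
```

With enough lines through the true vanishing point, the stray intersections contributed by outlier lines are discarded within a few trimming rounds and the cluster mean converges on the common intersection.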
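As a sketch of how a vanishing point yields orientation (the paper states that it does without giving the formula here): for an ideal pinhole camera with focal length f in pixels, horizontal principal point cx, and zero roll and pitch, a horizontal 3D direction at yaw psi from the optical axis projects to u = cx + f*tan(psi). Inverting this is a one-liner; the function name is an assumption.

```python
import math

def yaw_from_vanishing_point(u, cx, f):
    """Camera yaw w.r.t. the corridor axis, in radians.

    u:  horizontal image coordinate of the corridor vanishing point (pixels)
    cx: horizontal coordinate of the principal point (pixels)
    f:  focal length (pixels)

    Assumes an ideal pinhole camera with zero roll and pitch, so a 3D
    direction at yaw psi from the optical axis projects to
    u = cx + f * tan(psi).
    """
    return math.atan2(u - cx, f)
```

When the vanishing point sits exactly on the principal point (u = cx), the camera is looking straight down the corridor and the yaw is zero.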
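The segment-combination criterion described for the modified Hough transform (merge two segments only if their pooled supporting pixels still describe a line precisely enough) can be sketched as follows. This is an illustrative stand-in rather than the paper's code: it fits a total-least-squares line via SVD and thresholds the worst pixel residual; the function name and the tolerance are assumptions.

```python
import numpy as np

def should_merge(pixels_a, pixels_b, tol=1.0):
    """Decide whether two line segments should be combined.

    pixels_a, pixels_b: (N, 2) arrays of the supporting pixels of each
    segment. A total-least-squares line is fitted to the pooled pixels;
    the segments are merged only if every pixel lies within `tol` pixels
    of that line.
    """
    pts = np.vstack([pixels_a, pixels_b]).astype(float)
    centroid = pts.mean(axis=0)
    # The right singular vector for the smallest singular value is the
    # normal of the best-fit (total least squares) line.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    residuals = np.abs((pts - centroid) @ normal)
    return bool(residuals.max() <= tol)
```

Because the merged segment's supporting pixel set is carried along, its extreme pixels directly give the endpoints, matching the paper's remark that no further processing is needed for endpoint recovery.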