Active Perception for Outdoor Localisation with an Omnidirectional Camera

2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 25-29, 2020, Las Vegas, NV, USA (Virtual)

Maleen Jayasuriya¹, Ravindra Ranasinghe¹, Gamini Dissanayake¹

¹Maleen Jayasuriya, Ravindra Ranasinghe and Gamini Dissanayake are with the Centre for Autonomous Systems (CAS), University of Technology Sydney, Australia.

Abstract— This paper presents a novel localisation framework based on an omnidirectional camera, targeted at outdoor urban environments. Bearing-only information about persistent and easily observable high-level semantic landmarks (such as lamp-posts, street-signs and trees) is perceived using a Convolutional Neural Network (CNN). The framework utilises an information-theoretic strategy to decide the best viewpoint to serve as an input to the CNN, instead of the full 360° coverage offered by an omnidirectional camera, in order to gain the advantage of a wider field of view without compromising performance. Environmental landmark observations are supplemented with observations of ground surface boundaries corresponding to high-level features such as manhole covers, pavement edges and lane markings extracted from a second CNN. Localisation is carried out in an Extended Kalman Filter (EKF) framework using a sparse 2D map of the environmental landmarks and a Vector Distance Transform (VDT) based representation of the ground surface boundaries. This is in contrast to traditional vision-only localisation systems that have to carry out Visual Odometry (VO) or Simultaneous Localisation and Mapping (SLAM), since low-level features (such as SIFT, SURF, ORB) do not persist over long time frames due to radical appearance changes (illumination, occlusions etc.) and dynamic objects. As the proposed framework relies on high-level persistent semantic features of the environment, it offers an opportunity to carry out localisation on a prebuilt map, which is significantly more resource efficient and robust. Experiments using a Personal Mobility Device (PMD) driven in a representative urban environment are presented to demonstrate and evaluate the effectiveness of the proposed localiser against relevant state of the art techniques.

I. INTRODUCTION

Localisation remains one of the most fundamental and challenging tasks for an autonomous vehicle. Among the sensor modalities that aid in localisation, vision sensors stand out as low-cost, compact sources that offer rich information about the operating environment of a mobile robot [1].

This paper presents an outdoor vision based localisation framework, targeted at low-speed (under 15 km/h), resource constrained autonomous vehicles operating in urban and suburban pedestrian environments such as footpaths and pavements. Typical examples range from Personal Mobility Devices (PMDs) such as mobility scooters and powered wheelchairs, to delivery robots. There is significant interest in incorporating self-driving capability into PMDs to improve their safety and efficacy. The framework proposed in this paper has been implemented and evaluated on an instrumented mobility scooter platform described in [2].

Vision based systems operating in outdoor environments need to deal with the challenges caused by appearance changes (illumination, occlusions) and the presence of dynamic objects. Traditionally this has been addressed by resorting to Simultaneous Localisation and Mapping (SLAM) or pure Visual Odometry (VO), which do not require the environment representation to be invariant. However, this comes at the expense of a significant increase in computational cost and a decrease in robustness. Comparatively, given a suitable sensor and a representation of the environment that does not change with time, localisation on a prebuilt map is a relatively straightforward task. Thus, the proposed framework aims to address this by focusing on high-level semantic observations of environmental landmarks (street lamps, tree trunks, parking meters etc.) and ground surface boundary observations (pavement edges, manhole covers etc.) obtained from state of the art Convolutional Neural Networks (CNNs). Localisation is carried out on a prebuilt sparse 2D map of the landmark features and a Vector Distance Transform (VDT) representation of ground surface boundaries. The work presented in this paper builds on our previous work [2], [3] by leveraging an omnidirectional camera and an active vision paradigm to improve robustness and performance.
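To make the flavour of this estimator concrete, the following is a minimal sketch of a single bearing-only EKF update against one mapped 2D landmark, assuming a planar pose state [px, py, theta]. The state layout, the noise value and the helper `wrap_angle` are illustrative assumptions only; the framework described in this paper additionally fuses ground surface boundary observations through the VDT representation.

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def ekf_bearing_update(x, P, z, landmark, R=np.array([[np.deg2rad(2.0) ** 2]])):
    """One EKF update from a bearing-only observation of a mapped landmark.

    x: robot state [px, py, theta], P: 3x3 covariance,
    z: measured bearing to the landmark in the robot frame (radians),
    landmark: known 2D map position [lx, ly] of the observed landmark,
    R: 1x1 bearing measurement noise covariance (assumed value).
    """
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    q = dx * dx + dy * dy

    # Predicted bearing and its Jacobian with respect to [px, py, theta].
    z_hat = wrap_angle(np.arctan2(dy, dx) - x[2])
    H = np.array([[dy / q, -dx / q, -1.0]])

    # Standard EKF innovation, gain and update.
    y = wrap_angle(z - z_hat)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K.flatten() * y
    x_new[2] = wrap_angle(x_new[2])
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new
```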
The use of an omnidirectional camera overcomes an inherent limitation most vision based localisation approaches face due to the limited field of view (FoV) of conventional cameras, in comparison to the blanket coverage that sensors such as LIDARs offer. This is especially crippling in highly dynamic environments such as outdoor urban settings, where significant portions of the available FoV may be occluded [4]. However, utilising omnidirectional cameras for localisation also introduces its own set of challenges. In particular, a high resolution sensor is required to counteract the impact of the large field of view on the image resolution of environmental features, resulting in an increase in computational cost. An active perception paradigm [5] to determine when and where to make observations is proposed to deal with this issue. An information gain based metric is employed to select the portion of the image that is to be processed for feature extraction, instead of the entire 360° image, in order to provide the best trade-off between processing limitations and localisation accuracy. The contributions of this paper are:

• A localisation framework that takes advantage of omnidirectional cameras through CNN based perception of high-level semantic information of the environment.
• An information gain based active vision paradigm for perspective viewpoint selection to improve the efficacy and functionality of the proposed framework.
• A detailed evaluation, comparison (against ORB-SLAM2 [6], VINS [7] and RTAB [8]) and demonstration of the proposed system based on real world experiments carried out using an instrumented PMD.
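As a rough illustration of an information-gain criterion of the kind referred to above, the sketch below scores candidate perspective viewpoints by the expected reduction in entropy of a Gaussian pose estimate, given the mapped landmarks assumed to be visible from each candidate view. The candidate structure, visibility handling and noise value are assumptions made for illustration; the exact metric used by the proposed framework may differ.

```python
import numpy as np

def bearing_jacobian(x, lm):
    """Jacobian of a bearing observation of landmark lm w.r.t. pose [px, py, theta]."""
    dx, dy = lm[0] - x[0], lm[1] - x[1]
    q = dx * dx + dy * dy
    return np.array([[dy / q, -dx / q, -1.0]])

def expected_entropy_reduction(x, P, visible_landmarks, sigma_bearing=np.deg2rad(2.0)):
    """Expected information gain (in nats) of processing one candidate viewpoint,
    0.5 * log(det(P_prior) / det(P_posterior)) for a Gaussian pose estimate."""
    P_post = P.copy()
    R = np.array([[sigma_bearing ** 2]])
    for lm in visible_landmarks:
        H = bearing_jacobian(x, lm)
        S = H @ P_post @ H.T + R
        K = P_post @ H.T @ np.linalg.inv(S)
        P_post = (np.eye(3) - K @ H) @ P_post
    return 0.5 * np.log(np.linalg.det(P) / np.linalg.det(P_post))

def select_viewpoint(x, P, candidate_views):
    """Pick the candidate view (dict with a 'landmarks' list of map landmarks
    expected to be visible) that maximises the expected entropy reduction."""
    gains = [expected_entropy_reduction(x, P, v["landmarks"]) for v in candidate_views]
    return int(np.argmax(gains)), gains
```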
The remainder of this paper is structured as follows: Section II discusses the state of the art related to the work presented in this paper. Section III details the proposed localisation framework. Section IV provides a brief overview of the configuration of the hardware platform that was used for evaluation. Section V presents an evaluation of the proposed system on real world data, including a comparison with state of the art open source visual localisation schemes. Finally, Section VI offers a brief discussion on the results and proposed future work.

II. RELATED WORK

The past decade has witnessed enormous progress in the field of vision based localisation. Feature based approaches that first extract low-level handcrafted features such as SIFT, SURF or ORB in order to carry out pose estimation, such as ORB-SLAM [6], [9], VINS [7], Kimera [10] and RTAB [8], are noteworthy state of the art contributions. Direct and semi-direct appearance based approaches that take into account pixel intensities of the entire image for motion estimation, such as LSD-SLAM [11] and DSO [12], have also proven to be remarkably effective. However, all these techniques rely on generating a location estimate while at the same time building a local or global representation of the environment, significantly adding to the computational effort required. Furthermore, these rely on "loop closure", which is essential to avoid the inevitable drift in the location estimate, and which is challenging in outdoor urban settings due to dynamic

Xiao et al. [29] extract landmarks such as poles, street signs and traffic lights obtained through a semantic segmentation process to match these against a 3D map of such features. The authors of [30] use a combination of LIDARs and cameras to observe similar high-level features as well as road-markings and facades to localise on a 3D map.

Our previous work [3] proposed an Extended Kalman Filter (EKF) based localisation scheme that fuses CNN based object detection of common environmental landmarks and ground surface boundaries for localisation on a sparse 2D map. However, the sparse nature of the bearing-only environmental landmark observations is a major limitation to the robustness and accuracy of the framework presented in [3]. The work presented in this paper proposes to use an omnidirectional camera to overcome this challenge. Although CNN based semantic segmentation and object detection schemes that directly apply to distorted fish-eye images have been attempted [31]–[33], these do not match the real-time performance and accuracy required to carry out visual localisation. Comparatively, the YOLO CNN object detection framework [34], [35], which was used in our previous work, offers state of the art performance and accuracy. Although it is possible to re-project the image from a fish-eye camera into multiple perspective images that are suitable for YOLO, using the set of all re-projected images incurs a significant computational cost and is therefore not suitable on a resource constrained platform. Hence, an active vision paradigm is proposed to maintain an efficient processing pipeline for YOLO, in order to perform real-time localisation on our mobility scooter platform.

Active perception that directs the sensors on board a mobile platform in order to improve localisation performance has been proposed for use with sonar [36], LIDAR [37]
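As noted above, perspective views suitable for a detector such as YOLO can be re-projected from a calibrated fish-eye image. The sketch below uses OpenCV's fish-eye (equidistant) camera model; the calibration inputs (K_fisheye, D), the virtual-view parameters (yaw_deg, fov_deg) and the 416×416 output size are illustrative assumptions rather than the settings used in this work.

```python
import cv2
import numpy as np

def perspective_view(fisheye_img, K_fisheye, D, yaw_deg, out_size=(416, 416), fov_deg=90.0):
    """Render a virtual pinhole view rotated yaw_deg about the vertical axis
    away from the fish-eye optical axis, e.g. as input to a YOLO detector."""
    w, h = out_size
    # Intrinsics of the virtual pinhole camera for the requested field of view.
    f = (w / 2.0) / np.tan(np.deg2rad(fov_deg) / 2.0)
    K_virtual = np.array([[f, 0, w / 2.0],
                          [0, f, h / 2.0],
                          [0, 0, 1.0]])
    # Rotation from the fish-eye camera frame to the virtual view (yaw about the y-axis).
    yaw = np.deg2rad(yaw_deg)
    R = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                  [0, 1, 0],
                  [-np.sin(yaw), 0, np.cos(yaw)]])
    # Build the undistortion/rectification maps and sample the fish-eye image.
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K_fisheye, D, R, K_virtual, (w, h), cv2.CV_16SC2)
    return cv2.remap(fisheye_img, map1, map2, cv2.INTER_LINEAR)
```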
