Spherical-Model-Based SLAM on Full-View Images for Indoor Environments


Jianfeng Li 1, Xiaowei Wang 2 and Shigang Li 1,2,*

1 School of Electronic and Information Engineering, also with the Key Laboratory of Non-linear Circuit and Intelligent Information Processing, Southwest University, Chongqing 400715, China; [email protected]
2 Graduate School of Information Sciences, Hiroshima City University, Hiroshima 7313194, Japan; [email protected]
* Correspondence: [email protected]

Received: 14 October 2018; Accepted: 14 November 2018; Published: 16 November 2018
Appl. Sci. 2018, 8, 2268; doi:10.3390/app8112268

Featured Application: For an indoor mobile robot, finding its location and building environment maps.

Abstract: SLAM (Simultaneous Localization and Mapping) relies on observations of the surroundings, and a full-view image provides more of them than a limited-view image does. In this paper, we present a spherical-model-based SLAM on full-view images for indoor environments. Unlike traditional limited-view images, a full-view image follows its own specific, nonlinear imaging principle and is accompanied by distortions, so specific techniques are needed to process it. In the proposed method, we first use a spherical model to express the full-view image. The algorithms are then implemented on this spherical model, including feature point extraction, feature point matching, 2D-3D connection, and projection and back-projection of scene points. Thanks to the full field of view, experiments show that the proposed method effectively handles sparse-feature or partially non-feature environments and achieves high accuracy in localization and mapping. A further experiment demonstrates how the accuracy is affected by the field of view.

Keywords: full-view image; spherical model; SLAM

1. Introduction

For a mobile robot, finding its location and building environment maps are basic and important tasks. An answer to this need is the development of simultaneous localization and mapping (SLAM) methods. For vision-based SLAM systems, localization and mapping are achieved by observing the features of the environment through a camera. Therefore, the performance of a vision-based SLAM method depends not only on the algorithm but also on the feature distribution of the environment observed by the camera. Figure 1 shows a sketch of an indoor environment. In such a room there may be few features within the field of view (FOV), and when observing such scenes with a limited-FOV camera, SLAM often fails. These problems can be avoided by using a full-view camera.

Figure 1. The performance of a vision-based SLAM is influenced by the field of view.

Another reason for using a full-view camera is that it improves the accuracy of SLAM. Different motions may produce similar changes in the image of a limited-FOV camera, whereas these motions can be discriminated in a full-view image.
For example, as shown in Figure 2, the translation of a limited-view camera along the horizontal axis and its rotation in the horizontal direction may result in the same movement for some image features, e.g., Target 1 in Figure 2. For Targets 2–4, however, the two camera motions cause different movements. This means that, in some cases, different motions are difficult to decouple from the observation of limited-FOV images, while they can be distinguished from the observation of full-view images, as the numerical sketch below illustrates.

Figure 2. The translation of a limited-view camera along the horizontal axis and its rotation in the horizontal direction may result in the same movement for some image features (Target 1); however, the same camera motions cause different movements for features in other places (Targets 2–4), and only a full-view camera can capture them.

Based on the above observations, a vision-based SLAM method using full-view images can effectively manage sparse-feature or partially non-feature environments and can achieve higher accuracy in localization and mapping than conventional limited-FOV methods.
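To make the ambiguity of Figure 2 concrete, the following sketch (ours, not from the paper) compares the angular motion of image features under a small horizontal translation and a small horizontal rotation, using unit bearing vectors as in a spherical model. The target positions and motion magnitudes are illustrative assumptions.

```python
import numpy as np

def yaw(theta):
    """Rotation about the vertical (y) axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def bearing(point, R=np.eye(3), t=np.zeros(3)):
    """Unit bearing vector of a world point seen from camera pose (R, t)."""
    v = R.T @ (point - t)
    return v / np.linalg.norm(v)

def angular_shift_deg(point, R, t):
    """Angle between the bearings of a point before and after a camera motion."""
    b0, b1 = bearing(point), bearing(point, R, t)
    return np.degrees(np.arccos(np.clip(b0 @ b1, -1.0, 1.0)))

# Hypothetical targets: one far ahead, one to the side, one behind the camera.
targets = {
    "Target ahead (far)": np.array([0.0, 0.0, 50.0]),
    "Target to the side": np.array([5.0, 0.0, 0.0]),
    "Target behind":      np.array([0.0, 0.0, -5.0]),
}

dx = 0.1             # translation along the horizontal axis (meters)
theta = dx / 50.0    # yaw angle chosen to mimic the translation for the far target

for name, X in targets.items():
    d_t = angular_shift_deg(X, np.eye(3), np.array([dx, 0.0, 0.0]))
    d_r = angular_shift_deg(X, yaw(theta), np.zeros(3))
    print(f"{name}: translation {d_t:.3f} deg vs. rotation {d_r:.3f} deg")
```

For the far frontal target, both motions shift the feature by about 0.115 degrees, so a narrow-FOV camera seeing only that target cannot tell them apart; the side and rear targets, visible only to a full-view camera, move quite differently under the two motions.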
a Until wider now, view few SLAM methods for using omnidirectional images have been proposed, and although a wider fewresults SLAM in a bettermethods performance for using omnidirectional in localization and images mapping, have therebeen areproposed, no experiments and alth assessingough a wider how view results in a better performance in localization and mapping, there are no experiments assessing viewaccuracy results is affected in a better by performance the view field. in localization and mapping, there are no experiments assessing how accuracy is affected by the view field. how accuracyIn this paper, is affected we realize by the simultaneous view field. localization and mapping (SLAM) on full-view images. In this paper, we realize simultaneous localization and mapping (SLAM) on full-view images. The principleIn this paper, is similar we realize to the simultaneous typical approach localization of the conventional and mapping SLAM (SLAM) methods, on full PTAM-view (parallelimages. TheThe prprincipleinciple isis similarsimilar toto thethe typicaltypical approachapproach ofof thethe conventionalconventional SLAMSLAM methods,methods, PTAMPTAM (parallel(parallel tracking and mapping) [[1].1]. In the proposed method, a full-viewfull-view image is captured by Ricoh Theta [[2].2]. trackingNext, feature and mapping) points are [1]. extracted In the proposed from the method, full-view a full image.-view Then,image spherical is captured projection by Ricoh is Theta used [2]. to Next,Next, featurefeature pointspoints areare extractedextracted fromfrom thethe fullfull--viewview image.image. ThThen,en, sphericalspherical projectionprojection isis usedused toto compute projection and back-projectionback-projection of the scene points. Finally, feature matching is performed computeusing a spherical projection epipolar and back constraint.-projection The of characteristic the scene points. of this Finally, paper isfeature that a matching spherical modelis performed is used usingusing aa sphericalspherical epipolarepipolar constraint.constraint. TheThe characteristiccharacteristic ofof thisthis paperpaper isis thatthat aa sphericalspherical modelmodel isis usedused throughout processing,processing, from featurefeature extractingextracting toto localizationlocalization andand mappingmapping computing.computing. throughoutThe rest proc of thisessing, paper from is organized feature extracting as follows. to Inlocalization the next section, and mapping we introduce computing. the related research. TheThe restrest ofof thisthis paperpaper isis organizedorganized asas follows.follows. InIn thethe nextnext section,section, wewe introduceintroduce thethe relatedrelated Inresearch. Section In3, weSection introduce 3, we aintroduce camera model.
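Below is a minimal sketch of the two spherical-model operations this pipeline rests on, assuming the Ricoh Theta output is stored as an equirectangular image of width W and height H (a common convention, but an assumption here; the function names are ours): pixels are back-projected to unit bearings on the sphere, bearings are projected back to pixels, and candidate matches are screened with the spherical epipolar constraint b2^T E b1 = 0.

```python
import numpy as np

def pixel_to_ray(u, v, W, H):
    """Back-project an equirectangular pixel to a unit bearing on the sphere."""
    lon = (u / W - 0.5) * 2.0 * np.pi        # longitude in [-pi, pi)
    lat = (0.5 - v / H) * np.pi              # latitude in [-pi/2, pi/2]
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

def ray_to_pixel(b, W, H):
    """Project a unit bearing back to equirectangular pixel coordinates."""
    lat = np.arcsin(np.clip(b[1], -1.0, 1.0))
    lon = np.arctan2(b[0], b[2])
    return ((lon / (2.0 * np.pi) + 0.5) * W, (0.5 - lat / np.pi) * H)

def essential(R, t):
    """E = [t]_x R for the relative motion X2 = R @ X1 + t between two views."""
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])
    return tx @ R

def epipolar_residual(b1, b2, E):
    """|b2^T E b1|; near zero for a correct match, so small values pass the test."""
    return abs(b2 @ E @ b1)

# Sanity check: one scene point observed from two poses satisfies the constraint.
R, t = np.eye(3), np.array([0.1, 0.0, 0.0])  # assumed relative motion
X1 = np.array([1.0, 0.5, 2.0])               # hypothetical point in view 1 frame
X2 = R @ X1 + t                              # the same point in view 2 frame
b1, b2 = X1 / np.linalg.norm(X1), X2 / np.linalg.norm(X2)
print(epipolar_residual(b1, b2, essential(R, t)))   # ~0 up to rounding
```

Because the bearings live on the whole sphere, this constraint applies uniformly to features in any direction, including behind the camera, which is what lets the matching step exploit the full field of view.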
