Stereo Depth Map Fusion for Robot Navigation

Christian Häne¹, Christopher Zach¹, Jongwoo Lim², Ananth Ranganathan² and Marc Pollefeys¹
¹Department of Computer Science, ETH Zürich, Switzerland
²Honda Research Institute, Mountain View, CA, USA

Abstract— We present a method to reconstruct indoor environments from stereo image pairs, suitable for the navigation of robots. To enable a robot to navigate solely using visual cues it receives from a stereo camera, the depth information needs to be extracted from the image pairs and combined into a common representation. The initially determined raw depth maps are fused into a two-level height map representation which contains a floor and a ceiling height level. To reduce the noise in the height maps we employ a total variation regularized energy functional. With this 2.5D representation of the scene the computational complexity of the energy optimization is reduced by one dimension in contrast to other fusion techniques that work on the full 3D space, such as volumetric fusion. While we show only results for indoor environments, the approach can be extended to generate height maps for outdoor environments.

I. INTRODUCTION

To enable a robot to navigate with visual data captured from a stereo camera, it needs some kind of representation of the observed world that is suitable for this task. There is a large literature on visual simultaneous localization and mapping (VSLAM) generating a sparse representation of the environment and the respective camera/robot poses. However, such a sparse representation generally only includes points that are salient enough in the considered images and thus lie in well-textured regions. For instance, uniformly painted walls and homogeneous regions in general are not covered by the model obtained from VSLAM and are therefore problematic for robot navigation.

The representation of the robot's surroundings needs to be appropriate for higher-level tasks such as path planning and collision avoidance. Although one could use the reconstructed sparse feature points from a VSLAM pipeline directly for collision avoidance, this can potentially miss whole objects that lack feature points. In such a case the possible obstacle would simply be invisible to the robot. In indoor environments such objects are quite common, e.g. a uniformly colored wall. Thus a denser representation of the scene is favorable.

Many local and global stereo methods are available to determine depth from images, but the returned depth maps are generally still contaminated by inaccurate and erroneous depth estimates. These problems emerge especially in textureless parts of images, which are quite common in indoor environments. Although global stereo methods optimize an energy functional over the whole image domain in order to hypothesize reasonable depth values in ambiguous regions, the resulting depth maps may still contain errors.

In order to overcome this problem we combine several depth maps calculated from multiple stereo images into a common representation. Thus, inconsistencies between several acquired depth maps can be discovered and removed. We propose to use a height map based representation of the environment for the globally combined representation of the scene. For indoor environments the height map is composed of two levels, a floor and a ceiling level.

Our target setting is a humanoid robot that is equipped with a stereo camera, which enables it to observe its surroundings. The two cameras are mounted on a stereo rig with fixed relative position. It is further assumed that the intrinsic calibration of the two cameras is known. For the fusion procedure the camera poses are needed as well; for this we use the output of existing VSLAM pipelines.

The remainder of the paper is structured as follows. In section II prior work related to this paper is presented. Our height map based reconstruction approach is explained in section III. In section IV we show the results we obtained for indoor environments and compare them to a full 3D reconstruction of the scene. Finally, we draw some conclusions in section V.

II. RELATED WORK

There exists a vast literature on computational stereo, fusion of range data, and general 3D modeling from images. We focus on approaches suitable for efficient implementation or explicitly addressing reconstruction of indoor environments. Merrell et al. [1] propose a visibility-based approach capable of real-time operation that fuses a sequence of nearby depth maps into a single depth map with a higher confidence. This is done by projecting the depth maps to a reference view and then choosing, among the depths projecting to the same position, one that reduces the number of visibility conflicts. A visibility conflict is the situation where a measurement lies in the free space of some other measurement. While this gives more accurate depth maps, which could certainly be used for collision avoidance, there is no direct way to generate a globally consistent representation of the complete observed space. Because no regularization is used in this fusion, the resulting depth maps also still contain some noise.

In order to obtain globally consistent 3D models from a set of range images, Zach et al. [2], [3] use an implicit representation of the space. The input depth maps are converted to signed distance fields. A total variation energy functional with an L1 distance data fidelity term is defined on a regularized signed distance field that simultaneously approximates all the input fields. This convex energy functional is minimized to obtain the final reconstructed scene.

In [4] Furukawa et al. use the Manhattan world assumption, which means that all surfaces are oriented according to the coordinate axes, to formulate a stereo algorithm that is able to handle flat textureless surfaces. From the directions obtained by the patch-based multi-view stereo (PMVS) software [5], which extracts a semi-dense oriented point cloud from a set of input images with known camera orientations, the three orthogonal main directions are extracted. Afterwards the problem of assigning a plane to each of the pixels is formulated as a Markov random field (MRF) and solved using the graph cut method. This finally leads to a depth map that contains flat surfaces oriented according to the three main directions. In [6] they use these depth maps to reconstruct building interiors. In a similar way as in [2], [3] they integrate the depth maps into a volumetric structure. Here the voxels have a binary label, interior or exterior. To get a full labeling of the space that best approximates all the given input depth maps, an MRF is formulated and solved using graph cuts. This leads to complete polygonal meshes of building interiors but comes with a high computational cost: reported run-times range from a few hours to multiple days.

Gallup et al. [7], [8] use a height map representation to reconstruct urban environments. They integrate the raw depth maps into a three-dimensional occupancy grid, which is aligned such that the z-axis points in the upright direction. For each voxel a number is stored; free space gets a negative weight and occupied space a positive weight. The space in front of the measured depth is assumed to be free and the space behind the measurement is expected to be occupied, with an exponential drop of the weight. One or multiple height levels are then extracted by finding the minimum of an energy functional for each z-column. Because each column is optimized independently, this can be calculated very efficiently on GPUs, but it has the drawback that there is no regularization between the columns.

In this work we focus on representing indoor environments by two-level height maps. In contrast to [7], [8] we use a total variation energy functional to obtain regularized results. This leads to reconstructions where most of the noise is removed. It has the benefit over full 3D reconstructions like the one in [2], [3] that it can be computed faster due to the reduced representation, which is still able to represent the scene accurately enough to be used for robot navigation.

Fig. 1. Two-level height map: the floor and the ceiling are each approximated by a height level.

According to these considerations it makes sense to constrain the final reconstruction to be represented by a floor and a ceiling height level (Fig. 1).

A. Depth Integration

As a first step the raw depth maps are integrated into a volumetric representation of the space. This is done by creating an occupancy grid that stores a negative number for free space and a positive one for occupied space. With increasing absolute value the decision for free or occupied space becomes more certain. This is similar to the probabilistic formulation of occupancy grids, e.g. [9]. The z-axis is aligned with the upright direction of the scene, such that the discontinuities in the height map are vertical walls. We detect the upright direction in the input images by detecting the associated vanishing point.

In order to integrate a new depth map into the common volumetric representation, each voxel is projected onto the new depth map. For every voxel v the weight that it adds to the grid is calculated. It depends on the depth z_v that the voxel v has according to the new view and on the depth measurement z_p of the depth pixel p to which v projects. Due to the constant disparity resolution in the stereo image, the depth resolution decreases with increasing depth [10]. To include this in our cost function, a band around the measurement whose width depends on the measured depth z_p is defined. The width of the band is given by

    l_p(z_p) = max( (z_p^2 / (b f)) δ_d , ε ).    (1)

The first operand approximates the depth uncertainty [10], where δ_d is the disparity resolution, b the baseline of the stereo rig and f the focal length.
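The quadratic depth term in (1) follows from the standard depth-disparity relation of a rectified stereo pair; a short sketch of the derivation, which the text above does not spell out:

```latex
% Depth of a point seen by a rectified stereo pair with baseline b,
% focal length f and disparity d:
z = \frac{bf}{d}
% Sensitivity of the depth to a disparity error:
\left|\frac{\partial z}{\partial d}\right| = \frac{bf}{d^2} = \frac{z^2}{bf}
% A disparity quantization step \delta_d therefore induces a depth
% uncertainty of roughly
|\Delta z| \approx \frac{z_p^2}{bf}\,\delta_d
% which is the first operand of the max in (1); the constant
% \varepsilon presumably keeps the band from collapsing at small depths.
```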
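To make the integration step concrete, the following is a minimal sketch (hypothetical function and parameter names, not the authors' implementation) of accumulating one depth map's evidence into an occupancy grid using the band of Eq. (1), with the exponential falloff behind the measurement borrowed from the weighting scheme of Gallup et al. [7]:

```python
import numpy as np

def band_width(z_p, baseline, focal, delta_d, eps=0.05):
    """Width of the uncertainty band around a depth measurement, Eq. (1).

    eps is an illustrative minimum band width; the paper's value is not given here.
    """
    return np.maximum(z_p**2 * delta_d / (baseline * focal), eps)

def integrate_depthmap(grid, voxel_depths, measured_depths,
                       baseline, focal, delta_d):
    """Add one depth map's evidence to an occupancy grid.

    grid            -- flat array of accumulated weights (negative = free,
                       positive = occupied; larger magnitude = more certain)
    voxel_depths    -- depth z_v of each voxel as seen from the new view
    measured_depths -- depth z_p of the depth pixel each voxel projects to
    """
    l = band_width(measured_depths, baseline, focal, delta_d)

    # Space in front of the measurement is assumed free: negative weight.
    free = voxel_depths < measured_depths - l
    grid[free] -= 1.0

    # Voxels inside the uncertainty band count as occupied evidence.
    band = np.abs(voxel_depths - measured_depths) <= l
    grid[band] += 1.0

    # Behind the band, occupied evidence decays exponentially with distance.
    behind = voxel_depths > measured_depths + l
    dist = voxel_depths[behind] - measured_depths[behind]
    grid[behind] += np.exp(-dist / l[behind])
    return grid
```

In a full pipeline this would be called once per incoming stereo depth map, with voxel_depths obtained by projecting every grid voxel into the new camera view.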
