Visual Terrain Mapping for Mars Exploration

Clark F. Olson
Computing and Software Systems
University of Washington, Bothell
18115 Campus Way NE, Box 358534
Bothell, WA 98011-8246
+1-425-352-5288
[email protected]

Larry H. Matthies, John R. Wright
Jet Propulsion Laboratory
California Institute of Technology
4800 Oak Grove Drive
Pasadena, CA 91109-8099
[email protected], [email protected]

Ron Li, Kaichang Di
Department of Civil and Environmental Engineering and Geodetic Science
The Ohio State University
2070 Neil Avenue, 470 Hitchcock Hall
Columbus, OH 43210-1275
[email protected], [email protected]

0-7803-8155-6/04/$17.00 © 2004 IEEE. IEEEAC paper #1176, Version 2, Updated December 3, 2003.

Abstract—One goal for future Mars missions is to navigate a rover to science targets not visible to the rover, but seen in orbital or descent images. In order to support and improve long-range navigation capabilities, we generate 3D terrain maps using all available images, including surface images from the lander and/or rover, descent images from the lander, and orbital images from current and future Mars orbiters. The techniques used include wide-baseline stereo mapping for terrain distant from the rover, bundle adjustment for high-accuracy mapping of surface images, and structure-from-motion techniques for mapping using descent and orbital images. The terrain maps are compiled using a system for unifying multi-resolution models and integrating three-dimensional terrains.

TABLE OF CONTENTS

1. INTRODUCTION
2. WIDE-BASELINE STEREO VISION
3. MAPPING SURFACE PANORAMAS
4. MAPPING DESCENT IMAGES
5. MULTI-MODAL DATA
6. INTEGRATING DATA SETS
7. CONCLUSIONS
8. ACKNOWLEDGEMENTS
9. REFERENCES
10. BIOGRAPHIES

1. INTRODUCTION

For a Mars rover capable of long-range mobility, it is highly desirable to travel to science targets observed in orbital or descent images. However, current rover technologies do not allow rovers to autonomously navigate to distant targets with a single command. Since communication with Mars rovers usually occurs only once per day, navigation errors can result in the loss of an entire day of scientific activity.

In order to improve long-range navigation capabilities, we generate 3D terrain maps using all available images, including surface images from the lander and/or rover, descent images from the lander, and orbital images from current and future Mars orbiters. Laser rangefinder data from rovers and/or landers can also be integrated with this methodology.

We apply bundle adjustment techniques to overlapping stereo images from panoramic image sets in order to create highly accurate maps of terrain near the rover. This method automatically determines corresponding features between multiple pairs of stereo images, even in cases where the overlap is small. These correspondences are used to update the camera positions precisely. Finally, seamless maps are constructed from the stereo data.

Distant terrain is mapped using a combination of rover, lander, and orbital images. Onboard the rover, maps of distant terrain can be created using wide-baseline stereo vision. In addition, we create maps using images collected as a lander descends to the surface or from orbital images.

While conventional stereo vision performed on the rover has limited accuracy for distant terrain owing to the small distance between the stereo cameras (known as the baseline distance), we can achieve accurate mapping for distant terrain using wide-baseline stereo vision. In this technique, images from the same camera, but at different rover positions, are used to generate a virtual pair of stereo images with a large baseline distance. In this work, we overcome two significant problems. First, the relative positioning between the cameras is not well known, unlike conventional stereo vision, where the cameras can be carefully calibrated. In addition, the problem of determining the matching locations between the two images is more difficult owing to the different viewpoints at which the images are captured.

Images captured during the descent of a lander to the surface can be mapped using similar techniques. This data poses the additional difficulty that the direction of movement is towards the terrain being imaged, complicating image rectification. We determine the terrain height by resampling one image several times according to possible terrain heights and selecting the height at which the best match occurs for each image location.

Techniques similar to wide-baseline stereo vision are applied to the creation of terrain maps using correspondences between descent images and orbital images, or between orbital images from different sensors. In this case, the different sensors have different responses to various terrain features. For this reason, we use a representation of the images that is based on local entropy to perform matching.

The terrain maps computed from the various images are compiled using SUMMITT (System for Unifying Multi-resolution Models and Integrating Three-dimensional Terrains), which is designed to merge disparate data sets from multiple missions into a single multi-resolution data set.

A considerable amount of previous work has been done on robotic mapping. However, much of it concerns indoor robots, while we are concerned with mapping natural terrain with rovers and spacecraft. We also concentrate on the use of cameras for mapping, but other sensors have also been used for mapping Mars, including laser altimetry [1] and delay-Doppler radar [2]. Aside from our own work, much of the early and recent work on terrain mapping for rovers has been performed at CMU [3, 4, 5, 6]. Also of note is work at CNRS, where stereo mapping is performed using an autonomous blimp [7, 8].

In Section 2, we discuss the use of wide-baseline stereo mapping for terrain that cannot be mapped effectively using conventional stereo vision. Section 3 describes our approach to mapping surface terrain close to the rover using locally captured stereo panoramas. Methods for mapping images captured during a spacecraft descent to the planetary surface are given in Section 4. Section 5 discusses techniques by which multi-modal image pairs (such as from different orbiters or orbital/descent image pairs) can be used to create three-dimensional maps. The SUMMITT system for integrating multi-resolution data sets is described in Section 6. We give our conclusions in Section 7.

2. WIDE-BASELINE STEREO VISION

The accuracy of stereo vision decreases with the square of the distance to the terrain position. For this reason, conventional stereo vision cannot accurately map terrain many meters away. One solution to this problem is to use a larger baseline distance (the distance between the cameras). However, this introduces two new problems. First, stereo systems typically use a pair of cameras that are carefully calibrated so that the relative position and orientation between the cameras are known to high precision. This is not possible with wide-baseline stereo, owing to rover odometry errors. Second, the change in the viewpoint between the images makes stereo matching more difficult, since the terrain no longer has the same appearance in both images. Our algorithm addresses these problems using a motion refinement step based on the structure-from-motion problem of computer vision [9] and robust matching between the images [10].

Motion Refinement

We assume that an initial estimate of the motion between the positions at which the images were captured is available from the rover odometry (or other sensors). We refine this estimate by determining matching points between the images and updating the motion to enforce geometrical constraints that must be satisfied if the points are in correspondence.

Once the corresponding matches have been found (using, for example, normalized correlation), we seek precise values for the translation T and rotation R relating the positions of the rover camera at the two locations. In this procedure, we include in the state vector not only the six parameters describing the relative camera positions (only five are recoverable, since the problem can be scaled to an arbitrary size), but also the depth estimates of the features for which we have found correspondences. With this augmented state vector, the objective function that we minimize becomes the sum of the squared distances between the detected feature position in one image and the estimated feature position calculated using backprojection from the other image according to the current motion estimate. We use the Levenberg-Marquardt optimization technique [11] to adjust the motion parameters in order to minimize this function.

After we have determined the new motion estimate, we apply a rectification process that forces corresponding points between the images to lie on the same row in the images [12]. Figure 1 shows an example of matching features that were detected in an image pair with a baseline distance of 20 meters. Figure 2 shows the images after motion refinement and rectification have been performed. In this example, the camera pointing angles converged by 20 degrees from perpendicular, so that the same rock was located at the center of both images.
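As a back-of-the-envelope illustration of the range-error relation that motivates wide-baseline stereo (error growing with the square of distance and shrinking with baseline), the following sketch compares a rover-scale baseline with a 20-meter virtual baseline. The focal length, disparity error, and baseline values are illustrative assumptions, not figures from this work:

```python
# First-order stereo range error: dZ ~ Z^2 * e_d / (f * b), where Z is range,
# e_d the disparity error in pixels, f the focal length in pixels, and b the
# baseline in meters. All numbers below are illustrative assumptions.
def depth_error(z_m, baseline_m, focal_px, disparity_err_px=0.25):
    """First-order depth uncertainty of a stereo pair, in meters."""
    return (z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

# A 0.2 m rover mast baseline vs. a 20 m wide baseline, both at 100 m range.
narrow = depth_error(z_m=100.0, baseline_m=0.2, focal_px=1000.0)   # 12.5 m
wide = depth_error(z_m=100.0, baseline_m=20.0, focal_px=1000.0)    # 0.125 m
```

At a fixed range, the uncertainty drops in direct proportion to the baseline, so widening the baseline by a factor of 100 improves the first-order depth accuracy by the same factor.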
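The motion refinement step above can be sketched as a least-squares problem over five motion parameters (a rotation vector plus a unit-norm baseline direction, reflecting the unrecoverable scale) together with the per-feature depths. This is a toy reconstruction on exact synthetic correspondences using SciPy's Levenberg-Marquardt solver, not the flight implementation; the pinhole model and all numeric values are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(pts_cam, f):
    """Pinhole projection of camera-frame points to pixel offsets."""
    return f * pts_cam[:, :2] / pts_cam[:, 2:3]

def residuals(state, p1, p2, f):
    """Reprojection error for the augmented state:
    state = [rotation vector (3), baseline direction angles (2), depths (N)]."""
    rot = Rotation.from_rotvec(state[:3])
    theta, phi = state[3], state[4]
    # Unit-norm translation: only the baseline direction is recoverable.
    t = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    depths = state[5:]
    rays = np.column_stack([p1 / f, np.ones(len(p1))])  # back-projected rays, z = 1
    pts1 = rays * depths[:, None]     # hypothesized 3-D points, camera-1 frame
    pts2 = rot.apply(pts1) + t        # same points in the camera-2 frame
    return (project(pts2, f) - p2).ravel()

# Synthetic check: exact correspondences, then refine a perturbed estimate,
# as odometry would supply in practice.
rng = np.random.default_rng(0)
f = 800.0
true_rot = Rotation.from_rotvec([0.0, 0.05, 0.0])
t_true = np.array([1.0, 0.0, 0.0])            # baseline along +x, unit length
pts1 = np.column_stack([rng.uniform(-2, 2, 20),
                        rng.uniform(-2, 2, 20),
                        rng.uniform(8, 20, 20)])
p1 = project(pts1, f)
p2 = project(true_rot.apply(pts1) + t_true, f)

x0 = np.concatenate([[0.0, 0.04, 0.0],          # rotation guess (true: 0, 0.05, 0)
                     [np.pi / 2 + 0.02, 0.01],  # direction guess (true: pi/2, 0)
                     pts1[:, 2] * 1.1])         # depth guesses, 10% off
fit = least_squares(residuals, x0, args=(p1, p2, f), method="lm")
```

With real matches, outliers would corrupt this least-squares objective, which is why a robust matching step precedes the refinement.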
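The descent-image height search described in the Introduction (resample one image under each candidate terrain height, keep the best match at each location) can be illustrated in one dimension, with each height hypothesis reduced to a shift. This is a toy analogue under that simplifying assumption; a real implementation would resample under the full perspective warp each height induces:

```python
import numpy as np

def height_sweep_1d(a, b, shifts, win=3):
    """Toy 1-D analogue of the per-location height search: each candidate
    'height' maps to a resampling of image b (here just a shift); for every
    location, keep the candidate whose local window best matches image a."""
    best_err = np.full(len(a), np.inf)
    best = np.zeros(len(a), dtype=int)
    kernel = np.ones(win) / win
    for s in shifts:
        err = (a - np.roll(b, s)) ** 2
        err = np.convolve(err, kernel, mode="same")  # aggregate over a window
        improved = err < best_err
        best[improved] = s
        best_err[improved] = err[improved]
    return best

a = np.sin(np.linspace(0, 6 * np.pi, 200))
b = np.roll(a, -4)                       # "second image": a displaced by 4 samples
est = height_sweep_1d(a, b, shifts=range(-8, 9))   # recovers the shift of 4
```

The per-location minimum is what turns a stack of resampled images into a height (or, here, displacement) map.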
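A minimal sketch of the local-entropy representation mentioned for multi-modal matching: each pixel is replaced by the entropy of the gray-level distribution in its neighborhood, which is less sensitive than raw intensities to the differing responses of the two sensors. The window size and bin count here are arbitrary choices, and the brute-force loops are for clarity only:

```python
import numpy as np

def local_entropy(img, win=7, bins=16):
    """Replace each pixel by the entropy of the gray-level histogram in the
    surrounding win x win window. Assumes img values lie in [0, 1]."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    q = np.clip((padded * bins).astype(int), 0, bins - 1)  # quantize to bins
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            counts = np.bincount(q[i:i + win, j:j + win].ravel(), minlength=bins)
            p = counts[counts > 0] / counts.sum()
            out[i, j] = -np.sum(p * np.log2(p))   # Shannon entropy, bits
    return out
```

Matching would then proceed on the entropy images rather than the originals, e.g. with normalized correlation, since a flat region stays low-entropy and a textured region stays high-entropy regardless of each sensor's intensity response.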
