Vision-Aided Absolute Trajectory Estimation Using an Unsupervised Deep Network with Online Error Correction

E. Jared Shamwell1, Sarah Leung2, William D. Nothwang3

arXiv:1803.05850v1 [cs.CV] 8 Mar 2018

*This work was supported by the US Army Research Laboratory.
1E. Jared Shamwell, PhD is a research scientist with GTS stationed at the US Army Research Laboratory, Adelphi, MD 20783. [email protected]; [email protected]
2Sarah Leung is a research scientist with GTS stationed at the US Army Research Laboratory, Adelphi, MD 20783. [email protected]
3William D. Nothwang, PhD is the Branch Chief (a) of the Micro and Nano Devices and Materials Branch in the Sensors and Electron Devices Directorate at the US Army Research Laboratory, Adelphi, MD 20783. [email protected]

Abstract— We present an unsupervised deep neural network approach to the fusion of RGB-D imagery with inertial measurements for absolute trajectory estimation. Our network, dubbed the Visual-Inertial-Odometry Learner (VIOLearner), learns to perform visual-inertial odometry (VIO) without inertial measurement unit (IMU) intrinsic parameters (corresponding to gyroscope and accelerometer bias or white noise) or the extrinsic calibration between an IMU and camera. The network learns to integrate IMU measurements and generate hypothesis trajectories, which are then corrected online according to the Jacobians of scaled image projection errors with respect to a spatial grid of pixel coordinates. We evaluate our network against state-of-the-art (SOA) visual-inertial odometry, visual odometry, and visual simultaneous localization and mapping (VSLAM) approaches on the KITTI Odometry dataset [1] and demonstrate competitive odometry performance.

[Figure 1 appears here.] Fig. 1: Generalized overview of our VIOLearner network (see Fig. 2 for a more detailed/accurate representation of the architecture). Images $I_j$, $I_{j+1}$ and a buffer of IMU measurements $[a_0\ \omega_0;\ a_1\ \omega_1;\ \dots;\ a_p\ \omega_p]$ feed a hierarchy of levels in which a CNN processes the Jacobian of the reconstruction error and outputs a correction $\delta\hat{\theta}$ to the running pose estimate $\hat{\theta}$. Note the hierarchical architecture where $\theta$, the true change in pose between images, is estimated at multiple scales and then corrected via convolutional processing of the Jacobian of the Euclidean projection error at that scale with respect to a grid of source coordinates. The final Level $n$ computes $m$ hypothesis reconstructions (and associated estimates $\hat{\theta}_{n,m}$), and the $\hat{\theta}_{n,m}$ with the lowest paired Euclidean projection error between its reconstruction and the true target image $I_{j+1}$ is output as the final network estimate $\hat{\theta}_n$ (a rotation $R$ and translation $t$).

I. INTRODUCTION

Originally coined to characterize honey bee flight [2], the term "visual odometry" describes the integration of apparent image motion for dead-reckoning-based navigation (literally, as an odometer for vision). While work in what came to be known as visual odometry (VO) began in the 1980s for the Mars Rover project, the term was not popularized in the engineering context until around 2004 [3].

Approaches in VIO combine visual estimates of motion with those measured by multi-axis accelerometers and gyroscopes in inertial measurement units (IMUs). As IMUs measure only linear accelerations and angular velocities, inertial-only localization is prone to rapid, unbounded drift because accelerations must be integrated twice to obtain pose: even a small constant accelerometer bias produces position error that grows quadratically with time. Combining inertial estimates of pose change with visual estimates allows this inertial drift to be 'squashed'.
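As a concrete illustration of this failure mode, the following minimal NumPy sketch (an assumed example for exposition, not code from the paper) double-integrates a constant accelerometer bias on a stationary platform and shows the resulting quadratic position drift:

```python
# Minimal sketch of IMU-only dead-reckoning drift (hypothetical values).
import numpy as np

dt, steps = 0.01, 1000            # a 100 Hz accelerometer for 10 s
bias = 0.05                       # assumed constant bias, m/s^2
true_accel = np.zeros(steps)      # the platform is actually stationary

measured = true_accel + bias          # biased measurements
velocity = np.cumsum(measured) * dt   # first integration: linear error growth
position = np.cumsum(velocity) * dt   # second integration: quadratic growth

# Closed form: 0.5 * bias * t^2 = 0.5 * 0.05 * 10^2 = 2.5 m of drift.
print(f"position drift after {steps * dt:.0f} s: {position[-1]:.2f} m")
```

Ten seconds of a 0.05 m/s^2 bias already yields roughly 2.5 m of position error, which is why inertial estimates must be continually corrected by an external modality such as vision.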
While VO, VIO, and visual simultaneous localization and mapping (VSLAM) can be performed with a monocular camera system, the scale of the resulting trajectory estimates cannot be directly estimated because depth is unobservable from monocular cameras (although for VIO, integrating raw IMU measurements can provide noisy, short-term estimates of scale). Generally, depth can be estimated from RGB-D cameras or from stereo camera systems. We have chosen to include depth in our input domain as a means of achieving absolute scale recovery: while absolute depth can be generated using an onboard sensor, the same cannot be said for the change in pose between image pairs.

GPS, proposed as a training pose-difference signal in [4], has limited accuracy on the order of meters, which inherently limits the accuracy of a GPS-derived inter-image training signal for bootstrapping purposes. While Real Time Kinematic (RTK) GPS can reduce this error to the order of centimeters, RTK GPS solutions require specialized localized base stations. Scene depth, on the other hand, can be accurately measured directly onboard a vehicle by an RGB-D sensor (e.g. a Microsoft Kinect, an Asus Xtion, or the newer indoor/outdoor Intel RealSense cameras), stereo cameras, or LIDAR (e.g. a Velodyne VLP-16 or PUCK Lite).

The main contribution of this paper is the unsupervised learning of trajectory estimates with absolute scale recovery from RGB-D + inertial measurements with
• Built-in online error correction modules (sketched after this list);
• Unknown IMU-camera extrinsics; and
• Loosely temporally synchronized camera and IMU.
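To give a flavor of what the online error correction consumes, the toy sketch below reconstructs a target image by bilinearly sampling a source image and forms the Jacobian of the squared projection error with respect to the spatial grid of sampling coordinates. Finite differences stand in for the analytic gradients a deep-learning framework would supply, and all names and values are hypothetical; this illustrates the idea, not the paper's implementation:

```python
# Toy illustration: Jacobian of image projection error w.r.t. a pixel grid.
import numpy as np

def bilinear_sample(img, xs, ys):
    """Sample img (H x W) at float coordinates (xs, ys) bilinearly."""
    h, w = img.shape
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    dx, dy = xs - x0, ys - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0]
            + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0]
            + dx * dy * img[y0 + 1, x0 + 1])

def error_jacobian_wrt_grid(src, tgt, grid_x, grid_y, eps=0.5):
    """Finite-difference d(error)/d(x, y) at every sampling coordinate."""
    def err(gx, gy):
        return (tgt - bilinear_sample(src, gx, gy)) ** 2
    e0 = err(grid_x, grid_y)
    de_dx = (err(grid_x + eps, grid_y) - e0) / eps
    de_dy = (err(grid_x, grid_y + eps) - e0) / eps
    return np.stack([de_dx, de_dy], axis=-1)   # H x W x 2 correction signal

# Target = source shifted two pixels; the Jacobian field then encodes the
# residual misalignment that a correction module could use to refine a pose.
src = np.random.default_rng(0).random((32, 32))
ys, xs = np.mgrid[0:32, 0:32].astype(float)
tgt = bilinear_sample(src, np.clip(xs - 2.0, 0.0, 31.0), ys)
J = error_jacobian_wrt_grid(src, tgt, xs, ys)
print(J.shape)   # (32, 32, 2): one 2-vector of error gradients per pixel
```

In VIOLearner, a dense field like J is processed by a CNN to produce a pose correction $\delta\hat{\theta}$ at each level of the hierarchy; here the field is merely constructed to show its shape and meaning.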
II. RELATED WORK

A. Traditional Methods

In VO and VSLAM, only data from camera sensors are used and tracked across frames to determine the change in camera pose. Simultaneous localization and mapping (SLAM) approaches typically consist of a front end, in which features are detected in the image, and a back end, in which features are tracked across frames and matched to keyframes to estimate camera pose, with some approaches performing loop closure as well. ORB-SLAM2 [5] is a visual SLAM system for monocular, stereo, and RGB-D cameras. It uses bundle adjustment and a sparse map for accurate, real-time performance on CPUs, and performs loop closure to correct for the accumulated error in its pose estimation. ORB-SLAM2 has shown state-of-the-art (SOA) performance on a variety of visual odometry (VO) benchmarks.

In VIO and visual-inertial SLAM, the fusion of imagery and IMU measurements is typically accomplished by filter-based or non-linear optimization approaches. ROVIO [6] is a VIO algorithm for monocular cameras that uses a robust and efficient robocentric approach in which 3D landmark positions are estimated relative to the camera pose. It fuses the sensor data with an Extended Kalman Filter (EKF), utilizing the intensity errors in the update step. However, because ROVIO is a monocular approach, accurate scale is not recovered. OKVIS [7] is a keyframe-based visual-inertial SLAM approach for monocular and stereo cameras. OKVIS relies on keyframes, which consist of an image and estimated camera pose, a batch non-linear optimization over saved keyframes, and a local map of landmarks to estimate camera egomotion. However, it does not include loop closure, unlike the SLAM algorithms with SOA performance.

There are also several approaches which enhance VIO with depth sensing or laser scanning. Two methods of depth-enhanced VIO are built upon the Multi-State Constraint Kalman Filter (MSCKF) [8], [9], another EKF-based algorithm for vision-aided inertial navigation. One is the MSCKF-3D [10] algorithm, which used a monocular camera with a depth sensor, or RGB-D camera system, and explicitly accounted for depth uncertainty. Pang et al. [11] also demonstrated a depth-enhanced VIO approach based on the MSCKF, with 3D landmark positions augmented by sparse depth information kept in a registered depth map. Both approaches showed improved accuracy over VIO-only approaches. Finally, Zhang and Singh [12] proposed a method for leveraging data from a 3D laser. The approach utilizes a multi-layer pipeline to solve for coarse-to-fine motion by using VIO as a subsystem and matching scans to a local map. It demonstrated high position accuracy and was robust to individual sensor failures.

B. Learning-Based Methods

Recently, there have been several successful unsupervised approaches to depth estimation that are trained using a reconstructive loss from image warping [13], [14], similar to our own network. Garg et al. [13] and Godard et al. [14] used stereo image pairs with known baselines and a reconstructive loss for training. Thus, while technically unsupervised, the known baseline effectively provides a known transform between the two images. Our network approaches the same problem from the opposite direction: we assume known depth and estimate an unknown pose difference.

Pillai and Leonard [4] demonstrated visual egomotion learning by mapping optical flow vectors to egomotion estimates via a mixture density network (MDN). Their approach not only required optical flow to already be externally generated (which can be very computationally expensive), but was also trained in a supervised manner and thus required ground-truth pose differences for each exemplar in the training set.

SFMLearner [15] demonstrated the unsupervised learning of unscaled egomotion and depth from RGB imagery. It takes a consecutive sequence of images as input and outputs the change in pose between the middle image of the sequence and every other image in the sequence, as well as the estimated depth of the middle image. However, the approach was unable to recover the scale of the depth estimates or, most crucially, the scale of the changes in pose. Thus, its trajectory estimates must be scaled by parameters estimated from the ground-truth trajectories; in the real world, this information is of course not available. SFMLearner also requires a sequence of images to compute a trajectory: its best results were obtained on input sequences of five images, whereas our network requires only a source-target image pairing.

UnDeepVO [16] is another unsupervised approach to depth and egomotion estimation. It differs from [15] in that it is able to generate properly scaled trajectory estimates. However, unlike [15] and similar to [13], [14], it uses stereo image pairs for training, where the baseline between images is known; thus, UnDeepVO can only be trained on datasets where stereo image pairs are available. Additionally, the network architecture of UnDeepVO cannot be extended to include motion estimates derived from inertial measurements.
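For concreteness, the scale alignment that such monocular pipelines require at evaluation time can be as simple as a one-parameter least-squares fit of the predicted trajectory to ground truth; the snippet below shows one common convention (an illustrative sketch, not the exact procedure of [15]):

```python
# Hypothetical example of post-hoc trajectory scale alignment.
import numpy as np

def align_scale(pred_xyz, gt_xyz):
    """Scale factor s minimizing ||s * pred - gt||^2, applied to pred."""
    s = np.sum(gt_xyz * pred_xyz) / np.sum(pred_xyz ** 2)
    return s * pred_xyz

# Toy data: a 'prediction' that is the ground truth shrunk by an unknown 0.3.
gt = np.cumsum(np.random.default_rng(1).normal(size=(100, 3)), axis=0)
pred = 0.3 * gt
print(np.allclose(align_scale(pred, gt), gt))   # True, but only because the
# ground truth was available -- precisely what a deployed system lacks.
```

Because VIOLearner recovers absolute scale from depth and inertial data, no such ground-truth-dependent alignment step is needed at test time.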
