Learned Factor Graph-Based Models for Localizing Ground Penetrating Radar
Alexander Baikovitz1, Paloma Sodhi1, Michael Dille2, Michael Kaess1

Abstract— We address the problem of robot localization using ground penetrating radar (GPR) sensors. Current approaches for localization with GPR sensors require a priori maps of the system's environment as well as access to approximate global positioning (GPS) during operation. In this paper, we propose a novel, real-time GPR-based localization system for unknown and GPS-denied environments. We model the localization problem as an inference over a factor graph. Our approach combines 1D single-channel GPR measurements to form 2D image submaps. To use these GPR images in the graph, we need sensor models that can map noisy, high-dimensional image measurements into the state space. These are challenging to obtain a priori since image generation has a complex dependency on subsurface composition and radar physics, which itself varies with sensors and variations in subsurface electromagnetic properties. Our key idea is to instead learn relative sensor models directly from GPR data that map non-sequential GPR image pairs to relative robot motion. These models are incorporated as factors within the factor graph, with relative motion predictions correcting for accumulated drift in the position estimates. We demonstrate our approach over datasets collected across multiple locations using a custom designed experimental rig.

[Fig. 1 overview: a learned sensor model produces a correlation added to a factor graph over system poses x_{t-k-1}, x_{t-k}, ..., x_{t-1}, x_t, yielding an estimate of the system trajectory; GPR submaps at times t-k and t are related by a learned transform T_{t-k,t}.]

Fig. 1: Estimating poses for a ground vehicle using subsurface measurements from Ground Penetrating Radar (GPR) as inference over a factor graph. Since GPR measurements are challenging to correlate, a relative transformation is learned between submaps to correct the system's pose.
We show reliable, real-time localization using only GPR and odometry measurements for varying trajectories in three distinct GPS-denied environments.

arXiv:2103.15317v1 [cs.RO] 29 Mar 2021

I. INTRODUCTION

We focus on the problem of localization using ground penetrating radar (GPR) in unknown and GPS-denied environments. Hardware failure, repetitive or sparse features, and poor visibility and illumination can make localization in warehouses, mines, caves, and other enclosed, unstructured environments challenging. Consider an operational subsurface mine where continuous drilling and blasting changes the line-of-sight appearance of the scene and creates unexplored environments, which could lead to poor localization using visual and spatial information. While the visual environment may change, subsurface features are typically invariant and can be used to recognize the system's location.

Currently, GPR is widely used in utility locating, concrete inspection, archaeology, unexploded ordnance and landmine identification, among a growing list of applications to determine the depth and position of subsurface assets [1, 2]. GPR information is often difficult to interpret because of noise, variable subsurface electromagnetic properties, and sensor variability over time [3]. A single GPR measurement reveals only limited information about a system's environment, requiring a sequence of measurements to discern the local structure of the subsurface. In order to use GPR for localization, we need a representation for GPR data that (1) captures prominent local features and (2) is invariant over time.

In GPS-denied environments, operating using only proprioceptive information (e.g. IMU, wheel encoders) will accumulate drift. Prior work [4, 5] on GPR localization has addressed this challenge by using an a priori global map of arrayed GPR images. During operation, the system compares the current GPR image against this global map database and corrects for accumulated drift within a filtering framework. However, this approach requires approximate global positioning and a prior map of the operating environment, which would not generalize to previously unknown environments.

We propose an approach that allows for correction of this accumulated drift without any prior map information. In the absence of a prior map, we must reason over multiple GPR measurements together to be able to infer the latent robot location. We formulate this inference problem using a factor graph, which is now common to many modern localization and SLAM objectives [6–10]. GPR, inertial, and wheel encoder measurements are incorporated into the graph as factors to estimate the system's latent state, consisting of position, orientation, velocity, and IMU biases. To incorporate measurements into the graph, a sensor model is needed to map measurements to states. For GPR sensors, a priori models are typically challenging to obtain since image generation has a complex dependency on subsurface composition and radar physics. Instead, we learn relative sensor models that map non-sequential GPR image pairs to relative robot motion. The relative motion information in turn enables us to correct for drift accumulated when using just proprioceptive sensor information.

This work was supported by a NASA Space Technology Graduate Research Opportunity. 1 The Robotics Institute, Carnegie Mellon University; 2 Intelligent Robotics Group, NASA Ames Research Center. Correspondence to [email protected]
Our main contributions are:
1) A formulation of the GPR localization problem as inference over a factor graph without a prior map.
2) A learnable GPR sensor model based on submaps.
3) Experimental evaluation on a test platform with a single-channel GPR system operating in three different GPS-denied environments.

II. RELATED WORK

GPR in robotics: Most use of GPR in the robotics domain has been for passive inspection, which includes excavation of subsurface objects [11], planetary exploration [12–14], mine detection [15, 16], bridge deck inspection [17], and crevasse identification [18].

Localizing GPR (LGPR) was introduced by Cornick et al. to position a custom GPR array in a prior map [4]. In this method, the current GPR measurement is registered to a grid of neighboring prior-map measurements using a particle swarm optimization, which attempts to find the 5-DOF position of the measurement that maximizes the correlation with the grid data. Ort et al. extended this work using the same LGPR device for fully autonomous navigation without visual features in variable weather conditions [5]. This approach involves two Extended Kalman Filters: one filter estimates the system velocities from wheel encoder and inertial data, and the other fuses these velocities with the LGPR-GPS correction. In the evaluation across different weather conditions, it is apparent that GPR measurements acquired in rain and snow were less correlated to the a priori map than those acquired in clear weather using the heuristic correlation registration method proposed in [4].

Factor graphs for SLAM: Modern visual and spatial localization systems often use smoothing methods, which have been proven to be more accurate and efficient than classical filtering-based methods [7]. The smoothing objective is typically framed as MAP inference over a factor graph, where the variable nodes represent latent states and factor nodes represent measurement likelihoods. To incorporate sensor measurements into the graph, a sensor model is needed to map high-dimensional measurements to a low-dimensional state. Typically these sensor models are analytic functions, such as camera geometry [19, 20] and scan matching [21].

Learned sensor models: Learning-based methods provide an alternate option to model complex sensor measurements. Prior work in visual SLAM has produced dense depth reconstructions from learned feature representations of monocular camera images [8, 10]. In computer vision, spatial correlation networks have been used to learn optical flow and localize RGB cameras in depth maps [22–24].

Our approach enables robust GPR-based positioning by leveraging the benefits of structured prediction in factor graph inference with learning-based sensor models. We use a smoothing-based approach, which is more accurate and efficient than the filtering-based approaches used by previous localizing GPR systems, and can be solved in real time using state-of-the-art algorithms [25, 26]. Learning-based sensor models for GPR can outperform engineered correlation approaches by exploiting unique features in a sequence of GPR measurements, enabling improved performance even in sparsely featured environments.

III. PROBLEM FORMULATION

We formulate our GPR localization problem as inference over a factor graph. A factor graph is a bipartite graph with two types of nodes: variables x \in \mathcal{X} and factors \phi(\cdot) : \mathcal{X} \rightarrow \mathbb{R}. Variable nodes are the latent states to be estimated, and factor nodes encode constraints on these variables, such as measurement likelihood functions.

Maximum a posteriori (MAP) inference over a factor graph involves maximizing the product of all factor graph potentials, i.e.,

\hat{x} = \operatorname*{argmax}_{x} \prod_{i=1}^{m} \phi_i(x)   (1)

Under Gaussian noise model assumptions, MAP inference is equivalent to solving a nonlinear least-squares problem [6]. That is, for Gaussian factors \phi_i(x) corrupted by zero-mean, normally distributed noise,

\phi_i(x) \propto \exp\left( -\frac{1}{2} \| f_i(x) - z_i \|^2_{\Sigma_i} \right) \;\Rightarrow\; \hat{x} = \operatorname*{argmin}_{x} \sum_{i=1}^{m} \| f_i(x) - z_i \|^2_{\Sigma_i}   (2)

where f_i(x) is the measurement likelihood function predicting the expected measurement given the current state, z_i is the actual measurement, and \| \cdot \|_{\Sigma_i} is the Mahalanobis distance with measurement covariance \Sigma_i.

For the GPR localization problem, the variables in the graph at time step t are the 6-DOF robot pose s_t \in SE(3), velocity v_t, and IMU bias b_t, i.e. x_t = [s_t \; v_t \; b_t]^T. Factors in the graph incorporate different likelihoods for GPR, IMU, and wheel encoder measurements. At every time step t, new variables and factors are added to the graph. Writing out Eq. 2 for the GPR localization objective,

\hat{x}_{1:T} = \operatorname*{argmin}_{x_{1:T}} \sum_{t=1}^{T} \Big\{ \| f_{gpr}(x_{t-k}, x_t) - z^{gpr}_{t-k,t} \|^2_{\Sigma_{gpr}} + \| f_{wh}(x_{t-1}, x_t) - z^{wh}_{t-1,t} \|^2_{\Sigma_{wh}} + \| f_{imu}(x_{t-1}, x_t) - z^{imu}_{t-1,t} \|^2_{\Sigma_{imu}} \Big\}   (3)
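The equivalence between Eq. 1 and Eq. 2 can be checked numerically for the simplest case: a scalar state x observed directly by two Gaussian factors. This sketch (illustrative values only, not from the paper) compares the grid argmax of the product of potentials against the closed-form precision-weighted mean that minimizes the sum of squared whitened residuals.

```python
import numpy as np

# Numerical check of the Eq. (1) -> Eq. (2) equivalence for a scalar
# state x with two Gaussian factors f_i(x) = x: maximizing the product
# of potentials gives the same estimate as minimizing the sum of
# squared Mahalanobis residuals (here, the precision-weighted mean).

z = np.array([1.0, 2.0])          # two direct measurements of x
sigma = np.array([0.5, 1.0])      # their standard deviations

xs = np.linspace(-5, 5, 200001)   # dense grid over the state
potentials = np.exp(-0.5 * ((xs[:, None] - z) / sigma) ** 2)
x_map = xs[np.argmax(potentials.prod(axis=1))]          # Eq. (1)

w = 1.0 / sigma**2
x_ls = (w * z).sum() / w.sum()                          # Eq. (2) closed form

print(x_map, x_ls)   # both ~1.2
```

Both estimates agree, which is why the MAP objective can be handed to a nonlinear least-squares solver.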
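The drift-correcting role of the non-sequential GPR factor in Eq. 3 can be illustrated with a toy linear version of the objective: a chain of scalar along-track poses, consistently biased wheel-odometry factors, and one accurate relative factor between a non-sequential pose pair. This is a deliberate simplification of the paper's 6-DOF formulation; all weights, noise values, and the scalar state are hypothetical.

```python
import numpy as np

# Toy version of Eq. (2)/(3): MAP inference over a chain of scalar
# poses x_1..x_5 with wheel-odometry factors between consecutive poses
# and one relative "GPR" factor between the pair (x_1, x_5), i.e. k=4.

truth = np.array([0.0, 1.0, 2.0, 3.0, 4.0])

# Odometry consistently over-estimates motion (simulated drift).
z_wh = np.array([1.2, 1.2, 1.2, 1.2])      # measured steps; true step is 1.0
sigma_wh = 0.2                              # odometry std dev
# A learned relative model predicts the motion x_5 - x_1 accurately.
z_gpr = 4.02
sigma_gpr = 0.05

# Stack whitened residuals ||f_i(x) - z_i||_{Sigma_i} into J x = b.
# Each factor row encodes f_i(x) = x_j - x_i; a prior pins x_1 = 0.
rows, rhs, w = [], [], []
rows.append([1, 0, 0, 0, 0]); rhs.append(0.0); w.append(1.0 / 1e-3)  # prior
for t in range(4):                                   # wheel factors
    r = [0] * 5; r[t] = -1; r[t + 1] = 1
    rows.append(r); rhs.append(z_wh[t]); w.append(1.0 / sigma_wh)
rows.append([-1, 0, 0, 0, 1]); rhs.append(z_gpr); w.append(1.0 / sigma_gpr)

J = np.array(rows, float) * np.array(w)[:, None]     # whitened Jacobian
b = np.array(rhs) * np.array(w)                      # whitened measurements
x_hat, *_ = np.linalg.lstsq(J, b, rcond=None)        # argmin of Eq. (2)

dead_reckoning = np.concatenate([[0.0], np.cumsum(z_wh)])
print(np.abs(dead_reckoning - truth).max())   # drift of ~0.8 at x_5
print(np.abs(x_hat - truth).max())            # reduced by the GPR factor
```

Because the relative factor is weighted by its (tighter) covariance, the solver redistributes the accumulated odometry error along the whole chain rather than only at the endpoint, which mirrors how the learned GPR factor corrects drift in the smoothing framework.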
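The interface of the relative sensor model f_gpr can also be sketched: it consumes two 2D submaps (stacked 1D traces) and outputs relative motion. The paper learns this mapping with a network; the normalized cross-correlation search below is only an engineered stand-in on synthetic data, chosen to make the inputs and output of such a factor concrete. The function name, submap sizes, and scene are all hypothetical.

```python
import numpy as np

# Stand-in for a relative GPR sensor model: given two 2-D submaps
# (depth x along-track traces), estimate the along-track shift between
# them by maximizing normalized cross-correlation over integer shifts.

rng = np.random.default_rng(0)

def relative_shift(submap_a, submap_b, max_shift=10):
    """Return the along-track column shift of submap_b w.r.t. submap_a."""
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        # Overlapping column ranges of the two submaps under shift s.
        a = submap_a[:, max(0, s): submap_a.shape[1] + min(0, s)]
        b = submap_b[:, max(0, -s): submap_b.shape[1] + min(0, -s)]
        score = np.mean((a - a.mean()) * (b - b.mean())) / (a.std() * b.std())
        if score > best_score:
            best, best_score = s, score
    return best

# Fake subsurface "scene": 1-D traces stacked into one long image.
scene = rng.standard_normal((64, 200))
submap_t_minus_k = scene[:, 50:130]   # 80 traces starting at column 50
submap_t         = scene[:, 56:136]   # same width, 6 traces later

print(relative_shift(submap_t_minus_k, submap_t))  # prints 6
```

In the paper's setting, a learned model replaces this correlation search precisely because raw GPR correlation degrades with noise and changing subsurface conditions; the output, a relative motion estimate z^gpr between non-sequential poses, enters the factor graph the same way.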