JrnlID 10846_ArtID 9025_Proof# 1 - 14/10/2005

Journal of Intelligent and Robotic Systems (2005) © Springer 2005
DOI: 10.1007/s10846-005-9025-1

Temporal Range Registration for Unmanned Ground and Aerial Vehicles†

R. MADHAVAN*, T. HONG and E. MESSINA
Intelligent Systems Division, National Institute of Standards and Technology, Gaithersburg, MD 20899-8230, USA; e-mail: [email protected], {tsai.hong, elena.messina}@nist.gov

(Received: 24 March 2004; in final form: 2 September 2005)

Abstract. An iterative temporal registration algorithm is presented in this article for registering 3D range images obtained from unmanned ground and aerial vehicles traversing unstructured environments. We are primarily motivated by the development of 3D registration algorithms to overcome both the unavailability and unreliability of Global Positioning System (GPS) within required accuracy bounds for Unmanned Ground Vehicle (UGV) navigation. After suitable modifications to the well-known Iterative Closest Point (ICP) algorithm, the modified algorithm is shown to be robust to outliers and false matches during the registration of successive range images obtained from a scanning LADAR rangefinder on the UGV. Towards registering LADAR images from the UGV with those from an Unmanned Aerial Vehicle (UAV) that flies over the terrain being traversed, we then propose a hybrid registration approach. In this approach to air-to-ground registration to estimate and update the position of the UGV, we register range data from two LADARs by combining a feature-based method with the aforementioned modified ICP algorithm. Registration of range data guarantees an estimate of the vehicle's position even when only one of the vehicles has GPS information. Temporal range registration enables position information to be continually maintained even when both vehicles can no longer maintain GPS contact.
We present results of the registration algorithm in rugged terrain and urban environments using real field data acquired from two different LADARs on the UGV.

Key words: iterative registration, position estimation, unmanned ground and aerial vehicles, LADAR.

† Commercial equipment and materials are identified in this article in order to adequately specify certain procedures. Such identification does not imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the materials or equipment identified are necessarily the best available for the purpose.
* Corresponding author. Tel. no: (301) 975-2865; Fax: (301) 990-9688.

1. Introduction

The National Institute of Standards and Technology (NIST) is developing architectures and algorithms for unmanned vehicles with funding from the Army Research Laboratory (ARL) and the Defense Advanced Research Projects Agency (DARPA). The NIST Highly Mobile Multipurpose Wheeled Vehicle (HMMWV) and an eXperimental Unmanned Vehicle (XUV) developed under the Army's Demo III program [31] serve as test beds for this research. These vehicles are commanded by the hierarchical, distributed, hybrid 4D/RCS (Real-time Control System) architecture [1, 2].

The 4D/RCS architecture developed for Demo III specifies the simultaneous representation of information about entities and events in a hierarchical distributed knowledge database, wherein information is presented in a form that is ideally suited for path planning and task decomposition. Maps are populated both with knowledge from a priori sources, such as digital terrain databases, and with knowledge from sensors. The range and resolution of maps at different levels are specified to correspond to the range and resolution of planning algorithms.
This limits the amount of computational power required to maintain maps and symbolic data structures with a latency that is acceptable for planning and reactive processes at each level.

Position estimation for the above Unmanned Ground Vehicles (UGVs) relies on fusing Global Positioning System (GPS) estimates with other on-board navigation sensors. The required accuracy of the GPS estimates cannot be guaranteed for the entirety of a particular mission, as the direct line of sight to the satellites cannot be maintained at all times. GPS can be lost due to multipathing effects and terrain conditions, especially for on-road driving tasks. Sufficiently accurate vehicle positions are necessary to derive correct locations of sensed data, towards accurate representations of the world, and for correctly executing planned trajectories and missions. In order to compensate for such unavailability and unreliability of GPS, a secondary form of position estimation becomes inevitable.

The following reasons also warrant the need to develop robust 3D data registration algorithms:

- Within RCS, the use of a priori maps would enhance the scope of the world model. These maps may take a variety of forms, including survey and aerial maps, and may provide significant information about existing topology and structures. In order to take advantage of this knowledge, research is needed to register these a priori maps with the sensor-centric maps [15]. Additionally, incorporating a priori knowledge into the world model requires some form of weighting, and this weighting depends on how well the a priori data and the sensed information are registered.
- There is also a need to generate higher resolution a priori terrain maps, as the current survey maps are too coarse for off-road autonomous driving, and to maintain up-to-date representations of the world even if the maps are of higher resolution.

- Another potential application for registering LADAR data is the computation of ground truth, as such registration does not depend on time-based drift (unlike inertial navigation systems), vehicle maneuvers, or terrain of travel. Such ground truth is necessary for evaluating the performance of navigation algorithms and systems.

Active range sensing has become an integral part of any unmanned vehicle navigation system due to its ability to produce unambiguous, direct, robust, and precise images consisting of range pixels, for example, using LAser Detection And Ranging (LADAR) imagery. This is in direct contrast to passive sensing, where the inference of range largely remains computationally intensive and not robust enough for use in natural outdoor environments. Depending on the speed of the vehicle, operating environment, and data rate, range images acquired from a moving platform need to be registered to make efficient use of the information contained in them for various navigation tasks within the 4D/RCS architecture.

One of the following two approaches is commonly employed for matching range images to a priori maps [13]:

1. Feature-based Matching: In this approach, two sets of features, F^1_i and F^2_j, are extracted from the two sets to be matched, and then correspondences between features, F^1_{ik} and F^2_{jk}, k ∈ {i, j}, that are globally consistent, are found. The displacement between the two sets can then be computed to deduce the sensor pose.
2. Point Matching: This approach works directly on two sets of data points, P_1 and P_2, by minimizing a cost function of the form F(T(P_2), P_1), where T(P_2) is the second set of points subjected to a transformation T. Any sensible cost is acceptable as long as its minimum corresponds to a best estimate of T in some sense. Usually, the minimization leads to an iterative gradient-like algorithm.

Lines and edges are two of the most widely used feature primitives. Matching between sensor observations and modeled features in a map has been considered as a search in an Interpretation Tree [10]. Drumheller extracts lines from sonar data and matches them against a room model to enable robot localization [8]. The complexity of the search problem is minimized by applying local constraints (distances, angles, and normal directions) to the set of possible pairings between observed and modeled features. The Hough transform is a shape detection technique which can be used to isolate features of a particular shape within an image or Time-of-Flight (TOF) sensor data. Schiele and Crowley extract line segments using the Hough transform from an occupancy grid and update the position of the robot using a Kalman filter [28]. Other researchers [9, 19] have combined odometric data and laser measurements using an extended Kalman filter, where the range-weighted Hough transform is employed to extract lines from laser data and the resulting peaks are used as feature coordinates. Even though the Hough transform provides good results in cluttered indoor environments, it is restricted to operating in rectangular-shaped scenarios where no more than two predominant walls are present.

Kanade et al. [18] compared elevation maps obtained from 3D range images to determine vehicle location.
A similar point matching approach has also been adopted by Shaffer [29]. Cox [6] proposes a point matching method for an indoor robot named Blanche, where scan points from an optical rangefinder are matched to an a priori map composed of straight line segments. Blanche's position estimation system utilizes a robust matching algorithm which estimates the precision of the corresponding match/correction; this is then optimally combined with the odometric position to provide an improved estimate of the robot's position.
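Several of the systems surveyed above extract line features with the Hough transform. As a rough illustration of that step, the following sketch (our own; the function name and grid parameters are ours, not those of the cited systems) votes 2D scan points into a (theta, rho) accumulator and reads off the dominant line:

```python
import numpy as np

def hough_line(points, n_theta=180, n_rho=200, rho_max=20.0):
    """Vote 2D points into a (theta, rho) accumulator and return the
    dominant line, parameterised as x*cos(theta) + y*sin(theta) = rho."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    accumulator = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        # Each point votes for every line that passes through it.
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        rho_idx = np.round((rhos + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        valid = (rho_idx >= 0) & (rho_idx < n_rho)
        accumulator[np.arange(n_theta)[valid], rho_idx[valid]] += 1
    # The strongest accumulator peak is the most supported line.
    t_idx, r_idx = np.unravel_index(np.argmax(accumulator), accumulator.shape)
    rho = r_idx / (n_rho - 1) * (2 * rho_max) - rho_max
    return thetas[t_idx], rho

# Noisy points on the vertical line x = 5 (e.g., a wall in a range scan).
rng = np.random.default_rng(0)
ys = np.linspace(-5.0, 5.0, 50)
pts = np.column_stack([5.0 + 0.01 * rng.standard_normal(50), ys])
theta, rho = hough_line(pts)
```

Peaks in the accumulator correspond to lines x cos(theta) + y sin(theta) = rho; here the recovered peak lies near theta = 0, rho = 5, i.e., the wall x = 5 (up to accumulator quantization).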
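Since the modified ICP algorithm of this article builds on the well-known point-matching scheme, a minimal 2D sketch of the basic ICP loop may help fix ideas. This is our own simplified illustration (noise-free data, brute-force nearest neighbours, no outlier rejection), not the modified algorithm presented later; all function names are ours:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~ dst_i,
    given one-to-one correspondences (the closed-form SVD method)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)       # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(src, dst, n_iters=50):
    """Basic point-matching ICP: alternate nearest-neighbour correspondence
    with the closed-form rigid fit, accumulating the pose."""
    cur = src.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(n_iters):
        # Brute-force nearest neighbour in dst for every current point.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        R, t = best_rigid_transform(cur, dst[d2.argmin(axis=1)])
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Synthetic noise-free scan: dst is src rotated by 2 degrees and shifted.
rng = np.random.default_rng(1)
src = rng.uniform(-10.0, 10.0, size=(100, 2))
a = np.deg2rad(2.0)
R_true = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
t_true = np.array([0.3, -0.2])
dst = src @ R_true.T + t_true
R_est, t_est = icp(src, dst)   # recovers (R_true, t_true)
```

On real LADAR data, handling of outliers and false matches (the focus of the modifications in this article) is essential; the sketch shows only the basic correspond-then-fit alternation at the heart of ICP.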