
Towards Autonomous Navigation of Miniature UAV

Roland Brockers, Jet Propulsion Laboratory, [email protected]
Martin Humenberger, Austrian Institute of Technology, [email protected]
Stephan Weiss, Jet Propulsion Laboratory, [email protected]
Larry Matthies, Jet Propulsion Laboratory, [email protected]

Abstract

Micro air vehicles such as miniature rotorcraft require high-precision and fast localization updates for their control, but cannot carry large payloads. Therefore, only small and light-weight sensors and processing units can be deployed on such platforms, favoring vision-based solutions that use light-weight cameras and run on small embedded computing platforms. In this paper, we propose a navigation framework that provides a small quadrotor UAV with accurate state estimation for high-speed control, including 6DoF pose and sensor self-calibration. Our method allows very fast deployment without prior calibration procedures, literally rendering the vehicle a throw-and-go platform. Additionally, we demonstrate hazard-avoiding autonomous landing to showcase a high-level navigation capability that relies on the low-level pose estimation results and is executed on the same embedded platform. We explain our hardware-specific implementation on a 12 g processing unit and show real-world end-to-end results.

Figure 1. Asctec Hummingbird with Odroid-U2 flight computer mounted on top.

1. Introduction

Miniature rotorcraft platforms have several advantages in exploration and reconnaissance missions: they can operate in highly cluttered environments (forests, close to the ground) or confined spaces (indoors, collapsed buildings, caves), and their small size offers stealth. But in order to operate these platforms safely, fast and accurate pose estimation that is independent of external sensors (e.g. GPS) is needed for control.

The literature has shown that a viable solution for GPS-independent pose estimation is to use visual and inertial sensors [6, 11]. However, a major algorithmic challenge is to process sensor information at a high rate to provide vehicle control and higher-level tasks with real-time position information and vehicle states. Since micro rotorcraft can only carry a few grams of payload including batteries, this has to be accomplished with a very small weight and power budget. Consequently, only light-weight sensors and processing units can be used on such platforms, favoring vision-based pose estimation solutions that use small light-weight cameras and MEMS (microelectromechanical systems) inertial sensors. As recent developments in multi-core smartphone processors are driven by the same size, weight, and power (SWaP) constraints, micro aerial vehicles (MAVs) can directly benefit from new products in this area that provide more computational resources at lower power budgets and low weight, enabling miniaturization of aerial platforms that are able to perform navigation tasks fully autonomously. Additionally, novel algorithmic implementations with minimal computational complexity, such as presented in this paper, are required.

Once pose estimation is available, higher-level autonomous navigation tasks that leverage and require this information can be executed. Examples of such tasks are autonomous landing, ingress, surveillance, and exploration. Autonomous landing is especially important not only for safety reasons, but also for mission endurance. Small rotorcraft inherently suffer from short mission endurance, since payload restrictions do not allow carrying large batteries. For surveillance or exploration tasks, endurance can be greatly improved by not requiring the platform to be airborne at all times. Instead, such tasks may even favor a steady, quiet observer at a strategic location (e.g.
high vantage points like rooftops or on top of telephone poles), still with the ability to move if required, which could also include re-charging while in sleep mode (e.g. from solar cells). Such a capability allows long-term missions, but introduces additional constraints on the algorithm side: First, to eliminate memory and processing power issues, all algorithms need to run at constant complexity with a constant maximal memory footprint, independently of mission length and area; this is particularly true for the pose estimation module. Second, time and impacts with the environment may change the sensor extrinsics. To ensure long-term operation, the pose estimator not only has to provide accurate pose information but also needs to continuously self-calibrate the system. Third, since a base station may not be within communication reach, all algorithms have to be self-contained and executed on board the vehicle.

In this paper, we introduce a novel implementation of a fully self-contained, self-calibrating navigation framework on an embedded, very small form-factor flight computer (12 g) that allows a small rotorcraft UAV (Figure 1) to be controlled autonomously in GPS-denied environments. Our framework enables quick deployment (throw-and-go) and performs autonomous landing on elevated surfaces as a high-level navigation task example, which we chose for three reasons: 1) Landing requires accurate and continuous pose estimation over large scale changes (i.e. from altitude to touch-down); thus, it implicitly demonstrates the capabilities of our underlying estimator. 2) It demonstrates how the information of our robust low-level pose estimator can be further leveraged and integrated to improve subsequent high-level tasks. 3) The computational complexity of 3D reconstruction and landing site detection needs to be tractable on such low-SWaP devices.

The remainder of the paper is organized as follows: Section 2 gives an overview of our vision-based pose estimation framework, including related work and a short introduction of its self-calibration and the underlying filter approach. Section 3 describes our autonomous landing algorithm and details our approach for 3D reconstruction and landing site detection. Section 4 introduces the on-board implementation, provides a performance evaluation, and presents experimental results, whereas Section 5 concludes the paper.

2. GPS Independent Pose Estimation

Autonomous flight in unknown environments excludes the use of motion capture systems, and using GPS is not always an option since it may be unavailable due to effects such as shadowing or multipath propagation in city-like environments. Therefore, commonly used sensors for pose estimation are stereo [8] and monocular cameras [20] as well as laser scanners [16]. Since heavy sensors cannot be used on low-SWaP platforms, monocular visual-inertial pose estimators might be the most viable choice for MAVs.

In our previous work, we demonstrated that pose estimation based on the fusion of visual information from a single camera and inertial measurements from an IMU can be used for simultaneously estimating pose information and system calibration parameters (such as IMU biases and sensor extrinsics). This self-calibrating aspect eliminates the need for pre-launch calibration procedures and renders the system power-on-and-go.

In our current framework, we propose a hierarchical architecture for our vision front-end with a downward-looking camera. Using the camera as a velocity sensor based on inertial-optical flow (IOF) allows us to estimate the full attitude and the drift-free metric distance to the overflown terrain [22], which we use for rapid deployment and emergency maneuvers. For accurate position estimation, we use a keyframe-based visual self-localization and mapping (VSLAM) strategy to render the camera into a 6DoF position sensor. The VSLAM approach is based on [7] but tailored to run on our embedded architecture with constant computational complexity by deploying a sliding-window, local mapping approach [20]. Whereas our velocity-based IOF approach drifts parallel to the terrain, our VSLAM approach is locally drift-free and better suited for long-term MAV navigation in large outdoor environments, limited only by the battery life-time and not by processing power or memory. However, it requires specific motion for initialization. Since the IOF-based approach does not need any particular initialization, it can be used for very quick deployment of the MAV, even by simply throwing it into the air. Thus, for rapid deployment, the thrown MAV will quickly stabilize itself with our IOF approach and then autonomously initialize our VSLAM approach as shown in [21], rendering it a throw-and-go GPS-independent system.

Both the pose from the VSLAM algorithm and the output of the IOF are fused with the measurements of an IMU using an Extended Kalman Filter (EKF). The EKF prediction and update steps are distributed among different processing units of the MAV because of their computational requirements, as described in [19]. The state of the filter is composed of the position p_w^i, the attitude quaternion q_w^i, and the velocity v_w^i of the IMU in the world frame, the gyroscope and accelerometer biases b_ω and b_a, and the metric scale factor λ. Additionally, the extrinsic calibration parameters describing the relative rotation q_i^s and position p_i^s between the IMU and the camera frames are also included. This yields the 24-element state vector X = {p_w^iT, v_w^iT, q_w^iT, b_ω^T, b_a^T, λ, p_i^sT, q_i^sT}. Figure 2 depicts the setup with

Figure 2. Setup illustrating the robot body with its sensors with respect to a world reference frame. The system state vector is X = {p_w^i, v_w^i, q_w^i, b_ω, b_a, λ, p_i^s, q_i^s}; p_w^i and q_w^i denote the robot's position and attitude in the world frame.

[Figure: landing site detection pipeline: the latest input image is undistorted and rectified into a frame list (s_0, p_0; s_-1, p_-1; s_-2, p_-2; ...); feature extraction & matching selects a frame pair for stereo processing if the baseline is sufficient (|T| > b); the resulting landing map is evaluated by spatial analysis (does the vehicle fit?) and surface analysis (is the surface flat enough and free of obstacles?) before a waypoint is calculated.]
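The 24 state elements break down as 3 (position) + 3 (velocity) + 4 (attitude quaternion) + 3 + 3 (gyro and accelerometer biases) + 1 (scale) + 3 + 4 (camera-IMU extrinsics). The following Python sketch illustrates this state layout and an IMU-driven prediction step; it is a simplified illustration under stated assumptions (z-up world frame, quaternion with scalar first), not the flight code: covariance propagation, the VSLAM/IOF update steps, and the distributed implementation of [19] are all omitted.

```python
import numpy as np

# Index slices into the 24-element state vector: position (3), velocity (3),
# attitude quaternion (4, scalar first), gyro bias (3), accel bias (3),
# visual scale (1), camera-IMU translation (3), camera-IMU quaternion (4).
P, V, Q, BW, BA, LAM, PIC, QIC = (
    slice(0, 3), slice(3, 6), slice(6, 10), slice(10, 13),
    slice(13, 16), slice(16, 17), slice(17, 20), slice(20, 24))

GRAVITY = np.array([0.0, 0.0, -9.81])  # world gravity, z up (assumption)

def quat_to_rot(q):
    """Rotation matrix of a unit quaternion q = [w, x, y, z]."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def quat_mult(a, b):
    """Hamilton product of quaternions a and b (scalar first)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw])

def predict(x, gyro, accel, dt):
    """Propagate the mean state with one IMU sample. Biases, scale, and
    camera-IMU extrinsics follow constant dynamics, so they stay unchanged."""
    x = x.copy()
    R = quat_to_rot(x[Q])
    acc_w = R @ (accel - x[BA]) + GRAVITY       # world-frame acceleration
    x[P] += x[V] * dt + 0.5 * acc_w * dt * dt
    x[V] += acc_w * dt
    rate = gyro - x[BW]                         # bias-corrected body rate
    angle = np.linalg.norm(rate) * dt
    if angle > 1e-12:
        axis = rate / np.linalg.norm(rate)
        dq = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
        x[Q] = quat_mult(x[Q], dq)
        x[Q] /= np.linalg.norm(x[Q])            # re-normalize the quaternion
    return x

# Hovering example: identity attitude, accelerometer measures the specific
# force +9.81 m/s^2 upward, gyro at rest; the state should not move.
x0 = np.zeros(24)
x0[Q] = [1, 0, 0, 0]
x0[QIC] = [1, 0, 0, 0]
x0[LAM] = 1.0
x1 = predict(x0, np.zeros(3), np.array([0.0, 0.0, 9.81]), 0.005)
```

In a full filter, this prediction would run at IMU rate, while VSLAM poses and IOF velocities arrive at camera rate as EKF measurement updates.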
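To make the spatial and surface analysis of the landing pipeline concrete, here is a toy Python sketch of a landing-map check over a gridded elevation map. The function name, thresholds, and the max-min height-range test are illustrative assumptions; they stand in for, and greatly simplify, the stereo-based 3D reconstruction and landing site analysis described in Section 3.

```python
import numpy as np

def landing_candidates(elev, cell_size, vehicle_radius, max_height_range):
    """Toy landing-map analysis on a gridded elevation map (meters).

    A cell is a landing candidate if the terrain inside the vehicle
    footprint stays within `max_height_range` of elevation: a crude
    combined test for "the vehicle fits" (spatial analysis) and
    "flat enough, no obstacles" (surface analysis).
    """
    r = int(np.ceil(vehicle_radius / cell_size))  # footprint radius in cells
    h, w = elev.shape
    ok = np.zeros((h, w), dtype=bool)
    for i in range(r, h - r):                      # skip cells whose footprint
        for j in range(r, w - r):                  # would leave the map
            patch = elev[i - r:i + r + 1, j - r:j + r + 1]
            ok[i, j] = patch.max() - patch.min() < max_height_range
    return ok

# Flat 2 m x 2 m area (0.1 m cells) with a 0.5 m tall box in the middle:
# cells near the box are rejected, open flat cells remain candidates.
elev = np.zeros((20, 20))
elev[8:12, 8:12] = 0.5
ok = landing_candidates(elev, cell_size=0.1, vehicle_radius=0.3,
                        max_height_range=0.1)
```

A real implementation would work on a stereo-derived elevation map and separate the roughness, slope, and clearance tests, but the structure (per-cell footprint check over a 2D map) is the same.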