
Emergency Landing of Small Unmanned Aerial Systems

Parker C. Lusk∗ and Randal W. Beard
Electrical and Computer Engineering, Brigham Young University, Provo, UT
∗Corresponding author (email: [email protected])

Abstract— An algorithm for vision-based emergency landing of a multirotor is outlined. Using a recently developed real-time visual multiple target tracker, a potential fields-based obstacle avoidance algorithm is employed to find the "safest" part of a pre-determined landing site. A sliding mode controller for image-based visual servoing is designed that allows globally asymptotic tracking of a line of sight vector with the optical axis. The presented landing scheme allows quick emergency landings to be performed in populated environments with moving ground obstacles.

Fig. 1: Notional emergency landing scenario. During a mission, the on-board computer signals an emergency event. The multirotor then commences a forced landing maneuver while using visual feedback to avoid moving ground obstacles.

I. INTRODUCTION

Emergency measures are an important aspect of autonomous systems that provide safety to people and property. In the case of small unmanned aerial systems (sUAS), this critical capability takes the form of forced landings. Common examples of events necessitating emergency landings are poor battery health, loss of ground communication, and GPS degradation. Because these emergency events require immediate landing, it is important to be able to land in potentially populated areas with people and cars. In this work we address emergency landing of multirotors using an on-board camera to avoid moving ground obstacles.

As technology and computing capabilities continue to extend the use of sUAS into various civil and commercial domains, many societal and economic benefits of sUAS will be identified [1]. However, in order to fully realize these benefits, a structure must be put in place to integrate sUAS into the National Airspace System (NAS). Many efforts are currently being made in this regard, both in terms of policy and technology [2]–[4]. Most notable is the UAS Traffic Management (UTM) effort being led by NASA and the United States Federal Aviation Administration (FAA) [5]. UTM is a traffic management ecosystem for low-altitude (below 500 ft) sUAS operations in uncontrolled airspace. Its purpose is to coordinate missions between sUAS operators and allow beyond visual line of sight (BVLOS) operations. However, in order for this structure to truly allow BVLOS missions, the autonomy of sUAS must be enhanced. We claim that a vision-based landing procedure that can avoid moving ground obstacles increases the operational capabilities of sUAS integrated in the NAS.

Previous solutions to unmanned emergency landing exist, but they are cost-prohibitive for the size, weight and power (SWaP) constraints of sUAS, such as in the work by Scherer et al. [6]. Other works rely on the presence of large enough areas with no motion to land, using image segmentation techniques at high altitudes [7], [8]; these break down as the sUAS gets closer to the ground in a forced landing event, where it is not able to find a large enough stretch of land with no motion. Still others are currently not able to run in real time. We extend the literature on forced landing by applying the problem to a low-cost multirotor, wherein processing is done on-board in real time to allow landing in environments where motion is present.

Our vision-assisted landing (VAL) system assumes that landing site determination is either running in parallel or that a database of potential landing sites exists. In this way, our solution complements and extends previous work in this area.

Using image-based visual servoing to align the line of sight (LOS) vector to the target with the optical axis of the on-board camera, this solution allows tactical avoidance at unprepared sites and solves the "last 50 feet problem" [5]. The notional scenario considered in this paper is shown in Figure 1.

The organization of this paper is as follows. Section II gives an overview of the visual tracking system. In Section III we present an image-based visual servoing controller using the LOS error. The proposed moving ground obstacle avoidance algorithm is outlined in Section IV. Results are then reported and discussed in Section V.

II. VISUAL TRACKING

As shown in Figure 1, the visual multiple target tracker used in this work consists of two major components: the visual measurement front-end and the Recursive-RANSAC (R-RANSAC) tracker. Using the visual tracker in conjunction with an altitude-dependent tuning scheme allows the algorithm to continuously track objects as the sUAS descends. We discuss these components in the following subsections.

A. Recursive-RANSAC Tracker

R-RANSAC is an online estimation algorithm capable of tracking an arbitrary number of objects in clutter [9]–[12]. Measurements are contained in the surveillance region R of the system, where R ⊂ R² in this work. Using the incoming measurements, the algorithm forms hypothesis models that fit the specified motion dynamics. At every timestep, the following four tasks are performed, as shown in Figure 2.

Model propagation: Existing models are propagated forward with nearly constant jerk motion dynamics. The new scan of measurements is classified as inliers or outliers to each predicted model position based on Euclidean distance. Measurements within the inlier region defined by a circle of radius τ_R are classified as inliers.

Model initialization: For each measurement that is an outlier to all models, a RANSAC-based initialization step is performed to find models that fit nearly constant velocity dynamics. Only the best RANSAC hypothesis model is kept.

Model update: At the end of each timestep, each model uses its associated inliers to perform a Kalman update.

Model elevation: A model is elevated to a track once it has survived τ_T iterations without having more than τ_CMD consecutive missed detections. This step is also where models with poor support are pruned and models with similar positions are merged if within τ_x and τ_y of each other.
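
To make this four-step cycle concrete, the following Python sketch walks through one R-RANSAC timestep. It is a minimal illustration, not the authors' implementation: the names are invented, measurements are position-only, and a constant-velocity Kalman filter with a single seeded hypothesis stands in for the nearly-constant-jerk propagation and full RANSAC search described above (model merging by τ_x, τ_y is omitted).

```python
# Minimal sketch of one R-RANSAC timestep (illustrative only).
import numpy as np

class Model:
    def __init__(self, x0):
        self.x = np.asarray(x0, dtype=float)   # state [px, py, vx, vy]
        self.P = np.eye(4)                     # state covariance
        self.age = 0                           # timesteps survived
        self.missed = 0                        # consecutive missed detections
        self.inliers = []                      # measurements associated this scan
        self.is_track = False

def F(dt):
    # Constant-velocity dynamics (the paper propagates with nearly constant jerk).
    A = np.eye(4)
    A[0, 2] = A[1, 3] = dt
    return A

def rransac_step(models, scan, dt, tau_R, tau_T, tau_CMD, Q, R):
    H = np.hstack([np.eye(2), np.zeros((2, 2))])   # measure position only

    # 1) Model propagation + inlier/outlier classification.
    for m in models:
        m.x = F(dt) @ m.x
        m.P = F(dt) @ m.P @ F(dt).T + Q
        m.inliers = [z for z in scan if np.linalg.norm(z - H @ m.x) < tau_R]
    outliers = [z for z in scan
                if all(np.linalg.norm(z - H @ m.x) >= tau_R for m in models)]

    # 2) Model initialization: seed one new hypothesis from the outliers
    #    (stand-in for the RANSAC search over constant-velocity hypotheses).
    if outliers:
        z = outliers[0]
        models.append(Model([z[0], z[1], 0.0, 0.0]))

    # 3) Model update: Kalman correction with each associated inlier.
    for m in models:
        if not m.inliers:
            m.missed += 1
            continue
        m.missed = 0
        for z in m.inliers:
            S = H @ m.P @ H.T + R
            K = m.P @ H.T @ np.linalg.inv(S)
            m.x = m.x + K @ (z - H @ m.x)
            m.P = (np.eye(4) - K @ H) @ m.P

    # 4) Model elevation / pruning (merging by tau_x, tau_y omitted here).
    survivors = []
    for m in models:
        m.age += 1
        if m.missed > tau_CMD:
            continue                      # prune models with poor support
        if m.age >= tau_T:
            m.is_track = True             # elevate to a track
        survivors.append(m)
    return survivors
```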

Fig. 2: The four steps of R-RANSAC demonstrated on a surveillance region in R². Timesteps are denoted by the grey vertical lines; the current timestep is rightmost. Measurements (◦) may be clutter or from targets. (a) Propagation: hypothesis models (△) are predicted forward in time (dotted), and new measurements are associated as inliers or outliers. (b) Initialization: outliers are used to generate RANSAC hypotheses (1, 2, 3). (c) Update: inlier measurements are used to correct the model prediction; the RANSAC hypothesis with the most support (hypothesis 2) is stored as a model. (d) Elevation: models that have good support and have been tracked for a while without too many missed detections are elevated to a track (∗).

R-RANSAC's strength lies in its ability to initialize new hypothesis models from noisy data and subsequently manage those models without operator intervention. This allows the use of computationally cheap computer vision algorithms with less precision. Additionally, R-RANSAC is not strictly a computer vision algorithm; it can filter measurements from diverse sensor modalities [13]. The work of fusing different measurement sources in R-RANSAC is currently being investigated. For more information about the R-RANSAC derivation we refer the reader to [9].

B. Visual Measurement Front-end

In this work, R-RANSAC receives data from a visual measurement front-end. The vision processing is done with a calibrated camera in a three-step pipeline in order to (i) find feature correspondences between images, (ii) compute a homography, and (iii) detect true object motion. The input video rate is controlled by the frame stride parameter, which dictates how many frames to skip. For example, with incoming video at 30 fps, a stride of 3 results in 10 Hz processing.
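
As a small illustration of the frame stride parameter (assumed name), the loop below forwards only every stride-th frame to the pipeline, so 30 fps input with stride = 3 yields 10 Hz processing; the file name is a placeholder.

```python
import cv2

stride = 3                               # process every third frame: 30 fps -> 10 Hz
cap = cv2.VideoCapture("video.mp4")      # placeholder file name
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % stride == 0:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # ...hand `gray` to the KLT / homography / motion-detection pipeline...
    frame_idx += 1
cap.release()
```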

Fig. 3: The three steps (a)–(c) of the visual measurement front-end with the resulting tracks (d) from R-RANSAC: (a) KLT tracking, (b) homography, (c) moving features, (d) tracking results. Images are taken from sequence run2a. Feature correspondences from (a) are used to estimate a homography. Note how the homography-compensated difference image in (b) masks out the feature motion resulting from camera motion and exposes independently moving objects.

(i) Feature management: At each timestep k, features from the last image X_{k-1} are propagated forward into the current image as X_{k-1}^+ using optical flow. Feature correspondences (X_{k-1}, X_{k-1}^+) are sent as input to the next step in the pipeline for further processing. A new set of features X_k is then found using the Shi-Tomasi corner detection method for the current image I_k. These features will be propagated on the next iteration. This step is known as Kanade-Lucas-Tomasi (KLT) tracking and is depicted in Figure 3a.

(ii) Homography generation: Using the feature correspondences (X_{k-1}, X_{k-1}^+) from the KLT tracker, a perspective transformation known as a homography H is estimated using a RANSAC-based scheme. This step is crucial for moving platform tracking because it allows the sets of features X_{k-1} and X_k to be represented in the same coordinate frame through image registration. The quality of a homography estimation between camera views can be visualized via difference imaging (see Figure 3b), defined as

D_k = I_k - I_{k-1}^+ = I_k - H I_{k-1}.   (1)

Note that the R-RANSAC visual tracker only makes use of KLT features and that the difference image D_k is only computed when assessing the homography estimation quality.

(iii) Moving object detection: Equipped with a homography and a set of feature correspondences, the velocity of each of the feature points can be calculated as

V = X_{k-1}^+ - H X_{k-1}.   (2)

If the homography estimate is good, then the velocity of static features will be nearly zero, leaving behind the motion of independent objects only, as shown in Figure 3c. Measurements (z_j = [x, y, v_x, v_y]^T) of independently moving objects are defined as feature points that have a velocity magnitude within predefined thresholds, given by

Z = { (x_i, v_i) : x_i ∈ X_{k-1}^+, v_i ∈ V, τ_vmin ≤ ||v_i|| ≤ τ_vmax }.

This scan of measurements is then given to R-RANSAC to estimate the position of targets. Figure 3d shows the tracking results.

Note that the calibration parameters of the camera are used to undistort the features extracted in step (i). Further, the camera matrix is used to project features from 2D pixel space to the normalized image plane in 3D space, where coordinates are normalized such that the depth is unity. This results in tracker parameters that are less sensitive to differences in calibrated cameras and allows tuning to be done in more intuitive units, as described below.
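
A sketch of the three front-end steps on a pair of consecutive grayscale frames using OpenCV is shown below. The function and threshold names are assumptions, and for brevity it works in raw pixel coordinates rather than the undistorted, normalized image-plane coordinates described above.

```python
import cv2
import numpy as np

def front_end_step(prev_gray, gray, prev_pts, tau_vmin, tau_vmax):
    """One front-end iteration; prev_pts comes from cv2.goodFeaturesToTrack (Nx1x2 float32)."""
    # (i) KLT: propagate the last frame's features into the current frame.
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
    ok = status.ravel() == 1
    X_prev = prev_pts[ok].reshape(-1, 2)
    X_prop = curr_pts[ok].reshape(-1, 2)

    # New Shi-Tomasi features to propagate on the next iteration.
    next_pts = cv2.goodFeaturesToTrack(gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)

    if len(X_prev) < 4:                       # not enough correspondences for a homography
        return np.empty((0, 4)), next_pts

    # (ii) Homography: register frame k-1 to frame k from the correspondences.
    H, _ = cv2.findHomography(X_prev, X_prop, cv2.RANSAC, 3.0)
    if H is None:
        return np.empty((0, 4)), next_pts

    # (iii) Moving object detection: feature velocities per eq. (2),
    #       V = X+_{k-1} - H X_{k-1}; keep features whose speed is within thresholds.
    warped = cv2.perspectiveTransform(X_prev.reshape(-1, 1, 2), H).reshape(-1, 2)
    V = X_prop - warped
    speed = np.linalg.norm(V, axis=1)
    moving = (speed >= tau_vmin) & (speed <= tau_vmax)
    Z = np.hstack([X_prop[moving], V[moving]])   # measurements z_j = [x, y, vx, vy]
    return Z, next_pts
```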

C. Consistent Tracking During Descent

Track continuity is an important attribute of situationally aware systems. This attribute implies that moving targets maintain a unique track ID throughout their lifetime. As the aerial vehicle changes altitude, objects will change in size with respect to the camera field of view. This can cause tracks to fragment into multiple IDs.

In order to maintain track continuity during a UAS descent, we propose using the vehicle altitude to tune parameters of the R-RANSAC visual tracking system. The relevant tuning parameters for R-RANSAC are the inlier region τ_R and the absolute difference thresholds for model merging, τ_x and τ_y. The visual front-end feature velocity thresholds τ_vmin and τ_vmax are also tuned during flight. Denote the UAS altitude as h. The parameters are then

\tau_R = \frac{s}{2h}, \qquad \tau_x = \tau_y = \frac{d_{\text{merge}}}{h}   (3)

\tau_{v_{\min}} = \frac{v_{\min}}{h}, \qquad \tau_{v_{\max}} = \frac{v_{\max}}{h},   (4)

where the tuning parameters are: s, the object size in meters; d_merge, the distance for model merging in meters; and v_min and v_max, the minimum and maximum target velocities in meters per second. Results¹ demonstrating the effectiveness of this visual tracking scheme during a descent can be found in [14].

¹Video results can be found at https://youtu.be/UIlvXSdVvqA
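
A direct transcription of the tuning laws (3)-(4) as a helper function (names are illustrative):

```python
def tune_tracker(h, s, d_merge, v_min, v_max):
    """Altitude-dependent thresholds per (3)-(4); h in meters, velocities in m/s."""
    tau_R = s / (2.0 * h)            # inlier region radius, eq. (3)
    tau_x = tau_y = d_merge / h      # model-merging thresholds, eq. (3)
    tau_vmin = v_min / h             # feature-velocity thresholds, eq. (4)
    tau_vmax = v_max / h
    return tau_R, tau_x, tau_y, tau_vmin, tau_vmax
```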

III. VISUAL SERVOING CONTROL DESIGN

Fig. 4: Geometry of the error function. Shown are two possible LOS vectors, ℓ̂_1 and ℓ̂_2. Using the projection matrix P_m̂, the horizontal error component e_{x_i} is found. Note how this choice of error function contains information about the direction of the error, as opposed to an error function based on the cosine distance (1 - m̂^T ℓ̂), for example.

Controlling the multirotor to avoid moving objects is similar to the problem of commanding a multirotor to follow a moving target. In light of this, we first design an image-based visual servoing (IBVS) controller based on [15] to move the multirotor using its pitch dynamics. This allows us to use the image information coming from R-RANSAC.

In this paper, we assume that the autopilot (see Figure 5) allows acceleration commands and we design a controller for a "flying box" (i.e., no rotational component) with the following simple dynamics:

\ddot{x} = u_1   (5)

\ddot{z} = u_2,   (6)

where u_1 and u_2 are commanded inputs and the dynamics are expressed in a NED inertial frame. Note that we only want to give our controller authority in the x-direction, i.e., although we can control the altitude, we will assume that the altitude control is done elsewhere. Thus, the only way that we are allowing our controller to minimize error is through lateral motion.

Suppose that a camera is mounted to the box at a 45° angle below the body x-axis. The optical axis of the camera goes through the center of the image, and therefore can be written as

\hat{m} = [m_1 \ m_2 \ m_3]^\top = \left[ \tfrac{1}{\sqrt{2}} \ 0 \ \tfrac{1}{\sqrt{2}} \right]^\top,   (7)

where \hat{a} \triangleq a / \|a\|. Given a target on the ground, the camera sensor gives a normalized line of sight (LOS) vector ℓ̂ that points at the target. Note that the projective nature of camera geometry removes depth information and thus the magnitude of the LOS vector is unknown. Note that we use y to indicate the horizontal position of the ground target in the inertial plane and we assume that ÿ = 0.
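
As a quick illustration (not from the paper), the sketch below builds the optical axis (7) for a 45° mount and applies one Euler step to the double-integrator model (5)-(6); the step size is an assumption.

```python
import numpy as np

theta = np.deg2rad(45.0)
m_hat = np.array([np.cos(theta), 0.0, np.sin(theta)])   # eq. (7): [1/sqrt(2), 0, 1/sqrt(2)]

def step_flying_box(state, u1, u2, dt=0.02):
    """One Euler step of the flying-box model (5)-(6); state = [x, z, xdot, zdot] in NED."""
    x, z, xd, zd = state
    return np.array([x + xd * dt, z + zd * dt, xd + u1 * dt, zd + u2 * dt])
```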

The objective of the control strategy is to drive the tracking error between m̂ and ℓ̂ to zero. To do this, we first define the following error function:

e_x(t) = i^\top P_{\hat{m}} \hat{\ell},   (8)

which represents the horizontal component of the error between m̂ and ℓ̂, as shown in Figure 4. The first time derivative of the tracking error is

\dot{e}_x = i^\top P_{\hat{m}} \dot{\hat{\ell}},   (9)

which we can measure since we can obtain the derivative of ℓ̂ numerically. We desire to have e_x(t) → 0 as t → ∞. In order to do this, we must find where our input u_1 appears in these error dynamics. The LOS vector is defined as

\ell = [y \ 0 \ 0]^\top - [x \ 0 \ z]^\top   (10)

with first and second time derivatives

\dot{\ell} = [\dot{y} \ 0 \ 0]^\top - [\dot{x} \ 0 \ \dot{z}]^\top   (11)

\ddot{\ell} = [\ddot{y} \ 0 \ 0]^\top - [\ddot{x} \ 0 \ \ddot{z}]^\top.   (12)

Note that the controlled dynamics appear in the second time derivative of the LOS vector. In order to relate equation (12) to the error dynamics in (9), we let L = \|\ell\| and find the first and second time derivatives of ℓ̂ to be

\dot{\hat{\ell}} = \frac{\dot{\ell}L - \ell\dot{L}}{L^2} = \frac{\dot{\ell}}{L} - \frac{\dot{L}}{L}\hat{\ell}   (13)

\ddot{\hat{\ell}} = \frac{\ddot{\ell}L - \dot{\ell}\dot{L}}{L^2} - \left( \dot{\hat{\ell}}\frac{\dot{L}}{L} + \hat{\ell}\frac{L\ddot{L} - \dot{L}^2}{L^2} \right)   (14)

= \frac{\ddot{\ell}}{L} - \frac{\dot{\ell}\dot{L}}{L^2} - \dot{\hat{\ell}}\frac{\dot{L}}{L} - \hat{\ell}\frac{\ddot{L}}{L} + \hat{\ell}\frac{\dot{L}^2}{L^2}.   (15)

Rearranging (13) and plugging it into (15) for \dot{\ell}/L yields the simplified expression

\ddot{\hat{\ell}} = \frac{\ddot{\ell}}{L} - 2\dot{\hat{\ell}}\frac{\dot{L}}{L} - \hat{\ell}\frac{\ddot{L}}{L},   (16)

which has control coming in through the \ddot{\ell} term. Differentiating the error dynamics in (9) and substituting in (16), (9), and (8) results in

\ddot{e}_x = i^\top P_{\hat{m}} \ddot{\hat{\ell}}   (17)

= i^\top P_{\hat{m}} \left( \frac{\ddot{\ell}}{L} - 2\dot{\hat{\ell}}\frac{\dot{L}}{L} - \hat{\ell}\frac{\ddot{L}}{L} \right)   (18)

= i^\top P_{\hat{m}} \ddot{\ell}\,\frac{1}{L} - 2\, i^\top P_{\hat{m}} \dot{\hat{\ell}}\,\frac{\dot{L}}{L} - i^\top P_{\hat{m}} \hat{\ell}\,\frac{\ddot{L}}{L}   (19)

= i^\top P_{\hat{m}} \ddot{\ell}\,\frac{1}{L} - 2\dot{e}_x\frac{\dot{L}}{L} - e_x\frac{\ddot{L}}{L}.   (20)

Note from (12), with ÿ = 0, that \ddot{\ell} = [-\ddot{x} \ 0 \ -\ddot{z}]^\top = [-u_1 \ 0 \ -u_2]^\top, and therefore i^\top P_{\hat{m}} \ddot{\ell} = -(1 - m_1^2)u_1 - m_1 m_3 u_2, which relates the control input to the error dynamics. Defining the unknown quantities as

\beta_1 = \frac{1}{L}   (21)

\beta_2 = \frac{\dot{L}}{L}   (22)

\beta_3 = \frac{\ddot{L}}{L},   (23)

we can write (20) as

\ddot{e}_x = -(1 - m_1^2)u_1\beta_1 - 2\dot{e}_x\beta_2 - e_x\beta_3,

which can be written as the following second-order system

\dot{x}_1 \triangleq \dot{e}_x = x_2   (24)

\dot{x}_2 \triangleq \ddot{e}_x = h(x) + g(x)u_1 = \underbrace{(-x_1\beta_3 - 2x_2\beta_2)}_{h(x)} + \underbrace{\big(-(1 - m_1^2)\beta_1\big)}_{g(x)}u_1.   (25)

Our goal, as stated above, is to drive the error x_1 \triangleq e_x to zero, thus aligning the optical axis m̂ with the normalized LOS vector ℓ̂. A sliding mode controller (SMC) is of interest here because of its ability to perform robustly in the presence of uncertainties. Recall that the β_i are unknown quantities because they are composed of the target depth, L. Therefore, if we can reach some sliding surface s = kx_1 + x_2 = 0 in finite time, then x_1 will satisfy the differential equation \dot{x}_1 = -kx_1 and will be driven to zero exponentially fast for k > 0 [16]. This is because the surface represents a first-order LTI system, which has an exponentially stable origin.

The appropriate control is found to be

u_1 = \frac{1}{g(x)}\left[ h(x) + k\dot{e}_x + \alpha s \right],   (26)

which asymptotically stabilizes system trajectories to the sliding surface s = \dot{e}_x + k e_x = 0. Because β_1, β_2, and β_3 are unknown, they are left as tuning parameters.
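
The sketch below evaluates the error (8)-(9) and the control (26) as printed. It assumes P_m̂ = I - m̂m̂^T (the standard orthogonal projection, which this excerpt does not state explicitly) and treats β_1, β_2, β_3 and the gains k, α as tuning constants, as the paper suggests.

```python
import numpy as np

m_hat = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)   # optical axis, eq. (7)
P_m = np.eye(3) - np.outer(m_hat, m_hat)           # assumed projection onto the plane perpendicular to m_hat
i_hat = np.array([1.0, 0.0, 0.0])

def lateral_accel_cmd(l_hat, l_hat_dot, beta, k=1.0, alpha=0.5):
    """Sliding-mode lateral acceleration u1 per (24)-(26); beta = (beta1, beta2, beta3)."""
    beta1, beta2, beta3 = beta
    e_x = i_hat @ P_m @ l_hat        # horizontal error, eq. (8)
    e_dot = i_hat @ P_m @ l_hat_dot  # its derivative, eq. (9)
    h = -e_x * beta3 - 2.0 * e_dot * beta2          # h(x) from (25)
    g = -(1.0 - m_hat[0] ** 2) * beta1              # g(x) from (25)
    s = e_dot + k * e_x                             # sliding surface
    return (h + k * e_dot + alpha * s) / g          # control law (26)
```

In practice, l_hat_dot would be obtained numerically, e.g., by differencing successive normalized LOS vectors, as noted after (9).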

IV. OBSTACLE AVOIDANCE ALGORITHM

Now that we have an IBVS controller to align a LOS vector to the optical axis, we can use the same scheme to follow a "virtual" target. Because the control law designed in Section III only uses the control authority in the x-direction of acceleration, we design an additional simple control law that commands an acceleration in the z-direction so that the multirotor descends. Decoupling the control of the x and z directions in this way results in a descent trajectory that is based on the angle of the optical axis with the body frame. Because the camera in this application is mounted at a 45° angle, the descent of the multirotor will also be at a 45° angle.

Consider a multirotor descending at a 45° angle towards a site that is infinitely long and wide. The selected landing site potentially has a number of moving obstacles that we would like the multirotor to avoid as it lands. Using an onboard camera mounted at a 45° fixed angle, the objective is to use the LOS IBVS controller to avoid the moving obstacles. Using the output of the visual tracker discussed in Section II, we can formulate an algorithm that uses the track information in the image plane in the form of LOS vectors. By creating a number of virtual targets in the landing site, the obstacle avoidance algorithm is able to score each virtual target and identify which is the "safest" for the multirotor to land on. This scheme relies only on image plane information, thus eliminating the necessity of global state information and allowing the multirotor to perform autonomous landing in GPS-denied situations.

In the case of a single moving obstacle in the field of view, a potential fields method can be used to repel a virtual target away from the ground obstacles. By repelling the virtual target, the LOS vector changes and the IBVS controller uses its control authority to align the center of the camera (i.e., the optical axis) with the position of the virtual target. Because the camera is at a 45° angle and the multirotor is descending, it will attempt to "land on" the virtual target, which is at a low-risk area of the landing zone.
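
The excerpt does not give the potential-field equations, so the following is only a hypothetical sketch of the idea: tracked obstacles push a virtual landing target away in the (normalized) image plane, and the shifted target's LOS vector is then fed to the IBVS controller. The inverse-square field and gains are assumptions.

```python
import numpy as np

def repel_virtual_target(p_virtual, obstacles, k_rep=0.05, d0=0.5):
    """Shift a 2D virtual-target position away from nearby obstacle tracks."""
    push = np.zeros(2)
    for p_obs in obstacles:                    # obstacle positions from the R-RANSAC tracks
        d = p_virtual - p_obs
        dist = np.linalg.norm(d) + 1e-6
        if dist < d0:                          # only nearby obstacles repel
            push += k_rep * d / dist**3
    return p_virtual + push
```

The shifted target is then converted to a LOS vector and handed to the controller of Section III, which steers the optical axis toward it during the descent.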

Fig. 5: 3DR Y6 multirotor used in the hardware demonstration. The Pixhawk autopilot runs APM:Copter firmware. The camera has a resolution of 800×600 at 30 fps and is mounted at a 45° angle.

V. DISCUSSION

In this paper, we have outlined an algorithm for vision-assisted emergency landing in environments with moving ground obstacles. In future work, we wish to address the local minima problem of the potential field method and demonstrate this emergency landing strategy in hardware.

ACKNOWLEDGMENTS

The authors thank the NASA Langley Systems Analysis & Concepts Directorate (SACD) and the NASA UAS Traffic Management (UTM) project for funding this work and providing flight test support at the NASA Langley sUAS testing site. This research was additionally supported by the Utah NASA Space Grant Consortium and the Center for Unmanned Aircraft Systems (C-UAS), a National Science Foundation-sponsored industry/university cooperative research center (I/UCRC) under NSF Award No. IIP-1161036, along with significant contributions from C-UAS industry members, and in part by AFRL grant FA8651-13-1-0005.

REFERENCES

[1] M. Michał, A. Wiśniewski, and J. McMillan, "Clarity from above," PwC Drone Powered Solutions, p. 38, May 2016.
[2] Amazon.com Inc., "Revising the Airspace Model for the Safe Integration of Small Unmanned Aircraft Systems," NASA UTM 2015: The Next Era of Aviation, pp. 2–5, July 2015.
[3] P. Narayan, P. Wu, D. Campbell, and R. Walker, "An Intelligent Control Architecture for Unmanned Aerial Systems (UAS) in the National Airspace System (NAS)," 2nd Australasian Unmanned Air Vehicle Systems Conference, pp. 20–31, March 2007.
[4] P. Kopardekar and S. Bradford, "UAS Traffic Management (UTM): Research Transition Team (RTT) Plan."
[5] P. Kopardekar, J. Rios, T. Prevot, M. Johnson, J. Jung, and J. E. Robinson III, "UAS Traffic Management (UTM) Concept of Operations to Safely Enable Low Altitude Flight Operations," AIAA Aviation Technology, Integration, and Operations Conference, pp. 1–16, June 2016.
[6] S. Scherer, L. Chamberlain, and S. Singh, "Autonomous landing at unprepared sites by a full-scale helicopter," Robotics and Autonomous Systems, vol. 60, no. 12, pp. 1545–1562, 2012.
[7] Y. F. Shen, Z. U. Rahman, D. Krusienski, and J. Li, "A vision-based automatic safe landing-site detection system," IEEE Transactions on Aerospace and Electronic Systems, vol. 49, no. 1, pp. 294–311, 2013.
[8] L. Mejias and D. Fitzgerald, "A Multi-layered Approach for Site Detection in UAS Emergency Landing Scenarios using Geometry-Based Image Segmentation," in Int. Conf. Unmanned Aircraft Syst., pp. 366–372, 2013.
[9] P. C. Niedfeldt and R. W. Beard, "Multiple target tracking using recursive RANSAC," 2014 American Control Conference, pp. 3393–3398, 2014.
[10] P. C. Niedfeldt, "Recursive-RANSAC: A Novel Algorithm for Tracking Multiple Targets in Clutter," All Theses and Dissertations, Paper 4195, July 2014.
[11] K. Ingersoll, P. C. Niedfeldt, and R. W. Beard, "Multiple target tracking and stationary object detection in video with Recursive-RANSAC and tracker-sensor feedback," 2015 International Conference on Unmanned Aircraft Systems (ICUAS), pp. 1320–1329, 2015.
[12] P. C. Niedfeldt, K. Ingersoll, and R. W. Beard, "Comparison and Analysis of Recursive-RANSAC for Multiple Target Tracking," IEEE Transactions on Aerospace and Electronic Systems, vol. 53, pp. 461–476, Feb. 2017.
[13] E. B. Quist, P. C. Niedfeldt, and R. W. Beard, "Radar odometry with recursive-RANSAC," IEEE Transactions on Aerospace and Electronic Systems, vol. 52, no. 4, pp. 1618–1630, 2016.
[14] P. Lusk and R. Beard, "Visual Multiple Target Tracking From a Descending Aerial Platform," American Control Conference, 2018 (to appear).
[15] J. Lee and R. Beard, Nonlinear Control Framework for Gimbal and Multirotor in Target Tracking. PhD thesis, Brigham Young University, 2018.
[16] H. Khalil, Nonlinear Systems. Pearson Education, Prentice Hall, 2002.