A Shared Pilot-Autopilot Control Architecture for Resilient Flight

Amir B. Farjadian, Benjamin Thomsen, Anuradha M. Annaswamy, and David D. Woods

Abstract—We address the problem of flight control in the presence of actuator anomalies. A shared control architecture that includes the actions of both a human pilot and an autopilot is proposed to ensure resilient tracking performance in the presence of anomalies. The pilot is tasked with higher-level decision-making tasks such as anomaly detection, estimation, and command regulation. The autopilot is assigned a lower-level task of accurate command following. The main innovation in the proposed shared architecture is the use of the human pilot in utilizing the concepts of Capacity for Maneuver (CfM) and Graceful Command Degradation (GCD), both of which originate in the social sciences. Together, they provide guidelines for a system to be resilient, which corresponds to the system's readiness for unforeseen events. The proposed design incorporates elements from adaptive control theory under the control of the human pilot. The shared control architecture is shown to be capable of achieving maximum CfM while allowing minimal GCD, as well as satisfactory command following post-anomaly, resulting in resilient flight capabilities. The proposed controller is analyzed in a simulation study of a nonlinear F-16 aircraft under actuator anomalies. It is shown through numerical studies that under suitable inputs from the pilot, the shared controller is able to deliver a resilient flight.

Index Terms—Resilient Control, Capacity for Maneuver, Graceful Command Degradation, Cyber-Physical Systems, Flight Control, Adaptive Control, Resilience Engineering.

I. INTRODUCTION

FLIGHT ANOMALIES may occur due to a variety of reasons, such as electromechanical damage, sensor faults, severe environmental disturbances, or direct failures in the vehicle control surfaces or engines. A standard practice in flight control for aerial vehicles is to design a robust flight control system so as to ensure the desired performance, such as tracking pilot/guidance commands robustly during operations while minimizing sensitivity to possible perturbations in the aircraft dynamics caused by environmental disturbances, control deficiencies, or sensor failures. The goal of such a control design is to establish an envelope over which performance is robust in the face of disturbances, but the design still has limits: disturbances, failures, and surprising events can occur that fall outside the envelope of robust performance. Systems can be brittle when events challenge their design/performance envelope. Resilience, in part, refers to mechanisms that allow a system to extend its performance when events challenge its boundaries [1]. Defining resilience as the property of a system that characterizes its ability to withstand and recover from extreme and high-impact disturbances, and robustness as an ability to withstand nominal disturbances, the question that we address in this paper is whether flight controllers can be designed to be not only robust but also resilient. Of particular interest is the design of a resilient flight controller that retains the desired tracking performance in the presence of severe operational anomalies.

The approach that we propose to achieve our goal is shared control. One way to extend the control envelope is the use of shared control architectures in which human controllers supplement flight control automation. As the capability of the automated controller to keep pace with growing disturbances saturates, the human flight crew takes over more responsibilities in flight control. In such an event, it is important to deploy a shared control architecture that addresses the problem of bumpy transfers of control from the first-stage (automated) controller to the second-stage (human) controller. Current architectures for shared control generally assume that a partially autonomous machine does all of the work to handle variability and disturbances, until external demands imposed on the machine exceed the automation's capabilities to handle the situation, and then control is transferred to people who have to take over when the situation is already quite difficult to handle, for example as the control surfaces reach the limit of their control authority (i.e., as they saturate). This class of architectures virtually guarantees

bumpy and late transfers of control that increase the risk of decompensation, i.e., the inability of a joint human-machine control system to keep pace with growing or cascading demands [3]. In real cases of human supervision of automation based on this architecture, the bumpy and late transfers of control have contributed to actual accidents [2], [30]. This paper addresses the question: can shared flight control systems be designed to be more resilient than the standard approach? The paper specifies and demonstrates the performance of a new design for resilient shared flight control. The new architecture provides the desired tracking performance as operational anomalies challenge the performance of the first-stage automated controller and the second-stage human pilot gets engaged "bumplessly".

This research is supported by funding from the National Science Foundation, Grant No. 1549815.
Amir B. Farjadian is with the Active Adaptive Control Lab, Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA (e-mail: [email protected]).
Benjamin Thomsen is with the Active Adaptive Control Lab, Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA (e-mail: [email protected]).
Anuradha Annaswamy is with the Active Adaptive Control Lab, Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA (e-mail: [email protected]).
David Woods is with the Cognitive Systems Engineering Laboratory, Department of Integrated Systems Engineering, Ohio State University, Columbus, OH, USA (e-mail: [email protected]).

In order to achieve a resilient control design, we combine adaptive control [5] with work from cognitive engineering that studies how people add resilience to complex systems [2]-[4]. This design is based on two concepts in particular: Capacity for Maneuver (CfM) and Graceful Command Degradation (GCD). CfM is the system's reserved capacity that will remain after the occurrence of an anomaly [3]. It is hypothesized that resiliency is rooted in, and achieved via, monitoring and regulation of the system's CfM [4]. In an engineering context, viewing the actuator input power as the system's capacity, and noting that a fundamental capacity limit exists in all actuators in the form of magnitude saturation, one can define the CfM as the distance between the control input and the saturation limits. The need to avoid actuator saturation, and therefore increase CfM, becomes even more urgent in the face of anomalies, which may push the actuators to their limits. With such a definition, we will investigate in this paper whether a resilient flight control system can be designed with the optimization of the CfM in mind.

The second concept that we utilize to design a resilient flight controller is GCD. The underlying problem of interest is performance in the presence of severe anomalies. Typical performance goals in a flight control problem include satisfactory tracking of commands. However, delivering the same command both pre- and post-anomaly may not be feasible, especially if the anomalies cause a significant loss of actuator effectiveness. GCD is proposed as an inherent metric adopted by humans [2] that allows the underlying system to function so as to retain a target CfM. In other words, a human may introduce a certain amount of GCD as a control variable, tuned in a way that permits the system to reach its targeted CfM. The control architecture that we propose in this paper makes use of such a strategy. With resilient control defined, for the purposes of this paper, as the procurement of the optimal CfM with minimal GCD while delivering high tracking performance, we will design the overall shared architecture so as to have the flight control system perform resiliently in the face of anomalies.

The control architecture proposed in this paper consists of shared decision making between the human pilot and the adaptive autopilot. The presence of an anomaly imposes a number of difficult challenges, all of which have to be dealt with quickly and correctly. In order to meet these challenges, we contend that both the human pilot and an advanced autopilot with adaptive and learning capability need to be utilized. In the shared architecture that is proposed in this paper, the human pilot is allocated higher-level cognitive tasks such as perception of the anomaly and an estimate of the allowable GCD that will ensure maximum CfM. The adaptive autopilot, on the other hand, is tasked with carrying out fast inner-loop decisions such as adaptive articulation of control surfaces that will ensure command tracking in a stable manner.

Adaptive control systems were created to control dynamic processes in the presence of uncertainties [5]. With successes reported in flight control, process control, and automotive control, adaptive control consists of a self-tuned computational mechanism that adapts to the uncertainties by allowing the control parameters to be adjusted so as to minimize a suitably constructed performance error. Since the problem that is addressed in this paper is flight control in the presence of uncertainties, using an autopilot that is based on an adaptive control principle is an apt solution, and forms an important component of our shared control architecture. The adaptive controller we propose is built on the principles proposed in [6]-[8].

The second aspect of the proposed shared controller is the human-pilot action. A number of aircraft accidents have been reported in the literature (e.g., US Airways 1549 [9], El Al 1862 [10], Delta Air Lines 1141 [11], Air Florida 90 [12]). During those unpredictable events, pilots showed a fair amount of understanding of the ongoing anomalies. In these examples, pilots perceived and reported a variety of failures, such as losing thrust in both engines due to a bird encounter (US Airways 1549), an engine fire followed by the loss of another engine (El Al 1862), an engine failure (Delta Air Lines 1141), and inadvertently stalling the aircraft (Air Florida 90). As a potential remedy to these types of events, we postulate that pilots perceive the presence of the anomalies within a certain reaction time [13], [42] and transmit estimates of these failures to the autopilot. As mentioned above, in addition to these estimates, we propose that the pilot also inputs an estimate of the allowable GCD that can be tolerated in a given mission while ensuring that a CfM remains for safe performance in the future.

In order to formally analyze flight control anomalies in the closed-loop dynamics of a piloted aircraft, mathematical models of human pilot behavior need to be considered. Detailed models of pilot behavior, especially at frequencies where stable feedback action is most urgently needed, have been developed by McRuer [14] and Hess [15]-[17]. These models have been studied by Hess [16]-[17] to understand a pilot's actions following significant anomalous events. Mulder et al. [41] examined the weaknesses in [14] and provided modifications based on studies of human pilots. The results of these papers are utilized in determining the pilot's role in our proposed shared control architecture. Additional studies of experienced pilots, when it comes to recognizing anomalies and modifying controller response, have been carried out in [34]-[41]. In [34]-[38], Sarter and Woods conducted an extensive set of empirical studies with line pilots on actual advanced aircraft in full-scope simulation studies. The results on human interaction with cockpit automation showed that flight crews are sensitive to anomalies, especially when conditions do not match expectations (do not fit internal representations of expected aircraft behavior under automatic control in different flight contexts). These studies also show that flight crews' ability to notice anomalies in automation behavior strongly depends on the design of cockpit displays; for example, tactile displays greatly increase the speed and reliability of anomaly recognition without interfering with other pilot tasks [39]. In [41], Mulder et al. point out that the capability of human controllers to recognize anomalies grows with experience: "the Internal Representation (IR), the quality of which increases with exposure and experience, that is the critical driver behind human control adaptations" (p. 472). A recent industry-wide review [40] drawing on diverse data sources supports the results from the full-scope simulator studies. The results from the above studies are suitably integrated into our shared control architecture.

The overall schematic of the control architecture proposed in this paper is shown in Fig. 1, which consists of perception and adaptation components. The perception component consists of both anomaly recognition and estimation. The adaptation component determines how the control input should be altered in the face of an anomaly. Such a shared task distribution between a human pilot and an autopilot is similar to the distinction between actions in the face of abnormal as opposed to unexpected conditions [2], [18]. Anomalies can cause discrepancies between what is observed and what is expected (i.e., unexpected), or discrepancies between observed and desired states (i.e., abnormal). While abnormal conditions may be addressable by simply including an advanced autopilot, the unexpected discrepancies are far more complex and require elaborate decision making, such as findings-to-be-explained or diagnostic reasoning followed by quick corrective actions [2]. Our focus in this paper is on the case of unexpected conditions, as we propose a shared pilot-autopilot decision-making architecture. Similar task distributions have been reported in our earlier papers [19]-[20].

Fig. 1. Shared flight control architecture composed of pilot and autopilot.

The specific adaptive autopilot that we use combines the architecture from [8] with that of [21]-[23]. The advantage of the former is the ability to gracefully degrade the command tracking requirement to retain sufficient CfM and prevent actuator saturation. The advantage of the latter is to ensure a smooth tracking performance even with constraints on the control effort. By combining suitable elements of these two solutions, we will show that when anomalies occur, and following the pilot input, the adaptive controller determines a control solution that not only guarantees bounded solutions but also minimizes the command tracking error.

The rest of the paper is organized as follows. The problem statement and the shared control architecture are given in Section II. The controllers used for comparison are described in Section III. Extensive numerical studies are carried out in Section IV using a high-fidelity fighter aircraft model composed of 5 states, 2 control inputs, and 2 outputs. In that section, we also compare the proposed architecture with other non-shared control solutions based on the autopilot only, which include a standard optimal control approach as in [24] and an adaptive control approach using multiple inputs as in [7] and similar to [6]. The anomalies considered are successive losses of actuator effectiveness. Several performance metrics related to CfM and GCD are also proposed in Section IV, which demonstrate that the shared controller out-performs all other controllers. Summary and conclusions are provided in Section V.

II. THE SHARED CONTROLLER

The model of the aircraft to be controlled is assumed to have the dynamics

    \dot{x}(t) = A x(t) + B \Lambda_f u(t) + d + \Phi^T f(x)    (1)

where x ∈ R^n and u ∈ R^m are deviations around a trim condition in aircraft states and control input, respectively, d represents uncertainties associated with the trim condition, and the last term \Phi^T f(x) represents higher-order effects due to nonlinearities. A is an (n × n) system matrix and B is an (n × m) input matrix, both of which are assumed to be known, with (A, B) controllable, and \Lambda_f is a diagonal matrix that reflects a possible actuator anomaly, with unknown positive entries \lambda_{f_i}. It is assumed that the anomalies occur at time t_a, so that \lambda_{f_i} = 1 for 0 ≤ t < t_a, and \lambda_{f_i} switches to a value that lies between 0 and 1 for t > t_a. Finally, it is assumed that the higher-order effects are such that f(x) is a known vector that can be determined at each instant of time, while \Phi^T is an unknown vector parameter. Such a dynamic model is often used in flight control problems [25].

The goal is to choose the control input u so that the plant state x follows the desired command, which is suitably altered so as to account for the presence of anomalies. The control input u is assumed to be position/amplitude limited and modelled as follows:

    u_i(t) = u_{max_i} \mathrm{sat}\left( \frac{u_{c_i}(t)}{u_{max_i}} \right)
           = \begin{cases} u_{c_i}(t), & |u_{c_i}(t)| \le u_{max_i} \\ u_{max_i} \mathrm{sgn}(u_{c_i}(t)), & |u_{c_i}(t)| > u_{max_i} \end{cases}    (2)

where u_{max_i}, for i = 1, …, m, are the physical amplitude limits of the actuators, and u_{c_i}(t) are the control inputs to be determined by the shared control architecture. The functions sat(·) and sgn(·) denote the saturation and sign functions, respectively.

Our proposed shared control architecture consists of the human pilot and the adaptive autopilot (Fig. 1) providing perception and adaptation in response to actuator anomalies. The details of each control unit are provided in the following sections.

A. Autopilot: μ-mod adaptive control

A typical procedure in any adaptive control design is to generate the desired command for the plant state to follow using the notion of a reference model. This reference model in turn guides the selection of a control law such as a state-feedback controller with adjustable gains. The reference model is typically of the form [5]

    \dot{x}_m(t) = A_m x_m(t) + B_m r_0(t)    (3)
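The per-channel amplitude limit in (2) can be sketched in a few lines; the following is a minimal Python illustration, where the 25-unit elevator limit and the sample inputs are hypothetical placeholders rather than values taken from the paper.

```python
def saturate(u_c, u_max):
    """Actuator amplitude limit of Eq. (2): pass u_c through unchanged
    while |u_c| <= u_max, otherwise clip to u_max * sgn(u_c)."""
    if abs(u_c) <= u_max:
        return u_c
    return u_max * (1.0 if u_c > 0 else -1.0)

# Example: a single channel with a hypothetical amplitude limit of 25.
u_max_el = 25.0
print(saturate(10.0, u_max_el))   # within limits -> 10.0
print(saturate(40.0, u_max_el))   # clipped -> 25.0
print(saturate(-40.0, u_max_el))  # clipped -> -25.0
```

The distance between the commanded input and this hard limit is exactly the per-channel margin that the CfM metrics defined later in this section quantify.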

where r_0 ∈ R^k is a reference input, A_m (n × n) is a Hurwitz matrix, x_m ∈ R^n is the state of the reference model, and (A_m, B_m) is controllable. The input r_0, as well as A_m and B_m, should be chosen such that x_m corresponds to the desired reference system state response due to external commands. A relatively recent modification to this reference model is the use of a closed-loop reference model (CRM) as in [21]-[23]:

    \dot{x}_m(t) = A_m x_m(t) + B_m r_0(t) - L e(t)    (4)

where

    e(t) = x(t) - x_m(t)    (5)

and L < 0 is a constant (or a matrix) selected such that (A_m + L) is Hurwitz. The goal of the adaptive autopilot is then to choose u_{c_i}(t) in (6) so that the error e(t) tends to zero asymptotically with all signals in the adaptive system remaining bounded.

The design of adaptive controllers in the presence of control magnitude constraints was first addressed in [6], with guarantees of closed-loop stability through modification of the error used for the adaptive law. The stability was shown to be global if the plant is open-loop stable, and semi-global otherwise with respect to the magnitude of the input saturation. The same problem was addressed in [8], where the effect of input saturation was accommodated through a modification of the reference model. The adaptation component that we propose in this paper is based on [8] and is termed "μ-mod adaptive control". In this design, in addition to the hard limit imposed on the control input as in (2), a virtual limit u^{\delta}_{max_i} is introduced on the control input as:

    u_{c_i}(t) = \begin{cases} u_{ad_i}(t), & |u_{ad_i}(t)| \le u^{\delta}_{max_i} \\ \frac{1}{1+\mu} \left( u_{ad_i}(t) + \mu \, \mathrm{sgn}(u_{ad_i}(t)) \, u^{\delta}_{max_i} \right), & |u_{ad_i}(t)| > u^{\delta}_{max_i} \end{cases}    (6)

where u_{ad_i}(t) is an adaptive control input of the form

    u_{ad_i}(t) = K_x^T(t) x(t) + K_r^T(t) r_0(t) + \hat{d}(t) + \hat{\Phi}^T(t) f(x)    (7)

In (6), u^{\delta}_{max_i} represents a virtual control limit, defined as

    u^{\delta}_{max_i} = (1 - \delta) u_{max_i}, \quad 0 \le \delta < 1    (8)

where δ is a virtual "buffer" that we introduce into the control input to help retain the desired CfM, denoted CfM_d, which is defined as

    CfM_d = \max_i \left( u_{max_i} - u^{\delta}_{max_i} \right)    (9)

where the index i refers to the i-th control surface and max_i is the maximum operator over the index i. The actual CfM is defined as follows: with the instantaneous margin

    c_i(t) = u_{max_i} - |u_i(t)|

the CfM and its normalized counterpart are

    CfM = \mathrm{rms}\left( \min_i \left( c_i(t) \right) \right) \Big|_{t_a}^{T}, \qquad CfM_n = \frac{CfM}{CfM_d}    (10)

where min_i is the minimum operator over the index i, rms is the root-mean-square operator, and t_a and T refer to the time of the anomaly and the final time, respectively. We also introduce the variable C_i = c_i(t)/u_{max_i}, a normalized version of the instantaneous CfM, to be used for presentation purposes.

The control input (6) includes a parameter μ whose purpose is to move the control input away from saturation under demanding conditions. For example, if |u_{ad_i}(t)| > u^{\delta}_{max_i}, the extreme case of μ = 0 simply sets u_{c_i} = u_{ad_i}, thereby removing the effect of the virtual limit imposed in (8). As μ is increased from zero, the control input decreases in magnitude and moves towards the virtual saturation limit u^{\delta}_{max_i}. Such an increase therefore leads to an automatic increase in the CfM as well, by definition, as in (10).

The adaptive parameters K_x(t), K_r(t), \hat{d}(t), \hat{\Phi}(t), and K_u(t) are adjusted as:

    \dot{K}_x(t) = -\Gamma_x x(t) e^T(t) P B
    \dot{K}_r(t) = -\Gamma_r r_0(t) e^T(t) P B
    \dot{\hat{d}}(t) = -\Gamma_d e^T(t) P B
    \dot{\hat{\Phi}}(t) = -\Gamma_f f(x(t)) e^T(t) P B
    \dot{K}_u(t) = \Gamma_u \Delta u_{ad}(t) e^T(t) P B_m    (11)

where P = P^T is a solution of the Lyapunov equation (for Q > 0)

    A_m^T P + P A_m = -Q    (12)

and \Gamma_x = \Gamma_x^T > 0, \Gamma_r = \Gamma_r^T > 0, \Gamma_u = \Gamma_u^T > 0. The term \Delta u_{ad}(t) is a correction to the reference command due to input saturation, defined componentwise as

    \Delta u_{ad_i}(t) = u_{max_i} \mathrm{sat}\left( \frac{u_{c_i}(t)}{u_{max_i}} \right) - u_{ad_i}(t)    (13)

This correction is applied to the input to the CRM, similar to Lavretsky et al. [8], as

    \dot{x}_m(t) = A_m x_m(t) + B_m \left( r_0(t) + K_u^T(t) \Delta u_{ad}(t) \right) - L e(t)    (14)

In other words, when an anomaly occurs, \Lambda_f may be reduced, and to make up for this the control input may tend to increase, causing it to saturate. To prevent this from occurring, we apply a correction in the form of \Delta u_{ad_i}, generating x_m. Since the plant state will follow x_m, this correction keeps the control input from going into saturation. In particular, if at any time the control input goes beyond certain limits, then \Delta u_{ad_i} becomes nonzero, thereby suitably allowing a graceful degradation of x_m from being chosen as in (3) to that as in (14). The GCD criterion is defined as

    GCD = \frac{\| x_m(t) - r_0(t) \|}{\| r_0(t) \|}, \quad t \in T_0    (15)

where the norm function is defined in (31), T_0 denotes the interval of interest, r_0(t) represents the reference input, and x_m(t) represents the reference command that the aircraft is capable of following, defined as in (14). The complete expansion of this term for the particular example is shown in (34).
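To make the roles of δ and μ concrete, the sketch below implements the virtual limit (8), the μ-mod input shaping (6), and the CfM and GCD metrics of (10) and (15) for sampled single-channel trajectories. All numbers are made up for illustration, and since the paper's norm definition (31) is not shown in this excerpt, an RMS norm is assumed in the GCD computation.

```python
import math

def mu_mod(u_ad, u_max, delta, mu):
    """Eqs. (6) and (8): below the virtual limit the adaptive input passes
    through; above it, the input is pulled toward the virtual limit, more
    strongly the larger mu is (mu = 0 recovers u_c = u_ad)."""
    u_max_delta = (1.0 - delta) * u_max               # Eq. (8)
    if abs(u_ad) <= u_max_delta:
        return u_ad
    sgn = 1.0 if u_ad > 0 else -1.0
    return (u_ad + mu * sgn * u_max_delta) / (1.0 + mu)   # Eq. (6)

def cfm(u_traj, u_max, delta):
    """Eqs. (9)-(10), single channel: c(t) = u_max - |u(t)| sampled over
    the post-anomaly window; CfM is its RMS, normalized by CfM_d."""
    c = [u_max - abs(u) for u in u_traj]
    cfm_val = math.sqrt(sum(ci ** 2 for ci in c) / len(c))
    cfm_d = delta * u_max          # = u_max - (1 - delta) * u_max
    return cfm_val, cfm_val / cfm_d

def gcd_metric(xm_traj, r0_traj):
    """Eq. (15) with an assumed RMS norm: relative deviation of the
    degraded reference x_m from the original command r_0."""
    num = math.sqrt(sum((xm - r0) ** 2 for xm, r0 in zip(xm_traj, r0_traj)))
    den = math.sqrt(sum(r0 ** 2 for r0 in r0_traj))
    return num / den

# mu = 0 leaves the adaptive input untouched; mu = 1 pulls it halfway
# toward the virtual limit (here 0.8 * 25 = 20).
print(mu_mod(24.0, 25.0, 0.2, 0.0))   # -> 24.0
print(mu_mod(24.0, 25.0, 0.2, 1.0))   # -> (24 + 20) / 2 = 22.0
```

The example makes the CfM-GCD trade-off visible: raising μ holds the input nearer the virtual limit (larger margins c_i(t), hence larger CfM), while the accompanying correction (13)-(14) degrades x_m away from r_0, which raises the GCD of (15).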

It is easy to see from (13)-(15) that, similar to CfM, as μ increases, GCD increases as well. This presents a trade-off between CfM and GCD, which is of critical importance to achieving the desired mission objectives. While a larger CfM improves the responsiveness of the system to future anomalies, a lower bound on the reference command is necessary to finish the mission within practical constraints. That is, μ needs to be chosen so that GCD remains above a lower limit. As a result, selecting μ is a critical trade-off in a resilient control system design. In this work, we define resiliency as the system's ability to trade off CfM with GCD so as to deliver the same tracking performance both before and after the occurrence of an anomaly.

The adaptive autopilot in (5)-(14) provides the required control input in (2) as a solution to the underlying problem. The autopilot includes several free parameters, including μ in (6), δ in (8), and L in (14). The choice of δ sets the desired CfM, and the choice of μ has a proportional effect on both CfM and GCD. In our proposed shared control design, we allocate the task of μ-selection to the human pilot, as described below.

B. Human pilot: CfM-GCD trade-off and anomaly estimation

Upon the occurrence of an anomaly, the pilot is the first responder to notice and perceive the malfunction, through sensory interaction with the aircraft via the pilot control interfaces, such as the stick or wheel and the rudder pedals, as well as the indicators/gauges in the cockpit displays. Based on this premise, we postulate that the pilot plays a critical supervisory role in taking care of unexpected conditions. In particular, we relegate the actions of (a) the CfM-GCD trade-off and (b) anomaly estimation to the human pilot (see Fig. 2). These actions may be performed by the pilot as follows:

1) CfM Monitoring: As a first step towards realizing a desired CfM, it is assumed that the pilot monitors the actual CfM, which is defined as in (10). This may be made possible through a display interface in the cockpit through the monitoring of actuator utilization [18]. While (10) may be one measure of CfM, alternate metrics based on the peak value of u(t), or an average value over a certain time interval, may be used as well.

2) CfM-GCD Trade-off: When an anomaly occurs, an increased CfM may be obtainable only at the expense of a degradation of the trackable command. As was argued in the previous section, μ is a compact parameter that directly impacts both CfM and GCD. We assume that the pilot is capable of determining the optimal CfM, which corresponds to the maximum CfM that can be achieved with a minimum GCD. We therefore propose that, with such expertise, the pilot determines the optimal μ and enters the parameter for the adaptive autopilot to use.

3) Anomaly Estimation: The second task that we assign to the pilot is one of anomaly estimation. Noting that the anomaly, which is assumed to result in a loss of control effectiveness, is represented by \Lambda_f in (1), we assume that the pilot provides an estimate \hat{\Lambda}_{f_p} of \Lambda_f.

In summary, the pilot is tasked with providing two pieces of information to the shared controller, μ and \hat{\Lambda}_{f_p} (see Fig. 2 for a schematic). The former provides the requisite GCD that allows the overall system to retain the desired CfM. The latter is a diagnosis of the anomaly as perceived by the pilot.

Fig. 2. The closed-loop shared controller: pilot/autopilot task contribution.

C. The Shared Controller

The goal of the shared controller is to choose u_{c_i}(t) in (6) so that the error e(t) tends to zero asymptotically with all signals in the adaptive system remaining bounded. In addition to this standard goal, the shared controller also attempts to ensure that the closed-loop system achieves a certain CfM-GCD trade-off, in the sense of (10) and (15), so as to ensure the tracking performance post-anomaly.

The overall task distribution among the human pilot and the autopilot is shown in Fig. 2, where the human pilot and adaptive autopilot respond to an unexpected anomaly. While the autopilot is the default controller that provides the required tracking performance, the human pilot detects the presence of an anomaly. The pilot then provides two pieces of information to the adaptive autopilot: one that concerns an estimate of the anomaly, and another that decides the CfM-GCD trade-off. The first corresponds to \hat{\Lambda}_{f_p}, and the latter to μ. The adaptive autopilot then uses these two pieces of information and computes (i) the trackable command, in the form of the reference model state x_m, and (ii) the control input u that ensures that the plant state x tracks x_m. As will be shown numerically in Section IV, the choice of μ directly impacts CfM (defined in (10)) with minimal addition of the term \Delta u_{ad_i}. The adaptive autopilot ensures that x tracks the resulting x_m.

While μ is directly used in (6), \hat{\Lambda}_{f_p} is utilized by the adaptive autopilot in the following manner. Because the expertise of the pilot in general needs to be accommodated in the design of the shared controller, we introduce a new parameter η which represents the pilot's rating.

We assume that 0 < η ≤ 1 and that it is calculated based on the pilot's expertise, as a function of flying hours [26], [32]-[33]. One choice of η is given by

    \eta = f(FH)    (16)

where FH stands for hours of flight in the past 6 months and f(·) normalizes flight hours to η. With this pilot rating, we assume that an estimate \hat{\Lambda}_f of \Lambda_f is used in the shared controller as

    \hat{\Lambda}_f = \eta \hat{\Lambda}_{f_p} + (1 - \eta) \Lambda_{nom}    (17)

where \Lambda_{nom} is an identity matrix representing the nominal value of \Lambda_f.

We now choose the remaining free parameters of the adaptive controller as follows. These parameters include the reference model parameters A_m, B_m and the control parameters K_x(0), K_r(0), and K_u(0). If no anomalies are present, then \Lambda_f = \Lambda_{nom} = I, which implies that the plant state can follow the reference model state if the following conditions are satisfied:

    A_m = A + B K_x^T(0)
    K_r^T(0) = -(A_m^{-1} B)^{-1}
    B_m = B K_r^T(0)
    K_u^T(0) = -A_m^{-1} B    (18)

where K_x(0) is computed using a linear-quadratic regulator (LQR) method [24], and K_r(0) is selected as in (18) to provide unity low-frequency (DC) gain for the closed-loop system. When anomalies occur, \Lambda_f ≠ I, and it is estimated as \hat{\Lambda}_f using the pilot's input as explained above. This information is used to update A_m, B_m at t = t_a, when the anomaly occurs, as

    A_m = A + B \hat{\Lambda}_f K_x^T(t_a)
    K_r^T(t_a) = -(A_m^{-1} B)^{-1}
    B_m = B \hat{\Lambda}_f K_r^T(t_a)
    K_u^T(t_a) = -A_m^{-1} B    (19)

The overall shared controller is summarized in (6)-(19). Using the procedure outlined in [8] and [23], it is easy to show that the adaptive controller proposed in (6)-(19) is guaranteed to generate globally bounded solutions, and that e(t) tends to zero asymptotically, if the plant is open-loop stable; for unstable plants, a bounded domain of attraction can be established. The proof is omitted due to space limitations; the reader is referred to [27] for details. Due to the presence of magnitude constraints, the overall tracking error can be shown to be bounded and of the order of ‖\Delta u_{ad}‖.

III. COMPARISON WITH OTHER CONTROLLERS

To evaluate the performance characteristics of the shared controller, we compare our proposed method with conventional adaptive [7] and optimal controllers [24], as shown in Fig. 3.

A. Adaptive Autopilot

The adaptive controller is adopted from [7], where the f term, associated with actuator locking, is omitted due to its inapplicability in the present problem definition. Assuming the control input to the plant is as in (2), the control input generated by the adaptive autopilot can be described as

    u_{c_i}(t) = K_x(t) x(t) + K_r(t) r_0(t)    (20)

The adaptive parameters K_x(t), K_r(t), and \hat{\lambda}(t) are adjusted as

    \dot{K}_x(t) = -\Gamma_x B^T P e_u x^T
    \dot{K}_r(t) = -\Gamma_r B^T P e_u r_0^T
    \dot{\hat{\lambda}}(t) = \Gamma_\lambda \mathrm{diag}(\Delta u) B^T P e_u    (21)

where P = P^T is a solution of the Lyapunov equation (for Q > 0)

    A^T P + P A = -Q    (22)

\Gamma_x = \Gamma_x^T > 0, \Gamma_r = \Gamma_r^T > 0, \Gamma_\lambda = \Gamma_\lambda^T > 0, and \Delta u(t) represents the control deficiency signal

    \Delta u_i = u_i - u_{c_i}    (23)

and e_u is the augmented error computed from the auxiliary error e_\Delta as follows:

    e_u = e - e_\Delta    (24)
    \dot{e}_\Delta = A_m e_\Delta + B \, \mathrm{diag}(\hat{\lambda}) \Delta u    (25)

The parameter \hat{\lambda} is a vector whose elements are the current estimates of the diagonal terms of \Lambda_f in (1).

The adaptation laws in (21) are initialized using a baseline LQR control law, similar to (18). The LQR method is applied to calculate K_x(0) using A and B, and the feedforward gain K_r(0) is initialized to achieve unity DC gain in the closed-loop system:

    A_m = A + B K_x^T(0)
    K_r^T(0) = -(A_m^{-1} B)^{-1}
    B_m = B K_r^T(0)    (26)

The primary goal of the adaptive autopilot is to minimize the tracking error in normal and anomaly conditions.

B. Optimal Control

The optimal controller is formulated using a conventional LQR technique [24]. Considering (2), the optimal control input is generated as

    u_{c_i}(t) = K_x x(t) + K_r r_0(t)    (27)

The control gains (K_x, K_r) are fixed and chosen similarly to the initialization in (18) and (26). The LQR method is applied to calculate K_x (K_x(0) = K_{x_{LQR}}) using the A and B matrices (\Lambda_f = I). The feedforward gain K_r is initialized to achieve unity DC gain in the closed-loop system as follows:

    A_m = A + B K_x^T(0)
    K_r^T(0) = -(A_m^{-1} B)^{-1}
    B_m = B K_r^T(0)    (28)

The primary goal of the optimal autopilot is to minimize the tracking error under normal and anomaly conditions.

C. Situation Awareness of Pilots

Pilots will have disparate levels of cognitive awareness of the anomalous situation: they may be either situation-aware or situation-unaware. We quantify these differences in the following manner, based on μ. Additionally, we associate a CfM metric with the human pilot based on his/her reaction time. Our premise is that a situation-unaware pilot (SUP) has a longer reaction time (RT) than a situation-aware pilot (SAP), and hence a lower CfM [13]. These models are included in our numerical studies as follows.

1) Situation-Aware Pilot (SAP): The pilot has high situation awareness and a small RT, and therefore can compute μ with a delay of RT. Moreover, he/she can perceive the anomaly as \hat{\Lambda}_{f_p}, leading to an estimate as in (17). This is used in (19) to re-initialize the parameters of the reference model and the controller gains in the adaptive autopilot.

2) Situation-Unaware Pilot (SUP): In contrast to the SAP, the SUP is cognitively saturated and slow to respond, and possesses a large RT. He/she is therefore assumed not to be able to provide an anomaly estimate \hat{\Lambda}_{f_p}. It is assumed that the pilot is capable of providing a μ only after a large delay of RT.

TABLE I
THE OPEN-LOOP SYSTEM MODEL PARAMETERS
[Linearized system matrices A (5 × 5) and B (5 × 2) at the trim condition; the numerical entries are not reliably recoverable from the extracted text.]
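The pilot-rating blend in (16)-(17) can be sketched as follows for a diagonal anomaly matrix. The paper does not specify the normalization f(FH) beyond mapping flight hours to a rating in (0, 1], so the capped-linear form and the 300-hour full-rating threshold below are assumptions made purely for illustration.

```python
def pilot_rating(flight_hours, full_rating_hours=300.0):
    """Eq. (16), illustrative only: map flight hours in the past six
    months onto a rating 0 < eta <= 1 (assumed capped-linear form)."""
    eta = flight_hours / full_rating_hours
    return max(min(eta, 1.0), 1e-3)   # keep eta strictly positive

def blended_anomaly_estimate(lambda_pilot, eta):
    """Eq. (17) for a diagonal anomaly matrix, stored as a list of
    diagonal entries: blend the pilot's perceived effectiveness with
    the nominal (identity) value of 1 per channel."""
    return [eta * lam + (1.0 - eta) * 1.0 for lam in lambda_pilot]

# A pilot rated eta = 0.5 who perceives 30% effectiveness on both
# channels yields a blended estimate of about 0.65 per channel.
eta = pilot_rating(150.0)
print(blended_anomaly_estimate([0.3, 0.3], eta))
```

Note how the blend hedges against pilot misjudgment: as η shrinks (an inexperienced pilot), the estimate \hat{\Lambda}_f used in (19) falls back toward the nominal identity matrix.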

IV. NUMERICAL ASSESSMENT

A. Simulation Example

The control systems were simulated on the flight control problem of the nonlinear longitudinal dynamics of an F-16 aircraft model. The nonlinear model with two inputs and two outputs is considered, where the plant state x_long = [h, θ, V, α, q]^T contains the altitude, pitch angle, airspeed, angle of attack, and pitch rate, respectively. The plant input is u = [δ_el, δ_th]^T, the elevator deflection and the thrust force relative to trim, respectively. Table I gives the linearized system matrices at h_0 = 10000 ft and V_0 = 500 ft/s. For ease of exposition, the trim disturbance d and the nonlinear parameter Φ were set to zero, and the corresponding adaptive laws for d̂ and Φ̂ in (11) were set to zero as well. The control objective is to track step-wise commands in

altitude (Δh = 80 ft) while regulating airspeed (ΔV = 0). To ensure zero steady-state command tracking error, the control model augments the plant state with the integral of the altitude command tracking error,

Fig. 3. Different approaches applied to flight control under anomaly. Top: the proposed shared controller; Bottom: the optimal, adaptive, and μ-mod adaptive autopilots.

h_I(t) = ∫_0^t [h(τ) - h_cmd(τ)] dτ                   (29)

where h_cmd(τ) is the commanded altitude. The plant state used in simulation is thus x_long = [h_I, h, θ, V, α, q]^T. Two consecutive faults were introduced at 125 s and 215 s, resulting in post-anomaly actuator effectiveness of 30% and 10%, respectively. That is, it is assumed that both the elevator deflection and thrust inputs were compromised, with the corresponding 2x2 matrix Λ_f containing equal diagonal entries λ_f, where (see Fig. 4)

λ_f = 1 for t < 125 s;  0.3 for 125 s ≤ t < 215 s;  0.1 for t ≥ 215 s    (30)

Fig. 4. Timeline of the flight scenario used in simulations.

C. Situation Awareness of Pilots

Pilots will have disparate levels of cognitive awareness of an anomalous situation: they may be either situation-aware or situation-unaware. We quantify these differences in the following manner, based on μ. Additionally, we associate a CfM metric with the human pilot, based on his/her reaction time. Our premise is that a situation-unaware pilot (SUP) has a longer reaction time (RT) than a situation-aware pilot (SAP), and hence a lower CfM [13]. These models are included in our numerical studies as follows.

1) Situation-Aware Pilot (SAP): the pilot has high situation awareness and a small RT, and can therefore compute μ with a delay of RT. Moreover, he/she can perceive the anomaly as Λ̂_fp, leading to an estimate as in (17). This is used in (19) to re-initialize the reference model parameters and the controller gains in the adaptive autopilot.

2) Situation-Unaware Pilot (SUP): in contrast to the SAP, the SUP is cognitively saturated and slow to respond, and possesses a large RT. He/she is therefore assumed to be unable to provide an anomaly estimate Λ̂_fp, and is assumed to be capable of providing a μ only after a large delay of RT.
In the absence of the pilot's anomaly estimate, the adaptive autopilot pursues the anomaly estimation process itself, which may exhibit poor performance.

In addition, it was assumed that the anomaly causes the deflection of the elevator to be limited. The elevator's saturation limits (u_max,i) were set to 3 deg, and the thrust saturation limit to 1500 lbs above the nominal value associated with the steady-state trim condition. Inspired by [42], we assumed a reaction time of RT = 0.68 s for the SAP and RT = 5 s for the SUP in the shared controller. Additionally, we assumed δ = 0.25 to specify the desired value of CfM in (9). All LQR designs in (19), (26) and (28) were computed using the state and input cost matrices

Q = diag(0.01, 0.01, 1, 10, 1, 1)
R = diag(1, 1)

B. Performance Metrics

To evaluate the control performance, the Euclidean norm and the root-mean-square (rms) value were used:

‖(x_1, x_2)‖ = sqrt(x_1^2 + x_2^2)                    (31)

rms(x)|_0^T = ( (1/T) ∫_0^T x(τ)^2 dτ )^{1/2},  T > 0   (32)

where T is the integration period. With these definitions, we chose four different metrics to assess the controllers' performance in the above example of command tracking in altitude (h) and velocity (V).

1) Tracking Performance (ρ): the command-following characteristic, or the change in root-mean-square error (RMSE) before and after the first anomaly in each output state:

RMSE_i^- = rms(e_i)|_0^{t_a1}
RMSE_i^+ = rms(e_i)|_{t_a1}^{T_end}
ρ_i = RMSE_i^+ - RMSE_i^-                             (33)

where i = h, V; e_h (ft) and e_V (ft/s) are the altitude and velocity tracking errors; RMSE_i^- is the root-mean-square error before the first anomaly and RMSE_i^+ is computed from the first anomaly to the end of the simulation T_end; t_ai, i = 1, 2, refers to the first and second anomalies.

2) CfM Metric: using (10), the existing CfM with respect to the desired CfM is computed, where u_max,i is the actuator saturation limit, with i = el, th.

3) Graceful Command Degradation (GCD): using (15), and in this particular example, GCD is defined as follows:

GCD_h = ‖h_m(t) - h_0(t)‖ / ‖h_0(t)‖,  t ∈ T_0
GCD_V = ‖V_m(t) - V_0(t)‖ / ‖V_0(t)‖,  t ∈ T_0
GCD = (GCD_V + GCD_h) / 2                             (34)

where the norm function is defined in (31) and the interval T_0 was chosen to cover the last cycle [390 s, 510 s]. The proposed GCD metric in (34) is a normalized scalar indicating the cumulative percentage of command degradation over both output states. Since transients diminish with time, and (14) includes a feedback of the state error e(t), a choice of T_0 when e is small ensures that the term in (15) represents a reasonable measure of GCD.

4) Parameter Estimation Error: after an anomaly, the pilot may deliver an estimate (Λ̂_fp), which results in a parameter estimate (Λ̂_f) as in (17):

ΔΛ_f = ‖diag(Λ_f - Λ̂_f)‖                              (35)

where diag extracts the diagonal elements of the matrix argument, ΔΛ_f is the Euclidean norm of the parameter estimation error, Λ_f is the actual failure acting on the system, and Λ̂_f is the estimate computed from the pilot's input in (17). Substituting (17) into (35), the following relates the pilot's estimation error to the final parameter estimation error:

ΔΛ_f = ‖diag(η(Λ_f - Λ̂_fp) + (1 - η)(Λ_f - Λ_nom))‖    (36)

where Λ̂_fp is the pilot's input estimate and η is the pilot's expertise from (16). The term Λ_f - Λ̂_fp can be considered the pilot's estimation error (ΔΛ_fp).

C. Results and Discussion

The plant dynamics and reference model were considered as in (1)-(3). The shared controller was simulated using (5)-(19), the μ-mod adaptive autopilot using (5)-(14), the adaptive autopilot using (20)-(26), and the optimal autopilot using (27)-(28). The simulation scenario of Fig. 4 was used, where there is a 70% loss of actuator effectiveness followed by a second anomaly. Depending on the metric of interest, the second anomaly was either fixed at 10% or varied in the range 10% ≤ λ_f ≤ 30%. The shared controller, as well as the adaptive and optimal controllers described in Sections II and III, were assessed using the metrics in (10), (15), (33), (34) and (36). The roles of the pilots, capacity for maneuver (CfM) and graceful command degradation (GCD) were quantified in the corresponding cases.

TABLE II
SUMMARY COMPARISON AMONG DIFFERENT CONTROLLERS.

# | Method   | RMSE^-(h,v)     | ρ(h,v)          | CfM*  | GCD
1 | SUP      | (14e-4, 43e-4)  | (56e-4, 31e-3)  | 1.01  | 84e-4
2 | SAP**    | (14e-4, 43e-4)  | (33e-4, 19e-3)  | 1.08  | 55e-4
3 | SAP***   | (14e-4, 43e-4)  | (0.1, 0.2)      | 0.74  | 0.01
4 | Optimal  | (20.88, 3.88)   | (233.8, 14.10)  | 0.74  | NA
5 | Adaptive | (21.39, 3.86)   | (0.98, 1.59)    | 1.17  | NA
6 | μ-mod    | (14e-4, 43e-4)  | (12e-2, 32e-2)  | 0.75  | 16e-3

* CfM_d = 0.75; NA: Not Applicable; SAP**: ‖ΔΛ_f‖ = 0.2; SAP***: ‖ΔΛ_f‖ = 0.4.

Illustrative performance of the shared controller and of the adaptive and optimal autopilots is presented in Figures 5-11. Figs. 5-6 show the details of the shared controller and the corresponding analysis. Figs. 7-9 show the performance of the optimal, adaptive and μ-mod autopilots. Table II lists a summary of the overall performance metrics for the different controllers. Table III carries out a similar comparison for the case when the second anomaly was varied from 30% to 10%.

Figs. 5-9 display the system variables in a similar format, with each figure consisting of 3 rows and 2 columns. The left column corresponds to the tracking performance and error, and the right column to the control inputs and instantaneous CfM. In the left column, the first and second rows show the plant outputs (h, V), the reference model states (h*, V*) when applicable, and the desired commands (h_0, V_0), while the last row shows the instantaneous error (5) for each output. In the right column, the first and second rows show the elevator

(δ_el) and thrust (δ_th) deflections within the actual (u_max,i) and virtual saturation (u^δ_max,i) limits, where applicable. The last row in the right column shows C_i as in (10). In each plot, the vertical lines correspond to the anomalies and, where applicable, to the pilot inputs in the case of the shared controller.
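The scalar metrics in (31)-(34) can be sketched numerically. In the sketch below the helper names are hypothetical, the discrete rms approximates the integral in (32), and the anomaly is identified by its sample index:

```python
import numpy as np

def rms(x):
    """Discrete approximation of the rms value in (32)."""
    x = np.asarray(x, dtype=float)
    return float(np.sqrt(np.mean(x ** 2)))

def tracking_metric(e, k_anomaly):
    """rho in (33): post-anomaly RMSE minus pre-anomaly RMSE,
    with k_anomaly the sample index of the first anomaly."""
    return rms(e[k_anomaly:]) - rms(e[:k_anomaly])

def gcd_metric(h_m, h0, v_m, v0):
    """GCD in (34): normalized command degradation over the window T_0,
    averaged across the altitude and velocity channels."""
    gcd_h = np.linalg.norm(np.subtract(h_m, h0)) / np.linalg.norm(h0)
    gcd_v = np.linalg.norm(np.subtract(v_m, v0)) / np.linalg.norm(v0)
    return 0.5 * (gcd_h + gcd_v)
```

A resilient controller would keep `tracking_metric` near zero (no loss of tracking after the anomaly) while keeping `gcd_metric` small (little sacrifice of the commanded trajectory).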

Fig. 6. The shared controller with SAP: Left) tracking performance; Right) control inputs' status. The vertical lines correspond to the anomalies (t = 125 s, 215 s) and to the pilot's perception of them (t = 125.68 s, 215.68 s), when the pilot provided the anomaly estimate and also tuned the parameter μ, where μ(t = 0) = [1 1]; μ(t = 125.68) = [10 1]; μ(t = 215.68) = [30 1]. The instantaneous errors and CfMs are shown in the last row.

Fig. 5. The shared controller with SUP: Left) tracking performance; Right) control inputs' status. The vertical dashed lines at t = 125 s and 215 s indicate the two subsequent anomalies. The vertical lines at t = 130 s and t = 220 s are the pilot inputs tuning the parameter μ, where μ(t = 0) = [1 1]; μ(t = 130) = [2 1]; μ(t = 220) = [3 1]. The instantaneous errors and normalized CfMs are shown in the last row.

D. The Shared Controller Performance

Fig. 5 shows the performance of the shared controller with a situation-unaware pilot (SUP). The SUP is assumed to have a reaction time of RT = 5 s, after which he/she determines a μ. The 1st anomaly occurs at t = 125 s, and the pilot is therefore assumed to provide the first input at t = 130 s, a compensatory action of increasing μ to 2. Yet, notably, the elevator still reaches the saturation limits, an indication of insufficient CfM. During this period (130 s ≤ t ≤ 215 s), and as shown in the right column, the elevator still reaches the virtual saturation limits. After the second anomaly, at t = 220 s, and considering the low choice of μ, the SUP increases the value of μ to 3, which proves to be a not-large-enough value for the rest of the flight maneuver, as the CfM reaches the zero limit multiple times (t > 220 s). On the other hand, and in this last period, the μ-mod mechanism in the autopilot starts lowering the desired command (h*) to restore the CfM and improve the tracking error through the GCD effect as in (14). This is a compensatory action by the autopilot to account for the small μ chosen by the situation-unaware pilot. The SUP shared controller's performance pre- and post-anomaly is listed numerically in the 1st row of Table II.

Fig. 6 shows the performance of the shared controller with a situation-aware pilot (SAP). In contrast to the SUP, the SAP makes better decisions about μ and also provides an anomaly estimate, used in the shared controller as in (19), faster after each anomaly. In this example, it was assumed that the SAP's estimation and rating are such that ‖ΔΛ_f‖ = 0.2. In addition, the SAP is assumed to have a reaction time of RT = 0.68 s for providing the parameters. As shown in the right column, and in contrast to the SUP, the pilot's actions keep the control system away from the saturation limits throughout the flight maneuver. At the same time, the SAP achieves a better tracking error and lower command degradation. At t = 125.68 s, the SAP perceives the first anomaly and provides an anomaly estimate (Λ̂_fp) in addition to μ = 10. The higher choice of μ (as compared to the SUP) is made to avoid actuator saturation, since the SAP has a better understanding of the anomaly magnitude and hence of the requirements for lowering ρ. Similarly, upon the second anomaly (t = 215.68 s), the SAP delivers an appropriate Λ̂_fp and μ to avoid actuator saturation and achieve a lower ρ. The responses in Fig. 6 show that the shared controller tunes the reference model outputs (h*, V*) suitably to minimize the tracking error. The SAP shared controller performance is numerically encapsulated in the 2nd row of Table II. The 3rd row is dedicated to an SAP with a higher estimation error, ‖ΔΛ_f‖ = 0.4, which shows a worse performance than the SUP.

It is important to note that the pilot (SUP/SAP) achieved stability and a low tracking error by compromising the original command input (r_0), i.e., the notion of graceful command degradation. It is the unique feature of the shared controller to trade off CfM against command degradation. When anomalies cause the control inputs to exceed the virtual saturation limits, the autopilot has a built-in mechanism of relaxing the required control input magnitude by lowering the command magnitude. This effect is clearly shown in Figs. 5-6, where by tuning μ the pilot has reduced the reference commands to increase the available CfM. Yet the obvious question raised by this feature is whether such command degradation is acceptable in a given mission context. While increasing μ can lead to more CfM, that change can also lead to excessive command degradation. Our hypothesis is to defer that decision to the human pilot, who has the greatest amount of information at the time.

E. Comparison with Other Controllers

To provide a benchmark comparison, the performance of the optimal, adaptive and μ-mod adaptive autopilots is shown in Figs. 7-9, using the same simulation scenario as in Figs. 5-6. The summary results are also listed in the bottom rows of Table II. Table III provides a more comprehensive comparison over the full range of loss of actuator effectiveness at the second anomaly (Λ_f = [0.1:0.02:0.3]).

The performance of the optimal autopilot is shown in Fig. 7, where prior to and after the first anomaly the controller shows an acceptable performance. Yet, upon the 2nd anomaly, the elevator control input (δ_el) reaches the saturation limit, and the lack of CfM leads to oscillatory behavior and poor tracking performance. As shown in the last row, the errors increase greatly after the anomalies and the elevator C_el reaches zero. Despite good performance under normal conditions, the fixed-gain optimal controller is not able to cope with uncertainties and deal well with anomalies. The 4th row in Table II shows the numerical performance of this autopilot, which is the worst among the methods.

Fig. 7. The optimal autopilot: Left) tracking performance; Right) control inputs' status. The vertical dashed lines at t = 125 s and 215 s indicate the two subsequent anomalies. The instantaneous error/CfM are shown in the last row.

Fig. 8 shows the performance of the adaptive autopilot during the flight scenario in Fig. 4. The adaptive controller shows an acceptable performance in the initial phase, and even after the occurrence of the first anomaly. Yet, following the 2nd anomaly, the control input reaches the saturation limits, which impacts the output tracking performance. The adaptive controller has the potential to deal with uncertainty and estimate the new parameters, yet it runs out of control input (resources) to respond, and the absence of CfM results in poor performance. The 5th row in Table II shows the numerical performance of the adaptive autopilot: better than the fixed-gain optimal autopilot but worse than the other methods.

Fig. 8. The adaptive autopilot: Left) tracking performance; Right) control inputs' status. The vertical dashed lines at t = 125 s and 215 s indicate the two subsequent anomalies. The instantaneous error/CfM are shown in the last row.

Fig. 9 shows the performance of the μ-mod adaptive autopilot. This autopilot has the ability to trade CfM against GCD, yet has a fixed, predetermined μ = 100. There is no pilot or mechanism to tune μ in the face of anomalies so as to make the best selection in the CfM-GCD trade-off. Similar to the two other autopilots, the μ-mod autopilot shows an acceptable performance before the 2nd anomaly, yet suffers from actuator saturation afterwards. This is due to the very high magnitude of μ, an initial design choice that prioritizes CfM (over GCD). While the higher choice of μ could be advantageous in less severe anomalies (see Table III), it showed a counterproductive effect given the very high severity of the anomaly and the plant nonlinearities. The 6th row in Table II shows the numerical performance of this autopilot.

The advantageous effect of having a high μ (μ = 100) is observable in this result in the range of anomalies mostly above 0.1. The adaptive and optimal autopilots follow these controllers, with the worst performance and low CfM.

Fig. 9. The μ-mod adaptive autopilot: Left) tracking performance; Right) control inputs' status. The vertical dashed lines at t = 125 s and 215 s indicate the two subsequent anomalies. The instantaneous error/CfM are shown in the last row.

Table II shows the summary results for the controllers and the associated performance metrics for a single simulation case, as shown in Figures 5-9. All controllers show good performance prior to the anomaly, yet their differences emerge in response to the anomalies. Post-anomaly, the shared controller with SAP has the best RMSE^+, resiliency metric and CfM. The SAP's performance is followed by that of the SUP and the adaptive autopilot. This is due to the effect of managing CfM in each controller.

F. Pilot's Contribution

A primary component of the shared controller is the human pilot, whose contribution to the estimation error is illustrated in Fig. 10. The top plot shows the pilot's parameter estimation input, and the bottom plot shows the pilot's estimation error. Both plots are generated by algebraic evaluation of (17) and (36), considering a scalar λ_f.

The top contour is a direct demonstration of (17), showing the estimated parameter (λ̂_f) as the pilot's expertise (η) and estimate input (λ̂_fp) are varied. The parameter ranges are η = [0.1:0.01:1] and λ̂_fp = [0.1:0.01:1]. Accordingly, each point on the plot represents the estimated parameter (λ̂_f) based on the pilot's input estimate (λ̂_fp) and expertise (η) using (17). The contour lines show a hyperbolic effect, due to the η·λ̂_fp product in (17). Moving along the λ̂_fp-axis with η = 0 results in a high parameter estimate (≈ 1), due to the dominance of the second term, (1 - 0)λ_nom. It is a characteristic of the proposed model to discount the pilot's input estimate in the case of low expertise. Conversely, moving along the λ̂_fp-axis with η = 1 delivers a parameter estimate exactly equal to the pilot's input. The pilot's input receives the highest weight in the case of high expertise (η).
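In the scalar case, the expertise-weighted estimate in (17) and the resulting error in (36) reduce to the following sketch (function names hypothetical):

```python
def blended_estimate(lam_pilot, eta, lam_nom=1.0):
    """Scalar form of (17): weight the pilot's input by expertise eta,
    falling back toward the nominal value when expertise is low."""
    return eta * lam_pilot + (1.0 - eta) * lam_nom

def estimation_error(lam_true, lam_pilot, eta, lam_nom=1.0):
    """Scalar form of (36): error between the true effectiveness and
    the expertise-blended estimate."""
    return abs(lam_true - blended_estimate(lam_pilot, eta, lam_nom))
```

With η = 1 the pilot's input is used verbatim; with η = 0 the estimate collapses to λ_nom regardless of the input, reproducing the two limiting behaviors seen on the contour.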

TABLE III
COMPREHENSIVE CONTROLLER COMPARISON AVERAGED OVER A RANGE OF ANOMALY SEVERITIES.**

# | Method          | ρ(h,v)          | CfM*  | GCD
1 | Optimal         | (26.59, 5.43)   | 1.12  | NA
2 | Adaptive        | (3.68, 0.45)    | 1.24  | NA
3 | μ-mod adaptive  | (23e-3, 78e-3)  | 1.12  | 79e-4
4 | SAP             | (28e-4, 18e-3)  | 1.21  | 41e-4

* CfM_d = 0.75; ** Λ_f = [0.1:0.02:0.3]; NA: Not applicable.

Fig. 10. Pilot's estimate contribution. Top: parameter estimate based on the pilot's input and expertise; Bottom: mean parameter estimation error based on the pilot's estimation error and expertise.

The second analysis, in the bottom plot, is the mean parameter estimation error based on the pilot's error and expertise. The parameter ranges are the same as in the top plot, with the addition of the actuator effectiveness Λ_f = [0:0.01:1], whose effect is averaged to produce a 2-D figure. The pilot estimation error is assumed to vary over Δλ_fp = [-0.5:0.01:0.5]. Accordingly, each point on the contour corresponds to a specific pilot error (Δλ_fp = λ_f - λ̂_fp) and expertise (η), averaged over the full range of actuator effectiveness (Λ_f = [0:0.01:1]). Similar to the top contour, higher values of expertise (η) are associated with a smaller difference between the parameter estimation error (Δλ_f) and the pilot estimation error (Δλ_fp). The lower range of η removes the sensitivity to the pilot-error axis, leading to maximum error. While moving to the left along the Δλ_fp-axis increases the final error (Δλ_f), moving to the right mostly decreases it. This is an interesting characteristic of the model: the pilot's under-estimation (e.g., λ_f = 0.6, λ̂_fp = 0.4) produces a lower error than over-estimation (e.g., λ_f = 0.4, λ̂_fp = 0.6). Hence the pilot should avoid over-estimating the actuator effectiveness.

Table III provides a more comprehensive comparison among the controllers by showing the results over a range of actuator effectiveness, Λ_f = [0.1:0.02:0.3]; the listed values are averaged over Λ_f. We also assumed a parameter estimation error of ‖ΔΛ_f‖ = 0.2 for the SAP. Accordingly, the SAP shows the most resilient performance, with post-anomaly error lower than pre-anomaly error, while also achieving high CfM and GCD. The SAP is followed by the μ-mod adaptive, adaptive and optimal autopilots in terms of ρ, while they show similar CfM and GCD. The μ-mod adaptive autopilot uses the GCD-CfM trade-off mechanism to improve tracking performance, but suffers from a fixed gain.

G. Resilient Control

As mentioned earlier, resilient control was defined as the system's ability to trade off CfM with GCD so as to deliver the same tracking performance both before and after the occurrence of anomalies. As mentioned earlier, the choice of μ directly affects CfM and indirectly affects GCD. For any μ, the adaptive controller guarantees that the state error e goes to zero. From (6) and (10), it can also be observed that as μ increases, the rms value of e decreases. In summary, the choice of μ directly affects the resilience of the overall system. Given the performance of the shared controller, the question is how one chooses the best μ. The higher the μ, the better the CfM, but at the expense of increased GCD. The greater the μ, the greater the departure of the command signal r_0(t) + K_u^T(t)Δu_ad(t) from the original command signal r_0(t). In general, the desired command may not be altered beyond certain acceptable limits. Too much command degradation, for instance, may lead to too long an arrival time or, in extreme scenarios, to a non-arrival at the destination. These limits will in turn, most likely, determine the achievable tracking performance and CfM, and therefore the overall resilient control performance. We therefore propose that the selection of μ be delegated to the pilot, as he or she may be aware of the allowable GCD and the desirable CfM.

The analysis tested the shared resilient controller performance relative to alternative autopilot designs using a numerical simulation based on a nonlinear F-16 aircraft. The primary assessment metrics used were CfM, GCD, tracking performance and parameter estimation error. The proposed shared controller showed superior performance over the optimal, adaptive and μ-mod adaptive autopilots.

V. IMPLICATIONS

The Resilience Engineering agenda has aspired to develop mechanisms for resilient control [29]. This paper addresses that aspiration by providing a resilient control architecture based on regulating Capacity for Maneuver (CfM) and Graceful Command Degradation (GCD). Furthermore, this paper provides an architecture for shared resilient control with a specification of coordinated roles for the pilot and the automatic controller. The pilot recognizes/estimates anomalous behavior and manages the value of the parameter μ, which leads to maximum CfM with minimal GCD in the command, while the adaptive autopilot delivers tracking performance and stability.

The paper provides a comparative test of the shared resilient control architecture for a specific flight control case and scenario. The architecture provides specific performance metrics, and the test provides detailed performance measures. The key finding is that the shared controller improved resilient performance for handling anomalous conditions. As a result, the test provides a potential standard for quantitative analysis of resilient performance.

As a shared control design, the architecture provides a novel approach for coordinating human and automated roles. Shared control is based on human pilots recognizing and estimating the size and trend of anomalous behaviors, and on the pilot commanding a parameter μ that governs how the autopilot controls the trade-off between CfM and GCD.

This new shared control architecture addresses the problem of bumpy transfers of control in current human-automation systems. Current forms of shared control virtually guarantee late transfers of control that increase the risk of decompensation, that is, the inability of a human-machine system to keep pace with growing or cascading demands [3]. This risk has contributed to actual accidents [2], [30] where loss of vehicle control resulted from bumpy and late transfers from autopilot to human pilot. The shared resilient control architecture developed here demonstrates that there are alternative architectures, and the simulation studies in this paper show that alternative shared control architectures can resolve the bumpy transfer of control problem.

The paper also demonstrates the value of theoretical advances in Resilience Engineering [1]-[4]. The Theory of Graceful Extensibility [4] specifies that adaptive units and networks (in this case the shared human and autopilot control) should regulate Capacity for Maneuver in order to be resilient in the face of anomalies that challenge normal function (in this case, what an autopilot can handle by itself). The shared control architecture was developed to regulate Capacity for Maneuver. Other research results have shown that adaptive units at different scales can monitor their capacity for maneuver and can adapt to compensate when the risk of saturation gets too high [31]. This paper demonstrates that the general concept in the theory can be defined and used in specific control problems. It also shows that systems designed to regulate Capacity for Maneuver can produce improved performance in the face of anomalies. The end result is a new path forward for the design of shared human and machine systems that can reliably produce resilient performance in the face of disturbances, anomalies, and cascades that exceed the machine's capability alone.

ACKNOWLEDGMENT

This research is supported by funding from the National Science Foundation, Grant No. 1549815, with additional support from the U.S. Department of Transportation, University Transportation Center (UTC) Program (DTRT13-G-UTC47). We would also like to thank Dr. Eugene Lavretsky, of The Boeing Company, for his valuable inputs.

REFERENCES

[1] D. D. Woods, "Four Concepts of Resilience and the Implications for Resilience Engineering," Reliability Engineering and Systems Safety, vol. 141, pp. 5-9, 2015.
[2] D. D. Woods and E. Hollnagel, Joint Cognitive Systems: Patterns in Cognitive Systems Engineering. Boca Raton, FL: Taylor & Francis, 2006.
[3] D. D. Woods and M. Branlat, "How Adaptive Systems Fail," in Resilience Engineering in Practice, E. Hollnagel, J. Paries, D. D. Woods, and J. Wreathall, Eds. Aldershot, UK: Ashgate, 2011, pp. 127-143.
[4] D. D. Woods, "The Theory of Graceful Extensibility," Environment Systems and Decisions, in press, 2018.
[5] K. S. Narendra and A. M. Annaswamy, Stable Adaptive Systems. Dover, 2005.
[6] S. P. Karason and A. M. Annaswamy, "Adaptive control in the presence of input constraints," American Control Conference, 1993.
[7] M. Schwager, A. M. Annaswamy, and E. Lavretsky, "Adaptation-based reconfiguration in the presence of actuator failures and saturation," Proceedings of the 2005 American Control Conference, IEEE, 2005.
[8] E. Lavretsky and N. Hovakimyan, "Stable adaptation in the presence of input constraints," Systems & Control Letters, vol. 56, no. 11, pp. 722-729, 2007.
[9] National Transportation Safety Board (USA) (2010). Aircraft Accident Report: Loss of Thrust in Both Engines After Encountering a Flock of Birds and Subsequent Ditching on the Hudson River, US Airways Flight 1549, Airbus A320-214, N106US, Weehawken, New Jersey, January 15, 2009. Washington, DC. NTSB/AAR-10/03.
[10] Netherlands Aviation Safety Board (1994). Aircraft Accident Report: El Al Flight 1862, Boeing 747-258F, 4X-AXG, October 4, 1992. Report 92-11.
[11] National Transportation Safety Board (USA) (1989). Aircraft Accident Report: Delta Air Lines, Inc., Boeing 727-232, N473DA, Dallas-Fort Worth International Airport, Texas, August 31, 1988. Washington, DC. NTSB/AAR-89/04.
[12] National Transportation Safety Board (USA) (1982). Aircraft Accident Report: Air Florida, Inc., Boeing 737-222, N62AF, Collision with 14th Street Bridge, Near Washington National Airport, Washington, D.C., January 13, 1982. Washington, DC. NTSB-AAR-82-8.
[13] R. Hosman and H. Stassen, "Pilot's perception in the control of aircraft motions," Control Engineering Practice, vol. 7, no. 11, pp. 1421-1428, 1999.
[14] D. McRuer, "Human dynamics in man-machine systems," Automatica, vol. 16, no. 3, pp. 237-253, 1980.
[15] R. A. Hess, "Simplified approach for modelling pilot pursuit control behaviour in multi-loop flight control tasks," Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering, vol. 220, no. 2, pp. 85-102, 2006.
[16] R. A. Hess, "Modeling pilot control behavior with sudden changes in vehicle dynamics," Journal of Aircraft, vol. 46, no. 5, pp. 1584-1592, Sep. 2009.
[17] R. A. Hess, "Modeling Human Pilot Adaptation to Flight Control Anomalies and Changing Task Demands," Journal of Guidance, Control, and Dynamics, 2015.
[18] D. D. Woods and N. Sarter, "Learning from Automation Surprises and Going Sour Accidents," in Cognitive Engineering in the Aviation Domain, N. Sarter and R. Amalberti, Eds. Hillsdale, NJ: Erlbaum, 2000, pp. 327-354.
[19] A. B. Farjadian, A. M. Annaswamy, and D. D. Woods, "A Resilient Shared Control Architecture for Flight Control," Proceedings of the International Symposium on Sustainable Systems and Technologies, 2016.
[20] A. B. Farjadian, A. M. Annaswamy, and D. D. Woods, "Bumpless Reengagement Using Shared Control between Human Pilot and Adaptive Autopilot," IFAC Proceedings, Toulouse, France, Jul. 2017.
[21] T. G. Lee and U. Y. Huh, "An error feedback model based adaptive controller for nonlinear systems," in Proc. IEEE Int. Symp. Ind. Electron., pp. 1095-1100, Jul. 1997.
[22] E. Lavretsky, "Adaptive output feedback design using asymptotic properties of LQG/LTR controllers," in Proc. AIAA Guid. Navigat. Control Conf., no. AIAA 2010-7538, 2010.
[23] T. E. Gibson, A. M. Annaswamy, and E. Lavretsky, "On adaptive control with closed-loop reference models: transients, oscillations, and peaking," IEEE Access, vol. 1, pp. 703-717, 2013.
[24] A. E. Bryson, "Optimal control - 1950 to 1985," IEEE Control Systems, vol. 16, no. 3, pp. 26-33, 1996.
[25] E. Lavretsky and K. A. Wise, Robust and Adaptive Control. Springer London, 2013, pp. 317-353.
[26] L. Guohua, "Pilot-related factors in aircraft crashes: a review of epidemiologic studies," Aviation, Space, and Environmental Medicine, 1994.
[27] D. P. Wiese, A. M. Annaswamy, J. A. Muse, M. A. Bolender, and E. Lavretsky, "Sequential Loop Closure Based Adaptive Output Feedback," IEEE Access, 2017.
[28] P. Kapasouris, M. Athans, and G. Stein, "Design of feedback control systems for unstable plants with saturating actuators," Proc. IFAC Symp. on Nonlinear Control System Design, Pergamon Press, pp. 302-307, 1990.
[29] C. G. Rieger, "Notional examples and benchmark aspects of a resilient control system," in Proc. IEEE 3rd International Symposium on Resilient Control Systems (ISRCS), pp. 64-71, 2010.
[30] National Transportation Safety Board (1994). In-flight Icing Encounter and Loss of Control, Simmons Airlines, d.b.a. American Eagle Flight 4184, Avions de Transport Regional (ATR) Model 72-212, N401AM. Aircraft Accident Report NTSB/AAR-96/01. Washington, DC.
[31] R. J. Stephens, D. D. Woods, and E. S. Patterson, "Patient Boarding in the Emergency Department as a Symptom of Complexity-Induced Risks," in Resilience in Everyday Clinical Work, R. L. Wears, E. Hollnagel, and J. Braithwaite, Eds. Farnham, UK: Ashgate, 2015, pp. 129-144.
[32] T. S. Carretta, D. C. Perry Jr., and M. J. Ree, "Prediction of situational awareness in F-15 pilots," The International Journal of Aviation Psychology, pp. 21-41, 1996.
[33] J. L. Taylor, Q. Kennedy, A. Noda, and J. A. Yesavage, "Pilot age and expertise predict flight simulator performance: A 3-year longitudinal study," Neurology, pp. 648-654, 2007.
[34] N. Sarter and D. D. Woods, "Pilot Interaction with Cockpit Automation I: Operational Experiences with the Flight Management System," International Journal of Aviation Psychology, vol. 2, pp. 303-321, 1992.
[35] N. Sarter and D. D. Woods, "Pilot Interaction with Cockpit Automation II: An Experimental Study of Pilots' Model and Awareness of the Flight Management System," International Journal of Aviation Psychology, vol. 4, pp. 1-28, 1994.
[36] N. B. Sarter and D. D. Woods, "'How in the world did we get into that mode?' Mode error and awareness in supervisory control," Human Factors, vol. 37, pp. 5-19, 1995.
[37] N. Sarter and D. D. Woods, "Team Play with a Powerful and Independent Agent: A Corpus of Operational Experiences and Automation Surprises on the Airbus A-320," Human Factors, vol. 39, pp. 553-569, 1997.
[38] N. Sarter and D. D. Woods, "Team Play with a Powerful and Independent Agent: A Full Mission Simulation," Human Factors, vol. 42, pp. 390-402, 2000.
[39] A. E. Sklar and N. Sarter, "Good Vibrations: Tactile Feedback in Support of Attention Allocation and Human-Automation Coordination in Event-Driven Domains," Human Factors, vol. 41, no. 4, pp. 543-552, 1999. https://doi.org/10.1518/001872099779656716
[40] K. Abbott, D. McKenney, and P. Railsback, "Operational Use of Flight Path Management Systems," Final report of the Flight Deck Automation Working Group, Performance-based operations Aviation Rulemaking Committee (PARC) / Commercial Aviation Safety Team (CAST) / FAA, 2013. http://www.faa.gov/about/office_org/headquarters_offices/avs/offices/afs/afs400/parc/
[41] M. Mulder, D. M. Pool, D. A. Abbink, E. R. Boer, and M. M. van Paassen, "Manual Control Cybernetics: State-of-the-Art and Current Trends," IEEE Transactions on Human-Machine Systems, pp. 468-485, 2018.
[42] A. Jain, R. Bansal, A. Kumar, & K. D.

Dr. Amir B. Farjadian received his B.S. degree in biomedical engineering from Azad University, Tehran, Iran, in 2003, the M.S. degree in biomedical engineering from Iran University of Science and Technology (IUST), Tehran, Iran, in 2006, and the Ph.D. in bioengineering from Northeastern University, Boston, MA, USA, in 2015. He is currently a postdoctoral associate at the Massachusetts Institute of Technology (MIT), Cambridge, MA, USA, pursuing research in human-machine cognitive interaction and resilient shared human-machine control systems.

Benjamin Thomsen is a graduate student in the Department of Mechanical Engineering at the Massachusetts Institute of Technology, where he is a researcher in the Active Adaptive Control Laboratory. He received his B.Eng. in Mechanical Engineering from McGill University in 2015. His research interests lie in the area of motion planning and control for autonomous aerial vehicles, with the goal of ensuring stable and consistent UAV behavior in the presence of uncertain vehicle dynamics.

Dr. Anuradha M. Annaswamy received her Ph.D. in Electrical Engineering from Yale University in 1985. She has been a member of the faculty at Yale, Boston University, and MIT, where she is currently the director of the Active-Adaptive Control Laboratory and a Senior Research Scientist in the Department of Mechanical Engineering. Her research interests pertain to adaptive control theory and applications to aerospace, automotive, and propulsion systems, cyber-physical systems science, and CPS applications to Smart Grids, Smart Cities, and Smart Infrastructures. She is the author of a hundred journal publications and numerous conference publications, co-author of a graduate textbook on adaptive control (2004), and co-editor of several reports including "Systems & Control for the future of humanity, research agenda: Current and future roles, impact and grand challenges" (Elsevier), "IEEE Vision for Smart Grid Control: 2030 and Beyond" (IEEE Xplore), and Impact of Control Technology (ieeecss.org/main/IoCT-report, ieeecss.org/general/IoCT2-report). Dr. Annaswamy has received several awards including the George Axelby and Control Systems Magazine best paper
Singh, “A awards from the IEEE Control Systems Society (CSS), the comparative study of visual and auditory reaction times Presidential Young Investigator award from NSF, the Hans on the basis of gender and physical activity levels of Fisher Senior Fellowship from the Institute for Advanced medical first year students”, International Journal of Study at the Technische Universität München, the Donald Applied and Basic Medical Research, pp.124, 2015. Groen Julius Prize from the Institute of Mechanical Engineers, a Distinguished Member Award, and a Distinguished Lecturer Award from IEEE CSS. Dr. Annaswamy is a Fellow of the IEEE and IFAC. She has served as the Vice President for

Conference Activities (2014-15), and is currently serving as the VP for Technical Activities (2017-18) in the Executive Committee of the IEEE CSS. She is the Deputy Editor of the

Elsevier publication Annual Reviews in Control (2016- present).

David D. Woods received his Ph.D. from Purdue University in 1979. He is Professor in Integrated Systems Engineering at the Ohio State University, Columbus, OH, USA. His work on safety in high-risk, complex settings appears in the books Behind Human Error (1994; 2nd edition 2010), Joint Cognitive Systems (Foundations, 2005; Patterns, 2006), Resilience Engineering: Concepts and Precepts (2006), and Resilience Engineering in Practice (2011). He studies human coordination with automated and intelligent systems, as well as accident investigations in aviation, nuclear power, critical care medicine, crisis response, military operations, outages of critical digital services, and space operations (he was an advisor to the Columbia Accident Investigation Board). He began developing Resilience Engineering in 2000-2003 as part of the response to several NASA accidents. Dr. Woods is Past-President of the Resilience Engineering Association and also Past-President of the Human Factors and Ergonomics Society. He has received many awards, including a Laurels Award from Aviation Week and Space Technology (1995) and the Jimmy Doolittle Fellow Award from the Air Force Association, and has served on many national advisory committees, including the National Research Council committees on Dependable Software (2006) and on Autonomy in Civil Aviation (2014), the FAA Human Factors and Cockpit Automation Team (1996; and its reprise in 2013), and the Defense Science Board Task Force on Autonomy (2012).