
IFAC 2014 Autonomous Safety Critical Systems for Manned and Unmanned Aviation

Dave Vos, Ph.D., August 2014

A Systems Level Perspective on Autonomy

Relevant Vos Background

• MIT Aero/Astro Ph.D., 1992
• World's first successful Autonomous Unicycle Robot

• Founder, CEO, CTO of Athena Technologies, Inc. (acquired by Rockwell Collins, 2008) – Control, Nav and Guidance Systems specialists
• UAS solutions and Manned Aviation solutions
• Certificated Light Sport Aviation engine controls for the BRP Rotax engine

• Example UAS: US Army Shadow Nav/Guidance/Control System
• Thousands of flights per month
• Operational time in excess of 1 million flight hours
• Automated Launch, Flight, Mission, Recovery

• These are personal comments and thoughts on Autonomy
• Not affiliated with any entity

Levels of Autonomy – All must Work as Advertised at the Top Level

Unicycle – David Vos and his unicycle robot (YouTube video): https://www.youtube.com/watch?v=Uj1m0z0cScU

Baby Steps MIT 1992

Robotic Unicycle tracks Heading and Speed Inputs
Full Autonomy, Including Handling of Failures

Push Button Autonomy
• Execute flight plan
• Recover from damage
• Reroute to

Autonomous Damage Tolerance – Rockwell Collins & DARPA 2009: https://www.youtube.com/watch?v=dGiPNV1TR5k

A word on "Autonomous"

• The FAA is currently concerned about the implication of "Autonomous"; we need to help clarify
• Two main perspectives:
• Predictable systems: the resulting action due to a stimulus is always predictable
• Unpredictable systems: the resulting action due to a stimulus is not necessarily predictable
• A candidate definition of autonomy:
• Autonomous System: automatically performing tasks without requiring continuous supervision

This Discussion Contemplates such Predictable Autonomous Systems

Some Recent Events – Manned and Unmanned

• US Airways Airbus A320 landing in the Hudson River
• Precious time spent figuring out what had happened
• Only remaining realistic option was to land in the river – a heroic accomplishment under the circumstances
• An autonomous system could very likely have landed back on the runway at LaGuardia (see the rough glide arithmetic after the timeline below)

• Air France Airbus A330 crash in the Atlantic Ocean en route from Rio to Paris
• From 38,000 ft into the ocean after air data discrepancies
• Crew unaware the aircraft was stalled at 38,000 ft
• An autonomous system could have made this a non-event

• UAS: Predator B crash at the US border
• Switching crew consoles confused the operators as to the control mode configuration
• Autonomous cross-checks onboard the aircraft and in the GCS could have prevented the mishap (a notional sketch follows below)
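To make the cross-check idea concrete, here is a minimal, hypothetical sketch (purely illustrative, not drawn from the actual Predator B aircraft or GCS software) of an automated consistency check between the mode configuration reported by the aircraft and the configuration the active ground console believes it commanded; all class names, fields, and console identifiers are assumptions.

```python
# Hypothetical air/ground mode cross-check sketch (illustrative names only).
from dataclasses import dataclass

@dataclass
class ModeConfig:
    autopilot_engaged: bool
    throttle_mode: str      # e.g. "SPEED_HOLD", "IDLE", "MANUAL" (assumed labels)
    control_console: str    # which console currently has control

def cross_check(aircraft: ModeConfig, console: ModeConfig) -> list[str]:
    """Return mismatches between what the aircraft is doing and what the
    console in control believes it commanded."""
    alerts = []
    if aircraft.control_console != console.control_console:
        alerts.append("Control authority mismatch: confirm console handover")
    if aircraft.autopilot_engaged != console.autopilot_engaged:
        alerts.append("Autopilot engagement state differs between air and ground")
    if aircraft.throttle_mode != console.throttle_mode:
        alerts.append(f"Throttle mode mismatch: aircraft={aircraft.throttle_mode}, "
                      f"console={console.throttle_mode}")
    return alerts

# Example: a console switch leaves the throttle mapping in a different mode.
for a in cross_check(ModeConfig(True, "SPEED_HOLD", "PPO-1"),
                     ModeConfig(True, "IDLE", "PPO-2")):
    print("CAUTION:", a)
```

The design point is simply that a machine, rather than a distracted operator, notices the disagreement at the moment of console handover.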

• Asiana Airlines Boeing 777 crash landing at San Francisco Airport
• Approach was slow and not stabilized, with the Autothrottle inadvertently disabled
• i.e. automatic airspeed control was off; crew unaware
• An autonomous system could have ensured a safe landing

Heroic Safe Landing in the Hudson River, New York, 2009

• ∆T = 0 s: Bird strike at 4.5 miles from the LaGuardia runway
• Altitude 2818 ft AGL
• Both engines lost
• ∆T = 40 s: Engine restart attempts initiated; continued for 30-40 seconds
• ∆T = 210 s: Touchdown in the river
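As a rough sanity check on the "could have returned to LaGuardia" claim, the arithmetic below compares the glide ratio the geometry above demands with a ballpark engines-out glide ratio. This is a back-of-the-envelope sketch, not the NTSB's analysis; the 17:1 glide ratio and the 1.5x turn-back path penalty are assumptions.

```python
# Back-of-the-envelope glide feasibility check (illustrative assumptions).
altitude_ft = 2818.0                  # AGL at the bird strike (timeline above)
straight_line_ft = 4.5 * 5280.0       # 4.5 statute miles to the runway, in feet

turn_back_penalty = 1.5               # assumed extra path length for the turn back
glide_ratio_available = 17.0          # assumed clean A320 engines-out glide ratio

required_ratio = (straight_line_ft * turn_back_penalty) / altitude_ft
print(f"required glide ratio ~ {required_ratio:.1f}:1, "
      f"available ~ {glide_ratio_available:.0f}:1")
# ~12.6:1 required vs ~17:1 available: tight but geometrically feasible,
# which is consistent with the NTSB simulations succeeding only when the
# turn-back decision was made essentially immediately.
```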

• Airspeed was well below best range speed for most of the descent
• NTSB piloted simulations landed safely on the LaGuardia runway in 53% of attempts
• An automated system could achieve a very high success rate
• NTSB recommends automated engine operational status monitoring for timely decision making (a notional sketch follows below)

Ref: NTSB report NTSB/AAR-10/03
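The NTSB recommendation above is essentially a monitoring and decision-latency problem. The sketch below shows one plausible, entirely hypothetical shape such an automated engine operational status monitor could take; the signal names and thresholds are illustrative, not values from the report or any certified system.

```python
# Hypothetical automated engine operational status monitor (assumed thresholds).
def engine_operative(n2_pct: float, egt_c: float, fuel_flow_kgph: float) -> bool:
    """Crude health check: usable thrust implies core rotation, combustion
    temperature, and fuel flow all above idle-ish floors (assumed values)."""
    return n2_pct > 55.0 and egt_c > 300.0 and fuel_flow_kgph > 100.0

def engines_out_decision(samples: list[tuple[float, float, float]],
                         consecutive_needed: int = 3) -> bool:
    """Declare 'engine inoperative' only after several consecutive bad samples,
    so a single sensor glitch does not trigger the decision."""
    bad_run = 0
    for n2, egt, ff in samples:
        bad_run = bad_run + 1 if not engine_operative(n2, egt, ff) else 0
        if bad_run >= consecutive_needed:
            return True
    return False

print(engines_out_decision([(20.0, 150.0, 0.0)] * 3))   # -> True after 3 bad samples
```

The point is timeliness: a monitor like this makes the engines-out determination in seconds, freeing the crew (or an autonomous planner) to spend the remaining time on the landing decision instead of diagnosis.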

Air France flight 447

• En route from Rio to Paris
• Cruising at 35,000 ft
• Crossing through a line of storms in the mid-Atlantic, 400 km wide
• Multiple airspeed sensors malfunctioned
• Autopilot automatically disconnected and no longer tracked airspeed

• Two properly trained First Officers in the cockpit could not stabilize the aircraft
• Captain was not in the cockpit at the time of the events; he returned too late
• Significant confusion and erratic behavior ensued
• E.g. large pitch-up and roll commands applied at various stages
• Aircraft was steadily flown into a stalled state and remained there from 38,000 ft down to impact
• Over one period of 30 s, full nose-up pitch and full roll commands were applied

Ref: BEA Final Report, AF 447 accident

US Customs and Border Protection – Predator B

Ref: UAV_IIC_Pres_10-16-07.pdf

Asiana Airlines Flight 214

• Poor communication existed amongst the crew, with a lack of callouts and confirmations
• Pilot Flying (PF) was on a checkout flight, monitored by another Captain acting as Pilot Monitoring (PM)
• PF inadvertently caused the Autothrottle to enter HOLD mode
• PF incorrectly believed the Autothrottle would always prevent low airspeed
• PF was not aware that the Autothrottle was in HOLD mode and not controlling airspeed
• Approach was not stabilized at 500 ft AGL – PF or PM or both should have called for a go-around (a notional stabilized-approach monitor is sketched below)
• Go-around was called too late – at 90 ft AGL
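The crew-awareness failures listed above are exactly what a simple automated monitor can catch. The sketch below is a hypothetical stabilized-approach monitor: the 500 ft checkpoint comes from the bullet above, but the speed and sink-rate gates, the function names, and the sample values are assumptions, not airline or NTSB criteria.

```python
# Hypothetical stabilized-approach monitor (illustrative gates and names).
def approach_stable(alt_agl_ft: float, ias_kt: float, vref_kt: float,
                    sink_fpm: float, autothrottle_controlling_speed: bool) -> bool:
    """Below 500 ft AGL the approach should meet basic gates, and the system
    should know whether anything is actually controlling airspeed."""
    if alt_agl_ft > 500.0:
        return True                      # gates applied at the 500 ft checkpoint
    speed_ok = (vref_kt - 5.0) <= ias_kt <= (vref_kt + 15.0)
    sink_ok = sink_fpm < 1000.0
    return speed_ok and sink_ok and autothrottle_controlling_speed

def monitor(alt_agl_ft: float, ias_kt: float, vref_kt: float,
            sink_fpm: float, autothrottle_controlling_speed: bool) -> str:
    if not autothrottle_controlling_speed and alt_agl_ft < 1000.0:
        return "ALERT: no automatic airspeed control active on approach"
    if not approach_stable(alt_agl_ft, ias_kt, vref_kt, sink_fpm,
                           autothrottle_controlling_speed):
        return "ALERT: unstabilized approach - go around"
    return "approach stable"

# Example values chosen to show the alert path; not flight data.
print(monitor(alt_agl_ft=400, ias_kt=118, vref_kt=137,
              sink_fpm=1200, autothrottle_controlling_speed=False))
```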

Ref: NTSB report NTSB/AAR-14/01

2005 NASA Aviation Safety Report on Crew Performance in Emergencies

• 86% of textbook emergencies are handled well by the crew
• These have well-proven checklist procedures in place

• Only 7% of non-textbook emergencies are handled well
• No procedure exists for these – crew must innovate in real time. Very difficult!

We Must Commit to Delivering the Advertised Level of Autonomy

• Design philosophy is critical. Either we assume:
• Crew are superhuman
• Make no mistakes
• Can resolve any complex situation arbitrarily quickly whilst performing other tasks
• Or, more realistically:
• Crew are human
• Will make mistakes
• Able to manage the system, but not be the system

• Don't expect the crew to multi-task, figure out multi-level problems, and/or determine emergency solutions in real time
• We must make the level of automated backup and recovery support the advertised level of autonomy
• We cannot require a superhuman crew to resolve lower-level problems

Automation Beyond Normal Envelope

DARPA and Rockwell Collins 2010 – Autonomous all-attitude flight: http://www.youtube.com/watch?v=D4b3hgG3jCk

Certification Requirements Drive Architecture Decisions

• Current certification philosophy allows dependence on superhumans as the ultimate backup to resolve complex failure scenarios

• FMECA-based Systems Engineering must be employed and iterated upon to reach the architecture design
• Assume the crew is a normal human, i.e. not very good at simultaneous real-time multi-tasking and problem solving
• Build appropriate levels of redundancy, including analytic redundancy, to resolve problems automatically and keep flying (a notional analytic-redundancy sketch follows below)
• Enable the crew to coordinate emergency actions vs. solving emergency problems
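"Analytic redundancy" in the third bullet means using physics and independent sensors to synthesize a signal when the primary sensors fail, as in the AF 447 air data case. The sketch below is a simplified, illustrative voter that blends redundant pitot readings with a model-based estimate (e.g. derived from GPS groundspeed and a wind estimate); it is not a certified design, and the agreement band and example numbers are assumptions.

```python
# Illustrative analytic-redundancy voter for airspeed (assumed values).
from statistics import median

def voted_airspeed(pitot_kt: list[float], model_estimate_kt: float,
                   agree_band_kt: float = 15.0) -> tuple[float, str]:
    # Keep only pitot readings that agree with the independent model estimate.
    plausible = [v for v in pitot_kt if abs(v - model_estimate_kt) < agree_band_kt]
    if len(plausible) >= 2:
        return median(plausible), "VALID"          # normal case: sensors agree
    if len(plausible) == 1:
        return plausible[0], "DEGRADED"            # single plausible sensor left
    return model_estimate_kt, "ANALYTIC_ONLY"      # fly on the analytic estimate

# Example: two iced pitots reading low, one healthy, model estimate ~272 kt.
speed, status = voted_airspeed([80.0, 85.0, 268.0], model_estimate_kt=272.0)
print(speed, status)   # -> 268.0 DEGRADED
```

The system keeps a usable airspeed signal and a clearly annunciated validity state, instead of silently handing a degraded aircraft to the crew.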

System-Level, SW, and Algorithm Design Process

• Set top-level functional requirements, including requisite levels of autonomy for flight plans or missions
• Define notional architecture and algorithms
• Use FMECA to establish adequacy in terms of failures (a notional FMECA coverage check is sketched below)
• Iterate until converged
• Prove in test (simulations/simulators, HWIL, etc.)
• Iterate until converged
• Flight test
• Iterate until converged
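One way to read "use FMECA to establish adequacy" is to treat the FMECA itself as data that the iteration loop queries: every severe failure mode the current architecture still leaves to the crew (or to nothing) is a reason to iterate again. The sketch below is hypothetical; the severity scale, field names, and example entries are assumptions.

```python
# Hypothetical FMECA-as-design-driver check (illustrative entries only).
from dataclasses import dataclass

@dataclass
class FailureMode:
    item: str
    mode: str
    severity: int            # 1 (minor) .. 4 (catastrophic), assumed scale
    mitigation: str           # "automatic", "crew_procedure", or "none"

fmeca = [
    FailureMode("pitot system", "all probes iced", 4, "automatic"),   # analytic redundancy
    FailureMode("engine", "dual flameout", 4, "crew_procedure"),      # gap: relies on crew
    FailureMode("GCS link", "console mode mismatch", 3, "none"),      # gap: no mitigation
]

def architecture_gaps(table: list[FailureMode], min_severity: int = 3) -> list[FailureMode]:
    """Return severe failure modes not yet resolved automatically;
    these drive the next iteration of the architecture."""
    return [fm for fm in table if fm.severity >= min_severity
            and fm.mitigation != "automatic"]

for fm in architecture_gaps(fmeca):
    print(f"Iterate design: {fm.item} / {fm.mode} (severity {fm.severity})")
```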

Software Development Challenges

• Safety-critical SW development is still a relatively immature engineering discipline
• By definition, SW enables designers to "tweak infinitely"
• SLOC (lines of source code) certification cost can range from $75 to many hundreds of dollars per line (see the rough cost arithmetic below)
• Can rapidly become cost prohibitive
• Tough business decisions need to be made
• Adjust the fielded level of autonomy accordingly
• Often leads to ultimately depending on a superhuman crew
• Cost, simplicity, and reduced SLOC count go hand in hand
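The cost bullet is easy to make concrete with rough arithmetic. The per-line range comes from the text; the example code sizes and the $300/line stand-in for "many $100s" are assumptions.

```python
# Rough SLOC-driven certification cost arithmetic (illustrative sizes).
def certification_cost(sloc: int, cost_per_line_usd: float) -> float:
    return sloc * cost_per_line_usd

for sloc in (50_000, 200_000, 1_000_000):
    low = certification_cost(sloc, 75.0)
    high = certification_cost(sloc, 300.0)   # assumed stand-in for "many $100s/line"
    print(f"{sloc:>9,} SLOC: ${low/1e6:5.1f}M - ${high/1e6:6.1f}M")
```

Even a mid-sized flight-critical codebase lands in the tens of millions of dollars, which is why simplicity and reduced SLOC count are treated as first-order design goals.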

Summary: Some Key Themes for Autonomy

• A relevant report: National Academies TRB, Autonomy in Civil Aviation: http://www.nap.edu/catalog.php?record_id=18815
• We must commit to delivering the autonomy we advertise
• Focus on the system architecture and the necessary levels of autonomy to meet system requirements
• Fundamentally driven by solid Systems Engineering practice
• Each level of autonomy needs to be delivered without the need for the crew to be superhuman in cases of failures or mode selection

Summary: Some Needed Technologies and Tools for Enabling Broad use of Safety Critical Autonomy

• Mature Systems Engineering and Software Engineering disciplines
• Structured system architecture design techniques and tools
• Safety-critical systems design through Systems Engineering and the use of FMECA as the design driver
• Versatile requirements tracking tools and techniques
• Automated testing tools and techniques
• High-reliability/integrity automatic code generation
• Tools for tracking and mapping of test plans and results to requirements (a notional traceability check is sketched below)
• Methods for reducing system complexity
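For the test-plan-to-requirements bullet, the sketch below shows the minimal shape such tooling takes: a requirements set, test results annotated with the requirements each test covers, and a coverage report that surfaces gaps. The requirement IDs, test names, and mapping format are purely illustrative.

```python
# Hypothetical requirements-to-test traceability check (illustrative data).
requirements = {
    "REQ-001": "Maintain commanded airspeed within +/- 5 kt",
    "REQ-002": "Detect loss of air data and switch to analytic estimate",
    "REQ-003": "Execute automatic go-around if approach unstabilized below 500 ft AGL",
}

test_results = [
    {"test": "SIM-airspeed-hold",  "covers": ["REQ-001"], "passed": True},
    {"test": "HWIL-pitot-failure", "covers": ["REQ-002"], "passed": True},
    # REQ-003 has no test yet - the tooling should surface that gap.
]

def coverage_report(reqs: dict, tests: list) -> None:
    covered = {rid for t in tests for rid in t["covers"] if t["passed"]}
    for rid, text in reqs.items():
        status = "COVERED" if rid in covered else "NOT COVERED"
        print(f"{rid}: {status} - {text}")

coverage_report(requirements, test_results)
```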