
Doctoral Thesis in Industrial Work Science

Adapting to increased automation in the aviation industry through performance measurement and training

Barriers and potential

JOHAN RIGNÉR

ISBN: 978-91-7873-693-5

TRITA-ITM-AVL 2020:42
KTH, www.kth.se
Stockholm, Sweden 2020

Adapting to increased automation in the aviation industry through performance measurement and training

Barriers and potential

JOHAN RIGNÉR

Academic Dissertation which, with due permission of the KTH Royal Institute of Technology, is submitted for public defence for the Degree of Doctor of Technology on Thursday the 10th December 2020, at 10:00 a.m. in F3, Lindstedsvägen 26, Stockholm.

Doctoral Thesis in Industrial Work Science KTH Royal Institute of Technology Stockholm, Sweden 2020 © Johan Rignér

ISBN: 978-91-7873-693-5 TRITA-ITM-AVL 2020:42

Printed by: Universitetsservice US-AB, Sweden 2020

Abstract

The increased use of automation has affected the work on the flight deck. The Single European Sky ATM Research programme (SESAR), launched to improve the performance of the European ATM system, identifies automation as a key enabler of future system performance. The aviation system is a large, complex socio-technical system, affected by internal and external stressors at all system levels. At the work process level of this system, the flight deck represents a Joint Cognitive System. When accidents or incidents do occur, the importance of looking beyond the label of flight crew error to understand what happened is widely recognized. As flight safety improves, there are fewer incidents and accidents to learn from, which increases the importance of examining normal operations data for improvement.

The flight crew training environment relies increasingly on collected data about an individual airline’s flight operational environment and performance. Through airlines’ performance measurement systems, a large amount of performance data is collected. However, this data is not in a format immediately useful for studies of either complex socio-technical systems or joint cognitive systems. In addition, regulatory, financial, and other constraints limit airlines’ use of the collected data as well as how they perform training.

The purpose of this research is to increase knowledge about how training content and learning opportunities for flight crew relate to airline performance monitoring and measurement processes, given a highly automated, dynamic environment. Against this background, barriers to and potential for improvements that support the flight crew in operating the highly automated aircraft are identified.

This research has been conducted using a mixed-method approach for collecting and analyzing data, within an applied research tradition. The empirical data in this thesis are primarily based on two research projects, HILAS and Brantare, both with explicit goals of knowledge generation and learning among participating organizations. The results are based on the following methods: 1) system analysis, using Rasmussen’s model of a socio-technical system involved in risk management as the framework, to describe the aviation system primarily from the perspective of the flight crew and their automated work environment, 2) interviews with pilots, 3) workshops with groups of pilots and safety office staff, 4) an attempted implementation of a proposed method for using data, and 5) collection of flight operational data.

Based on Rasmussen’s model of a dynamic socio-technical system, the aviation system of interest ranges from a “Single European Sky”, regulators and national legislation to flight operations, training, and the work on the flight deck, as well as political and financial pressures on the airline. The conclusions drawn from this comprehensive scope rely on the author’s domain knowledge, acquired from some 30 years of experience in the aviation industry.

Several barriers against the use of performance data for knowledge and learning improvements are identified. The airline monitoring systems are not ideal for specifically measuring automation-related problems and flight crew-automation interactions. Due to the already high flight safety levels, new performance measurement processes and activities are neither prioritized, invested in nor explored. An attempt to implement a proposed data-use method revealed difficulties in finding causalities and relationships between available airline parameters. With unclear causality between the various parameters recorded and actual outcomes, it is difficult for airlines to use the available data as a source for confident training design. This is also the case for the selection of Safety Performance Indicators, which are often outcome-based at a high level. More cross-system integration may render the current measurement systems insufficient for understanding difficulties and possibilities in the greater aviation system.

Potential for improvement related to the use of data, knowledge and learning is also identified. Flight crews show a high acceptance of a proposed learning concept based on normal flight data. A greater emphasis on using indicators that show airline adaptability and flexibility is proposed. Moving from a scheduled training activity mindset to a wider concept of learning, knowledge management and sharing is also suggested as a cost-efficient way forward. Increased utilization of normal operational flight data for this purpose has the potential to contribute to both efficiency and safety in aviation.

This thesis contributes to knowledge about airline performance measurement and flight crew training. The results of this research are also valuable in other highly automated, safety-critical domains with a high acceptance of performance being measured and analyzed.


Sammanfattning

Den ökade automatiseringen har påverkat piloternas arbete i flygplanets cockpit. Den gemensamma europeiska flygtrafikforskningen, som i stor utsträckning sker inom ramen för Single European Sky ATM research, SESAR, syftar till att utveckla det europeiska flygtrafiksystemet. SESAR har identifierat automation som en viktig faktor för förbättrad prestanda. Flygsystemet kan ses som ett komplext stort sociotekniskt system. Detta system påverkas och belastas av interna och externa faktorer på alla systemnivåer. På arbetsprocessnivå kan arbetet på flight deck ses som ett Joint Cognitive System. När olyckor eller incidenter trots allt inträffar är det idag vedertaget att se till systemet som en helhet för att kunna förstå vad som hänt i stället för att skuldbelägga individer. Samtidigt, när flygsäkerheten förbättras blir det färre incidenter och olyckor att dra lärdom av vilket ökar vikten av att förlita sig på data från dagliga, normala operationer för att förbättra verksamheten vidare.

Piloters utbildningsmiljö förlitar sig alltmer på information och insamlade data om enskilda flygbolags specifika operativa miljö. Genom flygbolagens system för prestandamätning samlar de in stora mängder data om deras verksamhet. Dessa data är emellertid inte i ett format som är användbart för traditionella studier av stora, komplexa socio-tekniska system, eller för studier av ett Joint Cognitive System. Dessutom styr regelverk, ekonomi och andra begränsningar flygbolagens användning av uppgifter som samlas in, vilket påverkar hur utbildning kan utföras.

Syftet med denna forskning är att öka kunskap om hur utbildningsinnehåll och inlärningsmöjligheter för piloter relaterar till flygbolagets system för prestandamätning, givet den mycket automatiserade dynamiska miljö piloter befinner sig i. Mot denna bakgrund undersöks i denna avhandling hinder och möjligheter för systemförbättringar för att stödja piloter i driften av de automatiserade flygplanen.

Inom ramen för avhandlingens empiriska studier har olika forskningsmetoder använts: 1) Systemanalys med Rasmussens modell för ett sociotekniskt system involverat i riskhantering som ramverk, för att beskriva luftfartssystemet, främst med ett perspektiv från piloter och deras automatiserade arbetsmiljö, 2) Intervjuer av piloter, 3) Workshops med grupper av piloter och säkerhetskontorspersonal, 4) Implementeringsarbete av en föreslagen metod för hantering av data och 5) Insamling av flygoperativa data. De empiriska uppgifterna i denna avhandling baseras främst på två forskningsprojekt, HILAS och Brantare, båda med tydliga mål för kunskapsgenerering och lärande bland deltagande organisationer.

Med Rasmussens ramverk för sociotekniska system som modell, ges en omfattande beskrivning av komplexiteten inom luftfartsindustrin. Denna beskrivning sträcker sig från det Europeiska projektet Single European Sky, tillsynsmyndigheter, nationell lagstiftning till flygoperationer, utbildning och till arbetet på flight deck. Även politiska och ekonomiska påtryckningar på flygbolaget berörs. De empiriska uppgifterna tolkas mot författarens 30-åriga domänkunskap inom flygbranschen.

Flera hinder för systemutveckling identifierades. De existerande mät- och övervakningssystemen är inte idealiska för att specifikt mäta automatiseringsrelaterade problem, inklusive interaktion mellan piloter och de automatiserade systemen. På grund av de redan höga flygsäkerhetsnivåerna kan investeringar i och kartläggning av nya processer för t.ex. prestandamätning ges låg prioritet. Resultatet av implementeringsarbetet med en föreslagen ny metodik angående ett flygbolags datahantering visade på svårigheter att hitta orsak och samband mellan tillgängliga parametrar. Med oklart orsakssamband mellan olika registrerade parametrar och faktiska resultat är det svårt för flygbolagen att använda tillgängliga data som en användbar källa för design av återkommande utbildningsaktiviteter. Denna begränsning gäller även indikatorer för säkerhetsprestanda, som ofta är resultatbaserade på en hög nivå. Mer integration mellan olika delar av luftfartssystemet kan göra att de nuvarande mätsystemen blir otillräckliga för att förstå svårigheter och möjligheter i det större luftfartssystemet.

Potential och möjligheter för förbättringar identifieras också. Piloter visade stor acceptans gentemot ett föreslaget inlärningskoncept baserat på data från normala flygningar. Ett större fokus på att använda indikatorer som visar flygbolags anpassningsförmåga och flexibilitet föreslås. Samtidigt föreslås att flytta fokus från en rigid schemalagd träningsstruktur till ett bredare koncept för inlärning och kunskapshantering. På olika sätt kan normala operativa flygdata användas för detta ändamål.

Denna avhandling bidrar med kunskap inom området prestandamätning, hantering av flygdata och utbildning av piloter. Denna kunskap bör vara värdefull även inom andra högautomatiserade säkerhetskritiska områden där det finns en hög acceptans för att prestationer, och tillhörande resultat, mäts och analyseras.


Preface and acknowledgement

The work for this thesis started in the late 1990s. I was at that time doing research related to the environmental impact of aviation, and I concluded that a significant move towards a more automated aviation system would be required to achieve the desired environmental developments. At the same time, a significant development related to aircraft automation, flight deck design and the associated human-system interaction was taking place. The introduction of the Electronic Flight Instrument System (EFIS), the Flight Management System (FMS), fly-by-wire and other sophisticated automated systems had changed the airline pilots’ workplace profoundly, while the training curricula for pilots had remained largely unchanged. Later, through the work of SESAR and NextGen, ambitious further goals towards a, potentially fully, automated system were formulated.

My research started in a context where much research was being conducted on various issues related to the work on the increasingly automated flight deck. At the same time, higher system level theories continued to evolve, moving focus from the sharp end to the blunt end, from human-machine interaction to human-system interaction, and from human error to resilient organizations owing their success to the presence of humans.

The fact that the work for this thesis has been conducted over such an extended period has been both a curse and a blessing. As automation on the flight deck has proliferated, it has become increasingly integrated and more difficult to address as separate subsystems. As such, it has become more difficult to identify specific issues that pilots might have operating the aircraft’s automated systems, and how such potential issues could be resolved. At the same time, theories regarding safety approaches have continued to evolve, which is discussed and shown extensively throughout this thesis.

In a context where automation continues to develop and system complexity increases, the need to understand how close to the boundary of unacceptable risk operations are conducted remains a key activity in any airline, both from an efficiency and a safety perspective. Hopefully this thesis can contribute to the development of knowledge that improves our understanding of how the complex socio-technical aviation system works.

There was a longer break in my research, essentially between the two main research projects HILAS and Brantare. During this period, I was promoted to captain, working full-time as a pilot and initially commuting to a crew base in another country. This limited my possibilities to conduct continuous research. However, I gained insights into a new professional role (as captain) that I believe have contributed to my research.

I am deeply grateful for the unlimited patience and continuous support from my supervisor Professor (emeritus) Lena Mårtensson. Without your encouragement this would not have been possible. I am equally grateful for the valuable discussions and collaborations over the years with my assistant supervisor, Associate Professor Pernilla Ulfvengren. I am also thankful for all the people I have met during this significant period of my life, providing insights, and equally important, joy and interesting lunch discussions.


To my family, my wife Maria, encouraging me to finish the work, and our amazing and wonderful children, David and Anna, for putting up with me talking about these things for such an extended period of time. Thank You!

Stockholm, October 2020

Johan Rignér


List of appended papers

This thesis is based on the work presented in the following papers:

1. Sharing the Burden of Flight Deck Automation Training (Rignér & Dekker). The International Journal of Aviation Psychology, 10(4), 317-326. 2000. The main idea and most of the writing for the paper come from the first author. However, each of the two co-authors contributed with thoughts and writing expressed in this paper.

2. Measuring safety performance – Strategic risk data (airline safety and human factors issues) (Rignér, Ulfvengren, & Kay). In Proceedings of the 21st Annual European Aviation Safety Seminar, EASS. March 16-18, 2009. Nicosia, Cyprus. The first author proposed the method, which was finalized by all authors. The first author did most of the writing with input from all authors.

3. Study of safety performance indicators and contributory factors as part of an airline systemic safety risk data model (Rignér, Ulfvengren, Cooke, Leva, & Kay). In Proceedings of the International Ergonomics Association (IEA) Conference. Aug 9-14, 2009. Beijing, China. The first author designed the research, led the implementation effort, and extracted the data. All co-authors contributed to the analysis part of the study. The first author did most of the writing with input from the others.

4. In Need of a Model for Complexity Assessment of Highly Automated Human Machine Systems (Barchéus, Ulfvengren, & Rignér). In the 1st annual conference of ComplexWorld, SESAR. Seville, Spain. July 6-8, 2011. I contributed with input to the manuscript and made sure that the final paper adequately described the aviation system.

5. Airline perspective on future automation performance – Increased need for new types of operational data (Rignér, Ulfvengren, & Moberg). 5th International Conference on Research in Air Transportation – ICRAT 2012, May 2012, University of California, Berkeley. The first author designed the questionnaire and did all the interviews. The first and second authors did most of the writing. All authors contributed with input to the manuscript.

6. Fine-tuning flight performance through enhanced functional knowledge (Rignér, Ulfvengren, Moberg, & Näsman). Submitted October 2020 to Aviation Psychology and Applied Human Factors. The first author devised the idea described in the paper. The method for complementing available flight data with additional interviews and a workshop was developed jointly. The fourth author set up the database. The first author carried out half of the interviews conducted and led the workshop with pilots. The first author did most of the writing for the manuscript with input from all authors.


List of additional papers

1. Approximation of pilot operational behavior affecting noise footprint in steep approaches (Moberg, Rignér, Ulfvengren, & Näsman). Noise Control Engineering Journal, 68(2), 179-198. March-April 2020.
2. Pilot impact on the noise abatement effect of steeper approaches – Initial analysis of wind and flight data (Rignér, Moberg, & Ulfvengren). In Proceedings of the Internoise 2017 conference, 27-30 August, Hong Kong, 2017.
3. Förstudie för metodutveckling för inflygningsprocedurer för minskat buller (Rignér, Ulfvengren, & Moberg). TRV 2014/87274, Trafikverket, 2014.
4. How predictive can we be? (Rignér). In SAS internal safety magazine Safety feedback, 2010.
5. SAS HILAS Participation and Experiences (Ulfvengren, Rignér, Ydalus, & Mårtensson). Deliverable No DI 50.36, submitted to the EU commission September 2009.
6. Integrating Safety and Innovation in an Airline (Ulfvengren, Rignér, & Mårtensson). In Proceedings of the International Ergonomics Association (IEA) Conference. Aug 9-14, 2009. Beijing, China.
7. Modern Flight Training – Managing automation or learning to fly? (Rignér & Dekker). In S. W. A. Dekker & E. Hollnagel (Eds.), Coping with computers in the cockpit, 1999. Ashgate.
8. A sustainable transport system for Sweden in 2040 (Steen, Åkerman, Dreborg, Henriksson, Höjer, Hunhammar, & Rignér). In Meersman, Van de Voorde, & Winkelmans (Eds.), World Transport Research: Selected Proceedings from the 8th World Conference on Transport Research, 1999.
9. Färder i framtiden – Transporter i ett bärkraftigt samhälle (Dreborg, Henriksson, Hunhammar, Höjer, Rignér, & Åkerman). KFB-Rapport, 1997.
10. Flygteknik – Potential för minskad miljöpåverkan (Rignér). FOA-Rapport, 1996.

Other

1. Pilot study to identify needs of a performance management system for future flight deck automation issues (Rignér, Ulfvengren, & Barchéus). Poster session at the 1st International conference on application and theory of automation in command and control systems, ATACCS 2011. Barcelona, Spain. May 26-27, 2011.


List of abbreviations

ANS Air Navigation Services
ANSP Air Navigation Service Provider
APC Airline Pilot Certificate
AQP Advanced Qualification Programme (FAA)
ATC Air Traffic Control
ATM Air Traffic Management
ATO Approved Training Organization
ATQP Alternative Training and Qualification Programme
ATPL Air Transport Pilot License
CAA Civil Aviation Authority
CAST Commercial Aviation Safety Team
CDM Collaborative Decision Making
CICTT CAST/ICAO Common Taxonomy Team
CNS/ATM Communication, Navigation, Surveillance/Air Traffic Management
CPDLC Controller – Pilot Data Link Communication
CPL Commercial Pilot’s License
CRM Crew Resource Management
CWA Cognitive Work Analysis
DSPI Direct Safety Performance Indicator
DST Decision Support Tools
EASA European Union Aviation Safety Agency
EBT Evidence Based Training (ICAO)
EFB Electronic Flight Bag
EFIS Electronic Flight Instrument System
EGPWS Enhanced Ground Proximity Warning System
FAA Federal Aviation Administration
FANS Future Air Navigation System
FCL Flight Crew Licensing
FDM Flight Data Monitoring
FMS Flight Management System
FRAM Functional Resonance Analysis Method
FRMS Fatigue Risk Management System
FTL Flight Time Limitations
GEMS Generic Error Modeling System
GPS Global Positioning System
GPS/GNSS Global Positioning System/Global Navigation Satellite System
HILAS Human Integration into the Lifecycle of Aviation Systems
IATA International Air Transport Association
ICAO International Civil Aviation Organization
IEA International Ergonomics Association
ILS Instrument Landing System
INCOSE International Council on Systems Engineering
IOSA IATA Operational Safety Audit
JAR Joint Aviation Requirements


JCS Joint Cognitive System
KTH/CSA Kungliga Tekniska Högskolan/Center for Sustainable Aviation
KPI Key Performance Indicators
LOA Levels of Automation
MCC Multi Crew Cooperation
MPL Multi-crew Pilot License
MRO Maintenance and Repair Organization
NextGen US ATM Modernization Project
NDM Naturalistic Decision Making
NP Nominated Person
NTSB National Transportation Safety Board
NTSC National Transportation Safety Committee
PANS Procedures for Air Navigation Services
RNP Required Navigational Performance
SARP Standards and Recommended Practices
SES Single European Sky
SESAR Single European Sky ATM Research
SESAR JU Single European Sky ATM Research Joint Undertaking
SNA Social Network Analysis
SOP Standard Operating Procedure
SPI Safety Performance Indicator
STAMP Systems-Theoretic Accident Model and Processes
STEADES IATA Safety Trend Evaluation, Analysis & Data Exchange System
SWIM System Wide Information Management
TCAS Traffic Collision Avoidance System
TEM Threat and Error Management
UAV Unmanned Aerial Vehicles
WHO World Health Organization


Table of Contents

Abstract ...... i
Sammanfattning ...... iii
Preface and acknowledgement ...... v
List of appended papers ...... vii
List of additional papers ...... viii
List of abbreviations ...... ix
1 Introduction ...... 1
1.1 Automation and aviation safety ...... 1
1.2 A socio-technical system ...... 2
1.3 The flight deck – also a socio-technical system ...... 4
1.4 Airline performance measurement and flight crew training ...... 4
1.5 Research question ...... 6
1.6 Thesis scope/delimitations ...... 6
1.7 Timeline – Research overview ...... 6
1.8 Thesis structure ...... 7
2 A pilot and airline perspective in the aviation system ...... 9
2.1 System of interest – perspective and boundaries ...... 9
2.2 Government level – EU & National governments ...... 10
2.2.1 A Single European Sky, SES ...... 11
2.2.2 Single European Sky ATM Research (SESAR) ...... 11
2.3 Regulators and associations level – EU & national legislation ...... 12
2.4 Company level – The airline ...... 13
2.4.1 Safety Management System requirements ...... 13
2.5 Management level – Flight Ops, Training & Safety departments ...... 15
2.6 Staff level – Flight Crew ...... 16
2.7 Work level – The flight deck environment and the role of the flight crew ...... 18
2.8 The hazardous process level – the flight ...... 19
2.9 Environmental stressors ...... 20
2.9.1 Changing political climate and public awareness ...... 20
2.9.2 Changing market conditions and financial pressure ...... 20
2.9.3 Changing competency and levels of education ...... 21
2.9.4 Fast pace of technological change ...... 22
2.10 Performance measurement and training ...... 24
2.10.1 Performance measurement ...... 24
2.10.2 Training ...... 25
2.11 Aviation system from an airline perspective – summary ...... 27
3 Theoretical framework ...... 29
3.1 Socio-technical system of systems ...... 29
3.2 Automation – Implications for flight deck work ...... 29
3.2.1 Issues with automation ...... 30
3.3 Cognitive Systems Engineering and Joint Cognitive Systems ...... 33
3.4 Complexity ...... 34
3.5 Safety and human behavior in context ...... 34
3.5.1 Early approaches to safety ...... 34
3.5.2 “Safety I” – system perspective assuming simplified understandable causality ...... 36
3.5.3 “Safety II” and Resilience engineering ...... 36
3.5.4 Conflicting views – what does it mean? ...... 38
3.6 Understanding and controlling the function of a socio-technical system ...... 38
3.6.1 Performance measurement ...... 39
3.6.2 Safety Performance Indicators ...... 40
3.6.3 Methods for socio-technical systems and safety ...... 41
3.6.4 Using what is measured for improvements ...... 42
3.7 Theoretical summary ...... 43
4 Methods ...... 45
4.1 Overall research approach ...... 45
4.1.1 Applied research ...... 45
4.1.2 System analysis ...... 45
4.1.3 Domain knowledge – my own professional experience ...... 46
4.1.4 Collection of qualitative and quantitative data ...... 47
4.2 Interviews ...... 48
4.3 Workshops ...... 49
4.4 Flight data collection and analysis ...... 49
4.5 The appended papers’ association to the research question ...... 50
5 Results ...... 53
5.1 Explorative interviews (2003) ...... 53
5.2 Appended paper 1 – Sharing the burden of flight deck automation training (2000) ...... 54
5.3 Appended paper 2 – Measuring Safety Performance: Strategic Risk Data (Airline Safety and Human Factors Issues) (2009) ...... 54
5.4 Appended paper 3 – Study of safety performance indicators and contributory factors as part of an airline systemic safety risk data model (2009) ...... 56
5.5 Appended paper 4 – In need of a model for complexity assessment of highly automated human machine systems (2011) ...... 57
5.6 Appended paper 5 – Airline perspective on future automation performance – increased need for new types of operational data (2012) ...... 58
5.7 Appended paper 6 – Fine-tuning flight performance through enhanced functional knowledge (2020) ...... 59
6 Analysis ...... 61
6.1 System capabilities and competence ...... 61
6.2 Socio-technical model further elaborated ...... 63
6.3 Barriers for improved performance measurement and training/learning ...... 65
6.3.1 Performance measurement ...... 65
6.3.2 Training/learning ...... 66
6.3.3 Barriers for potential improvements ...... 67
7 Discussion and Conclusions ...... 71
7.1 Results vs. theories ...... 71
7.2 Potential for improvements ...... 72
7.3 Application of theoretical models to airline safety processes ...... 75
7.4 Limitations ...... 76
7.4.1 Evaluation and scrutiny of methods ...... 76
7.5 Time aspect ...... 77
7.6 Contribution ...... 77
7.7 Future research ...... 78
7.8 Conclusions ...... 78
Final thoughts ...... 80
8 References ...... 81

Appendices

A. Paper 1: Sharing the Burden of Flight Deck Automation Training
B. Paper 2: Measuring Safety Performance – Strategic Risk Data
C. Paper 3: Study of Safety Performance Indicators and Contributory Factors
D. Paper 4: In Need of a Model for Complexity Assessment
E. Paper 5: Airline Perspective on future Automation Performance
F. Paper 6: Fine-tuning flight performance through enhanced functional knowledge
G. Interview questions, conducted in 2003, directing future work



1 Introduction

This chapter describes the automated environment of the aircraft flight deck as part of a greater socio-technical system, outlines the reasoning behind the research question and describes the structure of the thesis.

1.1 Automation and aviation safety

The aviation domain has experienced an increased use of automation. Many advantages, such as increased productivity, better quality, robustness, consistency, and efficiency, are attributed to automation (e.g. Boy, 2014). Sophisticated automation on the flight deck has also been introduced with the purpose of reducing flight crew workload, adding new capabilities, and increasing fuel efficiency (e.g. Martins et al., 2014). Features introduced over the last decades include the Flight Management System (FMS), Electronic Flight Bags (EFBs) and increased use of controller pilot datalink communication (CPDLC) devices. Global Positioning System/Global Navigation Satellite System (GPS/GNSS) based procedures allow Required Navigational Performance (RNP) operations and new approach and departure procedures. Lately, new flight crew operational support procedures have been proposed and are currently being tested to improve efficiency (Abdelmoula & Scholz, 2018). Automation is a key enabler of future system efficiency gains (SESAR, 2015).

Despite all the advantages, the increase in flight deck automation has introduced challenges to activities on the flight deck, attributed to design issues of information technologies, tools, and displays that create problems for the pilots. Already in the 1990s, the Federal Aviation Administration, FAA, launched a study to address the interfaces between flight crews and flight deck systems, which recognized the importance of looking beyond the label of flight crew error to understand why errors occur (FAA, 1996). As awareness grew of the impact automation had on the workplace, more focus was directed towards conscious design of the automated systems, emphasizing the importance of including the human operator in design and not only looking at automation technologies in isolation (e.g. Wiener & Curry, 1980; Billings, 1996; Parasuraman et al., 2000).

Air transportation is an extremely safe system. Globally, the risk of death per boarding over 2008-2017 fell by more than half compared to the previous decade (Barnett, 2020). The year 2017 was documented as the safest year in history, with no passenger fatalities on larger jet transport aircraft (IATA, 2019). However, 2018 and 2019 included some serious accidents with many fatalities, for example Cubana de Aviación Flight 972, which crashed during climb-out, and Saratov Airlines Flight 703, which crashed shortly after takeoff, killing everyone on board. Also, the two recent major accidents involving the 737 MAX aircraft took place during these years. Below follows a brief summary of an accident where the flight crew’s interaction with the automated systems was identified as a contributing factor. Ironically, automated systems are often installed to enhance safety.

San Francisco International Airport, USA, B777, 2013

In good weather conditions, Asiana Airlines flight 214 crashed just short of the runway during a visual approach to runway 28L. As the aircraft struck the seawall projecting into the San Francisco Bay, parts of the aircraft separated.


Due to an inadequately managed initial approach, the aircraft ended up above the desired glidepath. Because an inappropriate autopilot mode had been selected, the autothrottle no longer controlled airspeed. The crew were not fully aware of the situation, and the recovery attempt was initiated too late. A lack of systems understanding and over-reliance on automation were cited as major factors in this accident (NTSB, 2014).

Other notable, related and well-discussed accidents happened at Gottröra (1991), Nagoya (1994), Cali (1995), the South Atlantic Ocean (2009) and Amsterdam (2009), as well as the previously mentioned recent 737 MAX accidents (2018/2019).

Human (pilot) error was previously often cited as the primary cause of many airline accidents and incidents, and is still sometimes inferred, as seen in some of the wording used in the accident summary above. However, it has increasingly been recognized that other factors play a significant role in the accident/incident causality chain, and that people can cause or contribute to accidents, or mitigate the consequences, in several ways. ICAO (2013a) describes the evolution of aviation safety as consisting of three overlapping phases: the technical era (from the early 1900s until the late 1960s), the human factors era (from the early 1970s until the mid-1990s) and the current organizational era (from the mid-1990s and onwards). Relevant for the organizational era is that undesired factors may lie dormant in an organization for some time, so that their impact on the operational outcome is delayed. Reason (1990) calls these factors, unintentionally introduced by humans, latent conditions. The two accidents with the Boeing 737 MAX aircraft, and the publication of the subsequent accident report from the first crash in Indonesia, point to several interacting factors contributing to the first accident (NTSC, 2019). In this case, a powerful automated system, potentially not known to all pilots, failed due to a faulty sensor and worked against the pilots trying to save the situation. According to the Indonesian investigators, airlines and their flight crews had not received sufficient instructions on how to override the software in case of a malfunction. Here, a combination of design factors, lack of communication and training, as well as the pilots’ skills and knowledge, played a role in the outcome.

1.2 A socio-technical system

The air transportation system is a socio-technical system, characterized by being highly complex and comprised of several sub socio-technical systems (Harris & Stanton, 2010). In socio-technical systems, the chain of causality leading up to incidents or accidents is not easily understood, and previous approaches assuming understandable relationships between system components have proved oversimplified and insufficient as a means of understanding the operations of a socio-technical system. Rather, properties in complex systems arise from the interactions among the system components. Safety is seen as such an emergent property (e.g. Blom et al., 2016; Leveson, 2016). Consequently, an incident/accident evolves through the interactions and relationships between components, not necessarily because a component is broken (Dekker, 2011; Leveson, 2016).

One field in socio-technical research is cognitive systems engineering. It is concerned with “managing the complexity of cognitive work through design, based on empirical enquiry of how work is achieved in context” (Naikar & Brady, 2019, p.219, giving several references).


The Three Mile Island accident and the work of Jens Rasmussen are often referred to as central to the origin of the field (e.g. Klein, 2008). Compared to earlier socio-technical studies, e.g. Emery and Trist (1960), the research was now more focused on operational risk and safety than on the design of work functions and organizations. Jens Rasmussen, with a degree in electrical engineering, was recruited to design the control room of the nuclear research reactor at Risø, Denmark. He became increasingly interested in the interplay between the operators and the instrumentation, and in the operators’ reactions and behavior under abnormal conditions (Waterson et al., 2016). Later, Rasmussen addressed the deficiencies of early models of accident causation in a model of dynamic socio-technical systems. He argues that to understand risk in a socio-technical system, several system levels, ranging from legislators, managers and work planners to system operators, must be included, and that a system-oriented approach based on functional abstraction rather than structural decomposition is required (Rasmussen, 1997).

Figure 1.1. The socio-technical system involved in risk management (Rasmussen, 1997, p.185).

The system is influenced by public opinion, such as environmental concerns, by heavy competition, as well as by new technology enabling new ways of operating. At the same time, airline operations are heavily regulated and subject to regulatory oversight. “Many levels of politicians, managers, safety officers, and work planners are involved in the control of safety by means of laws, rules, and instructions that are formalized means for the ultimate control of some hazardous, physical process” (ibid, p.184). This view is captured in the well-known model of the socio-technical system involved in risk management (see Fig. 1.1 above).

1.3 The flight deck – also a socio-technical system

The term socio-technical system may be used for a part of the system at all of the various levels in Rasmussen’s model. System boundaries may also be drawn extending up or down, describing a different unit of analysis (system of interest). An aircraft is a socio-technical system operating within the larger aviation socio-technical system (Harris, 2013). In the 1980s, the concept of a Joint Cognitive System (JCS) was proposed as an improved analysis framework to understand how socio-technical systems work. This unit of study consists of both human and machine agents (Hollnagel & Woods, 1983; 2005; Hutchins, 1995). The core idea of a JCS is “how multiple human and machine roles interact and coordinate to meet the changing demands of the situations they face as they attempt to achieve multiple and conflicting goals” (Woods, 2016, p.5). The flight deck can be seen as a joint cognitive system, consisting of the pilots and the automated systems, where this system, and the natural and engineered environments/contexts in which they function, are studied as a single unit of analysis (Hollnagel, 2007). As in any system hierarchy, the system boundaries may also be expanded to include an even greater JCS including the cabin crew, planners, Air Traffic Control (ATC), other flights and so on.

Jens Rasmussen’s legacy also includes a framework for the analysis, design, and evaluation of complex socio-technical systems called cognitive work analysis (CWA) (Rasmussen, 1986; Vicente, 1999). CWA focuses on the constraints imposed on the workers of a system to achieve the desired system outcome. The focus on constraints acknowledges the importance of worker adaptation in an unpredictable environment (Naikar, 2017). Other approaches to socio-technical system analysis are STAMP (Systems-Theoretic Accident Model and Processes) and FRAM (Functional Resonance Analysis Method). With the STAMP model, Leveson has applied systems thinking and systems theory to safety. Compared to the Rasmussen model, STAMP creates a more inclusive model of accident causality and new hazard and risk analysis methods (Leveson, 2004; 2016). With FRAM, Hollnagel has proposed a method for modelling socio-technical systems that centers around the idea of resonance arising from variability in everyday performance (Hollnagel, 2012a). These models/methods largely correspond to a need to develop safety approaches that rely less on simplified linear causality models, accept unclear causality, and allow adaptability and the design of robust systems that may become more resilient in case of disturbances or unexpected events. Hollnagel et al. (2015) call this a move from Safety I, assuming clearer causality between various parameters and outcomes, to Safety II where causality is unclear, but adaptability is emphasized.

1.4 Airline performance measurement and flight crew training

Pilots operate in an environment in which there is great acceptance of having their performance monitored and measured during normal line operations, not only during simulator training and checking sessions. At recurring intervals, check pilots occupy the jump seat to monitor the crew’s performance. In addition, Flight Data Monitoring (FDM) programs capture enormous amounts of data, both to discover individual operational exceedances and to follow aggregated data trends, across an entire fleet, to discover unwanted operational drift. However, this data is only fully utilized (e.g. by analyzing the cockpit voice recorder) after rare severe events, such as serious incidents or accidents. It has been argued that incidents and accidents, when things go wrong, can be avoided by striving further to make things go right (Hollnagel, 2014). This suggests that safety could be improved further by understanding normal operations data, in addition to the relatively few accidents and incidents.
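To make the FDM idea above concrete, the following is a minimal, illustrative Python sketch of how exceedance detection and fleet-level trend aggregation could look. The parameter names, thresholds and the sample values are hypothetical assumptions for illustration only, not the monitoring logic or limits of any particular airline or FDM tool.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class ApproachSnapshot:
    """One recorded approach, reduced to a few illustrative FDM parameters."""
    flight_id: str
    month: str            # e.g. "2020-01"
    ias_at_500ft: float   # indicated airspeed at 500 ft AGL (knots)
    vapp: float           # target approach speed (knots)
    sink_rate_fpm: float  # rate of descent at 500 ft AGL (ft/min)

def is_exceedance(a: ApproachSnapshot) -> bool:
    # Hypothetical exceedance rules of the kind an FDM programme might encode.
    speed_deviation = a.ias_at_500ft - a.vapp
    return speed_deviation > 15 or speed_deviation < -5 or a.sink_rate_fpm > 1000

def monthly_exceedance_rate(approaches):
    """Aggregate individual events into a fleet-level trend (rate per month)."""
    totals, events = defaultdict(int), defaultdict(int)
    for a in approaches:
        totals[a.month] += 1
        events[a.month] += is_exceedance(a)
    return {m: events[m] / totals[m] for m in sorted(totals)}

if __name__ == "__main__":
    sample = [
        ApproachSnapshot("F1", "2020-01", 138.0, 132.0, 750.0),
        ApproachSnapshot("F2", "2020-01", 152.0, 132.0, 820.0),   # fast: exceedance
        ApproachSnapshot("F3", "2020-02", 136.0, 132.0, 1100.0),  # high sink rate
    ]
    print(monthly_exceedance_rate(sample))  # {'2020-01': 0.5, '2020-02': 1.0}
```

In this sketch, the individual exceedance check corresponds to the event-based use of FDM data, while the monthly aggregation corresponds to following trends across a fleet.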

Accidents and serious incidents are extremely rare events. This means that the probability that the next accident waiting to happen will be avoided by learning from the previous ones is small, which shows the need to find other ways to understand the system and improve safety. Consequently, airlines use Safety Performance Indicators, SPIs, to observe operations and monitor for slow trends or sudden shifts in performance during the, usually extended, periods without accidents and serious incidents. Essentially, what is monitored is the outcome of the flight crews’ actions and interactions with the flight deck automation (the flight deck JCS) in the context of the environment and the extended JCS.

Previous flight crew training activities were largely prescriptive: specified training activities were trained at certain intervals. However, the industry is currently moving towards a performance-based regulatory environment, where training activities are based on actual operating conditions and identified training needs, by means of a required measuring capability (e.g. EASA, 2018a). At the same time, airlines use data to be proactive or predictive rather than reactive to past events. One way of trying to be more predictive is to identify potential unwanted operational drift by looking for “clues” in data gathered from normal operations (Rasmussen, 1997; Snook, 2000; Dekker, 2011). These clues are important input when adjusting flight crew training or procedures, or when selecting focus areas. However, as many of these indicators are outcome related, such as the number of runway incursions, the number of unstabilized approaches etc., they provide little deeper insight into the pilot-automation relationship and the potential pitfalls that may or may not be present (Dekker, 2000). An illustrative sketch of how such an outcome-based indicator can be monitored for drift is given below.
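The following is a minimal Python sketch of how an outcome-based SPI, here a hypothetical monthly rate of unstabilized approaches per 1,000 approaches, could be monitored for slow drift or sudden shifts. The alert rule (a simple comparison against the mean and standard deviation of a baseline window) and all numbers are illustrative assumptions, not a prescribed or regulatory method.

```python
import statistics

def spi_rate(events: int, approaches: int) -> float:
    """Outcome-based SPI: unstabilized approaches per 1,000 approaches."""
    return 1000.0 * events / approaches

def drift_alerts(monthly_rates, baseline_months=6, k=2.0):
    """Flag months where the SPI deviates more than k standard deviations
    from the preceding baseline window - a crude stand-in for the trend
    monitoring an airline safety department might perform."""
    alerts = []
    for i in range(baseline_months, len(monthly_rates)):
        window = monthly_rates[i - baseline_months:i]
        mean, sd = statistics.mean(window), statistics.stdev(window)
        if sd > 0 and abs(monthly_rates[i] - mean) > k * sd:
            alerts.append(i)  # index of the month that shifted
    return alerts

if __name__ == "__main__":
    # Fictitious monthly data: (unstabilized approaches, total approaches).
    data = [(12, 4100), (9, 3900), (11, 4200), (10, 4000),
            (12, 4300), (11, 4100), (24, 4200)]  # last month drifts upward
    rates = [spi_rate(e, n) for e, n in data]
    print(drift_alerts(rates))  # -> [6], the month with the sudden shift
```

As the surrounding text notes, such an indicator only flags that an outcome rate has changed; it says nothing in itself about why the pilot-automation interaction produced that outcome.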

Through airlines’ performance measurement systems, a large amount of performance data is collected. However, little of that data is in a format immediately useful for traditional studies of complex socio-technical systems, or for studies of the flight deck JCS. Also, some argue that industrial constraints do not allow the extensive analysis associated with CWA of complex systems (Bodin, 2016). Therefore, approaches for scaling CWA analysis to smaller projects could give better opportunities to use the CWA framework. However, when doing so, there is a risk of losing the overall picture of the socio-technical system being analyzed. Other studies have explored possibilities to expand the FRAM framework, combining it with other approaches while trying to maintain the systemic perspective (e.g. Adriaensen et al., 2019). Despite these alternative approaches, this thesis argues that there is still room for improvement with the traditional approach and questions whether the available airline data and performance measurement systems are fully utilized. So, the question is: how can improved data utilization contribute to improving the flight crews’ skills and knowledge, both at an individual level and as team performance, in a heavily automated environment?

To meet future challenges and the expected demand for air travel, several high-level goals related to capacity, safety, environmental impact and efficiency have been set by the European Commission for a Single European Sky (SESAR, 2020). At the same time, airlines are working under immense competition, pressured to reduce costs further to stay in business. Against that backdrop, using Rasmussen’s model of a socio-technical system involved in risk management as the framework, this thesis explores the dynamics of the aviation system, primarily from the perspective of the flight crew and their automated work environment.

1.5 Research question

It is tempting to state that the purpose of this research is to improve flight safety. Naturally, much, if not all, of the research conducted in relation to this thesis has been performed in the hope that what is learned will somehow contribute to improved flight safety levels in the future. However, clear relationships between the research conducted and increased flight safety levels would be difficult, if not impossible, to validate. Therefore, the purpose of this research is to increase knowledge about how training content and learning opportunities for flight crew relate to airline performance monitoring and measurement processes, given a highly automated, dynamic environment. Against this background, the following research question is explored in this thesis.

RQ: What are the barriers and potential for training and airline performance measuring processes to support the flight crew for the operation of the highly automated aircraft?

1.6 Thesis scope/delimitations

Although the aviation system is a large, complex, socio-technical system, the system of interest in this thesis is the flight crew and their work environment on the flight deck, and how this work relates to and interacts with the flight operations, training, and safety departments. The aviation system at large, especially from a technological system design perspective, is taken as given, and discussions on human-centered system design etc. are not included.

1.7 Timeline – Research overview

The following figure shows the main phases of the research conducted.

Figure 1.2. Research timeline.

The empirical data of the thesis, presented in papers 1-6, are based on two research projects. Project HILAS (Human Integration into the Life-cycle of Aviation Systems) was an FP6 EU-funded project aiming at integrating human factors knowledge into the life cycle of aviation (HILAS, 2010). The research aimed at developing new technologies as well as new operational and management concepts and processes, including new capabilities to model operational systems and to gather and integrate a range of data sources to enable deep systemic analysis (McDonald et al., 2009). Although the project comprised several research strands, my participation related to the flight operations environment, and especially to investigating possible ways to further utilize airline data and the performance measurement systems, as well as to developing and implementing a safety management system.

Project Brantare (funded by Trafikverket through KTH/CSA) addressed the potential to reduce noise on the ground under the approach sector of arriving aircraft, including potential changes to the flight crews’ operational handling of the aircraft. The opportunity to use flight data from a participating airline was utilized to explore new learning opportunities for pilots.

1.8 Thesis structure

• Chapter 1 describes the overall problem and outlines the structure of the thesis.
• Chapter 2 applies Rasmussen’s socio-technical system model to airline risk management. By doing so, this chapter elaborates on and puts the pilot’s role in the airline socio-technical system in perspective.
• Chapter 3 provides a theoretical framework used for analysis of the aviation system model, including aspects of training and performance measurement in focus of this research.
• Chapter 4 describes the methods used for data collection and analysis.
• Chapter 5 summarizes the results from the appended papers.
• Chapter 6 includes a synthesis of the studies in relation to the analysis of the system and the research question, including barriers for system improvements.
• Chapter 7 includes a critical review of the research conducted, discusses the obtained results, including potential for system improvements, generalizability, and limitations as well as future research ideas. This chapter also concludes the thesis.
• Chapter 8 contains the references.
• Appendices include the six appended papers (appendix A-F) as well as interview questions (appendix G).



2 A pilot and airline perspective in the aviation system

In this chapter, a dynamic model of socio-technical systems involved in risk management is applied to aviation and described from an airline system perspective. The chapter is primarily based on the work conducted in relation to the HILAS project, but it also considers the regulatory requirements setting the boundaries for flight operations.

Safety is a core value and part of the business model of any airline. A socio-technical system is an open system, sensitive to both internal and external disturbances, distractions, and challenging situations. To manage safety, every airline needs to have an effective risk management system in place that controls risk at an acceptable level, avoiding incidents and accidents while protecting company interests and its production (Reason, 1997).

The key actors of the air transportation industry are the airlines, the airports, Air Navigation Service Providers (ANSPs) and the regulators. Other actors are the aircraft manufacturers, maintenance and repair organizations (MROs), training organizations, financial institutions etc. Airlines interact with other actors in the system as well as with other airlines. An airline today may very well consist of several companies divided into several organizational layers.

2.1 System of interest – perspective and boundaries

When applying a model, it is important to consider its applicability to the system of interest. Hence, the dynamic model of socio-technical systems involved in risk management, proposed by Rasmussen (1997) and applied in this thesis, needs to be adapted to the aviation system with the airline as the system perspective. The following sections summarize the various levels included in the model. First, a short description of the higher levels is given. Subsequently, a more in-depth description follows of the areas more immediately relevant for the pilots in their daily operational environment. The hazardous process of concern is the flight process.

Although the higher levels in Rasmussen’s model may, to a large extent, be similar for different “low level hazardous processes in the same domain”, the lower levels vary depending on which process is described. The flight process, which is the hazardous process of concern in this thesis, is not the same as the aircraft maintenance process or the air traffic control process, although there are obvious interactions. Even so, when the flight process is selected, one needs to decide from which perspective the process is observed: that of the flight crew, cabin crew, dispatchers etc. Below, the Rasmussen risk model is applied to the flight process from the perspective of the flight crew. Emphasis in this chapter is on factors affecting the flight crew and their automated work environment, and on the levels in the immediate proximity of their level in the model. Also, although the focus in this chapter is on the levels immediately surrounding the pilots in the model, these levels and interactions are dependent on actions and interactions taking place at higher levels in the model.

A version of Rasmussen’s socio-technical model involved in risk management, adapted for aviation and an airline, with focus on the pilots and the flight deck’s JCS, is shown in Fig. 2.1 below.


Figure 2.1. The socio-technical system involved in airline risk management (adapted from Rasmussen, 1997). The red circled area shows the area of particular interest in this thesis.

2.2 Government level – EU & National governments

The aviation system is assumed to be set for continued global expansion. The full impact of the recent disruption caused by the pandemic is not integrated into this analysis. The International Air Transport Association, IATA, forecasts that the number of passengers will double by 2036 (IATA, 2018). Hence, aviation will face large challenges when maximum capacity limits are reached because of apparently ever-increasing traffic demands. The International Civil Aviation Organization, ICAO, formulated its vision of the future air traffic system as early as the beginning of the 1990s, under the Communication, Navigation and Surveillance/Air Traffic Management (CNS/ATM) and Future Air Navigation System (FANS) acronyms (Galotti, 1997). The goal would be a free-flight concept where pilots and airlines are free to select, and change during flight, their flight path as they see fit. The target date for global implementation of this concept was initially set to the year 2010. Although still not implemented, the vision remains. In Europe, this is addressed in the Single European Sky concept and the Single European Sky ATM Research project, SESAR.


2.2.1 A Single European Sky, SES

In 2004, the European Union and Eurocontrol formulated the concept of a “Single European Sky”, SES, an ambitious initiative launched to reform the architecture of European Air Traffic Management, ATM. It proposes a legislative approach to meet future capacity and safety needs at a European rather than a local level. “Flight Path 2050” is a document which presents the ultimate vision (European Commission, 2011).

Key objectives of SES are to restructure European airspace as a function of air traffic flows, to create additional capacity and to increase the overall efficiency of the air traffic management system. The European Commission set challenging goals in relation to these objectives to be reached by 2020. The high-level goals, set in relation to a 2004/2005 baseline, are described below (SESAR, 2009):

• Enable a 3-fold increase in capacity, which will also reduce delays both on the ground and in the air.
• Improve safety by a factor of 10.
• Enable a 10% reduction in the effects flights have on the environment.
• Provide ATM services to the airspace users at a cost of at least 50% less.

Now that 2020 has arrived, the goals have been revised, and in the latest edition of the corresponding ATM Master Plan (SESAR, 2020) they are no longer as explicit. However, the original goals are still referred to in a note.

2.2.2 Single European Sky ATM Research (SESAR)

The SESAR project was launched in 2004 as the technological pillar of the Single European Sky. Its role is to define, develop and deploy what is required to increase the ATM system performance. Established in 2007 as a public-private partnership, the SESAR Joint Undertaking (SESAR JU) is responsible for the modernization of the European ATM system by coordinating and concentrating all ATM-relevant research and innovation efforts in the EU.

According to the ATM Master Plan, the vision “builds on the notion of trajectory-based operations and relies on the provision of air navigation services (ANS) in support of the execution of the business or mission trajectory – meaning that aircraft can fly their preferred trajectories without being constrained by airspace configurations. This vision is enabled by a progressive increase of the level of automation support, the implementation of virtualization technologies as well as the use of standardized and interoperable systems” (SESAR, 2015, p.8). To this end, SESAR has established six key features to achieve such an ATM paradigm shift:

• Moving from airspace toward 4D trajectory-based operations.
• Traffic synchronization.
• Network collaboration and dynamic airspace management.
• SWIM (System Wide Information Management), providing all parties involved access to up-to-date key flight info.
• Airport integration and throughput.
• Conflict management and automation.


The goal of the second SESAR Research and Innovation program, SESAR 2020, is to contribute to establishing the envisioned performance benefits (SESAR, 2018). This program includes areas such as automation, autonomy, information management etc.

NextGen (the US ATM modernization project) and SESAR have published a joint State of Harmonization Document (now in its third edition), providing a high-level summary of the current state of progress towards achieving harmonization and the necessary level of interoperability between the two programs. Both the SESAR and NextGen visions propose a new ATM system obtained by evolving the current situation from airspace-based operations toward trajectory-based operations, improving the aircraft trajectory planning processes to reduce tactical interventions, and building up a more distributed operational scenario (SESAR/FAA, 2018).

2.3 Regulators and associations level – EU & national legislation

Aviation is heavily regulated. Regulations cover almost all aspects of the industry, from the construction and maintenance of aircraft to how an airline should be organized. The International Civil Aviation Organization, ICAO, established in 1944, is a specialized agency of the UN. At the end of 1944, the Convention on International Civil Aviation (the Chicago Convention) was signed by 52 member states. The Convention establishes e.g. rules of airspace, aircraft registration and safety, and has been revised several times. In addition to other aviation development objectives, ICAO works with the current 192 member states and industry groups to develop the international civil aviation Standards and Recommended Practices (SARPs). These SARPs are contained in annexes supporting the Convention and are in turn used by member states and transformed into local regulations or laws to ensure international harmonization and standardization. In Europe, the European Union Aviation Safety Agency, EASA, is responsible for civil aviation safety. Among other things, EASA drafts and advises on EU safety legislation. Other associations, such as the International Air Transport Association, IATA, representing the interests of airlines, various pilot unions etc., can also be included in the airline socio-technical system model.

In the ICAO Safety Management Manual, SMM, an overview of fundamental aviation safety management concepts and practices is provided (ICAO, 2013a). Aviation safety is defined as “the state in which the possibility of harm to persons or of property damage is reduced to, and maintained at or below, an acceptable level through a continuing process of hazard identification and safety risk management” (ibid, p.2-1). It is recognized that the aviation system is not free of hazards and associated risks, and that human activities and/or human-built systems cannot be completely free of errors. Consequently, safety risks must be continuously mitigated and managed.

The management of risk is reflected in regulations and international standardization and described in the ICAO SMM, where the purpose and content of an airline Safety Management System (SMS) are described. The system is designed to assure the safe operation of aircraft through effective management of safety risk. Safety risk is defined as “the projected likelihood and severity of the consequence or outcome from an existing hazard or situation” (ibid, p.2-27). The term safety risk management is used to differentiate this activity from the management of other risks: financial risk, legal risk, economic risk etc. The fourth edition of the SMM places stronger emphasis on the relationship between the various types of risk.

ICAO describes the further improvement of aviation’s safety performance to be achieved through several activities (ICAO, 2016):
1. The development of global strategies contained in the Global Aviation Safety Plan and the Global Air Navigation Plan.
2. The development and maintenance of Standards, Recommended Practices and Procedures applicable to international civil aviation activities, which are contained in 16 Annexes and 4 PANS (Procedures for Air Navigation Services).
3. The monitoring of safety trends and indicators.
4. The implementation of targeted safety programs to address safety and infrastructure deficiencies.
5. An effective response to disruption of the aviation system created by natural disasters, conflicts or other causes.

Also, ICAO and the Commercial Aviation Safety Team (CAST) have jointly set up the CAST/ICAO Common Taxonomy Team (CICTT) to develop common taxonomies and definitions for aviation accident and incident reporting systems. With standardization and a common language for accident and incident reporting, the idea is to enhance the aviation community’s ability to focus on common safety issues.

2.4 Company level – The airline

Earlier, most airlines were fully or partly state owned or sponsored, and they were heavily protected from competition. Deregulation and the open skies agreements increased competition. Fluctuating fuel prices, lower air fares and other events have totally changed the airline business. Bankruptcies, mergers and the introduction of low-cost airlines have also transformed the sector, further increasing competition. Large differences exist in how airlines conduct their business. Some airlines keep many functions in house, including e.g. ground services, while others keep their core company to a minimum, outsourcing many functional parts. Some functions are however mandated to stay in-house. Apart from an accountable manager, the airline shall nominate persons responsible for flight operations, crew training, ground operations and continuing airworthiness. A safety manager and a compliance manager should also be appointed.

The airlines’ desire for flexible forms of employment has led to an increase in staffing companies. According to a study conducted by the University of Gent, indirect employment contracts account for about one fifth of the current European pilot workforce (Jorens et al., 2015). Prospective pilots undertake training at Approved Training Organizations (ATOs) to obtain the licenses and ratings required to work as a commercial airline pilot.

2.4.1 Safety Management System requirements

The European Union Aviation Safety Agency (EASA) organizational requirements include general requirements for management systems and are designed to embed ICAO Annex 19 in such a way as to ensure SMS compatibility with existing management systems and to encourage an integrated management system (EASA, 2018b). The goal of the system is to continuously improve safety by proactively containing or mitigating risks before they result in accidents and incidents.

The four main components of the SMS can be broken down into 12 elements:
• Safety policy and objectives.
  o Management commitment and responsibility.
  o Safety accountabilities.
  o Appointment of key safety personnel.
  o Coordination of emergency response planning.
  o SMS documentation.
• Safety risk management.
  o Hazard identification.
  o Safety risk assessment and mitigation.
• Safety assurance.
  o Safety performance monitoring and measurement.
  o The management of change.
  o Continuous improvement of the SMS.
• Safety promotion.
  o Training and education.
  o Safety communication.

In particular, the SMS, as mandated for an airline, shall:
• identify safety hazards,
• ensure the implementation of remedial action necessary to maintain an agreed safety performance, as well as
• provide for continuous monitoring and regular assessment of the safety performance.

To be able to meet those requirements, performance measuring and monitoring activities need to be refined and elaborated. This must include a thorough assessment of the human aspects of the operation of technologies and automated system performance (ICAO, 2013a). The safety assurance activities should include the development and implementation of changes and corrective actions identified in response to findings of systemic deficiencies with a potential safety impact.

The risk management process as described in the ICAO SMM can be seen in Fig. 2.2 below.

Figure 2.2. The risk management process (ICAO, 2013a, p.5-16).

Although a risk management system is mandated by regulation, it is also operationally valuable for airlines to have such a system in place to be able to make conscious and deliberate company decisions regarding operations, resource allocation etc. By identifying hazards, collecting and analyzing data, and continuously assessing safety risks, the goal is to improve safety continuously by proactively containing or mitigating risks before they result in accidents and incidents. The airline should be able to manage system change, continuous improvement of the SMS itself, as well as impacts on training, education and safety communication. Another essential subcomponent is the safety performance monitoring and measurement requirement. The system should contain “measurable performance outcomes to determine whether the system is truly operating in accordance with design expectations and not simply meeting regulatory requirements” (ICAO, 2013a, p.5-30). Safety performance indicators (SPIs) should be defined and used to “monitor known safety risks, detect emerging safety risks and to determine any necessary corrective actions” (ibid, p.5-30).

2.5 Management level – Flight Ops, Training & Safety departments

In this thesis, the management-level departments of primary interest and relevance for flight operations are, in addition to the flight operations department, the training and safety departments. In addition to key personnel, such as Nominated Persons, NPs, regulatory requirements specify certain central functions as well as some of the required processes and activities to be conducted. These departments and the respective NPs are, among other things, responsible for defining the Standard Operating Procedures (SOPs) and for analyzing operational performance, adjusting procedures and training programs as deemed required. Supporting this are performance measurement processes such as the incident and accident reporting system and the Flight Data Monitoring (FDM) system. In addition, each production area has additional operations managers and safety and quality personnel.

2.6 Staff level – Flight Crew

The actors at staff level of primary interest in this thesis are the flight crew, i.e. the pilots. One of the prerequisites to be able to work as a pilot is holding a license valid in the country of employment. Other factors are also important for getting a job in a major airline, such as personal characteristics, the amount of flying time experience and other forms of higher education. In the context of this research, the focus is on the European environment, where an EASA license is required. A license is issued by the respective national aviation authority in accordance with the EASA standards.

Flight Crew Licensing (FCL) regulations have existed for more than 100 years. EASA’s FCL regulation describes the current European licensing system in seven annexes. Annex I (Part-FCL) deals with flight crew licensing requirements for pilots flying EASA aircraft. To be able to work as an airline pilot in Europe, under the EASA umbrella, a Commercial Pilot’s License (CPL) is required. Similarly, to be a commander on larger aircraft an Air Transport Pilot License (ATPL) is required. This requires a minimum number of flying hours in addition to successful completion of ATPL theory training. The ATPL can be “frozen” if the theory is passed but the minimum number of flying hours has not been reached. The license must be valid for the class of aerial vehicle to be flown (e.g. aircraft), and an instrument rating is required to enable flying in all meteorological conditions. On top of this, a multi-pilot operational qualification and a type rating on the specific aircraft type to be flown are also required.

ICAO (2013b) describes the eight core competencies of flight crew as follows:
1. Application of procedures.
2. Communication.
3. Flight path management — automation.
4. Flight path management — manual control.
5. Leadership and teamwork.
6. Problem solving and decision making.
7. Situation awareness.
8. Workload management.

Competency is defined as “encompassing the technical and non-technical knowledge, skills and attitudes to operate safely, effectively and efficiently in a commercial air transport environment” (ICAO, 2013b, p.I-3-2). To achieve those competencies, some of the main training stages are described below:
• Initial basic training, “learning how to fly”.
• Multi-Crew Cooperation (MCC) training.
• Type rating training, learning how to handle the various systems of a specific aircraft type.
• Initial airline (conversion) training, to adapt to a particular airline’s specific set of operational procedures.
• Yearly recurrent training, to maintain proficiency and train any new procedures and/or system changes implemented.

There are several career paths that lead to a position as a pilot in a major airline, for example straight from flight school or, more commonly, through employment at smaller airlines. There are also more unconventional employment arrangements where the employee “pays to fly” to get a desired type rating on an attractive aircraft type. Once employed by an airline, some training could take place internally, e.g. recurrent training, and other training could be conducted by an external training provider, e.g. a type rating when changing aircraft type. Several airlines manage their own initial training programs.

Individuals who want to become professional pilots typically undertake full-time integrated training, or modular step-by-step training, at an Approved Training Organization, ATO, until obtaining the various certificates and ratings (and flying experience) required to apply for a job at an airline. Several training elements must be completed, including theoretical training and examinations, flight instruction (aircraft and simulator training) and skill tests.

Table 2.1. The main career paths (in Europe) to required licenses, including a “frozen” ATPL (Air Transport Pilot License).

Integrated ATPL training – This path takes the student from zero experience up to a frozen ATPL in approximately 12-18 months. If self-sponsored, this is a costly, yet quick, way to obtain the required license. The training could be self-, airline- or government-sponsored.

Modular ATPL training – The modular training path could prove a cheaper way to reach the ATPL than the integrated ATPL route. The training could potentially be combined with a normal job, where training is conducted during spare time. This increases time to completion compared to an integrated program. This training setup is usually self-sponsored.

Integrated MPL training – The Multi-crew Pilot License training setup comprises less time in an aircraft than a traditional integrated ATPL program. However, there will be more simulator training time and initial line training at the host airline. This training could be self-, airline- or government-sponsored.

The MCC training aims to give single-seat pilots the team skills, such as communication and coordination skills, necessary to safely operate in a more complex, multi-crew aircraft environment. The Multi-Crew Cooperation course and jet familiarization training are intended to bridge the gap and ease the transition from initial training to the airlines. Ryanair introduced and endorsed the Airline Pilot Certificate (APC) course to increase the proportion of prospective new pilots who pass the initial screening.

The Multi-crew Pilot License, MPL, and the associated training is a newer form of training aimed at preparing pilots for multi-crew work on the modern flight deck. Each MPL course developed is tied to a host/partner airline, where initial line training will be conducted. The MPL can later be converted to an ATPL, in the same way as a CPL, once the requirements are fulfilled. In summary, there are many ways to become a pilot, although the same licenses are required. Once training is complete, it may be difficult to immediately find a job as a pilot. Consequently, a considerable amount of time may have passed by the time a job is eventually secured.

2.7 Work level – The flight deck environment and the role of the flight crew

The work level represents the flight deck environment and the role of the flight crew. The pilots’ responsibility, and especially the commander’s, is not limited to flying the aircraft. Other responsibilities include weather considerations and fuel upload decisions, handling of unruly passengers, acceptance of the aircraft with respect to various technical deficiencies, performance of various aircraft inspections, ground supervision activities related to loading etc. During a flight, the pilots execute the air traffic controllers’ directives and manage the flight path. Today, almost all activities on the flight deck involve pilots interacting with some form of automated aircraft system.

As described in the introduction, flight deck automation has changed significantly, not only over the period encompassing the work for this dissertation but over the last 50 years: from the early add-on autopilot to the current fully integrated flight deck, comprising a Flight Management System (FMS) interacting with the autopilot and auto-thrust, calculating optimum flight path profiles etc. Information systems have improved vastly, and new communication tools have become available. Automation is also used for controlling several aircraft systems, such as maintaining cabin pressure and temperature.

With the use of the FMS, the aircraft can fly a preprogrammed flight path where little intervention from the pilots is required during large parts of a flight. On some aircraft, the pilots do not directly control the control surfaces (such as the elevator, rudder and ailerons) but rather interact with a flight control computer that in turn sends signals to the control surfaces specifying the required surface deflection to achieve the desired flight path (fly-by-wire). Some flight control computers incorporate constraints, or boundaries, limiting pilot inputs to ensure that the aircraft stays within a safe flight envelope with regard to speed, attitude etc. (Pritchett, 2009). With several levels of automation available, airlines are mandated to provide guidance on what level of automation to use in various phases and situations during a flight. Still, manual flying is an essential pilot skill: the autopilot is not used for take-offs and usually not during landings, unless for training purposes or when the weather conditions require its use.
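
The idea of flight envelope protection constraining pilot inputs can be illustrated with a minimal sketch. The limits, the scaling rule and all parameter names below are hypothetical and chosen for illustration only; they do not represent any manufacturer’s actual control laws.

    from dataclasses import dataclass

    @dataclass
    class FlightState:
        airspeed_kt: float   # current indicated airspeed in knots
        pitch_deg: float     # current pitch attitude in degrees

    # Hypothetical envelope boundaries, chosen only for illustration.
    MIN_SPEED_KT = 130.0
    MAX_SPEED_KT = 350.0
    MAX_PITCH_DEG = 25.0
    MIN_PITCH_DEG = -15.0

    def protected_pitch_command(pilot_cmd_deg: float, state: FlightState) -> float:
        """Clamp and scale the pilot's pitch command to the assumed envelope limits."""
        cmd = max(MIN_PITCH_DEG, min(MAX_PITCH_DEG, pilot_cmd_deg))
        # Near or below the low-speed boundary, progressively reduce nose-up authority.
        if state.airspeed_kt < MIN_SPEED_KT and cmd > 0:
            cmd *= max(0.0, state.airspeed_kt / MIN_SPEED_KT)
        # Above the high-speed boundary, progressively reduce nose-down authority.
        if state.airspeed_kt > MAX_SPEED_KT and cmd < 0:
            cmd *= MAX_SPEED_KT / state.airspeed_kt
        return cmd

    # Example: a large nose-up command at low speed is both clamped and scaled down.
    print(protected_pitch_command(30.0, FlightState(airspeed_kt=120.0, pitch_deg=5.0)))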

The sophistication of the instruments has developed from the basic mechanical instruments to the current moving map and multi-functional displays. Fig. 2.3 shows how some of these displays, such as the map displays, primary flight displays, navigation displays as well as a system display, may be configured in a common aircraft (Airbus A320). Other panels provide the pilots with various means of interaction with the aircraft, for flight path management as well as control of various aircraft systems. With increased monitoring capabilities regarding the status of various aircraft systems, new types of warning and information systems have been introduced. In fact, the amount of information available about various system parameters is so vast that careful consideration about what to present to the pilots is required (Boy, 2011). Various types of information are available, e.g. aiding the pilots with their spatial orientation or assisting with fuel and time predictions. Warnings and alerts may be triggered if certain parameters are exceeded and pilot action is required. Different design philosophies exist, e.g. regarding how to present information, the use of auditory alerts, whether to use tactile information or not etc. However, on a fundamental level, most of the basic information regarding airspeed, altitude and attitude is presented in a similar way in most aircraft types.

Figure 2.3. A320 flight deck.

As technology has improved, so has the navigational precision of the aircraft. Today, aircraft primarily rely on inertial reference systems and GPS positioning, rather than following ground-based radio beacons. This has made possible new types of approaches that are not reliant on ground-based navigation aids. Conducting approaches based on an aircraft’s navigational performance capabilities requires certain aircraft functionality and pilot training. Furthermore, rulemaking activities must keep up with technological development to allow the use of new aircraft and system capabilities for system efficiency and safety improvements. Other systems further supporting pilots in their work are Head Up Displays, Electronic Flight Bags (EFBs) for documentation, decision support and performance calculations, Controller Pilot DataLink Communication (CPDLC), Aircraft-Air Traffic Control (ATC) Collaborative Decision Making (CDM) procedures coordinating a departure etc. Many of these changes have distanced the pilots from direct control of the operation of the aircraft. Instead, several intermediate layers of automation have been introduced (e.g. Billings, 1996) (see Fig. 3.2).

2.8 The hazardous process level – the flight

The flying, i.e. the actual flight operational process, represents the hazardous process at the bottom of the system model. Flying is conducted in an environment exposed to hazards that must be avoided or handled to operate with an acceptable level of risk. Problems can occur in any situation, e.g. with a fully functional aircraft (not considering design or other non-proximal issues affecting system functionality) (e.g. San Francisco 2013), or in situations with aircraft problems due to system malfunctions for various reasons (such as the Hudson River ditching 2009). A flight consists of several distinct phases: pre-flight planning and preparations, the actual flight, and post-flight activities. A short description of the flight process from a flight crew perspective follows below.

Pre-flight activities normally include weather and fuel considerations, a check of aircraft status in relation to any technical malfunctions affecting operations, and special considerations at the departing, arriving and potential alternate airports, including a check of the status of navigation equipment, work in progress etc. that might affect the flight. In case of challenging weather conditions, e.g. a high landing weight in combination with a short, slippery landing runway, several alternate landing airports may be required. Such considerations are usually made prior to arriving at the aircraft but may also take place in the aircraft in case of consecutive flights, where onwards planning and weather updates take place during the duty period.

The actual flight can also be divided into several main phases: flight preparation, push-back if parked at a gate, engine start, taxi, takeoff, climb, cruise, descent, approach, landing, taxi in, parking and after-parking procedures. These phases can be broken down further depending on aircraft type and company procedures. Other phases exist, such as go-around and diversion. Some of these phases contain procedures with special/unusual operations requiring specific training, certain aircraft equipment and company approvals. Various phases require the use of different equipment, such as manuals or an EFB for take-off and landing calculations. Post-flight duties may include debriefings and reporting.

Apart from challenges related to weather, such as thunderstorms, icing, turbulence, slippery runway conditions and crosswinds, other environmental challenges are related to birds, volcanic ash clouds etc. Further challenges concern keeping up with changes to operational procedures and understanding and operating the automated systems, including new systems introduced in the aircraft type currently operated or when training to operate another aircraft type. Any non-normal situation must be managed. Such situations are related not only to technical malfunctions but also include unruly or sick passengers and airport closures. Other factors to consider are time-keeping pressures, pushing to complete a flight, fatigue/scheduling, inexperienced pilots, the flight deck work environment, the authority gradient between an experienced commander and a newly hired first officer etc.

Naturally, all levels in a company are concerned with the safety of flight. This can relate to the design of procedures, the setting of limitations, as well as operational support staff providing information. See further Rasmussen’s concept of environmental stressors in relation to risk management in the sections below.

2.9 Environmental stressors

2.9.1 Changing political climate and public awareness

Many countries in the world show an increased demand for air travel. At the same time, in other countries, environmental concerns create increased pressure on airlines to reduce their environmental footprint, by offsetting emissions and/or increasing operational efficiency through revised, more fuel-efficient operational procedures or new, more efficient aircraft. The public also has preferences for low-cost travel, which links to another area of stressors: market and financial pressure.

2.9.2 Changing market conditions and financial pressure

The main segments of the industry must deal with problems related to a high fixed-cost structure, capital-intensive investments in long-term assets, and high wages. In 2018, more than 4 billion passengers were handled by the airline industry globally (ICAO, 2020). Profitability is generally low despite significant long-term sector growth of over 5% annually.

The industry is also pressured by conflicting interests, e.g. governments promoting more transportation possibilities to increase national tourism and create business opportunities, while at the same time having to balance increasing environmental concerns and land-use interests close to airports.

According to Boeing (Boeing, 2017), some 500 000+ new pilots will be needed over the coming 20 years. A significant portion of that pilot requirement will be in Asia. However, considering the ageing pilot corps in Europe and the US, these regions will also have a high demand for new pilots. Fast-expanding airlines will need experienced pilots capable of, and permitted to, start in the left seat as commanders, further increasing the pressure on training providers to supply qualified pilots to fill the first officer position in the right seat.

Safety is one of the main goals of the aviation system, but it is closely linked to other goals, for example the success of the companies operating within the system. Airlines operate in a context of intense competition, low-cost models and ever-increasing financial pressure, and face a huge challenge in meeting the demand for increased productivity and efficiency without compromising safety. As mentioned earlier, some airlines hire their pilots through external contractors. As stated in the final conclusion by Jorens et al. (2015), regardless of the form of employment, the biggest threat to safety “is the management style being too focused on cost reduction, regardless of its consequences. Such management style is incompatible with rules and regulations on FTLs, Crew Resource Management, Safety Management Systems and a Just Culture. 'Business models' and management styles that involve a 'blame culture' and are aimed at or result in crew members not reporting or being afraid to report safety issues or pilots not acting on pilot authority in situations where such action is called for, are incompatible with such safety provisions” (ibid, p.270).

2.9.3 Changing competency and levels of education

As pointed out by ICAO, weak management may see training as an expense rather than an investment in the organization’s future (ICAO, 2013a). Today, with few accidents and severe incidents, this is the backdrop against which training activities above the minimum required must be justified and take place. For many years, despite aircraft development and increased aircraft system complexity, airline pilot training hardly changed. Eventually, as the flight deck environment became more automated, the need to update the training setup came into focus. For example, Lehman (1998) pointed out that the initial, or ab initio, training taking place was often based on outdated curricula and on aircraft without a glass cockpit, and that the step from the initial stages of training into the modern multi-crew flight deck environment at a major airline was huge. Others have highlighted that, in conflict with advances in aircraft technology, training programs have generally declined in relevance, volume and duration (e.g. Chialastri, 2012). This occurs at the same time as pilot experience levels are lowered as the industry grows.

As older initial training aircraft are being phased out, the basic training aircraft are gradually becoming more sophisticated, with EFIS (Electronic Flight Instrument System) and autopilots now part of the training environment. At the same time, full flight simulators are also becoming more sophisticated, especially the visual systems. Other lower-cost alternative training devices, still with high-fidelity visual systems, have become available, allowing new forms of training. This has paved the way for changes such as the MPL program. According to some, the introduction of the MPL training setup is the first significant change in pilot training from what was outlined in the Chicago Convention in 1948, allowing credit for more simulator time over flight time in a real aircraft (e.g. Dahlstrom & Wikander, 2014). Some, e.g. airline pilot associations, however claim that the MPL program is introduced primarily to reduce training costs. The Multi-crew Pilot License training is competency-based, allowing a student to move on to the next level of training once a certain degree of competence/skill has been reached, rather than requiring that a specific number of hours be trained according to a fixed syllabus. Despite these changes, problems persist, and newly graduated, fully licensed young pilots to a large extent fail the initial tests at airlines. David Learmount captures the problems in the title of an article: “Licensed to fly but not up to the job” (Learmount, 2016).

2.9.4 Fast pace of technological change

New technology is continually being introduced to meet the challenges the aviation system faces. In the report “Flight Fleet Forecast 2016-2035” by Flight Ascend Consultancy (Flightglobal, 2016), it is estimated that approximately 42 800 new commercial jet and turboprop aircraft will be delivered into service within the 2016-2035 period. Within the same timeframe, around 74% of the current worldwide fleet of passenger jets, freighters and turboprops is expected to be removed from service. In total, the global commercial aircraft fleet in service is expected to grow to close to 50 000 aircraft in 2035, an increase of over 80% over today’s numbers. In other words, less than 15% of the aircraft in service today will constitute a part of the worldwide fleet in 2035. There is thus a continuous substitution of aircraft over time, leading to a diversity of aircraft capabilities operating in the same air transport system.
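
As a rough consistency check of the rounded forecast figures cited above (treating “an increase of over 80%” as a factor of 1.8), the arithmetic behind the “less than 15%” statement can be sketched as follows:

    fleet_2035 = 50_000      # expected worldwide fleet in service in 2035
    growth_factor = 1.8      # "an increase of over 80%" over today's fleet
    removed_share = 0.74     # share of today's fleet removed from service by 2035

    fleet_today = fleet_2035 / growth_factor            # roughly 27 800 aircraft
    survivors = fleet_today * (1 - removed_share)       # roughly 7 200 aircraft remain
    share_of_2035_fleet = survivors / fleet_2035        # roughly 0.14, i.e. below 15%

    print(round(fleet_today), round(survivors), f"{share_of_2035_fleet:.1%}")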

In a European working paper presented at the 13th Air Navigation Conference in 2018, it was stated that the aviation system today is “challenged with rapid developments in new technologies, products, operations and business models. Significant technological (e.g. space, augmented reality, virtualization), environmental (e.g. new propulsion systems, climate change impacts), economic (e.g. new players and new types of air (hybrid) vehicles) and societal (e.g. urbanization, digitalization, passenger’s expectations, traffic growth) transformations are to be expected in the coming years” (ICAO, 2018, p.47). It was concluded that any emerging issues and risks stemming from these anticipated changes must be addressed, given that the current safety approach was developed in a more static and specific aviation environment.

The Flight Deck Automation Working Group (FAA, 2013) captured several of the current trends in the air traffic system with potential to impact the future operational context:
• A continued growth in the number of aircraft operations.
• Changes to the demographics of the aviation workforce.
• New knowledge and skills required both by pilots and air traffic personnel.
• The low aviation accident rates are making the cost/benefit case challenging for new safety and regulatory changes.
• Future airspace operations will exploit modern technology as well as new operational concepts.

Implementing the new technology imposed by SESAR will change the working conditions for both pilots and air traffic controllers. Barchéus and Mårtensson (2007) gave some examples of how such changes could be both positive and negative for the operator. The most critical general issues about the human role in ATM are identified below (SESAR, 2015):
• Automation: Automated support tools are fundamental to the successful introduction of the new ATM, in order to keep workload within acceptable limits, reduce human errors and increase situational awareness. However, automation must be adequately tailored to the operations and the tasks, taking into account the balance between the efficiency created through automation and human capability. This is particularly valid for recovery from non-nominal and/or degraded modes of operation.
• A potential redistribution of responsibilities between human roles, e.g. pilots, air traffic controllers and engineers, including those responsible for system maintenance and oversight.
• A continued expectation that the human role will manage unexpected events.
• Transition from legacy to SESAR systems, including their concurrent operation or cascade failures leading to deviations from planned trajectory execution, requiring an integrated view of the system design and the interaction of its various actors.
• Possible ambiguities resulting from a redistribution of authority between human and system actors. These will have to be managed by careful procedure design accompanied by a clarification of liability issues.
• The need for carefully designed system upgrade training for all affected humans, revising and refining the distribution of responsibilities and interactions between them.
• An increased need for continuation training of humans to maintain the skills needed to deal with complex and unexpected events and to prevent skill decay due to more automation.
• An increased need for proactive training to address the high complexity and rising cybersecurity needs of the SESAR systems while maintaining legacy systems in operation.
• Potential social issues from redistributing responsibilities and changes to the business model of ANSP operation within the European ATM system.

Sheridan (2009) provided a partial list exemplifying human interaction with Decision Support Tools (DSTs) on the flight deck in the future air traffic system. This list included tools to monitor trajectories, tools to self-separate when authorized, and tools to acquire information from a system-wide information network.

Future airspace operations are likely to include more automation, less reliance on the current ground-based systems and increased integration and information exchange between various actors in the aviation system. This includes the integration of Unmanned Aerial Vehicles (UAVs) in the system. Studies are ongoing regarding single pilot operations on jet transport aircraft.

2.10 Performance measurement and training

This section focuses on the vertical interactions, depicted in Rasmussen’s risk model, between the flight crew and the flight operations and safety levels in the model. The training department is at the same level as, and often a part of, the flight operations department and is not specified separately in Fig. 2.1 above.

2.10.1 Performance measurement

An airline’s safety reporting system is the backbone of the performance monitoring and measurement system. Some reports are mandatory (for serious incidents and accidents), whereas other reports are voluntary, used for submitting information related to observed hazards or other safety-related observations. Deficiencies and hazards may also be identified by other means, such as safety surveys, internal or external audits, feedback from training, as well as conclusions drawn during investigations of accidents/incidents.

Airlines can also gain further insight into operational performance through observations during normal flights, such as line checks, where a dedicated observer occupies the flight deck jump seat. Airlines can furthermore use FDM schemes to monitor normal line operations. Here, normal flight operational data, such as aircraft position, speed, the status of various aircraft systems etc., is gathered and analyzed for unwanted trends and exceedances. Such data can also be analyzed in connection with accident and incident investigations.
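
How such FDM exceedance detection can work in principle is illustrated by the minimal sketch below. The monitored parameters, threshold values and record layout are hypothetical examples, not any airline’s actual FDM event definitions.

    from typing import Iterable

    # Hypothetical exceedance thresholds, keyed by recorded flight parameter.
    THRESHOLDS = {
        "pitch_at_touchdown_deg": 7.5,     # nose-high landing
        "vertical_accel_g": 1.8,           # hard landing
        "speed_below_1000ft_kt": 180.0,    # indication of an unstabilized approach
    }

    def detect_exceedances(samples: Iterable[dict]) -> list:
        """Flag samples in which a monitored parameter exceeds its threshold."""
        events = []
        for sample in samples:
            for parameter, limit in THRESHOLDS.items():
                value = sample.get(parameter)
                if value is not None and value > limit:
                    events.append({
                        "flight_id": sample.get("flight_id"),
                        "parameter": parameter,
                        "value": value,
                        "limit": limit,
                    })
        return events

    # Example: one recorded touchdown sample per flight; the hard landing is flagged.
    flights = [
        {"flight_id": "AB123", "vertical_accel_g": 1.95, "pitch_at_touchdown_deg": 4.0},
        {"flight_id": "AB124", "vertical_accel_g": 1.30, "pitch_at_touchdown_deg": 3.5},
    ]
    print(detect_exceedances(flights))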

One way for airlines to gain access to larger amounts of data is by using trend reports from de-identified incident reports from databases such as STEADES (the IATA Safety Trend Evaluation, Analysis & Data Exchange System), covering data from many airlines. This can potentially help an airline in the development of prevention strategies in different areas. For example, it can be possible to benchmark against other comparable organizations to determine whether problems are shared by others. The Fatigue Risk Management System (FRMS) forum is another example of airlines coming together to address a specific topic, with the aim of improving methods to address fatigue risk.

The incident and accident reporting system is naturally reactive. Surveys and audits are deemed to be more proactive, while the FDM system is seen as the most predictive system (ICAO, 2013a). The SMS should define Safety Performance Indicators (SPIs) to allow determination of whether the system is operating within accepted safety boundaries and not simply meeting regulatory requirements. Any defined SPI should have a corresponding alert and target value. The alert value should be the common criterion “to delineate the acceptable from the unacceptable performance regions” (ICAO, 2013a, p.5-31) for an SPI. SPIs can also be divided into high-consequence and lower-consequence SPIs, indicating the priority of addressing any deviation observed. Such indicators may be categorized into leading (forward-oriented) or lagging (past/output) indicators (Niven, 2006). Leading indicators could relate to activities such as safety training, surveys etc. not directly related to a specific incident/accident. Indicators related to past events (e.g. the number of unstabilized approaches during a certain period of time or the outcome of an accident/incident investigation) are called lagging indicators. In an appendix to the ICAO SMM, a few examples of SPIs for air operators are given. High-consequence indicators could be related to monthly serious incident rates or in-flight engine shutdowns per flight hour. Lower-consequence indicators are exemplified as being the monthly incident rate, internal audit findings or voluntary hazard report rates (ICAO, 2013a).
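
A minimal sketch of how an SPI can be monitored against an alert value and a target value is given below. The indicator, its historical values, the target and the alert criterion (historical mean plus two standard deviations) are assumed examples for illustration, not ICAO-prescribed numbers.

    from statistics import mean, stdev

    # Hypothetical lagging SPI: monthly rate of unstabilized approaches per
    # 1 000 flights over the previous twelve months.
    history = [2.1, 1.8, 2.4, 2.0, 1.9, 2.2, 2.3, 1.7, 2.0, 2.1, 1.9, 2.2]

    TARGET = 1.5  # assumed improvement target set by the airline

    # One common convention is to set the alert value at the historical mean
    # plus a number of standard deviations.
    ALERT = mean(history) + 2 * stdev(history)

    def assess(current_rate: float) -> str:
        if current_rate > ALERT:
            return "Alert: unacceptable region, corrective action required"
        if current_rate > TARGET:
            return "Acceptable, but target not yet met"
        return "Target met"

    print(f"alert value = {ALERT:.2f}")
    print(assess(2.9))   # above the alert value
    print(assess(1.3))   # meets the target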

EASA has established the Data4Safety program with five participating European airlines, as well as Airbus and Boeing. The program is currently in a proof-of-concept phase, aiming to assist the transition from a reactive use of safety data towards a more proactive analysis of trends and risks to further improve aviation safety (EASA, 2017).

2.10.2 Training

For cost reasons, airlines prefer to recruit new pilots who already hold a valid type rating on the aircraft type they are intended to fly. Otherwise, new recruits will have to start with type rating training prior to flying for the airline. In any case, airline conversion training is required, including training topics covering specific company procedures, callouts, checklists etc. Some initial ground training elements will also have to be conducted, related to emergency equipment, dangerous goods, and Crew Resource Management (CRM) training. Once initial airline training is completed, recurrent ground and simulator training and checking is conducted on a regular basis. In addition, line checks are performed regularly, where a line check pilot occupies the flight deck jump seat and observes the pilots’ performance during normal line operations. The amount of training conducted varies between airlines.

CRM training

Human factors continue to be relevant in relation to incidents and accidents (e.g. JATR, 2019). Human factors encompass CRM. EASA describes CRM “as the effective utilisation of all available resources (e.g. human resources, hardware, and information) to achieve safe and efficient operation. The objective of CRM is to enhance the communication and management skills of the crew members concerned. Emphasis is placed on the non-technical aspects of crew performance” (EASA, 2014, p.5). A majority of the flight crew core competencies, mentioned in chapter 2.6 above, are related to CRM.

Formal training related to human factors started to take place already in the late 1970s. Initially, the term Cockpit Resource Management was used and applied to the process of training crews to reduce "pilot error" by making better use of the human resources on the flight deck. To emphasize the importance of the crew rather than the individual, the term Crew Resource Management was proposed and became the established concept. The CRM concept subsequently evolved to comprise error management strategies, such as avoiding errors, trapping errors and mitigating the consequences of errors (Helmreich, Merritt, & Wilhelm, 1999). Today, Threat and Error Management (TEM) is a core aviation concept deemed essential for flight safety, and CRM is a tool that supports successful TEM (Helmreich, Klinect, & Wilhelm, 1999).

Performance-based training methods

As the operational environment for the pilots becomes increasingly complex, it is important that pilots are given the opportunity to be “challenged and immersed in dealing with complex situations, rather than repetitively being tested in the execution of maneuvers” (IATA, 2014, p.ii). There is an increased understanding that “syllabus-based only” training does not properly address the specific skills needed for modern, highly automated flight decks. Training programs constrained by repetitive testing in the execution of maneuvers to comply with outdated regulation lack the variability to train effectively in this way (IATA, 2014).

In line with this understanding, safety and related training activities increasingly rely on the ability to measure performance. The FAA introduced the Advanced Qualification Program (AQP) in 1990 as an alternate training framework for airline initial and recurrent training. In Europe, the Alternative Training and Qualification Programme (ATQP) was introduced in 2006. An ATQP allows operators to provide a more tailored, operator-specific recurrent training and checking program for its flight crew. The program is a company- and type-specific alternative to traditional training. The idea is that, by focusing on the specific needs of fleets and groups of pilots, a more specifically adapted training curriculum should enhance performance while reducing costs in the long term. The other side of this coin is a larger dependency on the individual airline’s ability to credibly measure its performance and design the training program accordingly. A task analysis should determine knowledge, required skills, validated behavioral markers etc. Ongoing data collection, e.g. from internal reporting systems or FDM programs, shall be used for curriculum validation and refinement (EASA, 2016).
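
The following minimal sketch illustrates, under stated assumptions, how ongoing operational data could feed curriculum refinement in the spirit of ATQP. The topic names, event rates and the mapping to training modules are hypothetical and not taken from any actual program.

    # Hypothetical monthly event rates per 1 000 flights from FDM and reporting
    # data, aggregated per operational topic.
    event_rates = {
        "unstabilized_approach": 2.4,
        "automation_mode_confusion": 1.1,
        "manual_handling_deviation": 0.6,
        "tcas_ra_response": 0.2,
    }

    # Assumed mapping from monitored topics to recurrent training modules.
    training_modules = {
        "unstabilized_approach": "Approach and go-around decision making",
        "automation_mode_confusion": "Flight path management - automation",
        "manual_handling_deviation": "Flight path management - manual control",
        "tcas_ra_response": "TCAS RA procedures",
    }

    def prioritize_topics(rates, top_n=2):
        """Return the training modules linked to the highest observed event rates."""
        ranked = sorted(rates, key=rates.get, reverse=True)
        return [training_modules[topic] for topic in ranked[:top_n]]

    print(prioritize_topics(event_rates))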

ATQP, AQP and EBT (Evidence-Based Training, as defined by ICAO) are all based on training that is grounded in continuous analysis and process improvement (Boeing, 2015). According to Boeing, the move toward greater reliance on performance-based training methods is bound to continue.

Future training requirements

Funded by the FAA, a comprehensive study was conducted into the future training requirements to prepare pilots for RNP (Required Navigation Performance), 4-D trajectory, self-separation and other NextGen-envisioned operations (FAA, 2011). One of the key topics identified was training in the management and use of automated systems. The report summarized previous publications regarding the skills, related to the management and use of automation, that should be addressed during training. Some of these topics addressed the areas included below:
• How and when to appropriately use automation.
• How the flight crew and automation should work together.
• How and when to transition between levels of automation, including when to revert to manual flying.
• How to maintain automation and mode awareness.
• The human-factors implications of working with automation, e.g. input errors, automation bias and over-reliance.

Based on the research conducted, several guidelines were given related to training for the future system environment:
• Training scenarios should be based on safety data.
• Curriculum content should help pilots develop a comprehensive understanding of the automated systems going beyond strictly procedural knowledge.
• Emphasis is needed on the development and maintenance of basic piloting skills.
• Skills and knowledge such as integration of systems, spatial orientation in 4-D and self-separation environments, managing recovery from system failures and flight path disruptions etc. are required.
• CRM skills are very important.
• Measurements should be used at regular intervals to ensure that the training program goals and objectives are being met.

2.11 Aviation system from an airline perspective – summary

The defined boundaries for the hazardous flight process are set by regulatory requirements and limitations, SOPs etc., and affected by all levels in the model. Technology itself also imposes limitations, with alerts and warnings as well as flight envelope protections. IATA regularly summarizes and analyzes accidents and incidents that have happened. For the period 2013-2017, IATA concluded that the top five contributing factors for aircraft accidents in the latent conditions category were (IATA, 2018):
• Regulatory oversight (deficient regulatory oversight by the state authority), 33%.
• Safety management (absent or deficient safety policy and objectives, safety risk management, safety assurance or safety promotion), 27%.
• Flight operations: general, 18%.
• Flight operations: SOPs & checking (deficient or absent standard operating procedures, operational instructions, company regulations or controls to assess SOP compliance), 12%.
• Flight operations: training systems (omitted training, language skills deficiencies, qualification and experience of flight crew, operational needs leading to training reductions, deficiencies in assessment of training or training resources), 12%.

Several of the organizational levels included in the system model are thus acknowledged to affect the operational outcome and flight safety. Due to system dynamics and the number of actors involved, the system will evolve and mature at a different pace across its parts, leading to differences in system capabilities, in terms of both technology and crew. This is likely to continue as the future system unfolds and, from a socio-technical perspective, will lead to new emergent issues affecting system safety.

The accident mentioned in chapter 1 happened as a result of emergent issues stemming from the human–automation relationship. The move towards increased levels of automation seems bound to continue to affect the flight crew and their workplace – and also to affect the training requirements for assisting and preparing the pilots for the changes taking place. At the same time, financial pressure will continue to limit the resources available for deeper workplace analysis or SPI development.

The ICAO guidance on SPIs for airlines is primarily outcome oriented and, if used as indicated, provides little knowledge about the human–automation interactions and how the flight deck JCS works during normal operations. Furthermore, the ATQP requires that the airline use gathered operational and training data to validate how well the system functions and adapt the training as required. However, little guidance is provided on exactly how this shall be conducted.

This chapter has described the airline as a socio-technical system and how it functions. It has explained how the boundaries for the flight crew’s ability to perform are affected by several of the system levels. The appended papers of this dissertation particularly explore the levels closest to the flight crew and whether the current airline performance measurement framework could be developed further to better support flight crew training and learning activities, and consequently improve the functioning of the flight deck JCS.

3 Theoretical framework

This chapter contains theoretical concepts and analyzes their implications for the areas under review in this thesis.

3.1 Socio-technical system of systems

The air transportation system is often referred to as a socio-technical system, characterized by being highly complex and large-scale, comprising many interwoven technical systems as well as social actors within a dynamic context influencing both the social and technical spheres. In work science and organizational research, this concept originates in the 1960s with the British researchers Emery and Trist (Emery & Trist, 1960). They stated that “between the technological system and the social system there is not a strictly determined one-to-one relation, but what is logically referred to as a correlative relation” (ibid, p.275). Later researchers, e.g. Perrow (1984), suggest that the “characteristics of socio-technical systems include large problem spaces, social heterogeneous perspectives, distributed, dynamic potentially high hazards, many coupled subsystems, automated, uncertain data, mediated interaction via computers and disturbances” (ibid, p.284). Consequently, the aviation system as a whole can be seen as a socio-technical ‘system of systems’ (e.g. Harris & Stanton, 2010). Such a system encompasses critical human factors considerations such as usability, training, design, maintenance, safety, procedures, communications, workload and automation (Perrow, 1984). Maier (1998) characterized a ‘system of systems’ as possessing five basic traits, one of which is emergent behavior. Safety and other emergent properties of the air transportation system are the result of decades of evolutionary development and changes. Safety is not a property of any constituting element, but results from the interactions between the constituting elements (e.g. Leveson, 2004; Blom et al., 2016).

3.2 Automation – Implications for flight deck work

“Automation refers to the full or partial replacement of a function previously carried out by the human operator” (Parasuraman et al., 2000, p.287). Originally, aviation automation related to simple tasks allocated to the autopilot, such as maintaining a heading and/or a selected altitude. Gradually, the capabilities of the autopilot have developed to include more refined tasks, such as keeping a selected rate of climb/descent, following a ground-based radio beacon, capturing a preset altitude or even performing auto-lands. Later, auto-throttle functions were introduced, capable of assisting with various engine thrust settings and maintaining a selected speed, followed by flight management computers assisting with navigation and flight optimization etc.

In 1951, Paul Fitts stated what humans and machines respectively do better (Fitts, 1951). This is sometimes referred to as a “Fitts’ list” or a MABA-MABA list (Men Are Better At – Machines Are Better At). The list was used to determine the function allocation between humans and machines. However, it is neither possible to simply substitute the human operator with automation nor to add new automated equipment without due consideration of the consequences that follow with the implementation of automation. It is argued that substitution-based function allocation, such as the MABA-MABA method described above, oversimplifies matters by not considering the effects on human practice in the workplace (e.g. Dekker & Woods, 2002). This is acknowledged by others (e.g. Wiener & Curry, 1980; Billings, 1996; Parasuraman et al., 2000) and was in fact also recognized by Fitts himself (Fitts, 1962). Automation has fundamentally changed the roles of people in the system and created new skill and knowledge requirements. In a way, the pilots have been distanced from direct “control of the flight process” and have become managers of the different automated systems used to manage the flight path. In general, automation has not reduced the need to invest in human expertise, but rather changed that need profoundly (Sarter & Woods, 1992; 1994; Sarter, Woods & Billings, 1996).

Sheridan and Verplank (1978) proposed a levels of automation (LOA) list, describing the degree to which a task is automated. The scale ranges from the least automated level, where the human operator does the task and, if desired, gives the computer directives to implement it, to the most automated level, where the computer does everything, possibly deciding whether to tell the human operator about its actions. Sheridan made a further analysis of the roles of the operator: in 1987 he described the human’s role in a technical system in the Supervisory Control model (Sheridan, 1987).

Figure 3.1. The roles of the operator in the Supervisory Control model (Stahre, 1995, p.29, adapted from Sheridan (1987)).

In the supervisory control model, a two-way communication channel is established between the operator and the process. The model defines five roles for the operator, who has the following responsibilities:
• Plan what needs to be done before starting the technical system.
• Transfer this plan to the system by programming (teaching).
• Start up the system and monitor any deviations or disruptions.
• Intervene if there are any deviations and decide on any adjustments necessary to correct the automation.
• Learn from each new task which the operator encounters, including in the event of errors.

In further work, primarily used for automation design, Sheridan and Parasuraman proposed a four-stage model of human information processing: information acquisition, information analysis, decision selection and action implementation. Each stage can be automated to different degrees or levels (Parasuraman et al., 2000; Parasuraman & Wickens, 2008).
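
One way to picture the four-stage model is as a simple automation profile per system, as in the sketch below. The rating scale and the example values are assumptions for illustration only.

    from dataclasses import dataclass

    @dataclass
    class AutomationProfile:
        # Each stage is rated on an assumed 0 (fully manual) to 10 (fully
        # automated) scale, loosely following the levels-of-automation idea.
        information_acquisition: int
        information_analysis: int
        decision_selection: int
        action_implementation: int

    # A purely illustrative characterization of an FMS/autoflight combination:
    # strong support for acquiring and analyzing flight data, more moderate
    # support for selecting actions, and highly automated action implementation.
    fms_autoflight = AutomationProfile(
        information_acquisition=8,
        information_analysis=7,
        decision_selection=4,
        action_implementation=9,
    )
    print(fms_autoflight)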

3.2.1 Issues with automation

Several issues have been identified related to automation that affect the relationship between the human operator and the automation. In the 1980s and 1990s, flight deck automation was a hot topic, which resulted in a lot of research in this field. At that time, airlines were struggling to catch up with the technological developments and often had no formulated recommended work practices or policies regarding the work with automated systems. Many issues related to the topic were identified, including problems in the design phase (such as lack of user involvement), lack of or improper training, automation vs. human limitations etc. Some of the identified areas are summarized below.

Ironies of automation

Lisanne Bainbridge gave a presentation at a conference in 1983 with the title “Ironies of Automation” (Bainbridge, 1983). Bainbridge had studied the process industry, and her concern was that when most functions of a system became automated, the operator was left with the separate functions in between the automated ones, as well as with handling disturbances and emergencies that the automation could not handle, much of which had not been trained for before. This arbitrary collection of tasks could create difficult situations for the operator.

Opacity

Reason raised concerns about the effects on operators of more automated, more complex and more dangerous systems. Such systems have more defenses against failure while becoming more opaque. “Instead of having “hands on” contact with the process, people have been given a supervisory role. What direct information they have is filtered through the computer-based interface. And as many accidents have demonstrated they often cannot find what they need to know while at the same time being deluged with information they do not want nor know how to interpret” (Reason, 1990, p.174).

Clumsy automation

The complexity of the automation affects the pilot’s possibilities to understand it. "New technology that creates bottlenecks during busy, high-tempo, high-criticality event driven operations - whereas its benefits tend to occur during routine, low-workload situations - has been termed ‘Clumsy automation’", which is a form of poor coordination between the human and the machine (Wiener, 1989).

Situation awareness

Many automation attributes found in aviation mishaps relate to the loss of situation, or state, awareness (Endsley, 1995; Billings, 1996). This loss of awareness is primarily associated with system complexity, coupling and interdependencies between automated functions, advanced automation autonomy as well as a lack of feedback from the automated systems.

Automation surprises
In an early study of pilots flying the Boeing 757, it was discovered that after one year of flying the type, pilots were still surprised by the automatic system and did not understand the flight management system modes (Wiener, 1989). The pilots' three most common questions were: "What is it doing?", "Why did it do that?", "What will it do next?". Sarter and Woods (1992) added two more questions: "How am I going to stop it (the computer) from doing what it does?" and "How am I going to make it do what I want?". In the Gottröra accident (1991), the pilots were overwhelmed by the many automated warning systems activated at once (Mårtensson, 1995).


Wickens (1995) analyzed automation-induced problems in aviation and defined the main factors as: complacency and overtrust, mistrust, workload, situation awareness, and perceived authority and control. When automation fails, pilots experience mistrust in the system; failure to understand the automation may be due to a poor mental model, actual automation failures, or perceived automation failures. If the human does not understand what the system is doing, safety is at risk.

More recently, De Boer and Hurts (2017) carried out a field survey among 200 pilots addressing the prevalence and consequences of automation surprises during routine flight operations. Their results show that automation surprise is a relatively widespread phenomenon, although it rarely has severe consequences. More serious occurrences, which pilots are required to report, are estimated to occur only once every one to three years per pilot. Factors leading to a higher prevalence of automation surprise include less flying experience, increasing complexity of the flight control mode, and flight duty periods of over 8 hours. The researchers conclude that automation surprise is a manifestation of system and interface complexity rather than of cognitive errors.

Dealing with unexpected events
Boy (2014) describes the cognitive aspects of how individuals working in life-critical systems manage to act in a way that saves lives. In line with the reasoning by Provan et al. (2020), he describes people as creative and adaptable and able to solve problems that are not known in advance. "Dealing with the unexpected requires accurate and effective situation awareness, synthetic mind, decision-making capability, self control, multitasking, stress management and cooperation" (Boy, 2014, p.5).

Wickens et al. (2004) emphasized the importance of decentralized management in complex systems to be able to deal with unpredictable events. "High complexity generates unpredictable events that require the flexibility of a decentralized management structure" (ibid, p.493). According to Boy, the best way to face the unexpected is to move from task training to skill training. The main challenge is to maintain a good balance between automation, which provides precision, flawless routine operations and relief in high-pressure situations, and the flexibility required by human problem-solving. However, for operators to manage the unexpected they need to be able to understand what is going on, make their own judgements and act appropriately. Creativity is the key (Boy, 2014). This ability does not come without extensive training over a long period of time, and these skills should be learned from experience (learning by doing). Boy emphasizes the importance of simulators, which enable human operators to experience various kinds of situations and configurations that would most likely never be possible to experience in the real world.

Warning systems
The area of flight deck alerting systems has developed beyond the pure attention-director role. This can be exemplified by the Traffic Collision Avoidance System (TCAS) or the Enhanced Ground Proximity Warning System (EGPWS), where the alerting system commands the pilot to adjust the flight path. The division between alerting systems and e.g. flight path control is consequently somewhat blurred. Some of the potential problems with the various roles alerting systems take are (Pritchett, 2001):


• Alerting systems can almost by default be seen as clumsy automation, since alerts may be triggered during already high-workload situations.
• Pilots gain little on-the-job experience with alerting systems since alerts are rare events.
• Over- or under-reliance on the system (see also Woods (1986) about the "authority/responsibility double bind").

Distancing from the operational process
On the flight deck, the effect of increasing the number of automated layers between the pilot and the control surfaces is illustrated by Billings (1996) in Fig. 3.2 below:

Figure 3.2. Effects of increasing complexity on the human-machine (pilot-aircraft) relationship (Billings, 1996, p.36).

The layers tend to distance the pilots from many details of the operation. Billings argues that although this may have the desirable effect of lessening flight crew workload, it has the potential to accentuate a tendency toward peripheralization of the flight crew. The pilots are at risk of being shut out of the very loop that it is their primary task to control.

3.3 Cognitive Systems Engineering and Joint Cognitive Systems
The Three Mile Island nuclear power plant accident and work conducted at the nuclear research reactor at Risø, Denmark, were important for the development of new theoretical concepts related to system safety. Here, focus started to shift from human error and cognition to the relationship between actors and technology. It was proposed that the unit of analysis should be a Joint Cognitive System, JCS, to focus attention on how cognitive work involves interaction and coordination over multiple roles, both human and machine (Hollnagel & Woods, 1983). The airspace as a whole can be seen as a cognitive system, where the human agents and the technological functionality need to be coordinated by means of effective communication tools (Lintern, 2011). In such a system "people are adaptive agents, learning agents, collaborative agents, responsible agents, tool creating/wielding agents. As such, responsible people create success under resource and performance pressure at all levels of sociotechnical systems by learning and adapting to information about the situations faced and the multiple goals to be achieved" (Woods, 2016, p.5).


The flight deck JCS consists of the pilots and the automated environment on the flight deck, controlling the flight path and various aircraft systems and functions. The flight deck is part of the aircraft, which is part of the flight operations organization, which in turn is part of the airline, and so on. The human – human and human – automation interactions taking place on the flight deck are part of the flight deck JCS. Crew Resource Management techniques can be seen as binding things together, promoting teamwork and coordination within the flight deck JCS (Harris, 2013).

3.4 Complexity
The study of complex systems is an approach to science that investigates how the relationships between the parts of a system give rise to the behavior of the whole, and how the system interacts and forms relationships with its environment (Bar-Yam, 2002). Transportation systems, including the aviation system, are often referred to as complex systems (e.g. Wittmer, et al., 2011). Complexity refers to the number of feedback loops, interconnected subsystems, and invisible, unexpected interactions. Nuclear power and petrochemical plants, for example, are complex because the behavior of one subsystem may affect many others, and these interactions can only be perceived indirectly (Perrow, 1984). Perrow further connects complexity with coupling, where coupling refers to the degree to which there is a tight connection between subsystems. In a tightly coupled system, a disruption in one part of the system quickly affects other parts of the system. The degree of complexity and coupling has implications for the likelihood of catastrophic failures, with highly complex, tightly coupled systems being particularly vulnerable (ibid).

Increasing complexity has given rise to the system accident. Complex systems are sensitive to their initial design conditions, where a small difference may propagate over the years into something large and unexpected at a later stage. In simpler systems it can be possible to make risk assessments and probability calculations regarding future failures. This does not apply equally to more complex systems, where possible outcomes can only be indicated (Dekker, 2011). Also, any technical, organizational or process change to the current system may interact with other systems, processes or latent conditions present, combining in unexpected ways and leading to catastrophe. The methods and theories relied upon when trying to understand where an airline's flight operations are currently manoeuvring within the safety space may be outdated. "Complexity in society has got ahead of our understanding of how complex systems work and fail" (Dekker, 2011, p.6, paraphrasing Cilliers) captures the dilemma.

3.5 Safety and human behavior in context

3.5.1 Early approaches to safety
People design airline company processes and procedures, operate technology, build the company culture, create trust or mistrust, and use their judgment to coordinate and integrate different system functions, especially across system boundaries. In short, people are central to the success or failure of airline operations and ultimately to the success of the company. Setting broad boundaries for the discipline, Meister (1999) suggests that everything relating the human to technology is within the Human Factors and Ergonomics area. Several aspects of Human Factors are specifically relevant for the aviation system, including the system perspective, pilot performance and Human Factors in aircraft design (Wiener, 1989). Similarly, Harris (2011) specifically addressed Human Factors knowledge relevant for the flight deck, e.g. human information processing, the human in the system, flight deck design and automation, as well as safety management issues, both on the flight deck and in the rest of the airline environment.

Rasmussen (1986) proposed that human activity can be grouped into three main behavioral levels: skill-based, rule-based and knowledge-based. The skill-, rule- and knowledge-based information processing terms refer to the degree of conscious control exercised by the individual operator. The knowledge-based mode requires deep understanding of the system based on training and experience, full attention, consciousness and considerable system feedback in a situation that is novel to the operator. On the other hand, the skill-based mode refers to the simple execution of highly automated physical actions requiring little or almost no conscious monitoring. In this case the operator is usually highly experienced and performing normal everyday work. The rule-based mode is somewhere in between the other two modes of operation regarding the level of required consciousness, where action is triggered by learned rules.

Every now and then, for various reasons, things go wrong. Reason (1990) separated slips, defined as errors occurring when carrying out a task where the intention is correct, from mistakes, arising from wrong intentions, which in turn may be due to lack of knowledge or insufficient diagnosis or situation analysis. Neither slips nor mistakes are to be confused with violations, where the operator intentionally, sometimes in a routine manner, does not follow procedures, regulations etc. One form of execution error, in addition to slips that normally refer to observable actions, is lapses, which generally involve failures of memory. In conclusion, slips and lapses are related to execution failures, primarily tied to skill-based actions, whereas mistakes are tied to rule-based or knowledge-based actions.

Later, the naturalistic decision making (NDM) framework emerged as a means of studying how people actually make decisions and perform cognitively complex functions in demanding, real-world situations. Such situations are often marked by limited time, uncertainty, high stakes, constraints, unstable conditions, and varying amounts of experience (Klein et al., 1993).

Several models exist that capture the human's role in the system as part of the dynamics of accident causation. The Swiss Cheese Model captures both the direct active errors preceding unsafe events and the contributory factors for accidents in terms of latent conditions, or resident pathogens, always existing within the organization (Reason, 2000). Latent conditions could for example be poor management, poor equipment, inappropriate sleep schedules and fatigue, conflicting goals, poor internal communication, unsuitable or non-existent procedures, training deficiencies etc. Other apparently simplistic models exist, such as the SHELL model, originally developed by Edwards (1972) and further elaborated on by others (e.g. Hawkins & Orlady, 1993). The SHELL model shows how the various elements of the aviation system do not operate in isolation, in particular how the central liveware component relates to the other system blocks, e.g. hardware, software and environment.

Rasmussen's dynamic socio-technical system model involved in risk management, described in chapter 1, includes interactions between the various levels in the model, such as information going up to help form educated decisions about potential adjustments, as well as information moving downwards in the system (Rasmussen, 1997). Being part of the larger socio-technical system consisting of the airline and beyond, the pilots' ability to perform in that environment largely depends on what information, training and feedback they receive from the airline where they are employed. That feedback depends both on external factors, such as new equipment, new regulations or procedures potentially justifying adjusted or additional training, and on airline-internal factors such as how well the airline utilizes data about operations and information about the performance of the flight crew, both as a crew and on an individual level. Although Rasmussen's model is some 25 years old, the need for a cross-disciplinary research approach to address the multi-level causality chains in accidents is emphasized and still deemed equally relevant (Waterson et al., 2016). The rationale and many of the preconditions for the model described at that time are still in place today: continued technological change and increasing cross sub-system integration, a highly competitive environment and conflicting pressures from the public, e.g. regarding availability and shareholder interests vs. environmental implications in terms of noise and pollution.

Within the HILAS project, as a complement to Rasmussen's socio-technical system model, a distinction was made between three organizational layers: the strategic management layer, the tactical management layer, and the operational management layer. All three layers support real-time flight operations. As part of the operational management layer, operational support functions are in direct contact with crew and other staff involved in day-to-day operations. Those functions may be related to planning, or to assisting in technical troubleshooting supported by the accumulated (documented) knowledge of the organization. Risk management is very much dependent on well-functioning performance measurement activities, since much of the output from those processes is input to the risk management process (McDonald et al., 2009).

3.5.2 "Safety I" – system perspective assuming simplified understandable causality
Historically, it is the outcomes of active errors that have been highlighted in accident and incident investigation reports. Gradually the view on human error shifted, from that of a human error or failure being the root cause of an accident or incident, to that of being a consequence of other, deeper organizational problems (e.g. Perrow, 1984; Reason, 1990; Woods et al., 1994; FAA, 1996). Usually, many contributory factors are required to allow an accident to happen (e.g. Reason, 1990, 1997; Dekker, 2000). Finding the specific contributory factors or latent conditions present (Reason, 1990), or even identifying a specific root cause following an accident or incident, is often difficult since several factors usually combine to build towards a specific outcome.

A perspective where safety is defined as a state where as few things as possible go wrong can be called Safety I (Hollnagel et al., 2015). Underlying the Safety I approach is a presumption that “things go wrong because of identifiable failures or malfunctions of specific components: technology, procedures, the human workers and the organizations in which they are embedded. Humans—acting alone or collectively—are therefore viewed predominantly as a liability or hazard, principally because they are the most variable of these components” (ibid, p.1).

3.5.3 "Safety II" and Resilience engineering
During the early 2000s, as system complexity increased, some within the safety research community argued that the current approach to safety had reached a plateau, and that there was a need for a shift of paradigm to radically increase safety further (e.g. Amalberti, 2006; Woods & Hollnagel, 2006). The output of a meeting among several international researchers in Söderköping, Sweden, in 2004 resulted in the book "Resilience Engineering – Concepts and Precepts" (Hollnagel et al., 2006), where these ideas were captured. Resilience engineering was developed as a broad concept to both explain and support the capacity of organizations to monitor their operations, anticipate potential threats or changes, adapt and adjust to meet them, and recover from the impact of those threats that were not anticipated. In short, it addresses how the organization can recover from unavoidable disturbances without them leading to major incidents or accidents.

Safety is something you do, not something you have (Hollnagel, 2012b). The same applies to organizational resilience. It is a capability, not a property: a capability to recognize where the boundaries of safe operations lie and to maneuver back to the safe region when required. One of the key questions studied by resilience engineering is how an organization can monitor and understand its own adaptations to the real world, including imperfect knowledge, time pressure, unruly technology etc. (Dekker, 2011). Hence, central to this concept, just as in other safety views, is an organizational capacity to monitor and measure relevant safety issues and manage the risks confronting the system. For a socio-technical system, the appropriate question to ask could be how and when the adjustments that people make to accomplish their work (the variability of normal performance) can lead to adverse outcomes (Hollnagel, 2009).

So, a pilot would ideally have a repertoire of ways of dealing with emerging problems in varying contexts. However, this requisite variety (Ashby, 1956) will be affected and limited by others at various system levels. In addition, what is "right" in one context may not be appropriate in another, and an efficiency-thoroughness trade-off may have to be reached for several reasons (Hollnagel, 2009): there may be limited or uncertain availability of required resources (e.g. time), a natural tendency not to use more effort than needed or to maintain a reserve/slack in case of unexpected new requirements, social and organizational pressures, or individual priorities.

Later, and relatedly, Hollnagel et al. (2015) and others elaborated on the need to move beyond the dominating approach to safety, where most people think of safety as the absence of accidents and incidents, or as the risk present being at an acceptable level. The assumption that systems can easily be modeled as a straightforward causality chain of relationships does not fit the current world, with its increasing demands and complexity. The Safety-I view does not stop to consider why human performance practically always goes right, as people adjust what they do to match the conditions of work. As systems develop and add more complexity, these human adjustments become increasingly important. "The challenge for safety improvement is therefore to understand these adjustments—in other words, to understand how performance usually goes right in spite of the uncertainties, ambiguities, and goal conflicts that pervade complex work situations" (ibid, p.2). Changing the safety management perspective from ensuring that 'as few things as possible go wrong' to ensuring that 'as many things as possible go right' is called a Safety-II approach, focusing on the system's ability to succeed under varying conditions. "A Safety-II approach assumes that everyday performance variability provides the adaptations that are needed to respond to varying conditions, and hence is the reason why things go right. Humans are consequently seen as a resource necessary for system flexibility and resilience" (ibid, p.2).

However, even more "systemic" accident/incident models, including wider system and organizational factors, may fall short since they still assume that a well-defined cause can be identified. In a system accident it is not necessarily the case that any component is broken. Rather, the incident/accident is created by the interactions and relationships between components (Dekker, 2011). Similarly, it is argued that the usefulness of many incident and accident investigations can be questioned, since many of them stop when a "convenient" cause is identified (the bottom of a used taxonomy is reached, a politically correct cause is identified, the time is up etc.) (Hollnagel, 2009). Simple explanations are preferred as they usually can be reached in less time.

3.5.4 Conflicting views – what does it mean?
Above, the evolution of different approaches to aviation safety has been described. My perspective in this thesis is that of the systems view, while still acknowledging that people cannot be completely free from responsibility for their actions. The assumption that clear causality can be derived between various variables regardless of context seems highly unlikely. However, at the recent (June 2019) 8th REA Symposium on Resilience Engineering: Scaling up and Speeding up, it was acknowledged that the ideas of resilience have not transferred to the industry as quickly as was initially assumed. Resilience engineering represents a departure from traditional practices, where linear assumptions are over-simplifications and do not work in increasingly complex worlds operating at new scales and tempos (Woods, 2019). The focus of resilience research has recently shifted towards modeling and ways to measure resilience. However, this research stream is still underdeveloped, at least in terms of providing practical and operational implications (Patriarca et al., 2018). There may be a need to slow down to achieve a deeper understanding of the non-trivial dynamics that govern, not just the challenges of our work environments, but also how we cope with them (Hollnagel, 2019). In a recent paper, Provan et al. (2020) address challenges for safety professionals in supporting Safety II. They make the distinction between centralized control (Safety I) and guided adaptability (Safety II) as the means through which safety is achieved. Here, the role of the organization and the safety professionals is to facilitate achieving the desired adaptability.

The need to address both the people perspective and the system perspective is emphasized by e.g. Dekker and Leveson (2014). Organizations should create a discretionary space for individuals working in the organization that is not framed by fear of sanction or dismissal, but by opportunity, empowerment and an appropriate match between individual characteristics and professional demands (ibid). Rather than taking the individual or the system perspective, we should try to understand the relationships and roles of people in the system (Dekker, 2007).

3.6 Understanding and controlling the function of a socio-technical system
Performance Management is the area dealing with how an organization achieves (manages) its strategies, reaching the set objectives and goals. Within the HILAS project, a theoretical framework for key concepts of an improvement process was developed, where performance management was one of the concepts intended as a control strategy for system performance. Key elements of performance management were identified as task and performance support, feedback, reporting and communication (Cahill et al., 2007). How well this strategy works depends on capabilities for measuring and monitoring performance. It is, for example, essential to capture a valid and rich picture of the operations. Achieving this depends in turn on safety culture, reporting culture, trust and integrity. This relates to efficient feedback that encourages reporting, increases learning and sharing, and ultimately enhances the organizational change capability.

3.6.1 Performance measurement
Performance measurement comprises the quantitative indicators put in place to track the company's progress against its strategy and goals (e.g. Kaplan & Norton, 2000). Typically, performance measurement covers areas such as financial, customer, process and people measures. Performance measurement is the process of collecting, analyzing and/or reporting information regarding the performance of an individual, group, organization, system or component. It can involve studying processes and strategies within organizations, or studying engineering processes, parameters, and phenomena, to see whether output is in line with what was intended or should have been achieved (Upadhaya et al., 2014). Sometimes performance monitoring (and evaluation) is used synonymously with performance measurement. A commonly used term for selected outputs of the performance measurement system is Key Performance Indicators, KPIs. A KPI is a performance measure that measures a critical activity directly linked to a company's success (Parmenter, 2015).

Complicating measuring activities is the system-of-systems nature of the ATM system. Different organizations working within the same operational context may have different, sometimes conflicting, priorities and interests, while each organization still plays a significant and essential role in the quality and efficiency output of the overall system and in shared collaborative system measures. Okwir (2017) addresses issues related to performance measurement across organizational boundaries and the associated inter-organizational performance management activities. Such complex multi-stakeholder system-of-systems settings contain a significant lack of feedback and feedforward reporting mechanisms for goal setting and goal attainment. The main challenges facing organizations collaborating within a common performance management system were identified as: a lack of clear collaborative goals, change of culture, poor role setting, poor information management and lack of a system integrator (ibid; Corrigan et al., 2014). Contributing complexity factors were identified as non-linear interdependencies, the existence of informal roles between actors, and the network size. Another factor is an apparent lack of trust in the measurement system. Also, continuous improvement is hampered by a lack of interpretive right to set new measures, where exogenous factors can become increasingly critical for an actor's role (Okwir, 2017).

Learning from previous mistakes is one way to improve performance. In the aviation industry, that often means looking deeper into the causes of incidents and accidents. During the 1990s, as described in the automation issues section above, it became increasingly common in incident and accident investigations to attribute the outcome to concepts and measures such as automation complacency, loss of situation awareness etc. Some argue that several of the issues related to automation are constructs and really cannot be used as contributory factors in a presumed causality chain. In line with this reasoning, and with the move from Safety I to Safety II thinking, Dekker (2015) cautions against using a term such as situation awareness in hindsight, assuming that what is known in hindsight should also have been known by the individual at that time. Such constructs may oversimplify the underlying complexity of the humans interacting with the aircraft and all of its systems (Dekker & Hollnagel, 2004). So, in line with the previously described system perspective, when investigating an incident/accident the investigator needs to keep the perspective of the people involved in a developing and unfolding situation, although the investigator has access to much more knowledge in hindsight. Consequently, to understand people's behavior one needs to look at the operational history, the organizational environment and features of the relevant technology the human interacts with (Dekker, 2000). Also, organizations should maneuver based on work as done (WAD) rather than work as imagined (WAI) (Ombredane & Faverge, 1955; Provan et al., 2020).

Our perception of the world is heavily influenced by our understanding and mental picture of the current state of the world (e.g. Lundberg et al., 2009). HF data need to be understood in the context from which they come, where micro-matching robs data of their original meaning (Dekker, 2000). Grouping selected parts together under one condition identified in hindsight is rather meaningless. It can in fact be counterproductive, since the data are given a new meaning imposed from outside in hindsight (ibid). Also, it is the complete automated aircraft – human operator system, the JCS, that must be considered when trying to understand how well the system functions. Taking a greater system perspective, other interactions of interest are those taking place between the aircraft and other actors in the system (e.g. ATC), as well as those interactions and activities, displaced in time from the actual flight, between various actors and levels in an organization.

3.6.2 Safety Performance Indicators
Safety performance measurement is the objective evidence showing how well the organization is executing its own Safety Management System (SMS) (Kim et al., 2013). A safety performance indicator, SPI, is a safety-related KPI. Efficient SPIs must above all be valid and reliable (Gerede & ve Yaşar, 2017). Validity refers to the degree to which the quantity that needs to be measured is measured accurately. Reliability refers to the extent to which measurements of the same quantity yield consistent results in different contexts (Hale, 2009). Also, performance indicators must be sensitive enough to detect changes that occur within a short period of time (Hale, 2009). Indicators that enable us to determine the quality and quantity of activities performed to maintain safety at an acceptable level are qualified as leading indicators. However, the lack of a strong connection between leading indicators and safety performance is a problem when trying to use indicators in a meaningful way (Verstraeten et al., 2014).

Piric et al. (2019) identified that the long-established view on safety as absence of losses has largely limited the measurement of safety performance to indicators of adverse events, such as accident and incident rates. However, the exclusive use of outcome metrics does not suffice to further improve safety and establish proactive monitoring of safety performance. Although academia and the aviation industry have recognized the need to use activity indicators for evaluating how safety management processes perform, and various process metrics have been developed, those have not yet become part of safety performance assessment (ibid). This is partly attributed to the lack of empirical evidence about the relation between safety proxies and safety outcomes, and to the diversity of safety models used to depict safety management processes (i.e. root-cause, epidemiological or systemic models). This, in turn, has resulted in the development of many safety process metrics which, however, have not been thoroughly tested against the quality criteria referred to in the literature, such as validity, reliability and practicality.

As described above, the traditional use of airline SPIs is primarily outcome-based (Safety I), tracking the number of RWY incursions/excursions, hard landings, unstabilized approaches etc. The activities making up the system, such as on-the-spot flight deck judgment and decisions and adaptability, i.e. the activities that make the system work, remain largely unmeasured. This is e.g. the case for the human – automation interactions that are essential for controlling the flight path. Automation surprises exist but are usually not reported, preventing system improvements (De Boer & Hurts, 2017). This is also largely the case regarding critical safety processes and system-level interactions and relationships. The possibility of measuring adaptivity and resilience was addressed by Hoffman and Hancock (2017), who proposed a table with conceptual (theoretical) measurables. However, developing measures of the quality of macro-cognitive work remains an outstanding challenge (Woods, 2017).
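
As an illustration of what such an outcome-based SPI typically looks like in practice, the sketch below computes a simple event-rate indicator from flight data monitoring event counts. The event names, counts and normalization are hypothetical and do not come from the studied airline; the sketch is only meant to make the contrast with unmeasured everyday performance concrete.

# Illustrative sketch (hypothetical data): a typical outcome-based SPI,
# expressed as an event rate per 1000 flights. The event counts below
# are invented for the example and do not represent any real airline.

monthly_fdm_events = {
    "unstabilized_approach": 14,
    "hard_landing": 3,
    "long_flare": 7,
}
flights_flown = 4200  # hypothetical number of flights in the period

def spi_per_1000_flights(event_count: int, flights: int) -> float:
    """Outcome-based SPI: events per 1000 flights."""
    return 1000.0 * event_count / flights

for event, count in monthly_fdm_events.items():
    print(f"{event}: {spi_per_1000_flights(count, flights_flown):.2f} per 1000 flights")

# Note: such counts say nothing about the everyday human-automation
# interactions that kept the remaining flights uneventful, which is
# the gap discussed in the text above.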

3.6.3 Methods for socio-technical systems and safety
Theoretical approaches to understanding the function of sociotechnical systems exist, but they largely depend on experimental settings in a controlled environment or on dedicated resources usually not available in an airline, such as researchers conducting jump seat observations during normal flight operations. A summary of the development of such methods is shown in Fig. 3.3.


Figure 3.3. A timeline of the development of methods for sociotechnical systems and safety (Waterson et al., 2015, p.567, based partly on Hollnagel, 2012a).

The Cognitive Work Analysis, CWA, framework emanates from the work of Jens Rasmussen, with a focus on constraints, and addresses how work can be conducted considering these constraints. The CWA method consists of five modular phases: work domain analysis, control task (or activity) analysis, strategies analysis, social organization and cooperation analysis, and worker competencies analysis (Rasmussen, 1986; Vicente, 1999). A later extension of CWA has been proposed that includes Social Network Analysis (SNA) to further investigate team cooperation and the organization (Houghton et al., 2015).

The STAMP (Systems-Theoretic Accident Model and Processes) model is based on the conception that accidents occur when the control system fails to adequately handle external disturbances, component failures or dysfunctional interactions (Leveson, 2004; 2016).

The Functional Resonance Analysis Method (FRAM) centers around the idea of resonance arising from variability in everyday performance (Hollnagel, 2012a). The FRAM is based on four principles: equivalence of failures and successes, approximate adjustments, emergence, and functional resonance as a complement to causality. A FRAM analysis is intended to lead to recommendations for damping unwanted variability.

In a concluding remark, Waterson et al. (2015) note that there is a lack of studies of the reliability and validity of various methods, and how various methods compare. They propose that attempts should be made to consolidate this body of work, trying to unify, or seek commonalities from the point of view of theory and practice.

3.6.4 Using what is measured for improvements
McDonald (2015) describes how human co-ordination, and other intentional acts within a sociotechnical system, are driven by information, knowledge and understanding that is, at least partially, shared within the system. This knowledge is not necessarily explicit, expressed, and formalized. It provides the practical 'know-how' that justifies and supports action and anticipates the consequences of the selected action. It reflects some kind of global representation of reality; how the system works, based, at least in part, on interactions within the system. Knowledge in use seeks to be validated, e.g. through feedback in an iterative process. Such actions and interactions of people and technology create facts and data that are constantly used, formally or informally, to confirm, extend and transform the understanding of the system.

In socio-technical systems, knowledge operates through two processes that manage different types of knowledge (ibid). The first process is purely social. This is when an individual or a group relates to an implicit (tacit) understanding and then makes this explicit, to oneself or to others, by means of querying, explaining or discussing. This is the tacit-explicit-tacit process. This process cannot be complete without an external and explicit source validating knowledge that is implicit to the operator. The second process is technology-driven, where a person has knowledge about the system that is revised or validated through data which may contain information, interpreted depending on the individual or group knowledge; this is the knowledge-data-information-knowledge process. The transformation of tacit operational know-how to explicit functional knowledge enables future actions or the design of work and systems. Functional knowledge is used to select data representative of system functions, to organize that data into information and to use that information to understand system performance. It is the management of knowledge through these two processes that gives leverage over system control, design and change.

Argyris and Schön (1974) make a distinction between single loop learning, at an individual level, where actions are adjusted within the boundaries set by the organization to achieve the desired goals in a changing system; and double loop learning, where, additionally, the organization becomes aware of the need to adjust overall parameters or processes. This relies on the use of organizational feedback looking at past actions. There are similarities to supervisory control theory, which makes a similar distinction between the closed-loop strategy, with continuous feedback on action outcome, and the open-loop strategy, where adjustments are not possible until the next trial or operation because feedback is only available after the task is complete (Rasmussen et al., 1994). A closed-loop strategy means that the operator fine-tunes performance continuously as information is provided that indicates corrections to be made along the operation (see also Fig. 3.1). An open-loop strategy means that an operation is planned and executed, but feedback is given only after the completion of the operation, when the overall performance is compared with the intended target. Higher-order organizational learning is sometimes called deutero-learning (Bateson, 1958), where the distinction between deutero- and other forms of learning, e.g. double loop learning, is not always clear (Visser, 2007). According to Provan et al. (2020), learning at all organizational levels is an important part of supporting the guided adaptability approach.

3.7 Theoretical summary
The Safety I approach has made aviation extremely safe. However, the Safety II approach, focusing on adaptation, supporting decentralization and the flight crews' ability to handle the dynamic environment as well as unexpected events, may be the way forward to further improve flight safety. Clearly there are difficulties related to how to monitor, understand, and control the flight process from an airline perspective. The flight process cannot be directly controlled as reality unfolds. Where safety is seen as an emergent property, resulting from years of evolution and the various interactions and relationships taking place in the local JCS as well as the greater system, safety can only be influenced indirectly.

An airline's safety management system and related processes and activities are primarily shaped by regulatory requirements, still largely reflecting the Safety I perspective. However, the Safety I airline structures in place may potentially support a Safety II approach if used consciously. Automation, performance measurement and the individual crew member, as parts of the flight deck JCS, one way or another affect the flight process and ultimately have an impact on flight safety. An individual's ability depends on many factors, such as personal characteristics, knowledge about procedures and about supporting technology and automation, as well as knowledge and skills about the handling of the aircraft and its systems gained through training and on-the-job experience. These relationships and interactions, the interplay between these areas, as well as how adaptability can be facilitated, are explored further in the following chapters addressing barriers and potential for improvement.


4 Methods
This chapter contains a summary of the research process and methods used throughout the research period. It starts with an overview of the methodological approach followed by a description of methods used to address the Research Question: What are the barriers and potential for training and airline performance measuring processes to support the flight crew for the operation of the highly automated aircraft?

4.1 Overall research approach
On their website, the International Ergonomics Association, IEA, defines ergonomics, or human factors (used interchangeably), as "the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimise human wellbeing and overall system performance" (IEA, 2020). Cognitive systems engineering is a design discipline that uses analyses of work when designing processes and technology for human-system integration. It deals with socio-technical systems, where socio refers to the social processes of communication, cooperation, and competition (Schwartz et al., 2008; see also Rasmussen et al., 1994; Hollnagel & Woods, 1983). Consequently, in the field of cognitive systems engineering, as well as in the field of ergonomics and human factors, one way to do research is to find out about conditions in everyday working life in order to suggest methods for improving conditions or system performance. In experimental research settings it is possible to control most variables under investigation. However, doing research in naturalistic settings makes it difficult to control the dynamic environment. On the other hand, validity is high in relation to the problem formulation, given that you manage to observe what you intend.

4.1.1 Applied research
Applied research "aims at finding a solution for an immediate problem facing a society, or an industrial/business organization, whereas fundamental research is mainly concerned with generalisations and with the formulation of a theory" (Kothari, 2008, p.3). The underlying hypothesis in this dissertation is that there is potential to improve the training and airline performance measuring processes supporting the flight crew in their work on the automated flight deck. The approach in this thesis is applied research and, as such, intends to understand and describe the overall system, and to apply developed insights and knowledge to the aviation system. A prerequisite for this type of research is the researcher's understanding of the area under study when interpreting the collected data (Säfsten & Gustavsson, 2019).

4.1.2 System analysis
The International Council on Systems Engineering, INCOSE, defines a system as "an arrangement of parts or elements that together exhibit behaviour or meaning that the individual constituents do not" (INCOSE, 2020). System analysis is a problem-solving technique that breaks down a system into its components to study how well those components work and interact to accomplish their purpose (Bentley, 2007).


Studies of socio-technical systems need to be conducted at three interrelated broad levels (Trist, 1981):
• The primary work systems. The "micro" level, where a set of activities involved "in an identifiable and bounded subsystem of a whole organization" is being carried out. This work can be carried out by a single face-to-face group supported by specialist personnel, relevant equipment and other resources.
• The whole organization system. This would consist of the whole workplace of a company or a defined subpart of such an organization, such as a specific manufacturing plant.
• The macrosocial systems. The "macro" level, comprising a domain or industrial sector.

In this research, Rasmussen's model of socio-technical systems (Rasmussen, 1997) was selected to form the framework for studying automation, training and performance measurement in the aviation domain. This model covers several aspects of the research question, including environmental stressors such as technological change, as well as the various organizational layers and their interactions.

All three levels described above have been addressed in this thesis, with particular focus on the interactions between the model levels above and below the flight crew. The methods described later in this chapter were utilized to explore these level interactions.

4.1.3 Domain knowledge – my own professional experience
After my M.Sc. degree I was accepted to a government-sponsored ab initio flight training school, going through initial flight training from scratch. At that time the basic aircraft training was undertaken on the single-engine Scottish Aviation Bulldog (Beagle B 125 Bulldog). The subsequent multi-engine training phase was conducted on the Piper PA-31 Navajo. Neither of these two aircraft types had any of the sophisticated automated systems present in modern aircraft.

After joining an airline as a first officer on the Fokker F28, I have had the opportunity to fly all variants of the MD80 series, the MD90, the Airbus 330/340, the Boeing 737 NG as well as the Airbus A320 (CEO & NEO), on which I until recently flew as a Captain. These commercial airliners represent different philosophies with respect to flight deck design. The level of sophistication and automation also differs largely between these aircraft, to some extent representing the development of flight deck design during the extended research period.

During the second half of the 1990s I started out doing research related to the environmental impact of aviation. Initially this work focused on the technical improvement potential of engines, aerodynamics etc. but gradually moved towards the improvement potential of the aviation system at large, increasingly relying on the potential of automation.

For about 10 years, apart from flying, I was working within the flight operations department of my airline employer, initially within the training department for approximately 7 years. This involved the design and setup of training programs: what to train, where, how, when etc. At that point, regulatory requirements mandated new and updated Crew Resource Management (CRM) training. Aircraft automation and related training was at that point to a large extent included in the CRM training activities, where the European aviation authority, the Joint Aviation Authorities, through the Joint Aviation Requirements (JAR), described a set of training requirements for the industry to comply with. The JAR-OPS regulations acknowledged the implications of modern flight deck and aircraft design and explicitly stated both initial and recurrent training requirements with respect to automation. In addition to working with the development of the Crew Resource Management training, I was working with recurrent flight crew training in general, as well as managing the Operations Manual – part D, the training manual, as editor.

After working with flight deck training issues, I spent 3 years working at the airline's safety office. This work included the development of the airline's Safety Management System and its work processes, as well as analyzing incident, occurrence and flight data. I was also the airline's representative in the previously mentioned HILAS project.

All these activities have, apart from providing insight into the aviation system at large, naturally formed my own opinions and ideas related to many of the research topics. At the same time, many of the ideas and questions related to this research have sprung out of questions raised, and shortcomings identified, while conducting everyday work.

Considering the reliability of the studies, this domain expertise provides an understanding of the complexity of the aviation world and assists in interpreting the research data.

4.1.4 Collection of qualitative and quantitative data
In large and complex socio-technical systems, such as the aviation system, understanding is increased if several methods are used to collect data. Mixed method is the combination of quantitative and qualitative research approaches for application in a research study (e.g. Onwuegbuzie et al., 2007). The work included in this dissertation used both quantitative and qualitative data.

A scientific issue in applied research is how to interpret data from the different methods used to collect them. Triangulation is the incorporation of several different methods to collect and analyze the data. The motive for triangulation is to compensate for the weakness of one method with the strength of another (Yin, 2018). For instance, the interview technique, with a few people giving "deep" answers about the problem, can be combined with questionnaires to many people, creating knowledge on a wider but shallower level.

Another issue in qualitative research, relevant for this work, is that the researcher is close to the data. Totally avoiding subjectivity is difficult, if not impossible. However, by using triangulation and by using earlier research as a theoretical background supporting the interpretation of data, subjectivity can be reduced (Mackey & Gass, 2015).

In conclusion, a mixed method approach has been deemed appropriate. The use of a mixed method approach should give a better understanding of complex phenomena seeing the world through multiple lenses (Rossman & Wilson, 1985).


In this thesis the following data collection methods have been used:
• Interviews of pilots (qualitative data).
• Workshops with groups of pilots and safety office staff (qualitative and quantitative data).
• Implementation work/attempt of the proposed 5-step method as part of the workshops (qualitative data).
• Collection of flight operational data (quantitative data).

The data have been analyzed against a system analysis approach as the theoretical framework. In addition, my own professional experience creates a pre-understanding of the area against which the collected data could be interpreted. The application of the different methods is described below.

4.2 Interviews
The interviews have been semi-structured in the sense that a questionnaire, or an interview guide, was prepared in order to systematize the questions to be asked (Sinclair, 1990). During the interviews there were opportunities for the interviewed persons to elaborate on the issues of interest. Starting from hypotheses about which workplace issues were relevant to the study, much effort was put into designing interview questions that would give comprehensive answers related to the overall research question.

A fixed set of background facts was documented for each subject, such as age, sex, previous flying experience including aircraft types and number of flight hours, where and how initial flight training was conducted, where and how training for new aircraft types was conducted, additional non-flying experience etc.

Interviews with pilots of Airbus 330/340 - training for automation
During 2003, four interviews with highly experienced Airbus 330/340 flight crews were conducted in the period between papers 1 and 2 to guide focus areas for further studies. At that time, the Airbus A330/340 was considered one of the most advanced aircraft types with respect to automation and system integration. To ensure that certain questions of interest were addressed during all interviews, a questionnaire was developed and used by the interviewer during the interviews. The four interviewed pilots were colleagues of the author, meaning that they did not represent a random sample of Airbus pilots. The interviewer took notes during the interviews. The questionnaire is enclosed (appendix G). The questions were of a rather general nature to allow the participants to elaborate as freely and unbiased as possible. The questions primarily addressed skill and knowledge requirements and training issues related to gaining an aircraft type rating. Additional questions addressed the latest conversion/type rating training for the Airbus 330/340, and general automation training questions were included, focusing on the man-machine interface and its implications.

Interviews with pilots of B737 - future automation and ATM
For appended paper 5, semi-structured interviews with six Boeing 737 pilots were performed by the author. Questions were prepared based on literature related to the future ATM system (the SESAR ATM Master Plan), with focus on automation issues in relation to ATM and its potential impact on pilots. The interviews addressed the pilots' views on the current level of automation, as well as any thoughts related to a future system where the level of automation is potentially increased. The interviews took place in a similar way as those with the Airbus pilots.

Interviews on pilot operational behavior during approach – Perceived vs. actual behavior
Following the workshop described in appended paper 6, interviews with 10 pilots were performed at one of the airlines participating in the study. The purpose of these interviews was primarily to validate the results of the previously held workshop, in which aspects of pilot operational behavior during approach had been discussed (see below). These interviews were conducted by one of the co-authors of appended paper 6. Subsequent post-flight data analysis interviews were also conducted by me with 10 other pilots.

4.3 Workshops
In a workshop it is possible to collect data from representatives from a workplace, in this case a group of pilots (Christie & Gardiner, 1990). The strength of the method lies in the fact that what a person says invites other people to make associations based on their own experiences (Osvalder et al., 2009). Workshops and focus groups are often appreciated by the participants as they provide an opportunity to express one's own opinion while learning about colleagues' explanations of a complex issue at work.

HILAS workshops
In the HILAS project, workshops within the research group as well as with airline personnel, primarily from the airline's safety office, were conducted to propose an expanded set of key performance indicators relevant for the subject, together with a proposed method for how to develop them. The airline's procedures and processes for performance monitoring and its current set of safety performance indicators were reviewed. Implementation of the 5-step method proposed in appended paper 2 was conducted by the participating researchers and the airline staff. See appended papers 2 and 3.

Brantare workshop
To further explore the possibilities contained in the Flight Data Monitoring system, a study regarding the pilots' behavior during the final approach was conducted in project Brantare. A workshop with 10 experienced pilots was conducted. Participating pilots from two airlines were asked to state how they would operate the aircraft during a final approach scenario under varying operational and environmental conditions. See appended paper 6.

4.4 Flight data collection and analysis
To get access to actual flight data through the FDM system, a confidentiality agreement, approved by the pilot union, was reached within the Brantare project with an airline operating Airbus A321 aircraft. Complete flight data recorder records were obtained for 1389 flights with a resolution of 1 Hz. The data sometimes contained inconsistencies, or the flight did not follow the (for measurement purposes) desired flight path. Therefore, flights were excluded if any of the following conditions were met:
- Relevant data was missing for the flight.
- Inconsistencies existed in the data for the flight.
- The flight did not perform a stabilized approach.
- The flight did not follow the ILS glide slope.
- The flight did not pass over the nominal final approach point within a margin of 400 meters.
- The flight did not pass over the nominal final approach point at the correct altitude within a margin of 300 feet.

After cleaning the data, 1159 flights remained. The data were analyzed and compared to the previously conducted interviews and the workshop results. The result of this work is presented in appended paper 6.
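As an illustration only, the exclusion criteria above can be expressed as a filter over per-flight summary records. The following minimal sketch in Python assumes hypothetical field names (e.g. data_complete, stabilized_approach, fap_lateral_offset_m) that are not taken from the actual FDM export used in the project:

# Sketch of the flight exclusion logic described above.
# All field names are hypothetical and do not reflect the actual FDM data format.
def keep_flight(flight: dict) -> bool:
    """Return True if a flight passes all inclusion criteria."""
    return (
        flight["data_complete"]                          # no relevant data missing
        and flight["data_consistent"]                    # no inconsistencies in the record
        and flight["stabilized_approach"]                # flew a stabilized approach
        and flight["on_ils_glide_slope"]                 # followed the ILS glide slope
        and abs(flight["fap_lateral_offset_m"]) <= 400   # within 400 m of the nominal FAP
        and abs(flight["fap_altitude_error_ft"]) <= 300  # within 300 ft of the nominal FAP altitude
    )

flights = []  # to be populated with per-flight summaries derived from the 1 Hz records
included = [f for f in flights if keep_flight(f)]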

4.5 The appended papers' association to the research question
To show how the appended papers relate to the research question: "What are the barriers and potential for training and airline performance measuring processes to support the flight crew for the operation of the highly automated aircraft?", the appended papers can be divided into two broad categories. These two categories relate largely to the interactions between the flight crew and the flight ops & safety management levels in the risk model (as numbered in Fig. 4.1 below).

Figure 4.1. Socio-technical model level interactions. Performance Measurements (1) and Training (2).

The two categories are:
1. An exploration of airline flight operational performance measurement processes and the use of data. This is mainly addressed in:
   a. Appended paper 2 (Measuring safety performance – Strategic risk data (airline safety and human factors issues)).
   b. Appended paper 3 (Study of safety performance indicators and contributory factors as part of an airline systemic safety risk data model).
   c. Appended paper 4 (In Need of a Model for Complexity Assessment of Highly Automated Human Machine Systems).
   d. Appended paper 5 (Airline perspective on future automation performance - Increased need for new types of operational data).
2. An exploration of potential training opportunities. This was mainly addressed in:
   a. Appended paper 1 (Sharing the Burden of Flight Deck Automation Training).
   b. Appended paper 6 (Fine-tuning flight performance through enhanced functional knowledge).


Although each appended paper is assigned to one of the two main categories above, the content of the papers to a large extent covers both categories, including descriptions and discussions related to automation. Below, the papers are presented in the order they were written. In addition, the skills and knowledge required for the future flight deck work environment of the pilots, and related issues, are addressed by the initial interviews presented in section 5.1 in the next chapter. These separate interviews mainly helped my own process of deciding how to proceed with the research.

Limitations of the chosen methods in relation to results from empirical studies are given in section 7.4.


5 Results
In this chapter a short summary of the studies conducted, as well as a summary of the results from the respective appended papers, are included. The results from the explorative interviews conducted early in the research are also included.

5.1 Explorative interviews (2003)
A summary of the results from the explorative (initial) interviews with four Airbus A330/340 pilots, which are not included in the appended papers, is given below. Questions and answers relate primarily to what skills and knowledge are required to work on the Airbus flight deck.
- Three out of four participants responded that (taking stick and rudder skills for granted) non-technical skills are very important. Technical knowledge and a deeper understanding of the aircraft systems were not perceived as equally important.
- However, on the specific topic of increased flight deck automation features, three pilots responded that there is a need for an increased level of technical knowledge. One participant answered that there would be a need for a different kind of knowledge about the systems.
- All agreed that learning to fly and flying the Airbus 330/340 did not require significantly different knowledge or skills than the aircraft previously flown. One pilot mentioned that a higher degree of knowledge about working with computers could be desired, including higher-level systems knowledge and integration training.
- When asked to rate the importance of flying, managerial and supervisory skills, the responses varied. Flying skills were rated lowest by most participants, who mentioned that most ways of operating the aircraft under normal operations are safe ("All is safe").
- Further issues are how to gain knowledge and build a mental model of how the system works. Training was mentioned as most important for general knowledge and handling skills of the aircraft (including emergency and non-normal procedures), whereas line flying experience was deemed more important for knowledge and handling skills of the automated systems.
- De-training knowledge and skills related to the operation of the aircraft type previously flown was deemed important. Lack of cross aircraft type flight deck standardization could be an issue (e.g. switch on/off direction).
- It was agreed that possible shortcomings during training could be addressed and compensated for during initial line training. One participant responded that he had "never known as little about an aircraft" after the successful completion of the type rating.
- Two participants had undertaken integrated initial ab-initio training. They stated that little initial training addressed automation issues relevant for the current work situation.
- Regarding their future role as pilots, they did not foresee any significant changes over the coming 20 years. They stated that the biggest challenges in their work are:
  - Maintaining on-time performance given the resources available.
  - Avoiding complacency.
  - Challenging the captain (as first officer).
  - Keeping up interest in the job.


Although only four interviews were conducted (with mixed responses), these interviews moved my focus towards performance monitoring and measurement.

5.2 Appended paper 1 – Sharing the burden of flight deck automation training (2000)
This paper addresses the role of the pilot and the need for new skills and training with increased levels of flight deck automation. The paper is based on the professional experiences of the authors, both with experience as commercial airline pilots. The introduction of new additional skills, rather than a substitution of old skills with new ones, creates a need for new management skills, changing the role of the pilots. The pilot increasingly becomes a manager of the aircraft and its automated systems and is expected to move seamlessly between various levels of automation as appropriate given the circumstances. The paper explores how overall pilot training would benefit from pulling automation training forward in the pilot training curriculum, from the airlines to the earlier stages of training.

New ways to enhance the transfer of knowledge from basic initial training to the airline environment are proposed by moving towards a more holistic training approach, addressing automation consistently throughout training. Possibilities exist to improve the overall training process if airlines are willing and able to share their knowledge about the actual knowledge and skill requirements for working on the modern flight deck with the flying schools responsible for the initial phases of flight training.

However, problems exist that prevent a move towards a more holistic approach. Various practitioners, different regulatory frameworks and organizational bodies administer different training phases, making coordination more difficult. Also, instructors at the initial training phases often have little or no experience of what work on a modern automated airliner requires. At the airlines, limited resources or a lack of specific automation instructor training may not allow instructors (with experience from modern aircraft) to elaborate or provide the necessary training. Multi Crew Cooperation (MCC) and Crew Resource Management (CRM) training topics are used for automation training, which they were not originally intended for; they are used as an "add-on" instead of being an integrated part of the training. Also, pilots and training organizations usually train for the test rather than for the task. With tests geared towards recall rather than creative application of knowledge in context, there will be a dissociation between training and the skills and knowledge really required to safely operate a modern aircraft.

5.3 Appended paper 2 – Measuring Safety Performance: Strategic Risk Data (Airline Safety and Human Factors Issues) (2009)
In an airline, much data is collected, but it is far from obvious how to handle all the data, or even to know what can be measured in a meaningful way. Reporting systems capture aspects of the operations, but they are largely retrospective by nature. Flight Data Monitoring (FDM) programs and other data sources can add a more predictive capability. This paper explores possibilities to improve airline methods for safety performance measurement by bringing human factors and safety risk theory closer to airline practice. The focus is on data management and finding new ways to use available data in a more proactive or predictive manner. Related to this is the selection and use of Safety Performance Indicators (SPIs).


Resilience, with its focus on what works well and how humans adapt to the changing environment during normal operations, represents a different view of how to address safety matters. However, it is assumed in this study that the dominant safety approach, with its discussion of causality and latent conditions as root cause/causes, still holds potential to improve safety further.

By addressing the issue of understanding airline flight operations, and the evidence that exists, or could potentially be extracted from an airline's operations, related to automation problems, it would subsequently be possible to adjust not only the training setup as required, but also automation design, procedures etc. as appropriate.

This study was conducted within the HILAS project as workshops involving staff from a participating airline's safety office as well as the participating researchers.

The main conclusions from the study were:
- Human Factors is still important for safety improvements.
- Information for airlines on how to handle, in practice, the vast amount of data available within their system is not readily available.
- Selecting key performance indicators and related target levels can be exceedingly difficult if relationships between various contributory factors and potential outcomes are not known.
- Both traditional measurement activities and the resilience concept (two sides of the same coin) are deemed valuable.
- A 5-step methodology is proposed to increase the utilization of currently available data within an airline and integration of HF information (see Table 5.1 below).

Table 5.1. The proposed 5-step methodology.

Step 1. Select Direct Safety Performance Indicators (DSPIs): The term DSPI is chosen to represent key safety performance-related outcomes. Examples of DSPIs are altitude penetration, runway incursion/excursion, smoke/fire onboard etc.
Step 2. Identify contributory factors stated in or concluded from safety or voluntary reports: This includes some deductive activity, and following up on any investigation, to be able to structure and categorize data in a constructive way.
Step 3. Identify contributory factors through external sources: Expand the set of potential contributory factors conceivable for the occurrence at hand.
Step 4. List relationships between selected DSPIs and identified contributory factors: This includes deducing the strength that each contributory factor has on the DSPIs.
Step 5. Select other relevant (non-reporting) data sources: This step includes relating or combining various types of data from within one data source or by combining data from multiple data sources.
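Purely to make steps 1 and 4 concrete, the selected DSPIs and the assumed strength of each contributory factor could be held in a simple mapping. The sketch below, in Python, uses DSPI names from the examples above but invented contributory factors and weights; it is not data from the study:

# Hypothetical illustration of steps 1 and 4 of the proposed methodology.
# DSPI names follow the examples in the text; factors and weights are invented.
dspis = ["altitude_penetration", "runway_excursion", "smoke_fire_onboard"]

# Step 4: assumed strength (0-1) of each contributory factor per DSPI.
contributory_weights = {
    "altitude_penetration": {"late_atc_clearance_change": 0.4, "mode_confusion": 0.3},
    "runway_excursion": {"contaminated_runway": 0.5, "unstabilized_approach": 0.4},
}

def ranked_factors(dspi):
    """Return the contributory factors for a DSPI, strongest first."""
    factors = contributory_weights.get(dspi, {})
    return sorted(factors.items(), key=lambda item: item[1], reverse=True)

print(ranked_factors("runway_excursion"))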


5.4 Appended paper 3 – Study of safety performance indicators and contributory factors as part of an airline systemic safety risk data model (2009)
This paper describes an implementation effort in an airline, using the proposed method described in appended paper 2. The purpose is to apply, validate and further develop the proposed method. The study was, just like the previous paper, conducted within the HILAS project. The focus in this study is to define the logic behind data management of existing, current, and historical data, and particularly how data can be combined to provide a prospective view of future risk.

Table 5.2. Implementation work step conclusions.

Step 1. Select Direct Safety Performance Indicators (DSPIs):
- There are difficulties drawing the line between operational outcomes and potential contributory factors.
- The DSPI list proposed in the previous paper is slightly amended and expanded.
Step 2. Identify contributory factors stated in or concluded from safety or voluntary reports:
- It is discovered that contributory factors in many cases are not easily identified in reports or in the subsequent investigation.
- A more serious incident/accident, with a correspondingly more exhaustive investigation, would likely provide clearer details about contributory factors.
- A standardized categorization of data is essential for the follow-through of this method.
Step 3. Identify contributory factors through external sources:
- It is found that it is difficult to find information matching the way data is managed in the participating airline.
- Many contributory factors found, e.g. in research papers, are too generic to link to a specific DSPI in a meaningful way.
Step 4. List relationships between selected DSPIs and identified contributory factors:
- It quickly became overburdening, within the scope and resources available for this research, to set up and deduce the strength that each potential contributory factor would have on each DSPI. It was decided in this study to focus primarily on one selected DSPI.
Step 5. Select other relevant (non-reporting) data sources:
- This was the most difficult of the 5 steps in the proposed methodology and the research group was not able to bring this step to a satisfying completion. Several difficulties were discovered:
  - Due to uncertainties in previous steps it was unclear how to best use available resources for further work.
  - It was difficult to conclude how an indicator could be "built" from data available from various systems and/or sources.
  - Organizational cross-system data categorization standardization would be desired.
  - Extracting data from other sources can be time consuming and a cost driver.

5-step method overall conclusions:
- Carrying out this study is both difficult and time consuming, even without the study covering as many safety-related topics as originally planned.
- Human factors data is not readily available in a format suitable for integration using this method to create reliable links between contributory factors and outcomes.
- Lack of cross-system data categorization standardization makes work using the method difficult.
- Process modelling is discussed and suggested as a means of supporting this method. Given the dynamic nature of airline operations, process modelling could however be very resource demanding.
- It could be difficult to justify the cost associated with carrying out the proposed method without validated and proven results.

Regardless of the success of establishing the data relationships, following the 5-step method could give increased general safety awareness as well as internal knowledge and awareness of the organization's data utilization capabilities. As a result of the study being conducted at the participating airline, the airline's set of key safety performance indicators was revised and updated. It is unclear to what extent this was a result of using this specific proposed method or simply because this area of operations was highlighted and reviewed. No changes to the method itself were proposed.

5.5 Appended paper 4 – In need of a model for complexity assessment of highly automated human machine systems (2011)
This paper indirectly addresses the interaction between the system model layers by examining how increasing system integration, the aviation system being a "system of systems", can affect these interactions. The increased integration and coupling of systems are addressed by exploring and discussing models and means of understanding the performance of this developing system.

Through SESAR, the intention is to move the Air Traffic Management (ATM) system from a paradigm of airspace-based control to trajectory-based system management. It is argued in this paper that current system development and design methods do not fully capture the potential emergent system behavior in large-scale systems, such as the evolving ATM system, and that new tools and methods are required. It is concluded that:
- Pursuing reductionist reasoning may result in excellent subsystems that may nevertheless decrease overall safety and/or efficiency. On the other hand, an overly holistic approach will most likely be very resource demanding.
- Those parts of the current system that survive the transfer to the new system may not fully behave as they do today.
- There will be different levels of equipment installed on different aircraft, giving them different capabilities while still operating in the same airspace.
- As the system evolves, complexity will increase at some organizational levels, while potentially decreasing at other levels. Therefore, the scope of any analysis must be properly defined.


5.6 Appended paper 5 – Airline perspective on future automation performance – increased need for new types of operational data (2012)
The focus of this paper is on the work environment of the pilots, automation issues in general, and the role of automation within an envisioned future ATM system using trajectory-based operations. The potential effect and complexity of future automation is discussed. By addressing these areas, the purpose of this paper is to identify a preliminary framework for understanding how the system works, particularly considering future automation requirements.

To explore potential directions for continued research, interviews were conducted with six European airline pilots. The questionnaire contained rather few, but open, questions to cover the broad aviation automation topic, particularly relating to feedback, perceived automation problems and related training. One question specifically addressed the impact of the future ATM system on the role of the pilots. The results of these interviews can be summarized as follows:
- There seems to be an uncertainty regarding what automation behavior is "normal" or "non-normal".
  - "Automation is probably right, I am wrong".
  - There is unwillingness to report odd FMS behavior deemed to be "normal".
  - Clumsy, time-consuming reporting systems do not encourage the reporting of minor problems.
- Some issues related to training are reported.
  - More in-depth automation system training is asked for.
  - Hands-on training is preferred over theoretical training.
  - Problems in the simulator during training/checking are mentioned as a potential issue, where simulator "odd behavior" is sometimes suspected before assuming one's own mishandling of aircraft systems.
- Future system concerns.
  - Fear of reduced redundancy in the future system is expressed.
  - Conflicting goals between air traffic controllers and flight crew are mentioned in this context.

In summary, there are challenges in finding problems relating to flight crews' interaction with the automated systems on the flight deck. Understanding how the complex evolving ATM system functions, and its flight deck implications, can be even more challenging. It is proposed to further study how airline SPIs are used to understand issues such as:
- What influences human performance in selected operational processes?
- What influences the system performance of the selected operational processes?
- What human performance improves system safety?
- What system performance improves safety?
- What dependencies are there between various indicators?

Supporting this work is a clear understanding of the tools and data formats used within airlines, including:
- The structure and accessibility of the data.
- The tools available for data analysis, including databases and software applications such as methods and models used for statistical and mathematical analysis, trends, correlations etc.
- How data can be represented and presented.
- An understanding of the reliability and validity of chosen measures and data.

Human Factors and performance management are two fields of research that may complement each other to achieve a better understanding of current and future system function and performance with respect to automation. It is concluded that new performance indicators, possibly using novel combinations of flight path data with other data, such as human performance data, could be needed as the air traffic system evolves. However, a balance of perceived gains vs. cost must be struck. Also, the use of more user-friendly reporting tools is proposed.

5.7 Appended paper 6 – Fine-tuning flight performance through enhanced functional knowledge (2020)
As part of the Brantare project, this paper explores a potential mismatch between the flight crew's perceived view of their handling of the aircraft and the automated systems, and actual measured performance. A methodology is proposed that could potentially be used to support individual as well as organizational learning.

A workshop was conducted where 10 participating pilots were asked to state how they expected to configure the aircraft for landing during the final approach phase under various environmental conditions (primarily varying tailwind conditions). Subsequently, flight data from a participating airline, comprising 1159 actual flights under similar, varying meteorological conditions, were analyzed. The results of this analysis were compared to the behavior previously described by the pilots, follow-up interviews were conducted, and the results were discussed.
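In principle, the comparison between stated and measured behavior reduces to grouping flights by tailwind band and averaging the altitude at which each configuration change occurred. The sketch below uses Python with pandas; the column names (tailwind_kt, flaps2_alt_ft, gear_down_alt_ft), wind bands and figures are invented for illustration and are not the parameters or results reported in appended paper 6:

# Sketch only: mean configuration altitude per tailwind band from FDM-derived events.
import pandas as pd

flights = pd.DataFrame({
    "tailwind_kt": [2, 6, 11, 4, 9],
    "flaps2_alt_ft": [2100, 1900, 1700, 2000, 1800],
    "gear_down_alt_ft": [1800, 1650, 1500, 1750, 1600],
})

# Band the tailwind into illustrative categories.
flights["wind_band"] = pd.cut(flights["tailwind_kt"], bins=[-1, 5, 10, 99],
                              labels=["0-5 kt", "5-10 kt", ">10 kt"])

# Mean extension altitudes per band; these could then be set against the
# altitudes stated by the pilots in the workshop.
actual = flights.groupby("wind_band", observed=True)[
    ["flaps2_alt_ft", "gear_down_alt_ft"]].mean()
print(actual)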

Fig. 5.2 shows how the pilots described how they would configure the aircraft for landing under varying wind conditions. The figure also contains the information about actual performance during similar weather conditions. The red ellipses symbolize results from the workshop and the green ellipses symbolize actual flight data.


Figure 5.2. The studied group’s stated average altitudes of extension of different flaps and landing gear configurations for different tail wind situations combined with result from flight data analysis.

The limitations of this study are primarily related to accessibility issues and limitations associated with the use of flight data. However, the study describes a potential learning concept. A difference is shown between one group of pilots' view of their performance and data showing actual performance for a partly different group. By also adding the possibility for pilots to state their post-flight perceived performance, work as perceived vs. work as actually performed could be compared and used as a learning concept for flight crew during normal operations. Post-analysis interviews with 10 pilots showed a positive attitude towards the proposed learning method.


6 Analysis
This chapter elaborates, against the backdrop of the highly automated flight deck, on the interplay between the flight crew and the flight operations and safety departments by means of performance measurement and flight crew training, including barriers to system improvements. The chapter is based on both the previously described theoretical concepts and the results reached.

6.1 System capabilities and competence
Automation has indeed changed the workplace of the pilots. Technological development brings continuous new challenges for the system and for the human operator, who has to adapt to the new technology. Automation is no longer perceived as a separate add-on system that can be treated in isolation. Today, automation is fully integrated, working with other systems in sometimes opaque ways, and together with the pilots, automation is an essential part of the flight deck Joint Cognitive System. The SESAR ATM Master Plan paves the way for more automation and more system- as well as cross-system integration, creating the foundation for new emergent system properties. Technological change, including new procedures and/or equipment, will be burdensome for an airline, where information, training programs etc. will have to be planned, created, and executed. From a practical point of view, the flight deck JCS is largely defined by manufacturers and regulators. The flight operations and training departments support the pilots with relevant feedback on performance, execute training, inform on system updates etc. to increase the JCS competence and capability. This is done essentially given how the hardware part of the system is configured. Another relevant discussion, not included in this thesis, is how actual system performance and reports can contribute to improved system design methods, testing etc.

A high rate of procedural and/or technical change will require any airline pilot to keep up with a larger amount of information, de-learn old procedures etc. However, during the initial interviews conducted in this research, the pilots did not foresee any significant changes in their work situation over the coming 20 years. As almost 20 years have passed, the work environment of the pilots has become even more automated, but the fundamental role of the pilots indeed remains the same.

Several recommendations exist for how pilots should be better prepared to operate the highly automated airliner and to handle the automation issues described in chapter 3. The FAA Performance-Based Aviation Rulemaking Committee and the Commercial Aviation Safety Team's "Flight Deck Automation Working Group" addressed the need, as the level of automation increases, to develop and implement standards and guidance for maintaining and improving knowledge and skills for manual flight operations (FAA, 2013). Other automation-specific recommendations were related to training for improved autoflight mode awareness, as well as to improving the understanding of the use of the Flight Management System, Electronic Flight Bags, performance management calculations etc. Further recommendations address pilot training and the need to monitor the consequences of implementing new operations and technologies, as well as the need to develop new methods and practices for improved data collection and analysis, specifically addressing human performance and underlying factors.


Knowledge and competence at all levels in an organization are important for the overall process outcome, as the Rasmussen socio-technical model captures, both supporting and limiting the individual in the system. The theoretical models included in chapter 3 imply the following requirements:
- A control system should be in place that is able to handle external disturbances, failures or other deficiencies (Leveson, 2004; 2016).
- The system should be able to compensate for variability in everyday performance to stay within defined boundaries/constraints (Hollnagel, 2012a).
- Focus should be on interactions and relationships between humans, as well as between humans and automated system components, to be able to understand how the system functions (Dekker, 2011).
- The system should have the capacity to deal with unexpected events. This is emphasized both at an individual level (Boy, 2014) and at an organizational level (Provan et al., 2020).
- Decentralization is an important part of Safety II and resilience engineering. The role of the organization is then to "guide adaptivity" of workers and systems to create an organizational readiness to respond (ibid).
- Organizations should support proactive learning based on work as done (WAD) rather than work as imagined (WAI) (Ombredane & Faverge, 1955; Hollnagel et al., 2013; Provan et al., 2020).

The system qualities mentioned above are not just system properties; they are created by the people working within the system. This becomes especially clear during unexpected emerging events at the sharp end with potentially catastrophic outcomes, where all the system knowledge must be manifested through the flight crew and the functionality of the flight deck JCS. The collective efforts and organizational resilience must also be manifested in the individual, and that person's capacity, in the local JCS as well as in the greater system. This is not to say that sole responsibility lies with the individual, who in most situations does the utmost to keep operations running safely and smoothly. The pilot's ability and responsibility (there and then, when faced with a challenging situation) should not be confused with retrospective "blame" that may originate at deeper organizational levels.

What, then, is the specific set of knowledge, tools and skills that pilots need in order to function and contribute positively to the performance of the flight deck JCS, to operate safely in a dynamic environment, and by doing so contribute to the airline's resilience? Regarding the question of how to operate the aircraft during normal operations, one of the interview responses was that "all is safe", meaning that the same goal can be accomplished in many different safe ways (appendix G). This statement captures the proposed importance of adaptivity as well as the importance of being able to handle unexpected events (Boy, 2014) to stay within the defined boundaries. What, then, would be required to handle an unexpected, novel, demanding situation on a highly automated flight deck without losing sight of the ultimate goal of keeping everyone onboard safe? Clearly, it is not just a question of having the knowledge and skills; those characteristics must also be possible to put to use in a potentially demanding and highly stressful situation.

The results regarding competence requirements from the studies conducted within the scope of this research give a mixed view of what pilots perceive as important. On the one hand, results from the initial interviews show that workload management and other management skills were deemed very important. These are typical non-technical skills. At the same time, the results also show that there was a need for an increased level of technical knowledge about flight deck automation features. Although focusing training on flying skills (taking them for granted) was rated low by most interview participants, the industry is emphasizing the need to maintain or develop manual flying skills. Potentially this is a result of the time that has passed since the interviews were conducted and of more automated functions on the flight deck, where mode confusion has been deemed a factor in accidents. My interpretation of the results is that there is indeed a requirement (and a stated desire) for a deeper understanding of the automated systems. However, as automated functions proliferate, the managerial and supervisory skills are becoming even more important.

Judgment is a recurring central term at all levels in Rasmussen's risk model. That is also the case at the work level, where pilots are managing a flight. However, the meaning of the word is not the focus of Rasmussen's paper and is not elaborated on in any depth. Rasmussen does, however, briefly address the "extremely important" area of capability and competence of the controllers and decision makers: "Capability or competence here is not only a question of formal knowledge, but also includes the heuristic know-how and practical skills acquired during work and underlying the ability of an expert to act quickly and effectively in the work context" (Rasmussen, 1997, p. 196). At an individual level, judgment can be formulated as "the ability to combine personal qualities with relevant knowledge and experience to form opinions and make decisions" (Likierman, 2020, p. 104).

To me, the organizational readiness to respond (Provan et al., 2020) is built on the flight crew's individual judgment capacity, readiness, adaptivity and airmanship, which in turn support and build the flight deck JCS readiness as well as the greater organizational readiness and resilience. This capacity should in turn be supported by the organization through the use of actual data and proactive learning.

In summary:
- To improve judgment, adaptivity, readiness for unexpected events etc., a system must be supported that allows the pilots to learn as much as possible from their own, as well as others', experience.
- This should be supported both by traditional measurement activities (already available data) and by embracing the resilience concept.
- Much of the flight deck interactions and relationships, between the pilots themselves as well as with the automated systems, is not captured or measured during normal airline flight operations. Still, available data should be used to prepare and train the pilots as much as possible.
- Pilot "readiness" to respond is bound by the organization in a centralized safety management approach (Safety I) but potentially unleashed and promoted by the organization with a decentralized safety management approach (Safety II).

6.2 Socio-technical model further elaborated
During the work of the HILAS project, and the study reported on in appended papers 2 and 3, it became clear that theoretical approaches such as Safety II, resilience, guided adaptivity etc. are not reflected in airline practice. More recent research reaches the same conclusion (e.g. Provan et al., 2020). Earlier, more cognitive approaches to safety, rather than those studying the output of the JCS, are even further distanced from the airline data available on a day-to-day basis.

To capture the research conducted for this dissertation at an aggregated level, the Rasmussen socio-technical system model structure in Fig. 2.1, and the flight crew – flight ops/safety management levels, are further elaborated on in Fig. 6.1 below. The Rasmussen cycle involving the flight crew, described in detail at the end of chapter 2, includes a cycle of performance measurement (logs, reports etc.) and flight operations department judgment, leading to training design and/or information given back to the flight crew. Fig. 6.1 shows the degree of analysis and/or data aggregation applied to the collected data before feedback is given back to the pilots in a selected format, such as training, plain information, direct feedback etc. Here, the training content is, in addition to other regulatory required content, based on aggregated data.

Safety improvements do not have to "go through" only the flight ops or safety department and be fully structured and organized. In fact, any organization should focus on enabling as much autonomous learning and knowledge-sharing activity as possible (as proposed in appended paper 6). The word learning is included in Fig. 6.1 to encompass not only scheduled training activities but any form of learning opportunity. The individual pilot's performance could be compared not only to their own performance during a single flight, but also to aggregated individual data as well as aggregated peer data (the circles in the figure).

Figure 6.1. Level of data sophistication and aggregation available for risk management judgment and/or crew feedback.

The two vertical arrows in the right part of the figure represent information (raw data) going between the levels in the model. The up arrow represents measurement activities capturing the flight deck JCS activities and outcomes. During severe incidents and accidents richer data can be captured. The down arrow represents raw data feedback to the flight crew. Normally such information is scarce unless operational exceedances have occurred. The horizontal arrow going from left to right at the bottom of the figure represents learning taking place during scheduled training activities. The top arrow pointing to the left represents the level of data aggregation and sophistication. At the far end of that arrow the SPIs reside.

The diagonal arrows in between, pointing towards the flight crew, represent possibilities for the airline to give the individual flight crew feedback on own as well as group performance. The two circles named "compare" symbolize how the flight crew can compare their own perception of performance either with their own performance on a single-flight basis or with aggregated personal performance. Personal performance can naturally also be compared to that of others. Fig. 6.1 can be seen as a partial application of the supervisory control model (Sheridan, 1987) to the risk management model (Rasmussen, 1997), capturing potential open-loop learning opportunities based on captured data. Double-loop learning at the flight operational level can be inferred although it is not specifically pictured. By expanding the picture to include the hazardous process itself, flight crew closed-loop learning could also be included.
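As a toy illustration of the "compare" circles in Fig. 6.1, feedback could set a single flight's value of some measure against the same pilot's aggregated history and against an aggregated (anonymized) peer value. The sketch below, in Python, uses an invented measure (gear extension altitude) and fabricated numbers:

# Sketch of the comparison idea in Fig. 6.1: one flight vs. own aggregate vs. peer aggregate.
# The measure and all figures are invented for illustration.
from statistics import mean

own_history_ft = [1750, 1680, 1820, 1700]   # the pilot's earlier gear-down altitudes
peer_history_ft = [1600, 1650, 1580, 1620]  # aggregated, anonymized peer values
this_flight_ft = 1900                       # the flight just completed

print(f"Gear extended at {this_flight_ft} ft "
      f"(own average {mean(own_history_ft):.0f} ft, "
      f"peer average {mean(peer_history_ft):.0f} ft)")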

6.3 Barriers for improved performance measurement and training/learning
6.3.1 Performance measurement
Although aviation safety at a regulatory level is largely measured in the number of fatalities per number of passengers carried, a larger number of incidents and non-fatal accidents occur. However, lacking many accidents or incidents to learn from, and given the inherently retrospective aspect of past events, airlines increasingly must rely on other forms of data to maneuver and stay within the acceptable safe area of operations. During the HILAS project, it was observed that in the airline studied various parameters were monitored for exceedances and other data were monitored for trends. Other, or the same, parameters were used to together represent a "level of safety", usually through the use of defined Safety Performance Indicators. Regulations are not very explicit about how specifically safety should be measured and what safety performance indicators to use. SPIs were used at a rather high level, with the number of accidents and serious incidents at the top. However, indicators used below the highest level were often of a nature where they are easily definable and collectable, such as the number of runway incursions/excursions, hard landings, unstabilized approaches without a go-around etc. The aggregated nature of safety performance indicators, mostly capturing rare events/incidents, provides little opportunity for the organization to develop tailored training or feedback programs for individual pilots. On the other hand, flight crew work in an environment with a high acceptance of normal operational performance monitoring (e.g. line checks and Flight Data Monitoring) that enables the generation of large amounts of data on normal operational performance. Measuring possibilities are clearly available. However, internal company confidentiality agreements restrict the use of available flight data, as seen in both the HILAS and the Brantare project.
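To illustrate the kind of exceedance monitoring referred to above, a single FDM-style rule can compare a recorded parameter against a fixed threshold at a gate altitude and flag the flight. The sketch below is in Python; the parameter, gate height and margin are invented examples, not the studied airline's actual event definitions:

# Illustrative exceedance check: flag flights faster than Vapp + margin at a gate altitude.
# The 500 ft gate and 15 kt margin are invented, not operator limits.
def speed_exceedance_at_gate(samples, gate_alt_ft=500, margin_kt=15):
    """samples: iterable of (altitude_ft, ias_kt, vapp_kt) at 1 Hz during the approach.
    Returns True if indicated airspeed exceeds Vapp + margin at the first sample
    at or below the gate altitude."""
    for altitude_ft, ias_kt, vapp_kt in samples:
        if altitude_ft <= gate_alt_ft:
            return ias_kt > vapp_kt + margin_kt
    return False  # the gate was never reached in the recorded data

# Example with fabricated samples descending through 600, 500 and 400 ft:
approach = [(600, 150, 132), (500, 149, 132), (400, 146, 132)]
print(speed_exceedance_at_gate(approach))  # True: 149 kt > 132 + 15 kt at the gate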

The 5-step method implementation work reported on in appended papers 2 and 3, conducted in the HILAS project, took place at a time when, at least within the research community, resilience engineering and the move from focusing on what goes wrong to what makes things go right was under way. Still, the predominant airline safety model was that of Safety I, which assumed that simplified causality chains should be possible to uncover. The research was conducted in good faith, hypothesizing that data would be available but simply not used to its fullest potential. In retrospect, this seems uninformed and potentially futile, considering the airline tendency to measure what can be easily measured, e.g. through the FDM system and mandated reporting, and the focus on higher-level output metrics, not fully considering complexity, emergent properties, and more recent approaches to safety.

Consequently, the study encountered a number of problems, several of which may explain the difficulty for airlines to embrace a Safety II approach (see further below under "barriers"). At the same time, trying to follow the 5-step method during the implementation attempt at the participating airline led to the airline's set of key safety performance indicators being revised and updated.

Although the study did not set out to change the current airline metrics, but rather to use available data to the fullest, it was observed that the data contained little information about human-machine interactions. To some extent, such information can be deduced from FDM data (such as speed selections, configuration changes etc.), but more intricate interactions are not captured. It was also found that airline practice is largely outcome-based, measuring how well operations (mainly the JCS of the aircraft) function, partly manifested in Safety Performance Indicators.

6.3.2 Training/learning
New ways to enhance the transfer of knowledge from basic initial training to the airline environment were proposed in appended paper 1, by moving towards a more holistic training approach, addressing automation consistently throughout training. Possibilities exist to improve the overall training process if airlines are willing and able to share their knowledge about the actual knowledge and skill requirements for working on the modern flight deck with the flying schools responsible for the initial phases of flight training. New ab initio training aircraft include more automated functions than the aircraft predominantly in use when the first paper for this dissertation was written. This potentially eases the distributed training argued for in appended paper 1, helping student pilots build a mental construct of how to operate the highly automated flight deck of the modern airliner. Also, some airlines have natural ties to the ab-initio training stage through the MPL program.

As described in chapter 2, training activities are normally fixed and scheduled at regular intervals according to regulatory requirements. As training implies formal scheduled training activities, such as simulator training sessions, learning, as included in Fig. 6.1, better captures the ongoing learning activities also taking place during other work activities, including experience-based learning. Using data to its fullest, which was the idea behind appended papers 2 and 3, to support learning in a wider context is explored in appended paper 6.

During the initial explorative interviews, line flying experience was deemed important in relation to knowledge and handling skills of the automated systems. Further, it was suggested that possible shortcomings during training could be addressed and compensated for during initial line training. During normal line flying pilots continuously gain experience, but there is little opportunity for structured learning outside the traditional training/checking sessions in the simulator. Ideally, airlines would benefit from being able to use already available data, or easily accessible new data, in ways that avoid relying only on reactive data, to become more proactive and predictive. During the follow-up interviews (appended paper 6), 10 out of 10 participants wanted feedback on their own performance, supporting a flight crew desire for more structured learning outside traditional training activities. With small measures it should be possible to give the individual more access to his/her own performance, both in relation to the desired performance and in relation to peers, but also in relation to one's own mental picture of how a certain task has been performed, as addressed in appended paper 6. Self-reports and sharing knowledge and experiences internally to a larger extent could be another step forward supporting this.

6.3.3 Barriers for potential improvements
There is a challenge in bridging the gap from research theory to the current airline environment. The current airline measurement framework is largely outcome-based. Automation issues as described in chapter 3, whether seen as constructs or actual contributory factors (in a greater systems perspective), are not measured as such but are rather the constructs of a possibly subjective accident or incident investigation. This is not meaningful during normal operational performance measurement. Also, for airlines to stay in business, compliance with regulatory requirements is an absolute necessity. Using theoretical approaches to safety, which are not easily implemented and not supported by regulations, is not an absolute requirement. The table below summarizes identified barriers for improved training and performance measurement.

Table 6.1. Barriers for improved performance measurement and training.

Barrier: The system is very safe (Addressed in: HILAS)
Explanation: The lack of a large number of serious incidents and accidents to learn from forces airlines to rely on normal operations data to understand what is going on. However, measurement activities capture outcomes, only highlighting when defined boundaries have been exceeded. The SPIs used are of too high a level to be of use for individual learning.

Barrier: Safety II vs. airline practice (Addressed in: HILAS, Paper 2 & 3)
Explanation: The result from the HILAS project and the conducted implementation attempt, considering several of the difficulties discovered trying to apply the proposed method, explains why the Safety I approach still dominates in the industry:
- Information for airlines on how to in practice handle the vast amount of data available within their system is not readily available.
- Selecting more, potentially more predictive, lower-level key performance indicators and related target levels is difficult if validated relationships cannot be deduced between various parameters and potential outcomes.
- Human factors data and potential contributory factors (which are themselves not stable factors) are in many cases not easily identified in reports or in the subsequent investigation, if conducted.
- There is a high level of subjectivity in how things are reported and analyzed.
- It is difficult to match external information with the way data is managed in the participating airline.
- Safety II metrics, such as activities conducted to make things go right, relationships and interactions, are generally not included in available data.
- Standardizing and integrating data from various sources and (legacy computer) platforms is time consuming and costly.

Barrier: System changes and cross-system interactions (Addressed in: HILAS, Paper 4 & 5)
Explanation: The increased complexity that may follow as the system evolves points at the need for new models capturing the greater aviation system. Current normal operations measurement systems are confined to the respective airline, or a specific air traffic control unit etc. As the system evolves, with more system integration across previously independent system parts, performance measurement activities will not capture the full system functionality without new tools and methods. As addressed in appended paper 4, cross-system behavior, interactions and processes should be included.

Barrier: Holistic automation training approach (Addressed in: Paper 1)
Explanation: Various practitioners, different regulatory frameworks and organizational bodies administer different training phases, making coordination difficult. Also, instructors at the initial training phases often have little or no experience of what work on a modern automated airliner requires. Theoretical references and arguments for pulling automation-related training forward (earlier) in the training process are made in the first appended paper. However, linking such actions to actual improvements in flight safety can be difficult.

Barrier: Unwillingness to report (Addressed in: Paper 5)
Explanation: In appended paper 5 an unwillingness to report e.g. odd automation behavior was reported. Also, clumsy, time-consuming reporting systems do not encourage the reporting of minor problems (Ulfvengren, 2007).

Barrier: Limitations to the use of data (Addressed in: HILAS, Brantare, Paper 6)
Explanation: Limiting for the research conducted for appended paper 6 was how flight data could be used. Agreements between the pilot union and the airline prevented individual use of data extracted from the FDM system. Also, it is largely the output of the flight deck JCS that is captured by the FDM system, rather than individualized data, making it challenging to use captured data to tailor individual learning activities.

Barrier: Financial constraints and regulatory compliance (Addressed in: HILAS, Brantare, Paper 5 & 6)
Explanation: More in-depth automation system training is asked for in appended paper 5, and e.g. Boy (2014) points at the importance of the training time in simulators, especially to practice unfamiliar situations. With unclear return on investments, willingness to invest in non-regulatory required work is limited.


7 Discussion and Conclusions
This chapter discusses the obtained results as well as the potential for system improvement, includes a critical review and limitations related to the research conducted, and concludes the research.

7.1 Results vs. theories
Over the time of the work on this thesis, research as well as industry focus has shifted from individual automation issues, such as automation-induced surprises and automation bias, to a more holistic flight deck automation approach. This is e.g. manifested in the increased focus on flight path management as the overarching goal. Also, rather than solely focusing on what goes, and can go, wrong, more emphasis is put on the human's role in keeping operations running smoothly in a dynamic and challenging environment. A theoretical shift from Safety I to Safety II has been proposed. At the same time, airline practice and regulations still largely fit the Safety I approach.

Several issues related to working with automation, as described in chapter 3, have been identified, and these issues remain a factor during any normal line flight. New issues will emerge as the system evolves. However, during normal line flying, most deviations or disturbances related to the use of automation are absorbed and handled without significant implications for the continuation of the flight. This is the core idea of Safety II and resilience. As few severe incidents and accidents occur, airlines rely on information about how normal operations function. In addition to the largely reactive accident and incident reporting, flight data monitoring programs are generally seen to add a more predictive capability. However, the fact that performance measurement and the use of safety performance indicators are regulatory mandated is no guarantee that they are in fact used in an optimal way. As seen in the HILAS project, data generated from such programs are not fully utilized.

To become more proactive and predictive, one of the key issues is to define the logic behind data management of current and historical data to provide a view of future risk and of where the airline is operating in the safety space. Analysis of trends over time is central in discovering areas in need of further attention and deeper analysis, but also in understanding the effectiveness of measures undertaken. Understanding trends requires sufficiently robust and detailed data, as well as an interpretive capacity. One of the problems is that essential indicators in one context may not be the essential ones in another context. Also, underlying, potentially significant, environmental differences may not be given enough attention. For example, hard landings may depend on short or slippery runways, gross weight, crosswind, turbulence, wake vortex etc. Similarly, other pilot behaviors may be attributed to environmental factors not easily recorded. Also, recorded data, e.g. from cockpit voice recorders or from the FDM system, never disclose the complete work context, as these monitoring systems do not capture much of the essence of the actual interaction between human and automation. As concluded in the 5-step implementation work (appended papers 2 and 3), finding meaningful correlations between numerous variables and parameters is not easy, not only in choosing what data to include but also in how to categorize data. In that way, those conclusions clearly support the move towards a Safety II approach. However, as proposed in the section below about potential improvements, Safety I (and regulatory compliance) and the use of data should be used consciously and deliberately.
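One way to make the trend analysis mentioned above concrete is to track an indicator, such as the monthly rate of unstabilized approaches, with a rolling average. The sketch below uses Python with pandas; the monthly counts are fabricated and the three-month window is an arbitrary illustrative choice:

# Sketch of simple SPI trend monitoring: rolling rate of unstabilized approaches per 1000 flights.
# All figures are fabricated.
import pandas as pd

monthly = pd.DataFrame({
    "flights": [900, 950, 1020, 980, 1010, 990],
    "unstabilized": [4, 6, 5, 9, 8, 10],
}, index=pd.period_range("2019-01", periods=6, freq="M"))

monthly["rate_per_1000"] = 1000 * monthly["unstabilized"] / monthly["flights"]
monthly["rolling_rate"] = monthly["rate_per_1000"].rolling(window=3).mean()
print(monthly[["rate_per_1000", "rolling_rate"]])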

The results of this research do not in themselves contradict theories and previous knowledge, e.g. related to known automation issues. Rather, they show that the available data was neither specific enough nor in a format suitable for finding human-automation related problems. Nor was it possible for the implementation attempt working group to create predictive data causalities to address those potential problems in a meaningful way. Some classification categories, such as automation complacency, even if they were available, may oversimplify the human-machine interaction and in reality do not provide meaningful information (Dekker & Woods, 2002). These results support the move towards a Safety II approach, where the improvements proposed below are not primarily related to showing how clear correlations between various factors and potential outcomes could be established using available data, but rather reflect the idea of safety as an emergent property.

Theoretical arguments for the impact of cross-system interactions on complexity, and the potential emergent properties of such a system, are made in appended paper 4. To maintain or improve system performance, new methods and models are deemed necessary. If the aviation system develops as envisioned, fully measuring and understanding various sub-systems (such as an airline) may not be enough to increase safety further. This is supported by research addressing issues related to performance measurement across organizational boundaries and the associated inter-organizational performance management activities (Okwir, 2017; Barchéus, 2007). Although working within the same operational context, various stakeholders, such as the airline or air traffic management, may have different, sometimes conflicting, priorities and interests, while each organization still plays a significant and essential role for the quality and efficiency output of the overall system.

7.2 Potential for improvements
Based on the research conducted, this section addresses some areas where potential improvements could be made related to performance measurement and training in an airline. At the onset of the research for this dissertation I hypothesized that data related to human-automation interactions would be available and could be used to make the airline more predictive and to adjust training programs etc. accordingly. It turned out that such data generally was not available. In addition, the increasing acceptance of emergent system behavior and Safety II thinking implies that, even if such knowledge had been available, causality and validated relationships between various parameters and future outcomes cannot be established in a meaningful way. Consequently, several of the areas included in the table below are of a more generic nature, not specifically addressing the automated flight deck environment as such. However, they do recognize the airline environment and the corresponding data availability. Using research resources not normally available in an airline for improvements is not addressed in the table, but is discussed in a separate section below.


Table 7.1. Potential for improved performance measurement and training.

Maximizing Safety I and current airline practice output:
A data analytical capacity and strategic thinking ability at the flight operational department level is key for successful performance-based operations. Supporting this work is a clear understanding of the tools and data formats used within airlines, including:
• The structure and accessibility of the data.
• The tools available for data analysis, including databases and software applications, such as methods and models used for statistical and mathematical analysis, trends, correlations etc.
• How data can be represented and presented.
• An understanding of the reliability and validity of chosen measures and data.
An illustrative sketch of this kind of analysis is given after the table.

Doing Safety I work can lead to positive side effects not related to assumed causality chains, such as increased self-awareness and a review of current practice (appended paper 3). This in itself may justify future efforts in the area. At the same time, it shows that management is fully committed to safety and willing to develop and strive towards improving its organizational and individual learning capabilities. Without actively working with safety data and reviewing current methods and practices, there is little chance of improvement and change.

Several of the older safety models, such as the Swiss Cheese Model or the SHELL model, or terms such as situation awareness, complacency, lack of communication, coordination etc., can be seen as "constructs", implying blame, in retrospect following incidents or accidents. However, such terms can help to "talk about" safety and CRM related interactions and relationships, especially in the context of flight crew training and learning.

Towards a Safety II approach:
A performance measurement and safety management system structure is in place with a Safety I approach. However, a change in focus towards measuring other things, using the same organizational capacity, should shift attention towards areas defined as essential in a Safety II approach.

The environment is dynamic and changing, with new technology being introduced. Consequently, a more dynamic approach would be natural, with measures that capture information related to the change itself, rather than looking at a specific set of rather static high-level indicators in isolation.

Organizational processes can be reviewed through audits, management reviews and other ongoing activities. However, regulatory compliance is no guarantee for efficient and innovative safety work, operational processes, and resilience. Audits, such as high-level IOSA audits, focus more on regulatory compliance activities than on innovative new processes, adaptivity, decentralization, collaborative efforts and openness at the operational staff's end of the organization.

Potential indicators giving information about airline adaptivity could, for example, be in the areas of:
• Training program adaptation to the individual and/or to other factors. This could be a relevant indicator of how well an airline's measurement system functions as well as how adaptive the airline is based on available data and other factors. If performance-based training worked as intended, the training hours would be adapted based on the identified training needs, rather than squeezed into a fixed training setup. High numbers of senior pilots are retiring, meaning many new pilots in the right seat at the airlines. At the same time, the number of yearly training hours remains the same, or even decreases on average. This could be an indication that economic conditions do not allow performance-based training programs, with a potential correlation to the airline's resilient capacity, to function in an optimum way.
• The dynamics of the number of interactions between e.g. the HILAS-defined real-time operational support level and the flight crew.

Cross-system interactions:
Define and agree on the use of common metrics spanning several organizational entities, such as technical and operations departments, or both airlines and ATC units.

Holistic automation training approach:
Aspiring pilots are most often self-sponsored, and any ties during their initial training to the future environment on a major jet transport aircraft are random and limited at best. Creating closer ties between airlines and basic training schools, and increased use of airline mentoring programs (without actual obligations to hire students), should help set the right tone for young pilots working with various levels of automation from the onset of training.

The fact that more automation and sophisticated equipment has found its way into the basic training aircraft gives this problem much better conditions to be addressed in a natural way.

Increased reporting:
Encourage easy-to-use reporting methods. The main goal of reporting should move from simply reporting mandated occurrences and deviations towards feedback on system design and processes, collecting information about hazards, making it easier to follow up on changes etc. Information and knowledge gained should be used for flight crew training design. At the same time, the airline must have the resources and capacity to respond to reports received.

Wider context of learning:
New methods (such as the one proposed in appended paper 6) could be made available and used to optimize learning opportunities based on already available data. Exploration and experimentation may be a way forward, where some new measuring methods and learning opportunities could prove useful, and others less useful. Validated relationships to flight safety and high-level SPIs will often be unclear. However, using already available data and systems could keep costs down.

Capitalizing on positive knowledge (and learning from mistakes) is central to being able to continuously improve. Any knowledge could be used to its fullest to promote continuous learning, both at the organizational and the individual level.

Overcoming limitations related to the use of data:
Voluntary use of individualized data could be a first step towards a greater acceptance of using normal operations flight data for learning in a wider context and more individualized training setups.
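The sketch below illustrates the analytical capacity described under "Maximizing Safety I and current airline practice output" above. It is a hypothetical example only: the variable names are invented and the data is randomly generated. It shows the mechanics of screening many recorded parameters against an outcome indicator; as argued elsewhere in this chapter, such correlations rarely translate directly into validated causal relationships.

```python
# Hypothetical sketch of exploratory correlation screening between recorded
# flight parameters and an outcome indicator. Names and data are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500  # hypothetical number of flights

flights = pd.DataFrame({
    "duty_hours_before_flight": rng.uniform(0, 10, n),
    "sector_length_hours":      rng.uniform(0.5, 9, n),
    "precision_approach":       rng.integers(0, 2, n),
    "crosswind_kt":             rng.uniform(0, 30, n),
    "unstable_approach":        rng.integers(0, 2, n),   # outcome indicator
})

# Rank every recorded parameter by its absolute correlation with the outcome.
corr = (
    flights.drop(columns="unstable_approach")
    .corrwith(flights["unstable_approach"])
    .abs()
    .sort_values(ascending=False)
)
print(corr)
```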

7.3 Application of theoretical models to airline safety processes

Reports can include references and information related to latent conditions, slips and mistakes, and may even make references to models such as the Swiss cheese or the SHELL model. Obtaining such data from the FDM system, as currently designed, is not possible, or at least very difficult. Theoretical models have influenced airline operational procedures as well as training. SOPs require cross-verification between pilots when making certain system changes, operating specific switches etc. The use of Threat and Error Management (TEM) is mandated in many airlines. Models on how to deal with various non-normal situations exist and are to some extent based on decision-making theories such as the NDM model. But to what extent can theoretical models be used to guide airlines' safety work and serve as a basis for what to look for during performance measurement activities aimed at becoming more proactive and predictive? Although models such as the supervisory control model, the skill-, rule- and knowledge-based levels of information processing behavior, and the distinction between slips, errors and mistakes make perfect sense in theory, many of these models and concepts are difficult to transfer to actual procedures or company processes such as performance measurement activities.

Not only may the actual application of newfound knowledge be difficult, it could also be counterproductive if not carefully considered when academia or safety experts interact with airlines to transfer knowledge deemed valuable for safety improvement purposes. Almklov et al. (2014) concluded that "safety scientists, safety professionals, and organizations that hire safety professionals need to be sensitive to the possibility that their well-intentioned efforts to promote safety may lead to a marginalization of local and system-specific safety knowledge" (ibid, p.25). Similarly, Dekker (2018) proposes that safety, as well as the human experience of work, improves when top-down hierarchical and bureaucratic approaches are replaced by local ownership, autonomy and collaborative problem-solving. This would foster the motivation, creativity and diversity that are necessary organizational characteristics to successfully manage safety in a complex, non-deterministic world.

Many areas are difficult, if not impossible, to regulate in detail. Consequently, it is to a substantial extent up to the individual organization to be smart rather than only meeting a non-specific regulatory framework. With flight safety at such a high level, few incentives exist to develop the existing airline measurement framework, let alone to find completely new methods. Even tragic events such as the recent 737 MAX accidents, although having profound effects on the aircraft manufacturer, are unlikely to fundamentally move airlines to allocate significantly more resources to this area. Here, applied research could play a significant role with the participation of airlines and regulatory bodies. To avoid specialists marginalizing experience-based knowledge, efforts should be made to acknowledge, support and enable all forms of learning and knowledge activities, both at an individual and at an organizational level, thus strengthening the organizational resilience, where the role of the pilots can be seen as an essential part of that resilience.

7.4 Limitations

The area of study is large. The appended papers are consequently not focused solely on one of the research areas, e.g. performance measurement, which could have provided more depth in one of the research areas selected. To be able to use a theoretical framework, Rasmussen's risk management model for socio-technical systems was selected and applied to an airline context. This model is several decades old but was deemed appropriate for capturing the complexity of the selected research areas. Other researchers have found it useful to adopt the system-oriented approach of Rasmussen. In their research on work functions and innovation, Asplund and Ulfvengren (2019) refer to Rasmussen's cognitive systems engineering as useful when considering the work of safety engineers in industrial firms.

The Safety I and Safety II concepts as described and referred to in this thesis are not absolute and undisputed facts. However, these theoretical concepts potentially capture the essence of different approaches to safety. In particular, the Safety II term focuses on the operator, and consequently on the operator's knowledge, skills, and capacity to adapt and provide resilience in a challenging and dynamic environment. Since this thesis largely addresses how to use available data and information to strengthen the pilots in their capacity as operators in the aviation system, these terms are considered relevant for this thesis.

7.4.1 Evaluation and scrutiny of methods

One threat to the internal validity of this research is the time passed since conducting the research for appended papers 2 and 3. However, although there has been a move towards a more performance-based regulatory environment, these regulations, as well as regulations related to an airline Safety Management System, were already in place at the time of the research for these papers. The same issue of time is relevant to consider, especially for appended paper 1. Although this appended paper includes no field study, the conditions under which ab-initio flight training is conducted have changed during the 20 years that have passed since the paper was written. More automated training aircraft have been introduced, and with the introduction of MPL training, ties between airlines and the initial training organizations have strengthened. This can be seen as support for the argument made in that paper. As described in appended paper 6, there were limitations to the use of flight data preventing the association of individual participant responses with recorded flight data from that same individual, which poses a threat to the internal validity of the study of the proposed learning method. This renders the study somewhat conceptual, while still addressing the potential attitude of the pilots towards the proposed method. This limitation should be addressed in future research if conditions allow.

The research method for each paper is clearly stated, and reliability should be satisfactory. Limitations in methodology are presented in the respective appended paper where applicable. The question of whether more allocated time and/or resources for the implementation attempt described in appended papers 2 and 3 could have given a different result remains somewhat unanswered. My conclusion is that more resources could potentially have shown causality in some areas, e.g. regarding rostering and fatigue, but that the method as such, exploring a Safety I approach, would not have succeeded in finding clear relationships between different parameters and outcomes in terms of actual incidents or accidents.

Data saturation, the point when "no new information or themes are observed in the data" (Guest et al., 2006, p.59), was reached to a varying degree in the interviews conducted during this research. The first set of initial interviews (appendix G) did not reach data saturation but still guided my future work. The interviews conducted for appended paper 5 did not reach full data saturation but were deemed sufficient for the purpose of that paper. The interviews included in appended paper 6 did reach data saturation.

7.5 Time aspect

Overall, this research has been conducted over a long period of time. As described above, there may be a problem comparing results from the earlier papers to the more recent appended papers. However, there is also the benefit of studying the evolution of the aviation system firsthand, related to flight deck automation, the move from Safety I towards Safety II, etc. During the research period the following trends were observed:
• The ongoing technological development towards increased automation and system integration continues. Consequently, the nature of automation problems has changed since the initiation of the thesis work as technology has matured. Automation has increasingly become a natural part of the flight deck environment. It "is" the environment rather than an add-on. Also, initial training aircraft have evolved, so that automation management and related issues can be more naturally addressed.
• Training increasingly relies on relevant data on what is needed to train rather than simply performing the items required by the prescriptive regulatory training curricula. The regulatory move towards performance-based operations emphasizes the need for effective performance measurement processes.
• A theoretical shift, at least among parts of the research community, from causality to emergence, from Safety I to Safety II, has occurred.
• Despite the regulatory introduction of SMS, available measuring methods have remained largely the same over a long period of time, lacking cross-system capabilities.

7.6 Contribution

This thesis contributes knowledge in the field of airline performance measurement and flight crew training. All research was conducted in the real airline world using actual airline flight data. In the HILAS project, several airlines, as well as other organizations such as universities, research institutes and maintenance organizations, took part and shared knowledge and information. In the Brantare project, in addition to the research group, two different airlines participated. Consequently, the knowledge gained and shared in this thesis should be possible to apply (generalize), not only to other airlines than those participating in this research, but also to other highly automated safety-critical domains with a high acceptance of performance being measured and analyzed. Critical to the external validity may be the fact that all studies have been conducted in a European environment. Although airlines in Europe all operate under the same regulatory requirements, differences naturally exist. Such differences, related to the airlines being subject to observation and/or participation, could have affected the result. Also, other parts of the world may have a different attitude towards information sharing and the use of data for individual learning, affecting the external validity.

7.7 Future research

Although some issues have been highlighted during this research, it also points to areas in need of further attention.

To better validate the usefulness of the learning method proposed in appended paper 6, it would be valuable to repeat the research conducted in the Brantare project, but, as actually proposed in the paper, using individualized data throughout the research.

Organizational learning and continuous improvement are well-known concepts. Future research should focus on how structured learning could be more present in the day-to-day work of the pilots, and on how to better share knowledge and improve learning in normal everyday operations.

Currently, performance measurement related to normal flight operations captures data that largely comes out of the flight deck JCS. It does not capture a richer set of information about flight deck interactions and does not reveal the true nature of the human-machine relationship. This limits the usefulness of such information for feedback and individualized training design. Related to this is the potential use of real-time, on the flight deck, monitoring of the nature of the pilot-automation interactions. This could include the use of AI, IAS (Intelligent Adaptive Systems), Augmented Cognition, Operator State Monitoring etc. This would pose new questions regarding flight crew acceptance and ethical issues, which could be an important area to address further.

Resilience engineering and a Safety II approach have been discussed for a long period of time without being fully integrated and adopted by the industry. Further studies on the actual practical differences between a Safety I and a Safety II approach would be valuable. As part of this work, it should be addressed how the current, regulatory required, airline performance measurement framework and the organizational processes already in place can be developed to better support a Safety II approach. Also, as part of this process, the value of other measures, showing the adaptability and the dynamics of an organization, should be explored further. Detailed information related to normally unmonitored human-automation interactions, such as flight guidance commands, FMS interactions etc., and how they potentially change over time could also be addressed further.
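As one hedged illustration of such a measure, the sketch below counts recorded flight-guidance and FMS interaction events per flight and summarizes how the monthly distribution of that count evolves. The event names and data are invented; any operational use would require access to decoded FDM or avionics event streams that, as noted above, are not normally monitored today.

```python
# Hypothetical sketch: following the volume of pilot-automation interactions
# over time. Event names and data are invented for illustration only.
import pandas as pd

events = pd.DataFrame({
    "flight_id": ["A1", "A1", "A1", "B2", "B2", "C3"],
    "month":     ["2020-01", "2020-01", "2020-01", "2020-01", "2020-01", "2020-02"],
    "event":     ["AP_MODE_CHANGE", "FMS_ENTRY", "AP_MODE_CHANGE",
                  "AP_MODE_CHANGE", "FMS_ENTRY", "AP_MODE_CHANGE"],
})

# Number of recorded interactions per flight ...
per_flight = events.groupby(["month", "flight_id"]).size()

# ... and how the monthly distribution of that count evolves over time.
monthly_profile = per_flight.groupby(level="month").describe()
print(monthly_profile)
```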

As the aviation system evolves and the use of automation becomes even more widespread, new cross-system measurement methods are desired. How such a measurement system should be designed, with a larger use of common aviation system metrics, could be another area of interest to study.

7.8 Conclusions

Large amounts of data that are available in airlines are not used. There is uncertainty about what to do with the data, causality and relationships are unclear, or resources are lacking to address the area in more depth. Due to the already high flight safety levels, investments in, and exploration of, new performance measurement processes and activities are not prioritized. Current airline practice largely reflects a Safety I approach. With unclear causality between the various parameters recorded and actual outcomes, it is difficult for airlines to use available data as a source for confident training design. This is especially the case for safety performance indicators, which often are outcome based at a high level.

Despite the ideas of resilience and Safety II, aviation regulations and airline practice have not fully embraced these theoretical concepts. Many factors likely contribute to this inertia and slow change. From an airline perspective, safety levels are deemed good enough, and few incentives exist for airlines to invest in new concepts unless required to do so by new regulatory requirements. From a regulatory perspective, the Safety II approach is still perceived as theoretical reasoning rather than as a validated method with proven results. Consequently, the way forward should include a clarification of the practical differences between the Safety I and Safety II approaches, and of how regulatory changes and practical airline activities aimed at improving flight safety can be conducted without abandoning current processes and activities.

The current measuring systems are not ideal for specifically measuring automation-related problems and flight crew-automation interactions. Only during a thorough accident or incident investigation is a richer data picture painted, although in hindsight, that can be used for learning or other adjustments. Moving from a scheduled training activity mindset to a wider learning and knowledge management and sharing concept could be a cost-efficient way forward. By various means, normal operational flight data could be used for this purpose.

The future ATM system envisioned by SESAR, with higher efficiency and more cross-system integration, will render the current measurement systems insufficient for understanding difficulties and possibilities in the greater aviation system.

Safety work matters. The intended outcome of work performed may not always be achieved, but other unexpected changes and benefits may surface. As described in appended paper 3, the work looking for causality led to a revised set of safety performance indicators. Such a revision naturally cannot be confidently understood as leading to a higher flight safety level. Still, what is the alternative to exploring, trying to improve, and talking about safety? The proposal included in this thesis, to shift the mindset about reporting from required occurrence reporting towards more proactive safety thinking and information sharing throughout the organization, reflects that. Also, the notion of safety as an emergent property and Safety II implies a degree of uncertainty that must be embraced and acknowledged. Here, judgment, central in the Rasmussen model, plays a crucial role.


Final thoughts

In the current highly competitive airline business, the cost associated with any measurement and data analysis, and subsequent activities such as training, procedural changes etc., must always be balanced against the potential outcome, however difficult that outcome may be to quantify. Proof that changes or activities undertaken will keep the airline out of trouble will most of the time not be available. However, using data as guidance on where to point the flashlight, addressing potential problems, and striving to improve remain core activities within any healthy airline.


8 References

Abdelmoula, F., & Scholz, M. (2018). LNAS – A pilot assistance system for low-noise approaches with minimal fuel consumption. In Proceedings of the 31st Congress of the International Council of the Aeronautical Sciences. Brazil, September 9-14, 2018.

Adriaensen, A., Patriarca, R., Smoker, A., & Bergström, J. (2019). A socio-technical analysis of functional properties in a joint cognitive system: a case study in an aircraft cockpit, Ergonomics, 62:12, 1598-1616. https://doi.org/10.1080/00140139.2019.1661527

Almklov, P. G., Rosness, R., & Størkersen, K. (2014). When safety science meets the practitioners: Does safety science contribute to marginalization of practical knowledge? Safety Science. 67, 25-36.

Amalberti, R. (2006). Optimum System Safety and Optimum System Resilience: Agonistic or Antagonistic Concepts? In E. Hollnagel, D. D. Woods, & N. Leveson (Eds.). Resilience Engineering: Concepts and Precepts. Ashgate Publishing Ltd: Aldershot, UK.

Argyris, C., & Schön, D. A. (1974). Theory in Practice: Increasing Professional Effectiveness. Jossey-Bass.

Ashby, R. W. (1956). An Introduction to Cybernetics. Methuen.

Asplund, F., & Ulfvengren, P. (2019). Work functions shaping the ability to innovate: insights from the case of the safety engineer. Cognition, Technology & Work. https://doi.org/10.1007/s10111-019-00616-w

Bainbridge, L. (1983). Ironies of Automation. Automatica, 19(6), 775-779. https://doi.org/10.1016/0005-1098(83)90046-8

Bar-Yam, Y. (2002). General Features of Complex Systems. Encyclopedia of Life Support Systems. EOLSS UNESCO Publishers.

Barchéus, F. (2007). Who is responsible? Communication, coordination and collaboration in the future Air Traffic Management system. Doctoral Thesis. KTH Royal Institute of Technology, Department of Industrial Economics and Management.

Barchéus, F., & Mårtensson, L. (2007). Air traffic management and future technology – the views of the controllers. Human Factors and Aerospace Safety, 6(1), 1-16.

Barnett, A. (2020). Aviation Safety: A Whole New World?. Transportation Science. https://doi.org/10.1287/trsc.2019.0937

Bateson, G. (1958). Naven (2nd ed.). Stanford University Press.

Bentley, L. D. (2007). Systems Analysis and Design for the Global Enterprise – 7th edition. McGraw-Hill/Irwin.


Billings, C. E. (1996). Aviation Automation – The search for a human-centered approach. Lawrence Erlbaum Associates.

Blom, H. A. P., Everdij, M. H. C., & Bouarfa, S. (2016). Emergent Behaviour. In A. Cook, & D. Rivas (Eds.), Complexity science in air traffic management. 83-104. Ashgate.

Bodin, I. (2016). Cognitive Work Analysis in Practice adaptation to Project Scope and Industrial Context. Licentiate thesis. Uppsala University, Department of Information Technology.

Boeing (2015). Boeing Flight Training Look Ahead (Presentation at the AABI meeting), July 2015.

Boeing (2017). Boeing 2016 Statistical summary, July 2017. https://aviation-safety.net/airlinesafety/industry/reports/Boeing-Statistical-Summary-1959-2017.pdf

Boy, G. A. (2011). Introduction: A Human-Centered Design Approach. In G. A. Boy (ed.), The handbook of human Machine Interface – a Human-centered design approach. Ashgate Publishing Limited.

Boy, G. A. (2014). Dealing with the unexpected. In P. Millot (Ed.), Risk Management in Life Critical systems. Wiley-ISTE.

Cahill, J., McDonald, N., Ulfvengren, P., Young, F., Ramos, Y., & Losa, G. (2007). HILAS Flight Operations Research: Development of Risk/Safety Management, Process Improvement and Task Support Tools. In D. Harris (ed), Engineering Psychology and Cognitive Ergonomics. EPCE 2007. Lecture Notes in Computer Science, vol 4562. Springer. https://doi.org/10.1007/978-3-540-73331-7_71

Chialastri, A. (2012). Automation in aviation, In F. Kongoli (Ed.), Automation. IntechOpen. https://doi.org/10.5772/49949.

Christie, B., & Gardiner, M. (1990). Evaluation of the human computer interface. In J. R. Wilson, & E. N. Corlett (Eds.), Evaluation of human work. A practical ergonomics methodology. Taylor and Francis.

Corrigan, S., Mårtensson, L., Kay, A., Okwir, S., Ulfvengren, P., & McDonald, N. (2014). Preparing for Airport Collaborative Decision Making (A-CDM) implementation: an evaluation and recommendations. Cognition, Technology & Work, 17(2), 207-218. Springer. https://doi.org/10.1007/s10111-014-0295-x

Dahlstrom, N., & Wikander, R. (2014). The Multi Crew Pilot Licence - Revolution, Evolution or not even a Solution? - A Review and Analysis of the Emergence, Current Situation and Future of the Multi-Crew Pilot Licence (MPL). Lund University School of Aviation.


De Boer, R., & Hurts, K. (2017). Automation Surprise – Results of a field survey of Dutch pilots. Aviation Psychology and Applied Human Factors, 7(1), 28-41. https://doi.org/10.1027/2192-0923/a000113

Dekker, S. W. A. (2000). The field guide to human error investigations. Draft, August 2000.

Dekker, S. W. A., & Woods, D. D. (2002). MABA-MABA or Abracadabra? Progress on Human–Automation Co-ordination. Cognition, Technology & Work. 4: 240-244. https://doi.org/10.1007/s101110200022

Dekker, S. W. A., & Hollnagel, E. (2004). Human factors and folk models. Cognition, Technology & Work, 6, 79–86. https://doi.org/10.1007/s10111-003-0136-9

Dekker, S. W. A. (2007). Just culture: balancing safety and accountability. Ashgate Publishing Co.

Dekker, S. W. A. (2011). Drift Into Failure: From Hunting Broken Components to Understanding Complex Systems. Ashgate Publishing Co.

Dekker, S. W. A., & Leveson, N. (2014). The bad apple theory won’t work: response to ‘Challenging the systems approach: why adverse event rates are not improving’ by Dr Levitt. BMJ Qual Saf, 2014(12), 1050-1. https://doi.org/10.1136/bmjqs-2014-003585

Dekker, S. W. A. (2015). The danger of losing situation awareness. Cognition, Technology & Work, 17(2). https://doi.org/10.1007/s10111-015-0320-8

Dekker, S. W. A., (2018). The safety anarchist: relying on human expertise and innovation, reducing bureaucracy and compliance. Routledge.

EASA (2014). Notice of Proposed Amendment 2014-17, Crew resource management (CRM) training (RMT.0411 (OPS.094) - 26.6.2014). European Union Aviation Safety Agency.

EASA (2016). Acceptable Means of Compliance (AMC) and Guidance Material (GM) to Annex III – Part-ORO Consolidated version including Issue 2, Amendment 61, February 2016. European Union Aviation Safety Agency.

EASA (2017). Data4Safety: A partnership for a data driven aviation safety analysis in Europe (On Air, Issue 17: Data for Safety). European Union Aviation Safety Agency. https://www.easa.europa.eu/newsroom-and-events/news/data4safety-partnership-data-driven-aviation-safety-analysis-europe

EASA (2018a). Notice of Proposed Amendment 2018-07(A). European Union Aviation Safety Agency. https://www.easa.europa.eu/document-library/notices-of-proposed-amendment/npa-2018-07


EASA (2018b). SMS - EASA Rules. European Union Aviation Safety Agency. https://www.easa.europa.eu/domains/safety-management/safety-management-system/sms-easa-rules

Edwards, E. (1972). Man and machine: systems for safety. In Proceedings of the British Airline Pilots Association Technical Symposium, British Airline Pilots Association, 21-36.

Emery, F. E., & Trist, E. L. (1960). Socio-technical Systems. In C. W. Churchman & M. Verhurst (Eds.), Management science: models and techniques. Pergamon.

Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors, 37(1), 32-64.

European Commission (2011). Flightpath 2050. Europe’s Vision for Aviation. Maintaining global leadership and serving society’s needs. Report of the High-Level Group on Aviation Research. https://ec.europa.eu/transport/sites/transport/files/modes/air/doc/flightpath2050.pdf

FAA (1996). The Interfaces Between Flightcrews and Modern Flight Deck Systems. Federal Aviation Administration. Human Factors Team Report. Washington, DC.

FAA (2011). Flight Crew Training for NextGen Automation. Federal Aviation Administration. Prepared for Federal Aviation Administration Office of the Chief Scientific and Technical Advisor for Human Factors. September 2011.

FAA (2013). Operational Use of Flight Path Management Systems. Federal Aviation Administration. Aviation Rule Making Committee / Commercial Aviation Safety Team Flight Deck Automation Working Group.

Fitts, P. M. (1951). Human Engineering for an Effective Air Navigation and Traffic Control System. Washington, DC: National Research Council.

Fitts, P. M. (1962). Function of Man in Complex Systems. Aerospace Engineering, 21(1), 34-39.

Flightglobal (2016). Flight Fleet Forecast 2016-2035, Flight Ascend Consultancy. www..com

Galotti, V. P. (1997). The future air navigation system (FANS): communication, navigation, surveillance, air traffic management. Ashgate.

Gerede, E., & ve Yaşar, M. (2017). Evaluation of Safety Performance Indicators of Flight Training Organizations. International Journal Of Eurasia Social Sciences, 8(29), 1174-1207.

Guest, G., Bunce, A., & Johnson, L. (2006). How Many Interviews Are Enough?: An Experiment with Data Saturation and Variability. Field Methods, 18(1), 59–82. https://doi.org/10.1177/1525822X05279903


Hale, A. (2009). Why safety performance indicators?. Safety Science, 47(4), 479–480. https://doi.org/10.1016/j.ssci.2008.07.018

Harris, D., & Stanton, N. (2010). Aviation as a system of systems: Preface to the special issue of human factors in aviation, Ergonomics, 53(2), 145-148. https://doi.org/10.1080/00140130903521587

Harris, D. (2011). Human Performance on the Flight Deck. Ashgate publishing limited.

Harris, D. (2013). Distributed cognition in flight operations. In Engineering Psychology and Cognitive Ergonomics: Applications and Services: 10th International Conference, EPCE 2013, Held as Part of HCI International 2013, Proceedings (PART 2 ed., Vol. 8020 LNAI), 125-133. Springer.

Hawkins, F. H., & Orlady, H. W. (Ed.). (1993). Human factors in flight (2nd ed.). Avebury Technical.

Helmreich, R. L., Merritt, A. C., & Wilhelm, J. A. (1999). The evolution of crew resource management training in commercial aviation. The International Journal of Aviation Psychology, 9(1), 19-32. https://doi.org/10.1207/s15327108ijap0901_2

Helmreich, R. L., Klinect, J. R., & Wilhelm, J. A. (1999). Models of threat, error, and CRM in flight operations. Proceedings of the Tenth International Symposium on Aviation Psychology, 677-682.

HILAS (2010). HILAS, Final Periodic Activity Report. Deliverable submitted as part of the HILAS (Human Integration into the Lifecycle of Aviation Systems). EU FP6 (516181) 2005-2009.

Hoffman, R. R., & Hancock, P. A. (2017). Measuring Resilience. Human Factors, 59(4), 564-581. https://doi.org/10.1177/0018720816686248

Hollnagel, E., & Woods, D. D. (1983). Cognitive Systems Engineering: New wine in new bottles. International Journal of Man-Machine Studies, 18, 583-600. [Originally Riso Report M2330, February 1982] (Reprinted in International Journal of Human-Computer Studies, 51(2), 339-356, 1999, as part of special 30th anniversary issue).

Hollnagel, E., & Woods, D. D. (2005). Joint cognitive systems: Foundations of cognitive systems engineering. CRC Press.

Hollnagel, E., Woods, D. D., & Leveson, N. (2006). Resilience Engineering: Concepts and Precepts. Ashgate Publishing Ltd.

Hollnagel, E. (2009). The ETTO Principle: Efficiency-Thoroughness Trade-Off. Ashgate.

Hollnagel, E. (2012a). FRAM, the functional resonance analysis method: modelling complex socio-technical systems. Ashgate.


Hollnagel, E. (2012b). A Bird’s Eye View of Resilience Engineering. Presentation at Loughborough University.

Hollnagel, E., Leonhardt, J., Licu, T., & Shorrock, S. (2013). From Safety-I to Safety-II: A White Paper. Eurocontrol.

Hollnagel, E. (2014). Safety-I and Safety-II: the past and future of safety management. Ashgate.

Hollnagel E., Wears R. L., & Braithwaite, J. (2015). From Safety-I to Safety-II: A White Paper. The Resilient Health Care Net: Published simultaneously by the University of Southern Denmark, University of Florida, USA, and Macquarie University, Australia.

Hollnagel, E. (2019). Advancing resilient performance: from instrumental applications to second-order solutions. In the Book of abstract: 8th REA Symposium on Resilience Engineering: Scaling up and Speeding up. Linnaeus University, Kalmar, Sweden, 24th-27th June 2019. https://open.lnu.se/index.php/rea/issue/view/148

Houghton, R., Baber, C., Stanton, N., Jenkins, D., & Revell, K. (2015). Combining network analysis with Cognitive Work Analysis: insights into social organisational and cooperation analysis. Ergonomics, 58, 1-16. https://doi.org/10.1080/00140139.2014.966770

Hutchins, E. (1995). Cognition in the Wild. MIT Press.

IATA (2014). Data Report for Evidence-Based Training. 1st Edition. International Air Transport Association. https://www.iata.org/contentassets/c0f61fc821dc4f62bb6441d7abedb076/data-report-for-evidence-basted-training-ed20one.pdf

IATA (2018). IATA Safety Report 2017, Issued April 2018, 54th Edition. International Air Transport Association. https://aviation-safety.net/airlinesafety/industry/reports/IATA-safety-report-2017.pdf

IATA (2019). IATA Safety Report 2018, Issued April 2019, 55th Edition. International Air Transport Association. https://libraryonline.erau.edu/online-full-text/iata-safety-reports/IATA-Safety-Report-2018.pdf

ICAO (2013a). Safety Management Manual (SMM), Doc 9859, third edition. International Civil Aviation Organization. https://www.icao.int/safety/fsix/Library/DOC_9859_FULL_EN.pdf

ICAO (2013b). Manual of Evidence-based Training, First edition. Doc 9995. ISBN 978-92-9249-242.

ICAO (2016). 2017-2019 Global Aviation Safety Plan, Second Edition. Doc 10004. https://www.icao.int/Meetings/a39/Documents/GASP.pdf


ICAO (2018). Emerging Issues. Presented by Austria on behalf of the EU, ECAC and Eurocontrol at the Thirteenth Air Navigation Conference, Montreal, Canada, 9-19 October 2018.

ICAO (2020). Annual Report 2018. https://www.icao.int/annual-report-2018

IEA (2020). International Ergonomics Association. https://iea.cc

INCOSE (2020). International Council on Systems Engineering. https://www.incose.org/about-systems-engineering/system-and-se-definition

JATR (2019). Boeing 737 MAX Flight Control System: Observations, Findings, and Recommendations. Joint Authorities Technical Review (Panel). Associate Administrator for Aviation Safety, U.S. Federal Aviation Administration.

Jorens, Y., Gillis, D., Valcke, L., De Coninck, J., Devolder, A., & De Coninck, M. (2015). Atypical forms of employment in the aviation sector. European social dialogue, European Commission, 2015. Ghent, Belgium. http://hdl.handle.net/1854/LU-6852830

Kaplan, R. S., & Norton, D. P. (2000). The Strategy Focused Organization: How Balanced Scorecard Companies Thrive in the New Business Environment. Harvard Business School Press.

Kim, S., Oh, S., Suh, J., Yu, K., & Yeo, H. (2013). Study on the Structure of Safety Performance Indicators for Airline Companies. In Proceedings of the Eastern Asia Society for Transportation Studies, Taipei, Taiwan.

Klein, G. A., Orasanu, J., Calderwood, R., & Zsambok, C. E. (Eds.). (1993). Decision making in action: Models and methods. Ablex Publishing Corporation.

Klein, G. A. (2008). Naturalistic Decision Making. Human Factors, 50(3), 456-460. https://doi.org/10.1518/001872008X288385

Kothari, C. R. (2008). Research Methodology: Methods and Techniques. New Age International.

Learmount, D. (2016). Licensed to fly but not up to the job. Flight International, 8-14 November 2016.

Lehman, C. (1998). We need a new jet transport pilot training model. Journal for Civil Aviation Training, 8(7), 34-39.

Leveson, N. (2004). A New Accident Model for Engineering Safer Systems. Safety Science, 42(4), 237-270.


Leveson, N. (2016). Rasmussen’s Legacy: A Paradigm Change in engineering for safety. Applied Ergonomics, 59. https://doi.org/10.1016/j.apergo.2016.01.015

Likierman. A. (2020). The Elements of good judgment. Harvard Business Review, Jan-Feb 2020 Issue.

Lintern, G. (2011). Introduction to the Special Issue on Cognitive Systems. The international Journal of Aviation Psychology, 21(1). https://doi.org/10.1080/10508414.2011.537553

Lundberg, J., Rollenhagen, C., & Hollnagel, E. (2009). What-You-Look-For-Is-What-You-Find – The consequences of underlying accident models in eight accident investigation manuals. Safety Science, 47, 1297-1311. http://doi.org/10.1016/j.ssci.2009.01.004

Mackey, A., & Gass, S. M. (2015). Second language research: Methodology and design. Second Edition. Routledge.

Maier, M. W. (1998). Architecting principles for system of systems. Systems Engineering, 1(4), 267-284. https://doi.org/10.1002/(SICI)1520-6858(1998)1:4<267::AID-SYS3>3.0.CO;2-D

Martins, E., Soares, M., & Martins, I. (2014). Ergonomics and Cognition in Manual and Automated Flight. In F. Rebelo, & M. Soares (Eds.) Advances in Ergonomics In Design, Usability & Special Populations, Part II. AHFE Conference. https://doi.org/10.13140/2.1.4685.7601

McDonald, N., Grommes, P., & Morrison, R. (2009). Strategic issues in the design, operations and regulations of large integrated operational systems. Deliverable to the EU Commission as part of the EU FP6 HILAS project.

McDonald, N. (2015). The evaluation of change. Cognition, Technology & Work, 17(2), 193-206. https://doi.org/10.1007/s10111-014-0296-9

Meister, D. (1999). The history of human factors and ergonomics. Lawrence Erlbaum Associates.

Mårtensson, L. (1995). The Aircraft Crash at Gottrora: Experiences of the Cockpit Crew. The International Journal of Aviation Psychology, 5(3), 305-326. https://doi.org/10.1207/s15327108ijap0503_5

Naikar, N. (2017). Cognitive work analysis: An influential legacy extending beyond human factors and engineering. Applied Ergonomics, 59, Part B, 528-540. https://doi.org/10.1016/j.apergo.2016.06.001

Naikar, N., & Brady, A. (2019). Cognitive Systems Engineering – Expertise in Sociotechnical Systems. In P. Ward, J. M. Schraagen, J. Gore, & E. M. Roth (Eds.) The Oxford handbook of expertise. Oxford University Press.


Niven, P. R. (2006). Balanced Scorecard step-by-step: maximizing performance and maintaining results, 2nd edition. John Wiley & Sons.

NTSB (2014). National Transportation Safety Board Aviation Accident Final Report. NTSB/AAR-14/01. https://www.ntsb.gov/investigations/AccidentReports/Pages/AAR1401.aspx

NTSC (2019). Aircraft Accident Investigation Report. Final KNKT.18.10.35.04. Komite Nasional Keselamatan Transportasi. http://knkt.dephub.go.id/knkt/ntsc_aviation/baru/2018%20-%20035%20-%20PK-LQP%20Final%20Report.pdf

Okwir, S. (2017). Collaborative Measures – Challenges in Airport Operations, Doctoral Thesis 2017, KTH Royal Institute of Technology, Department of Industrial Economics and Management, Stockholm, Sweden.

Ombredane, A., & Faverge, J. M. (1955). L’analyse du travail. Presses Universitaires de France.

Onwuegbuzie, A. J., Johnson, R. B., & Turner, L. A. (2007). Toward a Definition of Mixed Methods Research. Journal of Mixed Methods Research, 1(2), 112-133. https://doi.org/10.1177/1558689806298224

Osvalder, A-L., Rose, L., & Karlsson, S. (2009). Methods. In M. Bohgard, S. Karlsson, E. Lovén, L-Å. Mikaelsson, L. Mårtensson, A-L. Osvalder, L. Rose, & P. Ulfvengren (Eds.), Work and technology on human terms. Prevent.

Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on systems, man and cybernetics – Part A: Systems and humans, 30(3), 286-297. https://doi.org/10.1109/3468.844354

Parasuraman, R., & Wickens, C. D. (2008). Humans still vital after all these years. Human Factors, 50(3), 511-520. https://doi.org/10.1518/001872008X312198

Parmenter, D. (2015). Key Performance Indicators: Developing, Implementing, and Using Winning KPIs. Third edition. John Wiley & Sons, Inc.

Patriarca, R., Bergström, J., Di Gravio, G., & Costantino, F. (2018). Resilience engineering: Current status of the research and future challenges. Safety Science, 102, 79–100. https://doi.org/10.1016/j.ssci.2017.10.005

Perrow, C. (1984). Normal Accidents: Living with High-Risk Technologies. Basic Books.

Piric, S., de Boer, R., Roelen, A. L. C., Karanikas, N., & Kaspers, S. (2019). How does aviation industry measure safety performance? Current practice and limitations. International Journal of Aviation Management, 4(3), 224-245. https://doi.org/10.1504/IJAM.2019.10019874


Pritchett, A. (2001). Reviewing the role of cockpit alerting systems: Implications for Alerting System Design and Pilot Training. SAE Technical Paper. https://doi.org/10.4271/2001-01-3026

Pritchett, A. (2009). Aviation Automation: General Perspectives and Specific Guidance for the Design of Modes and Alerts. Reviews of Human Factors and Ergonomics, 5(1), 82-113. https://doi.org/10.1518/155723409X448026

Provan, D. J., Woods, D. D., Dekker, S. W. A., & Rae, A. J. (2020). Safety II professionals: How resilience engineering can transform safety practice. Reliability Engineering & System Safety, 195. https://doi.org/10.1016/j.ress.2019.106740

Rasmussen, J. (1986). Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering. Elsevier Science Inc.

Rasmussen, J., Pejtersen, A. M., & Goodstein L. P. (1994). Cognitive systems engineering. Wiley.

Rasmussen, J. (1997). Risk Management in a dynamic society: A modelling problem. Safety Science, 27(2-3), 183-213. https://doi.org/10.1016/S0925-7535(97)00052-0

Reason, J. (1990). Human Error. Cambridge University Press.

Reason, J. (1997). Managing the Risks of Organizational Accidents. Ashgate Publishing Ltd.

Reason J. (2000). Human error: models and management. BMJ, 2000(320), 768-770. https://doi.org/10.1136/bmj.320.7237.768

Rossman, G. B., & Wilson, B. L. (1985). Numbers and words: Combining quantitative and qualitative methods in a single large scale evaluation study. Evaluation Review, 9(5), 627-643. https://doi.org/10.1177/0193841X8500900505

Sarter, N. B., & Woods, D. D. (1992). Pilot interaction with cockpit automation: Operational experiences with the flight management system. International Journal of Aviation Psychology, 2(4), 303-321. https://doi.org/10.1207/s15327108ijap0204_5

Sarter, N. B., & Woods, D. D. (1994). Pilot interaction with cockpit automation II: An experimental study of pilots' model and awareness of the flight management and guidance system. International Journal of Aviation Psychology, 4(1), 1-28. https://doi.org/10.1207/s15327108ijap0401_1

Sarter, N. B., Woods, D. D., & Billings, C. E. (1996). Automation surprises. In G. Salvendy (ed.), Handbook of Human Factors & Ergonomics, second edition. Wiley.

Schwartz, D. H., Flach, J. M., Nelson, W. T., & Stokes, C. K. (2008). A Use-Centered Strategy for Designing E-Collaboration Systems. In N. Kock (Ed.), Encyclopedia of E-Collaboration (pp. 673-679). IGI Global. https://doi.org/10.4018/978-1-59904-000-4.ch102


SESAR (2009). European Air Traffic Management Master Plan, Edition 1 - 30 March 2009. SESAR Joint Undertaking. https://ec.europa.eu/transport/sites/transport/files/modes/air/sesar/doc/1-european_atm_master_plan.pdf

SESAR (2015). European ATM Master Plan, Executive view. Edition 2015. SESAR Joint Undertaking. https://ec.europa.eu/transport/sites/transport/files/modes/air/sesar/doc/eu-atm-master-plan-2015.pdf

SESAR (2018). SESAR Joint Undertaking – Single Programming Document 2018-2020. https://www.sesarju.eu/sites/default/files/documents/adb/2017/SJU%20Single%20Programming%20Document%202018%20V1.pdf

SESAR (2020). European ATM Master Plan, Executive view. 2020 Edition. SESAR Joint Undertaking. https://www.sesarju.eu/masterplan2020

SESAR/FAA (2018). NextGen - SESAR State of Harmonisation. Third edition. SESAR Joint Undertaking /Federal Aviation Administration. https://doi.org/10.2829/91465

Sheridan, T. B., & Verplank, W. L. (1978). Human and computer control of undersea teleoperators. Massachusetts Institute of Technology, Man-Machine Systems Laboratory.

Sheridan, T. B. (1987). Supervisory control. In G. Salvendy (ed.), Handbook of human factors (pp. 1243-1268). Wiley.

Sheridan, T. B. (2009). Allocation of functions among people and automation in Next Gen. Presentation given at the Workshop on Air Traffic Management in Zurich, Switzerland, January, 2009.

Sinclair, M. (1990). Subjective assessment. In J. R. Wilson, & E. N. Corlett (Eds.), Evaluation of human work. A practical ergonomics methodology. Taylor and Francis.

Snook, S. A. (2000). Friendly Fire. Princeton University Press.

Stahre, J. (1995). Towards Human Supervisory Control in Advanced Manufacturing Systems. Production Engineering. Doctoral thesis, Chalmers University of Technology, Gothenburg, Sweden.

Säfsten, K., & Gustavsson, M. (2019). Forskningsmetodik för ingenjörer och problemlösare. Studentlitteratur.

Trist, E. (1981). The evolution of socio-technical systems – a conceptual framework and an action research program. Ontario Quality of Working Life Centre.

Ulfvengren, P. (2007). Study on how to increase reporting in aviation. Proceedings of Nordic Ergonomic Society, NES 2007. 1-2 October, Lysekil, Sweden.


Upadhaya, B., Munir, R., & Blount, Y. (2014). Association between performance measurement systems and organizational effectiveness. International Journal of Operations & Production Management, 34(7), 853-875. https://doi.org/10.1108/IJOPM-02-2013-0091

Verstraeten, J. G., Roelen, A. L. C., & Speijker, L. (2014). Safety performance indicators for system of organizations in aviation. Technical Report. ASCOS. https://www.ascos-project.eu/downloads/ascos_paper_verstraeten.pdf

Vicente, K. J. (1999). Cognitive Work Analysis: Toward Safe, Productive, and Healthy Computer-Based Work. Lawrence Erlbaum.

Visser, M. (2007). Deutero-learning in organizations: A review and a reformulation. Academy of Management Review, 32(2). https://doi.org/10.5465/AMR.2007.24351883

Waterson, P., Robertson, M. M., Cooke, N. J., Militello, L., Roth, E., & Stanton, N. A. (2015). Defining the methodological challenges and opportunities for an effective science of sociotechnical systems and safety. Ergonomics, 58(4), 565-599. https://doi.org/10.1080/00140139.2015.1015622

Waterson, P., Le Coze, J-C., & Andersen, H. (2016). Recurring themes in the legacy of Jens Rasmussen. Applied Ergonomics, 59. https://doi.org/10.1016/j.apergo.2016.10.002

Wickens, C. D. (1995). Designing for Situation Awareness and Trust in Automation. In Proceedings of IFAC Conference on Integrated Systems Engineering, 28(23), Baden-Baden, FRG, 27-29 September 1994.

Wickens, C., Lee, J., Liu, Y. & Gordon Becker, S. (2004). An introduction to Human Factors Engineering. Pearson Prentice Hall.

Wiener, E. L., & Curry, R. E. (1980). Flight-Deck Automation: Promises and problems. Ergonomics, 23(10), 995-1011. https://doi.org/10.1080/00140138008924809

Wiener, E. L. (1989). Human factors of advanced technology ("glass cockpit") transport aircraft (NASA Contractor Report No. 177528). NASA Ames Research Center. https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19890016609.pdf

Wittmer, A., Bieger, T., & Müller, R. (Eds.) (2011). Aviation systems: management of the integrated aviation value chain. Springer.

Woods, D. D. (1986). Paradigms for decision support. In E. Hollnagel, G. Mancini, & D. D. Woods (Eds.), Intelligent decision support in process environments, NATO ASI Series, 21. Springer. https://doi.org/10.1007/978-3-642-50329-0

Woods, D. D., Johanssen, L. J., Cook, R. I., & Sarter, N. B. (1994). Behind human error: Cognitive systems, computers and hindsight. University of Dayton Research Institute. CSERIAC. https://apps.dtic.mil/dtic/tr/fulltext/u2/a492127.pdf


Woods, D. D., & Hollnagel, E. (2006). Prologue: Resilience Engineering Concepts. In Hollnagel, E., Woods, D. D. & Leveson, N. (Eds.). Resilience Engineering: Concepts and Precepts. Ashgate Publishing Ltd.

Woods, D. D. (2016). On the Origins of Cognitive Systems Engineering: Personal Reflections. To appear in: P. Smith and R. Hoffman (Eds.), Cognitive Systems Engineering: A Future for a Changing World.

Woods, D. D. (2017). Reflections on the origins of cognitive systems engineering. In P. Smith, & R. R. Hoffman (Eds.), Cognitive systems engineering: The future for a changing world. Taylor and Francis.

Woods, D. D. (2019). Book of abstract: 8th REA Symposium on Resilience Engineering: Scaling up and Speeding up. Linnaeus University, Kalmar, Sweden, 24th-27th June 2019. ISBN: 978-91-88898-95-1.

Yin, R. K. (2018). Case study research and applications, design and methods (6th ed.). Sage Publications.
