UNIVERSITÄT DER BUNDESWEHR MÜNCHEN FAKULTÄT FÜR LUFT- UND RAUMFAHRTTECHNIK INSTITUT FÜR RAUMFAHRTTECHNIK UND WELTRAUMNUTZUNG

Characterization and Application of User-Centred Tools as Engineering Support for Satellite

Tanja Nemetzade

Complete reprint of the dissertation approved by the Fakultät für Luft- und Raumfahrttechnik of the Universität der Bundeswehr München for the award of the academic degree of

DOCTOR OF ENGINEERING (DR.-ING.)

Reviewers: 1. Univ.-Prof. Dr.-Ing. Roger Förstner 2. Univ.-Prof. Dr.-Ing. Kristin Paetzold

This dissertation was submitted to the Universität der Bundeswehr München on 23 May 2019 and accepted by the Fakultät für Luft- und Raumfahrttechnik on 12 December 2019. The oral examination took place on 16 January 2020.

The whole is more than the sum of its parts. Aristotle

Acknowledgements

This work would not have been possible without the support of many people over the past years.

I sincerely thank my doctoral advisor, Prof. Dr.-Ing. Roger Förstner of the Institut für Raumfahrttechnik und Weltraumnutzung at the Universität der Bundeswehr München, for the comprehensive academic supervision of this work. Each of the many engaging discussions provided valuable food for thought and conveyed the reassuring feeling of being understood as a systems person.

I thank Prof. Dr.-Ing. Kristin Paetzold of the Institut für Technische Produktentwicklung at the Universität der Bundeswehr München for the inspiration from outside the space industry.

My special thanks go to former and current employees of Airbus Defence and Space GmbH, who made this work possible in the first place. In particular, I would like to thank Mr Andreas Lindenthal and Dr Marc Steckling, who always believed in me and in the success of this work. I thank Dr Thomas Pietrus for continuing this support.

I thank Dr Frank Döngi and Mr Reiner Schricke for welcoming me into the Future Programs department in Friedrichshafen. The insights gained there and the collaboration with my colleagues enabled a large part of the results of this work. In this context, I would also like to thank Mr Manfred Langemann, Dr Ulrich Johann, Mr Michael Kersten, Mr Gerald Hechenblaikner, Dr Ralf Münzenmeyer and Mr Noah Saks, as well as all other members of the department, for the trust they placed in me and for our time together.

Representing the entire JUICE team of Airbus Defence and Space SAS in Toulouse, I thank Mr Thomas Schirmann and Ms Sandie Deslous for the good cooperation.

I thank Mr Jan van Casteren, Mr Markus Schelkle and Ms Susanne Fugger for the support from the BepiColombo team at ESA and Airbus Defence and Space GmbH.

I thank Mr Pius Butz and Mr Marcel Anklam of VECTRONIC Aerospace GmbH, Berlin, for the professional cooperation.

I thank the staff of the Institut für Raumfahrttechnik und Weltraumnutzung both for their technical support and for their friendship. My thanks also go to my Bachelor's, project, Master's and diploma thesis students, whose studies contributed to this dissertation.

And finally: to my family, my friends and my partner, for your patience, your understanding, your encouragement, your suggestions, your criticism, your distractions, and for the fact that all of you, whether in person or in spirit, were always at my side. Thank you.

Summary

Modelling & Simulation (M&S) is an important aspect of space projects that contributes to their success from both a technical and a financial perspective. However, the current implementation of M&S in early design phases is characterized by limited usefulness, limited knowledge transfer and limited application on system level. The present work introduces and verifies a novel M&S approach, the System Simulator Concept (SSC), to overcome these weaknesses. The core idea of the concept is a stronger involvement of the user in the design and development process of software solutions, in the sense of the user-centred design approach. The SSC was derived from a customized, mission-specific simulator developed within ESA's BepiColombo project. Its continuous and successful use since 2001 was traced back to its user-centred development approach and the resulting implementation of relations between individual system elements, which ultimately allow the emergent operational performance of the satellite to be determined. As a guideline, the SSC specifies the development approach, the scope and the handling of a successful simulator in support of the systems engineering activities within a project. The usability of the concept was verified through its application in the feasibility study of ESA's LOFT mission and in the definition phase of ESA's JUICE mission at Airbus Defence and Space, in support of the industrial design validation, as well as through the development of a generic simulator solution for use in early satellite design phases. The newly developed Parameter Influence Net Method (PINM), as an interpretation of the SSC, complements the benefits of simulators built according to the SSC by evaluating the system structure in order to control the system emergence.

It captures the degree of influence of elements within a complex system by quantifying and visualizing the sensitivity of parameters with respect to value changes. In addition to the SSC, the Tool and Simulator Quality Model (TSQM), as an expression of real user needs in early design phases, was derived from industry-specific surveys and interviews in order to guide the design and implementation of software solutions towards high usability and acceptance. The TSQM defines usability as a set of individually weighted, co-acting quality criteria. In this context, it was found that technical criteria and those governing the handling of the product are equally important for the usability of a software solution, and that acceptance by the user is indispensable for the success of a product within a project. Finally, the validity of the SSC and the TSQM is discussed with respect to current trends and developments in the M&S domain. It is shown that the presented concepts offer a sound solution for overcoming the existing M&S weaknesses and for being prepared for future applications with growing system complexity.


Abstract

Modelling & Simulation (M&S) is an important aspect of space projects that contributes to their success from both a technical and a financial perspective. However, the current M&S implementation in the early design stages is characterized by limited usefulness, knowledge transfer and application on system level. The present work introduces and verifies the System Simulator Concept (SSC) as a novel M&S approach to overcome these weaknesses by fostering an increased involvement of the user in the design and development process of software solutions, following the user-centred design approach. The SSC has been derived from a customized, mission-specific simulator developed in ESA's BepiColombo project. The continuous and successful use of this simulator since 2001 has been traced back to its user-centred development approach and the consequent implementation of system element relations that ultimately allow the assessment of the satellite's emergent operational performance. As a guideline, the SSC specifies the development approach, scope and handling of a successful simulator supporting the systems engineering activities within a project. The usability of the concept has been verified by its application in ESA's LOFT feasibility phase and ESA's JUICE definition phase at Airbus Defence and Space, in support of the industrial design validation activities, and by the creation of a generic simulator solution applicable to early satellite design activities. The newly developed Parameter Influence Net Method (PINM), as an interpretation of the SSC, complements the benefits of simulators built according to the SSC by specifically targeting the assessment of the system structure to provide control over the system emergence. It yields the degree of influence of elements within a complex system by quantifying and visualizing the sensitivity of system parameters to value changes.
In addition to the SSC, the Tool and Simulator Quality Model (TSQM), as an expression of real user needs in early design phases, has been derived from industry-specific surveys and interviews to support the design and implementation of software solutions towards high usability and acceptability. The TSQM defines usability as a set of individually important, co-acting quality criteria. It has been revealed that technical and handling criteria are equally important for the usability of a software solution and that acceptability by the user is indispensable for the success of a product within a project. To conclude, the validity of the SSC and the TSQM in view of current trends and evolutions in the M&S domain is discussed, showing that the presented concepts are a sound solution to overcome the present M&S weaknesses and to be prepared for future applications in the face of increasing system complexity.


Contents

1. Introduction ...... 1
   1.1. Importance of Modelling and Simulation in Space Projects ...... 1
   1.2. Four Dimensions of Modelling and Simulation in Space Projects ...... 2
      1.2.1. First Dimension "Level of Interest": System vs. Discipline-Level ...... 3
      1.2.2. Second Dimension "Development Approach": Build from Scratch vs. Ready-to-Use ...... 3
      1.2.3. Third Dimension "Derivation": Make vs. Buy ...... 4
      1.2.4. Fourth Dimension "Service Life": Phase-Specific vs. Long-Term Operation ...... 4
   1.3. State-of-the-Art of Modelling and Simulation in Space Domain ...... 5
      1.3.1. Tool and Simulator Landscape ...... 5
      1.3.2. Practice of Modelling and Simulation in Early Design Phases ...... 6
      1.3.3. Motivation and Weaknesses of Current Ad-Hoc Modelling and Simulation Practice ...... 8
         1.3.3.1. Limited Usefulness ...... 9
         1.3.3.2. Limited Knowledge Transfer ...... 10
         1.3.3.3. Limited System Level Application ...... 10
   1.4. Implementation of System Modelling and Simulation in Combination with User-Centred Design as Proposed Solution ...... 11
   1.5. Thesis Goal and Structure of Present Work ...... 12

2. A Successful System Modelling and Simulation Implementation and its Formalization into a Mission Transferable Concept - the Science Payload Simulator and the System Simulator Concept ...... 14
   2.1. Science Payload Simulator as Origin of System Simulator Concept ...... 14
      2.1.1. ESA's BepiColombo Mission ...... 14
      2.1.2. Context of Use and Identified Need for Science Payload Simulator ...... 15
      2.1.3. Evolution of Science Payload Simulator ...... 15
         2.1.3.1. Development Approach and Characteristics ...... 15
         2.1.3.2. Use Cases ...... 16
         2.1.3.3. Structure ...... 18
      2.1.4. Evaluation of Science Payload Simulator Usage ...... 18
   2.2. Science Payload Simulator Success Factors - User-Centred Design Approach and Resulting Focus on Systems Engineering Supportive Functionalities ...... 21
      2.2.1. User-Centred Design and Usability ...... 21
      2.2.2. Systems Engineering and Importance of Parameter Interdependencies ...... 25
         2.2.2.1. Systems Engineering Fundamentals ...... 25
         2.2.2.2. Systems Engineering Characteristics and Importance within Space Projects ...... 26
         2.2.2.3. Importance of Parameter Interdependencies ...... 27
   2.3. Systematization of Science Payload Simulator Analysis Findings - System Simulator Concept Definition ...... 28
      2.3.1. System Simulator Concept Definition and Validation Approach ...... 28
      2.3.2. System Simulator Concept ...... 29
      2.3.3. Validity of System Simulator Concept for Engineering Domains Beyond Space ...... 31

3. Transferring Simulator Success to Further Missions - System Simulator Concept Transfer Approach and Application Cases ...... 32
   3.1. Definition of System Simulator Concept Transfer Approach ...... 32
   3.2. Application Case LOFT - ESA's Large Observatory for X-Ray Timing Mission ...... 34
      3.2.1. SSC Transfer Step 1 - Search and Decision for Test Mission ...... 34
      3.2.2. SSC Transfer Steps 2 and 3 - Simulator Realization and Use ...... 34
      3.2.3. SSC Transfer Step 4 - Evaluation of SSC Transfer ...... 37
         3.2.3.1. Key Benefits and Added Value of SSC Transfer for LOFT ...... 37
         3.2.3.2. Acceptance of LOFT Simulator by Industrial Study Team and ESA ...... 38
         3.2.3.3. Evolution of Simulator Functionalities and Applications ...... 38
         3.2.3.4. Lessons Learned and Conclusion ...... 39
   3.3. Application Case JUICE - ESA's JUpiter ICy moons Explorer Mission ...... 40
      3.3.1. SSC Transfer Step 1 - Search and Decision for Test Mission ...... 40
      3.3.2. SSC Transfer Steps 2 and 3 - Simulator Realization and Use ...... 40
      3.3.3. SSC Transfer Step 4 - Evaluation of SSC Transfer ...... 42
         3.3.3.1. Key Benefits and Added Value of SSC Transfer for JUICE ...... 42
         3.3.3.2. Acceptance of JUICE Simulator by Industrial Team and ESA ...... 42
         3.3.3.3. Evolution of Simulator Functionalities and Applications ...... 43
         3.3.3.4. Lessons Learned and Conclusion ...... 43
   3.4. Beyond Specific Missions - the General Mission and System Simulator ...... 44
      3.4.1. Motivation for Creating GMSS - Generalizability of Missions ...... 44
      3.4.2. GMSS Scope and Design Description ...... 45
      3.4.3. SSC Transfer Step 4 - Evaluation of SSC Transfer ...... 45
         3.4.3.1. Key Benefits and Added Value of GMSS ...... 46
         3.4.3.2. Acceptance of GMSS by Study Teams ...... 46
         3.4.3.3. Evolution of GMSS Functionalities and Applications ...... 46
         3.4.3.4. Lessons Learned and Conclusion ...... 47

4. Basis for Tool and Simulator Success - the Tool and Simulator Quality Model ...... 48
   4.1. Motivation for Definition of Tool and Simulator Quality Model ...... 48
   4.2. Model Definition Approach ...... 49
   4.3. Consolidated Tool and Simulator Quality Model and Quality Criteria Weighting ...... 51
      4.3.1. Consolidated Tool and Simulator Quality Model ...... 51
      4.3.2. Quality Criteria Weighting ...... 55
         4.3.2.1. Calculation of Quality Criteria Weighting ...... 55
         4.3.2.2. Quality Criteria Weighting Results ...... 56
   4.4. Validity of Tool and Simulator Quality Model ...... 57
      4.4.1. Validity of Survey ...... 57
      4.4.2. Validity Range of Survey Results in View of their Application ...... 59
      4.4.3. Future Work on Validity of Tool and Simulator Quality Model and Criteria Weighting ...... 59
   4.5. Application of Tool and Simulator Quality Model ...... 60
      4.5.1. Evaluation Scenario ...... 60
      4.5.2. Evaluation Approach ...... 61
      4.5.3. Evaluation Results ...... 63
      4.5.4. Validity of Evaluation Approach and Future Work ...... 63
   4.6. Going Beyond the Tool and Simulator Quality Model - Acceptability as Essential Criterion for Tool and Simulator Success ...... 64

5. A Novel Interpretation of the System Simulator Concept - the Parameter Influence Net Method ...... 67
   5.1. Identified Need for Parameter Influence Net Method ...... 67
   5.2. Approach towards Parameter Influence Net Method, its Algorithm and Implementation Process ...... 68
      5.2.1. Modelling System Structure and Behaviour ...... 69
      5.2.2. Modelling System Element Influences within a System - Algorithm behind the Parameter Influence Net Method ...... 69
         5.2.2.1. Parameter Influence Net Method Algorithm for Multiplications ...... 71
         5.2.2.2. Parameter Influence Net Method Algorithm for Summations ...... 71
         5.2.2.3. Parameter Influence Net Method Algorithm for Exponential Calculations ...... 71
         5.2.2.4. Parameter Influence Net Method Algorithm for Trigonometric Calculations ...... 72
         5.2.2.5. Parameter Influence Net Method Cascade Algorithm for Nested Calculations ...... 72
         5.2.2.6. Application Example of Parameter Influence Net Method Algorithms ...... 73
      5.2.3. Visualizing System Structure and System Element Influences - the Parameter Influence Net ...... 73
      5.2.4. Implementation Process of Parameter Influence Net Method ...... 75
   5.3. Application of Parameter Influence Net Method ...... 76
      5.3.1. Structure and Appearance of Realized Parameter Influence Net ...... 76
      5.3.2. Use Cases of Parameter Influence Nets ...... 79
      5.3.3. Selection of Visualization Tool for Parameter Influence Net Method ...... 79
   5.4. Added Value and Limits ...... 81
      5.4.1. Putting Users in Focus - UCD as Fundamental Idea behind the Parameter Influence Net Method ...... 81
      5.4.2. Closing a Gap in M&S Landscape - Utility of Parameter Influence Net Method ...... 81
      5.4.3. Meeting Users' Needs - TSQM Evaluation of Parameter Influence Nets ...... 83
      5.4.4. Limits of Parameter Influence Net Method ...... 83
      5.4.5. Limits of Implemented Parameter Influence Nets ...... 86
   5.5. Future Work ...... 86


6. Synthesis and Discussion ...... 88
   6.1. Synthesis of Thesis Findings ...... 88
   6.2. Reasons for Successful Transfer of SSC to Further Missions ...... 88
   6.3. Reasons for Success of SSC ...... 89
      6.3.1. SSC Meets User Needs ...... 89
      6.3.2. SSC is Different to Current M&S Landscape ...... 90
         6.3.2.1. Difference to Products on Market - Added Value of SSC ...... 90
         6.3.2.2. Difference to Model-Based Systems Engineering - Why the Current MBSE Approach is not the Solution to Overcome the M&S Weaknesses ...... 91
            6.3.2.2.1. Evolution and Definition of Model-Based Systems Engineering ...... 91
            6.3.2.2.2. Model-Based Systems Engineering at ESA and DLR ...... 92
            6.3.2.2.3. Why the Current MBSE Approach is not Sufficient to Overcome the M&S Weaknesses ...... 93
            6.3.2.2.4. Difference to Virtual Spacecraft Design - Why it is not Sufficient to Connect Existing Tools and Simulators ...... 94
   6.4. Reasons for Future Success of SSC ...... 95
      6.4.1. Trend: M&S Is and Remains Important ...... 95
      6.4.2. Trend: System of Systems ...... 95
      6.4.3. Trend: Cost Reduction via Re-Usability ...... 97
      6.4.4. Trend: Independence from Single Tool and Supplier ...... 98
   6.5. Limits of Promoted Concepts ...... 98

7. Conclusion and Future Prospects 100

A. LOFT Mission and System Design Description - Basis of Mission Challenge Analysis 101

B. LOFT Simulator Models and Functionalities ...... 107
   B.1. LOFT Power Subsystem Model ...... 107
   B.2. LOFT AOCS Model ...... 108
      B.2.1. Disturbance Torque due to Earth's Gravity Gradient ...... 108
      B.2.2. Arrangement of Reaction Wheels ...... 109
      B.2.3. Euler Equations for Angular Momentum Analysis ...... 109
      B.2.4. Angular Momentum Analysis for Attitude Hold Mode ...... 110
      B.2.5. Angular Momentum Analysis for Slew Mode ...... 110
      B.2.6. Ambiguity of Reaction Wheels ...... 111
      B.2.7. Magnetic Torquer Performance Model ...... 111
   B.3. Scope and Operation of LOFT Simulator ...... 112

C. JUICE Mission and System Design Description - Basis of Mission Challenge Analysis 116

D. JUICE Simulator Models and Functionalities ...... 119
   D.1. JUICE Mass Memory Model ...... 119
   D.2. Scope and Operation of JUICE Simulator ...... 120


E. Tool and Simulator Quality Model Quality Criteria Weighting as Quantified Expression of User Needs ...... 124
   E.1. Tool and Simulator Quality Model: Quality Criteria Weighting - Survey Results ...... 125
   E.2. Survey at Future Programs Department at Airbus Defence and Space GmbH ...... 129
      E.2.1. Experiences and Lessons Learned ...... 129
      E.2.2. Questionnaire Version Airbus Defence and Space GmbH ...... 129
   E.3. Survey at DLRK 2014 Conference ...... 140
      E.3.1. Preparatory Step: Adaptation of Questionnaire to Potential Respondents ...... 140
      E.3.2. Experiences and Lessons Learned ...... 140
   E.4. Survey at SECESA 2014 Conference ...... 141
      E.4.1. Preparatory Step: Improvement of Questionnaire ...... 141
      E.4.2. Experiences and Lessons Learned ...... 142
      E.4.3. Survey Results ...... 143
         E.4.3.1. Evaluation of Impacts of Questionnaire Improvements ...... 143
         E.4.3.2. Evaluation of Survey Results from Section A of Questionnaire - Validity of Obtained Results ...... 143
      E.4.4. Questionnaire Version SECESA 2014 ...... 144
   E.5. Exemplary Application of Tool and Simulator Quality Model ...... 154

F. Parameter Influence Net Method - Algorithm For Trigonometric Calculations and Exemplary Application ...... 158
   F.1. Parameter Influence Net Method Algorithm For Trigonometric Calculations ...... 158
   F.2. Application of Parameter Influence Net Method on Modelling of Eclipse Time t_Eclipse ...... 166
      F.2.1. Derivation of Eclipse Time for Circular Orbits ...... 166
      F.2.2. Derivation of Eclipse Time for Elliptical Orbits ...... 169
      F.2.3. Application of Parameter Influence Net Method Algorithm on Modelling of Eclipse Time for Circular Orbits ...... 169
      F.2.4. Application of Parameter Influence Net Method Algorithm on Modelling of Eclipse Time for Elliptical Orbits ...... 172
   F.3. Application of Parameter Influence Net Method on Modelling of Sunlit Time t_Sunlit ...... 172
      F.3.1. Derivation of Sunlit Time t_Sunlit for Circular and Elliptical Orbits ...... 172
      F.3.2. Application of Parameter Influence Net Method Algorithm on Modelling of Sunlit Time for Circular and Elliptical Orbits ...... 172
   F.4. Application of Parameter Influence Net Method on Modelling of Power Subsystem ...... 173
      F.4.1. Derivation of Battery Stored Energy E_Bat ...... 173
      F.4.2. Application of Parameter Influence Net Method Algorithm on Modelling of Battery Stored Energy ...... 174
   F.5. Application of Parameter Influence Net Method on Modelling of Communication Subsystem ...... 175
      F.5.1. Derivation of Free Data Storage Capacity C_free ...... 175
         F.5.1.1. Derivation of Maximum Ground Station Contact Time for Spacecraft in Circular Orbit ...... 176
         F.5.1.2. Derivation of Maximum Ground Station Contact Time for Spacecraft in Elliptical Orbit ...... 177
      F.5.2. Application of Parameter Influence Net Method Algorithm on Modelling of Free Storage Capacity ...... 178
         F.5.2.1. Application of Parameter Influence Net Method Algorithm on Modelling of Contact Time for Circular Orbits ...... 179
         F.5.2.2. Application of Parameter Influence Net Method Algorithm on Modelling of Contact Time for Elliptical Orbits ...... 180
   F.6. Parameter Influence Net - Exemplary Implementation ...... 180

G. Hubble Space Telescope Data 185

H. Definitions and Translations of Employed Notions 187

Bibliography 193

List of Figures

1.1. M&S Simulator Landscape and Classification According to Defined M&S Dimensions ...... 8
1.2. Thesis Roadmap ...... 13

2.1. Structure and Interfaces of Science Payload Simulator ...... 19
2.2. SPS Default GUI Screenshot ...... 20
2.3. Interaction of Human-Centred Design Activities ...... 23

3.1. System Simulator Concept Transfer Approach ...... 33
3.2. SSC Transfer Approach Step 2 and 3: Simulator Development Process Following SSC, Based on UCD Approach ...... 34
3.3. Evolution of Angular Momentum of LOFT's Four Reaction Wheels per Wheel Axis over Mock Observation Plan ...... 36
3.4. Analysis of Reaction Wheel Failure: Zoom into Evolution of Angular Momentum Stored in One Reaction Wheel During Slew Manoeuvre Performed in Three-Wheel Configuration Prior and After Observation Plan Tuning ...... 37

4.1. Tool and Simulator Quality Model Level III ...... 52
4.2. Tool and Simulator Quality Model Level II and III ...... 52
4.3. Tool and Simulator Quality Model Level I, II and III ...... 52
4.4. Usability Breakdown Structure ...... 53
4.6. Abstracted Usability Breakdown Structure ...... 61
4.7. Criteria Weighting Calculation, Tool and Simulator Evaluation and Their Relation ...... 62
4.8. Tool and Simulator Quality Model Success Definition ...... 65
4.5. Quality Criteria Ranking based on Survey Results ...... 66

5.1. System Structure Implied by Algebraic Expressions ...... 70
5.2. System Structure Created by Interconnected Algebraic Expressions ...... 70
5.3. Cascade Algorithm Implemented in Parameter Influence Net ...... 74
5.4. Example for Accumulated Parameter Influence Net ...... 75
5.5. Parameter Influence Net GUI Screenshot ...... 77
5.6. Parameter Influence Net: Labelling of Elements ...... 78
5.7. Parameter Influence Net Method Development Process ...... 82

A.1. LOFT Spacecraft and Reference Frame ...... 102
A.2. LOFT Reaction Wheel Accommodation in Pyramidal Configuration ...... 102
A.3. LOFT Field of Regard ...... 103


A.4. Variation of Earth Occultation on Field of Regard of LOFT's Large Area Detector over One Orbit ...... 104
A.5. Evolution of Commanded Torque, Spacecraft Angular Rate and Rotation Angle During Time Optimal 60° Slew Manoeuvre of LOFT ...... 105
A.6. Extract of LOFT Mock Observation Plan ...... 105
A.7. Histogram of Frequency of Required Slew Angles for LOFT According to Observation Plan ...... 106

B.1. LOFT Simulator Default GUI Screenshot...... 115

C.1. JUICE Spacecraft and Reference Frame ...... 117
C.2. JUICE GCO 500 Orbit and Beta Angle Definition ...... 118

D.1. JUICE Simulator Default GUI Screenshot...... 123

F.1. Parameter Influence Net Representation of Sine Function ...... 159
F.2. Relative Deviation of Linear Approximation for Sine Function and Positive Constant Percentaged Change p ...... 162
F.3. Relative Deviation of Linear Approximation for Sine Function and Negative Constant Percentaged Change p ...... 162
F.4. Relative Deviation of Linear Approximation for Cosine Function and Positive Constant Percentaged Change p ...... 163
F.5. Relative Deviation of Linear Approximation for Cosine Function and Negative Constant Percentaged Change p ...... 163
F.6. Relative Deviation of Linear Approximation for Arcsine Function and Positive Constant Percentaged Change p ...... 164
F.7. Relative Deviation of Linear Approximation for Arcsine Function and Negative Constant Percentaged Change p ...... 164
F.8. Relative Deviation of Linear Approximation for Arccosine Function and Positive Constant Percentaged Change p ...... 165
F.9. Relative Deviation of Linear Approximation for Arccosine Function and Negative Constant Percentaged Change p ...... 165
F.10. Evolution of Eclipse Time in Circular Orbits in Dependence of Orbital Altitude ...... 168
F.11. Evolution of Eclipse Time in Circular Orbits in Dependence of Orbital Altitude (Zoom) ...... 168
F.12. Parameter Influence Net of Modelled Communication System ...... 181
F.13. Parameter Influence Net: Screenshot of Modelled Communication System Net ...... 182

List of Tables

1.1. Definition of Common Modelling and Simulation Notions...... 2

2.1. Exemplary Definitions of Usability...... 24

4.1. Tool and Simulator Quality Model Quality Criteria Contribution to Efficiency, Effectivity and Pleasure ...... 54
4.2. Tool and Simulator Quality Model: Sources of Bias ...... 58

A.1. LOFT Overall Mission Profile ...... 102
A.2. LOFT Notions Definitions ...... 103

B.1. LOFT Simulator Power Model Parameters...... 108

C.1. JUICE Overall Mission Profile...... 116

E.1. Tool and Simulator Quality Model: Quality Criteria Weighting - Survey Results ...... 125
E.4. SECESA 2014 - Survey Results ...... 143
E.7. Tool and Simulator Quality Model Application and Results ...... 154

F.1. Value Intervals for x_DP Where Relative Deviations RD = (p_f,Taylor − p_f,exact)/p_f,exact for Common Trigonometric Functions are Below 0.1 for Percentage Changes −0.05 ≤ p_x ≤ 0.05 ...... 161
F.2. Percentaged Changes p for Trigonometric Functions, Comparison of Exact Calculation and Taylor Based Approximation and Domains of Definition ...... 183
F.3. Relative Deviation RD = (p_f,Taylor − p_f,exact)/p_f,exact for Common Trigonometric Functions and Evolution of Terms Towards Point of Discontinuity ...... 184

G.1. Hubble Space Telescope Design Point Data...... 186

H.1. Tool and Simulator Quality Model Criteria Definitions ...... 187
H.2. Definition of Common User-Centred Design Related Terms ...... 189
H.3. Tool and Simulator Quality Model: English-German Translations of Common Notions ...... 190


Symbols

Latin symbols

A_SA          m²          area of solar array
a, b, c       -           generic constants
B, B⃗          T           Earth's magnetic field
C             b           data storage capacity
C_cell        Ah          capacity of battery cell
C_loss        -           capacity loss of battery
d             %           yearly degradation of solar cells
D, D⃗          Am²         spacecraft magnetic dipole
DOD           %           depth of discharge
e             -           orbital eccentricity
E_Bat,n       Wh          energy storage of battery after n orbits
E_max         Wh          maximum battery energy
f, g, h       variable    generic mathematical functions
h_Orbit       km          orbital altitude
H_RW, H⃗_RW    kg m²/s     angular momentum of reaction wheel
i             degree      orbital inclination
i, j, k       -           generic indices, representing single summands respectively factors in an equation
I_RW          kg m²       reaction wheel moment of inertia
I_S/C         kg m²       spacecraft moment of inertia
l             years       satellite lifetime


l_Bat         years       battery lifetime
m⃗, M⃗          Nm          moment
N_cycle       -           number of battery duty cycles
p_x           -           percentaged change of parameter x
P_SA          W           power generated by solar array
P_user        W           power consumption, averaged over orbit duration
P_user,e      W           power consumption in eclipse
P_user,s      W           power consumption in sunlit fraction of orbit
q, r, s       -           number of summands respectively factors in an equation
R             bps         data rate
r_S/C, r⃗_S/C  m           distance Earth-spacecraft
S             W/m²        solar intensity at 1 astronomical unit, 1367 W/m²
T_Bat         °C          battery temperature
T⃗_GG          Nm          gravity gradient torque
T⃗_m           Nm          magnetic torque
t_e           s           time spent in eclipse
t_Orbit       s           orbital period
t_s           s           time spent in sunlight
V_cell        V           battery cell voltage
x, y, u, w, x⃗  variable    generic parameters


Greek symbols

α             degree          solar incident angle
β_s           degree          incident sun angle on the orbital plane
Δt            s or h          time span
Δt_c          s               battery charging time
Δt_d          s               battery discharging time
η_cell        -               solar cell energy conversion efficiency
η_charge, η_c -               battery charging efficiency
η_cov         -               percentage of solar array area covered by solar cells
η_discharge, η_d  -           battery discharging efficiency
η_MPPT        -               solar generator regulator efficiency
η_temp        -               solar cell temperature efficiency
μ             m³/s²           Earth's gravity constant
φ             rad or degree   angle variable
ρ             degree          angular radius of the Earth
ω_RW          1/s             angular velocity of reaction wheel


Abbreviations

AIV Assembly, Integration and Verification

AHM Attitude Hold Mode

AOCS Attitude and Orbit Control System

BER Bit Error Rate

BOL Beginning of Life

C&DH Command and Data Handling

Com Communication

D/L Downlink

DLR Deutsches Zentrum für Luft- und Raumfahrt

DOD Depth of Discharge

DoF Degree of Freedom

DP Design Point

EEP Effectivity, Efficiency, Pleasure

EOL End of Life

EO Earth Observation

EPS Experimental Planning System

ESA European Space Agency

ESTEC European Space Research and Technology Centre

FAVS Functional Appropriate Visualization and Structure

Fig Figure

FoR Field of Regard

GaAs Gallium Arsenide


GEO Geostationary Orbit

GMSS General Mission and System Simulator

GUI Graphical User Interface

HGA High Gain Antenna

HK Housekeeping

IGRF International Geomagnetic Reference Field

JAXA Japan Aerospace Exploration Agency

JUICE JUpiter ICy moons Explorer

LAD Large Area Detector

LEO Low Earth Orbit

LOFT Large Observatory for X-ray Timing

MBSE Model Based Systems Engineering

MCS Mercury Composite Spacecraft

MMO Mercury Magnetospheric Orbiter

MOSIF MMO Sunshield and Interface Structure

MPO Mercury Planetary Orbiter

M&S Modelling & Simulation

MTM Mercury Transfer Module

NASA National Aeronautics and Space Administration

OCDT Open Concurrent Design Tool

P/L Payload

PI Principal Investigator

PIN Parameter Influence Net

PINM Parameter Influence Net Method

RAAN Right Ascension of Ascending Node

RD Relative Deviation

S Science (Index)

SA Solar Array


SAA Solar Aspect Angle

S/C Spacecraft

SE Systems Engineering

SLM Slew Mode

SM&S System Modelling & Simulation

SOC State Of Charge

SoS System of Systems

SoSE System of Systems Engineering

SPS Science Payload Simulator

SSC System Simulator Concept

Tab Table

TT&C Telemetry, Tracking and Command

TSQM Tool and Simulator Quality Model

UCD User-Centred Design

VSD Virtual Spacecraft Design

WFM Wide Field Monitor


1. Introduction

1.1. Importance of Modelling and Simulation in Space Projects

The success of a space mission is based on the accomplishment of the defined mission objectives that are the core of every space mission. The mission and spacecraft design has to target their best possible achievement. It shall allow for the best in-orbit performance with the least resources to obtain a maximum outcome of the mission while handling conflicting requirements, constraints and the overall complexity of the mission.

The complexity of the system, the accumulation of an extensive number of design-impacting factors and the number of involved disciplines call for a structured development approach to achieve this goal. For this reason, the interdisciplinary approach of systems engineering (SE) is used in space projects. NASA [1, p.3] defines SE as "a methodical, disciplined approach for the design, realization, technical management, operations, and retirement of a system". The iterative SE process as described in literature, e.g. [1, 2, 3, 4], consists of a sequence of activities to be performed across the project phases to transform the customer requirements, including time and money constraints, into a robust and balanced system solution.

In support of engineering and operation activities, Modelling and Simulation (M&S) is an integral part of the SE activities. It assists in the specification, analysis, design, verification, validation and operation of the space segment by demonstrating system capabilities, prior to the system’s realization and/or operation, that would otherwise be impossible to show.

Simulations used in the space domain cover one or several subsystems and are realized in a wide range, from a set of simple algebraic formulas implemented in MATLAB or MS Excel up to sophisticated mission-specific simulators comprising detailed models. They cover the complete life cycle of a mission from analysis and design, over Assembly, Integration and Verification (AIV), up to operations, classified by ESA memorandum [5] as three M&S application fields (AF). Simulations support the identification of possible mission concepts and the verification of their feasibility in early design stages, the refinement and consolidation of the mission and system design later on, up to the qualification of the space segment prior to launch and its operation in space. In support of the iterative and recursive design process, they facilitate trade-offs and design decisions and assist in the verification and validation of the mission and system design against the mission objectives through behaviour predictions, reducing the risk of costly design iterations. They also support the testing of the system and the training of the operations team.
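At the simple end of this range, a "set of algebraic formulas" can be as small as a single Keplerian relation. A minimal sketch (in Python rather than MATLAB or MS Excel; the altitude value is purely illustrative) using the gravitational parameter µ from the symbol list:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravity constant mu [m^3/s^2]
R_EARTH = 6.371e6          # mean Earth radius [m]

def orbital_period(altitude_m: float) -> float:
    """Period of a circular orbit: T = 2*pi*sqrt(a^3 / mu)."""
    a = R_EARTH + altitude_m  # semi-major axis of a circular orbit [m]
    return 2.0 * math.pi * math.sqrt(a**3 / MU_EARTH)

# A 550 km low Earth orbit completes one revolution in roughly 95 minutes.
period_s = orbital_period(550e3)
```

Such one-line relations, chained in a spreadsheet or script, already constitute a model in the sense of Table 1.1; only their repeated, time-varying execution turns them into a simulation.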

To establish a common terminology background, Table 1.1 provides definitions of M&S key notions used in this work.


Table 1.1.: Definition of Common Modelling and Simulation Notions.

Notion | Definition

Model | A model is the abstraction or representation of reality [6]. It is a mathematical or physical (=real) equivalent of the real phenomena to be investigated [7, p.24].

Modelling | Modelling refers to the construction of models as abstraction of reality [6]. Modelling is the presentation of the complete system or parts of it in mathematical or real form to demonstrate the behaviour of the system [7, p.24].

Simulation | Simulation is complementary to modelling. A simulation is the dynamic execution of a model to study or verify its behaviour [5, 6]. The outcome of the simulation is the system’s behaviour as response to its environmental conditions [7, p.24]. In a simulation "the system design is subjected to a time-varying input to stimulate its dynamic response modes" [8, p.99].

Tool | A tool is a supportive software device. Tools are employed to implement and constitute a simulator [6].

Simulator | Simulators are used to run simulations and are usually set up with tools.

System Modelling and Simulation (SM&S) | SM&S refers to modelling and simulation activities performed at system level. In the space domain, the system is considered to be a spacecraft with its environment and parts or the entire ground segment [5, 6].

In practice, the notions tool and simulator are not clearly distinguished and are occasionally used inaccurately. A clear differentiation is difficult to achieve as software products entail capabilities that can be allocated to a tool and others that characterize a simulator. Often, the same product is used to set up the models while also providing the infrastructure to execute them and the database to configure the simulation, cf. System Tool Kit by AGI [9]. As a consequence, both terms are used ambiguously in practice. In addition, both terms already carry specific meanings. For example, the term simulator is widely understood in the space industry as a software product used to verify the on-board software of a spacecraft. To avoid misunderstandings, the notions as defined in Table 1.1 will be used to the greatest possible extent in this work.
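The distinction drawn in Table 1.1 between a model (a static, here algebraic, description) and a simulation (the dynamic execution of that model over time) can be sketched as follows. All battery and power figures are purely illustrative and not taken from any mission; the symbols echo ηc, ηd and DOD from the nomenclature:

```python
# Model: an algebraic battery description (charging/discharging efficiencies).
# Simulation: the time-stepped execution of that model over sun/eclipse arcs.

ETA_CHARGE = 0.95     # illustrative battery charging efficiency eta_c
ETA_DISCHARGE = 0.90  # illustrative battery discharging efficiency eta_d
CAPACITY_WH = 500.0   # illustrative battery capacity [Wh]

def soc_step(soc_wh: float, net_power_w: float, dt_s: float) -> float:
    """One model evaluation: state of charge after dt seconds."""
    energy_wh = net_power_w * dt_s / 3600.0
    if net_power_w >= 0.0:
        soc_wh += ETA_CHARGE * energy_wh       # losses while charging
    else:
        soc_wh += energy_wh / ETA_DISCHARGE    # losses while discharging
    return min(soc_wh, CAPACITY_WH)            # battery cannot overcharge

def simulate_orbit(soc_wh: float, sun_s: float = 3600.0,
                   eclipse_s: float = 2100.0, dt_s: float = 60.0) -> float:
    """Dynamic execution of the model: one sunlight arc, then one eclipse arc."""
    t = 0.0
    while t < sun_s:                     # charging with surplus power in sunlight
        soc_wh = soc_step(soc_wh, +200.0, dt_s)
        t += dt_s
    t = 0.0
    while t < eclipse_s:                 # discharging to supply loads in eclipse
        soc_wh = soc_step(soc_wh, -300.0, dt_s)
        t += dt_s
    return soc_wh

end_soc_wh = simulate_orbit(400.0)
```

The function `soc_step` alone is the model; only the loops in `simulate_orbit`, subjecting it to a time-varying input, make it a simulation in the sense of [8, p.99].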

1.2. Four Dimensions of Modelling and Simulation in Space Projects

To assess the practice of M&S in space projects, this work distinguishes four categories for the classification of tools and simulators. These four dimensions are defined to be:

• level of interest: M&S is employed on different levels, from single component over discipline up to system level.


• development approach: M&S is realized with tools and simulators that range between ready-to-use and built from scratch.

• derivation: M&S is implemented with tools or simulators that are self-made, bought or in between these two extremes.

• service life: M&S is realized with tools and simulators that accompany the project for a very short time up to major parts of the life cycle.
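The four dimensions above can be read as a small classification scheme. The following sketch renders it as a data structure; the numeric scales and the example values are assumptions for illustration only, since the thesis treats each dimension as a continuum rather than fixed values:

```python
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    """Dimension 1: level of interest."""
    COMPONENT = 1
    DISCIPLINE = 2
    SYSTEM = 3

@dataclass
class MnSClassification:
    level: Level              # dimension 1: component, discipline or system level
    ready_to_use: float       # dimension 2: 0.0 = built from scratch, 1.0 = ready-to-use
    bought: float             # dimension 3: 0.0 = self-made, 1.0 = bought
    service_life_phases: int  # dimension 4: number of project phases covered

# Hypothetical example: a self-made, phase-specific, discipline-level
# Excel budget sheet, as frequently encountered in early design phases.
excel_sheet = MnSClassification(Level.DISCIPLINE, ready_to_use=0.1,
                                bought=0.0, service_life_phases=1)
```

Positioning each encountered simulator in such a tuple is essentially what the later landscape assessment (Fig. 1.1) does graphically for the first three dimensions.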

1.2.1. First Dimension "Level of Interest": System vs. Discipline-Level

M&S can be implemented across the system life cycle in a discipline-specific way as support. The term discipline is defined in the ESA standard ECSS-M-ST-10 on space project planning and implementation as "a specific area of expertise within a general subject" [10]. In the space domain, M&S support for thermal and mechanical modelling and analysis, AOCS, Power or Data Handling and Communication can be encountered, see later Section 1.3.1. M&S can also be employed on system level to solve system-related problems. It is able to support those SE activities that require a multidiscipline perspective to assess the system and its behaviour comprehensively, for example the definition of system requirements, system design validation from an electrical, thermal, operational, etc. point of view against high-level performance requirements, support of test activities, prediction of system performance, mission control team training and anomaly troubleshooting [5, 6].

So the use cases for SM&S may cover the complete space system life cycle and the corresponding M&S application fields, ranging from system-level requirements definition, analysis and design trade-offs, over AIV activities at subsystem and system level, up to training and support for spacecraft operations. Tools and simulators used on discipline level are usually sophisticated and very detailed. On the downside, interdisciplinary aspects are largely left aside so that a system view is missing. In contrast, tools and simulators combining several disciplines allow system-related (design) challenges to be detected and tackled. They encourage and support cross-discipline communication so that comprehensive problem solving on system level is enhanced. This favours system maturity and balance and an efficient disciplinary work flow while saving time and costs. If problem finding and solving is part of early design work, design changes and iterations are less costly than in later stages. On the downside, system-level tools and simulators might become complex if too many functionalities are implemented, resulting in more management and maintenance effort.

1.2.2. Second Dimension "Development Approach": Build from Scratch vs. Ready-to-Use

One of the main decisions during the implementation of M&S is whether the work is done based on existing elements or completely started from scratch. The decision for the development approach depends on the complexity of the system in focus and the intended purpose of the to-be-created product. It might change during the course of the mission.

Creating a software product from scratch provides the greatest possible extent of flexibility and allows the simulator or tool to directly answer the needs of a project team. The development process, however, can be very time-consuming due to the required validation and verification work and might be challenged by the tight time schedule of a study or a project.

To save time and costs, space projects fall back on existing commercial or self-made tools and simulators. However, these products are rarely ready-to-use and often require effort for mission-specific customization. Depending on the flexibility of the tool or simulator and its functionalities, the customization might be time-consuming and only possible to a limited extent, leaving the product not thoroughly useful for the team. On the other hand, using existing tool and simulator components might bring along validated models and functionalities, which assures some degree of maturity.

1.2.3. Third Dimension "Derivation": Make vs. Buy

Both tools and simulators can be self-made or commercial. Bought software products are developed either by the space primes themselves in dedicated departments or by external simulator suppliers, both providing their services as simulator developers to their clients [6]. They might be completely designed from scratch or require customization by their end-users to answer the project-specific needs. Self-made simulators can be completely modelled and created from scratch or can make use of existing elements from the creator and user himself or from colleagues. Public-licence and open-source software products are a peculiarity within this M&S dimension, spread between the two extremes. Depending on their maturity, they can be considered a sophisticated product similar to the ones commercially available, or still require development work by the user.

Self-made tools and simulators offer the potential to save costs during development, to keep the technical knowledge in house and to provide their users the highest flexibility to answer their needs. However, the tool and simulator development process from modelling to validation and verification might be time-consuming. In addition, the maturity of the software solution is endangered if no or very few programming skills are available internally. In contrast, bought software products might be available for use in shorter time and in a far more mature status, however, being potentially more expensive than self-made products. Though the purchased product should cover the specified needs, this is not guaranteed, either.

1.2.4. Fourth Dimension "Service Life": Phase-Specific vs. Long-Term Operation

Choice, usage and maturity of the employed tools and created simulators change along the project life cycle and are highly dependent on the project phase, in accordance with the changing M&S needs. Software products might be quickly configured or set up to solve specific problems and abandoned and/or exchanged after problem solving. These phase-specific solutions might be employed during a short time of the mission, or they might be evolved, maintained and employed during a large part of the product life cycle.

Phase-specific tools and simulators answer the needs of the addressed phase and are likely to be manageable as they are ideally lean and require little or no maintenance effort. A major drawback, however, is the higher familiarization effort: the more tools and simulators are encountered along the mission, the more time the users have to spend getting familiar with the products. Furthermore, having changing tools and simulators along the mission life cycle requires effort to assure data compatibility between phases and impedes knowledge transfer throughout the project through a discontinuous use of tools and simulators. Tools and simulators employed during a longer time of the mission might act as data source and assure the knowledge transfer between phases. Also, the familiarization effort is reduced if the same tool or simulator is employed during a longer part of the project. To ensure their continuous usage, however, additional effort might be necessary for maintenance activities and tool and simulator updates to adapt the products to changing needs along the mission.

1.3. State-of-the-Art of Modelling and Simulation in Space Domain

1.3.1. Tool and Simulator Landscape

A variety of space-specialized tools and simulators is available on the market, both free and subject to charge, that target different application fields to support the modelling and simulation process.

A number of (free-of-charge) products covering different satellite-related topics is developed at universities and institutes and as student and research work, with differing degrees of maturity; see for instance the High Performance Spacecraft Dynamics Simulator [11], the Open Source Satellite Attitude and Orbit Simulator Toolbox for Matlab [12], the astrodynamics model library TU Delft Astrodynamics Toolbox [13] and the Open Source Satellite Simulator OS3 for network simulation [14, 15]. Individual enthusiasts around the globe develop in particular open-source orbit propagators with visualization features, e.g. [16, 17, 18].

While these academic and sporadic developments by individuals largely cover the first application field of analysis and design, the range of institutionally used tools, developed internally or in collaboration with private industry, covers products intended for various application fields. ESA’s tool landscape specifically comprises the operational application field (AF 3) with software products intended for satellite operations support and training, see for example SIMSAT from Terma A/S that serves as kernel of mission operations simulations [19], the European Real-Time Operations Simulator EuroSim [20] and the mission operation simulator infrastructure SIMULUS [21]. The modelling and simulation tool EcosimPro by EA Internacional S.A. [22, 23] and SIMVIS [24] as multiphysics modelling and simulation tool for application in the concurrent design phase are targeted at the application fields of analysis and design (AF1) and AIV (AF2). SIMSAT, SIMULUS and SIMVIS are free of charge for citizens of the ESA member states. In collaboration with private industry, NASA has been developing the General Mission Analysis Tool as an open-source tool with focus on space mission analysis and design, cf. [25] and for the user manual [26], applicable in particular to the design and development phase.

Sophisticated tools specialized in a single domain exist that cover the complete life cycle of the product (from AF1 to AF3). This is for example the case for satellite network simulations, with commercial vendors like iTrinegy (Satellite Network Emulator, cf. [27]) and Vocality (Satellite Simulator 3, cf. [28]) or public-licence products developed under ESA ARTES programs like the Satellite Network Simulator 3 from Magister Solutions, cf. [29].

Industry standards for discipline-specific applications intended for detailed design and development (AF 1) are for instance the NASA Structural Analysis System NASTRAN by MSC Software for mechanical applications [30] and the ESATAN Thermal Modelling Suite (ESATAN-TMS) by ITP Engines UK for thermal modelling and simulation [31]. MS Excel [32] and MATLAB/Simulink [33, 34] provide full flexibility to their users and have evolved to industry standards for self-made products addressing various discipline applications with respect to design and analysis (AF1). All these products are subject to charge.

Commercial toolboxes like Aerospace Trajectory Optimization Software (ASTOS) by ASTOS Solutions [35], the System Tool Kit (STK) from Analytical Graphics [9] in combination with the module SOLIS [36] and the Spacecraft Control Toolbox (SCT) from Princeton Satellite Systems [37, 38] provide a wider range of application, though still with a scope that is rather discipline-focused.

Large prime industries operate their own software departments that develop internal, self-made tools and simulators that are not marketed as products, largely covering single application fields. In support of the institutional and prime-internal software departments, well-established commercial companies like gmv [39] complete the landscape by providing customized software solutions, for instance for mission performance and ground system simulations, that also target specific application fields.

1.3.2. Practice of Modelling and Simulation in Early Design Phases

The mere availability of simulators and tools on the market cannot be directly equated with their actual usage. Therefore this section focuses on the real M&S practice. For this purpose, extensive research on publications reflecting the current usage of tools and simulators in space industries was performed but did not yield satisfying results. Consequently, the information presented below is mainly based on the author’s experience gained in the Space Systems division of Airbus Defence and Space GmbH in Friedrichshafen, Germany, and the direct exchange with colleagues from the Future Programs Department [40]. Due to the nature of these insights and in order to analyse the potential long-term development of tools and simulators, the present work focuses on the early mission phases (0 to B2) and the corresponding analysis and design activities (AF1) that are supported by M&S. The later project phases (test, verification, operation and disposal) and the related use of software products are not subject of this work.

Figure 1.1 depicts the results of the assessment by positioning the identified simulators employed in mission phases 0 to B2 (very rarely also C/D) in the first three M&S dimensions presented in Section 1.2. The graphical plane represents the second and third dimension, whereas the font size clarifies whether the simulators are used on discipline (small font) and/or system level (large font) (first dimension). Please note that the y-axis purely distinguishes between simulators that are bought (project-externally) and simulators that are set up by the user himself or a company colleague. If simulators are re-used from similar projects, this is reflected in the second dimension on the x-axis that presents the availability of a simulator for a mission. Therefore, a re-used simulator that was previously created from scratch for another mission is classified as ready-to-use or needing customization, but not as created from scratch. The simulator keeps the third-dimension-related classification from its original project and remains self-made or bought. The colour shading emphasizes the quantitative occurrence of the software product, i.e. the stronger the colour, the more frequently the software solution is encountered. The fourth dimension is not reflected in Fig. 1.1 but described later in this section.


Software products based on MS Excel are encountered very often, in particular in the very early design phases 0 and A. They are mainly self-made, created from scratch or transferred by their users from a previous project. At the utmost, they are taken over from colleagues and modified to the own needs. They are rarely ready-to-use. A single creation consists of several linked spreadsheets and usually covers a specific discipline, whereas MS Excel itself is used in diverse domains. Although it might be controversial to classify Excel-based software products as simulators, they do play an important role in space projects and must not be omitted here.

MATLAB/Simulink-based simulators are as important as Excel-based ones. They are mainly used in single disciplines and can specifically be found in the AOCS-related domain. MATLAB-based simulators are bought in-house developments or, more frequently, self-made, created from scratch or transferred from a preceding project. Simulink-based simulators are generally not created from scratch and not classified as completely self-made as they make use of the internal model libraries that Simulink provides. It may also happen that a collection of ready-to-use MATLAB/Simulink-based simulators is available and exchanged among colleagues. In contrast to Excel-based simulators, MATLAB/Simulink-based simulators more often rely on existing simulator versions and are only modified to the mission-specific needs.

Tools like ASTOS [35] or the Spacecraft Control Toolbox [37] are based on MATLAB/Simulink as well. They provide pre-defined functionalities to their users to set up a simulator that is focused on the AOCS discipline. They offer a larger selection of pre-defined and elaborated functionalities than MATLAB/Simulink, requiring less programming-related work from the final user. Therefore they are classified towards the bought and customized products in Fig. 1.1.
The most common MATLAB/Simulink-based simulators, however, remain the ones that are created independently of commercial MATLAB/Simulink-based tools.

A variety of simulators come into play in projects that are developed company-internally by means of diverse tools (either commercial or company-internal), often rather bought than self-made but seldom programmed individually. They are summarized as internal simulators in Fig. 1.1. Their scope is mainly discipline-specific and they are not created from scratch by their final users but customized to specific user needs, with a tendency to ready-to-use functionalities.

Rather system-comprehensive simulators can be created with the System Tool Kit [9]. They often support Earth observation projects in assessing the satellite’s ground coverage and related functionalities while also allowing some analyses of involved subsystems. As STK already provides various features, requiring only few set-up activities, STK-based simulators are nearly categorized as bought products with a tendency to be ready-to-use from the beginning. These simulators are used less frequently than MATLAB/Simulink- and Excel-based simulators.

In summary, the investigation of the actual tool and simulator landscape demonstrates that the current M&S implementation in early design phases is realized by rather discipline-specific than system-comprehensive simulators. SM&S as such is not yet established practice in the space industry. Instead, M&S is put into practice on an ad-hoc basis in each spacecraft engineering discipline. Simulators are developed discipline- or subsystem-specifically, cf. also technical memorandum [5], and do not cover the whole system in focus. The employed simulators are rather built from scratch than ready-to-use and are very often self-made. The majority of the simulators mentioned above are developed by the primes themselves. Very few of the tools cited in Section 1.3.1 are applied successfully in the commercial space industry. Most of them appear to be employed by few, specialized users.


With regard to the fourth dimension, the encountered simulators and tools are seldom employed in more than one or very few consecutive phases of a project. In particular, tools that require very few programming skills (i.e. Excel-based products) are quickly configured and set up to solve specific problems and often abandoned and/or exchanged after problem solution. These ad-hoc simulator solutions are frequent but often neither mature nor maintained. In general, simulators that are maintained and regularly updated are based on tools like MATLAB/Simulink. However, even these simulators are usually not intended to be used along a mission but only in a limited phase. More often, a chain or combination of several simulators is used along the project life cycle. All in all, the ad-hoc simulator approach is dominant, in particular in the early design stages where the openness of the to-be-defined design calls for tool and simulator flexibility. Standardized tools and mature simulator solutions providing the greatest possible flexibility of scope and configuration, however, are missing in the current M&S implementation.

Figure 1.1.: M&S Simulator Landscape and Classification: Font Size Reflects M&S Dimension 1 (Small Font for Discipline, Large Font for System Level Simulators), X-Axis Reflects M&S Dimension 2, Y-Axis Shows M&S Dimension 3, Colour Shading Represents Occurrence of Software Solution in Categorization.

1.3.3. Motivation and Weaknesses of Current Ad-Hoc Modelling and Simulation Practice

As identified in the previous sections, the current M&S practice is to have discipline-specific simulators, created from scratch, self-made and employed for a short time period, rather than system-oriented, re-used, bought ones that are used over a longer period. For a thorough understanding of the current M&S practice, the motivation for it was analysed, revealing its potential improvement areas and identifying that the context of the tool and simulator usage is of significant importance.

With regard to the first M&S dimension, rather discipline-specific than system-level software solutions are popular as many questions to be tackled are of a detailed, discipline-specific character. System-level aspects are not part of the typical responsibilities of an expert but of the systems engineer or project manager. Direct exchange with colleagues [40] revealed that the general state of the second and third dimension of M&S and the practice to create ad-hoc solutions is motivated by several factors. These are:

(a) the tight time schedule in projects combined with

(b) only minor coordination of tool and simulator related activities in projects,

(c) sometimes limited usefulness of existing solutions with regard to encountered tasks (models not adapted/suitable, scope not sufficiently covering the task to be tackled),

(d) at times unclear or not well communicated maintenance of existing solutions, leading to limited confidence in their maturity and reliability, and

(e) missing documentation and entailed effort to (re-)familiarize with (the own) existing work.

The very nature of the ad-hoc solutions leads to the short service life (fourth M&S dimension) of the products.

The current M&S practice combined with the given tool and simulator landscape leads to three interwoven weaknesses that prevent M&S from being performed more efficiently and effectively and thus from unfolding its full potential in the space industry [6]. They are detailed in the following.

1.3.3.1. Limited Usefulness

The key weakness in the current M&S practice is that the available software products do not always meet the potential users’ expectations and needs to the full extent [40]. Some users felt that available tools and simulators, either commercial or self-made, were not fully suitable for their work. This concerned the scope of the tools and simulators (models and functionalities not sufficient and adequate for the tasks to tackle) as well as their manageability (tools and simulators too complex, with too many functionalities that overburden their users; handling requires considerable familiarization effort). A variety of commercial solutions is available on the market but was felt not to sufficiently take into consideration the circumstances in which the tools and simulators are used. For the early design stages this is in particular the tight time schedule, thus calling for a solution that is easy to use and does not overburden its user with complexity. To give an example, STK was criticized by many as very powerful in terms of functionalities and models but requiring a significant amount of familiarization effort.

Since existing solutions cannot fully cover the needs of the users (c) and the time schedule in early design phases is tight (a), users tend to create simulators ad-hoc from scratch to quickly achieve simulator solutions. This is often accompanied by missing documentation (e) and maintenance (d) of the solutions, leading to an overall simulator development effort that is challenging to coordinate within a project (b). With the next upcoming task, time remains limited and available solutions might not be fully adequate for the needs, so the user might tend to create again an ad-hoc solution.


1.3.3.2. Limited Knowledge Transfer

Even if simulators do provide a reasonable amount of usefulness, it was observed that these solutions are often not used, either. In more general terms, it was witnessed and confirmed by colleagues that the possibilities to communicate and use M&S knowledge and experience across phases and missions are not fully exploited yet [40], as generally observed by ESA across the companies and institutions of the European space sector [6]. This is reflected by the fact that the majority of simulators in use is self-made, created from scratch and phase-specific, as pictured in Fig. 1.1. Models and data that are very likely to be similar from the end of one phase to the beginning of the next one are not necessarily handed over. Thus the consideration of potential model commonalities for risk reduction and savings across or within projects is impeded. Simulator functionalities are newly set up. The usage of a simulator as data source is not necessarily considered as a matter of course. This is sometimes also the case across missions despite potentially existing (design) commonalities between them (for the generalizability of missions see Section 3.4.1).

As a result of this approach, effort in modelling, validation and verification as well as familiarization and training is spent that might have been avoided. It also impedes the full exploitation of the fact that continuous usage and development of simulators over project phases and across missions decreases development risks and leads to a growth in simulator functionality, maturity and finally reliability. This leads to the use of a chain of simulators and tools that are not necessarily compatible. Consequently, model and data re-use between tools and simulators used along the project life cycle, between major phases and across projects could be improved.

Reasons for this way of working are assumed to be the challenging coordination of the tool and simulator related activities within a project (b) (as also assumed by ESA [6]), a general lack of communication, triggered by the tight schedule (a), and indirectly the missing maintenance (d) and documentation (e) of existing solutions, limiting the trust in the reliability of the existing solutions.

1.3.3.3. Limited System Level Application

The overall assessment of the tool and simulator landscape revealed a limited system perspective in the current M&S implementation. This is underlined by the great number of discipline-specific simulators and tools in use (see Fig. 1.1) and reinforced by their phase-specific employment along the spacecraft’s life cycle, not always supported by a clear tool and simulator strategy. ESA, too, acknowledges the general lack of SM&S in European space projects [6]. While it is without doubt indispensable to employ domain-specific tools and simulators for the detailed design and analysis of the system and its subsystems, they do not provide a comprehensive view on the system, which is at least equally important (see also Section 2.2.2 on the importance of systems engineering for projects). The analysis of system interdependencies is not supported even though these interdependencies are responsible for the balance and robustness of the system. System-related deficits resulting from the interdependencies between disciplines might not necessarily be detected and/or solved by discipline-specific simulators alone, and the emergent behaviour of the system remains subject to the systems engineer’s experience and intuition. Synergies between phases are not used and the life cycle of the product is not necessarily continuously supported with the current M&S implementation.

Page 10 Introduction

1.4. Implementation of System Modelling and Simulation in Combination with User-Centred Design as Proposed Solution

Although the practice of M&S is acknowledged as supportive in general, its current implementation leaves room for improvement in several respects, as demonstrated in the preceding sections. The need to perform M&S more effectively and efficiently is obvious, especially in view of the significant time and financial investments that M&S requires, but also of the savings that it may bring.

A sustainable possibility to overcome the first M&S weakness, the limited usability of the M&S implementation, is the application of User-Centred Design (UCD) for the tool and simulator development. The UCD approach puts the user and his needs at the centre of the tool and simulator design process. In combination with UCD, an intensified realization of SM&S makes it possible to overcome the other two major weaknesses of the M&S implementation, the sometimes limited knowledge transfer and the limited system focus. SM&S enables a phase-embracing tool and simulator approach in line with the life cycle comprehensive discipline of systems engineering and supports the system view on the mission in assistance to the systems engineering activities (discussed in more detail in Section 2.2.2). An intensified realization of SM&S is also in line with the trend observed by ESA [6] that tools and simulators supporting a multiphysics system view are increasingly being used for system design and verification in the space industry.

A meaningfully implemented SM&S is thus expected not only to shape the M&S related activities more efficiently while overcoming their current weaknesses, but also to enhance and support the SE-related work and to increase the cost-effectiveness of the development process.

With many advantages of the usage of SM&S presented, the question may arise how SM&S can be efficiently implemented in space projects. At the current state, an adequate, broadly applicable approach for its implementation is missing.

A successful realization of SM&S has been executed in the frame of ESA's BepiColombo mission. A newly developed system simulator approach was initiated and realized by the Prime at Airbus Defence and Space GmbH starting from the mission's definition phase. The Science Payload Simulator (SPS) is a customized, i.e. user-centred, mission-specific simulator. It is a system and life cycle comprehensive product that, with regard to the M&S dimensions, is half self-made, half bought, and created from scratch. At any time, it has been answering the current needs of its users as a multiphysics device. Its scope and the involved models are specified by the Prime itself; its realization and programming are executed externally. The SPS has been employed and evolved in concert with the physical build-up of the space system and the progress of the mission since 2001 [41], which is exceptional in comparison to the usually employed discipline- and phase-specific tools and simulators. The SPS has gained considerable attention because of its unusual development process and its continuous use.

As an essential part of this work, the success of the SPS was analysed and crystallized to be the combination of its user-centric development according to the User-Centred Design process and the focus on the modelling and simulation of system interdependencies to assess the system's emergent behaviour in support of systems engineering activities, as later discussed in more detail in Section 2.2.


1.5. Thesis Goal and Structure of Present Work

The goal of this PhD is to demonstrate that the implementation of SM&S via SPS-like simulators can be sustainably used to overcome the identified M&S weaknesses and to make the tool and simulator development activities more efficient and successful in support of systems engineering activities. The demonstration is based on the transfer of the success of the SPS to other missions and the deduction of the characteristics a potentially successful simulator should bring along.

For this purpose, the PhD approaches the problem from different perspectives to determine comprehensively what is important for the tool and simulator user. On the one side, the System Simulator Concept (SSC) as guideline for system simulator development activities was derived from the SPS set-up and applied to two further missions. It was further used to develop the novel Parameter Influence Net Method (PINM) that complements the benefits of system simulators like the SPS. In parallel, the Tool and Simulator Quality Model (TSQM) was defined based on the user need assessment via interviews, questionnaires and observations, identifying and weighting the characteristics a successful software solution should bring along.

The statement accompanying this thesis is:

A software product built according to the SSC (1) can be employed profitably as systems engineering support (2) for any mission (3).

To support the thesis statement, this work

1. clarifies how the SSC is defined,

2. demonstrates that the SSC is not mission specific but transferable,

3. demonstrates that the SSC can be employed profitably to support the systems engineering activities and can be used to overcome the current weaknesses of the M&S domain, and

4. demonstrates that the SSC can be implemented in versatile ways.

As a result, this thesis communicates top-level guidelines for successful systems engineering simulator practices based on practical experience and lessons learned gained in the space industry. It provides guidance to systems engineers and simulator developers on how to establish system simulators that effectively and efficiently support systems engineering tasks, and answers the question of which characteristics a software solution has to exhibit to be successfully employed as systems engineering support.

In order to do so, this work consists of seven chapters: following the present introductory chapter, Chapter 2 focuses on the SPS, its storyline and the analysis of its characteristics leading to the definition of the SSC. Chapter 3 describes the elaborated transfer process of the SSC to further missions, demonstrates the results of the transfer to the feasibility phase A of ESA's LOFT mission and phase B1 of ESA's JUICE mission, and shows that the SSC can also be employed to create a mission-generic tool, the Generic Mission and System Simulator (GMSS). Chapter 4 presents a novel quality model and how it was defined as baseline for a user-centred tool and simulator development. The analysis of the tool and simulator landscape led to a novel interpretation of the SSC, the Parameter Influence Net Method (PINM), resulting in a new tool as presented in Chapter 5. Chapter 6 discusses the advantages and limits of the SSC, whereas Chapter 7 closes this work

by a conclusion and future prospects on the topic. These seven chapters are supplemented by appendices that provide further information to illustrate and deepen topics from the core chapters. Figure 1.2 pictures the thesis roadmap as a guideline for the reader.

Figure 1.2.: Thesis Roadmap.

2. A Successful System Modelling and Simulation Implementation and its Formalization into a Mission Transferable Concept - the Science Payload Simulator and the System Simulator Concept

The Science Payload Simulator (SPS) is a mission-specific, customized simulator, continuously developed and in use in ESA's BepiColombo mission since 2001. Based on the experience made with the SPS, the System Simulator Concept (SSC) has been derived to provide guidance for the development of successful system simulators. Section 2.1 deals with the motivation for the creation of the SPS, its development approach, scope and characteristics. Section 2.2 discusses its potential initiators for success, namely the application of the User-Centred Design (UCD) approach for its development and the modelling of parameter interdependencies. Finally, Section 2.3 presents the definition of the SSC as systematization of the findings of the SPS investigation.

2.1. Science Payload Simulator as Origin of System Simulator Concept

2.1.1. ESA’s BepiColombo Mission

BepiColombo is an ESA mission to Mercury in cooperation with Japan to gain an in-depth view of the planet. The science mission comprises two separate spacecraft. ESA is responsible for the Mercury Planetary Orbiter (MPO) that is supposed to study Mercury's surface and internal composition in a 400 x 1508 km polar orbit. The Mercury Magnetospheric Orbiter (MMO) is contributed by the Japan Aerospace Exploration Agency (JAXA) and will examine Mercury's magnetosphere in a polar orbit of 400 x 11824 km altitude.

Both spacecraft form, together with the MMO Sunshield and Interface Structure (MOSIF) and the Mercury Transfer Module (MTM), the Mercury Composite Spacecraft (MCS). Shortly before Mercury orbit insertion, the MTM will be separated from the spacecraft stack and the two spacecraft will continue their journey to their dedicated orbits around the planet. The MCS was launched on 20 October 2018 on an Ariane 5 rocket from Kourou in French Guiana and is supposed to arrive at Mercury in 2025. The nominal mission duration is set to one year. [42, 43, 44, 45]

Airbus Defence and Space GmbH in Friedrichshafen, Germany, is prime contractor and in particular responsible for the system design, realisation and functional verification of the MPO.


2.1.2. Context of Use and Identified Need for Science Payload Simulator

The SPS activities started in 2001 during BepiColombo's phase A, which implied the typical study constraints like a very tight schedule, a moderate financial budget and a limited number of team members. Under the given conditions, the team had to understand the complex mission constraints quickly and thoroughly to propose and justify an adequate system design compliant with the mission objectives. The multifaceted and multidisciplinary tasks covering, e.g., design decisions, trade-offs and design justifications, had to be solved at system level and changed corresponding to the high dynamics of the mission evolution. Neither deep programming skills of the project team members nor time for extensive familiarization effort was available in that phase to set up and maintain a ready-to-use software solution.

Existing tools and simulators could not respond to the identified need for system level support and easy manageability of the software solution in support of the systems engineer. The financial and time budgets were too short for adaptations of existing products as well as for the development of new simulators in conformity with traditional development processes, including extensive validation and verification effort. So the team pursued a novel simulator development approach to answer its urgent need, complementing the existing tools and simulators used at discipline level.

This resulted in the creation of two mission-specific, customized simulators. The SPS as one of these simulators, focusing on the MPO and its operation phase around Mercury, was created straight out of the needs of its future users.

2.1.3. Evolution of Science Payload Simulator

2.1.3.1. Development Approach and Characteristics

The peculiarity of the SPS development approach is the focus on the user, who has been at the centre of the design process right from the beginning of the development activities. As a result, the SPS cultivated particular characteristics that will be highlighted in the following as prerequisite for the SSC definition in Section 2.3.

Intended to be a product that continuously responds to the needs of its users, the SPS has been evolving in response to the variety of upcoming challenges and tasks. In the very beginning, for instance, the scope of the SPS only comprised a 2D-visualization of the MPO position with reference to Mercury, the Earth and the Sun to understand the complex illumination conditions of the mission. As numerous design decisions had to be made in the course of time, subsystem-specific modules were added to the simulator to support the corresponding trade-off analyses and design validation activities. As a result, the SPS comprises, among others, representations of the payload, power, communication and attitude and orbit control subsystems of the MPO and the environmental conditions in one single software product [41].

Beyond the implementation of multiple disciplines, the type and complexity of the emerging tasks lifted the SPS from a pure collection of independent discipline-specific models to a simulator acting at system level. These tasks called for the analysis of system comprehensive dependencies to predict the system's behaviour in operation. For this purpose cross-connections, i.e. interrelations between the discipline-specific models,

have been implemented in the simulator so that disciplines would be simulated dependent on the performance of each other. For example, the power generation by the solar array, determined by the attitude of the solar panels to the Sun, would be calculated based on the actual spacecraft attitude (and not a worst case) that, in turn, would be driven by operational constraints. Hence, the SPS emerged to be a system simulator.
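The coupling described above can be pictured with a minimal, self-contained sketch: the power model queries the attitude model instead of assuming a fixed worst-case Sun angle. All functions, numbers and the cosine-law power model are illustrative assumptions, not models taken from the SPS.

```python
import math

def panel_sun_angle(orbit_angle_deg: float) -> float:
    """Attitude model (illustrative): Sun incidence angle on the solar array
    for a nadir-pointing spacecraft as a function of the position in orbit."""
    return abs(orbit_angle_deg % 360 - 180) / 180 * 90  # 0..90 degrees

def array_power(orbit_angle_deg: float, p_max_w: float = 1000.0) -> float:
    """Power model: cosine-law output, coupled to the attitude model above
    instead of assuming a fixed worst-case Sun angle."""
    angle_deg = panel_sun_angle(orbit_angle_deg)
    return p_max_w * max(0.0, math.cos(math.radians(angle_deg)))

# Simulate one orbit and compare the coupled result with the worst case.
powers = [array_power(a) for a in range(0, 360, 10)]
print(f"mean power over orbit: {sum(powers) / len(powers):.0f} W")
print(f"worst-case power:      {min(powers):.0f} W")
```

The point of the coupling is visible in the output: the mean power available over an orbit is considerably higher than the worst-case figure a decoupled, conservative analysis would have to assume.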

Simulator models and functionalities have been specified and implemented according to their necessity and have been deleted as soon as their scope has become obsolete because of design changes or freezes. As a result, the SPS has always been an adaptive and lean software product, not overburdening its user or complicating its use with obsolete functionalities and modules. The SPS has been representing the current state of the mission and system design and its complexity has continuously corresponded to that of the system, serving as knowledge basin.

Answering the direct needs of its users, fundamental but comprehensive system level models have been preferred to detailed but single-aspect modelling. Models have only been refined if necessary. As an example, the solar panel model has been highly detailed in the course of the mission because of the criticality of the panel's temperature development. In contrast, the orbit propagation calculation remained rudimentary. A technical benefit of this strongly reduced model approach is that simulations of time spans covering months or years can be performed in a reasonable amount of time. Simulations of one mission day are possible in 30 minutes up to one hour [41]. The selected model philosophy further resulted in reasonable modelling and verification effort that, in turn, was adequate for the given context of limited time and financial budget. The reliability of the product was not questioned despite the limited classical verification and validation effort, since the majority of the simulator models and functionalities have been specified by the team itself. This also allowed the use of the mission-specific notions and terms and made the SPS customized, transparent and easy to familiarize with. Together with the customized Graphical User Interface (GUI) that allowed for intuitive handling of the SPS, the simulator could be efficiently used within a short time and its results understood by all project team members, even by those not involved in the simulator definition activities.

2.1.3.2. Use Cases

With its key capability to simulate the spacecraft's behaviour in operation by providing parameter evolutions over mission time, the SPS emerged to be a system simulator in support of systems engineering activities requiring system overview. In more detail, the major use cases of the SPS are pictured in the following.

Support of Design Decisions, Establishment of a Balanced Design and Design Validation against Mission Objectives
The SPS effectively supported trade-offs prior to design decisions and their verification against the mission objectives with its ability to simulate the evolution of key design parameters over the course of a specified time frame. For example, the best spacecraft side for the solar array panel in terms of power and geometrical constraints was determined with support of the SPS [46]. Design alternatives and their impact on the system performance could be quickly checked as the configuration interface allowed facile design parameter changes. This allowed to identify parameter sets for a balanced and robust design and to base design decisions on realistic data rather than on worst case assumptions.

On the way to a balanced design, the SPS, with its system comprehensive models and interrelations, helped to handle conflicting requirements that could only be solved by multidiscipline simulations. For example, the SPS supported the definition of a steering algorithm to compromise between the need for a vertical orientation of the solar panel for high power generation and the necessity of a horizontal panel orientation for low cell temperatures [41]. It also supported the identification of the geometrical constraints induced on the antenna line of sight to the Earth by the solar array movement [41].

In support of the system design validation against the mission objectives, the SPS allowed to verify the spacecraft performance against the requirements. By analysing the parameter evolutions over time, sudden parameter changes could be revealed, and conflicts between the subsystems, criticalities and risks could be detected and solved at an early development stage. With its ability to read in the instrument operation scenarios - comprising information on the mode-dependent instrument power consumption and data production rate - and simulate the system performance under the given scenario, the SPS helped to demonstrate adequate resource allocation of the spacecraft design [46].
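As a toy illustration of such a scenario-driven resource check, the following sketch accumulates power and data figures over an invented scenario. The mode names, budget values and scenario format are hypothetical and differ from the actual SPS input files.

```python
MODES = {  # mode -> (power consumption [W], data production rate [Mbit/s])
    "standby": (5.0, 0.0),
    "imaging": (40.0, 2.0),
    "downlink": (15.0, 0.0),
}

def check_scenario(scenario, power_budget_w=50.0, data_budget_mbit=10000.0):
    """Return (peak power, total data volume, compliant?) for a scenario
    given as a list of (mode, duration in seconds) entries."""
    peak_power_w = 0.0
    total_data_mbit = 0.0
    for mode, duration_s in scenario:
        power_w, rate_mbit_s = MODES[mode]
        peak_power_w = max(peak_power_w, power_w)
        total_data_mbit += rate_mbit_s * duration_s
    compliant = peak_power_w <= power_budget_w and total_data_mbit <= data_budget_mbit
    return peak_power_w, total_data_mbit, compliant

scenario = [("standby", 3000), ("imaging", 1800), ("downlink", 1200)]
peak_w, data_mbit, ok = check_scenario(scenario)
print(f"peak power {peak_w} W, data volume {data_mbit} Mbit, compliant: {ok}")
```

A real simulator would evolve these figures over mission time rather than aggregate them, but the principle of validating a scenario against fixed resource budgets is the same.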

Support of Effective Discipline Specific Analysis
Along the mission, the discipline-specific tools and simulators and the complementary SPS have been benefiting from each other. The SPS has been validated with the simulation results of the highly sophisticated discipline-specific tools. In turn, the SPS allowed to efficiently trigger the costly discipline-specific simulation for the detailed analysis of critical events like a high temperature gradient. For this, the SPS was used to identify the critical time instances in the mission timeline. The run time of the subsequent detailed simulations on the resource-consuming discipline-specific tools could be narrowed down to a shortened simulation time span enveloping the critical event. This staggered analysis approach raised the overall efficiency of the analysis work.
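The staggered analysis idea can be sketched as a scan of a coarse simulation output for the first threshold violation, returning a short window for the detailed simulation. The data, threshold and window margin below are invented for illustration and do not reflect the actual SPS or the discipline-specific tools.

```python
def find_critical_window(times, temps, max_gradient, margin):
    """Return (start, end) of a window enveloping the first step whose
    temperature gradient exceeds max_gradient, or None if none occurs."""
    for (t0, x0), (t1, x1) in zip(zip(times, temps), zip(times[1:], temps[1:])):
        if abs(x1 - x0) / (t1 - t0) > max_gradient:
            # Envelope the critical step with a safety margin on both sides.
            return max(times[0], t0 - margin), min(times[-1], t1 + margin)
    return None

# Coarse run: hourly temperature samples over one mission day (values invented,
# with a sharp rise between hours 9 and 10).
times = list(range(0, 25))                      # hours
temps = [20 + (15 if 10 <= t <= 12 else 0) for t in times]
window = find_critical_window(times, temps, max_gradient=10.0, margin=1)
print("detailed simulation window [h]:", window)   # (8, 11)
```

The expensive discipline-specific tool would then only simulate the returned three-hour window instead of the full day.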

Support of Science Operations and its Planning
ESA has been using its Experimental Planning System (EPS) to set up operation schedules for the instruments on board. The operation of the instruments depends on specific events, e.g. the transition from eclipse to the sunlit fraction of the orbit, that trigger instrument modes. The EPS uses as input a dedicated file listing these events of interest. Prior to the SPS, the event file was created with a significant amount of manual, yet not effective, effort. With the SPS, which comprised the required models to compute the occurrence of the events of interest, the event files could be generated by simulator computation, easing the work on the operations planning. [41] With the feature of the SPS to read in the operation scenarios, different scenarios could be easily tested and evaluated with regard to their scientific outcome. As such, the operational schedule could be improved and the scientific outcome of the mission increased. [46]
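A minimal sketch of this event extraction could look as follows: the simulator's illumination model yields a sunlit flag per time step, and mode-triggering transitions are extracted as event records. The event names, timeline and output format are assumptions for illustration; the actual EPS interface file format is not described here.

```python
def extract_events(times, sunlit):
    """Detect eclipse entry/exit transitions in a simulated timeline given as
    parallel lists of time steps and boolean sunlit flags."""
    events = []
    for t, before, after in zip(times[1:], sunlit, sunlit[1:]):
        if before and not after:
            events.append((t, "ECLIPSE_ENTRY"))
        elif after and not before:
            events.append((t, "ECLIPSE_EXIT"))
    return events

# Invented timeline: sunlit flag over ten time steps.
times = list(range(10))
sunlit = [True, True, False, False, False, True, True, True, False, False]
lines = [f"{t:05d} {name}" for t, name in extract_events(times, sunlit)]
print("\n".join(lines))  # one event per line, ready to be written to a file
```

Replacing the manual compilation of such a file with a computation over the simulated timeline is what eased the operations planning work.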

Support of Project Communication and Knowledge Transfer
Covering several disciplines, the SPS has been used in support of the team communication and enhanced collaboration. With the simulator at hand, the team has been able to discuss system tasks across disciplines by enabling the discipline externals to understand and assess the problems quickly. The visualization of

data in plots or of the spacecraft moving in orbit around Mercury supported this process. The systems engineer as primary user of the simulator has been able to detect problems and guide the experts accordingly, identifying the disciplines that would need to work closely together to solve multiphysics problems. The SPS has been used over several phases, maturing and growing with the mission and spacecraft in parallel, at any stage reflecting the current system design and the corresponding data. As such, it served as knowledge basin, storing the system and mission relevant data and being a means to pass it from team member to team member, from phase to phase, from Prime to Customer, allowing quick familiarization with mission and spacecraft.

2.1.3.3. Structure

The modules and functionalities of the SPS were set up according to the use cases identified in the course of the mission and have been continuously evolving, with the exception of a largely unchanged structure. Figure 2.1 pictures the latter, including its modules and interfaces. The Solar System model and the mathematical library are generic, mission-independent parts of the simulator. They build the simulator framework and mathematical core of the SPS. They are complemented by the mission-specific modules that incorporate the model specifications by the Prime. They cover orbit and subsystem specific models and an interface to the external trajectory and spacecraft geometry data required for the simulation. According to the evolving needs of the SPS users and the mission progress, the mission-specific core has been continuously updated. The user interface allows for the control of the simulator. The simulation data and the mission and spacecraft model parameters are configurable via files or a user-friendly configuration mask. Figure 2.2 presents the Graphical User Interface (GUI) that allows intuitive handling of the simulator. The simulation data is provided in three formats: in plots and tables within the GUI, in a 3D-visualization of the spacecraft in its orbit within the GUI, and recorded in separate text files that can be archived and post-processed by third party tools like MATLAB or MS Excel. Finally, the EPS of ESA is interfaced to the SPS.

More details on the models, functionality and handling of the SPS are given by Anklam et al. [41]. A significant number of internal documents, e.g. Architectural Design Document [47], User Manual [48], are regularly updated and provide further details.

2.1.4. Evaluation of Science Payload Simulator Usage

The SPS as system simulator in support of systems engineering activities has been appreciated by the industrial team, the mission-involved scientists and ESA as an added value to the project, complementing the set of highly sophisticated tools and simulators specialized to isolated aspects of the system design. Putting SM&S into practice, the SPS has been supporting the mission tasks required to be solved at system level by simulating the spacecraft performance multiphysically and anticipating its emergent operational behaviour. With the simulation results, it has been possible to solve conflicting requirements, to balance the system design, to validate the system design against the mission objectives and to demonstrate its compliance. In addition, the SPS supported the improvement of the scientific outcome of the mission through its operations planning capabilities. The short response times of the simulator implementer VECTRONIC Aerospace GmbH, e.g. the correction of simulator inconsistencies within hours or major scope updates within days, allowed

for the smooth integration of the simulator usage into the spacecraft development activities. The continuous usage of the simulator along the mission, including regular updates and adaptations to the varying requirements, made the SPS valuable as a basin of system data and a copy of the current mission and system design status.

Figure 2.1.: Structure and Interfaces of Science Payload Simulator according to Anklam et al. [41].


Figure 2.2.: Screenshot of Default Graphical User Interface of Science Payload Simulator, Version 6.4.11, 31 January 2012.


2.2. Science Payload Simulator Success Factors - User-Centred Design Approach and Resulting Focus on Systems Engineering Supportive Functionalities

Simulators are usually employed for a limited duration of the mission, see Section 1.3.2. So with regard to the continuous, successful usage of the SPS within the BepiColombo project over several phases since 2001, the question may arise what distinguishes the SPS from other simulators.

The major difference revealed is its user-centred development approach. Putting the user in the focus of all simulator development activities leads to a product that is fully tailored to the particular needs of its users. In that context, the peculiar characteristics of the SPS described in Section 2.1.3.1 explicitly represent the user needs and expectations towards a useful simulator. This is true for:

• the structure and handling of the SPS: the SPS is lean, easy to familiarize with and simple to handle to comply with the tight project schedule that among others limits the time for simulator familiarization.

• its development philosophy: the SPS is adaptive and evolves along the project to make allowance for the emerging needs of its users; the models are self-specified, using the project-specific terms, and thus transparent, allowing for quick familiarization and correct configuration of the models within their range of validity and providing confidence in its functionalities.

• and its scope: the SPS covers several disciplines and their interrelations, largely modelled at shallow depth, to allow for the handling of challenges that need to be assessed at system level.

Being able to simulate the spacecraft in operation in almost its entire technical width allows to tackle a variety of systems engineering duties and responsibilities. It is specifically the modelling of discipline interrelations, expressed as parameter interdependencies, that is a unique characteristic of the SPS in comparison to other products and consequently renders it successful. With the SPS, the users assess, understand and manage the interrelations of the system elements and eventually achieve the desired system behaviour by eased trading of different parameter configurations. Differences in the resulting system performance allow to derive the nature and magnitude of single parameters' impact and consequently provide a feeling for the system, its drivers and its bottlenecks.

The following sections will provide more insight into the UCD approach and discuss the significance of parameter interdependencies in view of the successful execution of systems engineering tasks within a project.

2.2.1. User-Centred Design and Usability

The User-Centred Design (UCD) approach is based on the idea to position the user of a product with his contextual needs in the centre of the product life cycle. Consequently, the product is adapted to the user needs and the context of use in every product stage [49].

The UCD process is to be applied several times during the planning, design and development stage of the product [49, 50]. Inspired by the human-centred design process of interactive systems, the user-centred development process is composed of the four iterative steps


Step 1: Specification of the context of use,

Step 2: Specification of requirements,

Step 3: Development of design solutions and

Step 4: Evaluation of design,

[50], see also Fig. 2.3.

In the context of simulator development, Step 1 specifies the current phase of the mission the simulator shall be used within, the involved time and budget constraints, the overall mission context, and the potential user of the simulator, i.e. his role and tasks within the project, his software and tool related skills and his former experience with tools and simulators. According to Eason [51], several user types have to be distinguished and determined. Primary users are the main users of the product in question. Secondary users employ the product only occasionally. Tertiary users are affected by the use of the product indirectly. This study mainly considers primary and secondary users. In Step 2, the user's expectations of the simulator are specified, comprising the technical scope and its user interface. In Step 3, the requirements defined in Step 2 are implemented into a solution, while in Step 4 the solution is evaluated against the specified requirements.

Depending on the progress of the development activities and the general approach to the topic, the outcome of Step 2 might range from a rough definition of possible use cases and in- and output formats of the simulator to a finalized and highly detailed specification with mathematically detailed spacecraft models. Accordingly, the outcomes of Step 3, which are evaluated in Step 4, might range from first simulator models in response to the use cases outlined in Step 2 up to fully executable simulator versions. The iterative development is restarted in case of new functionalities or a changing context, e.g. new project constraints, new users, etc.

The SPS was, fortuitously, created according to the UCD process, and it is possible to retrospectively map the UCD design steps to its development. Section 2.1.2 represents the UCD Steps 1 and 2, while Section 2.1.3 details UCD Step 3 and Section 2.1.4 outlines UCD Step 4.
Steps 2 and 3 have been repeated several times since 2001 due to the evolving needs of the BepiColombo team. In this respect, functional scope and GUI of the SPS evolved continuously whereas the fundamental requirements like the short response time by the SPS programmer remained unchanged.

The UCD concept has been enjoying increasing application and success [52] since it was first mentioned in the early 1980s [53, 54, 55]. The development approach is applicable to a variety of products, comprising hardware and software, and is related to the central notion of usability that will be detailed in the next section. With the rising importance of the World Wide Web and the invention of mobile devices like smartphones and tablets, the popularity of UCD is nowadays particularly given for software-based products like websites or mobile applications, cf. e.g. [56, 57, 58] that give instructions on how to best implement usability in the given circumstances, for product design improvement [59], for measuring usability (or the related user experience as expression of fulfilled usability) [60, 61], or for the usability of games [62]. Usability is also an emerging research topic, leading to the establishment of research institutions, university chairs and companies offering consulting and usability related trainings, cf. [63, 64, 65, 66, 67].


Figure 2.3.: Interaction of Human-Centred Design Activities [50, Fig. 1].

Definition of User-Centred Design Key Notions

Systems imply a certain degree of quality. In the standard ISO 9000 on quality management systems, quality is defined as the "degree to which a set of inherent characteristics fulfils requirements" [68]. The standard ISO/IEC 25010 on systems and software quality requirements and evaluation defines the quality of a system to be "the degree to which the system satisfies the stated and implied needs of its various stakeholders, and thus provides value" [69]. In the standard ISO 8402 on quality management and quality assurance, quality is defined to be the "totality of characteristics of an entity that bear on its ability to satisfy stated or implied needs" [70].

In the context of software products, the standard ISO/IEC 9126-1 on software quality [71] - meanwhile replaced by the standard ISO/IEC 25010 [69] - introduced the notion of quality in use, which encompasses more than quality. It is the user’s view of quality [72] and describes the quality of a product as experienced by its user during usage. Bevan defines it as "the extent to which a product satisfies stated and implied needs when used under stated conditions" [73]. It is the quality in use that the product user perceives and employs to evaluate the product quality. Consequently, quality is context and user dependent. Achieving quality in use requires the application of the UCD approach [72].

The central and widespread term in the context of UCD is usability. The achievement of a specific usability for the user is the overall objective of performing UCD. Usability "lies in the interaction of the user with the product or system" [74]. It is measured in terms of the result of using the product rather than properties of the product itself, expressing the extent to which users can achieve their goals with efficiency, effectiveness and satisfaction [50]. Increasing the usability of a product means matching it more closely to the user needs. Thus, similar to quality in use, its definition depends on the context it is used in and on its user, and it is not generalizable. For a simulator, usability can mean increasing its ease of use and/or implementing new models, aiming at better handling as well as a more suitable scope and better functionalities. German translations for usability comprise Gebrauchstauglichkeit (engl.: appropriateness of use),


Nutzerfreundlichkeit (engl.: user-friendliness), Funktionalität (engl.: functionality) and Bedienbarkeit (engl.: operability). The manifold translations demonstrate the diversity of the notion as well as its vague definition. Table 2.1 provides several definitions as found in literature. They express the inherent dependence of the meaning on the user and/or the context of use.

Table 2.1.: Exemplary Definitions of Usability.

• "Degree to which a product or system can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use." [69]

• Usability is a characteristic of software quality. It is the attribute of a software product to be tailored to the requirements of its end users. [75]

• "For a software product, usability is the user’s view of software quality." [74]
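The decomposition of usability into effectiveness, efficiency and satisfaction as cited from [50] and [69] can be illustrated with a minimal sketch. The metric definitions below follow a common interpretation (task completion rate, completed tasks per unit time, mean questionnaire rating), and all names and numbers are invented for illustration; they are not part of the TSQM.

```python
# Illustrative usability metrics in the sense of ISO 9241-11 / [50].
# All figures below stem from a hypothetical user-test session.

def effectiveness(completed_tasks, attempted_tasks):
    """Fraction of attempted tasks completed successfully."""
    return completed_tasks / attempted_tasks

def efficiency(completed_tasks, total_time_s):
    """Completed tasks per minute of test time."""
    return completed_tasks / (total_time_s / 60.0)

def mean_satisfaction(ratings):
    """Mean of user satisfaction ratings, e.g. on a 1-5 Likert scale."""
    return sum(ratings) / len(ratings)

# Hypothetical session: 8 of 10 tasks completed in 20 minutes
eff = effectiveness(8, 10)              # 0.8
thr = efficiency(8, 1200)               # 0.4 tasks per minute
sat = mean_satisfaction([4, 5, 3, 4])   # 4.0
```

Note that such numbers only become meaningful when the tasks, users and context of use are specified, in line with the context dependence of usability stressed above.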

An often encountered synonym for UCD is usability engineering, addressing the execution of a number of activities to achieve a specific usability. Based on the central notion of usability, further expressions have emerged, each focusing on a specific aspect of the usability approach, like usability tests, usability consulting, usability methods, etc., to name a few. Depending on the professional context, different usability notions have become common currency [76, 77].

In contrast to the given standards on software quality that define usability as a subset of quality in use [69], quality in use and usability are understood as synonyms in the frame of this work: the higher the usability of a product, the higher its quality in use, and vice versa. This approach is also used by Bevan [73].

Within this thesis, the usability of a simulator or tool is defined as the key indicator for its success within a project. Accordingly, a new quality model was specified in this work that defines usability by a set of quality criteria. The Tool and Simulator Quality Model (TSQM) defines that the degree of usability a simulator or tool might achieve depends on the level of achievement of the quality criteria and their importance. This breakdown structure is similar to the standardized approach cited in the standard ISO/IEC 25010 [69] that defines quality with a set of characteristics. In line with the UCD approach, the quality criteria and their importance were specified through inquiries of potential simulator users to integrate the users’ emotions and opinions in the TSQM definition. More details on the TSQM are presented in Chapter 4.
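The breakdown idea behind such a quality model - usability as a function of per-criterion achievement weighted by importance - can be sketched as a weighted mean. The criterion names, weights and scores below are invented for illustration; the actual TSQM criteria and their importance are derived from user inquiries, as presented in Chapter 4.

```python
# Sketch of a weighted-criteria quality score. This is one plausible
# aggregation rule, not the TSQM's actual definition.

def usability_score(achievement, importance):
    """Weighted mean of per-criterion achievement (0..1), weighted by importance."""
    total_weight = sum(importance.values())
    return sum(achievement[c] * importance[c] for c in achievement) / total_weight

# Invented example criteria and values:
achievement = {"ease of use": 0.9, "transparency": 0.7, "response time": 0.8}
importance  = {"ease of use": 3.0, "transparency": 1.0, "response time": 2.0}

score = usability_score(achievement, importance)  # (2.7 + 0.7 + 1.6) / 6.0
```

The aggregation makes the trade-off explicit: a criterion with low importance can score poorly without dominating the overall usability indicator.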

Another emerging notion related to UCD and usability is User Experience (UX). It "encompasses all aspects of the end-user’s interaction with the company, its services, and its products" [78]. In the standard ISO 9241-210 on ergonomic aspects of human-centred design for interactive systems it is defined as the "person’s perceptions and responses resulting from the use and/or anticipated use of a product, system or service" [50]. UX covers all aspects of the human-computer interface, including the user’s emotions and opinions that occur before, during and after the use of the product. UX is therefore dynamic and might change over time. Within the frame of this work, usability is understood to cover the emotions and opinions of the users, as the definition of the TSQM in Chapter 4 will demonstrate. These emotions result from the actual usage of the product and former experience. Therefore, usability is considered not as a synonym for but as a part of UX.


2.2.2. Systems Engineering and Importance of Parameter Interdependencies

2.2.2.1. Systems Engineering Fundamentals

As basis for the subsequent explanations, the notion of system and its characteristics are discussed. Literature [2, 4, 8, 79, 80, 81] reveals the following commonly cited characteristics of a system:

• A system consists of elements.

• These elements are interconnected.

• The co-operation of the interconnected elements leads to an emergent behaviour of the system.

• The emergent system behaviour ideally supports the fulfilment of the system’s objective(s).

Open systems are differentiated from closed systems by their interaction with other systems in a shared environment and their resulting behaviour that is affected by emergence and complexity [4]. Engineered systems are a specific type of open systems that have a purpose and contain technical elements [4]. They may in addition contain social, natural and abstract elements. They usually have a life cycle and are operated in a complex and dynamic environment.

Open systems have boundaries that separate them from their environment. In that context, a system is defined to have internal and external interfaces. The boundaries of the system against its environment represent the external interfaces. Internal interfaces are defined by the interaction between the system elements. Depending on the perspective on the system, interfaces might be evaluated to be internal or external [4]. Through system decomposition and the inherent change of perspective, system elements might themselves be considered as systems with their own internal and external interfaces. System decomposition can be performed several times, thereby producing several system levels.
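The perspective dependence of internal versus external interfaces can be made concrete with a small sketch: the same interaction is classified differently depending on which elements are drawn inside the system boundary. The element and link names below are invented for illustration.

```python
# Sketch: classifying interfaces relative to a chosen system boundary.
# A link is internal if both ends lie inside the boundary, external if
# exactly one end does. Elements and links are invented examples.

links = [("AOCS", "OBC"), ("OBC", "Payload"), ("AOCS", "Sun")]

def classify(links, boundary):
    """Split links into (internal, external) for a given boundary set."""
    internal = [l for l in links if l[0] in boundary and l[1] in boundary]
    external = [l for l in links if (l[0] in boundary) != (l[1] in boundary)]
    return internal, external

# Spacecraft-level view: the Sun belongs to the environment, so the
# AOCS-Sun interaction is an external interface.
internal, external = classify(links, {"AOCS", "OBC", "Payload"})

# Decomposed view: with only the AOCS inside the boundary, the AOCS-OBC
# link becomes external - the change of perspective described above.
aocs_internal, aocs_external = classify(links, {"AOCS"})
```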

The system structure defines the allowable interrelations between the system elements. The system behaviour is the result of the interaction of system elements with the system environment and the underlying system structure [4]. Emergence is a characteristic of the system’s behaviour [4]: the system elements together produce a result or a behaviour that is not obtainable by the elements alone [80, p.23]. Literature defines a variety of emergence types. Depending on the observer, unexpected emergence is differentiated from unpredictable emergence [82, p.194]. The first describes emergence that was actually predictable but was missed. The second describes genuinely unpredictable emergence that, on the downside, has limited value for engineering.

Unexpected emergence has to be controlled to be exploited, i.e. to avoid system failures and benefit from synergies. To the extent that the emergent system behaviour cannot be predicted from the knowledge about the individual elements’ characteristics, the sole optimization of single system elements does not necessarily lead to the intended system behaviour. Emergence is manageable with (design) iteration [83], supported by M&S. Adequate mathematical models and the control of component and subsystem variability in design and operation are the prerequisites for sound emergence exploitation [84].

The emergent behaviour of a system is a result of its complexity [4, 85, 86, 87]. Complexity is defined as the degree of difficulty in predicting the properties of a system if the system element properties are known [87]. In this

respect, complexity depends on the number of elements and their interrelations [87] and is expressed as structural complexity [88]. The concept of dynamic complexity acknowledges the time-related aspect of complexity that becomes evident when the system is used in its environment [88].

Weaver [87] distinguishes problems of disorganized and organized complexity. Problems of disorganized complexity are characterized by a very large number of variables and can only be solved with probability theory. Problems of organized complexity deal with a considerable number of parameters that are interrelated in an organized way. They can be successfully treated with experimental and mathematical analytical methods.

Note: A complex system is not to be confused with a complicated system. Complicatedness is a characteristic of the system perception and as such subjective [89, p.16]. Complexity is a system characteristic and results in emergence.
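The statement that structural complexity grows with the number of elements and their interrelations [87, 88] can be made concrete with a deliberately simple indicator. The metric (elements plus relations) and the adjacency matrix below are invented for illustration and do not reproduce the measures of [88].

```python
# Sketch: a toy structural-complexity indicator that simply counts
# system elements and their undirected interrelations.

import numpy as np

def structural_complexity(adjacency):
    """Elements plus relations of a symmetric, zero-diagonal adjacency matrix."""
    n = adjacency.shape[0]                     # number of system elements
    m = int(np.count_nonzero(adjacency) / 2)   # number of undirected relations
    return n + m

# Invented 3-element system: element 0 interacts with elements 1 and 2.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])

c = structural_complexity(A)   # 3 elements + 2 relations = 5
```

Adding a single relation between elements 1 and 2 would raise the indicator by one; richer measures would additionally weight the strength and pattern of the relations.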

In the context of this work, the spacecraft is understood as an engineered system with organized complexity. Its performance is its emergent behaviour, determined by the system-internal interrelations and the exchange of the spacecraft with its environment. Consequently, the notion of systems engineering is understood in the context of definition, development and operation of the complex, multidisciplinary system "spacecraft".

2.2.2.2. Systems Engineering Characteristics and Importance within Space Projects

Systems Engineering (SE) is a systems approach to guide the engineering of engineered systems [4, 8]. The importance of SE for the success of a project lies in its complementing character to the various involved disciplines and its structured approach as an answer to the complexity and the entailed emergence of engineered systems. Honour [90] confirmed that an increasing level and quality of SE effort positively affects the cost and schedule compliance of a project, which is even more important in view of tightened engineering life cycles and rising competition [91]. SE is the art of getting the different system elements originating from various disciplines to work together and of exhibiting an emergent behaviour that is compliant with the requirements and fulfils the objectives. SE evolved out of the need to prevent catastrophic failures in space systems during the complex Apollo missions of the 1960s and 70s [92]. Nowadays, the necessity for SE is likewise - domain independently - driven by the need to avoid system failures in general.

SE has the two complementary sides of technical leadership and systems management, the "art" and "science" of systems engineering [3, 93, 94].

On the technical side, SE is about balance and complexity as mentioned in many SE definitions, cf. e.g. [1, 2, 8, 91, 92, 93, 95]. SE fills out the gaps between the system elements. It "bridges the traditional engineering disciplines" [8, p.4] and balances the system to obtain a solution compliant with the requirements. It acknowledges, manages and exploits the complexity and emergence of a system by identifying and pursuing the "big picture" [92] and communicating it. It is about interface management and leading the multidisciplinarity in the project, i.e. by identifying the technical issues that need to be solved across disciplines, by triggering the communication between the disciplines, and by guiding the work towards the common

system objective across the life cycle. In this respect, Muratore [96] defines robustness, growth capability, reasonableness, elegance and visibility as further "elements" that the technical leadership should embrace.

On the management side, SE organizes and leads the activities along a project that are necessary to obtain an integrated whole. The corresponding SE process ranges from international standards like ISO 15288 on system life cycle processes [97], via the general practice of SE [3, 4, 91], up to domain specific work. For the space sector these are for example the NASA Systems Engineering Processes and Requirements NPR 7123.1 [98], the NASA Systems Engineering Handbook [1] that provides guidance on how to implement the technical processes defined in [98], and the standard ECSS-E-ST-10 on system engineering general requirements in space engineering [2] with its supplements, which defines the SE process for ESA space projects. They all describe iterative and recursive problem solving processes, applied through the system’s life cycle to transform the requirements into a system solution. In very rough terms, the approach commonly consists of the identification of the system objectives, the specification of requirements, the creation of system design concept alternatives based on the identified requirements, the selection and realization of the best design based on trade-off analyses, the verification that the design is properly built and integrated, and the post-implementation assessment of how well the system meets the objectives, i.e. the validation of the system solution [1]. This approach is also in line with the V-model [99] that is traditionally employed to describe the development process of a product [100, 101]. The space sector in particular pursues the combination of a top-down system design process, consisting of the requirements specification and the design solution definition process recursively applied at increasing levels of product detail, and a converse bottom-up product realization process, composed of product implementation, verification and validation processes [1, 2].
This SE engine, which might also be interpreted as a V-model, is executed in each life cycle stage and so refines the system solution from a feasible concept via a top-level architecture and a design baseline to the final integrated product [1].

A successful systems engineer knows how to balance the art of technical leadership with the science of systems management within a project [102] to develop a system solution that meets performance, cost, schedule and risk objectives [3]. He understands, manages and responds to the complexity of the system and guides the SE process. He implements SE in view of the seven elements defined by Muratore [96] and guides the technical development as the person responsible for the "big picture". By applying the SE process he plans and manages the activities necessary for the specification, design, verification, operation and maintenance of the system in conformance with the customer’s requirements [2]. According to Ryschkewitsch et al. [93], he has strong leadership and communication skills as well as diverse technical knowledge, the ability to apply sound technical judgement and to make system-wide connections as a prerequisite to predict the emergent system behaviour and detect criticalities, to anticipate the effect of changes on the system behaviour and to set up and guide the multidisciplinary teams.

2.2.2.3. Importance of Parameter Interdependencies

As clarified in the preceding sections, a-priori knowledge about the system emergence is important and necessitates the SE discipline to obtain the desired result out of the engineering activities. Emergence is driven by the interdependencies of the system elements. Consequently, knowledge about them is a significant benefit for

the systems engineer to fulfil his tasks of seeing the big picture and guiding the activities accordingly. In detail, knowing the relations between the system elements allows for the identification of:

• system drivers, i.e. elements that intensely drive the system’s behaviour, in particular its performance, and therefore need to be specifically put in the focus of attention during the SE process,

• highly connected elements that cannot be designed or changed independently, and

• system bottlenecks in the system structure that arise from the elements and their interactions and that may need to be solved.

This information, in turn, allows the systems engineer to obtain a deep understanding of how the system works so that he can actively manage the interfaces for a balanced design that achieves the desired emergent behaviour [8].
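The identification of highly connected elements and candidate system drivers from the element relations can be sketched with a simple design structure matrix (DSM) analysis. The element names and couplings below are invented for illustration; real analyses would use project-specific matrices and richer connectivity measures.

```python
# Sketch: degree analysis of an invented design structure matrix (DSM).
# dsm[i][j] = 1 means element i influences element j.

import numpy as np

elements = ["power", "thermal", "AOCS", "payload", "structure"]
dsm = np.array([
    [0, 1, 1, 1, 0],   # power influences thermal, AOCS, payload
    [1, 0, 0, 1, 1],   # thermal influences power, payload, structure
    [0, 0, 0, 1, 0],   # AOCS influences payload
    [0, 1, 0, 0, 0],   # payload influences thermal
    [0, 0, 0, 0, 0],   # structure influences nothing here
])

# Total degree = outgoing + incoming couplings. A high degree flags an
# element that cannot be designed or changed in isolation.
degree = dsm.sum(axis=1) + dsm.sum(axis=0)
ranking = sorted(zip(elements, degree.tolist()), key=lambda e: -e[1])
```

In this invented example "thermal" tops the ranking, marking it as a candidate system driver deserving focused attention during the SE process.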

2.3. Systematization of Science Payload Simulator Analysis Findings - System Simulator Concept Definition

Based on the success of the SPS and its continuous and fruitful use within the BepiColombo mission, its novel development approach and resulting characteristics were systematized in the System Simulator Concept (SSC) that provides guidance for projects that envisage creating and applying a system simulator as support for SE activities. So with the definition of the SSC, a systematization has been created to test and demonstrate the transferability of the SPS success to missions beyond BepiColombo.

As framework for the simulator development activities, the SSC addresses the characteristics, the development approach, the context of use and the scope that constitute a system simulator like the SPS. The SSC outlines the mission specific, user-centred development of a lean, adaptive, transparent and easily manageable simulator with focus on the comprehensive implementation of system elements and their relations, to the extent that they are needed to anticipate the emergent system behaviour in support of the SE activities. This combination of UCD and SE is of the same tenor as for the SPS. Simulators built according to the SSC are system level tools (M&S Dimension 1, see Section 1.2). They can be self-made or bought (M&S Dimension 2), built from scratch or requiring customization (M&S Dimension 3). As the concept requires the direct involvement of the user in the development process, the simulators are never ready-to-use. Like the SPS, they have the potential to be used along the mission life cycle, covering several mission phases (M&S Dimension 4).

2.3.1. System Simulator Concept Definition and Validation Approach

Starting from the SPS experience in BepiColombo, the SSC was defined in an adapted UCD process with the following steps:

Step 1 - Identification and generalization of the SPS characteristics: analysis of the SPS as concrete design solution (UCD Step 3), its user-centred development approach and its characteristics leading to its success.


Step 2 - Identification and generalization of the needs and users that are satisfied with the SPS: reverse UCD identifying the SE related needs addressed by the SPS (UCD Step 2) and the context of use (UCD Step 1), i.e. constrained schedule and financial budget, challenging tasks arising from mission requirements that cannot be supported by the available tools and simulators, and the systems engineer as major affected party and potential main user.

Step 3 - SSC definition: derivation of the SSC as generalization of the analysed simulator approach realized in the BepiColombo mission.

Step 4 - Evaluation of the SSC in the unmanned spacecraft domain: justification of the concept against actual simulator supply and demand (UCD Step 4)

a. preliminary justification against needs of the Future Programs Department at Airbus Defence and Space GmbH at Friedrichshafen, Germany, identified by observations, experience and interviews with colleagues [40], in consideration of the current tool and simulator landscape (see Sections 1.2.3 and 1.3.2)

b. theoretical justification against the Tool and Simulator Quality Model (TSQM), see Section 4.5

c. practical justification against the needs of a space project by application of the SSC to ongoing missions (see Chapter 3). This delivers the ultimate proof for the validity of the SSC within the unmanned spacecraft domain: if the simulator created following the SSC proves to be useful, the SSC proves to be valid and successfully transferable.

2.3.2. System Simulator Concept

In the following, the SSC as framework for the system simulator development activities is detailed in its principles. Its application in a project requires tailoring of the guidelines to the mission and the project and is in this respect user-centred.

I. Simulator Characteristics Simulators following the SSC are lean, adaptive, transparent and easily manageable. Their implemented models and interfaces are user-defined. They focus on the simulation of the system instead of being limited to single disciplines. Their objective is the support of SE activities for engineered systems along their life cycle.

II. Simulator Context of Use Simulators following the SSC are intended for use by systems engineers as primary users and discipline experts as secondary users. The users are assumed to be diverse in education and career, with a fundamental astronautical background but no specific qualification or experience in M&S tools and simulators. A possible consequence of this user diversity is differing simulator requirements that ultimately have to be assessed with a user need analysis [54, 103].

Simulators following the SSC are employed in a highly dynamic project environment that is characterized by a tight schedule, not providing extensive room for simulator development or familiarization activities, and by changing mission challenges along the project. Depending on the addressed system life cycle stage, their tasks

may range from design and justification in earlier life cycle stages up to operations planning and analyses in case of requirement or design modifications in later stages, cf. the current employment of the SPS in BepiColombo’s Phase C/D [104].

To achieve the mentioned characteristics in the described context of use, the following implementation principles are advised in the SSC guideline.

IIIa. Simulator Development Approach The SSC calls for an iterative and user-centred development approach tailored to the context of use discussed above to achieve a high degree of satisfaction while keeping the potential barriers to simulator employment as low as possible. It implies:

• the continuous implementation of user needs while pursuing the evolving character of the mission challenges and allowing the simulator to evolve in parallel to the system design as adaptive solution,

• the exclusive and gradual implementation of required models and functionalities, omitting unneeded parts, resulting in a lean simulator that does not overburden its users,

• the implementation of user-defined content, in particular user-specified models, to obtain a transparent simulator, which leads to greater trust in and eased familiarization with the simulator, and

• the implementation of user-defined, clear and easily manageable user interfaces that allow easy familiarization with the simulator.

To fit into the dynamic context of use:

• the initial release of the simulator is performed in a reasonable amount of time that is adequate for the mission phase, and

• change requests are realized in an updated simulator version within short time.

To achieve that:

• the development activities, i.e. specification, implementation and validation of the simulator models, are streamlined to allow for the timely delivery and employment of the simulator within the project, e.g. with a focus on essential validation activities like health checks instead of extensive validation procedures, and

• short reaction times of the party implementing the simulator are essential.

IIIb. Simulator Scope As support of the SE activities within the project and complementary to the sophisticated discipline-specific simulators, the system simulator focuses comprehensively on the system. This is achieved by:

• the implementation of system elements originating from different disciplines (if required),

• the implementation of system element relationships, i.e. system cross-connections, and

• the prioritized implementation of fundamental but comprehensive system-level models instead of detailed but single-discipline modelling. Lower-level refinement of simulator models might become necessary in the course of the mission but is not performed in the first instance.


IIIc. Scheduling of SSC Implementation It is recommended to start the SSC implementation in the early life cycle stages to achieve the largest impact and benefit. A later start is also possible and beneficial but might involve catch-up work on the modelling and implementation.

2.3.3. Validity of System Simulator Concept for Engineering Domains Beyond Space

The SSC is applicable within all domains that design and produce engineered systems. It is not dependent on any specific tool and is general enough to be incorporated into many engineering domains. It is to be understood domain independently, except for the context of use that is driven by the domain characteristics. This may lead to a need to tailor the SSC prior to application in other domains - noting, however, that for many engineering domains in the competitive economy a dynamic project environment with a tight schedule is common, as in the space domain.

3. Transferring Simulator Success to Further Missions - System Simulator Concept Transfer Approach and Application Cases

The SSC transfer to missions beyond BepiColombo was performed by following the steps of the transfer approach presented in Section 3.1. Two missions were identified to test the SSC transfer. The first was ESA’s Large Observatory For X-ray Timing (LOFT), a LEO science mission studied in the Future Programs department at Airbus Defence and Space GmbH in Friedrichshafen, Germany. Its SSC transfer has been discussed by Nemetzade and Förstner [105] and is summarized in Section 3.2. The second test mission was ESA’s JUpiter ICy moon Explorer (JUICE), an interplanetary science mission realized by Airbus Defence and Space SAS in Toulouse, France. Its SSC transfer is presented in Section 3.3.

On top of the application of the SSC to the LOFT and JUICE missions, the SSC was applied mission independently and led to the General Mission and System Simulator (GMSS). The motivation for its creation and its set-up is described in Section 3.4.

3.1. Definition of System Simulator Concept Transfer Approach

Figure 3.1 shows the defined steps of the SSC transfer approach according to Nemetzade and Förstner [105], consisting of the identification of a suitable test mission (1), the identification, specification and implementation of the required system simulator scope (2), the actual use of the simulator (3) and the evaluation of the transfer (4). Steps 2 and 3 are expected to be executed iteratively.

To facilitate the search for a suitable test mission (Step 1), four selection criteria have been defined:

Criterion 1: The mission shall be challenging but preferably less complex than interplanetary missions.

Criterion 2: The presumed effort for the required system modelling shall be adequate for the given context.

Criterion 3: The project shall be preferably in an early design stage, i.e. Phase 0/A, to allow the potential consecutive evolution of the simulator in subsequent mission phases.

Criterion 4: The study/project team shall be open for the simulator trial.

The fulfilment of Criteria 1 to 3 is desirable rather than necessary, whereas the fulfilment of Criterion 4 is a necessity for the execution of the transfer.

Steps 2 and 3 of the transfer approach comprise the actual simulator development activities that follow the four UCD steps discussed in Section 2.2.1, i.e. the specification of the context of use (1), requirements specification (2), development of design solutions (3) and design evaluation (4).

Figure 3.1.: System Simulator Concept Transfer Approach [105].

Figure 3.2 details the activities performed in SSC Transfer Steps 2 and 3 and distinguishes the specification of the simulator from its subsequent technical implementation in a usable product. First, the context of use of the simulator is specified, i.e. the mission phase the simulator shall be used in, the involved time and budget constraints, the overall mission context, and the potential user of the simulator, including his role and tasks within the project, his software related skills and his former experience with tools and simulators. With the understanding of the user and his context, Step 2 specifies the use cases of the simulator following a mission challenge analysis, i.e. the identification of potential tasks a system simulator could support, thereby determining the technical scope of the simulator, and the requirements for the user interface, defining the user expectations in terms of handling the simulator. In Step 3, the simulator models as design solutions for the simulator use cases are defined and the look of the simulator interface to the user is specified. In Step 4, the model and interface specifications are evaluated against the use cases and the handling requirements. Potentially, iterations between Steps 3 and 4 are necessary before the simulator specification is finalized. Afterwards, the simulator is set up in Step 5. The delivered version is then evaluated against the specification and the identified use cases and handling requirements in Step 6. In case of inconsistencies, iterations might be necessary between Steps 5 and 6. The outcome of the process is a simulator that meets the user requirements. In case of newly emerging challenges that require new simulator functionalities, or of new project constraints, e.g. the transition to the next mission phase or a new user group, the simulator development process re-starts at Step 1.
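The two produce-and-evaluate loops of this process (specification in Steps 3 and 4, implementation in Steps 5 and 6) can be sketched as a generic iteration pattern. All functions and values below are hypothetical placeholders, not part of any actual SSC tooling.

```python
# Sketch: a generic produce/evaluate iteration, standing in for the
# Step 3<->4 and Step 5<->6 loops described above.

def iterate(produce, evaluate, max_iter=5):
    """Repeat produce/evaluate until the evaluation passes or max_iter is hit."""
    for attempt in range(1, max_iter + 1):
        candidate = produce(attempt)
        if evaluate(candidate):
            return candidate, attempt
    raise RuntimeError("no acceptable result within max_iter iterations")

# Toy stand-ins: a specification that is accepted on the 2nd iteration.
spec, n = iterate(produce=lambda i: f"spec-v{i}",
                  evaluate=lambda s: s.endswith("v2"))
```

The same pattern would then be applied a second time for the implementation loop, with the finalized specification serving as the evaluation reference.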


Figure 3.2.: SSC Transfer Approach Step 2 and 3: Simulator Development Process Following SSC, Based on UCD Approach [69].

3.2. Application Case LOFT - ESA’s Large Observatory for X-Ray Timing Mission

3.2.1. SSC Transfer Step 1 - Search and Decision for Test Mission

ESA’s Large Observatory For X-ray Timing (LOFT) mission was chosen for the first SSC transfer test as it met the four selection criteria defined in Section 3.1. The scientific mission was evaluated to be challenging but still less complex than interplanetary missions (Criterion 1). The level of difficulty in system modelling was thus judged to be moderate and the effort adequate for the given context (Criterion 2). At the time of the decision, LOFT was in its early design stage (Phase 0/A) (Criterion 3). Most importantly, the study team was open for the simulator trial (Criterion 4).

The actual work on the LOFT Simulator started in autumn of 2012 with the simulator definition work and was finalized with the end of LOFT’s Phase A in July 2013.

3.2.2. SSC Transfer Steps 2 and 3 - Simulator Realization and Use

Based on LOFT’s mission and system design, see Annex B.3, the dominant mission challenge to be supported by the LOFT Simulator was identified by the end of Phase 0 to be the validation of the system design against the mission objectives, in support of the systems engineer as primary user and the study team as

secondary user. In particular, the operational feasibility of the observation plan [106] with the proposed spacecraft design in terms of resource allocation had to be demonstrated. The system performance as result of the superposition of various contributors, however, was not easily predictable. The mission scenarios induced frequent spacecraft attitude changes which, in turn, would lead to a time variance in the evolution of the system performance contributors. Addressing this need for a dynamic and broad view on the system, the simulator was defined to run the observation plan and deliver the dynamic profiles of parameters of interest that would allow the assessment of the time variant behaviour of the system. For this, the simulator had to simulate the attitude changes of the spacecraft following the observation plan. In addition, the motion of the spacecraft in orbit had to be simulated to determine its position with regard to the Sun and Earth at any time instance. Both pieces of information combined composed the prerequisite for calculating the environmental conditions the spacecraft would be subjected to at any time instance, for example the incident Sun or the gravity gradient. With all this information fed into the spacecraft specific models, the simulator was able to output the evolution of selected parameters reflecting the performance of the system, like the stored angular momentum of the reaction wheels.
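One of the environmental contributors named above, the gravity gradient, enters such a simulation through the classical gravity-gradient torque tau = 3 mu / R^3 * (r_hat x (J r_hat)). The sketch below illustrates this standard formula only; the inertia tensor and orbit position are invented and the code is not taken from the LOFT Simulator.

```python
# Sketch: gravity-gradient torque on a rigid body at position r (ECI frame),
# tau = 3*mu/R^3 * r_hat x (J @ r_hat). Numbers below are invented.

import numpy as np

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter [m^3/s^2]

def gravity_gradient_torque(r_eci, inertia):
    """Gravity-gradient torque [N m] for inertia [kg m^2] at position r [m]."""
    R = np.linalg.norm(r_eci)
    r_hat = r_eci / R
    return 3.0 * MU_EARTH / R**3 * np.cross(r_hat, inertia @ r_hat)

J = np.diag([1200.0, 900.0, 600.0])   # invented principal inertia tensor
r = np.array([7.0e6, 0.0, 1.0e6])     # invented LEO position vector

tau = gravity_gradient_torque(r, J)   # on the order of 1e-4 N m here
```

Evaluating this torque along the attitude and orbit profiles of the observation plan yields exactly the kind of time-variant disturbance profile the simulator had to superpose with the other contributors.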

Further information on the implemented models, the operation and visual design of the LOFT Simulator has been introduced by Nemetzade and Förstner [105] and is detailed in Annex B.1, B.2 and B.3. The use cases of the LOFT Simulator are detailed in the following.

Use Case 1: Validation of the Attitude and Orbit Control System including Wheel Failure

Due to the high moment of inertia of the spacecraft in combination with the frequent attitude changes following the observation plan, the AOCS, in terms of the reaction wheel performance, came into the focus of the investigations first. With the LOFT Simulator, it was possible to see the effect of the superposition of the gravity gradient disturbance torque, the wheel desaturation by magnetorquers and the slew manoeuvres on the evolution of the reaction wheel momentum, while considering their time-varying character during the course of the observation plan. The results of the simulation run for the four-wheel configuration are presented in Fig. 3.3. As the simulator comprised a feature to switch wheels on or off during the observation plan run, the design robustness of the three- and four-wheel configurations of the spacecraft could be proven.
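A minimal sketch of how such a superposition can be integrated for a single wheel axis, assuming constant illustrative torque levels, a simple constant-rate magnetorquer desaturation and an invented capacity of 12 Nms (none of these values are taken from LOFT):

```python
def wheel_momentum_profile(torque_events, h_max=12.0, dt=1.0, t_end=600.0,
                           desat_rate=0.005, wheel_on=True):
    """Integrate the angular momentum stored in one reaction wheel axis.

    torque_events: list of (t_start, t_stop, torque [Nm]) the wheel absorbs
    (gravity-gradient plus slew torques). A constant magnetorquer
    desaturation rate [Nm] drives the momentum back towards zero. Returns
    the momentum history [Nms] and whether the (illustrative) capacity
    h_max was ever exceeded. wheel_on=False freezes the wheel, mimicking
    the switch-off feature used for the failure analysis.
    """
    h, history, saturated, t = 0.0, [], False, 0.0
    while t < t_end:
        if wheel_on:
            ext = sum(tq for (t0, t1, tq) in torque_events if t0 <= t < t1)
            desat = -desat_rate if h > 0 else (desat_rate if h < 0 else 0.0)
            h += (ext + desat) * dt
        history.append(h)
        saturated |= abs(h) > h_max
        t += dt
    return history, saturated
```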

Use Case 2: Observation Plan Analysis and Optimization

The LOFT Simulator helped to identify critical exceedances of the angular momentum capacity in the three-wheel failure analysis, which could have been solved by a simple but elegant small change to the observation plan (instead of a costly AOCS re-design): delaying the start of a slew manoeuvre by 2000 s made the slew coincide with a more favourable evolution of the gravity gradient disturbance torque and the desaturation torque, such that the wheel would not have been over-saturated. Figure 3.4 plots the critical slew manoeuvre at 2456424.7 days (Julian Date) prior to (a) and after (b) the observation plan tuning. Thus, even if in the three-wheel case the actuation strategy could not have been optimized, it would still have been possible to avoid reaction wheel saturation by a slight adaptation of the observation sequence. By means of the LOFT Simulator, the sizing and accommodation of the actuators were thus justified to be adequate and the operation according to the observation plan shown to be feasible without the need for optimized manoeuvre and actuation strategies.
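The effect of shifting a slew start can be reproduced with a toy momentum budget. The background torque profile, the slew parameters and the 500 s scan step below are invented for illustration; only the principle - delaying a slew until it coincides with a favourable background torque phase - mirrors the tuning described above:

```python
def peak_momentum(bg_torque, slew_start, slew_len=200.0, slew_torque=0.1,
                  dt=1.0, t_end=4000.0):
    """Peak |momentum| [Nms] when a slew of the given torque [Nm] is
    overlaid on a time-varying background torque profile bg_torque(t)."""
    h, peak, t = 0.0, 0.0, 0.0
    while t < t_end:
        tq = bg_torque(t)
        if slew_start <= t < slew_start + slew_len:
            tq += slew_torque
        h += tq * dt
        peak = max(peak, abs(h))
        t += dt
    return peak

def find_safe_delay(bg_torque, h_max, step=500.0, max_delay=3000.0):
    """Scan candidate slew start delays and return the first one that keeps
    the wheel below its momentum capacity h_max, or None."""
    delay = 0.0
    while delay <= max_delay:
        if peak_momentum(bg_torque, slew_start=delay) <= h_max:
            return delay
        delay += step
    return None
```

With a background profile that turns negative late in the window, an undelayed slew exceeds the capacity while a delayed one does not.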

Use Case 3: Validation of the Power Subsystem and Determination of Power-Driven Operational Constraints

Later on, the power subsystem also drew attention, as its performance was directly influenced by the spacecraft’s attitude through its body-fixed solar panels. The LOFT Simulator helped to analyse the actual evolution of the solar incidence angle during the course of the observation plan run and delivered a dynamic profile of the battery state of charge, replacing the conservative worst-case assumptions for the power evolution used for the design before. With the information about the evolution of power generation and storage, it was possible to prove the robustness of the power subsystem and to determine the power-driven limits of the instrument’s FoR, which led to a more elaborate spacecraft design with higher scientific outcome.
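A dynamic battery profile of this kind can be sketched with a simple energy balance. The power figures, step size and the cosine generation law are illustrative assumptions; the attitude-dependent incidence angle and the eclipse flag stand in for the corresponding simulator outputs:

```python
import math

def soc_profile(incidence_angle, in_eclipse, p_array=1800.0, p_load=1200.0,
                capacity_wh=2400.0, soc0=1.0, dt_s=60.0, n_steps=180):
    """Battery state-of-charge profile over an observation run.

    incidence_angle(k) -> solar incidence angle [rad] at step k (from the
    simulated attitude); in_eclipse(k) -> True when the Sun is occulted.
    Generation follows the cosine of the incidence angle; the state of
    charge is clamped to [0, 1]. Returns the SoC history.
    """
    soc, hist = soc0, []
    for k in range(n_steps):
        if in_eclipse(k):
            gen = 0.0
        else:
            gen = p_array * max(0.0, math.cos(incidence_angle(k)))
        soc += (gen - p_load) * (dt_s / 3600.0) / capacity_wh
        soc = min(1.0, max(0.0, soc))
        hist.append(soc)
    return hist
```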

Use Case 4: Determination of the Observation Availability Budget

Finally, the observation availability of the spacecraft, as an expression of the scientific performance of the mission, had to be demonstrated, with the Earth occultation of the targets and the time spent on slew manoeuvres between targets taken into account as limiting factors. For this purpose, the LOFT Simulator was extended to distinguish between the time for a slew manoeuvre between two targets, the target occultation times and the actual observation time during the run of the observation sequence, in order to establish an Observation Availability Budget.
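Such a budget reduces to a classification of simulation steps; a minimal sketch, with state names and step granularity chosen for illustration:

```python
def availability_budget(timeline):
    """Build an Observation Availability Budget from a simulation run.

    timeline: list with one state per simulation step, each entry being
    'slew', 'occulted' or 'observing'. Returns the share of total mission
    time spent in each state.
    """
    total = len(timeline)
    counts = {}
    for state in timeline:
        counts[state] = counts.get(state, 0) + 1
    return {state: count / total for state, count in counts.items()}
```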

Figure 3.3.: Evolution of Angular Momentum of LOFT’s Four Reaction Wheels per Wheel Axis over Mock Observation Plan; panels (a) to (d) show Reaction Wheels 1 to 4, cf. [106].



Figure 3.4.: Analysis of Reaction Wheel Failure: Zoom into Evolution of Angular Momentum Stored in One Reaction Wheel During Slew Manoeuvre Performed in Three-Wheel Configuration Prior (a) and After (b) Observation Plan Tuning [107].

The simulation confirmed the compliance of the spacecraft design with the mission objectives: the achieved observation availability of 57 % exceeded the required 40 %. The observation availability in case of a wheel failure (longer slews) was analysed to be 53 %. These additional 17 resp. 13 percentage points of observation availability beyond the requirement would have allowed the scientific outcome of the mission to be increased with supplementary observations in eclipse and in the sunlit phase.

Further Potential Simulator Use Cases - Increasing the Mission Scientific Outcome

Going beyond the use of the LOFT Simulator for design justification, the optimization of the observation plan with regard to the slew durations would have been a further potential simulator use case. Figure A.7 shows as a histogram the absolute frequency of all slew angles following the observation plan, with an average over all angles of 86.6°. An improved order of the targets called in the observation plan would have led to smaller slew angles and thus to a reduced total time spent on slew manoeuvres. Consequently, more mission time would have been available for actual target observation, increasing the scientific outcome of the mission.

3.2.3. SSC Transfer Step 4 - Evaluation of SSC Transfer

3.2.3.1. Key Benefits and Added Value of SSC Transfer for LOFT

The use of the LOFT Simulator added value to the feasibility phase of the mission by supporting sound systems engineering work and demonstrating the team’s comprehensive mission competence. The study team was enabled to demonstrate the conformance of the mission and system design with the mission objectives using a realistic operation scenario. With the information on the time-variant performance parameters and their superposition, the justification of the system design robustness (see for example the wheel failure analysis) did not rely solely on the worst-case analyses that normally accompany early phases. Going beyond the design validation task, the study team could demonstrate the potential of the proposed spacecraft design to increase the scientific outcome of the mission beyond the requirements. The team was able to define an extended instrument operation scenario with regard to power and demonstrated significant margin in the observation availability budget with the given design. Using the simulator allowed the active mastering of the mission and system design requirements expressed in the observation plan. Instead of only reacting to (operational) input with changes to the spacecraft design, the observation plan could be actively manipulated (see the simple but elegant solution of performing small changes to the observation plan in order to avoid exceeding the angular momentum capacity of the reaction wheels), enlarging the study team’s range of influence, strengthening its overview of the challenging mission and underlining its competence.

3.2.3.2. Acceptance of LOFT Simulator by Industrial Study Team and ESA

In the beginning, the industrial team signalled only cautious and conservative interest in the LOFT Simulator, as the SSC was widely unknown and misunderstood as a replacement for subsystem-specific tools. As the simulator proved useful, especially because it was able to handle and run the observation plan, the team’s confidence in it grew such that team members beyond the study manager started relying on it and using it for analyses. Correspondingly, new simulator features were requested, such as the power subsystem implementation, leading to several design iterations and a greater simulator scope. A continuation of the simulator development and use in the next mission phases for trade-offs, operations planning, identification of design driving cases, worst/best case investigations, knowledge transfer across phases, etc. would have been appreciated by the industrial study team, had LOFT been selected to become ESA’s M3 mission.

ESA as customer and the involved scientists acknowledged the creation and use of the LOFT Simulator during the feasibility phase. Feedback during the study phase’s final presentation at ESTEC, Noordwijk, The Netherlands, comprised in particular their interest in the simulator’s ability to manage and modify the observation plan. Again, it became clear that one of the strongest benefits of the simulator was that it enabled its user to master a great number of design-influencing factors and to take them into account in trade-offs. As such, the simulator allowed a comprehensive view on the mission and likewise the mastering of the mission and system constraints.

3.2.3.3. Evolution of Simulator Functionalities and Applications

According to the evolving needs of the study team, the scope and functionalities of the LOFT Simulator changed during the course of LOFT’s Phase A, similar to the continuous evolution of the SPS. As an example, models describing the power subsystem were implemented at a later stage of the phase, while in the beginning the focus of the simulator was put on the AOCS and the capability to run the observation plan. The potential benefits of a simulator for the power subsystem analysis were discovered when a first version of the simulator was already in use, and they inspired the study team to further use cases. Not only additional subsystem models but also extensions of the existing simulator functionalities were part and result of the iterations accompanying the simulator development activities. As soon as a new simulator version was available, new ideas arose that asked for further simulator functionalities, such as disabling single reaction wheels for failure analyses. Originally conceived as a simulator to analyse the designed system and validate its performance, the LOFT Simulator was soon also used to evaluate changes of the system design for system performance enhancement. For this reason, a number of design parameters were chosen to be modifiable.

The handling of the observation sequence evolved as well. At first, it was executed by the simulator strictly following the plan defined by van Damme [106]. The imposed distribution of the observations to calendar days, however, did not match the actual duration of a day: observations would have been disrupted by the change of a calendar day or would not have been day-filling. For a more effective usage of the mission time, the LOFT Simulator therefore also offered the execution of the observation sequence disregarding the day allocation. In response to the mission requirement to be able to extend operations beyond the observation sequence with observations in eclipse, the simulator was made able to differentiate whether a target had to be observed in eclipse or in the sunlit phase. The observation plan was modifiable from the beginning; the functionality allowing the addition of an unlimited number of targets to the sequence, however, was only made available later. In that context, the operation scenario was adapted correspondingly, i.e. in case observations were not finished at eclipse entry, they were simulated to be disrupted for the duration of the eclipse and continued directly after eclipse exit.
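The interruption logic - suspend an observation at eclipse entry and resume it directly after eclipse exit - can be sketched as follows; the data layout, time units and back-to-back scheduling are illustrative assumptions:

```python
def schedule_with_eclipse(obs_durations, eclipses):
    """Lay out observations back-to-back, suspending any observation that
    is still running at eclipse entry and resuming it directly after
    eclipse exit.

    obs_durations: required observation times [s]; eclipses: sorted list of
    (entry, exit) times [s]. Returns the (start, end) interval of each
    observation including any eclipse interruption.
    """
    t, out = 0.0, []
    for dur in obs_durations:
        start, remaining = t, dur
        for entry, exit_ in eclipses:
            if t < entry and t + remaining > entry:
                remaining -= entry - t   # observe up to eclipse entry
                t = exit_                # suspend through the eclipse
            elif entry <= t < exit_:
                t = exit_                # would start in eclipse: wait
        t += remaining
        out.append((start, t))
    return out
```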

3.2.3.4. Lessons Learned and Conclusion

In conclusion, the SSC transfer to LOFT was successful. The LOFT Simulator supported the study team in their work and added value by providing reliable and important analysis results that would not have been easily obtainable without it. As a systems engineering simulator, it focused on the multidiscipline aspects of the mission design and enabled the assessment of the time-variant performance of the spacecraft in operation. As such, it supported the holistic view on the mission. The implemented models stayed at a rather high level but proved to be sufficient for the intended use. The simulator supported the maximization of the scientific outcome of the mission. It helped to improve the on-board resource allocation, to substitute worst-case assumptions for design justification and to demonstrate the spacecraft’s potential to increase the overall observation availability. Being in a competitive situation, the simulator was a significant means to demonstrate the study team’s pre-eminence and proficiency in terms of control of the mission and system requirements, the compliance of the proposed mission and system design with the mission objectives and the team’s comprehensive view on the overall mission concept. The user-centred design of the simulator and its inherent flexibility to adapt to the evolving needs of its users proved again to be key to the success of a system simulator created according to the SSC. The most challenging aspect of the concept transfer was to gain the colleagues’ confidence in the novel simulator concept. Consequently, important factors for a successful concept transfer, i.e. "success ingredients", were found to be

• understanding of the cautious behaviour and reluctant attitude of the study team: in daily work, there is limited room/time for experimenting with new tools and simulators, and past experience with new tools and simulators might have been negative,

• endurance to advertise the simulator and convince the study team of its benefits,

• provision of heritage from the successful SSC use in a preceding mission to gain confidence,


• added value for the team members and the study, facilitating the work of the study team and proving their competence to the customer.

The key success factor of the SSC transfer is the simulator user himself. Not only is the simulator set up and adapted according to his needs; eventually, he has to have confidence in the simulator and accept its application. The identification, understanding and consideration of the user’s needs and his nature are essential for the success of the simulator development and use. Therefore, beyond its academic significance, the background work on user needs leading to the Tool and Simulator Quality Model (see Chapter 4) proved to be useful to accompany the SSC transfer adequately.

3.3. Application Case JUICE - ESA’s JUpiter ICy moon Explorer Mission

3.3.1. SSC Transfer Step 1 - Search and Decision for Test Mission

Following the fruitful creation and employment of the LOFT Simulator in that mission’s feasibility phase, ESA’s JUpiter ICy moon Explorer (JUICE) mission was selected in autumn 2013 as the next test mission to demonstrate the transferability of the SSC to missions as complex as BepiColombo and to confirm its beneficial usage. At that time, the mission was in the industrially competitive assessment phase (A/B1) and had already been selected as the first L-class mission in ESA’s Cosmic Vision program [108].

A first mission challenge analysis, combined with the experience from the beneficial operation of the simulators in BepiColombo and LOFT, led to the identification of potential use cases for the JUICE Simulator, which were presented to the JUICE project team at Airbus Defence and Space SAS in Toulouse, France. The simulator was suggested to be set up for the Ganymede science phase to support operations and resource planning and to demonstrate and validate the system performance. The suggestions met with approval and the JUICE team decided to apply the SSC within the project for this very purpose.

With regard to the four decision criteria defined in Section 3.1, the scientific mission was evaluated to be challenging and complex (Criterion 1), thus not meeting the decision criteria at first glance. However, the level of difficulty in system modelling was assessed to be moderate and the effort still adequate (Criterion 2) for a simulator covering one mission operational phase. Furthermore, JUICE was in its early design implementation stage (Phase B1) (Criterion 3), thus offering sufficient time for potential successive simulator development steps. Most importantly again, the project team was open to the trial (Criterion 4).

The activities on the JUICE Simulator as presented in this work started in autumn 2013 and were finalized in July 2014.

3.3.2. SSC Transfer Step 2 and 3 - Simulator Realization and Use

Based on JUICE’s mission and system design, see Annex D.2, the validation of the system design against the mission objectives was soon identified to be the dominant task to be assisted by the JUICE Simulator, in support of the systems engineer as primary user and the project team as secondary user - similar to the application of the SSC in the LOFT and BepiColombo missions. The JUICE Simulator was set up to support the analysis of the 100 days that the spacecraft would spend in a nearly circular orbit of 500 km altitude around Ganymede (GCO500) during its science phase. The validation had to be done against predefined operation scenarios with regard to resource allocation, i.e. power generation, storage and supply, angular momentum of the reaction wheels and mass memory capacity. The scenarios entailed regular spacecraft attitude changes between the science-driven pointing to Ganymede and the alignment of the antennas to the Earth for communication, as well as unsteady power consumption and data production caused by the varying operation of the instruments - all in all making the system performance, as the result of the superposition of the various contributors, time variant and hence not easily predictable. In answer to the need for a comprehensive view on the spacecraft in GCO500 for the analysis of the time-variant behaviour of the system, the JUICE Simulator was defined to run the operation scenario and simulate the evolution of selected design parameters, such as the stored angular momentum of the reaction wheels, during the course of the GCO500 science phase. For this purpose, the spacecraft’s frequent attitude changes, its motion in orbit and the environmental conditions had to be simulated, similar to the functionalities of the LOFT Simulator.

Further information on the implemented models, the operation and visual design of the JUICE Simulator has been introduced by Nemetzade and Förstner [105] and is detailed in Annex D.2. The use cases of the JUICE Simulator are detailed in the following.

Use Case 1: Validation of the Attitude and Orbit Control System including Wheel Failure

Due to the high moment of inertia of the spacecraft in combination with the required daily attitude changes between Science and Communication pointing, the AOCS, with regard to the reaction wheel performance, came into focus first. The JUICE Simulator simulated the effect of the superposition of the continuous compensation of the gravity gradient disturbance torque, the required execution of the slew manoeuvres and the yaw-steering as well as the desaturation by the thrusters on the evolution of the reaction wheel momentum. Also, the design robustness of the three- and four-wheel configurations could be demonstrated, as the simulator comprised a feature to deactivate wheels in the simulation.

Use Case 2: Validation of the Power Subsystem

For the validation of the power subsystem, the impact of the frequent attitude changes, the Jupiter eclipse phases as well as the stop of the yaw-steering for the reduction of microvibrations during scientific measurements had to be taken into account. The JUICE Simulator supported the analysis of the evolution of the solar incidence angle along the operation scenario and delivered a dynamic profile of the battery state of charge while considering the varying communication- and science-driven power consumption profile of the spacecraft, complementing the conservative worst-case analysis of the power subsystem. The adequacy of the solar array and battery dimensions was justified and the feasibility of the operation scenarios without the need for design adaptations was demonstrated.

Use Case 3: Validation of the Mass Memory and the Data Transmission Approach

For the data handling subsystem validation, the restricting influence of the weekly Jupiter occultation on the transmission time (in the worst case leading to a complete link loss for a day), the day-variable, large science data amount and the varying daily link volume had to be taken into consideration. The JUICE Simulator supported the demonstration of the conformance of the mass memory capacity and the foreseen daily downlink volume with the expected science and housekeeping data volume. It simulated the evolution of the free memory capacity along the operation scenarios and demonstrated the mass memory’s link loss compensation performance. The data production and transmission model implemented in the JUICE Simulator is presented in detail in Annex D.1.
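A toy version of such a memory bookkeeping, with invented daily volumes and an occultation modelled as a set of link-free days, illustrates how the mass memory buffers a link loss:

```python
def memory_profile(days, produced_per_day, link_per_day, capacity,
                   occulted_days=frozenset()):
    """Daily evolution of the free mass-memory capacity.

    produced_per_day: science plus housekeeping data generated each day;
    link_per_day: nominal daily downlink volume; on occulted days (e.g. the
    weekly Jupiter occultation) the link volume is lost and data accumulate
    on board. Units (e.g. Gbit) and values are illustrative. Returns the
    free-capacity history and whether the memory ever overflowed.
    """
    stored, free_hist, overflow = 0.0, [], False
    for day in range(days):
        stored += produced_per_day
        if day not in occulted_days:
            stored = max(0.0, stored - link_per_day)
        overflow |= stored > capacity
        free_hist.append(capacity - stored)
    return free_hist, overflow
```

With a daily link volume slightly above the production rate, a single occulted day is compensated over the following days; an undersized memory overflows instead.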

Further Potential Simulator Use Case: Scenario Adaptation - Increasing the Scientific Outcome of the Mission and Operations Planning

Going beyond the use of the JUICE Simulator for design justification, the improvement of the pre-defined science scenarios and operations planning in general would have been a further potential use case - similar to LOFT. The flexibility of the implemented scheduler allowed the modification of the instrument operation timeline to trade the instrument operation with regard to enhanced on-board resource employment. Investigations might have addressed the duration of the instrument operations and the assessment of whether instruments can be operated in parallel or sequentially and whether they can be switched multiple times between their modes. The resulting improved science scenario would have led to an increased scientific outcome of the mission.
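A flexible scheduler of this kind can be sketched as a timeline of instrument mode intervals; the instrument names, modes and power figures below are purely illustrative:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Instrument:
    name: str
    modes: dict  # mode name -> power draw [W]; illustrative values
    timeline: List[Tuple[float, float, str]] = field(default_factory=list)

    def operate(self, start, stop, mode):
        """Schedule this instrument in the given mode for [start, stop)."""
        self.timeline.append((start, stop, mode))

def power_at(instruments, t):
    """Total instrument power demand at time t for the scheduled timeline;
    overlapping intervals model parallel instrument operation."""
    return sum(instr.modes[mode]
               for instr in instruments
               for (t0, t1, mode) in instr.timeline if t0 <= t < t1)
```

Trading a scenario then amounts to editing the timelines and re-checking the resulting resource profiles against the budgets.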

3.3.3. SSC Transfer Step 4 - Evaluation of SSC Transfer

The JUICE Simulator was set up in close cooperation with the industrial team in Toulouse during the mission’s assessment phase. In contrast to the author’s involvement in LOFT, however, the work package for JUICE was limited to the simulator creation. Analyses were not asked for, in contrast to the case of LOFT, where the usage of the simulator in the study could be directly tracked and evaluated. Consequently, the evaluation of the simulator’s reception in JUICE is restricted to the feedback of the project team in Toulouse.

3.3.3.1. Key Benefits and Added Value of SSC Transfer for JUICE

The use of the JUICE Simulator for the GCO500 phase put the project team in the position to demonstrate the conformance of the mission and system design with the mission objectives and to prove its robustness without the need for worst-case analyses and with a comprehensive view on the system. Science scenarios were effectively evaluated and their criticalities identified. With the capability to change and run different operation scenarios, the potential of the proposed spacecraft design in view of an increased scientific outcome of the mission could be assessed while demonstrating the comprehensive mission competence of the team. With the simulator as a proficient means to manage, control and propose modifications to the mission input, i.e. the science scenario, the team enlarged their range of influence beyond the system design.

3.3.3.2. Acceptance of JUICE Simulator by Industrial Team and ESA

Due to the remote set-up, the contact with the JUICE project team was focused on a few members. The project leader decided to create a simulator for JUICE and strongly supported its prompt implementation. The systems engineering manager was the focal point for all technical simulator set-up activities. The AOCS and Power architects provided the necessary information to set up the respective subsystem models. Overall, several team members signalled diffident interest in the simulator; it cannot be judged in detail, however, whether the simulator met the approval of all team members. It is strongly assumed that the simulator did support and convince the team of its benefits, since it was decided to pursue the simulator development activities for Phase B2/C/D, see Section 3.3.3.3, with the support of ESA.

3.3.3.3. Evolution of Simulator Functionalities and Applications

The scope of the JUICE Simulator for Phase A/B1 was largely specified right from the start, followed by only a few small changes in the development phase. A major evolution step by the addition or detailing of subsystems, as observed for LOFT and the SPS, did not materialize for JUICE. This head start is traced back to the overview of all possible implementations of models and functionalities communicated right from the start of the discussions, which was based on the existing simulator solutions realized for LOFT and BepiColombo.

A large leap followed the delivery of the initial simulator version for Phase A/B1 when the project team decided to continue the simulator development activities. The JUICE Simulator was part of the prime proposal for Phases B2/C/D that was won by Airbus Defence and Space SAS in Toulouse in July 2015 [109]. Since contract signature in December 2015 [110], the simulator development activities have been supported by ESA, as is the case for BepiColombo. Three simulator development phases until summer 2017 were agreed with VECTRONIC Aerospace GmbH. Each phase comprised the request for new models, simulator functionalities and updates [111, 112]. To benefit from previous developments, simulator models from BepiColombo and LOFT have been re-used to the greatest possible extent. Currently, the simulator covers the complete cruise phase of the spacecraft, including the scientifically important fly-bys of Callisto and Europa. The power and mass memory models have been detailed. The power profiles of the instruments and the reaction wheels have now been implemented, and the mass memory capacity is sectioned and allocated to the instruments. The Scheduler has evolved into a dynamic simulator input: an arbitrary number of instruments with an arbitrary number of modes can now be added to the scenario, providing significant flexibility for the operations planning.

3.3.3.4. Lessons Learned and Conclusion

The decision of the team to continue the simulator development activities is evident proof that the JUICE Simulator has been appreciated as support for the design and validation work and has thus provided benefit to the project. Being in a competitive situation, the JUICE Simulator might have been a significant support in winning the contract for Phases B2/C/D, as a means to demonstrate the study team’s pre-eminence in terms of control of the mission and system requirements, the compliance of the proposed mission and system design with the mission objectives and the comprehensive view on the overall mission concept. The transfer of the SSC is evaluated to be a significant success, and its employment for JUICE beyond BepiColombo and LOFT confirms its utility and transferability. The user-centred design of the simulator and the team’s willingness to create and use it proved again to be key to the success of the SSC.


3.4. Beyond Specific Missions - the General Mission and System Simulator

Resulting from the experience with the simulators in BepiColombo, LOFT and JUICE, combined with the insights gained in several studies, it has been observed that the majority of the design questions recur in many missions, followed by technical solutions that resemble each other. Although every mission will come along with challenges that are mission specific and need to be individually supported by tools and simulators, there are also activities to be performed and decisions to be made that are mission independent to a certain extent. Those can be supported by a generic tool. The General Mission and System Simulator (GMSS) was set up for this very purpose following the SSC. Based on the functionalities of the LOFT Simulator, the GMSS is conceived to support the design, analysis and validation activities of Earth-bound missions in their early development phases. The focus was put on Earth-bound missions as they are responsible for a significant part of the activities of the Future Programs Department. In the following, the generalizability of missions as a prerequisite for the usability of a generic tool will be assessed, followed by the description of the GMSS functionalities and the evaluation of the SSC transfer. The application of SSC transfer Steps 1 and 2 is covered in Sections 3.4.1 and 3.4.2. Step 3 is skipped as the GMSS was not applied to a concrete study.

3.4.1. Motivation for Creating GMSS - Generalizability of Missions

Nearly every satellite is unique in its detailed realization and thus not comparable to serial technical products like cars. Nevertheless, similarities do exist in their development and technical realization. All space missions pursue an objective, be it commercial or scientific, to fulfil the expectations of their stakeholders. In unmanned missions, one or several payloads are accommodated on a spacecraft to fulfil the mission objective(s). Realization, aim and shape of these payloads might differ, but they are all complemented by a spacecraft platform or service module that allows the payload to be operated. Platforms and service modules, respectively, have similar functional tasks to fulfil, which they do with technically similar subsystems. The similarity of the technical subsystems is not necessarily given in the technical details but on the system level of the technical solution. For example, solar arrays and batteries usually assure the power supply of satellites in the inner solar system. Mission specifics come into play when choosing the number of solar cells or defining the battery capacity.

Nemetzade and Förstner [113] identify the mission-common functional tasks to encompass, for example, the provision of the right spacecraft attitude for the payload operation and for TM transmission and TC reception, an adequate power supply and the right thermal environment for the payload and the spacecraft to be operated. Pirzkall [114] and Bergler [115] analysed in total 25 past, current and future Earth-bound science and Earth observation missions to identify the similarities in their technical implementation. The work revealed, for instance, that solar arrays and batteries are the selected technical solution for power supply in all 25 investigated missions. At least 44 % of the missions employ body-fixed solar arrays. 16 out of 22 missions with 3-axis stabilized spacecraft, i.e. 73 %, employ reaction wheels, e.g. for attitude control, slews and disturbance torque compensation. Thrusters for attitude control and/or wheel desaturation are accommodated in 88 % of the missions. 10 out of 12 missions that are operated in LEO, i.e. 83 %, make use of magnetic torquers, either for attitude control or for wheel desaturation.

Page 44 System Simulator Concept Transfer Approach and Application Cases

Similarly to the functional tasks and technical implementations, the design activities that the functional tasks come along with, e.g. the dimensioning of solar arrays and batteries, resemble each other in many missions, as identified by Nemetzade and Förstner [113]. The same is true for the continuous design validation needed along a project, as was the case for LOFT and JUICE, see Sections 3.2.2 and 3.3.2.
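A recurring activity like the dimensioning of solar arrays and batteries can be captured generically with a first-order energy balance; all efficiencies, margins and sizing relations below are textbook-style illustrative assumptions, not values from any of the cited missions:

```python
import math

def size_power_system(p_load, orbit_period_s, eclipse_s, cell_eff=0.28,
                      solar_flux=1361.0, worst_incidence_deg=30.0,
                      dod=0.4, bus_eff=0.9):
    """First-order sizing of solar array area [m^2] and battery capacity
    [Wh] for an Earth-bound mission.

    The array must supply the load in sunlight and recharge the energy
    drawn from the battery during eclipse; the battery is sized for the
    eclipse load at a limited depth of discharge (dod). Efficiencies and
    the worst-case incidence angle are illustrative placeholders.
    """
    sunlit_s = orbit_period_s - eclipse_s
    p_array = p_load / bus_eff * orbit_period_s / sunlit_s
    area = p_array / (solar_flux * cell_eff *
                      math.cos(math.radians(worst_incidence_deg)))
    battery_wh = p_load * eclipse_s / 3600.0 / (dod * bus_eff)
    return area, battery_wh
```

Only the scaling logic is generic; a concrete mission substitutes its own load profile, orbit and technology parameters.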

Due to the depicted commonalities between missions in their development process and their technical realization, this work speaks of mission generalizability. This concept was used to set up the GMSS. The scalability of a technical solution and the need to fulfil the same functional tasks made it possible to build a tool that supports the related design and justification activities at a level that is not so detailed that it annuls the generalizability of missions and limits its potential application range. The current tool landscape, with products like STK, see Section 1.2, confirms the usability of a generic tool for design and validation activities that only needs to be configured and parametrized mission-specifically prior to its use. Products like STK or ASTOS rely on libraries comprising generic spacecraft models. Also ESA [6] refers to the benefits of re-using models across missions. These observations and statements would not be possible if missions and the related design and development activities were not generic to a certain extent, which, in turn, confirms the concept of mission generalizability.

3.4.2. GMSS Scope and Design Description

Based on the concept of mission generalizability and the identified similarities in the development and technical implementation of missions, the LOFT Simulator models as cited in Annex B.3 and the LOFT Simulator User Manual [116] have been re-used for the GMSS, except for the functionality to read in and execute an observation plan. This functionality has been replaced by two general attitude modes, i.e. inertial and nadir pointing mode. Additionally, the user may define an offset attitude in Euler angles to align one spacecraft axis to a celestial target in inertial pointing mode. With the given scope, several Earth-bound mission characteristics according to Nemetzade and Förstner [113], Pirzkall [114] and Bergler [115] are covered by the GMSS in its final version 1.1.1 of 12 December 2013 in support of the systems engineer (primary user) and the project team (secondary user). Further details on scope, functionalities and handling of the GMSS are given in its user manual [117] that accompanied the simulator delivery.
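For illustration, the reference line of sight implied by the two generic attitude modes can be sketched as follows. This is a minimal sketch with hypothetical function and parameter names, not the actual GMSS implementation; the user-defined Euler-angle offset of the GMSS is only indicated in a comment.

```python
import math

def unit(v):
    """Normalize a 3-vector."""
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def pointing_direction(mode, sc_position_eci, target_direction_eci=None):
    """Reference line of sight for the two generic attitude modes.

    mode                 -- "nadir" or "inertial"
    sc_position_eci      -- spacecraft position in an Earth-centred frame [m]
    target_direction_eci -- direction to a celestial target (inertial mode)
    """
    if mode == "nadir":
        # Nadir pointing: line of sight towards the Earth's centre.
        return unit([-c for c in sc_position_eci])
    if mode == "inertial":
        # Inertial pointing: fixed line of sight towards a celestial target;
        # a user-defined Euler-angle offset would then be applied on top.
        return unit(target_direction_eci)
    raise ValueError("unknown attitude mode: %s" % mode)

# Example: a spacecraft on the +X axis at 7000 km altitude radius
# looks along -X in nadir pointing mode.
los = pointing_direction("nadir", [7.0e6, 0.0, 0.0])
```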

3.4.3. SSC Transfer Step 4 - Evaluation of SSC Transfer

The GMSS was set up project-independently, based on the experience with LOFT and BepiColombo and on insights into the work of the Future Programs Department. Consequently, there was no designated GMSS user whose feedback could be relied on for the evaluation of the SSC transfer. The evaluation of the tool reception is therefore restricted to observations by the author and the intermittent feedback received from colleagues who tested the GMSS out of curiosity.

Page 45 General Mission and System Simulator

3.4.3.1. Key Benefits and Added Value of GMSS

In comparison to other tools that are frequently used in early mission phases, like Excel or Matlab, see Section 1.3.2, the key benefit of the GMSS is that it is a validated and reliable software solution with the flexibility to be applied to many studies, while being intuitively manageable, quickly configured and providing a multiphysics view on the system. The GMSS is set up as a ready-to-use tool and as such is supposed to answer mission-independent, recurring questions in a time-effective manner. With the implemented models selected on the basis of the mission generalizability approach, a multidisciplinary view on the system of interest is provided and supported. The system design validation can be assisted, trade-offs for various design parameters are facilitated and best and worst case investigations can be executed easily. The usage of the GMSS allows its users to ultimately demonstrate compliance of the mission and system design with the mission objectives and the team's comprehensive view on the overall mission concept. In the competitive situation during the study phases, this is supposed to be a significant benefit to win the consecutive contracts for the detailed design and implementation phases.

3.4.3.2. Acceptance of GMSS by Study Teams

Based on the positive reception of the LOFT Simulator and the identified current need of the study team [40], it was assumed that a tool like the GMSS would meet approval by the colleagues of the study department. Due to its generic set-up and intuitive handling, the GMSS was expected to be suitable and beneficial for many studies. Unfortunately, however, no studies were in progress when the GMSS was announced to be available. Shortly after completion of the GMSS, the author as focal point of contact for all SSC-related activities left the department. Consequently, the awareness of the existence of the GMSS was fading and, although available, it has not been used for any study yet. Some colleagues started to familiarize themselves and work with it out of curiosity but did not use it systematically any further. The reasons for the non-usage may be manifold. In daily business, time to familiarize oneself with new tools is rare. If the manual is not easy to read or the tool not intuitive to handle, the tool will not be used beyond some first, possibly disappointing and frustrating trials. If the need for tool support in the study is not evident and obvious, or if the tool does not meet the user's needs in terms of functionalities and models, it will not be in use either. At worst, due to the excessive supply of tools and simulators, the tool might get lost in the shuffle and sink into oblivion. Concluding, the acceptability of the GMSS by the study teams is assessed to be in need of development. Feedback from colleagues was fundamentally positive, and they regretted the lack of a current study for actual usage of the GMSS. It is assumed that an application case for the GMSS might have increased the awareness of the study team of the existence and capabilities of the tool. As a knock-on effect, a larger number of interested colleagues would have been expected.

3.4.3.3. Evolution of GMSS Functionalities and Applications

With the concept of mission generalizability (see Section 3.4.1) in mind, combined with the executed need assessment of the study team [40], the generic mission challenge analysis was already performed prior to the GMSS development activities. Consequently, the scope of the GMSS was specified starting with the decision to set it up. As Earth-bound missions were identified to be the mission type the GMSS should support, many functionalities of the GMSS have been re-used with very minor changes from the LOFT Simulator. No further major evolution step has been performed for the GMSS after its initial set-up and fundamental validation.

3.4.3.4. Lessons Learned and Conclusion

The SSC evolution to the novel, generic tool GMSS is evaluated to be successful to the extent that the missing application case allows. The GMSS is based on experience and insights gained in the Future Programs Department and responds to the identified needs of the study team [40] as an easy-to-use, multiphysics system tool to analyse the time-variant performance of the spacecraft in operation. The concept of mission generalizability made it possible to create a software solution that supports the design and justification activities mission-independently. As a platform for the fundamental and generic design and evaluation steps for an Earth observation spacecraft, its scope and functionalities focus on the Phase 0/A needs. For its use in consecutive phases or other mission scenarios, it provides the prerequisites as a first step towards a mission-dedicated simulator that grows along the project. The evaluation of the GMSS usage confirms the significance of the user as the main success factor of the tool and simulator activities. Shortcomings like an immature user manual, the lack of a pilot project or a missing contact person to push the application might lead to limited or missing acceptability and ultimately to rejection or oblivion of the product.

4. Basis for Tool and Simulator Success - the Tool and Simulator Quality Model

Given the significant role of UCD for the SPS and the SSC, this work considers usability - the central notion of UCD - to be a key aspect for tool and simulator validation and ultimately a central indicator for tool and simulator success within a project. To render usability a tangible and concrete measure, the Tool and Simulator Quality Model (TSQM) was developed within the frame of this study. It specifies usability by a set of weighted quality criteria that build upon each other. The quality criteria have been identified and specified as an expression of user needs through inquiries of potential simulator and tool users. The evaluation of tools and simulators against the TSQM as an incorporation and systematization of user needs is in line with the UCD process described in Fig. 2.3 and is understood as a prerequisite of their success in a project.

The TSQM definition was executed in parallel to the simulator analysis and set up activities described throughout this study. As such, both work streams complemented and benefited from each other and now provide a comprehensive picture of the overall topic.

Section 4.1 details the motivation to set up a dedicated quality model for the space sector. Section 4.2 presents the TSQM development approach. Section 4.3 pictures the final version of the TSQM that is intended for the quality validation of simulators and tools conceived for early mission design phases. The validity of the TSQM is discussed in Section 4.4. Section 4.5 details how the TSQM is to be applied for tool and simulator evaluation and provides an application example. Finally, Section 4.6 introduces acceptability as an essential criterion for tool and simulator success.

4.1. Motivation for Definition of Tool and Simulator Quality Model

Usability as a synonym for quality in use has been identified in Section 2.2.1 as the central notion of UCD. Simulators built according to the UCD approach, like the SPS, aim at the achievement of a high degree of usability for their users. The higher the usability of a simulator, i.e. the more it meets its users' needs, the larger is its benefit for the project. Therefore, usability is understood as an instrument to assess the success of a simulator in a project.

However, usability cannot directly be used as a measure. It is vague, elusive, user and context dependent. Hence, an approach had to be found to make the notion tangible and to systematize the underlying user needs. The realized solution is the TSQM that defines usability as the aggregation of lower-level quality criteria that build upon each other and contribute, with various weightings, to the superordinate simulator characteristic usability.


The TSQM is to be understood as a space mission specific tailoring and enhancement of existing, generic standards on software quality and corresponding quality criteria like [50, 69, 118, 119]. In contrast to this work, however, these standards define usability as one quality criterion among several and give the same importance to all quality criteria. As general standards, they are applicable to various sectors. Consequently, they do not take the actual context of use and the corresponding user needs into account. In the context of UCD, however, it is exactly this user-specific perspective that is important to specify and follow. Therefore, the TSQM was defined to overcome these weaknesses. Following the UCD approach, the identified quality criteria as an expression of real user needs are based on experience and communication with space engineers through interviews and questionnaires. Ultimately, the TSQM provides a novel structure to the quality criteria and notions used in common literature and expands the existing models with additional elements.

4.2. Model Definition Approach

In line with the UCD approach that accompanies this work as the central philosophy for simulator creation and evaluation, the TSQM was set up to reflect the actual user needs in the context of early spacecraft mission design phases. This objective is reflected and respected in the TSQM definition approach, which consisted of combining existing quality models and tailoring them to the space mission context with identified user needs.

The TSQM development was performed in two major steps. The first consisted of the familiarization with user quality standards and first interviews with potential users that culminated in the definition of an initial TSQM, cf. [120, 121]. This first model was refined and validated in a second step with a survey accompanied by a questionnaire for the assessment and consideration of actual user needs on a larger scale. Details are cited by Nemetzade and Förstner [122] and are further elaborated in Annexes E.3 and E.4.

In higher granularity, the TSQM definition activities consisted of:

Step 1 - Exploratory Stage of the Study: Definition of the Initial TSQM, cf. [120, 121]

a. Definition of the initial TSQM by the preliminary identification of relevant quality criteria based on standard quality models and experience:

i. Familiarization with the quality criteria, i.e. "ilities", as cited in literature [50, 69, 118, 119], their meaning and their range of validity.

ii. Assessment of their relevance for the space specific M&S domain based on experience within the Future Programs Department, Airbus Defence and Space GmbH, Friedrichshafen, and with simulators and tools designed for the space sector in general, cf. [123].

iii. Assessment of the interrelations between the quality criteria.

iv. Definition of the initial TSQM composed of three levels, cf. [120, Fig.2]. The fundamental idea is that a product's usability or quality is measurable by the degree of satisfaction the product evokes in its users. Satisfaction is composed of cognitive and emotional satisfaction (TSQM Level I). They, in turn, are achieved by efficiency, effectivity and pleasure (TSQM Level II) that characterize the work with the tool and/or simulator. These characteristics can be achieved by or subdivided into a set of independent "ilities" (TSQM Level III), the explicit quality criteria of the tool or simulator.

b. Validation and refinement of the initial TSQM, i.e. validation of the initial selection of quality criteria and allocation of priority levels to the criteria, derived from actual user needs assessed in interviews of potential tool and simulator users:

i. Definition of exploratory, semi-structured interviews, cf. [121, Annex B], as employed in qualitative and quantitative social research [124, 125].

ii. Execution of interviews with potential simulator users originating from different departments of Airbus Defence and Space GmbH, Friedrichshafen, that regularly use and/or are in contact with tools and simulators.

iii. Evaluation of interviews, cf. [121, Annex C], resulting in the validation and refinement of the initial TSQM including a first relative weighting of the quality criteria, see [120, Fig.3].

Step 2 - Scrutinising Stage of the Study: Refinement of the Initial TSQM, cf. [122] and Section 4.3

a. Definition of a questionnaire following design guidelines like [126, 127, 128] for the systematized assessment of user needs including the implementation of lessons learned from Step 1b, see Annex E.2.2.

b. Execution of a survey with potential simulator users from the Future Programs Department from Airbus Defence and Space GmbH, Friedrichshafen, to validate the initial TSQM and to allocate a detailed weighting to the quality criteria, cf. [122].

c. Evaluation of the survey resulting in the definition of the "usability breakdown structure", cf. [122, Fig.1], detailing the bottom level of the TSQM (TSQM Level III, see Step 1a iv.), and allocating a weighting to the criteria composing the breakdown structure, cf. [122] and Annex B. The idea behind this modelling approach is that tool/simulator quality is achieved through the level-wise fulfilment of quality criteria, i.e. "ilities". Each criterion, in turn, contributes to the achievement of the overall tool or simulator usability with a specific weighting, cf. [122] and Annex E.

d. Definition of an improved questionnaire, see Annex E.4, with the lessons learned from the survey in Step 2b.

e. Execution of a survey with potential simulator users within the frame of the "6th International Workshop in Systems & Concurrent Engineering for Space Applications (SECESA 2014)" to validate the obtained quality criteria weighting from Step 2c, see Annex E.

f. Merging of the usability breakdown structure (TSQM Level III), cf. Step 2c, into the efficiency - effectivity - pleasure model (TSQM Level II) of Step 1b, see Section 4.3.


4.3. Consolidated Tool and Simulator Quality Model and Quality Criteria Weighting

4.3.1. Consolidated Tool and Simulator Quality Model

The TSQM defines the usability of a product to be based on three layers built upon each other.

Its bottom layer (Level III) defines the tool's/simulator's quality in use to be achieved by concrete "ilities", summarized in a combination of dialogue and technical criteria and emotional reliability, see Fig. 4.1. The "ilities" and their interrelations are detailed in the usability breakdown structure, itself containing four levels, that will be presented later in this section. Both the dialogue and technical criteria and the emotional reliability contribute equally to achieving the user's satisfaction, distinguished into emotional and cognitive satisfaction (Level I), see Fig. 4.2, that ultimately leads to the perceived quality of the product by the user. Effectivity, efficiency and pleasure (EEP) compose the intermediate and connecting Level II of the model. Figure 4.3 pictures how the dialogue and technical criteria affect the user's satisfaction via their contribution to the product's EEP. Emotional reliability contributes to the usability of a product in three different ways. First, it contributes directly to the EEP of the product, and thus to its usability, via its subordinate criteria according to the usability breakdown structure. A part of emotional reliability, however, is fed by EEP, influencing the emotional and cognitive satisfaction: working with the product might lead to higher (or lower) perceived EEP in general. This experience, in turn, might influence the degree of emotional reliability the user feels towards the product. The higher/lower the perceived EEP, the higher/lower is the emotional reliability towards the product and thus the higher/lower the emotional and cognitive satisfaction. Finally, a share of emotional reliability directly affects the emotional satisfaction towards the product. It neither influences nor is influenced by the EEP but is purely emotional and highly personal, hence not hinged on any lower-level criterion.

The usability breakdown structure developed by Nemetzade and Förstner [122], see Fig. 4.4, constitutes the less elusive part of the TSQM. It details Level III of the TSQM by a level-wise build-up of explicit "ilities" towards usability. The model implies vertical relations between the criteria, i.e. across levels, only. Lateral influences, i.e. within a level, are possible but considered negligible in comparison to the hierarchical relations and hence not analysed further.

The contributions of the single criteria from the usability breakdown structure to the EEP of a product, see Table 4.1, constitute the link between Level II and III of the TSQM.

Analysing Table 4.1, it becomes obvious that the dialogue criteria contribute largely to efficiency in the first place, with secondary effects on the effectivity and pleasure the tool or simulator comes along with. In contrast, the technical and functional criteria primarily determine the effectivity of the tool or simulator. The particular role of emotional reliability is confirmed by its EEP relation. First, it contributes to all three parts of EEP. Second, the influence of its subordinate criteria is in line with the influence of the technical and dialogue criteria in general. Technical maturity as a rather technical criterion determines the effectivity of the tool or simulator, whereas interface maturity affects its efficiency.


Combined with the identified importance of the single criteria listed in Table E.1, it is possible to determine the main contributors to the EEP of the software product. This knowledge guides the tool/simulator developers to the criteria that require particular consideration and tuning to achieve a high degree of EEP. Considering the highest level of the usability breakdown structure, it is emotional reliability that is weighted with the highest points for User Groups 1 and 4, see Table E.1. For User Group 4, familiarization and functional reliability are similarly important for the higher-level criteria efficiency and effectivity, respectively. For User Group 1, documentation and interoperability are to be particularly tuned for high degrees of efficiency, and functional completeness and functional suitability for effectivity.
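The determination of the main EEP contributors described above can be illustrated with a minimal sketch. The criterion-to-EEP mapping is a small excerpt in the spirit of Table 4.1, and the weighting values are placeholders for illustration only, not the actual figures of Table E.1:

```python
# Which EEP aspect(s) each criterion primarily affects (illustrative excerpt).
EEP_MAP = {
    "emotional reliability":   {"efficiency", "effectivity", "pleasure"},
    "documentation":           {"efficiency"},
    "interoperability":        {"efficiency"},
    "functional completeness": {"effectivity"},
    "functional suitability":  {"effectivity"},
}

# Criterion weightings on the 0-10 scale; placeholder values, see Table E.1.
WEIGHTS = {
    "emotional reliability": 9.0,
    "documentation": 8.5,
    "interoperability": 8.0,
    "functional completeness": 7.5,
    "functional suitability": 7.0,
}

def main_contributors(aspect):
    """Criteria affecting the given EEP aspect, highest weighting first."""
    hits = [c for c, aspects in EEP_MAP.items() if aspect in aspects]
    return sorted(hits, key=WEIGHTS.get, reverse=True)
```

Ranking the criteria this way directly points the developer to the quality criteria that most strongly determine, for instance, the efficiency of a tool.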

Figure 4.1.: TSQM Level III.

Figure 4.2.: TSQM Level II and III.

Figure 4.3.: TSQM Level I, II and III.


Figure 4.4.: Detailed TSQM Level III: Usability Breakdown Structure based on [122, Fig.1].


Table 4.1.: Tool and Simulator Quality Model Quality Criteria Contribution to Efficiency, Effectivity and Pleasure; Bolded Criteria were Subject of Interviews and Surveys, cf. [122] and Annexes E.2, E.3 and E.4, and Investigated in More Detail; (x) indicates secondary effects of the evaluated criterion on EEP, (x*) tertiary influence.

Criteria Effectivity Efficiency Pleasure

Emotional Reliability x x x

Technical Maturity x (x) (x)

Interface Maturity (x) x (x)

Transparency x (x)

Accessibility x

Documentation x

Familiarization (x) x (x*)

Documentation (x) x

Self-Descriptiveness x

FAVS1 x (x)

Conformity with User Expectations x x (x)

Adaptability to User Character x x

Biological Perception x x (x)

Readability x x (x)

Clarity x x (x)

Controllability x (x)

Error Tolerance x

Manageability (x) x (x)

Modifiability x x (x)

Accessibility x

Modularity x

Maintainability x

1 Functional Appropriate Visualization and Structure


Table 4.1.: (continued)

Criteria Effectivity Efficiency Pleasure

Accessibility x

Portability x

Compatibility (x) x

Interoperability (x) x

Co-Existence (x)

Functional Reliability x (x) (x*)

Fault Tolerance x x

Recoverability x

Functional Suitability x (x)

Functional Completeness x (x)

Functional Correctness x (x)

Modifiability x x (x)

Accessibility x x

Modularity x

Reusability x x

4.3.2. Quality Criteria Weighting

Not every criterion in the usability breakdown structure is as essential as others for the users and thus for the success of a software product. The knowledge and consideration of these grades of importance render the tool and simulator development more efficient and let it focus on the needs of its users. Therefore, an important aspect beside the identification of the quality criteria has been their ranking and weighting.

4.3.2.1. Calculation of Quality Criteria Weighting

The criteria weighting for tools and simulators used in early design phases has been derived from the potential tool and simulator users, first by a rough evaluation of interviews, see Step 1b listed in Section 4.2, and then, in Steps 2c and 2e, ibid., by means of questionnaires. For this purpose, statements addressing criteria realizations had been formulated, see Part E of the questionnaires in Annexes E.2.2 and E.4.4. Each statement was assigned to a single criterion and was asked to be rated in its importance with a value w ranging between 0 ("not important") and 10 ("very important").

The calculation of the quality criteria weighting follows the usability breakdown structure from the bottom to the top. This means that the weighting of a parental, higher-level criterion is composed of the evaluation of the statements directly assigned to the higher-level criterion and the weighting of its subordinate, lower-level criteria.

Formalizing the calculation of the quality criteria weighting performed by Nemetzade and Förstner [122], on level n of the usability breakdown structure the weighting W_{i,n} for the i-th criterion C_{i,n} is calculated by

    W_{i,n} = \frac{\sum_{k} W_{k,n-1} + \sum_{j} w_{j,n}}{k + j}    (4.1)

i.e. it is composed of the k weightings W_{k,n-1} of the k lower-level criteria C_{k,n-1} and the j weightings w_{j,n} for the j statements q_{j,n} that are directly assigned to the criterion C_{i,n}.

A special case occurs for the lowest level n = 1. Here, no lower-level criteria exist, hence the weighting W_{k,n-1} is obsolete, so that

    W_{i,1} = \frac{\sum_{j} w_{j,1}}{j}    (4.2)

W_{i,n} reaches values between 0 and 10, based on the range of the reply options w in the survey that lie between 0 and 10.
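The bottom-up calculation of Eqs. (4.1) and (4.2) can be sketched as follows; the criterion names and statement ratings are hypothetical and serve only to illustrate the mechanics of the weighting scheme:

```python
def criterion_weight(statement_ratings, child_weights=()):
    """Weighting of one criterion per Eqs. (4.1)/(4.2): the mean over the
    ratings w of its directly assigned statements and the weightings W of
    its lower-level criteria (empty at the lowest level, n = 1)."""
    values = list(child_weights) + list(statement_ratings)
    if not values:
        raise ValueError("criterion needs at least one rating or child weight")
    return sum(values) / len(values)

# Lowest level (n = 1), Eq. (4.2): criteria rated by their statements only.
w_documentation = criterion_weight([8, 9, 7])   # (8 + 9 + 7) / 3 = 8.0
w_self_descr    = criterion_weight([6, 7])      # (6 + 7) / 2 = 6.5

# Higher level, Eq. (4.1): the parent criterion combines its own statement
# ratings with the weightings of its subordinate, lower-level criteria.
w_parent = criterion_weight([9], child_weights=[w_documentation, w_self_descr])
```

Since every rating w lies between 0 and 10, each computed weighting also stays within that range, consistent with the value range stated above.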

4.3.2.2. Quality Criteria Weighting Results

Table E.1 and Fig. 4.5 picture the quality criteria weighting results obtained from the survey within the Future Programs Department at Airbus Defence and Space GmbH in Friedrichshafen, see Step 2b listed in Section 4.2, and the successive survey in the frame of the SECESA 2014 conference, see Step 2e ibid. For the majority of criteria, the participants of SECESA 2014 rated them equally high or higher than the colleagues within Airbus Defence and Space GmbH, with a delta of up to three points (see self-descriptiveness). Very few criteria are rated lower, like documentation referring to the implemented tool/simulator models contributing to the overall transparency of the product. The reason for this overall high ranking is assumed to be the larger sensitivity and involvement of the SECESA participants in the topic of tool and simulator quality in general. As in the case of the first survey within Airbus Defence and Space GmbH, the dialogue criteria are rated as important as the technical and functional criteria by the SECESA 2014 participants, see Fig. 4.5. This outcome confirms the conclusion by Nemetzade and Förstner [122] that it is important for tool and simulator developers to consider both groups of criteria equally in the product development.

The quality criteria weighting results in Table E.1 confirm that some criteria are more important to the potential users than others. The higher their evaluated importance, the higher is their relevance in view of a successful usage from the perspective of the user. In other words, the identified weighting directly expresses the conditions for the potential usage of the tools/simulators by the users. Criteria with high weighting are expected to be hygiene criteria, i.e. criteria that have to be fulfilled by the software products to be employed at all, cf. Herzberg [129] who defined hygiene factors in the context of motivation for work. Criteria with lower points are rated to be beneficial but not necessarily important. All the more it is surprising that criteria like functional suitability, i.e. the suitability of the scope of the software product to the application case, are not rated with higher points. However, it is this observed discrepancy between expected and obtained results that the study was intended to reveal.

4.4. Validity of Tool and Simulator Quality Model

Two types of validity are relevant with regard to the TSQM. First, the validity of the survey with regard to various sources of bias is to be considered. It states whether the survey measures what it is supposed to assess [128]. The survey validity is grounded on the survey reliability that indicates the consistency of answers, i.e. the probability to receive the same answer to recurring questions. Second, the survey results, in particular the quality criteria weighting, have a range of validity, i.e. they are valid for a specific context of use (mission phase, type of user, etc.) only. This type of validity is important for the application and actual employment of the TSQM.

4.4.1. Validity of Survey

The TSQM is based on a descriptive study, i.e. its purpose is to interrogate a representative sample and then draw a conclusion valid for all potential users [128].

Questionnaires and interviews as used in the study are instruments for data collection with the aim to explore and measure attitudes by means of non-factual questions [128, p.100]. Attitudes and opinions are difficult to measure and even more challenging to evaluate. The same opinion or attitude, for instance, may be expressed differently by different respondents, while others do not have any opinion or attitude at all. Questions aiming at opinions and attitudes are multilayered and potentially touch emotionally loaded topics; they are therefore more sensitive to linguistic (e.g. question format, wording), situational (i.e. context) and other biases [128, pp.143]. Sources of unreliability range from the question wording over the selection of respondents and the execution of interviews/distribution of questionnaires up to their interpretation and evaluation. Table 4.2 cites sources of bias that were sought to be avoided during the study, with no claim to be complete. Sources of bias might exist that were unknown and not avoided at the time of the study, thus influencing its results.

So interview and questionnaire results can both be biased and unreliable, reducing their validity. To achieve a high degree of reliability and validity of the answers, rules of survey design and execution as cited in common literature like [126, 127, 128] were followed during the study to get the respondents to state their opinion freely while mitigating the possible impact of misunderstandings and bias. For instance, emphasis was put on paraphrasing the quality criteria to avoid misinterpretations, recurring questions were employed to reveal inconsistencies in answers, and each single quality criterion was addressed by a set of questions offering a number of answer options to the respondent, allowing a more reliable evaluation of the criteria weighting. Despite the countermeasures, however, it cannot be fully excluded that some error found its way into the final results.


Table 4.2.: Tool and Simulator Quality Model: Sources of Bias.

Sources of Bias Related to Survey Design

• Questions are wrongly worded.
• Questions are not neutral but loaded and/or leading.
• Criteria that shall be paraphrased by questions are not measurable with the intended question.
• Questions with a large effect on the criteria weighting are missing in the interview/questionnaire.
• The linear scale to map the respondent's attitude oversimplifies the actual attitude range of the respondent.

Sources of Bias Related to Interviewers

• Interviewer understands the respondent or interprets the answer wrongly while listening with the third ear, i.e. noting not only what is being said/written but also what is being omitted, interpreting the body language, etc.
• Interviewer is not neutral but tends to be selective in note-taking [128, p.91].
• Information is lost because of the interviewer's inattention during the interview.
• Interviewer influences the respondent
  − by his/her appearance [128, p.95] and/or role: the respondent is daunted and/or cautious, answering not what his/her actual opinion is but what he/she thinks might be required to say; note: this source of bias is also valid in case a questionnaire is distributed personally.
  − by asking directive or subtle questions, putting (wrong) ideas into the mind of the respondent.
• Interviewer does not create the same conditions for all respondents, e.g. does not provide explanations to questions to all respondents; note: in general, interviews might evolve during their execution, loosening their comparability.

Sources of Bias Related to Respondents

• Respondent does not express his/her real opinion but what he/she thinks is expected, although a question is not understood, the question content is unfamiliar or his/her real attitude or opinion is different [128, p.138].
• Respondent answers a question although no opinion or attitude exists.
• Mood of the respondent changes along the interview/questionnaire, influencing the opinion/attitude and decreasing the consistency in answers.
• Respondent understands the question wrongly.
• Respondent answers the question with wrong words.


Table 4.2.: (continued).

• Respondent does not follow the equality of intervals introduced by the linear scale, e.g. the leap from 8 to 10 is considered/interpreted to be "larger" than that between 6 and 8.

Sources of Bias Related to Evaluation

• Information is lost during evaluation since
  − answers to single questions are summarized.
  − answers with low ratings are not taken into the calculation of the criteria weighting.
• Results are biased by the evaluator since
  − answers are wrongly understood/recorded, i.e. misinterpreted.
  − the evaluator's attitude towards the topic is not neutral.
  − knowledge about the respondent's background or attitude, experienced in environments that are not necessarily related to the study topic, influences the evaluation of the answers.

4.4.2. Validity Range of Survey Results in View of their Application

The specific focus on the single user and his/her personal perspective and desires that the UCD approach entails limits at the same time the application range of the obtained results. The TSQM presented in Section 4.3.1 as well as the criteria ranking resulting from the surveys, see Table E.1 and Fig. 4.5, are valid and applicable in a context similar to the one in which the model was deduced and the results were obtained. In detail, this means that the results are valid and applicable for the evaluation and design of tools and simulators intended for and employed in early satellite design phases. As tasks and environments might change over time, so can opinions and attitudes. The results are only a snapshot in time and thus possibly have a limited life span. Hence, extensive care should be taken when applying the TSQM to another context with other types of users, tasks or environments than the ones encountered in the study.

In consideration of the potential biases cited in Section 4.4.1, it is important to note that the presented criteria weighting is only valid with regard to the questions that were taken into consideration for the determination of the quality criteria weighting. Strictly speaking, when applying the weighting to software products, it is important to be aware that only a specific set of characteristics was employed to evaluate the "ilities" of the products.

4.4.3. Future Work on Validity of Tool and Simulator Quality Model and Criteria Weighting

To enhance the validity and reliability of the survey, the wording of the questionnaire is recommended to be reworked to minimize the room for misinterpretation, e.g. replace effectivity by effectiveness, shorten and sharpen the questions and improve the neutrality of the wording. To assess the criteria over a wider range of their realization, the number of reply options and of questions covering a single quality criterion is recommended to be enlarged. At the same time, the length of the questionnaire is to be revisited. In this respect, it might be supportive to split the original questionnaire and create several smaller sets of questions, each covering a set of dialogue and technical criteria, so as not to overburden the respondents and to keep their motivation high along the survey.

To obtain greater statistical significance confirming the validity of the survey results, a larger sample size is recommended to be pursued for the survey. Efforts in the frame of conferences and lectures, i.e. the "Deutscher Luft- und Raumfahrtkongress 2014" (see Annex E.3), the Airbus Defence and Space GmbH internal training "Advanced Systems Engineering Practice (ASEP)" in 2014 and the master's program "Systems Engineering" at the Universität der Bundeswehr München, have been pursued; however, no responses were received from these events, hence not resulting in the desired leap in respondent count. At the same time, it is important to select the potential respondents with regard to their background and adequacy for the objective of the survey.

To broaden the TSQM application range beyond early satellite design phases as presented in Section 4.3, quality models covering other contexts, users and environments should be derived in the future. In that context, differences between the models and weightings might be particularly important for tool and simulator developers, emphasizing the context- and user-specific needs to be aware of for successful tool and simulator usage within a project.

4.5. Application of Tool and Simulator Quality Model

This section presents how to apply the TSQM for simulator or tool evaluation. The resulting quantification of the tool and simulator usability allows for the comparison of software products. As an example, the TSQM was used for the evaluation of simulators created according to the SSC in comparison to STK with regard to their fulfilment of the TSQM for early satellite design phases. The goal of the presented TSQM application is the demonstration of its usability and utility in general and the indirect validation of the SSC and its performance in comparison to other simulators.

4.5.1. Evaluation Scenario

The TSQM reflects the particular needs of mission and system developers in early satellite design phases. Tools and simulators, however, might be set up to fulfil the needs of their potential users in various contexts. Hence, for the evaluation of software products against the TSQM, it is important to define the scenario they are evaluated in, i.e. the tasks the simulator is expected to support, the users that will employ the simulator and the context of use the software solution will be used in.

For the exemplary application within the frame of this work, scope and handling of the simulator were evaluated by the author of the present work based on the context of use encountered in the Future Programs Department at Airbus Defence and Space GmbH in Friedrichshafen, the simulator use scenarios cited by ESA in an Invitation to Tender on simulators for early design phases [130] and in particular on the following assumptions:


• Task: exemplary systems engineering task to be supported by the tool or simulator: dimensioning of the solar array size, or justification of the chosen solar array size, respectively, of a satellite in LEO with specified payload and satellite bus, i.e. known power consumption, and defined attitude for satellite operations.

• User: systems engineer as potential tool or simulator user with rather basic programming capabilities and/or a limited amount of time for tool or simulator set-up activities, as the focus lies on managing the system overview and relations during the study phase.

• Context of Use: mission phase with rather limited cost budget and time resources, i.e. limited time for familiarization with a new tool or simulator.
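The task assumed above, solar array sizing for a LEO satellite with known power consumption, can be illustrated by a minimal first-order sketch. The relation and all numbers below are textbook-style assumptions introduced for illustration only; they are not taken from the thesis or from the SPS models.

```python
# Illustrative first-order solar array sizing for a LEO satellite.
# The sizing relation and all numeric values are hypothetical assumptions.

def solar_array_area(p_demand_w, sun_frac, cell_eff, packing, cos_incidence,
                     solar_const=1361.0):
    """Array area [m^2] such that the orbit-average generated power
    covers the orbit-average demand.

    p_demand_w    : orbit-average power consumption of the satellite [W]
    sun_frac      : fraction of the orbit spent in sunlight (LEO: ~0.6-0.65)
    cell_eff      : solar cell efficiency (e.g. 0.28 for triple junction)
    packing       : packing/loss factor of the array (e.g. 0.85)
    cos_incidence : mean cosine of the sun incidence angle on the array
    solar_const   : solar constant [W/m^2]
    """
    # Power generated per square metre while in sunlight:
    p_per_m2 = solar_const * cell_eff * packing * cos_incidence
    # Eclipse operation is approximated by scaling the demand with 1/sun_frac:
    return p_demand_w / (p_per_m2 * sun_frac)

area = solar_array_area(p_demand_w=900.0, sun_frac=0.62, cell_eff=0.28,
                        packing=0.85, cos_incidence=0.95)
print(f"required array area: {area:.2f} m^2")
```

A systems engineer could vary single parameters of such a relation to justify a chosen array size, which is exactly the kind of task the evaluated tools and simulators are expected to support.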

4.5.2. Evaluation Approach

The evaluation of the fulfilment of quality criteria by a software product follows the approach for the determination of the criteria weightings retrieved from the questionnaire answers, cf. [122]. The criteria importance has been derived from answers to questions addressing criteria realizations. Similarly, the evaluation of the fulfilment of a criterion by a tool or simulator is based on the evaluation of the fulfilment of detailed features or characteristics by the tool or simulator. The sampled features are a reflection of the statements q_{j,n} used for the determination of the criteria weightings, see Section 4.3.2. This evaluation approach allows working with the results of the survey while staying within their range of validity. Table E.7 lists the sampled tool or simulator features and the related survey questions.

The evaluation follows the usability breakdown structure, abstractly pictured in Fig. 4.6, from lower to higher level. Similar to the criteria weighting approach, the evaluation of a lower-level criterion flows into the evaluation of the parental, higher-level criterion. Figure 4.7 pictures the relation between the criteria weighting calculation performed by Nemetzade and Förstner [122], see also Annex E, and the tool/simulator evaluation presented in the following.

Figure 4.6.: Abstracted Usability Criteria Breakdown Structure: Evaluation Flow Runs from Lower to Higher Level Criteria.


Figure 4.7.: Calculation of Quality Criteria Weighting (as Performed by Nemetzade and Förstner [122], see also Annex E) (Left), Tool and Simulator Evaluation with the TSQM (Right), and Their Relation.

The fulfilment F_{i,n} by a tool/simulator of a criterion C_{i,n} on level n of the quality breakdown structure is calculated from the fulfilment of the corresponding tool/simulator characteristics or features f_{j,n} and their respective importance w_{j,n}, combined with the importance W_{k,n-1} and fulfilment F_{k,n-1} of the k lower-level criteria C_{k,n-1} (if any), in relation to the maximal achievable criteria and feature fulfilment. It is:

\[
F_{i,n} = \frac{\sum_k W_{k,n-1} \cdot F_{k,n-1} + \sum_j w_{j,n} \cdot f_{j,n}}{\sum_k W_{k,n-1} \cdot F_{k,n-1,\max} + \sum_j w_{j,n} \cdot f_{j,n,\max}} \cdot R_s \tag{4.3}
\]
with the scaling factor R_s. It is to be noted that for the lowest level n = 1 no lower-level criteria exist, hence W_{k,n-1} and F_{k,n-1} are obsolete. So it is
\[
F_{i,1} = \frac{\sum_j w_{j,1} \cdot f_{j,1}}{\sum_j w_{j,1} \cdot f_{j,1,\max}} \cdot R_s \tag{4.4}
\]

Values for the weightings W_k and w_j are based on the TSQM survey results and are retrieved from Table E.1.

Values for f_{j,n} are allocated during the evaluation and range between 0, i.e. no fulfilment of the underlying tool/simulator characteristic by the investigated software product, and 10, i.e. 100 % fulfilment of the feature, in order to keep the value range chosen for the criteria weighting, see Section 4.3.2. Values for F_{k,n-1} have been calculated on the precedent level. If the scaling factor R_s is chosen to be of value 10, the same value interval of 0 to 10 is achieved for the calculated criteria fulfilment F as for the characteristic fulfilment f. With F_{k,n-1,\max} = 10 and f_{j,n,\max} = 10, i.e. the maximal achievable criteria fulfilment, Eq. (4.3) can then be simplified to:

\[
F_{i,n} = \frac{\sum_k W_{k,n-1} \cdot F_{k,n-1} + \sum_j w_{j,n} \cdot f_{j,n}}{\sum_k W_{k,n-1} + \sum_j w_{j,n}} \tag{4.5}
\]

and for level n = 1, Eq. (4.4) evolves to:
\[
F_{i,1} = \frac{\sum_j w_{j,1} \cdot f_{j,1}}{\sum_j w_{j,1}}. \tag{4.6}
\]
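The aggregation of Eqs. (4.5) and (4.6) can be sketched as a small weighted-average computation. The criterion names and all weights and ratings below are hypothetical illustration values, not taken from Table E.1.

```python
# Minimal sketch of the criteria fulfilment aggregation of Eqs. (4.5)/(4.6),
# assuming the simplified form with R_s = 10 and F_max = f_max = 10.
# All names, weights and ratings are hypothetical illustration values.

def fulfilment(features, children=()):
    """Fulfilment F of a criterion on the 0..10 scale.

    features : list of (w_j, f_j) pairs - feature weight and rated fulfilment
    children : list of (W_k, F_k) pairs - weight and already computed
               fulfilment of lower-level criteria (empty on level n = 1)
    """
    num = sum(W * F for W, F in children) + sum(w * f for w, f in features)
    den = sum(W for W, _ in children) + sum(w for w, _ in features)
    return num / den

# Level-1 criteria, each rated via weighted features (Eq. 4.6):
F_learnability = fulfilment([(8.0, 7.0), (6.5, 9.0)])
F_operability = fulfilment([(7.2, 6.0), (5.0, 8.0)])

# A parent criterion combines its children and its own features (Eq. 4.5):
F_usability = fulfilment(features=[(6.0, 7.0)],
                         children=[(9.0, F_learnability),
                                   (7.5, F_operability)])
print(round(F_usability, 2))
```

The evaluation of a full breakdown structure repeats this step bottom-up, exactly as the evaluation flow of Fig. 4.6 prescribes.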

4.5.3. Evaluation Results

Table E.7 presents the results of the exemplary evaluation of the SSC simulators, e.g. SPS and GMSS, in comparison to STK (Free Standard Version 11.4 without add-ons), following the evaluation scheme described in Section 4.5.2 under the evaluation scenario described in Section 4.5.1, showing a higher usability of 7.8 for the SSC simulators in comparison to 6.6 for STK. In line with the context of use, the criteria weighting for the occasional users (User Group 3), see Table E.1, has been adopted for the evaluation. In case a weighting was not available, e.g. for question 65, the weighting by User Group 4 (SECESA participants) has been used.

4.5.4. Validity of Evaluation Approach and Future Work

The tool and simulator evaluation is valid to the extent of the sampled tool and simulator characteristics, the assumed users, context of use and test scenario. As mentioned in Section 4.4.2, care should be taken in generalising these results, specifically in view of another context with different types of users, tasks or project phases. It can be argued that the chosen scenario does not sufficiently represent the actual context of tool or simulator usage in early satellite design phases. Therefore, the tools and simulators are recommended to be verified against a set of scenarios representative of the tasks typically encountered in this stage of a mission. In view of an improved questionnaire with more reply options and criteria realizations, see Section 4.4.3, the tool and simulator evaluation based on the improved TSQM is supposed to cover the assessment of an accordingly larger set of tool and simulator characteristics.

With regard to the execution of the evaluation, the results might be biased by the author, as they depend on the user or evaluator and his transfer of encountered simulator characteristics into fulfilment figures. For future work, it is recommended to propose a fulfilment grid for the simulator characteristics per question to align evaluations and to have the evaluation executed by an independent authority. Usability tests on predefined use scenarios are a possible means to evaluate and compare the simulators in view of the quality criteria while reducing the bias of the evaluation and increasing its validity. Usability tests are an empirical method of usability evaluation (for further methods of usability evaluation, extensive literature is available for consultation, e.g. [131, 132, 133, 134]). In usability tests, the (potential future) user employs the product to assess if it meets its intended purpose. Products that benefit from usability testing are in particular consumer products, web sites or web applications and computer interfaces. For more details, the interested reader is referred to the variety of literature on the topic, e.g. [135, 136]. As an example, usability tests were designed by Horn [77] and further elaborated and conducted on the LOFT Simulator, GMSS, STK and STC by Pirzkall and Suttarp [137]. The objective of that work was to compare and validate the handling of the simulators and tools and to reveal room for improvement. The usability tests were accompanied by questionnaires conducted prior to and after the test, covering a subset of quality criteria in order to validate the usability test observations. Though revealing deficiencies for each application separately,

a comparable rating of the usability of the tested simulators, in particular GMSS and LOFT compared to STK or STC, was eventually too difficult to be reliable. This is largely due to the defined test reference scenario that, though aiming at a handling check only, was influenced by the available content of the software products (usability criterion functional completeness). As a lesson learned, it is recommended to define reference scenarios that aim solely at either content or handling of the tools and simulators. The accompanying use of questionnaires along the usability testing is recommended as an additional source of information for the validation of the observations during the usability tests.

4.6. Going Beyond the Tool and Simulator Quality Model - Acceptability as Essential Criterion for Tool and Simulator Success

Achieving a high degree of usability is one but not the sole contributor to a software product being successful. A simulator might be perfectly fitted to a user need, thus achieving a very high usability; however, it might not be used. The reasons for this contradiction are manifold. First, the existence of the solution must be known to its user. This condition seems trivial, but ignorance of commercially available solutions or, on a small scale, of the MS Excel work that a colleague once did and that could help, was experienced to be a common inhibitor for tool and simulator usage in daily project work. Assuming that this essential condition is fulfilled and the solution is well known to its potential user, the next obstacle might be of legal or political nature. For instance, a licence for a product might be too expensive, the product itself might contradict cyber-security regulations, the product might be related to a business competitor, etc. These regulative obstacles are not necessarily within the decision range of the single tool/simulator user. Finally, it is the acceptance or, to keep the "ility" usage, the acceptability of a tool/simulator by its user that eventually determines whether a software product is used [74] and can thus unfold its full usability. Although it can be argued that a user might be forced to use a product despite his lack of acceptance, it is finally his willingness to see its usability for his purpose that allows the tool or the simulator to develop into a full success. Without that, only a limited degree of efficiency and effectiveness, not to speak of satisfaction, can be achieved.

Based on the experience of the author along the study, usability and acceptability are hypothesized to be equally important for the success of a tool or simulator within a project, see Fig. 4.8. Expressed as an equation, this means

success = usability × acceptability,

i.e. the degree of success achieved by a tool or simulator equals the degree of tool or simulator usability times the degree of acceptability by its user. For a normalized approach, the degrees of success, usability and acceptability are expressed as values between 0 (no fulfilment) and 1 (total fulfilment).

While the degree of usability of a tool or simulator can be obtained by the application of the TSQM as shown in Section 4.5, a numerical value for the degree of acceptability is more challenging to determine. It is difficult to assess by questionnaires or interviews as it is assumed to be highly emotionally loaded and

thus potentially subject to a large number of biases. A potential solution could be to count and evaluate the employments of each tool and simulator within a project and compare the figures. The higher the count, the higher the acceptability of the tool or simulator is likely to be. Yet this relative figure is only the first step towards a value representing the absolute acceptability of the software product, which is more difficult to obtain. In that context, the necessary number of application cases equalling 100 % acceptability is difficult to assess and needs to be carefully evaluated. Also, the influence of "forced acceptability" on the simulator/tool success, i.e. a user being forced to use the product despite his unwillingness to do so, has to be investigated in detail in future work.

It is important to note that the degree of acceptability evolves during the tool and simulator usage and thus can be influenced. A high degree of usability supports its increase; a poor usability evokes the reverse result. The same is true for the subordinate levels of satisfaction, EEP and the single quality criteria, see Fig. 4.8, which all influence acceptability. In that context, the identified hygiene criteria, see Section 4.3.2, are crucial for a high acceptability. The significance of acceptability is difficult to assess via questionnaires or interviews but is assumed to be generally underestimated. In view of its importance for the final employment of the software product, its actual significance is expected to be very high.

Acceptability is not to be confused with emotional reliability. Emotional reliability can be paraphrased as trust. Acceptability is about the personal attitude of a user towards the product, i.e. his willingness to work with the tool or simulator. Acceptability might be influenced by emotional reliability: the higher the emotional reliability towards a product, the higher its acceptability.

Figure 4.8.: Defining Tool or Simulator Success as Combination of Acceptability with Tool and Simulator Quality Model.

Figure 4.5.: Quality Criteria Ranking for Different Tool and Simulator User Groups based on Survey Results, see Annex E: (a) Frequent Tool Users and (b) Occasional Tool Users at Future Programs Department at Airbus Defence and Space GmbH in Friedrichshafen, (c) and (d) Participants of Conference SECESA 2014. (a) User Group 2, (b) User Group 3, (c) User Group 4, (d) User Group 4 with Adapted Questionnaire.

5. A Novel Interpretation of the System Simulator Concept - the Parameter Influence Net Method

5.1. Identified Need for Parameter Influence Net Method

Knowledge and control over the emergent behaviour of the engineered system is one of the major responsibilities of the systems engineer, see Section 2.2.2. The system behaviour, in turn, is the result of the underlying system structure and the interaction of system elements with the system environment. Insight into the system's structure, i.e. the relations between the system elements characterized by parameters, is therefore a key capability for a systems engineer and his success. Knowing (a) which elements are drivers in the system and/or are robust, (b) which elements are highly interconnected, and (c) where the bottlenecks are, allows to actively manage and control the system interfaces to achieve a desired emergent behaviour and to guide the multidisciplinary teams accordingly.

Knowledge about the system's structure is particularly valuable in early design phases, where decision making and evaluation in the face of opposing interests and multiple, sometimes conflicting, requirements are of major relevance in the highly iterative and recursive development process. In this phase, requirements and constraints change frequently. Expressed in parameter value and structural modifications, these changes possibly have a significant range of influence on the overall system due to the cross-linking of the system elements, potentially influencing the system's balance, robustness and performance. It is all the more important to evaluate design changes and their impact on the desired resulting system behaviour prior to their realization, to prevent dispensable and costly design iterations. The better the system structure is known and understood, the easier and more effectively it can be controlled and balanced and design decisions can be made in that respect [8]. This means: the more influential a system element and its characterizing parameters are identified to be, the more important it is to keep the element robust for a design freeze, and the more effective, i.e. requiring less effort, is its parameter modification to achieve a desired system behaviour in comparison to changes in other system elements.

Functionalities and scope of the SPS already aim at providing its user a sound understanding of the system structure. Lower-level knowledge about the parameter interdependencies is given, as the SPS models are specified by the team itself. However, the simulator provides direct access and visibility only to those system parameters that have been defined as input or output variables of the simulation. With rising complexity of the models, the overall parameter interdependencies become more difficult to identify and control, and changes are more difficult to track throughout the system. The parameters' degree of influence in the system can only be assessed by experience and trial and error with the SPS, and is rather of qualitative

nature. A clear picture of the parameter dependencies and their strength, i.e. a quantification, is barely provided by the SPS or similar tools cited in Section 1.2.3.

Other methods employed in the space industry to support decision making and evaluation, ranging from time-saving intuition to resource-consuming simulations and real system tests [94], barely provide the required insight into the system structure, either. Intuition, as method of choice in early design phases where time and resources are limited and uncertainty is high, often fails due to the complexity of the assessed system. Predicting the effects of parameter changes on the complete system becomes more difficult the more parameters and cross-links are considered. Likewise, the evaluation of simultaneous changes of several system variables is barely possible with methods based on intuition. On the other end of the method range, simulations and real system tests, though providing quantitative results, are costly and often focus only on one or a few aspects of the system, see Section 1.3.2. Although simulators like the SPS fill this gap by considering the complete system of interest, the problem remains that the knowledge about the system structure, i.e. the parameter relations, is based on experience and is possibly enlarged by less effective trial and error methods, the analysis of the simulator source code if accessible, or the consultation of the simulator documentation if available. Hence, knowledge of and thus control over the parameter dependencies remains largely limited.

To close this gap, the Parameter Influence Net Method (PINM), as a novel interpretation of the SSC, comes in by complementing the benefits of a system tool like the SPS. The PINM determines and plots the system element dependencies and their strength in a net of parameters. Underlying the net are system models with meaningful parameters that characterize the system elements. The quantified influence of a system element, represented by its parameter(s), within the system is obtained by determining the system change the parameter would induce by its very change. This change can be tracked parameter by parameter through the modelled system, from the system or net input up to the system or net output. Through the careful choice of parameters as net output, for example parameters describing the performance of the system, the system behaviour resulting from a system element parameter change can be assessed with the Parameter Influence Net (PIN). As such, the PIN supports the systems engineer by providing insight into the system structure, characterizing system elements as drivers, robust and/or highly interconnected, and yielding indications of the expected system behaviour, i.a. resulting from a change.

5.2. Approach towards Parameter Influence Net Method, its Algorithm and Implementation Process

With the need for the assessment of the system element interdependencies identified, the question arose how to realize it. First, it was decided to make use of the existing concept of formulas to model the system structure and its behaviour. Second, an algorithm had to be defined to quantify the influence of the parameters. Third, a visualization method for the net had to be specified. These three steps are elaborated in the following and are concluded by the implementation process.


5.2.1. Modelling System Structure and Behaviour

The space engineering discipline (as any other) uses mathematical expressions as the basis of the various design and analysis activities performed along the mission. These formulas model the system structure and the system's emergent behaviour as a representation of reality. Their level of detail depends on the acceptable degree of reality simplification, which should correspond with their purpose and application. With a reasonable degree of detail, the majority of the formulas commonly employed in the space domain are based on the basic algebraic operations summation, multiplication, power and trigonometric functions, see the models implemented in the SPS, LOFT and JUICE Simulator presented in Annex B, Annex D and related references [47, 48, 111, 116, 138, 139, 140, 141, 142, 143].

The algebraic expressions imply system structure by connecting system elements through their characterizing parameters. Each formula f connects a set of parameters x_{1,...,q} (input) and y (output), see Fig. 5.1, that represent and describe characteristics of the system element(s) they are associated to. Depending on the definition of the system in focus, different parameters may come into play to represent the system element. For example, the performance of a power subsystem might be expressed by the power generated by the solar array or by the potential power demand of the system. Which parameters are eventually employed depends on the purpose of the model. The utilized parameters might connect formulas themselves. For example:
\[
y_1 = f(x_{1,1}, x_{1,2}) = x_{1,1} \cdot x_{1,2} \tag{5.1}
\]
is connected with:
\[
y_2 = g(x_1, x_2) = x_1 + x_2 \tag{5.2}
\]
by the relation x_1 = y_1, thus connecting the functions f(x_{1,1}, x_{1,2}) and g(x_1, x_2) to the combined function:
\[
y_2 = h(x_{1,1}, x_{1,2}, x_2) = g(f(x_{1,1}, x_{1,2}), x_2) = x_{1,1} \cdot x_{1,2} + x_2, \tag{5.3}
\]
see Fig. 5.2. Finally, these connections represent the system structure and result in a system behaviour expressed by the output of the (interconnected) formula(s). Parameter values might be set, as input x_i to a formula, or be calculated, as output y. For interconnected formulas, the values of the intermediate parameters connecting the single formulas are not set but result from the preceding formula, and thus are calculated.
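The formula interconnection of Eqs. (5.1) to (5.3) can be sketched directly as function composition: the output of f feeds g through the intermediate parameter x_1 = y_1.

```python
# Sketch of the formula interconnection in Eqs. (5.1)-(5.3): the output of f
# feeds g, so the combined system behaviour is h = g(f(., .), .).

def f(x11, x12):
    """Eq. (5.1): y1 = x11 * x12."""
    return x11 * x12

def g(x1, x2):
    """Eq. (5.2): y2 = x1 + x2."""
    return x1 + x2

def h(x11, x12, x2):
    """Eq. (5.3): the relation x1 = y1 connects f and g."""
    return g(f(x11, x12), x2)

# The intermediate parameter is calculated, not set:
assert h(3.0, 4.0, 5.0) == 3.0 * 4.0 + 5.0
```

In the same way, a larger net of interconnected formulas propagates set input values through calculated intermediate parameters up to the net output.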

5.2.2. Modelling System Element Influences within a System - Algorithm behind the Parameter Influence Net Method

A single system element's influence within the system, i.e. its relation to other elements and its influence on the overall system behaviour, becomes obvious in the impact of its change (one system parameter) on the entire system (all other system parameters), which determines the pros and cons of a design decision. It is to be noted that changing and evolving parameter values are common practice during the life cycle of the system. While in the design phase the values change in response to changing requirements and constraints, in the operation phase they might evolve due to degradation of the hardware or the changing environment all along the mission.


Figure 5.1.: Algebraic Expressions Imply System Structure by Connecting System Elements through their Characterizing Parameters: Function f Connects Input x_{1,...,q} with Output y.

Figure 5.2.: Interconnected Algebraic Expressions Create the System Structure and Behaviour by Connect- ing Parameters from Various System Elements.

A possibility to assess the influence of a system element within a system is to compare the system performance (or output, in more general terms) prior to and after a system element's change, i.e. a parameter value change. In other words: when deviating from a given initial set of parameter values by the modification of a parameter value x_i, it is assessed how the values of the other system parameters change, in particular the output or performance parameter y. The comparison is scale-independent if the change in system performance is expressed relative, i.e. percentaged, to the initial performance value. So algorithms have been developed to algebraically assess the impact of a parameter x_i as an expression of the percentaged change p_y of the system performance parameter y initiated by the change p_{x_i} of the parameter x_i, defined as:

\[
p_y := y_{changed} \cdot y_{DP}^{-1} - 1,
\qquad
p_{x_i} := x_{i,changed} \cdot x_{i,DP}^{-1} - 1.
\]
This has been done for the mathematical operations summation, multiplication and power, cf. [144], and trigonometric functions, see Annex F.1, since these elementary operation types compose the majority of the algebraic formulas at higher system level describing the physical behaviour of a spacecraft interacting with its environment. Combined with the cascade rule for nested functions, cf. [144], they form the basis for the PINM and are described in Sections 5.2.2.1 to 5.2.2.5, followed by an application example in Section 5.2.2.6. The Design Point (DP) is introduced as the point of reference, describing the initial system and its

behaviour prior to the imposed change. The corresponding parameter values are marked with the index DP. So it is:
\[
y_{DP} := f(x_{1,DP}, \cdots, x_{q,DP}).
\]

5.2.2.1. Parameter Influence Net Method Algorithm for Multiplications

A pure multiplication is described by:

\[
y(x_1, \cdots, x_q) = \prod_{i=1}^{q} c_i \cdot x_i^{a_i}, \tag{5.4}
\]
with c_i and a_i being constants. The percentaged change of the output y in dependence of the percentaged change of the input x_i is expressed by:
\[
p_y(p_{x_1}, \cdots, p_{x_q}) = \prod_{i=1}^{q} (1 + p_{x_i})^{a_i} - 1. \tag{5.5}
\]

Equation (5.5) demonstrates that in a multiplication a percentaged change p_{x_i} of an input variable x_i leads to a percentaged change p_y of the output parameter y which is independent of the system's initial configuration and behaviour, i.e. its Design Point.
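The Design Point independence stated in Eq. (5.5) can be checked numerically. The constants, exponents and Design Point values below are arbitrary illustration values.

```python
# Numeric check of Eq. (5.5) for y = c * x1^a1 * x2^a2: the percentaged output
# change depends only on the percentaged input changes, not on the Design
# Point. All constants and exponents below are arbitrary illustration values.

def p_y_direct(c, a1, a2, x1_dp, x2_dp, p_x1, p_x2):
    """p_y from direct recomputation of y at the changed inputs."""
    y_dp = c * x1_dp**a1 * x2_dp**a2
    y_ch = c * (x1_dp * (1 + p_x1))**a1 * (x2_dp * (1 + p_x2))**a2
    return y_ch / y_dp - 1.0

def p_y_pin(a1, a2, p_x1, p_x2):
    """p_y according to Eq. (5.5), without any Design Point values."""
    return (1 + p_x1)**a1 * (1 + p_x2)**a2 - 1.0

# Same input changes at two different Design Points -> same output change:
r1 = p_y_direct(2.0, 2, 1, x1_dp=3.0, x2_dp=5.0, p_x1=0.1, p_x2=-0.05)
r2 = p_y_direct(7.0, 2, 1, x1_dp=40.0, x2_dp=0.5, p_x1=0.1, p_x2=-0.05)
assert abs(r1 - p_y_pin(2, 1, 0.1, -0.05)) < 1e-12
assert abs(r1 - r2) < 1e-12
```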

5.2.2.2. Parameter Influence Net Method Algorithm for Summations

In a pure summation described by:

$$ y(x_1, \dots, x_q) = \sum_{i=1}^{q} c_i \cdot x_i, \tag{5.6} $$

with c_i being constants, it is:

$$ p_y(p_{x_1}, \dots, p_{x_q}) = y_{DP}^{-1} \cdot \sum_{i=1}^{q} p_{x_i} \cdot c_i \cdot x_{i,DP}. \tag{5.7} $$

In contrast to multiplications, in summations percentaged modifications p_{x_i} of input parameters x_i lead to percentaged changes p_y in the output parameter y which do depend on the system's reference point DP, as Eq. (5.7) shows.
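The DP dependence of Eq. (5.7) can be illustrated with a short sketch (editor's example, helper name hypothetical); the input changes are weighted by the Design Point terms c_i · x_{i,DP}:

```python
# Editor's sketch of Eq. (5.7); the helper name is hypothetical.
def p_y_summation(p_x, c, x_dp):
    """Percentaged output change of y = sum(c_i * x_i), DP dependent."""
    y_dp = sum(ci * xi for ci, xi in zip(c, x_dp))
    return sum(pi * ci * xi for pi, ci, xi in zip(p_x, c, x_dp)) / y_dp

c, x_dp = [2.0, 3.0], [10.0, 5.0]          # y_DP = 20 + 15 = 35
p_x = [0.10, 0.0]                          # +10 % on x_1 only

# Exact recomputation for the changed x_1 = 11.0:
p_exact = (2.0 * 11.0 + 3.0 * 5.0) / 35.0 - 1.0
assert abs(p_y_summation(p_x, c, x_dp) - p_exact) < 1e-12
```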

5.2.2.3. Parameter Influence Net Method Algorithm for Exponential Calculations

For the exponential case described by:

$$ y(x_i) = b^{x_i}, \tag{5.8} $$

with b being a constant, it is:

$$ p_y(p_{x_i}) = b^{\,p_{x_i} \cdot x_{i,DP}} - 1. \tag{5.9} $$

Similar to summations, for exponential calculations percentaged modifications p_{x_i} of input parameters x_i lead to percentaged changes p_y in the output parameter y which depend on the system's reference point DP, see Eq. (5.9).
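A minimal sketch of Eq. (5.9), assuming a single exponential term y = b^x (editor's example, helper name hypothetical):

```python
# Editor's sketch of Eq. (5.9) for y = b**x; helper name hypothetical.
def p_y_exponential(p_x, b, x_dp):
    return b ** (p_x * x_dp) - 1.0

b, x_dp, p_x = 2.0, 3.0, 0.10
y_dp = b ** x_dp                       # 8.0 at the Design Point
y_new = b ** (x_dp * (1.0 + p_x))      # after a +10 % input change
# The rule reproduces the exact ratio and depends on x_dp,
# unlike the multiplication rule of Eq. (5.5):
assert abs(p_y_exponential(p_x, b, x_dp) - (y_new / y_dp - 1.0)) < 1e-12
```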


5.2.2.4. Parameter Influence Net Method Algorithm for Trigonometric Calculations

For trigonometric functions y(x) = f(x), e.g. sin(x), arcsin(x), the percentaged change p_{f(x)} is described by:

$$ p_{f(x)} = \frac{f(x_{DP} + \Delta x) - f(x_{DP})}{f(x_{DP})} = \frac{f(x_{DP} + \Delta x)}{f(x_{DP})} - 1 = \frac{f(x_{DP}(1 + p_x))}{f(x_{DP})} - 1. \tag{5.10} $$

This exact calculation of p_{f(x)} implies a coupling p_{f(x)} = g(f(p_x)) that impedes the explicit evaluation of the influence of p_x on p_{f(x)}. To achieve a decoupling that is essential for the lucidity of the PIN, i.e. to see directly the parameter interdependencies and their strength, the linear approximation of f(x) at the point x_0 = x_{DP} according to Taylor's Theorem [145, p. 185] is used, that is:

$$ f(x) \approx f(x_{DP}) + f'(x_{DP}) \cdot (x - x_{DP}), \qquad f(x_{DP} + \Delta x) \approx f(x_{DP}) + f'(x_{DP}) \cdot \Delta x. \tag{5.11} $$

With Eq. (5.11) and p_x = \Delta x / x_{DP}, Eq. (5.10) evolves to:

$$ p_{f(x)} = \frac{f(x_{DP} + \Delta x) - f(x_{DP})}{f(x_{DP})} \approx \frac{f'(x_{DP})}{f(x_{DP})} \cdot p_x \cdot x_{DP}. \tag{5.12} $$

This approximation of p_{f(x)} is favoured over the exact calculation described in Eq. (5.10) as it provides the desired decoupling of p_x from p_{f(x)}. However, the Taylor approximation has a limited validity range. For large \Delta x, the error between the exact calculation of p_{f(x)} and the Taylor approximation tends to be too large to balance the benefits of the approximation. A valuable range of \Delta x depends on the chosen design point x_{DP} and the type of the underlying trigonometric function and needs to be checked for physical plausibility on a case-by-case basis. Assuming that for the early design stages, for example, a deviation of 10 % from the exact value of p_{f(x)} might still be acceptable, Table F.1 exemplarily summarizes for the sine, cosine, arc sine and arc cosine functions the ranges of x_{DP} where the relative deviation is less than 10 % for −0.05 ≤ p_x ≤ 0.05. The interested reader is referred to Annex F.1 for further details on the development of the PINM algorithm for trigonometric calculations. An algebraic comparison of the exact calculation of p_{f(x)} against its Taylor approximation for common trigonometric functions, including the corresponding domains of definition, is given in Table F.2. A graphical analysis of the relative deviation between the Taylor approximation and the exact calculation of p_{f(x)} for several constant percentaged changes p_x is pictured in Figs. F.2 to F.9. Table F.3 summarizes the evolution of the relative deviations towards their point of discontinuity.
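For the sine function, the exact calculation of p_{f(x)} from Eq. (5.10) and its Taylor approximation from Eq. (5.12) can be compared numerically. The sketch below is the editor's; the design point of 30° and p_x = 0.05 are illustrative choices, not values from the thesis:

```python
import math

# Editor's comparison of the exact p_f(x) from Eq. (5.10) with the Taylor
# approximation of Eq. (5.12) for f = sin; the design point and p_x are
# illustrative values only.
def p_sin_exact(x_dp, p_x):
    return math.sin(x_dp * (1.0 + p_x)) / math.sin(x_dp) - 1.0

def p_sin_taylor(x_dp, p_x):
    # p_f ≈ f'(x_DP) / f(x_DP) * p_x * x_DP, with f' = cos for f = sin
    return math.cos(x_dp) / math.sin(x_dp) * p_x * x_dp

x_dp, p_x = math.radians(30.0), 0.05
exact = p_sin_exact(x_dp, p_x)
approx = p_sin_taylor(x_dp, p_x)
rel_dev = abs(approx - exact) / abs(exact)
assert rel_dev < 0.10   # well inside the 10 % band discussed in the text
```

For larger p_x or design points near the zeros of sin(x), the relative deviation grows quickly, which is the validity-range limitation addressed above.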

5.2.2.5. Parameter Influence Net Method Cascade Algorithm for Nested Calculations

In case different elementary operations are combined in one physical equation to describe a system element, the multilevel algebraic expression has to be decomposed into several equations, each using only one elementary operation type, to allow the application of the PINM algorithms presented in Sections 5.2.2.1 to 5.2.2.4. For example, the function:

$$ y(x_1, \dots, x_q) = t(u_1(x_{1,1}, \dots, x_{1,r}), \dots, u_n(x_{1,1}, \dots, x_{1,s}), x_1, \dots, x_q), \tag{5.13} $$

is composed of an outer level described by the function t and an inner level expressed by the functions u_1, \dots, u_n. Assuming that each function employs one elementary operation type, it is:

$$ p_y(p_{x_1}, \dots, p_{x_q}) = v(w_1(p_{x_{1,1}}, \dots, p_{x_{1,r}}), \dots, w_n(p_{x_{1,1}}, \dots, p_{x_{1,s}}), p_{x_1}, \dots, p_{x_q}). \tag{5.14} $$

For nested functions, the percentaged change is passed from the inner to the outer levels. In case the functions u_1, \dots, u_n call functions themselves, the cascade rule has to be repeated on the lower levels.

5.2.2.6. Application Example of Parameter Influence Net Method Algorithms

To demonstrate the application of the PINM algorithms, Eq. (5.3):

$$ y_2 = h(x_{1,1}, x_{1,2}, x_2) = x_{1,1} \cdot x_{1,2} + x_2 $$

shall be used in the following as example. First, it has to be decomposed according to the cascade rule, see Section 5.2.2.5, into a summation as recalled from Eq. (5.1):

$$ y_2 = g(x_1, x_2) = x_1 + x_2 $$

on the outer level, and a multiplication:

$$ y_1 = f(x_{1,1}, x_{1,2}) = x_{1,1} \cdot x_{1,2} $$

on the inner level, as described in Eq. (5.2). For Eq. (5.1), the PINM algorithm for summations described in Section 5.2.2.2 is applied, yielding:

$$ p_{y_2} = y_{2,DP}^{-1} \cdot (p_{x_1} \cdot x_{1,DP} + p_{x_2} \cdot x_{2,DP}), \tag{5.15} $$

and for Eq. (5.2) the PINM algorithm for multiplications cited in Section 5.2.2.1 is used, resulting in:

$$ p_{y_1} = (1 + p_{x_{1,1}}) \cdot (1 + p_{x_{1,2}}) - 1. \tag{5.16} $$

With p_{y_1} = p_{x_1}, the percentaged changes p_{x_{1,1}} and p_{x_{1,2}} are passed from the multiplication on the inner level to the summation on the outer level of y_2, resulting in:

$$ p_{y_2} = y_{2,DP}^{-1} \cdot \left( \left( (1 + p_{x_{1,1}}) \cdot (1 + p_{x_{1,2}}) - 1 \right) \cdot x_{1,DP} + p_{x_2} \cdot x_{2,DP} \right). \tag{5.17} $$
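The cascaded result of Eq. (5.17) can be cross-checked against a direct re-evaluation of Eq. (5.3). The following sketch is the editor's; the Design Point values are arbitrary illustrative choices:

```python
# Editor's numerical cross-check of Eq. (5.17) for y2 = x11 * x12 + x2;
# the Design Point values are arbitrary illustrative choices.
def p_y2(p_x11, p_x12, p_x2, x1_dp, x2_dp, y2_dp):
    inner = (1.0 + p_x11) * (1.0 + p_x12) - 1.0    # Eq. (5.16), inner level
    return (inner * x1_dp + p_x2 * x2_dp) / y2_dp  # Eq. (5.15), outer level

x11, x12, x2 = 2.0, 5.0, 3.0          # Design Point values
x1_dp = x11 * x12                     # inner product x1 at the DP: 10.0
y2_dp = x1_dp + x2                    # output at the DP: 13.0

# Percentaged input changes of +10 %, -20 % and +5 %:
p = (0.10, -0.20, 0.05)
y2_new = (x11 * 1.10) * (x12 * 0.80) + x2 * 1.05
assert abs(p_y2(*p, x1_dp, x2, y2_dp) - (y2_new / y2_dp - 1.0)) < 1e-12
```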

5.2.3. Visualizing System Structure and System Element Influences - the Parameter Influence Net

Complex problems are generally understood more easily and more quickly with graphical than with textual support. So a graphical implementation of the PINM was decided early on to be part of the novel method. Mathematical formulas are easily translated into a net with arrows, indicating the information flow, connecting input and output, see Fig. 5.1. On the basis of the system models and the PINM algorithm, the PIN plots the percentaged change p_y of the output parameter y as a function of the percentaged changes p_{x_1}, \dots, p_{x_q} of the input parameters x_1, \dots, x_q. Nested elementary operations in one formula are broken down according to the cascade rule to sequential elementary operations, see for example Fig. 5.3 as implementation of Eq. (5.3). The arrows indicate the flow of influence between the parameters, assuming a one-way flow, and symbolize the relation between two parameters. The arrows are accompanied by connectivity factors representing the degree of influence of the input on the output parameter of the relation. Multiplications are represented by single numbers as connectivity factors, where the number represents the exponent used in the multiplication. Mathematical expressions combined with a plus/minus sign or an angle sign represent summations and trigonometric relations, respectively, see also Fig. 5.6. The value of the connectivity factors is then obtained by substituting the values for the called variables.

The connectivity factors are related to the percentaged change of the output parameter but are not equal to it in all cases. They equal it - under the condition that only the input parameter corresponding to the connectivity factor is changed - for summations, trigonometric calculations and in accumulated nets with direct links between system input and system output parameters. Direct links between the input and output parameters are established by the calculation of p_y as a function of p_{x_j} by means of the PINM algorithms cited in Section 5.2.2 (assuming that all other p_{x_i} but p_{x_j} are equal to zero) while following the information

flow within the net. The calculated connectivity factor p_y(p_{x_j}) yields the overall influence of a system input x_j on the system output y and allows a value-based comparison of the input parameters' influences. The higher this connectivity factor, the higher is the input parameter's influence on the output parameter. To give an example, for Eq. (5.3) the influence of x_{1,1} on y_2 is expressed by the evolution of Eq. (5.17) with p_{x_2} = 0 and p_{x_{1,2}} = 0:

$$ p_{y_2}(p_{x_{1,1}}) = y_{2,DP}^{-1} \cdot (p_{x_{1,1}} \cdot x_{1,DP}). \tag{5.18} $$

For p_{y_2}(p_{x_{1,2}}), Eq. (5.17) is evolved with p_{x_2} = 0 and p_{x_{1,1}} = 0, leading to:

$$ p_{y_2}(p_{x_{1,2}}) = y_{2,DP}^{-1} \cdot (p_{x_{1,2}} \cdot x_{1,DP}). \tag{5.19} $$

Similarly, p_{y_2}(p_{x_2}) is obtained from Eq. (5.17) with p_{x_{1,1}} = 0 and p_{x_{1,2}} = 0:

$$ p_{y_2}(p_{x_2}) = y_{2,DP}^{-1} \cdot (p_{x_2} \cdot x_{2,DP}). \tag{5.20} $$

Figure 5.4 pictures the corresponding accumulated net. The interested reader is also referred to the accumulated net of a modelled power subsystem by Nemetzade and Förstner [144, Fig. 5].
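The direct links of Eqs. (5.18) to (5.20) can be evaluated numerically to compare the input influences. The sketch below is the editor's, with illustrative Design Point values not taken from the thesis:

```python
# Editor's sketch of the accumulated net's direct links, Eqs. (5.18)-(5.20),
# for y2 = x11 * x12 + x2; the Design Point values are illustrative.
x1_dp, x2_dp = 10.0, 3.0
y2_dp = x1_dp + x2_dp                  # 13.0

def link(p, x_dp):
    """One direct link: p_y2 = y2_DP**-1 * (p * x_DP)."""
    return p * x_dp / y2_dp

# A +1 % change of x_{1,1} (or x_{1,2}) acts through x_{1,DP},
# a +1 % change of x_2 acts through x_{2,DP}:
influence_x11 = link(0.01, x1_dp)      # Eq. (5.18)
influence_x2 = link(0.01, x2_dp)       # Eq. (5.20)
assert influence_x11 > influence_x2    # x_{1,1} is the stronger driver here
```

Such a value-based comparison is what the connectivity factors of the accumulated net make directly visible.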

Figure 5.3.: Cascade Algorithm Implemented in Parameter Influence Net.


Figure 5.4.: Example for Accumulated Parameter Influence Net Based on Eq. (5.3) - Pictured Connectivity Factors Explicitly Quantify Influence of Input Parameters on Output Parameter for Given Design Point Values and in Dependence of Percentaged Change of the Corresponding Input Parameter.

In the end, the PIN shows the system structure, i.e. the system elements' relations through the parameter interdependencies, and displays the influence between the parameters through the connectivity factors. The system behaviour that is implied and expressed in the formulas is thus graphically plotted in the PIN. Section 5.3.3 presents and discusses the steps executed to determine an appropriate visualization tool for the PINM.

5.2.4. Implementation Process of Parameter Influence Net Method

In line with the considerations taken towards the establishment of the PINM, its implementation process is defined by the following three steps, cf. [144]:

Step 1: System Definition and Modelling The system of interest is modelled. For this purpose, the potential (design/performance) question to be supported by the PIN is to be identified. Depending on the aim of the net, determined by its user, the formulas modelling the system of interest are as detailed as necessary and as simple as possible.

Step 2: Quantification of Parameter Influence The system models are subjected to the algorithm described in Sections 5.2.2.1 to 5.2.2.5.

Step 3: Graphical Implementation of System Relations and Percentaged Changes in Visualization Tool The system elements, their relations and strengths are implemented and displayed in a tool to result in a PIN. The visualization can be of dynamic or static nature. Dynamic nets adapt to changing inputs and Design Point information while static nets provide the instantaneous system status. Presentation tools like MS PowerPoint [146] are suited for static nets, as for example implemented by Nemetzade and Förstner [144]. Tools with graphical output capability like MATLAB/Simulink, as used by Kleinig [147], and MS Excel, see for instance Fig. F.13 for the implementation of the Communication Subsystem according to Annex F.5, are suited for dynamic nets.


5.3. Application of Parameter Influence Net Method

To demonstrate the applicability of the PINM (proof of concept), it was applied exemplarily to the power and communication subsystems of the Hubble Space Telescope, an Earth-bound satellite. The respective models and derived percentaged parameter changes p_y are cited in Annex F.2 to Annex F.5. The following two sections focus on the structure of the realized PIN and its potential use cases. Section 5.3.3 discusses why MS Excel has been chosen as the visualization tool for the PINM.

5.3.1. Structure and Appearance of Realized Parameter Influence Net

Following the search and decision for a visualization tool for the PINM, see Section 5.3.3, the PIN has been exemplarily implemented in MS Excel, resulting in a file with several sheets. The first sheet represents the GUI, see Fig. 5.5. The remaining sheets comprise the PINs per considered subsystem or mission aspect, e.g. power subsystem, communication subsystem, see Fig. F.13. While the GUI is modifiable by the user, the sheets comprising the PINs are blocked for direct modifications. They can be controlled by the GUI only. The GUI contains the Design Point values of the parameters called in the PINs, the desired percentaged changes of the net input variables, and the calculated percentaged changes of the net output parameters to provide a full overview of the figures. Cells with calculated values, for example the design point value of the semi-major axis, are marked as such and are blocked for modifications.

On the remaining sheets, the PINs display the relations between the system elements according to the underlying system models, indicated by arrows and the related connectivity factors. Figure 5.6 provides a zoom into the implemented communication system PIN and labels the different elements of the graphical representation. Three different parameter types are differentiated. Blue boxes indicate input parameters, green ones are intermediate variables, while orange ones highlight output parameters. Each box indicates the parameter and the value of its percentaged change p_x within the system. This value is determined by the user for the input parameters (through the GUI) and calculated for the intermediate and output parameters. The connectivity factors next to the arrows for underlying trigonometric relations and summations are calculated based on the user input on the Design Point parameters. As soon as a Design Point value or a percentaged change is modified within the GUI, the calculated values in the nets are automatically updated.
The nets can be set up independently from each other or be connected according to the needs of their user. In the exemplarily implemented version within the frame of this work, the single nets are independent but for one exception. For the power subsystem, the user can choose whether the battery charging and discharging time shall be equivalent to the time spent in sunlight and eclipse, respectively, or whether the values are fed into the net independently. In case of the first option, the power sheet and the sheets representing the net for the eclipse and sunlit time are connected. A macro is introduced in the GUI to account for the choice of the user, see upper right side of Fig. 5.5. In response to the potentially changing needs of their users, the given nets and the GUI can be easily modified in the frame of the inherent Excel functionalities. By adding sheets, an existing PIN tool can be expanded by further nets representing more aspects of the system if necessary. As such, one of the significant characteristics of the created PINs is their flexibility and adaptability to the evolving and changing needs of their users, fully in line with the UCD approach inherent to the SSC.

Page 76 Parameter Influence Net Method

Figure 5.5.: Extract of Exemplary Parameter Influence Net Default GUI.


Figure 5.6.: Zoom of Communication Net - Differentiation of System Elements in Parameter Influence Net.


5.3.2. Use Cases of Parameter Influence Nets

The PINs can potentially be used in several ways with different purposes. The first evident use case is the direct visualization and assessment of parameter changes. Via the GUI and the PINs, parameter changes are directly visible and can be assessed and evaluated by the user, for example in support of design decisions. Second, the system structure is visualized and made (more) comprehensible with the PIN. In that context, the system-inherent connection between (sub)systems can be assessed and the importance of single parameters connecting crosswise (sub)systems revealed. For instance, the orbital altitude h_Orbit was revealed to be an input parameter for both the power and the communication subsystem, see Annex F.2 to Annex F.5, thus connecting both subsystems. This knowledge is valuable for guiding the team efforts efficiently and predicting the system's emergent behaviour. Thirdly, the influence of single parameters on the entire system and/or crosswise subsystems can be determined through the connectivity factors. For a reduced net where the intermediate parameters are accumulated and input and output parameters are directly connected, see for example Fig. 5.4, the connectivity factors display explicitly the influence of the single parameters. The parameters' influence can be directly assessed and compared, and system bottlenecks and drivers can be identified. In case the modelled system output, for instance, is a parameter representing the system performance whose value is sought to be as high as possible, system bottlenecks might be parameters with very low connectivity factors and system drivers might be parameters with very high connectivity factors. Combined with the knowledge of the absolute value of the parameters, which also affects the magnitude of the system output, the overall influence of the system drivers and bottlenecks is then quantifiable and comparable, cf. the calculation of accumulated nets in Section 5.2.3.
Even in cases where one input parameter affects one output parameter multiple times (this phenomenon originates from the underlying system models; see for example h_Orbit in the communication subsystem in Annex F.5 for circular orbits), its overall influence on the output parameter can be summarized by combining the equations describing the respective percentaged changes of the output and intermediate parameters resulting from a change in the input parameter. In cases where one input parameter affects multiple output parameters within the overall system (see for example h_Orbit for the power and communication subsystem modelled in Annexes F.5 and F.4), its overall influence is assessable, too. A qualitative indication of the overall influence of the parameter on the entire system can be obtained by the evaluation of the corresponding connectivity factors resulting from the reduction of the net. To achieve a quantitative assessment, one possibility is to define an overall system performance parameter combining the single output parameters and to expand the PIN correspondingly. With this proposed approach, one or several direct links between the input and the overall output parameter can be established, and the reduction of the net, including the accumulation of the connectivity factors, is feasible in the same way as described above for the case that one input parameter affects one output parameter multiple times.

5.3.3. Selection of Visualization Tool for Parameter Influence Net Method

Pursuing the fundamental idea of UCD, easy handling and management of the tool combined with a sound visualization result were key criteria while seeking an implementation method for the PIN. Starting point

of the search was the request for dynamic nets. They provide more flexibility to their users than static nets as they adapt to changing inputs and Design Point information. As such, the full potential of the PINM can be harvested. With regard to facile handling of the implementation solution, a tool was sought that would be known to the majority of the potential PINM users in order to facilitate a possible expansion and change of the net, providing a sound basis for the flexibility of the PINM. In terms of visualization, the implementation tool was required to enable the clear illustration of the parameter relations and their connectivity factors as well as of changes in the connectivity factors after input modifications.

The first choice for the implementation method for a dynamic net fell on Simulink from MathWorks [34] as it is a popular, widely known tool in the engineering domain for M&S and in the space sector in general, see Section 1.3.2. Its handling was assumed to be widely familiar to most potential PINM users (thus limiting the familiarization time). In addition, a specific degree of tool maturity was taken for granted, reducing the risk of errors. Furthermore, the direct interface to MATLAB was valued as a considerable advantage: mathematical expressions already implemented in MATLAB for other purposes could be easily transferred into Simulink and embedded in the parameter nets. The power model as presented by Nemetzade and Förstner [144], see also Annex F.4, as well as the communication model, see Annex F.5, were exemplarily implemented in Simulink by Kleinig [147]. As one result of the work by Kleinig [147], several downsides of the PINM implementation with Simulink were detected. Pursuing the philosophy of UCD, a GUI was established to enhance the handling of the PINs. However, the implementation of the GUI in MATLAB/Simulink was very time consuming. Furthermore, the presentation of the nets in Simulink was not as well arranged as desired. First, direct display of the connectivity factors was not possible in the Simulink model. But especially the illustration and hence the knowledge of the connectivity factors between the parameters constitute the key added value of the PINM. Without this knowledge, the net lost an important part of its utility and benefit for the user. Second, although the net could be arranged in subsystems, masking details for the sake of lucidity, the net remained overcharged, thus overburdening its user.

Because of the drawbacks encountered with Simulink following the work of Kleinig [147], it was decided to test the implementation of the parameter net with another tool. Hence, MS Excel 2016 was selected for this purpose. Being a very common tool in the space and engineering domain in general, access to the tool was supposed to be generally given and its handling was expected to be by far easier for the majority of the potential users than working with MATLAB/Simulink. The author of the present work already experienced the tool's usage for the creation of the parameter nets for the power and communication subsystems to be more facile than with Simulink. In terms of visualization, the display of the connectivity factors and the net output is ensured. Furthermore, the lucidity of the net is given. On the downside, the creation of graphics other than diagrams is less supported in MS Excel than in Simulink. The modification of an existing net, e.g. its extension, might be experienced as tedious, as the drag and drop of the graphical elements of the net (boxes, arrows, formulas) deranges the existing graphic, for instance. A freeze of the net, though, facilitates its handling.

All in all, the use of MS Excel was favoured over Simulink and employed to set up the power and communication nets based on the models presented in Annexes F.2 to F.5. In the future, other potential implementation methods are recommended to be considered and compared to MS Excel. For the purpose of the present

work, i.e. to show the feasibility of the realization of the PINM, the exemplary implementation of the nets in MS Excel was considered to be adequate.

5.4. Added Value and Limits

The following sections discuss the advantages and limits of the developed PINM and the implemented PINs pictured in Annex F.2 to F.5. It has to be noted that the PINM and its demonstrative application within the frame of this study were evaluated solely by the author of the present work and might therefore be biased, reflecting her views on the topic. Future work, e.g. usability tests within projects, might reveal needs for models or visualization tools other than the ones used within the frame of this work.

5.4.1. Putting Users in Focus - UCD as Fundamental Idea behind the Parameter Influence Net Method

In line with the philosophy of the SSC that the PINM is based on, UCD is the central driver of the PINM and its application as well as its development, see Fig. 5.7. As such, it is designed to address and meet its users' needs to the greatest possible extent.

The user and his needs have been in focus from the beginning of the reflections on the PINM. Taking the user and his context into account (a systems engineer working in an early design phase with low time and budget resources), the method and the resulting tool are lean and easy to handle while assuring the technical benefits inherent to the SSC. The method makes use of basic algebra that is simple to handle. Based on their mathematical simplicity, the formulas are easily manipulated, understood and implemented in tools like MATLAB or MS Excel. This is in explicit contrast to methods like sensitivity analysis, which uses sophisticated statistics and was evaluated by Zaumseil [148] to be not fully suited to the need identified within this study. Also, the models employed to represent the system of interest are kept as simple as possible and as detailed as necessary. To allow for quick familiarization and to avoid potential inhibitions due to required licences, for example, the PINM employs MS Excel for its implementation.

5.4.2. Closing a Gap in M&S Landscape - Utility of Parameter Influence Net Method

The major asset of the PINM is the time-effective assessment and direct display of the system structure and system element interdependencies, and the quantification of the influence between system element parameters via the connectivity factors. As such, the PINM closes a gap in the M&S landscape in support of the systems engineer.

With the provided knowledge, the impact of parameter changes on the entire system can be directly identified and evaluated without the need for less effective trial-and-error methods, sole intuition or costly simulations. When using the PINM as support in the design phase, for instance, design decisions in terms of parameter value modifications can be evaluated a priori and hence directed in a way that the intended system behaviour is obtained. The system emergence is revealed on time, i.e. early in the design process, to

be controlled and then exploited to avoid system failures and benefit from synergies. Consequences of several simultaneous parameter changes can be easily evaluated with the PINs, the system robustness with regard to changes can be assessed, and thus the overall system design validated.

The PINM provides the user with an understanding of the system, ideally leading to resource savings and better performance and robustness of the system. Understanding and predicting the system behaviour through knowledge of the underlying system structure allows leading and controlling the system's behaviour in a desired direction - within the natural limitations that the modelling of the system imposes, i.e. the simplification of the real system and its behaviour - while the system's robustness, balance and performance are ensured and improved. Hereby, the PINM supports the management of the system emergence in order to obtain a desired emergent behaviour while improving the system efficiently. The SE-related activities are supported to be executed more efficiently, and cost and schedule requirements are pushed to be met more easily.

Figure 5.7.: Parameter Influence Net Method Development Process in line with UCD Approach.

In other words, the PINM supports the technical leadership (Art of SE, cf. [92]) activities of the systems engineer, i.e. interface management and the multidisciplinary lead within the project. It supports and enforces the desirable characteristics of the systems engineer, nourishing his ability to make system-wide connections by understanding the connections among system elements, as a prerequisite to predict the emergent system behaviour and detect criticalities, to anticipate the effect of changes on the system behaviour and to set up and guide the multidisciplinary teams, cf. [93].


In more detail, tools created according to the PINM have, compared to other tools, the advantages cited in the following section, speaking in terms of the TSQM (see Chapter 4).

5.4.3. Meeting Users’ Needs - TSQM Evaluation of Parameter Influence Nets

Flexibility - Given the changing character of the system, the corresponding design and development challenges especially during the early phases, and the potentially evolving requirements and constraints, the flexibility of the PIN to be adapted to its users' needs is a key requirement for its success and acceptability by its users.

Reusability - If applicable, PINs can be reused for other projects. Due to the generalizability of missions, see Section 3.4, it is expected that the majority of the higher-level system models can be reused in that way, making use of synergies. Some adaptation of the design points will very likely be necessary to tailor the reused net to the new mission, but this effort is expected to be negligible in comparison to setting up a new PIN or simulator from scratch.

Familiarization and Manageability - It can be argued that setting up the PIN requires an amount of manual work that is too high for the outcome. While it is true that the current set-up can be improved (currently only the basic functionalities of Excel are used, typing in the formulas cell by cell, connecting everything "manually", etc.) and it might be experienced as tedious to set up and implement the models in MS Excel, its handling is judged to be rather intuitive and easy.

Transparency - The implementation depth of the tool is rather shallow, thus the provided transparency is high. The models implemented in the PINs are defined by the users themselves. Their implementation (if desired) can also be performed by the users. MS Excel, as the implementation tool of choice, and its employed functionalities are broadly known to the potential users. If accompanied by a decent documentation of the implemented models, the PINs are highly transparent to their users.

Emotional Reliability - With the given high transparency of the PINs through the known, self-defined models and the usage of MS Excel, which does not require sophisticated programming skills and is trusted to be a mature and nearly error-free implementation tool, the PINs are evaluated to be of high emotional reliability for their users.

Functional Reliability - The PIN models are defined by the users according to their requirements, so the degree of functional completeness of the tool is supposed to be high. Furthermore, the tool’s set-up in MS Excel with the inherent flexibility allows for high accessibility, modularity and in the end modifiability of the tool. Concluding, the functional reliability is considered to be high.

5.4.4. Limits of Parameter Influence Net Method

Limited Temporal/Dynamical Representation

The PIN is a static representation of the performance of the system in question. It does not provide a running, temporal perspective like the SPS, which simulates the system performance along the spacecraft's orbit.


The sole temporal component in the PIN is the number of considered orbits n. As such, the net enables a comparison of the system behaviour at the beginning and after n orbits.

Consequently, the application of the PINM to system models that imply temporal parameter variations is limited. For instance, the analysis of the evolution of the disturbance torques acting on an Earth-bound satellite has to be tailored to the capabilities of the PINM. As a first simplification, Lipowski [149] suggests applying the PINM to the maximally experienceable torque, isolated for each possible disturbance torque, and then summing up the absolute values to be compensated by the spacecraft's actuators in a worst-case approach. This modelling approach, however, neglects the facts that (a) the maximal torque values are achieved at different orbit positions, (b) in operation the disturbance torques originating from different sources might compensate each other, (c) the acting torque varies along the orbit, either because of its periodical character or because of the spacecraft's orientation (which, in addition, might be changing, too, for example in case of slews), and (d) the spacecraft comes with an actuator system that is also time-variant. So the proposed model simplification is judged to lead to a system representation that can be investigated with the PINM but does not represent the system and its boundaries/constraints adequately. It misses the point of the issue.

Further temporal granularity can be added to the models by introducing a time step ∆t as a fraction of the orbital period. The more temporally dependent parameters are part of the models, however, the larger the potential changes in the models and thus in the net. This is, for instance, the case for the cross-section of the spacecraft surface that determines the disturbance torque induced by atmospheric drag. In case of an inertial spacecraft attitude, the cross-section varies continuously along the orbit. Adding slew manoeuvres to the models increases the degree of variation of the surface cross-section even further.
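
A hypothetical sketch of this time-step refinement: a flat panel on an inertially fixed spacecraft, whose cross-section projected onto the rotating velocity direction varies as |cos θ| along a circular orbit. Panel area and step count are assumed values for illustration only.

```python
import math

A_REF = 4.0      # m^2, assumed panel area
N_STEPS = 36     # time steps per orbit, i.e. dt = T_orbit / N_STEPS

def cross_section(theta):
    """Projected cross-section for an inertial attitude (placeholder model)."""
    return A_REF * abs(math.cos(theta))

samples = [cross_section(2 * math.pi * k / N_STEPS) for k in range(N_STEPS)]

print(f"min cross-section : {min(samples):.2f} m^2")
print(f"max cross-section : {max(samples):.2f} m^2")
print(f"mean cross-section: {sum(samples) / N_STEPS:.2f} m^2")
# A single static value (e.g. A_REF) misses this variation entirely;
# the mean of |cos| over a full revolution is 2/pi, about 64 % of the peak.
```

The spread between minimum and maximum shows why a static PIN value for the cross-section can only be a coarse approximation of the drag torque input.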

In the end, stretching the limits of the PINM might overrule its benefits. A highly detailed, yet complex, model tends to contradict the general idea of the PINM to remain manageable, balancing effort and achievable result in the sense of the UCD. Hence, the benefits of the PINM (simple handling, quick insight into the system structure) are recommended to be kept at the forefront while defining the system models. When in doubt, other M&S methods have to be combined with the PINM to achieve the desired result with moderate effort. Therefore, with rising complexity of the models, the PINM is recommended to be used in combination with simulators like the SPS.

Limited Suitability for Equations Beyond Elementary Operations

The PINM is suited for models that describe linear, power or trigonometric relations, see the calculation rules for px in Section 5.2.2. If the system models cannot be broken down to the elementary operations cited, the PINM is at its limits. This limitation was already encountered with regard to the AOCS: common models for the angular momentum stored in reaction wheels imply cross-couplings between parameters, see Eq. (B.13), necessitating the solution of a system of differential equations. The PINM, however, is currently not designed for this purpose. A possible way to avoid the cross-couplings is to assess the angular momentum to be compensated per spacecraft axis, as suggested by Lipowski [149]. However, this perspective disregards the actual actuator system, its technical details and its accommodation within the spacecraft, so no statement about the performance of the reaction wheel system is possible.
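
The difference between elementary operations and cross-coupled relations can be sketched as follows, under the assumption that p denotes the relative change (x − x0)/x0 of a parameter around its design point x0. The function names are illustrative; the exact calculation rules are those of Section 5.2.2.

```python
# Percentaged-change propagation through elementary operations (sketch).

def p_linear(p_x):
    """y = c * x: a linear relation passes the relative change through."""
    return p_x

def p_power(p_x, a):
    """y = c * x**a: p_y = (1 + p_x)**a - 1."""
    return (1.0 + p_x) ** a - 1.0

def p_product(p_x1, p_x2):
    """y = c * x1 * x2: relative changes combine multiplicatively."""
    return (1.0 + p_x1) * (1.0 + p_x2) - 1.0

# Example: +10 % on x in y = c * x**3 gives +33.1 % on y.
print(f"{p_power(0.10, 3):.3f}")
# A cross-coupled differential relation such as the reaction wheel
# momentum of Eq. (B.13) cannot be reduced to closed rules of this kind.
```

Each rule maps an input change to an output change in closed form; a differential equation system offers no such closed mapping, which is exactly where the PINM reaches its limit.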


This example illustrates the importance of the selected system boundaries and of the models the PINM is applied to. Both determine the possible range of the PINM results. It is therefore critical to know right from the beginning of the system modelling process (PINM Step 1, see Section 5.2.4) which parameter(s) shall be investigated and modelled as output of the PINM.

Limited Suitability for Reciprocal Relations

Reciprocal equations create an infinite loop that cannot be represented in the static frame of the PIN (see Limited Temporal/Dynamical Representation). Strictly speaking, all system models implying a temporal dependence are affected by this limitation of the method as soon as the state of a parameter at the time instance tn−1 influences the state of the parameter at tn. This limitation can be mitigated by introducing the number of considered orbits into the system models, see the models for the stored battery energy and the free data storage presented in Annexes F.4 and F.5.
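
The mitigation can be sketched with a hypothetical per-orbit recursion for the stored battery energy: the temporal formulation E_n = E_{n−1} + dE is replaced by the closed form E_n = E_0 + n·dE, so the orbit count n becomes an ordinary input parameter of the net instead of a feedback loop. The numbers below are placeholders, not the Annex F.4 values.

```python
E_0 = 500.0    # Wh, assumed initial stored energy
D_E = -2.5     # Wh per orbit, assumed net energy balance (deficit)

def energy_recursive(n):
    """Temporal formulation: the state at orbit n depends on orbit n-1."""
    e = E_0
    for _ in range(n):
        e = e + D_E
    return e

def energy_closed_form(n):
    """Static formulation usable in the PIN: n is just another parameter."""
    return E_0 + n * D_E

# Both formulations agree for a constant per-orbit balance.
print(energy_recursive(40), energy_closed_form(40))
```

The closed form only exists because the assumed per-orbit balance is constant; for state-dependent balances the recursion cannot be unrolled and the limitation stands.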

Limited Number of System Parameters

The PIN benefits from lucidity and ease of use; therefore, only a limited number of parameters is implemented within the PIN. Hence, the user has to be aware that a large system like a satellite is not supposed to be modelled down to its last details with the PINM. A balance between the number of parameters and the purpose of the analysis has to be kept in mind during modelling. In that context, the definition of system boundaries is important: the PINM allows modelling either an extract of a large system in great detail - given adequate elementary system models - or a rather large system like a satellite across several disciplines, however with shallow modelling depth. Handling a larger number of parameters might be improved by the use of macros within MS Excel, at the potential expense of the tool's transparency.

Restricted Validity of Results

The PINM makes use of a reference point, i.e. the Design Point, of the modelled system. As such, the yielded results are not generally valid but only for the employed design point. This limits the flexibility in the use of the correlated PINs: if a system is set to a design baseline deviating from the original one, the PINs have to be adjusted.

Limited Verification and Validation

The PINM largely benefits from the fact that few steps are necessary for the users themselves to apply the method to the system of choice, so that the tool is quickly ready to use and adapted to changing user needs. This high flexibility and short response time come along with limited verification and validation activities within the approach. The user is responsible for the correctness of the implemented models and their suitability to his needs (which holds as long as the PINM is applied by the user directly; in case a third party is involved in the creation of the PINs, the user remains responsible for the correct and adequate modelling but not necessarily for the implementation of the models into a tool). Consequently, it can be argued that the risk of implementation and modelling errors is high. At the same time, using a mature implementation tool like MS Excel reduces the risk of infrastructure deficiencies. Validation and verification generally improve when the net is used at least on a mid-term perspective and by several users: the tool is then proven in use.


5.4.5. Limits of Implemented Parameter Influence Nets

Limits of Models Used

Although it is noted that the derived algebraic expressions for the percentaged changes py are valid for the ordinary parameter value range (specific attention has to be paid to trigonometric functions and, in general, to the denominator in fractions), the user has to be aware that the implemented PINs do not verify whether the net inputs are valid. To give an example, the models used for the modelling of the communication system, in particular for the calculation of tcontact, are only valid for orbits with low eccentricity and orbital altitude, see Annex F.5. In the power model in Eq. (F.66), the power generated by the solar array PSA should be higher than the power consumption in sunlight Puser,S. It is thus the responsibility of the user to choose parameter values that fit the models. Also, the user has to choose values that are plausible: an orbital altitude of 0 km might be a mathematically valid input for the models used, but it does not represent a real use case.

The output of the models also has to undergo a plausibility check. For instance, the currently implemented battery model, see Eq. (F.66), allows for negative stored battery energy EBat,n, though in reality it can only reach 0 Wh. Similarly, in the communication modelling, see Eq. (F.75), the free data storage capacity can take values below zero bit, which does not represent reality.
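
A minimal sketch of the input and output plausibility checks described above; the function names, parameter names and thresholds are illustrative assumptions, not the implemented PIN interface.

```python
def check_inputs(altitude_km: float, p_sa: float, p_user_s: float) -> None:
    """Reject mathematically valid but physically implausible inputs."""
    if altitude_km <= 0.0:
        raise ValueError("orbital altitude must be positive")
    if p_sa <= p_user_s:
        raise ValueError("solar array power must exceed sunlight consumption")

def clamp_battery_energy(e_bat_wh: float) -> float:
    """Stored battery energy cannot become negative in reality."""
    return max(e_bat_wh, 0.0)

def clamp_free_storage(free_bits: float) -> float:
    """Free data storage capacity cannot fall below zero bit."""
    return max(free_bits, 0.0)

check_inputs(altitude_km=550.0, p_sa=1200.0, p_user_s=800.0)  # passes
print(clamp_battery_energy(-37.5))
```

Such guard functions could be added to the MS Excel implementation (e.g. via data validation or conditional formulas) without changing the models themselves.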

Limits in Executed Verification and Validation

The models and their implementation in MS Excel were verified and validated solely by the author of the present work. The models implemented in the current PINs are based on generic spacecraft relations, assumed to be largely sufficient for early system design phases, as similar relations are also used for the SPS, cf. [48], the LOFT Simulator, cf. [116], and the GMSS, cf. [117]. However, their final validity will only be proven when put into real practice in a project. The implementation of the models in MS Excel was verified with plausibility checks but might still be erroneous. Usability tests with real users bear the potential to overcome this limit in the future.

Limits of Spacecraft Representation

The user has to be aware that, for demonstration reasons, only few disciplines are currently implemented in the PIN. It does not provide a holistic view of a generic Earth-bound spacecraft. Also, only missions in Earth-bound orbits can currently be analysed with the implemented nets; missions in Lagrange orbits or to other planets and moons in the Solar System are not modelled yet. In the end, it is the responsibility of the users to define system boundaries and models suiting their needs.

5.5. Future Work

The set-up of the PINs and their usage to plot and assess the system structure is the first step in the investigation of the parameter interdependencies. Future work is recommended to encompass the detailed investigation of the parameter relations. For instance, when summarizing the intermediate parameter relations, a direct interdependency between the in- and output of the PINs is established and the influence of the input parameters on the output can be quantified and compared, as for example done by Nemetzade and Förstner [144] for the modelled power subsystem. With this knowledge, the drivers and bottlenecks in the system can be identified.

Although the system models and algorithm used within the PINM are expected to provide steady results (the expected value ranges of all parameters have generally been checked not to cause any discontinuities, specifically when appearing in a denominator; for the trigonometric functions the algorithm has been evaluated and the definition ranges identified, see Annex F.1), it is advised to check the continuity of the PIN results in further detail to reveal any potential algebraic limits (e.g. singularities, evolutions towards infinity). As a graphically supported approach for this purpose, it is recommended to plot the evolution of the percentaged changes of the intermediate and output system parameters as a function of px, for example with the support of a tool like MS Excel.
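
The recommended continuity check can be sketched as a numeric sweep instead of a plot. As an illustrative example relation (an assumption, not a PIN model) take y = c/x, for which the percentaged change is p_y = 1/(1 + p_x) − 1, singular at p_x = −1 where the parameter is driven to zero.

```python
def p_reciprocal(p_x: float) -> float:
    """Percentaged change for the example relation y = c / x."""
    return 1.0 / (1.0 + p_x) - 1.0

LIMIT = 50.0  # assumed threshold above which a result is flagged

suspicious = []
for k in range(-99, 100):          # sweep p_x from -0.99 to +0.99
    p_x = k / 100.0
    p_y = p_reciprocal(p_x)
    if abs(p_y) > LIMIT:
        suspicious.append(p_x)

print(suspicious)  # p_x values approaching the singularity at -1
```

The flagged values cluster at the edge of the sweep range, pointing at the algebraic limit; in MS Excel the same effect would show up as a steep branch in the recommended p_y-over-p_x plot.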

To validate the PINs and the PINM in general, usability tests or a demonstration project are recommended for the future. Future work should encompass the extension and adaptation of the implemented Earth Observation PIN based on the needs of its real users, with further system aspects, e.g. extension of the communication subsystem with consideration of link losses, and potential refinement of the existing models, i.e. identification of different or further performance parameters. In that context, it might be desirable to extend the application of the PINM to space missions beyond Earth Observation applications. The modular set-up of the PINs (e.g. subsystem per subsystem) would allow for a plug-in creation of the tools and a quick start of employment in projects.

The combined use of the PINs with further tools could help to overcome several limits of the PINM cited in Section 5.4.4, like the limited temporal representation or the restriction to elementary operations. A combination with MATLAB might be useful where simulation is required to solve and/or control complex interrelations, as for example for the consideration of the reaction wheel cross-couplings. A combined use of the PINs with simulators like the SPS potentially provides a complete picture of the system of interest, delivering a dynamic temporal representation of the system and allowing a larger number of parameters to be handled. Further work might even comprise the implementation of a new functionality in those simulators that allows a PIN to be obtained from the simulated models (instead of creating the PINs separately) at the push of a button. However, all extensions and combinations with other tools shall not weaken the benefits of the PINM, like its easy handling and its lucidity.

To provide the largest possible accessibility, for example in projects, the PINs could be made available on common shared drives. Users could then access and use the PINs and be allowed to modify, e.g. expand, them. Also, the realization of the PINs could be outsourced for projects where, for example, time and resources in general are limited. The simplicity of the PINM renders both approaches possible. At the same time, one has to be aware that the use of shared drives and outsourcing might bring along a limited control over the development process, which then needs to be guided to avoid any undesirable outcomes.

Since the SSC is a systems approach, it is applicable to all kinds of engineered systems and not limited to space-related applications. The same is consequently true for the PINM. So future work might encompass the application of the PINM within other engineering disciplines.

6. Synthesis and Discussion

6.1. Synthesis of Thesis Findings

The goal of the present work was to demonstrate the possibility to overcome the weaknesses of the current Modelling & Simulation practice and improve it for the benefit of an enhanced support of the systems engineering activities. This has been achieved by the definition of the System Simulator Concept, a set of guidelines for the development of successful systems engineering tools and simulators for use in early design phases, its practical application in two projects in their early design phases and the assessment and consideration of user needs that were distilled into the Tool and Simulator Quality Model. The creation of the General Mission and System Simulator and the novel method of the Parameter Influence Net complete the performed research.

The results of this work serve as proof of concept for the thesis statement formulated in Section 1.5. Section 3.1 presented the definition of the SSC based on the experience gained in ESA's BepiColombo project. The transfer of the SSC has been successful and resulted in the implementation and beneficial usage of the LOFT Simulator (see Section 3.2), the JUICE Simulator (see Section 3.3) and the GMSS (see Section 3.4). They demonstrate that the SSC is not mission-specific but transferable and supportive of the systems engineering activities within a project, as the use cases for the LOFT and JUICE missions have proven. The interpretation of the SSC into the PINM demonstrated the concept's potential for versatile implementation, while the PINM itself closes an additional gap in the M&S landscape by the creation of tools that provide insight into the system structure. The TSQM, as the essence of the actual user needs, is indispensable for the development of successful tools and simulators and creates the foundation of the SSC. The present work proves that the gaps in the M&S landscape, i.e. the limited usefulness, knowledge transfer and application on system level, can be closed with the SSC. The continuous usage and development of the JUICE Simulator in the mission's Phase B2/C/D, the application of the SSC to further missions and the validation of the PINM in practice will further confirm the benefits of the SSC. The following sections summarize and discuss the thesis findings in more detail.

6.2. Reasons for Successful Transfer of SSC to Further Missions

The SSC has been successfully transferred from BepiColombo to the LOFT and JUICE missions and has led to the creation of the mission-generic GMSS. The reasons for the successful transferability of the SSC are:

• its characteristics: the guidelines are generic and high-level enough to be transferable while providing sufficient details to be put into practice.


• the general applicability of UCD: with the UCD the SSC calls for a development approach that is generally applicable to products.

• the similarity of user needs across projects: the SSC guidelines describe the development and scope of a system simulator that correlate with the needs of the project team, as the guidelines have been directly derived from those needs. The inherent similarity of the challenges and tasks project teams have to cope with leads to needs that are similar across projects and can thus be covered and satisfied by one concept, i.e. the SSC. The successful SSC transfer to several missions, in turn, confirms the generalizability of missions.

6.3. Reasons for Success of SSC

The SSC, and with it the TSQM, are beneficial and successful because they incorporate strategies that support the systems engineer in being successful. The systems engineer, in turn, is successful as soon as the principal focus of SE, the success of the system, is achieved [8, p.28]. The SSC and the TSQM assist the systems engineer in performing the two complementary parts of SE, technical leadership and system management, in this very respect by meeting his needs and by being different from the hitherto existing M&S solutions.

6.3.1. SSC Meets User Needs

One of the main findings of the present work is that the tools and simulators designed hitherto miss out the actual user and his needs. The results of the TSQM demonstrate that user needs are user-dependent, i.e. the characteristics a frequent tool and simulator user desires cannot be equated with those of the occasional user. Far too often, however, frequent users seem to design the products for the occasional user and project their own needs into the development process, needs which do not fit the occasional user's. The problem triangle, as Bubb and Straeter [150, p.168] call it, i.e. the deviation of the actual system functionalities from the ones designed and the ones needed, and the deviation between the user's actual capabilities and needs and the ones the developer supposes, needs to be solved. This work demonstrated a way out of this dilemma. The SPS and, in the same context, the LOFT and JUICE Simulators have been successful because the user's needs are met in line with his capabilities. The TSQM and the SSC form the foundation for further successful tool and simulator development with focus on the user needs.

Responding to the systems engineer's needs, the simulators and the PINM complement each other. While the simulators allow the system behaviour to be passively assessed and observed, the PINs provide direct insight into the system structure, revealing how the system elements work together, so that the systems engineer can actively regulate the system behaviour and challenge the observations made with the simulator. Both system behaviour and system structure are unfolded, which is essential information for the systems engineer to work with.

In view of the early design phases, the SPS and the LOFT and JUICE Simulators have been so successful because they have been responding very precisely to the tasks that are typical for a feasibility study and the challenges encountered during an early design phase, respectively - be it the visualization of orbit and spacecraft configuration allowing the team to easily understand the overall mission concept and the implications for their subsystems, the verification of the system concept and performances, e.g. attitude strategies, thermal, power and downlink profiles, by comparing the results of scenarios with different parameter settings, or specific analyses involving several subsystems or requiring the implementation of new models, cf. [40, 130]. Also the context of use, e.g. the changing character of the tasks and the time constraints inherent to the early development stages that limit the amount of effort that can be dedicated to carrying out simulations or adapting the simulator, is taken into account by the software solutions. It follows that the simulators, and consequently the SSC, respond to the requirements of the users identified in interviews [40], observations and experience, incorporated in the TSQM, see Chapter 4, and to the requirements stated for future simulator developments [130].

Three more particularities of the SSC notably meeting the user needs shall be emphasized in the following:

• flexibility: products developed according to the SSC offer flexibility to their users. Given the changing character of the system and the corresponding design and development challenges along the mission, the potentially evolving requirements and constraints and the rather long life cycle of the spacecraft from scratch to actual operations, flexibility is a key requirement for the success of a tool or simulator within a project.

• importance and development of models: the SSC follows a model evolution approach that is compliant with the general system status along a project. The guidelines call for models as "living representation" of the spacecraft in operation. The models evolve as they are progressively refined to answer increasingly more detailed questions. The necessary information required to set up the products according to the SSC corresponds to the actual design status of the system. As such, the model philosophy of the SSC is similar to the one at the core of the MBSE approach, see Section 6.3.2.2. By its very nature, i.e. making use of models to support the systems engineering activities, the SSC ultimately supports a model-centric design.

• knowledge transfer: simulators and other products developed according to the SSC are useful and successful as they provide and conserve knowledge about the system that is used to trigger and guide communication in the team. As such, the SSC supports the management aspects of systems engineering and assists the systems engineer in this respect.

6.3.2. SSC is Different to Current M&S Landscape

What the SSC addresses on the one side with respect to user needs is disregarded on the other side by the hitherto existing M&S solutions. The SSC combined with the TSQM is successful because it differs from the current M&S implementations precisely in their shortcomings. The differences are detailed in the following.

6.3.2.1. Difference to Products on Market - Added Value of SSC

In contrast to the existing products on the market, the SSC approach puts the user and his needs at the centre of the design process. Correspondingly, the software product is set up in a staggered approach, not overburdening its user. Hereby a high degree of usefulness is achieved. In further difference to the products available, their inherent weaknesses as discussed in Section 1.3.3 are overcome by the SSC approach. Following the SSC, the simulator solution achieves usefulness to a large extent by the promoted user-centric development approach; knowledge transfer is obtained by the inherent flexibility of the products to be adaptable to changing needs, allowing the continuous use of the products over the system's life cycle; and, by its very nature, the SSC is designed to execute M&S on system level as support of the systems engineering activities.

6.3.2.2. Difference to Model-Based Systems Engineering - Why the Current MBSE Approach is not the Solution to Overcome the M&S Weaknesses

6.3.2.2.1. Evolution and Definition of Model-Based Systems Engineering

The notion of MBSE was first introduced by Wymore [151], who defined a system model mathematically. MBSE is defined as "the formalized application of modelling to support system requirements, design, analysis, verification, and validation activities beginning in the conceptual design phase and continuing throughout development and later life cycle phases" [152, p.15]. The general idea is to evolve from the traditional document-based SE process, i.e. all relevant data is captured in documents, to a model-centric process where the information along the mission is contained in a coherent system model, i.e. all system information is stored in a central repository. Thus, models are put into the centre of the system development process [153]. Documentation is secondary and generated from the system model if needed. MBSE is supposed to improve data analysis and management, leading to improved communication among stakeholders, enhanced system complexity management, increased system quality, lower costs and risks and higher productivity and knowledge transfer through the re-use of models in support of the SE activities. [81, 153]

According to the International Council on Systems Engineering (INCOSE), systems engineering needs to transform into a model-based discipline to cope with the rising scale and complexity of systems [154]. In that context, Model-Based Systems Engineering (MBSE) is assumed to be key to advancing the SE discipline [153]. So in 2007, INCOSE launched an initiative to "promote, advance, and institutionalize the practice of MBSE [...] through broad industry and academic involvement in research, standards, processes, practices, and methods, tools and technology, and outreach, training and education" [155]. As part of this MBSE initiative, the applicability of MBSE for designing CubeSats has been investigated since 2011 [156]. Recent activities focused on the development of a CubeSat Reference Model as a starting point for developing mission-specific CubeSat models [157], including the development and implementation of use cases into the model [158]. Earlier work concentrated on the creation of an executable MBSE model that integrated other discipline-specific engineering models and simulations developed in MATLAB and STK for the simulation of an exemplary CubeSat mission [159].

The MBSE approach is still maturing and growing in popularity [160, p.38]. MBSE methodologies and tools to support the systems engineering life cycle exist and are manifold. The interested reader may be referred to [81, 155, 161] for more details on the MBSE approach and the MBSE methodologies. The usability of the INCOSE MBSE tools, processes and languages is tackled in a dedicated MBSE usability working group, with the latest activities dating back to 2014 [162]. So far, the primary focus has been on concept engineering, modest-size applications, and system modelling languages and tools [153]. Methods for verification, validation, test and evaluation are still in the early stages. Today, the activities focus on the definition of specific SE questions that MBSE needs to answer, e.g. when is a model considered to be complete [153].

6.3.2.2.2. Model-Based Systems Engineering at ESA and DLR

ESA [6] evaluates MBSE to be an important technology to improve the system design and verification process, focusing on the use of (virtual) models to support the design, analysis and verification process. However, its full integration within the overall life cycle process is judged to be not yet mature and still focused on domains rather than on interdisciplinary aspects.

The Virtual Spacecraft Design (VSD) project was set up by ESA in the early 2000s around the idea of a central data model to represent all systems engineering data along the mission life cycle digitally. Thereby it was intended as the ultimate demonstration of the feasibility and benefits of Model-Based Systems Engineering (MBSE) for European space programs, which is understood by ESA as an improvement of the classical SE process [163, 164, 165, 166]. The VSD tool consists of an open framework that allows discipline- and phase-specific simulators to be integrated and connected in one software product. The embedded discipline-specific information is summarized and available to all disciplines. The VSD tool is intended to accompany the satellite along its life cycle such that the chain of simulators covering the system life cycle of a mission, from the System Concept Simulator up to the Operations Simulator, is combined in one tool, assuring the consistency of data over several phases and benefiting from cross-discipline synergies. Performing SM&S in the frame of VSD is understood to establish a harmonization in the simulator chain to allow their integration into the VSD framework and the re-use of models [6]. The VSD project was concluded in 2012 with the finalization of a framework consisting of a reference database for model management and data sharing [167], a design tool for the creation of system-level models [168] and a visualization tool [169]. It was ready for pilot application [163], including guidelines for its application [170]. With the VSD, the realization of MBSE within the space domain was approached in a structured way. Final validation in a running space project, however, is lacking at the time of this writing.

The Open Concurrent Design Tool (OCDT) by ESA is an early interpretation of the MBSE approach. It started in the early 2000s as an advancement of the ESA Integrated Design Model that was built on an experimental basis using spreadsheet technology both as data storage and as engineering tool. The OCDT has been proclaimed to enable efficient multidisciplinary concurrent engineering of space systems in the early life cycle phases, e.g. for the Concurrent Design Facility (CDF) of ESA. [171, 172, 173, 174]

OCDT is a client/server software package developed on ESA initiative [173]. It is based on an open-source database and is realized as an Excel add-in. It allows users to model their specific domains and perform simple analyses and simulations. OCDT is the product implementation of the standard semantic data model defined by ESA in the technical memorandum ECSS-E-TM-10-25 on engineering design data model exchange [175]. The goal of the model is the establishment of a common basis and consequently the simplification of model-based data exchange in the early phases of engineering and design. Currently, the third OCDT version is under development [176].

Recently, ESA has been fortifying its efforts in the MBSE domain with the Model bAsed Requirements Verification Lifecycle (MARVL) project. Its goal is the development of a Common Information Platform (CIP), to be released in 2018, that is supposed to streamline the exchange of engineering information throughout the whole mission life cycle from early design to operations. The CIP employs an MBSE approach for a unified representation of the system in a model that enables a high level of detail. [177, 178]

With Virtual Satellite (VirSat), DLR has been creating and using an MBSE tool since 2007. With its focus on early design phases, VirSat 3 has supported studies, for example for human spaceflight [179], in the Concurrent Engineering Facility in Bremen since 2011 and is available free of charge to interested parties. Main features include a 3D visualization of the system, the generation of power and mass budgets and the possibility to integrate spreadsheets into the tool, cf. its user manual [180]. Currently, research is being pursued to expand the functionality of the tool from Phase 0/A towards B, with the long-term objective to cover the complete system life cycle. [181]

6.3.2.2.3. Why the Current MBSE Approach is not Sufficient to Overcome the M&S Weaknesses

Ever since the domain of engineering was established, models have played an essential role in saving costs and reducing the risks of design and development. In the space domain, models have been indispensable for decades [153]. In support of the systems engineering activities and beyond, they provide information in a compact and manageable form, ease collaboration and communication across disciplines and are an essential precondition of visualization activities. So engineering is already pursued model-based, as the M&S domain and the involved tools and simulators demonstrate. Hence the fundamental idea of MBSE to have a model at the centre of the activities is acknowledged by the author.

The implementation of MBSE, however, is susceptible to missing the user. Although usability is a topic raised in conjunction with MBSE, cf. [162], the approach as realized is judged by the author to be too academic to be of direct use for a space project. The MBSE approach as of today is concentrated rather on data consistency and transfer and the synergies between the models and tools used than on the actual question of whether the available models and tools are sufficient for the projects to enable successful work. However, SE is not only about the life cycle but also about the system, its structure and emergence. The MBSE approach currently propagated addresses the life cycle but loses sight of the system modelling. It appears to be concentrated rather on the improved management of the SE activities than on their technical scope. One of the major identified drawbacks in the current M&S implementation, however, is the lacking suitability of current tools and simulators in support of the prediction of system emergence, see Section 5.1. The existing mission performance analysis tools are not enhanced by the MBSE approach but merged, cf. [159]. The sole combination of existing tools and models, however, does not mitigate their shortcomings like transparency, manageability or required modelling effort.

Furthermore, attention has to be given to the fact that detailed information about a system, and thus highly detailed models, is most likely not available in the early design stages. Although the general benefit of advancing the validation and verification activities to an earlier stage of the mission is desirable, it has to be thoroughly evaluated in practice whether the required design information is available at those early design stages. Although model commonality across missions is very likely to be given, see Section 3.4, the generalizability of missions, and thus the preparation of predefined models as part of a satellite language, as proposed by Gross [182], and of complete model and scenario catalogues to be used across projects, as supported by ESA [6], is only valid up to a limited level of detail. At a certain point, mission-specific aspects always come into

play that impede mission similarity, see for example the observation plan in the LOFT mission (Section 3.2) or the observation-communication scenario in the JUICE mission (Section 3.3).

Nevertheless, MBSE as an implementation of SM&S is judged to be seminal. For this purpose, however, the concept and goals of MBSE have to be re-defined with the user at the centre of the considerations, i.e. MBSE and UCD have to be actively combined. The MBSE methodologies, the involved design languages and tools, their handling and scope all need to be adapted to the context of use in space industry and tailored to their users, resulting in implementations of MBSE that differ between early and later mission stages. For instance, the use of SysML might be amenable in later stages where the time scale is different, but not in the very time-limited early concurrent design phase. This required evolution of the methodology is in line with the UCD approach that considers the evolving user needs for the creation of a solution.

As cited by several sources, e.g. [6, 153], MBSE is still in its infancy and needs to mature to gain widespread adoption. Specifically, the demonstration of its value and benefits on real problems, which is mandatory for its acceptance, remains outstanding. During this process it will become evident whether the approach meets the users’ needs. The general idea of MBSE to merge all relevant tools into one infrastructure along the life cycle, exploiting synergies and keeping data consistent over the life cycle, is considered very beneficial. For its ultimate success, however, it is strongly believed that an improvement of the existing tools and simulators and of the general philosophy of the concept, i.e. development of the approach with focus on the actual user, simple handling and incremental need of system information, is indispensable as a strong foundation.

6.3.2.2.4. Difference to Virtual Spacecraft Design - Why it is not Sufficient to Connect Existing Tools and Simulators

The question might arise whether, in view of the existing landscape of software product solutions, a new set of tools is necessary, or whether the weaknesses of the current M&S implementation might be overcome by combining the existing tools and simulators. Projects like the Virtual Spacecraft Design, and ESA’s intended realization of Model-Based Systems Engineering in general, provoke the assumption that merging existing tools and simulators into one to benefit from the synergies is a decent solution to close the gaps in the M&S domain. The author dissents from this assumption for the following two reasons.

First, assuming that the targeted capability already exists in the available single products, the interconnection of a variety of heterogeneous tools and simulators with various interfaces, model formats, scopes and levels of modelling detail requires harmonisation efforts that are cumbersome and complex. The general need for harmonization of M&S activities is acknowledged by ESA [6]. First activities have been started by ESA in the form of technical memoranda, targeting for example general requirements of M&S [5], a common model format for the concurrent design phase [175] or the use of Simulation Model Portability techniques [183]. The establishment of these guidelines as a standard, however, is still outstanding and involves further harmonization effort with all space partners.

Second, the assessment of the user needs in the frame of this work has demonstrated that the existing tool landscape does not answer the needs of the users. The sole combination of the existing tools does not assure that a desired capability is achieved as emergent behaviour. Connecting, for example, various sophisticated, domain-specific tools into one system tool might be technically possible (with regard to data format, interfaces, etc.), but for an early design stage, handling and quick familiarization are equally important (see Table


E.1) and are not provided to the extent needed by the sole combination of available tools. Also, one has to be aware that the level of system information a tool requires has to fit the level of information that is available or defined at the time of the tool’s use. Not every tool is equally usable at every stage of a mission.

Finally, it is strongly believed that the new way of thinking this work introduced via the SSC and the TSQM is indispensable to fully benefit from M&S. While it might be possible to exploit synergies and avoid duplication of effort by combining existing tools, the current tool and simulator landscape does not provide the full answer to the identified user needs. New tools and simulators following the SSC are strongly believed to be necessary to thoroughly overcome the identified M&S weaknesses.

6.4. Reasons for Future Success of SSC

The success of the SSC is not limited to the current implementation presented in this work. The concept is very likely to be successful in the future as well, since it responds to major trends of the M&S domain, as discussed in the following sections.

6.4.1. Trend: M&S Is and Remains Important

The domain of M&S is key for industrial competitiveness as a main contributor to reducing development time through risk reduction and support of engineering activities [6]. It supports the verification of system feasibility and performance, provides the possibility to demonstrate and test system capabilities that are either impossible to show otherwise or would necessitate expensive prototypes, and supports design trade-offs. By means of M&S the customer monitors and shapes the evolution of the project and then uses it to support system level validation and operations activities, while the Prime uses it to support the design, development and testing of the system prior to delivery to the customer. All this has already been demonstrated to be supported by the SSC, as the development of the SPS and the JUICE Simulator and the implementation of the LOFT Simulator and the GMSS have shown.

The role of M&S is expected to grow and evolve in the future, taking a more important place in the processes of companies and organizations in view of the rising complexity of the systems. Hence M&S tools and simulators will remain in great demand. The market for system design and verification tools in space is assumed to exceed 1,300 users/year in Europe and 25,000 users/year worldwide. Tool developers should seize this opportunity and offer products addressing user needs. The SSC provides a suitable guideline to be profitable and competitive, as it already addresses the expected trends like the intensified integration of M&S in the systems engineering process, the importance of M&S for the prediction of the system’s behaviour during the design process, and the increasing re-use of models between projects. [6]

6.4.2. Trend: System of Systems

In recent years another notion emerged in the context of systems engineering: the System of Systems (SoS). The maturing field of SoS and, with it, System of Systems Engineering (SoSE) focus on the design and

integration of multiple complex systems to achieve performance and capabilities that individually developed systems cannot reach alone. The challenge addressed consists in bringing independently developed systems - each component of the SoS follows its own set of requirements [184] - together without any significant redesign of the single systems, so that they can achieve their common goal. Maier [185] shaped the notion of SoS and its architecting principles, defining it as an assembly of component systems where the component systems fulfil their purpose on their own (Operational Independence of the Components) (1), and are continuously operated independently from the SoS (Managerial Independence of the Components) (2). Maier expanded his definition with three further traits [184], which are:

• geographic distribution of the components (3),

• the emergent behaviour of the SoS that cannot be obtained by the single components alone (4), and

• the evolutionary development of the SoS defining a SoS to be never complete (5).

Properties 1, 2 and 4 are essential for a SoS, whereas properties 3 and 5 are typical but not mandatory [184, Ch. 2]. Based on the definition of Maier, DeLaurentis [186] added three further SoS traits, which are:

• the heterogeneity of the constituent systems of the SoS (6),

• the cross-domain character of the issues to be solved (7), and

• the network rules of interaction between the independent component systems in a SoS (8).

SoS problems are inherently multidisciplinary and require cross-domain thinking and approaches for effective resolution. By their very nature, M&S applications are multidisciplinary, and M&S is increasingly used for SoS engineering applications [184]. For more details on SoS, the interested reader is referred to the wide variety of literature on the topic, for instance [187] on the methodical application of the concept and [188] on the management of SoSs.

The trend towards multi-systems is also observed in the space domain. User needs are advancing towards services provided by a combination of independently developed and operated space and on-ground systems. The European Earth Observation Programme Copernicus, for example, comprises six missions with more than 30 satellites and a network of ground segments [189]. Projects with system components that are owned and controlled by different stakeholders comprise problem types beyond the technical domain, e.g. funding, ownership, data policy. ESA employs M&S to support these projects and developed the ESA Architecture Framework to support discussion and decisions on the programmatic level and to assure consistent communication and interaction with the various stakeholders, cf. [190] and [184, Ch. 12] for the tool description and its application in exemplary projects. It is a model-based architecting methodology, introducing a common architecture definition language and processes tailored to ESA’s SoS-related needs. As an addition to the tools already used, the framework is intended to support the trade-off analyses necessary to define a baseline architecture for the projects in question.

In general M&S has the potential to be exploited beneficially for SoSs and SoSE and is increasingly used in that respect [184]. The formalization and standardization of the shape of the M&S implementation in the SoS process, however, is still ongoing.


The SSC has the capability to be a beneficial M&S implementation method for SoSs. The general idea of the SSC to create lean, user-friendly, transparent simulators and tools that support the assessment of the underlying system structure is entirely applicable to SoS. Only a change of perspective is necessary. For example, while in a single-spacecraft mission the spacecraft is defined as the system and components represent the system elements, in an SoS with multiple satellites each satellite constitutes a system element by itself.

Limits in the application of the SSC to SoSs might be encountered in the modelling and simulation of political relationships and constraints as part of the SoS. A potential solution, however, might be their reformulation into technical constraints, where feasible. For example, if observations of a specific area on Earth are desired for political reasons, the revisit time of the spacecraft can be specified accordingly.

Future work on the applicability of SSC to SoSs is expected to ultimately demonstrate its broad application range.

6.4.3. Trend: Cost Reduction via Re-Usability

In view of the increasing number of spacecraft constellations and interplanetary endeavours, future missions will become more complex and systems engineering will be all the more important. At the same time, M&S budgets and with them development times will be all the more tightened. Hence, one of the major trends pursued for cost reduction is the re-use of tool and simulation parts from previous projects or between phases to rationalise the development of tools and simulators in space projects.

As shown in Section 3.4, similarities between projects exist in the technical implementation of the missions and thereby also in the needs of the project teams. So the intersecting set of technical similarities between the projects and the resemblance of the challenges tackled in programmes should be exploited, even though the commonality between missions has its limits. The SSC supports this re-usability approach. The GMSS acts on it explicitly, and so do the simulators. In the chronological chain of simulator development from the SPS via the LOFT Simulator to the JUICE Simulator, simulator parts from previous projects have been reused to the greatest possible extent. Parts of the models, the visualization implementation and the general simulator infrastructure have been transferred and reused from project to project. The modular structure of the simulators supported the reuse, as did the approach of a sufficient but rather shallow depth of modelling.

The re-usability of tool and simulation parts between project phases is in line with the general trend towards M&S approaches that cover the life cycle of a product, avoiding a chain of tools across phases, see for example the activities and motivation for MBSE (Section 6.3.2.2.2). Giving specific consideration to the life cycle aspect also supports the systems engineer and his tasks. The SSC picks up the general idea of seizing the synergies between phases by targeting a simulator that evolves and grows in its models and functionalities along the project across phases. Hence, the SSC supports the re-usability of simulator parts across phases, with a specific condition on the model detailing approach.

In contrast to the SSC, which proclaims a limited reusability across projects and a philosophy of developing models across phases, the question might be raised why the tool development cannot be executed once, to be then applicable across projects and phases. ESA, for example, pursues exactly this approach and supports technology activities with the objective to develop parametrised models and generic architectures

only once and to reuse them across phases and projects [6]. This approach, however, neglects the user needs and their individualism. A reuse of simulator items across projects is impeded once mission specifics come into play that need to be considered. Then it is all about the right balance between the benefits of reuse for cost and risk reduction and the potential effort involved for harmonization, mission-specific tailoring and revalidation of the reused parts. A one-time simulator development covering the entire life cycle of a system neglects the changing needs of its users along the life cycle, the modifications in design and constraints due to unforeseen changes, as well as the evolution of a project and its stakeholders over several years. Parametrised models alone are not suited to capture all uncertainties a project faces at the beginning. So, while it is acknowledged by the author that reusability supports the minimization of costs and risks, it is believed not to be realizable to the extent that ESA assumes.

Ultimately, the necessity of simulator model and infrastructure reuse is endorsed, its feasibility demonstrated and its realization supported by the SSC. The extent of the reuse is limited by the individuality of the missions and the evolutionary character of the mission phases and involved user needs, all of which are respected by the SSC.

6.4.4. Trend: Independence from Single Tool and Supplier

The analysis of the current M&S landscape in Chapter 1 revealed a predominance of single (non-European) software products like MS Excel, MATLAB or STK. Specifically, MATLAB/Simulink from MathWorks is graded by ESA [6] as the de-facto standard in the space sector. The SSC helps to overcome this dependency on single suppliers, as it is applicable by any tool or simulator developer in collaboration with the user. As a result, the European software developer sector can be supported to be more competitive and a wider, more diverse range of products can be at the disposal of the space sector.

6.5. Limits of Promoted Concepts

The SSC as a concept for user-centred simulator development comes along with a number of benefits, including the overcoming of the identified M&S weaknesses. At the same time, it shall be noted that it is not an all-in-one device suitable for every purpose. Targeting system level challenges, discipline-specific topics might not be sufficiently addressed with the SSC. Also, the SSC proclaims a rather shallow multiphysics modelling philosophy, omitting the very last details of the modelled system if not imperatively needed by the users. Therefore it is essential to combine simulators built according to the SSC with sophisticated, highly detailed, discipline-specific tools along the mission. This is true for the simulators built according to the SSC, e.g. the JUICE Simulator, as well as for the PIN. Both SSC realizations are complementary to each other and to the discipline-specific tools. With regard to the system life cycle, the realizations of the SSC so far address the Application Fields of Design and Analysis (AF1) and Operations (AF3). This means that the domain of AIT (AF2) is hardly touched by the SSC, and its M&S implementation still remains to be improved in the future.

Also, the desire to have an all-in-one simulator at the start of a project, ready to be used for the entire life cycle, is not supported by the SSC, nor is the wish to use one single tool across missions. The creation of

a solution covering all imaginable use cases and respecting all potential user needs is considered by the author not to be effective. Nevertheless, it has to be admitted that the right balance between pre-defined ready-to-use simulator parts and the lean character of the simulators might not yet have been achieved with the realized implementations of the SSC in the GMSS, the LOFT and the JUICE Simulator. Based on the generalizability of the missions, there is still room for an advanced model library accompanying the simulators. In that context, it is expected that the SSC has room to encompass a (system) database in the future, covering (ready-to-use) models and system data in general. Hence, the potential of the SSC-based simulators to serve as a knowledge base is still to be evolved and fully seized.

With regard to verification and validation (V&V), it might be argued that the V&V approach implemented in the SSC is not as stringent as in classical M&S activities, questioning its sufficiency. The SSC proclaims an approach that spreads the V&V activities across the stakeholders in several moderately sized steps and as such keeps them to an indispensable minimum. The SSC-built products embark on a strategy of diverse sanity checks rather than a lengthy test programme, in line with the iterative UCD approach. As of today, this has proved to be sufficient for two main reasons. First, the SSC-built products are intended to be used in combination with other tools and simulators, and thus their simulation results can be cross-checked. Second, the models and functionalities of the products are defined and realized in close collaboration with the final users in a staggered approach, i.e. the scope of the products is continuously verified and validated in manageable steps, reducing the risk of errors and of their late or missing detection. Nevertheless, it is admitted that with the rising complexity of the missions and the challenges that come along, the V&V activities should be further detailed and elaborated in an update of the SSC.

Ultimately, both the SSC and, with it, the TSQM, see Section 4.4, might not be matured and validated to their very last details as of today. Being concepts grounded on UCD, their usability will be increasingly developed and confirmed the more they are applied in practice and the more users are involved in their maturation process. It is the user who has the power to bring the concepts to steady success - and it is the user who might be their largest adversary, with his unforeseeable changes in attitude and opinion. As such, the involvement of the user in the development process of tools and simulators is a great chance to improve the M&S implementation and, at the same time, its largest risk and limit.

7. Conclusion and Future Prospects

To overcome the present Modelling & Simulation (M&S) weaknesses of limited usefulness, knowledge transfer and system support, the present work introduced and verified the System Simulator Concept (SSC), derived from the simulator concept used in the BepiColombo mission, as a guideline for the development of successful software solutions in support of systems engineering activities in early satellite design phases. With the successful application of the SSC to the LOFT and JUICE missions, the definition of the General Mission and System Simulator as a generic solution for early design stages and the interpretation of the SSC into the Parameter Influence Net Method (PINM), the SSC proved to be a transferable, mission-independent concept that contributes to the success of projects by supporting the systems engineering activities. The key success criterion of the SSC has been confirmed to be its user-centred approach, i.e. the positioning of the user, his needs and his context at the centre of all development activities of software solutions, and in that respect the implementation of parameter interdependencies in support of the assessment of the system structure and, subsequently, the control of the system emergence.

The Tool and Simulator Quality Model (TSQM), as an expression of user needs in early design phases, has been derived as a supplement to the SSC from user surveys and interviews, to support the design and implementation of tools and simulators towards high usability and acceptability. It has been revealed that technical and handling criteria are equally important for the usability of a software solution and that acceptability by the user is indispensable for the success of a software product.

The evaluation of the SSC and TSQM against the current trends and evolutions in the M&S domain showed that the novel concepts are a strong solution to overcome the present M&S weaknesses and to be prepared for future applications in the face of increasing system complexity. To reduce the limits the current stage of the concepts comes along with, future work is recommended to comprise additional detailing, further validation and broader application of the SSC and the TSQM, e.g. the continuation of the JUICE Simulator usage, the systematic application of the SSC to further missions and other engineering domains addressing engineered systems, the validation of the TSQM by further tool and simulator users, the derivation of the TSQM for dedicated user groups, and the application of the PINM in real projects.

In line with the recurrent thread of this work, the close collaboration and communication with the user are strongly believed to be the essential factors towards a successful M&S implementation with the SSC and the TSQM, ultimately leading to mission success.

A. LOFT Mission and System Design Description - Basis of Mission Challenge Analysis

LOFT (Large Observatory For X-ray Timing) was a candidate mission for the M3 launch opportunity in ESA’s Cosmic Vision programme [191], with the goal to investigate the behaviour of matter in close proximity to black holes and stars. The spacecraft was planned to be launched from Kourou in French Guiana in 2022 and to spend a minimum of two years in a circular low-inclination orbit of 550 km altitude. LOFT was supposed to carry two instruments. The Large Area Detector (LAD) would observe targets according to an observation plan, while the Wide Field Monitor (WFM) was supposed to spot specific events like star bursts, to be then observed by the LAD. The scientific goal was to spend at least 40 % of the in-orbit time in observation. Table A.1 summarizes the main mission design features.

The spacecraft design solution elaborated in the feasibility phase of the LOFT mission by Airbus Defence and Space GmbH is pictured in Fig. A.1. The moment of inertia of the more than five ton spacecraft was ten times higher along the x-axis and the z-axis (> 40,000 kg·m²) than along the y-axis. Four reaction wheels (maximum momentum of 70 Nms and maximum torque of 250 mNm), orientated in a pyramidal configuration (see Fig. A.2), were planned to be in charge of compensating the disturbances induced by the Earth’s gravity and the aerodynamic drag and of rotating the spacecraft into the desired inertial attitude for target observations. The desaturation of the reaction wheels was foreseen to be performed permanently by magnetic torquers accommodated along each spacecraft axis, making use of the torque induced by the Earth’s magnetic field. The power subsystem was conceived to be based on two body-fixed solar array panels (~20 m²) for power generation (3700 W at EOL), equipped with triple-junction GaAs cells, and a battery with a capacity of 5800 Wh (EOL) for energy storage. [107, 192]

The required sky visibility of the LAD instrument can be translated into the instrument’s Field of Regard (FoR), that is, the instantaneous part of the celestial sphere accessible to the LAD. The mission concept differentiated a nominal and a degraded FoR [193]. In degraded LAD observation mode, the detectors would have suffered higher temperatures, leading to degraded, yet acceptable, performance. Table A.2 presents the detailed definitions of these notions.

For the requested LAD nominal FoR of > 35 % (goal: 50 %), the system design solution by Airbus Defence and Space GmbH foresaw a LAD Solar Aspect Angle (SAA) of 90° ± 20.5°; for the degraded FoR of > 50 % (goal: 75 %), the SAA would have varied between 90° ± 30°. The resulting visibility belt centred on the Sun/spacecraft direction, see Fig. A.3, would have assured keeping the incident Sun in the spacecraft’s xz-plane, which would have been required to avoid direct illumination of the thermally sensitive WFM and LAD instruments, while assuring sufficient power availability by the non-steerable solar array. Only in eclipse would these visibility limitations have been temporarily obsolete, allowing unlimited extension of the FoR for a short time (disregarding the occultation by the Earth).
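The visibility belt can be checked with a short geometric sketch, here in Python: a target is observable when the angle between its direction and the Sun direction (the SAA) lies within 90° plus or minus the belt half-width (20.5° nominal, 30° degraded). The function names and the simplified geometry (no Earth occultation) are illustrative assumptions of this sketch, not part of the original design documentation.

```python
import math

def unit_from_radec(ra_deg, dec_deg):
    """Unit vector from right ascension and declination (degrees)."""
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    return (math.cos(dec) * math.cos(ra), math.cos(dec) * math.sin(ra), math.sin(dec))

def solar_aspect_angle(target, sun):
    """Angle (degrees) between the target direction and the Sun direction."""
    dot = sum(t * s for t, s in zip(target, sun))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

def in_field_of_regard(target, sun, half_width_deg):
    """Target lies in the visibility belt if its SAA is within 90 deg +/- half-width."""
    return abs(solar_aspect_angle(target, sun) - 90.0) <= half_width_deg
```

For instance, with the Sun along (1, 0, 0), a target perpendicular to the Sun direction lies in the nominal belt, while a target towards the Sun does not.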

Table A.1.: LOFT Overall Mission Profile [193, 194, 195].

Scientific goals: Study strong-field gravity, black hole masses and spins and the equation of state of ultra-dense matter via high-time-resolution X-ray observations of compact objects. Main targets: supermassive black holes, neutron stars, Active Galactic Nuclei. [196]
Payload: Large Area Detector, Wide Field Monitor
Sky visibility (EOL): > 35 % nominal, > 50 % extended
Lifetime: 3+2 years
Observing efficiency: > 40 %
Consumables: 10 years
High gain antenna: steerable, X-band
Launch: Soyuz (Kourou), 2022
Pointing: three-axis stabilized
Ground station: Kourou, Malindi
Data rate: 100 Gbit/day
Orbit: Low Earth Orbit, 550 km, 2.5° incl.

Figure A.1.: LOFT Spacecraft and Reference Frame [107].

Figure A.2.: LOFT Reaction Wheel Accommodation in Pyramidal Configuration [107].


The effect of the Earth occultation on the sky visibility of the LAD over one orbit depends on the spacecraft position in orbit and on the season. It is pictured exemplarily in Fig. A.4 for one orbit during autumnal equinox.

To observe a target, the LAD instrument would have been required to point at it. This would have involved rotating the spacecraft such that its z-axis, conceived parallel to the boresight of the LAD instrument, would have been aligned with the required direction. The spacecraft was supposed to maintain its inertial orientation for the required duration of the observation. Once the observation was finished, the spacecraft would have performed a slew manoeuvre to point to the next target. [106, 107]

According to the described operational scenario, the nominal spacecraft attitude mode (NOM) was subdivided into two submodes, the Attitude Hold Mode (AHM) and the Slew Mode (SLM) [107]. During observation the spacecraft was planned to be in AHM, i.e. a fixed inertial attitude with the LAD instrument boresight pointed at the target. Then, the spacecraft would have slewed into the next attitude while being in SLM. Once the desired alignment with the target was achieved, the spacecraft would have switched back into AHM and started the target observation.
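The AHM/SLM cycle described above can be sketched as a minimal two-state machine, here in Python; the function and flag names are hypothetical and serve only to illustrate the mode logic, not the actual on-board software.

```python
from enum import Enum

class AttitudeSubmode(Enum):
    AHM = "Attitude Hold Mode"   # fixed inertial attitude, observing the target
    SLM = "Slew Mode"            # rotating towards the next target

def next_submode(current, observation_done, target_aligned):
    """One step of the nominal-mode cycle: AHM -> SLM when the observation
    is finished, SLM -> AHM once the new target alignment is achieved."""
    if current is AttitudeSubmode.AHM and observation_done:
        return AttitudeSubmode.SLM
    if current is AttitudeSubmode.SLM and target_aligned:
        return AttitudeSubmode.AHM
    return current
```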

Table A.2.: LOFT Notions Definitions [193].

LAD Nominal Field of Regard: The region of the sky where the LAD can be pointed to continuously for at least 1 orbit (disregarding a potential temporal occultation of the target by the Earth) guaranteeing the LAD nominal energy resolution and nominal response stability at any time during the mission.
LAD Degraded Field of Regard: The region of the sky where the LAD can be pointed to continuously for at least 1 orbit (disregarding a potential temporal occultation of the target by the Earth) guaranteeing the LAD degraded energy resolution and nominal response stability at any time during the mission.
LAD Solar Aspect Angle: The angle between the direction to the Sun and the viewing direction of the LAD (i.e. normal to the plane formed by the LAD detectors).

Figure A.3.: LOFT Field of Regard [197], i.e. Instantaneous Part of Celestial Sphere Accessible to LAD, with SAA 90° ± 30°, Not Showing Earth.

The thermal sensitivity of the instruments, together with the position of the targets, would have determined the slew manoeuvre characteristics. For observations in the sunlit fraction of the orbit, the slew manoeuvre would have had to be performed with restricted roll agility around the x- and z-axis in order to not expose the LAD top side (+z_S/C) and bottom side (−z_S/C) or the WFM directly to the incident Sun. In contrast, slews in eclipse would have been arbitrary in attitude but restricted in slew duration to fit into the short eclipse duration. Figure A.5 pictures a profile (commanded torque, spacecraft rotation rate and rotation angle) for a time optimal 60° slew manoeuvre.
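A time-optimal rest-to-rest slew of the kind shown in Fig. A.5 follows a bang-bang profile: accelerate at maximum torque, optionally coast at the rate limit, then decelerate. The Python sketch below estimates the single-axis slew duration; the inertia and effective torque values in the usage note are rough assumptions based on the figures quoted in this appendix, not validated mission data.

```python
import math

def slew_time(angle_deg, inertia, torque, rate_max):
    """Duration [s] of a rest-to-rest, bang-bang (time-optimal) slew about one axis.

    Accelerate at torque/inertia, optionally coast at rate_max, then decelerate."""
    theta = math.radians(angle_deg)
    a = torque / inertia                 # angular acceleration [rad/s^2]
    t_ramp = rate_max / a                # time to reach the rate limit
    theta_ramp = a * t_ramp ** 2         # angle covered by both ramps together
    if theta <= theta_ramp:              # triangular profile, rate limit not reached
        return 2.0 * math.sqrt(theta / a)
    return 2.0 * t_ramp + (theta - theta_ramp) / rate_max  # trapezoidal profile
```

With an assumed inertia of 40,000 kg·m² and an effective torque of 0.25 N·m and no rate limit, `slew_time(60, 40000, 0.25, 1.0)` evaluates to roughly 819 s, illustrating why the number of slews per orbit is constrained.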

The spacecraft design consolidation was requested to be performed against a three-week mock observation plan, cf. [195, Ch. 3.2], [106], [193]. The plan defined the chronological sequence of targets to be observed by the LAD instrument in 21 days, their celestial coordinates, i.e. right ascension and declination, and the duration of each target observation. An extract of the observation plan is given in Fig. A.6.

To follow the observation plan, the spacecraft was supposed to change its inertial attitude to orientate the LAD towards the targets. The frequency of required slew manoeuvres over the 21 days was assessed to be 120 slews of up to 180°, with up to four slew manoeuvres in one orbit. Figure A.7 pictures a histogram of the absolute frequency of all slew angles following the observation plan, with an average over all angles of 86.6°.
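The slew angle between two successive targets of the plan equals the great-circle angle between their directions, which can be computed from right ascension and declination. The following Python sketch (with assumed function names) illustrates how statistics like those in Fig. A.7 can be derived from a list of plan entries.

```python
import math

def target_direction(ra_deg, dec_deg):
    """Unit vector of a target given right ascension and declination [deg]."""
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    return (math.cos(dec) * math.cos(ra), math.cos(dec) * math.sin(ra), math.sin(dec))

def slew_angle(ra1, dec1, ra2, dec2):
    """Great-circle angle [deg] between two targets, i.e. the minimum slew angle."""
    u, v = target_direction(ra1, dec1), target_direction(ra2, dec2)
    dot = sum(a * b for a, b in zip(u, v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

def slew_angles(plan):
    """Slew angles [deg] between consecutive (ra, dec) targets of a plan."""
    return [slew_angle(*plan[i], *plan[i + 1]) for i in range(len(plan) - 1)]
```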

In addition, it was requested to check the feasibility of out of field, i.e. out of nominal or degraded FoR, observations during eclipse. In that context it was desired to observe a specified target for at least ten minutes in every orbit in the last 14 days of the observing sequence. This additional scientific goal added two slews of maximal 60° per orbit to the operational scenario.

Figure A.4.: Variation of Earth Occultation on Field of Regard of LOFT’s Large Area Detector over One Orbit During Autumnal Equinox [192].


Figure A.5.: Evolution of Commanded Torque, Spacecraft Angular Rate and Rotation Angle During Time Optimal 60° Slew Manoeuvre of LOFT [107].

Figure A.6.: Extract of LOFT Mock Observation Plan [106] Indicating Order of Targets, Their Right Ascension, Declination and Required Observation Time.

Figure A.7.: Histogram of Frequency of Required Slew Angles for LOFT in Dependence of Slew Angle Magnitude Following Observation Plan Sequence, cf. [106].

B. LOFT Simulator Models and Functionalities

The following Sections B.1 and B.2 present the models implemented in the LOFT Simulator. Section B.3 gives insight into the handling of the LOFT Simulator.

B.1. LOFT Power Subsystem Model

For the Power Subsystem modelling, worst case assumptions are taken into account, i.e. the parameters are modelled at EOL. The total electrical power generated by the solar array is:

\[ P_{SA,total} = S \cdot A_{SA} \cdot \cos SAA \cdot \eta_{cov} \cdot \eta_{cell,EOL} \cdot \eta_{temp} \, , \tag{B.1} \]

with the cell efficiency at EOL:

\[ \eta_{cell,EOL} = \eta_{cell} \cdot \left( \frac{100 - d}{100} \right)^{l} \, . \tag{B.2} \]
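As an illustration, Eqs. (B.1) and (B.2) can be evaluated with the parameter values of Table B.1. The following Python sketch is illustrative only (the delivered simulator was implemented in Delphi); the Sun aspect angle of 0° is an assumed best case, and the function names are not part of the original specification:

```python
import math

def cell_efficiency_eol(eta_cell, d_percent, years):
    """Cell efficiency at End of Life, Eq. (B.2): eta_cell * ((100 - d)/100)**l."""
    return eta_cell * ((100.0 - d_percent) / 100.0) ** years

def solar_array_power(S, A_SA, saa_deg, eta_cov, eta_cell_eol, eta_temp):
    """Total generated solar array power, Eq. (B.1)."""
    return S * A_SA * math.cos(math.radians(saa_deg)) * eta_cov * eta_cell_eol * eta_temp

# Table B.1 values: d = 1.25 %/yr degradation over l = 5.25 yrs spacecraft lifetime
eta_eol = cell_efficiency_eol(0.3055, 1.25, 5.25)

# Assumed SAA = 0 deg (Sun normal to the array) as an illustrative best case
p_total = solar_array_power(1358.0, 18.8, 0.0, 0.82, eta_eol, 0.77)
```

With these values the model yields roughly 4.6 kW of generated power at EOL for normal Sun incidence, which has to cover the user demand of 2254 W in sunlight plus battery charging.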

The maximum battery energy at EOL is obtained with:

\[ E_{max,EOL} = E_{max} \cdot \frac{100 - C_{loss}}{100} \, , \tag{B.3} \]

with the capacity loss of the battery

\[ C_{loss} = \left( 0.005 \cdot DOD_{av} + 0.021 \right) \cdot \sqrt{N_{cycle}} + \left( 0.009 \cdot T_{Bat}^{2} - 0.0129 \cdot T_{Bat} + 0.1533 \right) \cdot \sqrt{l_{Bat}} \, , \tag{B.4} \]

and the maximum battery energy at BOL calculated by:

\[ E_{max} = C_{cell} \cdot N_{strings} \cdot N_{cellsperstring} \cdot V_{cell,mean} \, . \tag{B.5} \]

The battery energy in sunlit fraction of the orbit is calculated with:

\[ E_{Bat,n} = E_{Bat,n-1} + \left( P_{SA,total} - P_{user} \right) \cdot \Delta t_{charge} \cdot \eta_{charge} \, , \tag{B.6} \]

and in eclipse with:

\[ E_{Bat,n} = E_{Bat,n-1} - P_{user} \cdot \Delta t_{discharge} \cdot \frac{1}{\eta_{discharge}} \, . \tag{B.7} \]

Finally, the battery State of Charge (SoC) is obtained by:

\[ SOC = \frac{E_{Bat,n}}{E_{max,EOL}} \, . \tag{B.8} \]
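Equations (B.3) to (B.8) form a simple recurrence that is evaluated per simulation time step. The following Python sketch is illustrative (the function names and the half-hour eclipse step are assumptions); it uses the parameter values of Table B.1:

```python
import math

def battery_step(e_prev, p_sa, p_user, dt_h, eta_charge=0.96, eta_discharge=0.98,
                 in_eclipse=False):
    """One time step of the battery energy [Wh], Eqs. (B.6) and (B.7)."""
    if in_eclipse:
        return e_prev - p_user * dt_h / eta_discharge        # Eq. (B.7)
    return e_prev + (p_sa - p_user) * dt_h * eta_charge      # Eq. (B.6)

# Maximum battery energy at BOL, Eq. (B.5), with Table B.1 values [Wh]
e_max = 2.1 * 96 * 12 * 4.1

# Capacity loss, Eq. (B.4), and maximum energy at EOL, Eq. (B.3)
c_loss = ((0.005 * 35.08 + 0.021) * math.sqrt(29130)
          + (0.009 * 20.0**2 - 0.0129 * 20.0 + 0.1533) * math.sqrt(5.5))
e_max_eol = e_max * (100.0 - c_loss) / 100.0

# State of Charge, Eq. (B.8), after an assumed half hour of eclipse from full charge
e_bat = battery_step(e_max_eol, 0.0, 2517.0, 0.5, in_eclipse=True)
soc = e_bat / e_max_eol
```

The sketch illustrates how the EOL capacity loss of Eq. (B.4) reduces the usable energy against which the State of Charge is referenced.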


Table B.1.: LOFT Simulator Power Model Parameters.

Orbit
  Solar constant                    S                   1358     W/m²

Battery
  Capacity of battery cell          C_cell              2.1      Ah
  Number of cells per string        N_cellsperstring    12       -
  Number of strings                 N_strings           96       -
  Mean cell voltage                 V_cell,mean         4.1      V
  Averaged Depth of Discharge       DOD_av              35.08    %
  Number of duty cycles             N_cycle             29130    -
  Battery temperature               T_Bat               20       °C
  Battery lifetime                  l_Bat               5.5      yrs
  Battery charging efficiency       η_charge            0.96     -
  Battery discharging efficiency    η_discharge         0.98     -

Solar array
  Solar array size                  A_SA                18.8     m²
  Cell coverage                     η_cov               0.82     -
  Cell efficiency                   η_cell              0.3055   -
  Yearly degradation of cells       d                   1.25     %
  S/C lifetime                      l                   5.25     yrs
  Temperature effect                η_temp              0.77     -

S/C Bus / Payload
  Power demand (incl. power         P_user              2254 W in sunlight
  distribution, harness and                             2517 W in eclipse
  MPPT loss)

B.2. LOFT AOCS Model

B.2.1. Disturbance Torque due to Earth’s Gravity Gradient

The direction and magnitude of the disturbance torque significantly depend on the current orientation of the satellite.


The general expression for the calculation of the gravity gradient torque due to the gravitational force of the Earth is:

\[ \vec{T}_{GG} = \frac{3\mu}{r_{S/C}^{5}} \cdot \vec{r}_{S/C} \times \left( \overleftrightarrow{I}_{S/C} \cdot \vec{r}_{S/C} \right) \, , \tag{B.9} \]

with all parameters expressed in the body-fixed system.

If a principal body axis system is used, the inertia tensor \( \overleftrightarrow{I}_{S/C} \) is diagonal and the deviational moments are equal to zero. Hence the above equation simplifies to:

\[ \vec{T}_{GG} = \frac{3\mu}{r_{S/C}^{5}} \cdot \begin{pmatrix} r_{S/C,y} \cdot r_{S/C,z} \cdot \left( I_{zz} - I_{yy} \right) \\ r_{S/C,x} \cdot r_{S/C,z} \cdot \left( I_{xx} - I_{zz} \right) \\ r_{S/C,x} \cdot r_{S/C,y} \cdot \left( I_{yy} - I_{xx} \right) \end{pmatrix} \, . \tag{B.10} \]
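Equation (B.9) can be checked numerically against the principal-axis form (B.10). The numpy sketch below is illustrative; the inertia tensor and position vector are hypothetical values, not LOFT design data:

```python
import numpy as np

def gravity_gradient_torque(mu, r_sc, inertia):
    """Gravity gradient torque, Eq. (B.9): T = 3*mu/|r|^5 * r x (I @ r),
    with the position vector r_sc expressed in the body-fixed frame."""
    r = np.asarray(r_sc, dtype=float)
    rn = np.linalg.norm(r)
    return 3.0 * mu / rn**5 * np.cross(r, inertia @ r)

mu_earth = 3.986e14                        # Earth's gravitational parameter [m^3/s^2]
I = np.diag([40000.0, 8000.0, 40000.0])    # hypothetical principal inertia [kg m^2]
r = np.array([4.0e6, 3.0e6, 4.0e6])        # hypothetical body-frame position [m]
t_gg = gravity_gradient_torque(mu_earth, r, I)
```

For a diagonal inertia tensor, expanding the cross product reproduces the component form of Eq. (B.10) term by term.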

B.2.2. Arrangement of Reaction Wheels

The reaction wheels are not arranged along the body axes but in a triangular pyramid, see Fig. A.2. Their rotation axes can be described in the body-fixed satellite coordinate system by:

\[
\begin{aligned}
\vec{x}_{RW,1} &= \left( \tfrac{1}{\sqrt{2}} \cos\phi_1 \;\;\; \sin\phi_1 \;\;\; \tfrac{1}{\sqrt{2}} \cos\phi_1 \right)^{T} , \\
\vec{x}_{RW,2} &= \left( -\tfrac{1}{\sqrt{2}} \cos\phi_2 \;\;\; \sin\phi_2 \;\;\; \tfrac{1}{\sqrt{2}} \cos\phi_2 \right)^{T} , \\
\vec{x}_{RW,3} &= \left( \tfrac{1}{\sqrt{2}} \cos\phi_2 \;\;\; \sin\phi_2 \;\;\; -\tfrac{1}{\sqrt{2}} \cos\phi_2 \right)^{T} , \\
\vec{x}_{RW,4} &= \left( -\tfrac{1}{\sqrt{2}} \cos\phi_1 \;\;\; \sin\phi_1 \;\;\; -\tfrac{1}{\sqrt{2}} \cos\phi_1 \right)^{T} .
\end{aligned}
\tag{B.11}
\]
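The four axes of Eq. (B.11) are unit vectors by construction, since \( \tfrac{1}{2}\cos^2\phi + \sin^2\phi + \tfrac{1}{2}\cos^2\phi = 1 \). A short numpy sketch with assumed pyramid angles of 30° (the actual LOFT angles are not restated here) illustrates this:

```python
import numpy as np

def wheel_axes(phi1, phi2):
    """Reaction wheel rotation axes in the body frame, Eq. (B.11); one row per wheel."""
    c1, s1 = np.cos(phi1), np.sin(phi1)
    c2, s2 = np.cos(phi2), np.sin(phi2)
    q = 1.0 / np.sqrt(2.0)
    return np.array([
        [ q * c1, s1,  q * c1],   # wheel 1
        [-q * c2, s2,  q * c2],   # wheel 2
        [ q * c2, s2, -q * c2],   # wheel 3
        [-q * c1, s1, -q * c1],   # wheel 4
    ])

# hypothetical pyramid half-angles of 30 deg for both wheel pairs
axes = wheel_axes(np.radians(30.0), np.radians(30.0))
```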

B.2.3. Euler Equations for Angular Momentum Analysis

The angular momentum of a reaction wheel with regard to the satellite system, i.e. the relative angular momentum which is limited, can be assessed by:

\[ h_{RW,rel} = I_{RW} \cdot \omega_{RW,rel} \, , \tag{B.12} \]

with \( \omega_{RW,rel} \) being the rotation rate of the reaction wheel relative to the spacecraft.

\( \omega_{RW,rel} \) can be obtained from the Euler equations, i.e. the time derivative of the conservation of angular momentum of the overall system of satellite and reaction wheels, that are:

\[
\begin{aligned}
I_{xx} \cdot \dot{\omega}_{xx} + \omega_{yy}\,\omega_{zz} \left( I_{zz} - I_{yy} \right) &= I_{RW} \left( \omega_{RW,rel,2}\,\omega_{zz} - \omega_{RW,rel,3}\,\omega_{yy} - \dot{\omega}_{RW,rel,1} \right) + M_{xx,ext} \\
I_{yy} \cdot \dot{\omega}_{yy} + \omega_{zz}\,\omega_{xx} \left( I_{xx} - I_{zz} \right) &= I_{RW} \left( \omega_{RW,rel,3}\,\omega_{xx} - \omega_{RW,rel,1}\,\omega_{zz} - \dot{\omega}_{RW,rel,2} \right) + M_{yy,ext} \\
I_{zz} \cdot \dot{\omega}_{zz} + \omega_{xx}\,\omega_{yy} \left( I_{yy} - I_{xx} \right) &= I_{RW} \left( \omega_{RW,rel,1}\,\omega_{yy} - \omega_{RW,rel,2}\,\omega_{xx} - \dot{\omega}_{RW,rel,3} \right) + M_{zz,ext} \, ,
\end{aligned}
\tag{B.13}
\]

with the simplified inertia tensor:

\[ \overleftrightarrow{I}_{S/C}^{\,B} = \begin{pmatrix} I_{xx} & 0 & 0 \\ 0 & I_{yy} & 0 \\ 0 & 0 & I_{zz} \end{pmatrix} \tag{B.14} \]

and

\[ \vec{\omega}^{B} = \left( \omega_{xx} \;\; \omega_{yy} \;\; \omega_{zz} \right)^{T} \tag{B.15} \]

being the rotation of the spacecraft with regard to an inertial system, expressed in the body-fixed system. The moment produced by the reaction wheels acting on the x-axis of the body can be expressed by:

\[ M_{xx,RW}^{B} = I_{RW} \cdot \dot{\omega}_{RW,1} \, , \tag{B.16} \]

with

\[ \dot{\omega}_{RW,1} = \dot{\omega}_{xx} + \dot{\omega}_{RW,rel,1} \, , \tag{B.17} \]

where \( \dot{\omega}_{xx} \) is the derivative of the rotation of the spacecraft with regard to the inertial system and \( \dot{\omega}_{RW,rel,1} \) is the derivative of the rotation of the reaction wheel with regard to the satellite platform.

B.2.4. Angular Momentum Analysis for Attitude Hold Mode

In case the spacecraft holds its attitude, the reaction wheels have to compensate the external torques, for LOFT especially those due to the gravitational field of the Earth. It is:

\[ \vec{\omega}^{B} = \left( 0 \;\; 0 \;\; 0 \right)^{T} \, , \tag{B.18} \]

so it is:

\[
\begin{aligned}
I_{RW} \cdot \dot{\omega}_{RW,rel,1} &= M_{xx,RW} = M_{xx,ext} \\
I_{RW} \cdot \dot{\omega}_{RW,rel,2} &= M_{yy,RW} = M_{yy,ext} \\
I_{RW} \cdot \dot{\omega}_{RW,rel,3} &= M_{zz,RW} = M_{zz,ext} \, .
\end{aligned}
\tag{B.19}
\]

The stored angular momentum in the reaction wheels is then obtained by integration of the reaction wheel momentum over the duration of the attitude hold.
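The integration described above can be sketched as a simple Euler sum. The constant disturbance torque, step size and duration below are hypothetical values chosen only to illustrate how the stored momentum grows towards the wheel capacity during an attitude hold:

```python
import numpy as np

def attitude_hold_momentum(torque_profile, dt):
    """Accumulated reaction wheel angular momentum during attitude hold:
    with omega_B = 0, Eq. (B.19) reduces to integrating the external torque."""
    h = np.zeros(3)
    history = []
    for t_ext in torque_profile:
        h = h + np.asarray(t_ext) * dt    # Euler integration step of M_ext
        history.append(h.copy())
    return np.array(history)

# hypothetical constant gravity gradient torque of 0.004 Nm about x,
# sampled every 100 s over 6000 s (roughly one LEO orbit)
profile = [np.array([0.004, 0.0, 0.0])] * 60
hist = attitude_hold_momentum(profile, 100.0)
```

In this assumed case the wheels accumulate 24 Nms over one orbit, which motivates the regular off-loading discussed in Annex B.2.7.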

B.2.5. Angular Momentum Analysis for Slew Mode

In case of a slew, \( \vec{\omega}^{B} \neq \vec{0} \) and the rotation of the spacecraft is determined by the required slew according to a given profile (see Fig. A.5 for a 60° slew), following the observation plan.

Consequently, the Euler equations have to be solved as a first-order differential equation for \( \vec{\omega}_{RW,rel} \) in order to calculate the momentum storage of the reaction wheels according to Eq. (B.13).


B.2.6. Ambiguity of Reaction Wheels

As the reaction wheel arrangement described in Fig. A.2 and in the following equation shows, the transfer of the calculated moments from the body axes to the wheel axes is not unambiguous:

\[ \begin{pmatrix} M_{xx} \\ M_{yy} \\ M_{zz} \end{pmatrix} = \overleftrightarrow{A} \cdot \begin{pmatrix} M_{RW,1} \\ M_{RW,2} \\ M_{RW,3} \\ M_{RW,4} \end{pmatrix} \, , \tag{B.20} \]

where

\[ \overleftrightarrow{A} = \left( \vec{x}_{RW,1} \;\; \vec{x}_{RW,2} \;\; \vec{x}_{RW,3} \;\; \vec{x}_{RW,4} \right) \, . \tag{B.21} \]

With the pseudo-inverse \( \overleftrightarrow{A}^{inv} \), a single, optimal solution can be found. So it is:

\[ \begin{pmatrix} M_{RW,1} \\ M_{RW,2} \\ M_{RW,3} \\ M_{RW,4} \end{pmatrix} = \overleftrightarrow{A}^{inv} \cdot \begin{pmatrix} M_{xx} \\ M_{yy} \\ M_{zz} \end{pmatrix} \, , \tag{B.22} \]

with

\[ \overleftrightarrow{A}^{inv} = \overleftrightarrow{A}^{T} \left( \overleftrightarrow{A} \cdot \overleftrightarrow{A}^{T} \right)^{-1} \, . \tag{B.23} \]
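The solution of Eqs. (B.22) and (B.23) is the minimum-norm distribution given by the Moore-Penrose pseudo-inverse. The numpy sketch below uses assumed pyramid angles of 30° and an arbitrary body torque demand, both hypothetical:

```python
import numpy as np

# Hypothetical pyramid geometry per Eq. (B.11) with phi1 = phi2 = 30 deg;
# wheel rotation axes arranged as the columns of A, cf. Eq. (B.21)
q, c, s = 1.0 / np.sqrt(2.0), np.cos(np.radians(30.0)), np.sin(np.radians(30.0))
A = np.array([[ q*c, -q*c,  q*c, -q*c],
              [   s,    s,    s,    s],
              [ q*c,  q*c, -q*c, -q*c]])

def distribute_torque(m_body, A):
    """Minimum-norm wheel torque distribution, Eqs. (B.22) and (B.23)."""
    A_inv = A.T @ np.linalg.inv(A @ A.T)   # right pseudo-inverse, Eq. (B.23)
    return A_inv @ np.asarray(m_body)

m_body = np.array([0.10, -0.05, 0.02])     # hypothetical body torque demand [Nm]
m_rw = distribute_torque(m_body, A)
```

Applying A to the resulting wheel torques recovers the commanded body torque exactly; the explicit formula of Eq. (B.23) is equivalent to `np.linalg.pinv(A)` for this full-rank case.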

B.2.7. Magnetic Torquer Performance Model

The reaction wheel desaturation for LOFT is performed by a combination of thrusters and magnetic torquers (MQT). Only the angular momentum components perpendicular to the Earth's magnetic field can be off-loaded with the MQTs. Since the spacecraft attitude changes frequently, three MQTs are arranged perpendicular to each other along the three main body axes, providing the major part of the required off-loading.

The required torque \( \vec{T}_{req} \) to be provided by the MQT is composed of the need for desaturation of the reaction wheels and the compensation of the disturbance torque due to gravity gradient. With the nominal angular momentum of the wheels \( h_{RW,rel,nom} \) set to 0 Nms for simplicity reasons, \( \vec{T}_{req} \) is expressed in the inertial system with:

\[ \vec{T}_{req}^{\,I} = - \vec{H}_{RW}^{\,I} \cdot \frac{1}{\Delta t} + \vec{T}_{GG}^{\,I} \, . \tag{B.24} \]

The required magnetic moment expressed in the body fixed system B is:

\[ \vec{m}_{req}^{\,B} = \frac{\vec{B}^{B} \times \vec{T}_{req}^{\,B}}{\left| \vec{B}^{B} \right|^{2}} \, , \tag{B.25} \]

where \( \vec{B} \) [T] is the Earth's magnetic field.


The applicable magnetic moment is limited by the capacity of the employed MQTs, that is:

\[ \vec{m}_{MQT}^{\,B} = \begin{pmatrix} 1200 \\ 1200 \\ 1200 \end{pmatrix} \, . \tag{B.26} \]

The torque by the MQTs can be calculated by:

\[ \vec{T}_{MQT}^{\,B} = \vec{m}_{req}^{\,B} \times \vec{B}^{B} \, . \tag{B.27} \]

The difference in torque not compensated by the MQTs remains in the reaction wheels, i.e. it is:

\[ \vec{T}_{rem}^{\,B} = \vec{T}_{req}^{\,B} - \vec{T}_{MQT}^{\,B} \, , \tag{B.28} \]

and the remaining angular momentum stored in the reaction wheels is:

\[ \vec{H}_{RW}^{\,I} = - \vec{T}_{rem}^{\,I} \cdot \Delta t \, . \tag{B.29} \]
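One desaturation step of Eqs. (B.24) to (B.29) can be sketched as follows. The field vector, stored momentum and step size are hypothetical, and body and inertial frames are assumed to coincide for simplicity; the example illustrates that only the momentum component perpendicular to the field is off-loaded:

```python
import numpy as np

def mqt_offload_step(h_rw, t_gg, b_field, dt, m_max=1200.0):
    """One magnetic desaturation step following Eqs. (B.24)-(B.29);
    body and inertial frames are assumed to coincide in this sketch."""
    t_req = -h_rw / dt + t_gg                                    # Eq. (B.24)
    m_req = np.cross(b_field, t_req) / np.dot(b_field, b_field)  # Eq. (B.25)
    m_req = np.clip(m_req, -m_max, m_max)                        # Eq. (B.26): capacity
    t_mqt = np.cross(m_req, b_field)                             # Eq. (B.27)
    t_rem = t_req - t_mqt                                        # Eq. (B.28)
    return -t_rem * dt                                           # Eq. (B.29)

# hypothetical case: field along z, stored momentum partly parallel to the field
b = np.array([0.0, 0.0, 3.0e-5])        # [T]
h0 = np.array([10.0, 0.0, 5.0])         # [Nms]
h_after = mqt_offload_step(h0, np.zeros(3), b, 1000.0)
```

The x-component (perpendicular to the field) is dumped completely, while the z-component (parallel to the field) remains and has to be off-loaded by the thrusters.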

B.3. Scope and Operation of LOFT Simulator

The simulator delivery comprises four main parts. The simulator itself is an executable file, LOFT.exe, that can be run on any operating system without installation. Two folders, "Resources" and "Results", accompany the executable file. The first folder contains the configuration details for the simulation, i.e. the simulation input. The second folder serves as repository for the simulation data, i.e. the simulation output. In addition, the simulator delivery comprises a user manual, explaining the main models and functionalities and describing operation and handling of the simulator.

The structure of the LOFT Simulator is comparable to the one from the SPS, see Fig. 2.1, with the difference that the interface to the EPS and the trajectory loader are omitted and the observation plan loader is added. The simulator comprised three types of models covering orbital mechanics, environmental conditions and spacecraft specifics. The models were elaborated by the author of the current work and confirmed by the LOFT study team in collaboration with VECTRONIC Aerospace GmbH who implemented the simulator in Delphi. According to the identified challenges and use cases, see Section 3.2.2, the following customized models and simulator functionalities were specified for the final version 1.2.7 of the LOFT Simulator (08 July 2013, status end of Phase A):

• For the attitude and orbit control system:

– inertial pointing attitude,

– differentiation between spacecraft modes AHM and SLM,

– reaction wheel performance model with specified operation profile during slew (see Annex B.2),

– magnetic torquer performance model (see Annex B.2),


– gravity gradient torque model,

– slew manoeuvre execution according to specified spacecraft slew profile (see Fig. A.5) with

1. constant angular acceleration of spacecraft,

2. ramp with constant angular velocity of spacecraft, and

3. constant angular deceleration of spacecraft

to slew between observation targets.

• For the power system:

– solar array performance model (see Annex B.1), and

– battery performance model (see Annex B.1).

• For the consideration of the operational scenario:

– capability to read in and execute the mock observation plan according to the sequence of operations (AHM-SLM-AHM) described in Annex A, and

– differentiation between time spent in Earth occultation, duration of slew manoeuvres and time spent for actual observation of target.

• Fundamental models and functionalities as prerequisite for detailed simulation:

– physical properties of spacecraft and its equipment (i.e. inertial momentum),

– International Geomagnetic Reference Field (IGRF) to model the Earth’s magnetic field for the magnetic torquer operation,

– orbital mechanics, in particular propagation of the Earth during the year and the implementation of a Keplerian spacecraft orbit, J2 model, and

– coordinate system definitions and transformation matrices supplementing implemented models.

• For the user interface:

– comprehensive graphical user interface (GUI) with 3D-visualisation of the satellite in operation,

– output and saving of simulation data in a portable format (e.g. MS Excel, Matlab),

– configuration of simulation input and output (spacecraft and mission parameter settings, e.g. orbit parameters, inertial momentum of spacecraft, dimensions of solar array, operation status of reaction wheels, and simulator settings, e.g. simulation start date, record settings).

For the simulation, the simulator executes the specified models consecutively. For every time instance, the simulator-internal orbit propagator determines the orbital parameters of the spacecraft, the Earth and the Sun. This information triggers the environmental models that, in turn, trigger the spacecraft model calculations.


In more detail, executing the simulator loads the input files - all deposited in the "Resources" folder - with the following information that is defined by the user prior to each simulation, either via the configuration file or directly via an input mask on the GUI:

• spacecraft and mission parameter settings, e.g. orbit parameters, inertial momentum of spacecraft, dimension of solar array, operation status of reaction wheels,

• simulator settings, e.g. simulation start date, record settings,

• observation plan,

• graphical information, e.g. image of the Earth, spacecraft dimensions and colours, star map.

The GUI is opened in parallel. Its default set-up is pictured in Fig. B.1. A menu bar located in the upper left section, as used in various common programs like MS Office applications, allows intuitive handling of the tool. The major part of the user interface is composed of user-selectable windows that picture the evolving status of the mission and spacecraft. Due to the importance of the observation plan and the fact that AOCS support was the focus of the simulator use, the GUI has been tailored for that purpose. The upper left window pictures the observation status of the targets: the user continuously gets information about the observation status of the target and can track the evolution of the net observation time, the time spent in slew manoeuvres and in Earth occultation. The windows in the lower part of the user interface plot the evolution of the dynamic profiles of parameters of interest, e.g. the angular momentum of the four reaction wheels, in parallel to the on-going simulation, and the upper right window lists the current values of AOCS data of interest. The window in the upper middle is employed for 3D-visualization of the spacecraft in operation, including the spacecraft body coordinate system (red, green, blue) and the direction vector to the Sun (yellow). The user's perspective on the scenery is modifiable, putting either LOFT, the Earth or the Sun in the centre, and can be rotated and zoomed as needed. The windows of the GUI can be individually selected and the GUI can be set up by the user according to his needs. The simulator offers a wide range of windows containing plots and status information on parameters of interest. Their content and visual appearance was specified in collaboration with VECTRONIC Aerospace GmbH who brought in their experience with the SPS.

The simulation output is given in two ways: the evolution of chosen spacecraft and mission parameters is displayed in the GUI while the observation plan is run by the simulator as described above. If the user wishes, data can be recorded in addition. For this purpose, the user selects the data that shall be recorded before starting the simulation run; in case of LOFT, the data is grouped in two parts with reference to the related subsystem (Attitude, Power). The data is recorded in txt-files that are compatible with programs like MS Excel and MATLAB. The files are marked with the simulation start and date of execution for easy differentiation and are deposited in the "Results" folder.

To start the simulation, the user has to press the play button in the menu bar. According to the selected settings, the simulation is run at a defined pace with a specified simulation start and end date. All simulation settings can be changed during the on-going simulation.

Further details on the functionality and models of the LOFT Simulator are given in [116, 138, 139, 140].


Figure B.1.: LOFT Simulator Default GUI Screenshot (Final Version 1.2.7, 08 July 2013).

C. JUICE Mission and System Design Description - Basis of Mission Challenge Analysis

The JUpiter ICy moon Explorer (JUICE) is the first large-class (L-class) mission in ESA's Cosmic Vision programme [191]. Its goals are measurements of Jupiter's atmosphere and plasma environment and investigations of the surface and interior of the three icy moons Callisto, Europa and Ganymede, with a special emphasis on the latter. For this purpose, the JUICE spacecraft will carry ten instruments for remote sensing, geophysical, and in situ investigations of the Jovian system. [198]

Planned for launch in 2022 by an Ariane 5 ECA from Kourou in French Guiana, the spacecraft will use Venus and Earth gravity assists to arrive at the Jovian system eight years later. The spacecraft will then perform a two and a half year tour in the Jovian system while continuously observing Jupiter's atmosphere and magnetosphere. Two Europa and several Callisto and Ganymede fly-bys are part of the tour to study the moons and shape the spacecraft's trajectory. After this grand Jupiter tour, the spacecraft will start its Ganymede science phase of nearly ten months and orbit Ganymede in several elliptical and circular orbits with various altitudes until finally impacting on it. [199]

Table C.1 summarizes the main design characteristics of the mission.

Table C.1.: JUICE Overall Mission Profile [200].

Scientific goals: Explore the Jupiter system as an archetype for gas giants; characterise Ganymede, Europa and Callisto as planetary objects and potential habitats. Investigations during the Ganymede science phase on: extent of the ocean and its relation to the deep interior; ice shell structure including distribution of subsurface water; geology, composition and evolution of selected targets with very high resolution; global topography; local plasma environment; sinks and sources of the ionosphere and exosphere; deep interior. [199]

Payload:           10 instruments for remote sensing, geophysics & plasma and fields [201]
Orbit (GCO500):    500 km, nearly circular, 86.3° mean inclination
Mission duration:  11 yrs (incl. 3.5 yrs in Jovian system)
High Gain Antenna: body-fixed, X- and Ka-band
Launch:            Ariane 5 ECA (Kourou), 2022
Pointing:          three-axis stabilized
Ground station:    Cebreros, Malargue
Data rate:         1.4 Gbit/day


The spacecraft design solution elaborated in the assessment phase consisted of a one degree-of-freedom two-sided solar array (64 m²) for power generation (636 W EOL), equipped with triple-junction GaAs cells optimised for Low-Intensity-Low-Temperature application, and a battery with a capacity of 4750 Wh for energy storage. A body-fixed 3.2 m High Gain Antenna, capable of X- and Ka-band transmission, was planned to support the daily transmission of 1.4 Gbit to Earth. Four reaction wheels with a capacity of ±40 Nms, orientated in triangular pyramid configuration, were planned to be in charge of executing slew and yaw-steering manoeuvres and of compensating the gravity gradient torque induced by Ganymede. Wheel off-loading was foreseen to be performed by thrusters in regular intervals [202]. Figure C.1 presents the spacecraft design at delivery of the JUICE Simulator in Phase A/B1 in July 2014. The inertial momentum of the spacecraft was asymmetric: approximately five times higher along the x-axis and the z-axis (>40,000 kgm²) than along the y-axis.

Figure C.1.: JUICE Spacecraft and Reference Frame, July 2014 [202].

The operational scenario for the Ganymede science phase at nearly circular 500 km altitude (GCO500) has been planned to divide each Earth day into a science phase of 16 hours and a communication phase of eight hours. In the science phase the scientific instruments would be operated, in the communication phase the generated scientific and housekeeping data would be transmitted to Earth [203].

According to the used operational scenario, the normal spacecraft attitude mode contained two pointing attitudes [202]. In science phase, the spacecraft would be Ganymede-pointed (Xsat-axis pointing nadir towards the moon centre); in communication phase the High Gain Antenna (-Xsat-axis) would point towards the Earth. Reaction wheel off-loading by the thrusters would be performed right before and after the Science-Communication attitude switch. In the science phase, the spacecraft performs yaw-steering, depending on the sensitivity of the operating instruments. Due to the significant distance and the unfavourable orientation of the spacecraft to the Sun, yaw-steering was planned to orientate the solar panels optimally towards the Sun and as such to optimize the power budget. For this purpose, the spacecraft would rotate around the x-axis with an angle directly proportional to π/2 − β_Sun. Figure C.2 pictures the orientation of the orbit to Sun and Earth during GCO500 and the nominal flight orientation of the spacecraft without yaw-steering.

β_Sun is defined to be the unsigned angle between the orbital plane and the Sun direction [201, p.18]. It is used to define the orientation of the spacecraft orbit to Sun and Earth that evolves during the GCO500 period. Starting with β_Sun = 62°, the angle grows approximately linearly by 0.16°/day [201] and is chosen at the beginning of GCO500 such that Ganymede produces no eclipse.

Figure C.2.: JUICE GCO 500 Orbit and Beta Angle Definition.

The design consolidation was to be performed against a fictive four-month science scenario [204] that defined the operation of the ten instruments on-board. It was accompanied by further information on the power consumptions of the instruments [205]. Three operation modes were allocated to each of the ten instruments. The instruments would be either operating (OP), in stand-by mode (SB) or turned off but still heated (H). Depending on the mode, power consumption and generated data rate would vary:

• OP: consuming power and producing data,

• H: consuming power, no data production, and

• SB: consuming power, no data production.

D. JUICE Simulator Models and Functionalities

D.1. JUICE Mass Memory Model

To demonstrate the level of detail of the JUICE Simulator models, the mass memory model is presented as an example in the following. More details and models are specified in the JUICE Simulator Specification [141].

The current size of the occupied mass memory is the sum of C_{n,S}, the mass memory occupied by the science data, and C_{n,HK}, the mass memory occupied by the housekeeping data. So it is:

\[ C_{n} = C_{n,S} + C_{n,HK} \, . \tag{D.1} \]

The simulator is supposed to check whether C_n exceeds C_{max}, the capacity of the mass memory. It is:

\[ C_{n,S} = C_{n-1,S} + \left( R_{S} - R_{D/L,S} \right) \cdot \Delta t \, , \tag{D.2} \]

and

\[ C_{n,HK} = C_{n-1,HK} + \left( R_{HK} - R_{D/L,HK} \right) \cdot \Delta t \, . \tag{D.3} \]

RS represents the sum of the produced science data by the instruments and depends on the investigated science scenario. Each single instrument produces an instrument specific data rate when operating. RHK represents the payload and the spacecraft housekeeping data. The averaged HK data volume per day is 10 % of the assumed data volume of 1.4 Gbits, i.e. 140 Mbits/day, leading to a HK data rate of 1.62 kbps.
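The stated housekeeping rate follows directly from the daily volume; a one-line check (the 86400 s day length is the only added assumption):

```python
# Average housekeeping (HK) data rate: 10 % of the 1.4 Gbit daily volume,
# spread over the 86400 seconds of one Earth day
hk_volume_bits_per_day = 0.10 * 1.4e9        # 140 Mbit/day
r_hk_bps = hk_volume_bits_per_day / 86400.0  # ~1.62 kbps, as stated above
```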

Two bands are foreseen for the downlink, Ka-band and X-band. The averaged downlink data rate is day-dependent and is read out of a look-up table by the simulator. It is calculated as the quotient of the daily data volume foreseen by ESA and the total downlink time per day. The data in the look-up table additionally allows for the effect of re-transmission by a 3 % loss factor for the Ka-band volume and 7 % for the X-band volume. The reduced downlink rate for each band is calculated by:

\[ R_{Ka/X,red} = \frac{R_{Ka/X}}{1 + \mathit{lossfactor}} \, . \tag{D.4} \]

The complete available daily Ka-band data volume is allocated to science data. The X-band data volume is foreseen for both housekeeping and science data together. A maximum of 20 % of the X-band data volume is foreseen for the HK data transmission. In case the HK data volume is smaller, the residual data volume is allocated to the science data transmission.


The exact allocation of the X-band volume to science and HK data transmission can be obtained with the condition of optimal HK data transmission, C_{n,HK} = 0 bit. In that case Eq. (D.3) results in the required HK data rate:

\[ R_{D/L,HK,req} = \frac{C_{n-1,HK}}{\Delta t} + R_{HK} \, . \tag{D.5} \]

In case \( R_{D/L,HK,req} \leq 0.2 \cdot R_{X,red} \), it is:

\[ R_{D/L,HK} = R_{D/L,HK,req} \, , \tag{D.6} \]

and

\[ R_{D/L,S} = \left( R_{Ka,red} + R_{X,red} \right) - R_{D/L,HK} \, . \tag{D.7} \]

Otherwise it is:

\[ R_{D/L,HK} = 0.2 \cdot R_{X,red} \, , \tag{D.8} \]

and Eq. (D.7) results in:

\[ R_{D/L,S} = R_{Ka,red} + 0.8 \cdot R_{X,red} \, . \tag{D.9} \]
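The allocation logic of Eqs. (D.4) to (D.9) can be sketched as follows; the function names and the example band rates are assumptions for illustration, not JUICE link budget figures:

```python
def reduced_rate(rate, loss_factor):
    """Eq. (D.4): reduce the nominal band rate by the re-transmission overhead."""
    return rate / (1.0 + loss_factor)

def allocate_downlink(r_ka, r_x, c_hk_prev, r_hk, dt):
    """Split the available downlink between housekeeping (HK) and science data,
    following Eqs. (D.5)-(D.9)."""
    r_ka_red = reduced_rate(r_ka, 0.03)    # 3 % Ka-band loss factor
    r_x_red = reduced_rate(r_x, 0.07)      # 7 % X-band loss factor
    r_hk_req = c_hk_prev / dt + r_hk       # Eq. (D.5): rate that empties the HK memory
    if r_hk_req <= 0.2 * r_x_red:          # HK capped at 20 % of the X-band volume
        r_dl_hk = r_hk_req                 # Eq. (D.6)
    else:
        r_dl_hk = 0.2 * r_x_red            # Eq. (D.8)
    r_dl_s = r_ka_red + r_x_red - r_dl_hk  # Eqs. (D.7) and (D.9)
    return r_dl_hk, r_dl_s

# hypothetical nominal band rates [bps], empty HK memory, one-hour step
r_dl_hk, r_dl_s = allocate_downlink(50.0e3, 10.0e3, 0.0, 1620.0, 3600.0)
```

Note that Eq. (D.9) is the capped-case special form of the same sum: R_{Ka,red} + R_{X,red} − 0.2·R_{X,red} = R_{Ka,red} + 0.8·R_{X,red}, so a single expression covers both branches.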

In general, downlink to the Earth is planned for a maximum of eight hours per Earth day (see the Scheduler description in Annex C). The daily downlink time interval can be shorter or equal to zero in case:

• Jupiter occultation occurs,

• the minimum elevation of the ground station of 10° is not reached,

• the angle between the orbiter, the Earth and the Sun is smaller than 5°.

Please note that X-band downlink is only established if a ground station elevation of exactly 10° is given at the start of transmission. In case the downlink is delayed by Jupiter occultation, leading to elevations higher than 10° at transmission start, downlink in X-band is not established and the complete pass is lost for data transmission.

The ground stations are assumed to be one or two out of the ESA 35 m network comprising Cebreros, New Norcia and Malargue. Baseline ground station is planned to be Malargue, see Table C.1.

D.2. Scope and Operation of JUICE Simulator

The simulator delivery comprises four main parts. The simulator itself is an executable file, JUICE.exe, that can be run on any operating system without installation. Two folders, "Resources" and "Results", accompany the executable file. The first folder contains the configuration details for the simulation, i.e. the simulation input. The second folder serves as repository for the simulation data, i.e. the simulation output. In addition, the simulator delivery comprises a user manual, explaining the main models and functionalities and describing operation and handling of the simulator.

The structure of the JUICE Simulator is comparable to the one from the SPS, see Fig. 2.1, with the difference that the interface to the EPS and the trajectory loader are omitted and the operational scenario loader is added. The simulator comprised three types of models covering orbital mechanics, environmental conditions and spacecraft specifics. The models were elaborated by the author of the current work and confirmed by the JUICE team in collaboration with VECTRONIC Aerospace GmbH who implemented the simulator in Delphi. The company has been realizing the successive evolution of the JUICE Simulator in Phase B2/C/D as well. According to the identified challenges and use cases described in Section 3.3.2, the following customized models and general simulator functionalities were specified for the final Phase A/B1 JUICE Simulator version 1.0.2 (status at first delivery on 11 July 2014):

• for the attitude and orbit control system

– implementation of and differentiation between Nadir and Communication pointing attitude,

– reaction wheel performance model with specified operation profile during slew,

– wheel desaturation model,

– gravity gradient torque model,

– slew manoeuvre execution according to specified spacecraft slew profile with

1. constant angular acceleration of spacecraft,

2. ramp with constant angular velocity of spacecraft, and

3. constant angular deceleration of spacecraft

to slew between Nadir and Communication pointing attitude.

• for the power system

– solar array performance model,

– battery performance model, and

– differentiation of instrument power consumption according to instrument modes.

• for the communication system

– mass memory capacity model (see Annex D.1), including data transmission constraints on ground stations, and

– definition of ground stations.

• for consideration of the operational scenario

– capability to read in and execute operation scenario, i.e. the scheduler, and

– differentiation between time spent in slew, wheel desaturation, Nadir and Communication pointing attitude.

• fundamental models and functionalities as prerequisite for detailed simulation:

– physical properties of spacecraft and its equipment (i.e. inertial momentum),

– orbital mechanics, in particular


* propagation of Jupiter and Ganymede during the year and the implementation of a Kepler orbit for the spacecraft (Keplerian spacecraft orbit and orbit propagator),

* consideration of Jupiter occultations, and

* beta angle evolution of the spacecraft orbit, and

– coordinate system definitions and transformation matrices supplementing implemented models.

• for the user interface

– comprehensive graphical user interface (GUI) with 3D-visualisation of the satellite in operation,

– output and saving of simulation data in a portable format (e.g. MS Excel, Matlab),

– configuration of simulation input and output (spacecraft and mission parameter settings, e.g. orbit parameters, inertial momentum of spacecraft, dimensions of solar array, operation status of reaction wheels, and simulator settings, e.g. simulation start date, record settings).

Handling and operation of the JUICE Simulator is to be performed as for the LOFT Simulator, see Annex B.3. More details on the functionality and models of the JUICE Simulator at delivery in July 2014 are given in related documentation [142, 141]. For the successive developments, details are to be found in further documentation [112, 111, 143].

The default set up of the JUICE Simulator GUI is pictured in Fig. D.1.


Figure D.1.: JUICE Simulator Default GUI Screenshot (Final Version 1.0.2, 11 July 2014).

E. Tool and Simulator Quality Model: Quality Criteria Weighting as Quantified Expression of User Needs

As described in Chapter 4, several surveys were executed in the frame of this study to obtain results leading to the determination and specification of the TSQM, in particular the usability breakdown structure pictured in Fig. 4.4, and the determination of quality criteria weightings, see Table E.1, for different respondent groups.

Annex E.2 addresses the initial survey that was conducted at the Future Programs Department of Airbus Defence and Space GmbH in Friedrichshafen, cf. [122]. The corresponding questionnaire is presented in Annex E.2.2. The feedback is summarized in Table E.1 for User Group 1 (all respondents), and in addition differentiated according to tool and simulator usage, i.e. User Group 2 represents the frequent tool and simulator users and User Group 3 the occasional tool and simulator users, cf. [120, 122].

After the first survey, it was decided to broaden the perspective on eligible users and to exploit the pool of potential respondents assembled at two conferences related to the topic in order to consolidate the TSQM and quality criteria weightings. A larger number of respondents than achieved through the first survey was desired to consolidate the obtained results. The results are summarized in Annexes E.3 and E.4. The questionnaire presented in Annex E.4.4 was distributed among the participants of the "6th International Workshop in Systems & Concurrent Engineering for Space Applications (SECESA 2014)". Results of this interrogation are reflected in User Group 4 in Table E.1.

The questionnaire distributed at SECESA 2014 was the revised and translated version of the first survey. As one of the lessons learned, questions 62 to 65 were added, see Annex E.4.4, which allowed the assessment of the quality criteria Technical and Interface Maturity, Functional Suitability and Functional Correctness, and their respective parent criteria. To allow for comparison with the former survey and the corresponding user groups, the answers of User Group 4 were evaluated twice, once with the original set of questions common to all user groups, and a second time taking into account the additional questions 62 to 65. Where applicable, these values are marked with a superscript † in Table E.1. Details on the evaluation of the answered questionnaires, leading to the quality criteria weightings pictured in Table E.1, are outlined in Section 4.3.2 and further detailed by Nemetzade and Förstner [122].
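The exact evaluation procedure is outlined in Section 4.3.2 and by Nemetzade and Förstner [122]. As a minimal illustrative sketch of the underlying weighting scheme, a criterion weighting can be obtained by averaging the 0-to-10 ratings of each paraphrasing question over the respondents of a user group and then averaging over the questions assigned to the criterion; the question labels and ratings below are invented, not survey data:

```python
from statistics import mean

# Hypothetical example of the weighting scheme: 0-10 ratings given by
# four fictitious respondents of one user group to the two paraphrasing
# questions of a criterion (labels and values are invented, not survey
# data; the actual procedure is detailed in Section 4.3.2).
ratings_per_question = {
    "tool_is_maintained": [7, 8, 6, 7],
    "tool_is_validated":  [9, 8, 8, 9],
}

# Step 1: average each question over the respondents of the group.
question_means = {q: mean(r) for q, r in ratings_per_question.items()}

# Step 2: average the question means to obtain the criterion weighting;
# statements marked '*' in Table E.1 would be excluded at this step.
criterion_weighting = round(mean(question_means.values()), 1)
print(criterion_weighting)  # 7.8
```

The † values for User Group 4 correspond to re-running this aggregation with the additional questions 62 to 65 included in the question set.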


E.1. Tool and Simulator Quality Model: Quality Criteria Weighting - Survey Results

Table E.1.: Tool and Simulator Quality Model: Quality Criteria Weighting - Survey Results. For User Groups 1 to 3 Based on Work by Nemetzade and Förstner [122] and the Questionnaire Pictured in Annex E.2.2, for User Group 4 on the Questionnaire Presented in Annex E.4.4. Note that statements marked with a preceding * have not been considered in the overall weighting of the associated criterion but are presented to give the reader a fuller picture of the survey.

User group 1 2 3 4

Number of users 22 16 6 9

Criterion Meaning Relevance

Emotional Reliability Reliance in the serviceability/operational reliability of the tool is given and is realized by tool maturity and transparency. 7.7 8.1 6.6 7.7 / 8.5†

*The tool is widely accepted in the space sector. 4.2 3.6 5.7 4.2

*The tool is not solely used by the user. 5.2 5.1 5.6 7.8

Technical and Interface Maturity The tool is mature with regard to the implemented technical scope and functionalities and its user-friendliness. The maturity is ensured by regular maintenance, enhancements and validation of the tool. 7.2 7.3 6.9 7.2 / 7.7†

The tool is maintained regularly. 6.8 6.9 6.7 7.3

The tool is validated. 8.4 8.8 7.3 8.2

Transparency The sequence of operations the tool functionality is based on is well-known by means of adequate documentation and/or access to the source code. 7.9 8.4 6.5 8.0

Accessibility It is possible to extend the tool with self-programmed code. 7.3 8.0 5.3 8.0

Documentation Documentation on the implemented system models and calculations, i.e. specification of the models, is given. 8.5 8.9 7.7 7.1

Familiarization Handling and operation of the tool is possible within a short time and supported by sufficient documentation, self-descriptive functionalities and a functional appropriate visualization and structure. 6.3 6.4 6.9 8.2

The tool can be handled within few hours (up to 4 h) such that basic functionalities can be performed. 6.2 5.4 8.3 7.8

After a pause of more than 3 weeks, the tool can be used without any need for re-familiarization. 6.5 6.3 7.2 8.2

Documentation Support for tool usage is provided 7.1 7.0 7.8 7.5

via a user manual. 7.6 7.5 8.0 7.6

via a getting-started guide. 7.4 7.1 8.0 7.1

via tutorials. 6.4 6.3 6.7 7.8

via online help. 6.4 6.0 7.3 7.1

Self-Descriptiveness The tool can be handled largely intuitively. 6.1 6.2 6.0 9.1

FAVS1 Tool design and structure are user-friendly. 6.6 6.6 6.6 8.4

The tool design is clear. 7.1 7.3 6.8 7.8

The tool functionalities are self-descriptive. 6.1 6.0 6.4 9.1

Manageability The desired manageability of the tool is given and realized through the possibility to modify the tool, its portability and its interoperability with other tools. 6.6 6.6 6.5 7.8

Modifiability It is possible to modify the tool, realized by the accessibility to and modularity of the tool. 6.3 6.1 6.7 7.8

1Functional Appropriate Visualization and Structure

Accessibility It is possible to extend the tool with self-programmed code and modify the input data, i.e. parametrize the tool. 6.8 6.8 6.8 7.6

The tool is parametrized via a GUI. 5.9 5.3 7.7 8.2

The tool is parametrized via a configuration file. 7.1 7.0 7.3 6.4

It is possible to extend the tool with self-programmed modules. 7.3 8.0 5.3 8.0

Modularity The modular structure of the tool allows the expansion of the tool with ready-to-use modules. 5.7 5.4 6.7 8.0

Portability The tool can be run on several operating systems. 3.1 3.6 1.7 4.4

Interoperability The tool is compatible with other software products. 7.2 7.7 6.2 7.8

Interoperability is assured via an interface. 6.5 6.7 6.0 7.1

Post-processing of the results with other tools is possible through a compatible format of the output data. 8.0 8.7 6.3 8.4

Functional Reliability The technical scope of the tool is adequate for its purpose and it works reliably. 5.7 5.5 6.3 7.8 / 8.4†

Functional Suitability The technical scope of the tool is adequate for its purpose. 6.5 6.4 6.7 7.9 / 8.1†

Functional Completeness The tool comprises all required mathematical system models. 6.7 6.8 6.6 7.9

The tool supports analyses originating from the user’s own discipline. 7.7 7.6 7.7 7.8

The tool supports analyses originating from other disciplines. 6.3 6.0 7.0 6.9

The tool can be adapted to changing mission requirements. 6.2 6.7 5.0 9.1

*The tool can be used unchanged along several mission phases. 5.6 5.5 6.0 8.4

Functional Correctness The tool models are correctly set up, implemented and executed. - - - - / 9.0†

Modifiability The tool is modifiable and as such allows functional suitability of the tool to be achieved. 6.3 6.1 6.7 7.8

Accessibility It is possible to extend the tool with self-programmed code and modify the input data, i.e. parametrize the tool. 6.8 6.8 6.8 7.6

The tool is parametrized via a GUI. 5.9 5.3 7.7 8.2

The tool is parametrized via a configuration file. 7.1 7.0 7.3 6.4

It is possible to extend the tool with self-programmed modules. 7.3 8.0 5.3 8.0

Modularity The modular structure of the tool allows the expansion of the tool with ready-to-use modules. 5.7 5.4 6.7 8.0

Recoverability The tool has a recovery function after program crashes. 4.2 3.6 5.7 7.6

Reusability It is possible to re-use parts/modules of the tool for other missions. 8.2 8.4 7.7 8.0

It is possible to re-use the complete tool for other missions. 6.9 7.0 6.7 8.8


E.2. Survey at Future Programs Department at Airbus Defence and Space GmbH

The initial survey to identify the TSQM and the quality criteria weighting was performed at the Future Programs Department at Airbus Defence and Space GmbH, cf. [122].

E.2.1. Experiences and Lessons Learned

After several months of co-working on diverse studies in the Future Programs Department, the questionnaire as presented in Annex E.2.2 was distributed personally as a printed version to all colleagues of the department, accompanied by a short oral explanation of the objective of the survey. The response rate evolved from 19.6 % after five weeks to 31.4 % after four additional weeks, which included a reminder e-mail. The final response rate after twelve weeks, including additional personal reminders, was 43.1 %, corresponding to 22 returned questionnaires.
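The quoted percentages are mutually consistent if roughly 51 questionnaires were distributed; the figure 51 is an inference from 22/0.431, not stated in the text. A small sketch reproducing the quoted response-rate evolution under this assumption:

```python
# Reconstruction of the quoted response rates, assuming 51 distributed
# questionnaires (an inference from 22/0.431, not stated in the text).
distributed = 51
cumulative_returns = {"week 5": 10, "week 9": 16, "week 12": 22}

for milestone, n in cumulative_returns.items():
    rate = 100 * n / distributed
    print(f"{milestone}: {rate:.1f} %")  # 19.6 %, 31.4 %, 43.1 %
```

The intermediate return counts (10 and 16) are likewise back-calculated from the quoted percentages and are hypothetical.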

The moderate response rate might be explained by the length of the questionnaire and the corresponding effort to fill it out during daily work, as well as by a general lack of interest in the topic. The importance of follow-up activities and of a personal relation to the potential respondents is assumed to have increased the sense of obligation, leading to the rise in the response rate in the last couple of weeks of the survey.

Following the evaluation of the answers [122], the questionnaire was reworked for subsequent use, see Section E.4.1.

E.2.2. Questionnaire Version Airbus Defence and Space GmbH

In the following, the questionnaire as distributed at the Future Programs Department of Airbus Defence and Space GmbH in Friedrichshafen in 2013 is pictured. For authenticity, wording and layout are unchanged with regard to the originally circulated questionnaire, providing the same picture that faced the respondents.

The results of the survey are reflected for the User Groups 1 to 3 in Table E.1 based on the work by Nemetzade and Förstner [105].

Note: the notions of "tool" and "simulator" at the time of the survey were not yet as clearly distinguished as described in Section 1.1.


Fragebogen zur Erfassung der Bedürfnisse von Toolnutzern

Im Sinne des User-Centered Design steht der Nutzer eines Tools immer im Zentrum der Toolentwicklung, um größtmöglichen Nutzen zu bringen. Zentrales Kriterium ist dabei die sogenannte Gebrauchstauglichkeit (engl. usability) eines Tools. Mit Hilfe dieses Fragebogens soll nun erfasst werden, was für Sie als Nutzer wichtige Aspekte an einem Tool sind, um den weiten Begriff der Gebrauchstauglichkeit zu konkretisieren, Sie als Nutzer besser zu verstehen und letztlich diese Erkenntnis zu nutzen, um in Zukunft bei der Toolentwicklung mehr auf Ihre Bedürfnisse eingehen zu können. In diesem Sinne: vielen Dank, dass Sie an dieser Umfrage teilnehmen!

Um Missverständnisse zu vermeiden, folgen zunächst einige Erklärungen zu den Begrifflichkeiten, die im Fragebogen verwendet werden. Unter einem Tool wird im Folgenden ein Produkt verstanden, welches die Funktion hat, einen gewissen Input vom Toolnutzer mit mathematischen Formeln zu verarbeiten und darauf aufbauend einen Output zu generieren. Die mathematischen Formeln dienen der Modellierung des mit dem Tool zu untersuchenden Systems. Ein Tool ist somit ein Softwareprodukt, mit dem Analysen zum modellierten System durchgeführt werden können. In diesem Zusammenhang werden MS Excel oder MATLAB nicht als Tool bezeichnet, sondern können dazu genutzt werden, ein Tool aufzubauen. Selbiges gilt für Modellbildungstools wie ESATAN und Simulink und Modellbibliotheken wie das SPICE Toolkit. Ein mit Formeln hinterlegtes Tabellenblatt in MS Excel oder ein ausführbarer MATLAB-Code hingegen können als Tool verstanden werden, falls sie der Analyse eines Sachverhalts dienen.

Es wird unterschieden zwischen verschiedenen Toolarten. Do-it-yourself-Tools sind komplett selbst entwickelt. Beispielsweise ist ein mit Formeln hinterlegtes Tabellenblatt in MS Excel ein DIY-Tool. Werden bereits vorgefertigte Elemente (Modell ist bereits implementiert) zum Toolaufbau verwendet, wie zum Beispiel bei Simulink (Komponenten sind hier bereits modelliert und können aus den Modellbibliotheken entnommen werden), so ist das fertige Produkt ein Do-it-yourself light-Tool. Ready-to-use-Tools bedürfen nur noch der Parametrisierung, um gestartet und verwendet zu werden. Beispiele für solch fertige Tools sind STK oder auch ein Tool, welches ein Kollege in MATLAB geschrieben hat und Sie nun verwenden möchten.

Der Fragebogen besteht aus zwei Teilen. Im ersten Teil (Fragen A bis D) werden Sie gebeten, einige sehr offene Fragen zu beantworten. Dies geschieht hauptsächlich in Textform. Je ausführlicher Ihre Antwort ist, desto aussagekräftiger. Im zweiten Teil (Frage E) werden die Fragen konkreter und es reicht ein Kreuzchen als Antwort. Für Kommentare und Anregungen ist jedoch auch hier Platz und jede zusätzliche Bemerkung wird dankend in die Evaluierung des Fragebogens aufgenommen. Bei allen Kreuzchen-Fragen gilt: je größer die Zahl, die Sie ankreuzen, desto wichtiger ist Ihnen bei einem Tool das Kriterium bzw. die umschriebene Eigenschaft. Ein 0 bedeutet unwichtig, ein Kreuzchen bei 10 heißt sehr wichtig.

Eine letzte Anmerkung zum zweiten Teil, in dem erfragt wird, auf was Sie bei einem bereits bei Ihnen in Verwendung stehenden Tool bzw. bei einem Tool, welches Sie erstmals verwenden möchten, Wert legen: Fragen E.1 bis E.3 dienen der Feststellung der von Ihnen präferierten Toolart. Schlussfolgernd beziehen sich

alle weiteren Fragen auf die von Ihnen präferierte Toolart. Falls Sie strikter Befürworter der DIY-Tools sind, so sind einige Fragen für Sie sinngemäß nicht von Belang (zum Beispiel wird angenommen, dass bei einem selbst entwickelten Tool keine Einarbeitungszeit notwendig ist). Nichtsdestotrotz wird um eine generelle Beantwortung dieser Fragen gebeten. Bitte versuchen Sie alle Fragen zu beantworten. Falls Verständnisfragen bestehen, so bin ich über Nachfragen dankbar.

Kontakt: Büro FDH 8319, 07545/82733, [email protected]

Nochmals vielen Dank für Ihre Mitwirkung!


Datum:

Name:

Arbeits-/Projektposition:

Ich komme aus folgendem thematischen Fachgebiet:

Ich habe Erfahrung mit folgenden Softwareprodukten:

Ich arbeite im Schnitt ... Stunden/Woche mit dem Softwareprodukt ...:

A. Wie wichtig ist Ihnen, dass ein Tool die folgenden Charakteristiken mitbringt (bitte mit wenigen Stichworten angeben, was Sie unter den Begrifflichkeiten verstehen). Zur Erinnerung: 0 = unwichtig, 10 = sehr wichtig

0 2 4 6 8 10

1. Kompatibilität mit anderen Tools ☐ ☐ ☐ ☐ ☐ ☐

2. Gebrauchstauglichkeit ☐ ☐ ☐ ☐ ☐ ☐

3. Nützlichkeit ☐ ☐ ☐ ☐ ☐ ☐

4. Effizienz ☐ ☐ ☐ ☐ ☐ ☐

5. Dokumentation ☐ ☐ ☐ ☐ ☐ ☐

6. Aufgabenangemessenheit ☐ ☐ ☐ ☐ ☐ ☐

7. Effektivität ☐ ☐ ☐ ☐ ☐ ☐

8. Handhabbarkeit ☐ ☐ ☐ ☐ ☐ ☐

9. Zuverlässigkeit ☐ ☐ ☐ ☐ ☐ ☐

10. Individualisierbarkeit ☐ ☐ ☐ ☐ ☐ ☐

11. Wiederverwendbarkeit ☐ ☐ ☐ ☐ ☐ ☐

B. Was ich an (vorhandenen) Tools gar nicht schätze (falls möglich bitte konkrete Tools aufführen):

C. Was ich an (vorhandenen) Tools sehr schätze (falls möglich bitte konkrete Tools aufführen):

D. Was ich schon immer bei einem Tool vorfinden wollte:


E. Wie wichtig ist es Ihnen, dass bei einem (neuen) Tool, das Sie verwenden (würden), folgende Punkte erfüllt sind?

0 2 4 6 8 10

1. Ich habe alle Modelle selbst implementiert (Do-it-yourself-Tool, Bsp: Excel-Tabellenblatt). ☐ ☐ ☐ ☐ ☐ ☐

2. Ich habe vorgefertigte Module verwendet, um mein Tool zusammenzustellen (Do-it-yourself-Light-Tool, Bsp: mit Simulink erstelltes Tool). ☐ ☐ ☐ ☐ ☐ ☐

3. Das Tool ist fertig implementiert und muss lediglich von mir parametrisiert werden (Ready-to-use-Tool, Bsp. STK). ☐ ☐ ☐ ☐ ☐ ☐

4. Ich habe alle implementierten Modelle selbst spezifiziert. ☐ ☐ ☐ ☐ ☐ ☐

5. Das Tool ist mit vorgefertigten Modulen erweiterbar. ☐ ☐ ☐ ☐ ☐ ☐

6. Das Tool ist mit von mir programmierten Modulen/Code erweiterbar. ☐ ☐ ☐ ☐ ☐ ☐

7. Die von mir nicht implementierten Modelle sind dokumentiert. ☐ ☐ ☐ ☐ ☐ ☐

8. Ich kann effizient mit dem Tool arbeiten. ☐ ☐ ☐ ☐ ☐ ☐

9. Ich kann effektiv mit dem Tool arbeiten. ☐ ☐ ☐ ☐ ☐ ☐

10. Mir macht es Spaß mit dem Tool zu arbeiten. ☐ ☐ ☐ ☐ ☐ ☐

11. Ich bin zufrieden mit dem Tool. ☐ ☐ ☐ ☐ ☐ ☐

12. Das Tool genießt eine breite Akzeptanz in der Firma. ☐ ☐ ☐ ☐ ☐ ☐

13. Das Tool genießt eine breite Akzeptanz in der Raumfahrtbranche. ☐ ☐ ☐ ☐ ☐ ☐

14. Das Tool ist seit mehr als drei Jahren von mir im Einsatz. ☐ ☐ ☐ ☐ ☐ ☐

15. Das Tool ist seit mehr als drei Jahren von der Firma im Einsatz. ☐ ☐ ☐ ☐ ☐ ☐

16. Das Tool ist seit mehr als fünf Jahren von mir im Einsatz. ☐ ☐ ☐ ☐ ☐ ☐

17. Das Tool ist seit mehr als fünf Jahren von der Firma im Einsatz. ☐ ☐ ☐ ☐ ☐ ☐

18. Das Tool ist seit mehr als zehn Jahren von mir im Einsatz. ☐ ☐ ☐ ☐ ☐ ☐

19. Das Tool ist seit mehr als zehn Jahren von der Firma im Einsatz. ☐ ☐ ☐ ☐ ☐ ☐

20. Mit dem Tool ist eine spezifische Analyse von Fragestellungen aus meinem Fachbereich möglich. ☐ ☐ ☐ ☐ ☐ ☐

21. Mit dem Tool ist eine Analyse von Fragestellungen aus anderen Fachbereichen möglich. ☐ ☐ ☐ ☐ ☐ ☐

22. Falls Frage 21 nicht mit nicht wichtig beantwortet wurde, spezifizieren Sie bitte die Wichtigkeit der Fachbereiche:

a. Thermal ☐ ☐ ☐ ☐ ☐ ☐

b. AOCS ☐ ☐ ☐ ☐ ☐ ☐

c. Communication ☐ ☐ ☐ ☐ ☐ ☐

d. Power ☐ ☐ ☐ ☐ ☐ ☐

e. Payload ☐ ☐ ☐ ☐ ☐ ☐

f. Struktur ☐ ☐ ☐ ☐ ☐ ☐

g. Propulsion ☐ ☐ ☐ ☐ ☐ ☐

23. Die implementierten Modelle weisen eine hohe Detailtiefe aus.

a. Thermal ☐ ☐ ☐ ☐ ☐ ☐

b. AOCS ☐ ☐ ☐ ☐ ☐ ☐

c. Communication ☐ ☐ ☐ ☐ ☐ ☐

d. Power ☐ ☐ ☐ ☐ ☐ ☐

e. Payload ☐ ☐ ☐ ☐ ☐ ☐

f. Struktur ☐ ☐ ☐ ☐ ☐ ☐

g. Propulsion ☐ ☐ ☐ ☐ ☐ ☐

24. Das Tool wurde validiert. ☐ ☐ ☐ ☐ ☐ ☐

25. Mit dem Tool sind dynamische Simulationen über die komplette Missionszeit (bis zu mehreren Jahren) möglich. ☐ ☐ ☐ ☐ ☐ ☐

26. Mit dem Tool sind dynamische Simulationen über die komplette Missionszeit in kurzer Rechenzeit (höchstens über Nacht) möglich. ☐ ☐ ☐ ☐ ☐ ☐

27. Das Tool läuft auf verschiedenen Betriebssystemen. ☐ ☐ ☐ ☐ ☐ ☐

28. Das Tool hat nach Abstürzen eine Wiederherstellfunktion (ähnlich zu MS Office). ☐ ☐ ☐ ☐ ☐ ☐

29. Das Tool besitzt eine Schnittstelle zu anderen Softwareprodukten. ☐ ☐ ☐ ☐ ☐ ☐

30. Falls Frage 29 nicht mit nicht wichtig beantwortet wurde, spezifizieren Sie bitte die Wichtigkeit der Schnittstelle und die Art (Input/Output):

a. MS Excel Input ☐ Output ☐  ☐ ☐ ☐ ☐ ☐ ☐

b. Catia Input ☐ Output ☐  ☐ ☐ ☐ ☐ ☐ ☐

c. Matlab Input ☐ Output ☐  ☐ ☐ ☐ ☐ ☐ ☐

d. Sonstiges Input ☐ Output ☐  ☐ ☐ ☐ ☐ ☐ ☐

31. Postprocessing der Ergebnisse mit anderen Softwareprodukten ist möglich. ☐ ☐ ☐ ☐ ☐ ☐

32. Falls Frage 31 nicht mit nicht wichtig beantwortet wurde, spezifizieren Sie bitte das Softwareprodukt, die Wichtigkeit und die Art des Postprocessing (Visualisierung, Einlesen von Datenpunkten als Input für weitere Simulationen, usw.)

a. MS Excel ☐ ☐ ☐ ☐ ☐ ☐

b. Catia ☐ ☐ ☐ ☐ ☐ ☐

c. Matlab ☐ ☐ ☐ ☐ ☐ ☐

d. Sonstiges ☐ ☐ ☐ ☐ ☐ ☐

33. Das Tool kann in seiner Funktionalität komplett für ein anderes Projekt wiederverwendet werden. ☐ ☐ ☐ ☐ ☐ ☐

34. Einzelne Module des Tools können für ein anderes Produkt wiederverwendet werden. ☐ ☐ ☐ ☐ ☐ ☐

35. Falls Frage 34 nicht mit nicht wichtig beantwortet wurde, bewerten Sie bitte die Module nach Fachbereich:

a. Thermal ☐ ☐ ☐ ☐ ☐ ☐

b. AOCS ☐ ☐ ☐ ☐ ☐ ☐

c. Communication ☐ ☐ ☐ ☐ ☐ ☐

d. Power ☐ ☐ ☐ ☐ ☐ ☐

e. Payload ☐ ☐ ☐ ☐ ☐ ☐

f. Struktur ☐ ☐ ☐ ☐ ☐ ☐

g. Propulsion ☐ ☐ ☐ ☐ ☐ ☐

36. Das Tool kann über mehrere Projektphasen hinweg verwendet werden. ☐ ☐ ☐ ☐ ☐ ☐

37. Das Tool kann über mehrere Projektphasen hinweg in seiner Funktionalität unverändert verwendet werden. ☐ ☐ ☐ ☐ ☐ ☐

38. Das Tool kann über mehrere Projektphasen hinweg mit einigen Änderungen in den Funktionalitäten verwendet werden. ☐ ☐ ☐ ☐ ☐ ☐

39. Das fertige Tool bzw. von mir nur teilweise implementierte Tool

a. bringt ein Manual mit. ☐ ☐ ☐ ☐ ☐ ☐

b. hat einen Getting-Started Guide. ☐ ☐ ☐ ☐ ☐ ☐

c. stellt Tutorials zur Verfügung. ☐ ☐ ☐ ☐ ☐ ☐

d. stellt eine Online-Hilfe zur Verfügung. ☐ ☐ ☐ ☐ ☐ ☐

e. hat eine aktive Online Community bzw. Foren. ☐ ☐ ☐ ☐ ☐ ☐

f. hat eine Programm-interne Hilfe. ☐ ☐ ☐ ☐ ☐ ☐

40. Bei Problemen erhalte ich Support vom Entwickler vor Ort. ☐ ☐ ☐ ☐ ☐ ☐

41. Bei Problemen erhalte ich Support vom Entwickler über Distanz (Telefon, Email, Remotedesktop). ☐ ☐ ☐ ☐ ☐ ☐

42. Ich kann mit dem Tool innerhalb weniger Stunden (bis zu 4h) umgehen und die Basisfunktionalitäten ausführen. ☐ ☐ ☐ ☐ ☐ ☐

43. Ich kann mit dem Tool innerhalb weniger Minuten (bis zu 60 min) umgehen und die Basisfunktionalitäten ausführen. ☐ ☐ ☐ ☐ ☐ ☐

44. Ich kann mit dem Tool nach längerer Pause (mind. 3 Wochen) ohne Einarbeitungszeit weiterarbeiten. ☐ ☐ ☐ ☐ ☐ ☐

45. Ich kann mit dem Tool nach längerer Pause (mind. 3 Wochen) mit wenig Einarbeitungszeit (bis zu 1h) weiterarbeiten. ☐ ☐ ☐ ☐ ☐ ☐

46. Ich kann mit dem Tool nach längerer Pause (mind. 3 Wochen) mit Einarbeitungszeit (ab 1h) weiterarbeiten. ☐ ☐ ☐ ☐ ☐ ☐

47. Das Tool ist in seinem Aufbau weitestgehend (zu 3/4) selbsterklärend. ☐ ☐ ☐ ☐ ☐ ☐

48. Das Tool ist in seinen Funktionalitäten weitestgehend (zu 3/4) selbsterklärend. ☐ ☐ ☐ ☐ ☐ ☐

49. Die Parametrisierung des Tools geschieht

a. über eine Eingabemaske auf der graphischen Oberfläche. ☐ ☐ ☐ ☐ ☐ ☐

b. über eine Konfigurationsdatei, die geladen wird. ☐ ☐ ☐ ☐ ☐ ☐

c. direkt im Quellcode. ☐ ☐ ☐ ☐ ☐ ☐

50. Bei einer dynamischen Simulation verfügt das Tool über eine 3D-Visualisierung des untersuchten Satelliten in seinem Orbit. ☐ ☐ ☐ ☐ ☐ ☐

51. Das Tool hat ein ansprechendes Design. ☐ ☐ ☐ ☐ ☐ ☐

52. Das Tool hat ein klares Design. ☐ ☐ ☐ ☐ ☐ ☐

53. Das Tool wird regelmäßig weiterentwickelt. ☐ ☐ ☐ ☐ ☐ ☐

54. Das Tool hat einen festen Ansprechpartner, welcher es regelmäßig wartet. ☐ ☐ ☐ ☐ ☐ ☐

55. Wenn ich keine Zeit habe, hilft jemand anderes bei Toolupdate/-erweiterung. ☐ ☐ ☐ ☐ ☐ ☐

56. Das Tool wird nicht von mir allein genutzt. ☐ ☐ ☐ ☐ ☐ ☐

57. Das Tool soll sich an die ändernden Missionsanforderungen anpassen und weiterentwickeln. ☐ ☐ ☐ ☐ ☐ ☐

58. Falls Frage 57 nicht mit nicht wichtig beantwortet wurde:

a. Das Tool kann von mir an die geänderten Missionsanforderungen angepasst werden. ☐ ☐ ☐ ☐ ☐ ☐

b. Das Tool wird von jemand anderen angepasst, da mir für Toolupdates die Zeit fehlt. ☐ ☐ ☐ ☐ ☐ ☐


E.3. Survey at DLRK 2014 Conference

After the survey at the Future Programs Department at Airbus Defence and Space GmbH, the next survey on the TSQM was conducted within the context of the conference "Deutscher Luft- und Raumfahrtkongress 2014" (DLRK 2014) that took place in Augsburg, Germany, from 16 to 18 September 2014.

E.3.1. Preparatory Step: Adaptation of Questionnaire to Potential Respondents

Expecting an audience not necessarily specialized in the unmanned space sector, the questionnaire used for the first survey, cf. Annex E.2.2, was re-worked into a second, space-independent version to address a broader audience. For this purpose,

• the introductory part of the questionnaire was re-visited and partly re-formulated, replacing space-specific explanations by generic ones,

• questions 22 and 35 were deleted,

• the options from question 23 were omitted and

• questions 13, 25 and 26 were re-worded, replacing space-specific expressions.

E.3.2. Experiences and Lessons Learned

After the author’s presentation of the results of the first survey, cf. [122], the audience was invited and encouraged to contribute to the topic by participating in the subsequent survey. The two mentioned versions of the questionnaire, i.e. a space-specific and a space-independent version, were made available at the presenter’s desk. Of approximately 50 listeners, roughly 5 to 10 picked up a questionnaire; the response rate was zero. Several reasons might explain the lack of participation, i.e. firstly the low interest in the survey in general and secondly the missing returns. To stimulate the interest of a broader part of the audience, it might, retrospectively, have been more fruitful to distribute the questionnaire directly to the audience to increase the sense of obligation. However, this was omitted to assure that the questionnaire version corresponding to the respondent’s background was handed out. Furthermore, the emotional relation to the topic in general is assumed to be rather moderate, given the potentially broad background of the audience. The low participation might be explained in analogy to the experience from the first survey, e.g. by the length of the questionnaire. Also, being back in the usual working environment after the conference might have put critical work first on the priority list, so that the questionnaire was forgotten. A further obstacle might have lain in the logistical constraints: the questionnaire was distributed on the last day of the conference during the late morning, hindering its personal return. Though the author’s contact details were quoted in the questionnaire, the additional effort to send the filled-out questionnaire by postal service or electronically as a scan might have contributed to the lack of responses.
In general, this experience emphasizes the importance of follow-up activities and of a personal relation to the potential respondents to increase the sense of obligation (as experienced in the survey performed at Airbus Defence and Space GmbH [122]) and thus achieve a high response rate. At the same time, the interest of the respondents in the topic is not to be underestimated, as the response rate of the SECESA 2014 survey, see the subsequent Section E.4, demonstrates.

E.4. Survey at SECESA 2014 Conference

A second, subsequent effort was made within the context of the "6th International Workshop in Systems & Concurrent Engineering for Space Applications (SECESA 2014)" that took place in Stuttgart, Germany, from 8 to 10 October 2014.

E.4.1. Preparatory Step: Improvement of Questionnaire

In essence, the questionnaire used for the first survey, cf. Annex E.2.2, was re-used for the subsequent surveys. However, to address a larger group of respondents present at international conferences, the questionnaire was translated to English for SECESA 2014. This opportunity was seized to improve the questionnaire by considering the lessons learned from the first survey [122].

• Two additional bullets, "I mostly work during the following mission phase(s)" and "I have experience in the following mission phase(s)", were introduced in the personal section to interrogate the respondents’ experience with regard to the mission phases. The aim was a more detailed insight into the respondents’ background in order to put the answers into a more differentiated context.

• Transparency and Maturity were introduced in Section A due to their significance discovered during the evaluation of the results of the first survey. Transparency, in particular, had not been specified as a quality criterion in the previous questionnaire and emerged as a result of the evaluation. The criterion adaptability to user character was omitted as its wording had tended to be misunderstood in the first survey.

• In view of the lack of responses to the filter questions 22, 30, 32, 35 and 58 of Section E in the first survey, assumed to result from a misunderstanding of the double negation in these questions [122], they were re-formulated and their filter character was omitted, see questions 22, 31, 33, 36 and 60/61.

• Options Mission Analysis and Operations Planning, i.e. points h and i of questions 22, 23 and 36 in Section E, were added to the list of disciplines.

• Wording in several questions of Section E was underlined, e.g. in questions 38 and 39 and in questions 42, 43 and 44, respectively, to highlight the changes in subsequent questions with very similar wording. As several respondents of the first survey, contrary to expectations, did not differentiate their rating of these subsequent questions, it was assumed that the differences had been overlooked without this additional emphasis.

• Question 50 was added to Section E to support the assessment of the importance of the quality criterion familiarization.


• Questions 25, 62, 63, 64 and 65 were added to Section E of the questionnaire to support the assessment of the importance of the quality criteria Technical and Interface Maturity, Functional Suitability, Functional Correctness, and their respective parent criteria.

E.4.2. Experiences and Lessons Learned

After the author’s presentation on the lessons learned with the user-centred, customized LOFT Simulator, cf. Section 3.2 and [105], the SECESA audience of approximately 60 people was invited and encouraged to contribute to the topic by participating in the survey. The speech took place in a plenary session on the afternoon of the first day of the conference. Though the presentation was not explicitly about the quality criteria, it emphasized the importance of knowing and considering the user needs for the success of the LOFT Simulator, bridging to the importance of assessing the user needs systematically by questionnaires.

As a lesson learned from DLRK 2014, see Section E.3, the questionnaire as pictured in Annex E.4.4 was personally passed around to the audience. Approximately 80 % of the conference participants present picked one up. Nine filled-in copies came back on the same day they were distributed. Though contact details were indicated, no further copies were received after the conference.

The question may arise why the response rate of the survey initiated at the SECESA 2014 conference stood in positive contrast to the reactions at the DLRK 2014 convention. Firstly, the audiences differ. While the annual DLRK conference gathers several hundred participants from the entire German aerospace sector, often with a focus on the larger aeronautical business, the international, biennial SECESA meeting focuses on the space sector and the challenges encountered there. The specific, more homogeneous group of people attending the SECESA conference is likely to be highly interested in tools and simulators in general, to have a lot of experience with them and to be open-minded towards new concepts and approaches. They potentially stand in contrast to the rather inhomogeneous DLRK audience with its various interests. The probability of coming across respondents sympathizing with the topic out of their own experience is therefore estimated to be higher at SECESA than at DLRK, which explains the positive resonance.

Further reasons for the satisfying response rate are assumed to be

• the distribution method of the questionnaire, which potentially increased the obligation to participate: having the form physically passed from hand to hand removes the effort of actively picking up a questionnaire that is merely laid out, and establishes direct and personal contact with the respondents,

• the timing of the speech on the first day of the conference in the large plenary session, where the concentration and motivation of the participants are assumed to be higher than in one of eight splinter sessions on the last day of a 3-day conference, as was the case for the trial within the frame of the DLRK 2014 conference,

• the possibility to fill in the questionnaire and return it on site, omitting the need to send it back to the author or to fill it in at work, and


• the existing acquaintance with several participants from former events, which potentially led to participation in the survey out of a decision to support the author’s research.

E.4.3. Survey Results

E.4.3.1. Evaluation of Impacts of Questionnaire Improvements

The omission of the filter questions led to the desired result that the questions were not skipped but answered. Underlining the wording to emphasize the differences in consecutive though similar questions did not lead to the desired result that the questions were scored differently. Nevertheless, it is assumed that the additional highlight mitigated a potential source of misunderstandings. Question 50 was answered exactly the same way as question 48, its synonymous question, confirming its equivalent meaning.

E.4.3.2. Evaluation of Survey Results from Section A of Questionnaire - Validity of Obtained Results

The resulting rating of the quality criteria from Section A of the questionnaire is pictured in Table E.4. In comparison with the results from Section E, which paraphrases the quality criteria as questions, see Table E.1, discrepancies in the weighting can be observed. Documentation, for example, is rated lower when rated directly in Section A (6.7) than by means of the indirect questions of Section E (7.1 to 8.9). The same holds for Maturity (6.5 compared to 6.9 to 7.7) and Transparency (6.3 compared to 6.5 to 8.4). On the other hand, some parameters are rated higher directly than through the paraphrasing questions, for instance Functional Suitability (8.5 compared to 6.4 to 8.1) and Interoperability (8.0 compared to 6.2 to 7.8).

Table E.4.: SECESA 2014 - Survey Results, Questionnaire Part A.

Quality Criteria                 Weighting    Quality Criteria    Weighting
Compatibility with other tools   8.0          Effectivity         9.0
Usability                        8.0          Manageability       7.5
Utility                          9.6          Reliability         9.0
Efficiency                       9.1          Transparency        6.3
Documentation                    6.7          Reusability         7.0
Functional Suitability           8.5          Maturity            6.5

One reason for the discrepancies is assumed to lie in the interpretation of the quality criteria wording. The given answers and comments on the single criteria suggest that the criteria were partly misinterpreted by the respondents. As in the case of the first survey [122], this result emphasizes the importance of wording and the underlying source of misinterpretation. It confirms the approach of paraphrasing the quality criteria in questions as an expression of their diverse implementations.

Another reason for the observed discrepancies might lie in the elusive, yet versatile character of the quality criteria. The respondent's answer might have been related to one specific implementation of the criteria he/she thought of in the moment of answering, not taking into account the other possible realizations. Therefore, the validity of the resulting rating has to be questioned. At the utmost, single answers to questions in Section E can be consulted to confirm the rating of the criteria resulting from the answers in Section A.

The observed discrepancies between the directly weighted criteria (Section A) and the weighting obtained through the paraphrased criteria (Section E) emphasize the validity of the executed approach of assessing the topic from various perspectives. At the same time, they reveal a general limitation of the overall approach. To the extent that the respondents might have had a specific implementation of the criteria in mind while working on Section A, omitting other possible criteria realizations, it cannot be guaranteed that Section E of the questionnaire covered all relevant realizations of the quality criteria, either. Based on the highly elusive, yet versatile character of the quality criteria, the consideration of all possible realizations is hard to assure. Therefore, awareness of this limitation is important for the application of the survey results. Nevertheless, the criteria rating resulting from Section E is considered to be reliable to the extent that the numerical values are put into the context of the underlying, concrete questions and not generalized.

Furthermore, it can be observed that Utility achieves 9.6 points, the highest rating of all criteria. In comparison, Usability is rated with an importance of 8.0 points. The high rating of utility was expected. At first view, the lower rating of usability was surprising, though. Since usability embraces utility and adds aspects like manageability to it, it was expected to rate at least nearly as high as utility. However, looking into the comments, the low ratings can possibly be traced back to an interpretation of usability as solely manageability. Again, this observation confirms the importance of wording and a common understanding as a crucial basis for the reliability of the TSQM.

E.4.4. Questionnaire Version SECESA 2014

In the following, the questionnaire as distributed in the frame of the SECESA 2014 conference is reproduced. Wording and layout are unchanged with regard to the originally circulated questionnaire for authenticity reasons, to provide the same picture that faced the respondents.

Note: the notions of "tool" and "simulator" at the time of the survey were not as distinguished as described in Section 1.1.

Note: within the pictured TSQM approach, the notion effectivity has been used as a synonym of the more common notion effectiveness. It is recognized that its usage in the questionnaire, see Section A Point 7 in Annex E.4.4, might have caused confusion and affected the results of the survey. For future work, it is recommended to replace the words accordingly.

Questionnaire for the assessment of tool user needs

Following the user-centered design approach, the user shall always be in the center of the tool development process to obtain a product which serves the user as much as possible. In order to do so, it is important for the tool developers to assess and understand the tool user’s needs to finally integrate them into the tool. This questionnaire is developed for this purpose in order to enhance the understanding between tool devel- opers and tool users. The questionnaire is not related to a specific tool but shall assess user needs in general for future tool development work.

A tool is defined as a product which processes an input from the tool user by means of mathematical for- mulas to an output. The formulas model the system which shall be analyzed with the help of the tool. In this context, MS Excel or MATLAB are not tools by themselves but can be used to set up tools. The same is true for modelling tools like Simulink and model libraries like the SPICE toolkit. An MS Excel data sheet with integrated formulas or an executable MATLAB code, however, can be understood as tool as long as they are used for analysis related reasons.

Please try to answer ALL questions. In case a question is not clear to you, please contact me for clarification or leave a comment.

Please note that all results will be integrated anonymously in my PhD thesis “Characterization and appli- cation of user-centered system tools as systems engineering support for satellite projects”. The thesis is academically supervised by Prof. Dr.-Ing. Roger Förstner from the Institute of Space Technology and Space Applications from the Bundeswehr University Munich and will be published next year.

Thank you very much for your participation!

Tanja Nemetzade

Universität der Bundeswehr München Institute of Space Technology and Space Applications LRT 9.1 Werner-Heisenberg-Weg 39 85577 Neubiberg Germany

Email: [email protected] Tel.: +49 89 6004 4830


Name:

Technical background (astronautics, electronics, ...):

Job responsibilities:

I mostly work during the following mission phase(s):

I have experience in the following mission phase(s):

I have experience with the following software products:

In average, I work ... h/week with the following software products:

A. When using a (new) tool, how important are the following tool characteristics for you? Please describe the cited characteristics shortly in your own words! 0 = not important, 10 = very important

0 2 4 6 8 10

1. Compatibility with other tools JJJJJJ

2. Usability JJJJJJ

3. Utility JJJJJJ

4. Efficiency JJJJJJ

5. Documentation JJJJJJ

6. Functional suitability JJJJJJ


0 2 4 6 8 10

7. Effectivity JJJJJJ

8. Manageability JJJJJJ

9. Reliability JJJJJJ

10. Transparency JJJJJJ

11. Reusability JJJJJJ

12. Maturity JJJJJJ

B. What I do not like about (existing) tools (if possible please indicate the corresponding tools):

C. What I do like about (existing) tools (if possible please indicate the corresponding tools):

D. What I always wished a tool would come along with:


E. When using a (new) tool, how important are the following aspects for you? 0 = not important, 10 = very important

0 2 4 6 8 10

1. The tool was completely created by myself, including the programming (e.g. MATLAB code). JJJJJJ

2. I used ready-to-use modules to compose my tool (e.g. tool created with Simulink). JJJJJJ

3. The tool is ready-to-use. I only have to parametrize it (e.g. STK). JJJJJJ

4. I specified all implemented models by myself. JJJJJJ

5. The tool can be extended by ready-to-use modules. JJJJJJ

6. The tool can be extended by modules/code which I programmed. JJJJJJ

7. All models which are not implemented by myself are documented. JJJJJJ

8. I can work efficiently with the tool. JJJJJJ

9. I can work effectively with the tool. JJJJJJ

10. I have fun working with the tool. JJJJJJ

11. I am pleased with the tool. JJJJJJ

12. The tool is widely accepted in my company. JJJJJJ

13. The tool is widely accepted in the space sector. JJJJJJ

14. I have been using the tool for more than three years. JJJJJJ

15. The tool is in use within my company for more than three years. JJJJJJ

16. I have been using the tool for more than five years. JJJJJJ

17. The tool is in use within my company for more than five years. JJJJJJ

18. I have been using the tool for more than ten years. JJJJJJ


0 2 4 6 8 10

19. The tool is in use within my company for more than ten years. JJJJJJ

20. I can analyze problems specific to my discipline with the tool. JJJJJJ

21. I can analyze problems specific to other disciplines than mine with the tool. JJJJJJ

22. The following disciplines are analyzable with the tool:

a. Thermal JJJJJJ

b. AOCS JJJJJJ

c. Communication JJJJJJ

d. Power JJJJJJ

e. Payload JJJJJJ

f. Structure JJJJJJ

g. Propulsion JJJJJJ

h. Mission analysis JJJJJJ

i. Operations planning JJJJJJ

23. The implemented discipline models are highly detailed.

a. Thermal JJJJJJ

b. AOCS JJJJJJ

c. Communication JJJJJJ

d. Power JJJJJJ

e. Payload JJJJJJ

f. Structure JJJJJJ

g. Propulsion JJJJJJ

h. Mission analysis JJJJJJ

i. Operations planning JJJJJJ


0 2 4 6 8 10

24. The tool has been validated. JJJJJJ

25. The tool has been verified. JJJJJJ

26. Dynamic simulations over the complete mission time (up to several years) are possible with the tool. JJJJJJ

27. Dynamic simulations over the complete mission time (up to several years) are possible with the tool in short time. JJJJJJ

28. The tool can be run on several operating systems. JJJJJJ

29. The tool has a recovery function after program crashes (similar to MS Office). JJJJJJ

30. The tool provides an interface to other software products. JJJJJJ

31. An interface to the following software products is given (please specify if input and/or output):

a. MS Excel Input J Output J JJJJJJ

b. Catia Input J Output J JJJJJJ

c. Matlab Input J Output J JJJJJJ

d. Other Input J Output J JJJJJJ

32. It is possible to post-process the results of the tool simulation run with other software products. JJJJJJ

33. Postprocessing (please specify: visualisation, reading in of data points as input for further simulations, etc.) is possible with

a. MS Excel JJJJJJ

b. Catia JJJJJJ

c. Matlab JJJJJJ

d. Other JJJJJJ

34. The tool can be completely used for another mission. JJJJJJ


0 2 4 6 8 10

35. Separate modules of the tool can be re-used for another mission. JJJJJJ

36. In particular, modules of the following disciplines can be re-used:

a. Thermal JJJJJJ

b. AOCS JJJJJJ

c. Communication JJJJJJ

d. Power JJJJJJ

e. Payload JJJJJJ

f. Structure JJJJJJ

g. Propulsion JJJJJJ

h. Mission analysis JJJJJJ

i. Operations planning JJJJJJ

37. The tool can be used along several mission phases. JJJJJJ

38. The tool can be used unchanged along several mission phases. JJJJJJ

39. The tool can be used along several mission phases. Tool changes are accepted. JJJJJJ

40. The tool

a. comes along with a manual. JJJJJJ

b. provides a getting-started guide. JJJJJJ

c. comes along with tutorials. JJJJJJ

d. comes along with online help. JJJJJJ

e. does have an active online community (forum, etc.) JJJJJJ

f. does have an integrated help function. JJJJJJ

41. In case of problems, support is provided by the tool developers on-site. JJJJJJ


0 2 4 6 8 10

42. In case of problems, support is provided by the tool developers via phone, mail or remote desktop. JJJJJJ

43. I can handle the tool within few hours (up to 4h) and perform the basic functionalities. JJJJJJ

44. I can handle the tool quickly (up to 1h) and perform the basic functionalities. JJJJJJ

45. After a pause of more than three weeks, I can use the tool without any need for re-familiarization. JJJJJJ

46. After a pause of more than three weeks, I can use the tool with some time for re-familiarization (up to 1h). JJJJJJ

47. After a pause of more than three weeks, I can use the tool with time for re-familiarization (more than 1h). JJJJJJ

48. The structure of the tool is largely self-explanatory. JJJJJJ

49. The functionalities of the tool are largely self-explanatory. JJJJJJ

50. I can handle the tool largely intuitively. JJJJJJ

51. The parametrization of the tool is realized

a. via an input mask on the graphical user interface. JJJJJJ

b. via a configuration file which is read in prior to a simulation run. JJJJJJ

c. directly in the source code. JJJJJJ

52. The tool provides a dynamic 3D-visualisation of the satellite in operation. JJJJJJ

53. The tool has a pleasant design. JJJJJJ

54. The tool has a clearly arranged design. JJJJJJ

55. The tool is regularly developed further. JJJJJJ

56. The tool comes along with a contact person who maintains the tool regularly. JJJJJJ

57. In case I am busy, someone else helps to update or extend the tool. JJJJJJ


0 2 4 6 8 10

58. The tool is not solely used by me. JJJJJJ

59. The tool can be adapted to changing mission requirements. JJJJJJ

60. The tool can be adapted by myself to changing mission requirements. JJJJJJ

61. The tool can be adapted to changing mission requirements by someone else when I am short of time. JJJJJJ

62. I can rely on the tool. JJJJJJ

63. The tool is mature. JJJJJJ

64. The tool functionalities are exhaustive for my needs. JJJJJJ

65. The implemented mathematical models are correctly defined and implemented. JJJJJJ

Comments


E.5. Exemplary Application of Tool and Simulator Quality Model

The TSQM has been exemplarily applied to the SSC-Simulators, e.g. SPS, GMSS, in comparison to STK (Free Standard Version 11.4 without add-ons), according to the evaluation scheme pictured in Section 4.5.2 with the evaluation scenario described in Section 4.5.1. In line with the context of use, the criteria weighting for the occasional users (User Group 3), see Table E.1, has been applied for the evaluation. In case a weighting has not been available, e.g. for question 65, the weighting by User Group 4 (SECESA participants) has been used. Table E.7 presents the results of the evaluation, showing the higher usability of the SSC-Simulators.

It is noted that the results highly depend on the user or evaluator and his/her transfer of encountered simulator characteristics into fulfilment figures. Prior to the application of the TSQM, it is therefore recommended to propose a fulfilment grid for the simulator characteristics per question in order to align evaluations.
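To illustrate how criterion figures such as those in Table E.7 can arise from underlying question scores, the sketch below computes a weighted mean of fulfilment values. The aggregation scheme shown here (a weighted arithmetic mean) and all numbers are illustrative assumptions for this sketch, not values taken from the thesis' evaluation scheme in Section 4.5.2.

```python
# Hypothetical sketch of a TSQM-style aggregation step: a criterion rating is
# taken here as the weighted arithmetic mean of the fulfilment scores of its
# paraphrasing questions. Weights and scores below are illustrative only.

def criterion_rating(entries):
    """entries: list of (weight, fulfilment) pairs for one criterion."""
    total_weight = sum(w for w, _ in entries)
    return sum(w * f for w, f in entries) / total_weight

# Three fictitious questions with criteria-weighting-style weights:
familiarization = [(8.0, 10.0), (7.5, 9.0), (7.0, 10.0)]
print(round(criterion_rating(familiarization), 1))  # prints 9.7
```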

Table E.7.: Tool and Simulator Quality Model Application and Results: Simulator 1 Represents the SSC- Simulators, Simulator 2 Reflects STK (Standard Version 11.4).

Simulator 1 2

Criterion Question Fulfilment

Usability 7.8 6.6

Emotional Reliability 8.6 7.4

Has the tool/simulator been verified? (25) 9.0 9.0

Are the models correctly defined and implemented? (65) 9.0 8.0

Can I rely on the tool/simulator? (62) 9.0 7.0

Has the tool been validated? (24) 9.0 7.0

Technical and Interface Maturity 8.0 7.4

Is the tool mature? (63) 7.0 8.0

Is the tool regularly maintained? (54) 8.0 7.0

Has the tool been validated? (24) 9.0 7.0

Transparency 7.0 5.2

Can I change/adapt the tool/simulator to the mission requirements? (58a) 8.0 8.0

Accessibility 3.0 8.0


Table E.7.: (continued).

Simulator 1 2

Criterion Question Fulfilment

Does the tool/simulator provide the possibility to enlarge it via self-programmed modules/elements? (6) 3.0 8.0

Documentation 9.0 1.0

Is a specification of the implemented models available? (7) 9.0 1.0

Familiarization 8.3 7.3

Can the tool be handled within few hours (up to 4h) such that basic functionalities can be performed? (42) 10.0 8.0

After a pause of more than 3 weeks, can the tool be used without any need for re-familiarization? (44) 9.0 7.0

After a pause of more than 3 weeks, can the tool be used after a re-familiarization of up to 1h? (45) 10.0 8.0

Documentation 7.8 7.5

Is a manual provided to support the usage? (39a) 9.0 8.0

Is a getting-started guide provided to support the usage? (39b) 1.0 8.0

Are online tutorials available to support the usage? (39c) 0.0 8.0

Self-Descriptiveness 9.0 6.0

Is it possible to handle the tool/simulator largely intuitively? (48) 9.0 6.0

FAVS² 8.5 6.5

Are the tool/simulator functionalities self-descriptive? (47) 9.0 7.0

Is the tool/simulator design clear? (52) 8.0 6.0

² Functional Appropriate Visualization and Structure


Table E.7.: (continued).

Simulator 1 2

Criterion Question Fulfilment

Manageability 7.5 7.0

Modifiability 7.0 6.2

Accessibility 7.0 5.5

Is the tool/simulator parametrized via a GUI? (49a) 8.0 8.0

Is the tool/simulator parametrized via a configuration file? (49b) 9.0 1.0

Does the tool/simulator provide the possibility to enlarge it via self-programmed modules/elements? (6) 3.0 8.0

Modularity
Does the tool/simulator provide a modular structure that allows its expansion with ready-to-use modules? (5) 7.0 7.0

Portability
Can the tool/simulator be run on several operating systems? (27) 10.0 9.0

Interoperability 8.5 8.5

Is the tool/simulator compatible with other software products via an interface? (29) 7.0 7.0

Is it possible to postprocess the results with other tools through a compatible format of the output data? (31) 10.0 10.0

Functional reliability 7.5 6.8

Is the tool/simulator reliable? (62) 9.0 7.0

Functional suitability 8.9 7.7

Are the tool/simulator functionalities exhaustive for the user's needs? (64) 10.0 8.0

Functional Completeness 9.5 8.0

Does the tool/simulator support analyses originating from the user's own discipline? (20) 10.0 8.0


Table E.7.: (continued).

Simulator 1 2

Criterion Question Fulfilment

Does the tool/simulator support analyses originating from other disciplines? (21) 10.0 8.0

Is it possible to adapt the tool/simulator to changing mission requirements? (57) 8.0 8.0

Functional Correctness 9.0 8.5

Is the tool/simulator verified? (25) 9.0 9.0

Are the implemented mathematical models correctly defined and implemented? (65) 9.0 8.0

Modifiability 7.0 6.2

Accessibility 7.0 5.5

Is the tool/simulator parametrized via a GUI? (49a) 8.0 8.0

Is the tool/simulator parametrized via a configuration file? (49b) 9.0 1.0

Does the tool/simulator provide the possibility to enlarge it via self-programmed modules/elements? (6) 3.0 8.0

Modularity 7.0 7.0

Does the tool/simulator provide a modular structure that allows its expansion with ready-to-use modules? (5) 7.0 7.0

Recoverability 1.0 3.0

Does the tool/simulator provide a recovery function after program crashes? (28) 1.0 3.0

Reusability 7.0 5.0

Is it possible to re-use parts/modules of the tool for other missions? (34) 7.0 5.0

F. Parameter Influence Net Method - Algorithm For Trigonometric Calculations and Exemplary Application

F.1. Parameter Influence Net Method Algorithm For Trigonometric Calculations

Trigonometric functions play an important role in satellite mission and system design. They are employed, among others, in the performance modelling of the spacecraft's communication system, the calculation of the eclipse time, the thermal system modelling and the AOCS modelling.

The PINM algorithms for summations, multiplications or exponential calculations, see Section 5.2.2, however, are not suited for trigonometric functions. To obtain a percentaged change $p_{f(x)}$ ($f(x)$ being a trigonometric function, i.e. sine, cosine, arc sine, arc cosine, tangent), one possibility is to work with the exact calculation of $p_{f(x)}$ as a function of $p_x$. To give an example, the exact expression of $p_{f(x)}$ for $f(x) = \sin(x)$ is:

\[
p_{\sin x} = \frac{\sin(x_{DP} + \Delta x) - \sin(x_{DP})}{\sin(x_{DP})}
           = \frac{\sin(x_{DP} + \Delta x)}{\sin(x_{DP})} - 1
           = \frac{\sin\bigl(x_{DP}(1 + p_x)\bigr)}{\sin(x_{DP})} - 1 .
\]  (F.1)

In Eq. (F.1), $p_x$ is not decoupled from the sine function. This impedes the explicit evaluation of its influence on $p_{\sin x}$. This coupling $p_{f(x)} = g(f(p_x))$ is true for all trigonometric functions. The decoupling, however, is essential for the lucidity of the PIN in order to directly see the parameter interdependencies and their strength.

As a first approach to achieve a decoupling of $p_x$ from the trigonometric function $f(x)$, Taylor's theorem [145, p. 185] was employed to approximate $f(x)$. The theorem states that a real-valued function $f$ of one variable which is $n$ times differentiable at a point $x_0$ can be represented as the sum of its Taylor polynomial of degree $n$ ($n = 0, 1, 2, \dots$) and a remainder term. The trigonometric functions were developed at $x_0 = 0$ and the PINM algorithm applied to the resulting polynomial. To give an example, the Taylor series for the trigonometric function $\sin x$ at $x_0 = 0$ is:

\[
\sin x = x - \frac{x^3}{6} + \frac{x^5}{120} - \frac{x^7}{5040} + \dots \, .
\]  (F.2)

Truncating the Taylor series after the seventh degree and then applying the PINM algorithm yields:

\[
p_{\sin x} = \frac{1}{\sin(x_{DP})} \cdot \left( p_x \cdot x_{DP}
  - \frac{1}{6} \left[ (1 + p_x)^3 - 1 \right] \cdot x_{DP}^3
  + \frac{1}{120} \left[ (1 + p_x)^5 - 1 \right] \cdot x_{DP}^5
  - \frac{1}{5040} \left[ (1 + p_x)^7 - 1 \right] \cdot x_{DP}^7 \right) .
\]  (F.3)


The first drawback of this approach is the limited validity range of the approximation. The Taylor series expressed in Eq. (F.2) is only valid around the point at which it is developed, in the case above around $x_0 = 0$. Second, the term becomes difficult to follow in the PIN representation. The influence of $p_x$ on $p_{\sin x}$, see Eq. (F.3), is multiple and not easily quantifiable. The clarity of the graphical representation is impaired by the frequency of the direct interdependencies, see Fig. F.1. Consequently, the pictured approach was discarded.

Figure F.1.: Parameter Influence Net Representation of Sine Function According to Eq. (F.3).

In a second approach, Taylor's theorem was employed in a specific form which says: if a real-valued function $f$ is once differentiable at the point $x_0$, then it has a linear approximation at the point $x_0$, which is:

\[
f(x) \approx f(x_0) + f'(x_0) \cdot (x - x_0) ,
\qquad
f(x_0 + \Delta x) \approx f(x_0) + f'(x_0) \cdot \Delta x .
\]  (F.4)

Instead of applying the theorem to the trigonometric function prior to the deduction of $p_f$, as done in the first approach, the percentaged change $p_{f(x)}$ is derived first. It can be expressed by:

\[
p_{f(x)} = \frac{f(x_0 + \Delta x) - f(x_0)}{f(x_0)} \approx \frac{f'(x_0)}{f(x_0)} \cdot \Delta x .
\]  (F.5)

With $p_x = \frac{\Delta x}{x_0}$ it is:

\[
p_{f(x)}(p_x) \approx \frac{f'(x_0)}{f(x_0)} \cdot p_x \cdot x_0 .
\]  (F.6)

Eq. (F.6) proves that for all functions, including the trigonometric ones, it is possible to achieve a direct proportionality between $p_f$ and $p_x$ with Taylor's theorem.
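Eq. (F.6) reduces to a one-line helper. The sketch below (a 40° design point and a 3 % change, both chosen arbitrarily for illustration) compares the linear estimate against the exact percentaged change for the sine:

```python
import math

def p_f_linear(f, fprime, x0, p_x):
    # First-order (linear Taylor) percentaged change, Eq. (F.6)
    return fprime(x0) / f(x0) * p_x * x0

x0 = math.radians(40)   # illustrative design point
p_x = 0.03              # illustrative 3 % parameter change
approx = p_f_linear(math.sin, math.cos, x0, p_x)
exact = math.sin(x0 * (1 + p_x)) / math.sin(x0) - 1
print(approx, exact)
```

For design points far from 90°, the two values differ only in the fourth decimal place, illustrating the direct proportionality between the percentaged changes.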

Table F.2 sets, for common trigonometric functions, the exact calculation of $p_{f(x)}$ against its Taylor approximation and states the corresponding domains of definition. However, while using the Taylor approximation, one has to be aware of its validity range. For large $\Delta x$, the error between the exact calculation of $p_{f(x)}$ and the Taylor approximation becomes too large to be balanced by the benefits of the approximation.

A valuable range of $\Delta x$ depends on the chosen design point $x_{DP}$ and the type of the underlying trigonometric function. For a detailed determination of the acceptable $\Delta x$ limits, the deviation between the exact computation and the Taylor approximation of $p_{f(x)}$ in dependence of $x_{DP}$ was investigated for several $\Delta x$. Figures F.2 to F.9 show the relative deviation

\[
RD = \frac{p_{f,Taylor} - p_{f,exact}}{p_{f,exact}} = \frac{p_{f,Taylor}}{p_{f,exact}} - 1
\]

of the exact and the Taylor-based formula (on the y-axis) in dependence of the design point (on the x-axis) for several constant percentaged changes $p_x$. Table F.3 lists the developed relative deviations per trigonometric function and the evolution of the terms towards their points of discontinuity.

SINE - As pictured in Figs. F.2 and F.3, relative deviations are significantly high for design points around 90° for $RD_{\sin x}$. The high sensitivity can be traced back to the slope of $\sin x$ around 90°, which tends towards zero. As a consequence, both numerator and denominator tend towards zero, while the denominator tends towards zero more quickly due to the subtraction. In sum, the whole term tends towards infinity.

COSINE - As pictured in Figs. F.4 and F.5, relative deviations are significantly high for design points around 180° for $RD_{\cos x}$. The high sensitivity can be traced back to the slope of $\cos x$ around 180°, which tends towards zero. As a consequence, both numerator and denominator tend towards zero, while the denominator tends towards zero more quickly due to the subtraction. In sum, the whole term tends towards infinity.

ARCSINE - As pictured in Figs. F.6 and F.7, relative deviations are significantly high for design points around 1 for $RD_{\arcsin x}$ (excluding the values $x_{DP}(1 + p_x) \ge 1$ where $\arcsin(x_{DP}(1 + p_x))$ is not defined). The high sensitivity can be traced back to the behaviour of the two denominator terms $\arcsin(x_{DP}(1 + p_x)) - \arcsin x_{DP}$ and $\sqrt{1 - x^2}$ around 1. $\sqrt{1 - x^2}$ tends towards zero around $x = 1$ while $|\arcsin(x_{DP}(1 + p_x)) - \arcsin x_{DP}| \ll 1$. With the influence of the subtrahend 1, the evolution of the relative deviation is finally explained. It is noted that only positive $x$ have been investigated due to the symmetry of $\arcsin x$ with respect to $x = 0$.

ARCCOSINE - As pictured in Figs. F.8 and F.9, relative deviations are significantly high for design points around ±1 for $RD_{\arccos x}$ (excluding the values $|x_{DP}(1 + p_x)| \ge 1$ where $\arccos(x_{DP}(1 + p_x))$ is not defined). The high sensitivity can be traced back to the behaviour of the two denominator terms $\arccos(x_{DP}(1 + p_x)) - \arccos x_{DP}$ and $\sqrt{1 - x^2}$ around ±1. $\sqrt{1 - x^2}$ tends towards zero around $x = \pm 1$ while $|\arccos(x_{DP}(1 + p_x)) - \arccos x_{DP}| \ll 1$. With the influence of the subtrahend 1, the evolution of the relative deviation is finally explained.

In general, it is recommended to work with the Taylor approximation to achieve a decoupling of $p_x$ from the trigonometric function $f$ and to benefit from the larger domain of definition compared to the exact calculation.

However, this recommendation is only valid for $x_{DP}$ with significant distance (a) for $p_{\sin x}$ to 90°, (b) for $p_{\cos x}$ to 180°, (c) for $p_{\arcsin x}$ to −1 and 1, and (d) for $p_{\arccos x}$ to −1 and 1. Getting closer to these design points increases the relative deviation between the Taylor approximation and the exact calculation significantly. Assuming that for the early design stages a relative deviation of 10 % might still be acceptable, Table F.1 summarizes the ranges of $x_{DP}$ where the relative deviation is less than 10 % for $-0.05 \le p_x \le 0.05$. Errors larger than 10 % should be avoided by using the exact calculation for the percentaged change of a trigonometric function.


Table F.1.: Value Intervals for $x_{DP}$ Where Relative Deviations $RD = (p_{f,Taylor} - p_{f,exact})/p_{f,exact}$ for Common Trigonometric Functions are Below 0.1 for Percentage Changes $-0.05 \le p_x \le 0.05$.

 p_x     x_DP for RD_sinx < 0.1      x_DP for RD_cosx < 0.1    x_DP for RD_arcsinx < 0.1    x_DP for RD_arccosx < 0.1
 0.05    [0°; 72°[ ∧ ]113°; 180°]    [0°; 145°[                [0; 0.89[                    ]−0.89; 0.89[
 0.03    [0°; 78°[ ∧ ]103°; 180°]    [0°; 156°[                [0; 0.93[                    ]−0.93; 0.93[
 0.01    [0°; 86°[ ∧ ]94°; 180°]     [0°; 171°[                [0; 0.98[                    ]−0.98; 0.98[
−0.01    [0°; 87°[ ∧ ]95°; 180°]     [0°; 173°[                [0; 0.98[                    ]−0.98; 0.98[
−0.03    [0°; 80°[ ∧ ]107°; 180°]    [0°; 160°[                [0; 0.94[                    ]−0.94; 0.94[
−0.05    [0°; 74°[ ∧ ]120°; 180°]    [0°; 151°[                [0; 0.91[                    ]−0.91; 0.91[
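The sine column of Table F.1 can be reproduced numerically. The sketch below scans integer design points for $p_x = 0.05$ and recovers the boundaries of the two validity intervals [0°; 72°[ and ]113°; 180°]:

```python
import math

def rd_sin(x_dp_deg, p_x):
    """Relative deviation RD = p_Taylor/p_exact - 1 for the sine function."""
    x_dp = math.radians(x_dp_deg)
    p_exact = math.sin(x_dp * (1 + p_x)) / math.sin(x_dp) - 1
    p_taylor = math.cos(x_dp) / math.sin(x_dp) * p_x * x_dp   # Eq. (F.6)
    return p_taylor / p_exact - 1

p_x = 0.05
# Largest design point below 90 deg and smallest above 90 deg with |RD| < 0.1:
below = max(d for d in range(1, 90) if abs(rd_sin(d, p_x)) < 0.1)
above = min(d for d in range(91, 180) if abs(rd_sin(d, p_x)) < 0.1)
print(below, above)
```

The scan confirms the divergence of the relative deviation around the 90° design point, in line with Figs. F.2 and F.3.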

For the ultimate justification of the validity of the Taylor approximation for the parameter net models, it has to be investigated model by model which angular range and which $\Delta x$ are physically plausible (sanity check). An angular radius of the Earth $\rho$ of more than 90°, for example, is not real. At the same time, even a value of 90° is debatable since it corresponds to an orbital altitude of 0 km, which is not relevant in general mission and system design. So, for Eq. (F.21), for instance, a design point equal to or higher than 90° would not be relevant.


Figure F.2.: Relative Deviation of Linear Approximation for Sine Function and Positive Constant Percentaged Change p.

Figure F.3.: Relative Deviation of Linear Approximation for Sine Function and Negative Constant Percent- aged Change p.


Figure F.4.: Relative Deviation of Linear Approximation for Cosine Function and Positive Constant Percent- aged Change p.

Figure F.5.: Relative Deviation of Linear Approximation for Cosine Function and Negative Constant Percent- aged Change p.


Figure F.6.: Relative Deviation of Linear Approximation for Arcsine Function and Positive Constant Percent- aged Change p.

Figure F.7.: Relative Deviation of Linear Approximation for Arcsine Function and Negative Constant Percent- aged Change p.


Figure F.8.: Relative Deviation of Linear Approximation for Arccosine Function and Positive Constant Per- centaged Change p.

Figure F.9.: Relative Deviation of Linear Approximation for Arccosine Function and Negative Constant Per- centaged Change p.


F.2. Application of Parameter Influence Net Method on Modelling of Eclipse Time $t_{Eclipse}$

F.2.1. Derivation of Eclipse Time for Circular Orbits

For circular orbits the eclipse time $t_{Eclipse}$ for a specific position of the Earth in the ecliptic can be calculated with:

\[
t_{Eclipse} = t_{Orbit} \cdot \frac{\phi}{2\pi} ,
\]  (F.18)

yielding a direct proportionality between $t_{Eclipse}$ and the orbital period $t_{Orbit}$ [206, p. 107 f]. The angle $\phi$ [rad] represents the segment of the orbit that is in eclipse. $t_{Orbit}$ depends on the orbital altitude of the spacecraft $h_{Orbit}$ by:

\[
t_{Orbit} = 2\pi \sqrt{\frac{a^3}{\mu}} = 2\pi \sqrt{\frac{(R_{Earth} + h_{Orbit})^3}{\mu}} .
\]  (F.19)

Also, $\phi$ depends on $h_{Orbit}$ by:

\[
\cos\left(\frac{\phi}{2}\right) = \frac{\cos(\rho)}{\cos(\beta_s)} ,
\]  (F.20)

where $\rho$ is the angular radius of the Earth and is also dependent on $h_{Orbit}$ with:

\[
\sin(\rho) = \frac{R_{Earth}}{R_{Earth} + h_{Orbit}} .
\]  (F.21)

Consequently, $t_{Eclipse}$ depends on $h_{Orbit}$ and $\beta_s$.

Through $\phi$ and $t_{Orbit}$, $h_{Orbit}$ has a contradictory impact on $t_{Eclipse}$. According to Eq. (F.20) and Eq. (F.21), an increase in $h_{Orbit}$ leads to a decrease in $\phi$. This is in direct opposition to the development of $t_{Orbit}$, which increases with increasing $h_{Orbit}$ according to Eq. (F.19).

For $\beta_s = 0$ rad, Eqs. (F.18) to (F.21) evolve to:

\[
t_{Eclipse} = t_{Orbit} \cdot \frac{\rho}{\pi} ,
\]  (F.22)

and yield the maximum eclipse time for a given altitude.

Figures F.10 and F.11 picture the evolution of $t_{Eclipse}$ in an Earth-bound orbit in dependence of $h_{Orbit}$ with $\beta_s = 0$ rad. $t_{Eclipse}$ decreases with increasing $h_{Orbit}$ until approx. 1500 km. Increasing the altitude further causes the eclipse time to rise again, see also e.g. the errata of [206]. A nearly linear relation between $t_{Eclipse}$ and $h_{Orbit}$ is evident for $h_{Orbit} > 2000$ km.
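The altitude of minimum eclipse time can be located numerically from Eqs. (F.19), (F.21) and (F.22). In the sketch below, the Earth radius and gravitational parameter are standard values assumed for illustration:

```python
import math

R_EARTH = 6378.0   # km, Earth radius (assumed standard value)
MU = 398600.0      # km^3/s^2, Earth's gravitational parameter (assumed)

def t_eclipse_max(h_orbit):
    """Maximum eclipse time (beta_s = 0) in minutes, Eqs. (F.19), (F.21), (F.22)."""
    a = R_EARTH + h_orbit
    t_orbit = 2 * math.pi * math.sqrt(a ** 3 / MU)   # Eq. (F.19)
    rho = math.asin(R_EARTH / a)                     # Eq. (F.21)
    return t_orbit * rho / math.pi / 60              # Eq. (F.22)

# Locate the altitude with the shortest eclipse time on a 10 km grid
h_min = min(range(200, 3001, 10), key=t_eclipse_max)
print(h_min, round(t_eclipse_max(h_min), 1))
```

The minimum falls in the vicinity of the approx. 1500 km mark stated above, with the eclipse time rising again for higher altitudes.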

The quantity $\beta_s$ in Eq. (F.20) adds further complexity to the eclipse time calculations. $\beta_s$, called the $\beta$-angle, is the angle of the Sun above the orbital plane, in other words the angle between the orbital plane and the incident Sun. During a year, $\beta_s$ ranges within $\pm(23.5° + i)$, with $i$ being the orbital inclination, and modifies $t_{Eclipse}$. $t_{Eclipse}$ reaches its maximum for $\beta_s = 0°$, i.e. the Sun vector coincides with the orbital plane, and

Page 166 Parameter Influence Net Method

its minimum for β (23.5◦ i). s = ± + βs [rad] is calculated by: ¯ ¯ π 0 βs ¯ βs ¯ , (F.23) = ¯ 2 − ¯

0 with βs being the angle between the orbital plane normal ~nOr bi t and the Sun vector ~S.

In a fixed heliocentric coordinate system with the vernal equinox as x-axis, \vec{n}_{Orbit} is obtained by:

\vec{n}_{Orbit} = \begin{pmatrix} \sin(\Omega) \cdot \sin(\alpha) \\ -\cos(\Omega) \cdot \sin(\alpha) \\ \cos(\alpha) \end{pmatrix},    (F.24)

with Ω being the Right Ascension of the Ascending Node (RAAN) and α [rad]:

\alpha = (i + 23.5°) \cdot \frac{\pi}{180°}.    (F.25)

The Sun vector \vec{S} is expressed by:

\vec{S} = \begin{pmatrix} \cos(\delta) \\ \sin(\delta) \\ 0 \end{pmatrix},    (F.26)

assuming a perfectly circular Earth orbit around the Sun, with the auxiliary angle δ given by:

\delta = \delta' + \pi.    (F.27)

Here, the angle δ' [rad] represents the current position of the Earth relative to the vernal equinox, thus it is:

\delta' = \frac{2\pi}{365.25} \cdot T_{Position},    (F.28)

with T_Position = 1, ..., 365.25 [−] being the day for which the eclipse time shall be calculated. T_Position = 1 represents the day of the vernal equinox. Note: the term π in Eq. (F.27) enables the Sun vector to "start" at spring and is necessary because of the definition of the x-axis (vernal equinox).

Combining Eqs. (F.27) and (F.28), it is:

\delta = \pi + \frac{2\pi}{365.25} \cdot T_{Position}.    (F.29)

With the scalar product of \vec{n}_{Orbit} and \vec{S}, β_s' is obtained by:

\cos(\beta_s') = \sin(\Omega) \cdot \sin(\alpha) \cdot \cos(\delta) - \cos(\Omega) \cdot \sin(\alpha) \cdot \sin(\delta).    (F.30)

With Eq. (F.30), Eq. (F.23) yields β_s, which is an input to Eq. (F.20).
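The chain of Eqs. (F.23) to (F.30) can be sketched as a single function; the function name and argument conventions are illustrative assumptions, the formulas follow the derivation above.

```python
import math

def beta_angle(i_deg, raan_rad, t_position):
    """Beta angle beta_s [rad] from inclination i [deg], RAAN [rad] and the
    day of the year T_Position (T_Position = 1: day of the vernal equinox)."""
    alpha = (i_deg + 23.5) * math.pi / 180.0                # Eq. (F.25)
    delta = math.pi + 2.0 * math.pi * t_position / 365.25   # Eq. (F.29)
    # scalar product of orbit normal and Sun vector, Eq. (F.30)
    cos_beta_p = (math.sin(raan_rad) * math.sin(alpha) * math.cos(delta)
                  - math.cos(raan_rad) * math.sin(alpha) * math.sin(delta))
    beta_p = math.acos(cos_beta_p)
    return abs(math.pi / 2.0 - beta_p)                      # Eq. (F.23)

# For an equatorial orbit around the vernal equinox the Sun lies almost
# in the orbital plane, so beta_s is close to zero.
assert beta_angle(0.0, 0.0, 1.0) < 0.01
```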


Figure F.10.: Evolution of Eclipse Time in Circular Orbits in Dependence of Orbital Altitude for LEO and GEO.

Figure F.11.: Evolution of Eclipse Time in Circular Orbits in Dependence of Orbital Altitude for Orbits up to 3000 km.

Eq. (F.20) can be further developed to:

\cos\left(\frac{\phi}{2}\right) = \frac{\cos(\rho)}{\sin(\beta_s')},    (F.31)

since Eq. (F.23) can be detailed to:

For 0 \leq \beta_s' \leq \frac{\pi}{2}: \quad \cos(\beta_s) = \cos\left(\frac{\pi}{2} - \beta_s'\right) = \sin(\beta_s'),    (F.32)

For \frac{\pi}{2} \leq \beta_s' \leq \pi: \quad \cos(\beta_s) = \cos\left(\beta_s' - \frac{\pi}{2}\right) = \sin(\beta_s').    (F.33)


With Eq. (F.21) and the trigonometric relation \sin^2(x) + \cos^2(x) = 1, Eq. (F.31) is developed to:

\cos\left(\frac{\phi}{2}\right) = \frac{\left(1 - \left(\frac{R_E}{R_E + h_{Orbit}}\right)^2\right)^{\frac{1}{2}}}{\left(1 - \cos^2(\beta_s')\right)^{\frac{1}{2}}}.    (F.34)

Now that φ/2 can be obtained with Eq. (F.34), t_Eclipse can finally be calculated with Eqs. (F.18) and (F.20). To sum up, for circular orbits it is t_Eclipse = f(h_Orbit, i, Ω, T_Position), which is reflected in the corresponding parameter net.
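Combining Eqs. (F.18), (F.19) and (F.34) gives the complete eclipse-time chain for circular orbits. The sketch below assumes standard values for R_E and µ; for β_s = 0 rad it must reproduce the special case of Eq. (F.22).

```python
import math

R_EARTH = 6378.0   # [km], assumed standard value
MU = 398600.0      # [km^3/s^2], assumed standard value

def eclipse_time(h_orbit, beta_s):
    """Eclipse time [s] of a circular orbit via Eqs. (F.18), (F.19) and (F.34).
    h_orbit [km], beta_s [rad] with beta_s = |pi/2 - beta_s'|."""
    a = R_EARTH + h_orbit
    t_orbit = 2.0 * math.pi * math.sqrt(a**3 / MU)           # Eq. (F.19)
    cos_beta_p = math.cos(math.pi / 2.0 - beta_s)            # inverting Eq. (F.23)
    # Eq. (F.34): cos(phi/2) = sqrt(1 - (R_E/a)^2) / sqrt(1 - cos^2(beta_s'))
    cos_half_phi = (math.sqrt(1.0 - (R_EARTH / a) ** 2)
                    / math.sqrt(1.0 - cos_beta_p ** 2))
    phi = 2.0 * math.acos(min(cos_half_phi, 1.0))            # no eclipse if > 1
    return t_orbit * phi / (2.0 * math.pi)                   # Eq. (F.18)

# beta_s = 0 rad reproduces Eq. (F.22): t_eclipse = t_orbit * rho / pi
h = 560.0
a = R_EARTH + h
t_ref = 2.0 * math.pi * math.sqrt(a**3 / MU) * math.asin(R_EARTH / a) / math.pi
assert math.isclose(eclipse_time(h, 0.0), t_ref, rel_tol=1e-9)
```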

F.2.2. Derivation of Eclipse Time for Elliptical Orbits

For elliptical orbits, the time spent in eclipse varies depending on the segment of the orbit that falls into eclipse. This variation depends (1) on the variation of the spacecraft velocity along its orbit and (2) on ω, the orientation of the orbit to the Sun. To simplify the analysis of t_Eclipse for elliptical orbits, two specific cases are considered. The first case assumes the perigee, the second the apogee to be opposite to the Sun with a constant spacecraft velocity while in eclipse. In both cases, the above derivation of t_Eclipse for circular orbits is valid except for a change in Eq. (F.18), which is expanded by a term taking into consideration the elliptical shape of the orbit. So it is:

t_{Eclipse} = t_{Orbit} \cdot \frac{\phi}{2\pi} \cdot \frac{R_E + h_{p/a}}{a}.    (F.35)

Depending on the considered scenario, either the parameter h_a or h_p is used in Eq. (F.35).

F.2.3. Application of Parameter Influence Net Method Algorithm on Modelling of Eclipse Time for Circular Orbits

Starting from Eq. (F.30), it is required to assess the percentual changes of α, δ and Ω first.

Based on Eq. (F.25) and the PINM rules cited in Section 5.2.2, it is for α:

p_\alpha = p_i \cdot \left(\frac{i}{\alpha}\right)_{DP}.    (F.36)

Based on Table F.2, it is for sin(α):

p_{\sin(\alpha)} = \frac{\alpha_{DP}}{\tan(\alpha_{DP})} \cdot p_\alpha.    (F.37)

A percentual change of δ is supposed not to be of interest for p_{t_Eclipse}. However, δ influences the design point (DP) of β_s'. Therefore, T_Position, expressed in days, is modelled as an input parameter of the net.

Based on Eq. (F.30), the influence of the percentual change of Ω on sin(Ω) and cos(Ω) has to be derived. With Table F.2, it is:

p_g = p_{\sin(\Omega)} = \frac{\Omega_{DP}}{\tan(\Omega_{DP})} \cdot p_\Omega    (F.38)

and

p_j = p_{\cos(\Omega)} = -\Omega_{DP} \cdot \tan(\Omega_{DP}) \cdot p_\Omega.    (F.39)


To elaborate further on Eq. (F.30), its different functional levels are carved out with a substitution to:

\cos(\beta_s') = r = \sin(\Omega) \cdot \sin(\alpha) \cdot \cos(\delta) - \cos(\Omega) \cdot \sin(\alpha) \cdot \sin(\delta) = x_5 - x_6,    (F.40)

with the inner level:

x_5 = \sin(\Omega) \cdot \sin(\alpha) \cdot \cos(\delta) = g \cdot f \cdot c_1,
x_6 = \cos(\Omega) \cdot \sin(\alpha) \cdot \sin(\delta) = j \cdot f \cdot c_2.    (F.41)

Following the rules of the PINM, the percentaged change of the inner level, consisting of multiplications, is passed to the outer level that is composed of the elementary calculation of an addition.

According to the multiplication rule, the percentaged change of the inner level is:

p_{x_5} = (1 + p_g) \cdot (1 + p_f) - 1,
p_{x_6} = (1 + p_j) \cdot (1 + p_f) - 1,    (F.42)

while with the addition rule, the percentaged change of the outer level is:

p_{\cos(\beta_s')} = p_r = \frac{1}{r_{DP}} \cdot \left(p_{x_5} \cdot x_{5,DP} - p_{x_6} \cdot x_{6,DP}\right).    (F.43)

To elaborate further on Eq. (F.34), a substitution is employed to separate its functional levels. So it is:

\cos\left(\frac{\phi}{2}\right) = t = u_3^{\frac{1}{2}} \cdot u_4^{-\frac{1}{2}},    (F.44)

identifying the outer level as multiplication with the inner levels u_3 and u_4 to be defined as:

u_3 = 1 - \left(\frac{R_E}{R_E + h}\right)^2 = 1 - (R_E \cdot w^{-1})^2 = 1 - v_1^2 = 1 - x_7,

u_4 = 1 - \cos^2(\beta_s') = 1 - r^2 = 1 - s.    (F.45)

Applying the addition and multiplication rule on u_3 according to its inner levels derived in Eq. (F.45), its percentaged change is:

p_{u_3} = -\frac{x_{7,DP}}{u_{3,DP}} \cdot p_{x_7} = \frac{1}{u_{3,DP}} \cdot \left(-\left[(1 + p_{v_1})^2 - 1\right] \cdot v_{1,DP}^2\right),    (F.46)

with

p_{v_1} = (1 + p_w)^{-1} - 1    (F.47)

and

p_w = \frac{1}{w_{DP}} \cdot p_h \cdot h_{DP}.    (F.48)

Applying the addition and multiplication rule on u_4 according to its inner levels derived in Eq. (F.45), its percentaged change is:

p_{u_4} = -\frac{s_{DP}}{u_{4,DP}} \cdot p_s,    (F.49)

with

p_s = (1 + p_r)^2 - 1.    (F.50)

Finally, Eq. (F.18) has to be considered. With Eq. (F.19) and the substitution \frac{\phi}{2} = \psi, it is:

t_{Eclipse} = \frac{2}{\sqrt{\mu}} \cdot a^{\frac{3}{2}} \cdot \psi.    (F.51)

So it is:

p_{t_{Eclipse}} = (1 + p_a)^{\frac{3}{2}} \cdot (1 + p_\psi) - 1.    (F.52)

With:

a = R_E + h,    (F.53)

it is:

p_a = \frac{1}{a_{DP}} \cdot p_h \cdot h_{DP}.    (F.54)

Note: for circular orbits, it is w = a, compare Eq. (F.45) and Eq. (F.53). This is not the case for elliptical orbits, see Section F.2.4. In comparison to a, the parameter w is specifically defined with the orbital altitude for which the eclipse calculation is performed. This is a crucial difference that is essential in the eclipse time calculation for elliptical orbits.

To obtain p_ψ, Eq. (F.44) is further developed to:

\frac{\phi}{2} = \psi = \arccos(t).    (F.55)

With Table F.2 it is:

p_\psi = p_{\arccos(t)} = -\frac{t_{DP}}{\arccos(t_{DP}) \cdot \sqrt{1 - t_{DP}^2}} \cdot p_t,    (F.56)

with

p_t = (1 + p_{u_3})^{\frac{1}{2}} \cdot (1 + p_{u_4})^{-\frac{1}{2}} - 1,    (F.57)

following Eq. (F.44).
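The multiplication and addition rules applied throughout this derivation are exact identities, which can be verified numerically. The sketch below perturbs an arbitrary assumed design point of Eqs. (F.40) and (F.41) and compares the propagation of Eqs. (F.42) and (F.43) with a direct recomputation.

```python
import math

# arbitrary assumed design point for r = x5 - x6 = g*f*c1 - j*f*c2, Eqs. (F.40)/(F.41)
g, f, c1, j, c2 = 0.4, 0.9, 0.8, 0.7, 0.2
x5, x6 = g * f * c1, j * f * c2
r = x5 - x6

# percentaged input changes of g, j and f (c1 and c2 are held fixed)
p_g, p_j, p_f = 0.05, -0.03, 0.02

# multiplication rule, Eq. (F.42)
p_x5 = (1 + p_g) * (1 + p_f) - 1
p_x6 = (1 + p_j) * (1 + p_f) - 1
# addition rule, Eq. (F.43)
p_r = (p_x5 * x5 - p_x6 * x6) / r

# direct recomputation with the perturbed inputs must agree exactly
r_new = (g * (1 + p_g)) * (f * (1 + p_f)) * c1 - (j * (1 + p_j)) * (f * (1 + p_f)) * c2
assert math.isclose(r * (1 + p_r), r_new, rel_tol=1e-12)
```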


F.2.4. Application of Parameter Influence Net Method Algorithm on Modelling of Eclipse Time for Elliptical Orbits

In accordance with the approach of basing the calculation of the eclipse time in elliptical orbits on the calculation of the eclipse time for circular orbits, Eqs. (F.36) to (F.50) are also valid for elliptical orbits.

For elliptical orbits, Eq. (F.35) evolves with Eq. (F.19) to:

t_{Eclipse} = \frac{2}{\sqrt{\mu}} \cdot a^{\frac{3}{2}} \cdot \psi \cdot \frac{R_E + h_{p/a}}{a} = \frac{2}{\sqrt{\mu}} \cdot a^{\frac{1}{2}} \cdot \psi \cdot (R_E + h_{p/a}).    (F.58)

With w = R_E + h, see Eq. (F.45), for h = h_{p/a}, it is:

p_{t_{Eclipse}} = (1 + p_a)^{\frac{1}{2}} \cdot (1 + p_\psi) \cdot (1 + p_w) - 1.    (F.59)

p_w is obtained with Eq. (F.48) for h = h_{p/a}. Finally, with:

a = R_E + \frac{h_p}{2} + \frac{h_a}{2},    (F.60)

it is:

p_a = \frac{1}{a_{DP}} \cdot \left(\frac{1}{2} \cdot p_{h_p} \cdot h_{p,DP} + \frac{1}{2} \cdot p_{h_a} \cdot h_{a,DP}\right).    (F.61)

F.3. Application of Parameter Influence Net Method on Modelling of Sunlit Time t_Sunlit

F.3.1. Derivation of Sunlit Time t_Sunlit for Circular and Elliptical Orbits

For both circular and elliptical orbits, the time spent in sunlight is calculated by:

t_{Sunlit} = t_{Orbit} - t_{Eclipse}.    (F.62)

F.3.2. Application of Parameter Influence Net Method Algorithm on Modelling of Sunlit Time for Circular and Elliptical Orbits

It is:

p_{t_{Sunlit}} = \frac{1}{t_{Sunlit,DP}} \cdot \left(p_{t_{Orbit}} \cdot t_{Orbit,DP} - p_{t_{Eclipse}} \cdot t_{Eclipse,DP}\right).    (F.63)

The derivation of p_{t_Eclipse} is pictured in Section F.2 for both circular and elliptical orbits.

Based on Eq. (F.19), it is:

p_{t_{Orbit}} = (1 + p_a)^{\frac{3}{2}} - 1.    (F.64)

p_a is obtained with Eq. (F.54) for circular orbits and with Eq. (F.61) for elliptical orbits.


F.4. Application of Parameter Influence Net Method on Modelling of Power Subsystem

For the power performance analysis of a spacecraft, two potential cases were distinguished by Nemetzade and Förstner [144] to show the general approach of the parameter influence net method and its application. Case A relates to the assessment of whether the designed system provides enough power to supply the payload and the satellite bus. Its modelling and PINM application are recalled in the following from the work by Nemetzade and Förstner [144]. Case B addresses the question of the maximum amount of power a payload can demand from a given system design. Details on the Case B analysis can be found in the publication by Nemetzade and Förstner [144].

F.4.1. Derivation of Battery Stored Energy E_Bat

Considering Earth-bound satellites, the power subsystem is in the majority of cases realized with solar panels for energy generation and batteries for energy storage. According to Wertz and Larson (eds.) [206], the power generated by a solar array, regulated in maximum power point tracking (MPPT) mode, is calculated by:

P_{SA} = S \cdot A_{SA} \cdot \cos(\alpha) \cdot \eta_{cov} \cdot \eta_{cell} \cdot \left(\frac{100 - d}{100}\right)^l \cdot \eta_{temp} \cdot \eta_{MPPT}.    (F.65)

This equation is applicable at every instant of the sunlit fraction of the orbit.

The battery is characterized by its currently stored energy E_Bat. The parameter is time dependent, influenced by the energy charge in sunlight and the energy discharge in eclipse phases. Considering the energy storage of the battery in entire orbital steps and assuming that the battery is fully charged at the beginning of the analysis, the battery energy after n orbits is described by:

E_{Bat,n} = E_{max} + n \cdot (P_{SA} - P_{user,s}) \cdot \Delta t_c \cdot \eta_c - n \cdot P_{user,e} \cdot \Delta t_d \cdot \frac{1}{\eta_d}.    (F.66)

In that orbital representation, the charging time Δt_c equals the time spent in sunlight t_Sunlit, and the discharging time Δt_d represents the eclipse duration t_Eclipse. They both depend on the orbit configuration, especially on the orbital altitude h_Orbit, the inclination i, and the eccentricity e, and are seasonally variable. Their detailed modelling is presented in Sections F.2 and F.3.2. For simplicity reasons, however, t_Sunlit and t_Eclipse are considered to be independent parameters in the following to focus on the power subsystem details. Nemetzade and Förstner [144] discuss the legitimacy of this assumption in more detail.

It is the battery stored energy E_Bat,n, see Eq. (F.66), that is a potential bottleneck in satellite operations and therefore the performance parameter of interest in the following, hence the output of the performance influence net.
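Eqs. (F.65) and (F.66) can be sketched with the design point values of Table G.1; the solar incidence angle is simplified to 0 here. In the model, which does not cap E_Bat,n at E_max, the battery gains energy over one orbit at this design point.

```python
import math

def solar_array_power(S, A_SA, alpha_rad, eta_cov, eta_cell, d, l, eta_temp, eta_mppt):
    """Solar array power [W] in MPPT mode, Eq. (F.65); d in %/year, l in years."""
    return (S * A_SA * math.cos(alpha_rad) * eta_cov * eta_cell
            * ((100.0 - d) / 100.0) ** l * eta_temp * eta_mppt)

def battery_energy(n, E_max, P_SA, P_user_s, P_user_e, dt_c, dt_d, eta_c, eta_d):
    """Battery stored energy [Wh] after n orbits, Eq. (F.66); dt_c, dt_d in h."""
    return (E_max + n * (P_SA - P_user_s) * dt_c * eta_c
            - n * P_user_e * dt_d / eta_d)

# design point values from Table G.1 (Hubble Space Telescope);
# alpha = 0.001 deg is approximated as 0 rad
P_SA = solar_array_power(S=1358.0, A_SA=36.788, alpha_rad=0.0,
                         eta_cov=0.8, eta_cell=0.185, d=2.75, l=5,
                         eta_temp=0.85, eta_mppt=0.95)
E_1 = battery_energy(1, E_max=12375.0, P_SA=P_SA, P_user_s=4000.0,
                     P_user_e=485.0, dt_c=1.0045, dt_d=0.5931,
                     eta_c=0.9212, eta_d=0.784)
assert P_SA > 4000.0      # the array covers the demand in sunlight
assert E_1 > 12375.0      # net energy gain per orbit at this design point
```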


F.4.2. Application of Parameter Influence Net Method Algorithm on Modelling of Battery Stored Energy

Applying the PINM rules for multiplication, presented in Section 5.2.2, on Eq. (F.65), it is:

p_{P_{SA}} = (1 + p_{A_{SA}}) \cdot (1 + p_{\cos(\alpha)}) \cdot (1 + p_{\eta_{cov}}) \cdot (1 + p_{\eta_{cell}}) \cdot (1 + p_{l'}) \cdot (1 + p_{\eta_{temp}}) \cdot (1 + p_{\eta_{MPPT}}) - 1,    (F.67)

with

p_{\cos(\alpha)} = -\tan(\alpha_{DP}) \cdot p_\alpha \cdot \alpha_{DP}    (F.68)

according to Table F.2 and

p_{l'} = \left(\frac{100 - d}{100}\right)^{p_l \cdot l_{DP}} - 1,    (F.69)

with a constant solar cell yearly degradation d.

For E_Bat,n, three functional levels are to be considered. With the substitution:

x_2 = u_1 \cdot \Delta t_c \cdot \eta_c,
u_1 = (P_{SA} - P_{user,s}),
x_3 = P_{user,e} \cdot \Delta t_d \cdot \frac{1}{\eta_d},    (F.70)

the outer level is identified to be a summation, consequently leading to:

p_{E_{Bat}} = \frac{1}{E_{Bat,DP}} \cdot \left(p_{E_{max}} \cdot E_{max,DP} + p_{x_2} \cdot n \cdot x_{2,DP} - p_{x_3} \cdot n \cdot x_{3,DP}\right).    (F.71)

For the intermediate level, it is:

p_{x_2} = (1 + p_{u_1}) \cdot (1 + p_{\Delta t_c}) \cdot (1 + p_{\eta_c}) - 1    (F.72)

and

p_{x_3} = (1 + p_{P_{user,e}}) \cdot (1 + p_{\Delta t_d}) \cdot (1 + p_{\eta_d})^{-1} - 1.    (F.73)

Finally, a percentaged change in the inner level is described by:

p_{u_1} = (P_{SA,DP} - P_{user,s,DP})^{-1} \cdot \left(p_{P_{SA}} \cdot P_{SA,DP} - p_{P_{user,s}} \cdot P_{user,s,DP}\right).    (F.74)
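Since every step in Eqs. (F.71) to (F.74) is an exact identity, the propagated percentaged change of E_Bat,n must match a direct recomputation of Eq. (F.66). A sketch with assumed design point values and arbitrary input perturbations:

```python
import math

# assumed design point of Eq. (F.66)
n = 1
E_max, P_SA, P_us, P_ue = 12375.0, 5193.0, 4000.0, 485.0
dt_c, dt_d, eta_c, eta_d = 1.0045, 0.5931, 0.9212, 0.784
u1 = P_SA - P_us
x2 = u1 * dt_c * eta_c
x3 = P_ue * dt_d / eta_d
E_bat = E_max + n * x2 - n * x3

# percentaged input changes (all other inputs held fixed, i.e. p = 0)
p_PSA, p_Pue, p_dtd = 0.04, 0.10, -0.05

p_u1 = (p_PSA * P_SA - 0.0 * P_us) / u1                        # Eq. (F.74)
p_x2 = (1 + p_u1) - 1                                          # Eq. (F.72)
p_x3 = (1 + p_Pue) * (1 + p_dtd) - 1                           # Eq. (F.73)
p_E = (0.0 * E_max + p_x2 * n * x2 - p_x3 * n * x3) / E_bat    # Eq. (F.71)

# direct recomputation of Eq. (F.66) with the perturbed inputs
E_new = (E_max + n * (P_SA * (1 + p_PSA) - P_us) * dt_c * eta_c
         - n * P_ue * (1 + p_Pue) * dt_d * (1 + p_dtd) / eta_d)
assert math.isclose(E_bat * (1 + p_E), E_new, rel_tol=1e-12)
```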


F.5. Application of Parameter Influence Net Method on Modelling of Communication Subsystem

Considering the communication performance of a spacecraft, the eminent question that arises in that respect is the storage capacity of the spacecraft in view of data production and transmission along the mission. The modelling of the free data storage capacity of the spacecraft is presented in the following section. A potential future enhancement is recommended to consider the losses during the data transmission within the modelling. As such, the actually received data amount on ground is put into the focus of the performance investigations.

F.5.1. Derivation of Free Data Storage Capacity C_free

Within the frame of this work, the free data storage capacity of the spacecraft C_free after n orbits is modelled with:

C_{free,n} = C_{max} - n \cdot C_{prod} + n \cdot C_{D/L},    (F.75)

with

C_{prod} = C_S + C_{HK},    (F.76)

where

• C_free,n represents the free storage capacity on-board after n orbits,

• C_max represents the maximal storage capacity on-board,

• C_S represents the produced and stored science data,

• C_HK represents the housekeeping data stored on-board,

• C_D/L represents the data volume downlinked to Earth, and

• C_prod represents the data volume produced.

Note: as the uplinked data volume is supposed to be small with regard to the other data volumes, it is neglected in the calculations.

C_S and C_HK each comprise the data volume that was produced in one orbit. So it is:

C_S = R_{prod,S} \cdot \Delta t_p,    (F.77)

where

• R_prod,S represents the science data rate, and

• Δt_p represents the time span science data is produced in one orbit.

For the HK data it is:

C_{HK} = R_{prod,HK} \cdot \Delta t_p,    (F.78)

where

• R_prod,HK represents the HK data rate, and

• Δt_p represents the time span HK data is produced in one orbit.

Δt_p is defined by:

\Delta t_p = t_{Orbit} - \Delta t_{D/L}.    (F.79)

The downlinked data volume is calculated by:

C_{D/L} = (R_{D/L,HK} + R_{D/L,S}) \cdot \Delta t_{D/L} = R_{D/L,sum} \cdot \Delta t_{D/L}.    (F.80)

There is a differentiation between the data rate for science data R_D/L,S and the one for housekeeping data R_D/L,HK. The time span data is downlinked to Earth, represented by Δt_D/L, is equal to the actual contact time with the ground station(s) t_C.

Note: the model presumes that science and HK data are continuously produced and downlinked in parallel. In reality, the amount of HK data is much smaller than the scientific data. To make efficient use of the contact times, missions make use of the entire data rate for the scientific data downlink as soon as the HK data is transmitted. This specific operation mode is not considered in the here presented modelling of the communication subsystem for the sake of simplicity.
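Eqs. (F.75) to (F.80) can be sketched as follows; the numbers are the design point values of Table G.1, with an orbital period and contact time close to the values derived in Annex G, both treated here as fixed inputs:

```python
def free_capacity(n, C_max, R_prod_S, R_prod_HK, R_DL_S, R_DL_HK, t_orbit, dt_DL):
    """Free on-board storage [bit] after n orbits, Eqs. (F.75) to (F.80).
    Rates in bit/s, times in s."""
    dt_p = t_orbit - dt_DL                     # production time span, Eq. (F.79)
    C_prod = (R_prod_S + R_prod_HK) * dt_p     # Eqs. (F.76) to (F.78)
    C_DL = (R_DL_S + R_DL_HK) * dt_DL          # Eq. (F.80)
    return C_max - n * C_prod + n * C_DL       # Eq. (F.75)

# Hubble-like design point: Table G.1 rates, t_orbit ~ 5751 s, dt_DL ~ 483 s
C_1 = free_capacity(1, C_max=36e9, R_prod_S=231.5e3, R_prod_HK=23.15e3,
                    R_DL_S=1967.8e3, R_DL_HK=196.78e3,
                    t_orbit=5751.0, dt_DL=483.2)
assert 0 < C_1 < 36e9    # storage fills slowly: less free capacity than C_max
```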

The contact time with the ground station(s) t_C is dependent on the orbit parameters of the spacecraft (postulating that the required spacecraft antenna orientation is established) and differently calculated for circular and elliptical orbits. The following models postulate that the spacecraft passes directly over the ground station and that the Earth rotation is negligible for the relatively brief period for which the spacecraft passes overhead.

F.5.1.1. Derivation of Maximum Ground Station Contact Time for Spacecraft in Circular Orbit

For circular low Earth orbits, the maximal contact time between a spacecraft and a ground station according to Wertz and Larson (eds.) [206] equals:

t_{C,c} = t_{Orbit} \cdot \frac{\lambda_{max,c}}{\pi},    (F.81)

where the Earth central angle (or the off-ground-track angle), see Fig. 5-17 in [206], is obtained with:

\lambda_{max,c} = \frac{\pi}{2} - \epsilon_{min} - \eta_{max,c},    (F.82)

with the maximum nadir angle:

\sin(\eta_{max,c}) = \sin(\rho) \cdot \cos(\epsilon_{min}).    (F.83)

The minimum elevation angle ε_min marks the minimum angle, measured at the ground station, between the spacecraft and the local horizon that is required to establish the link with the spacecraft in orbit.


With Eq. (F.21), Eq. (F.83) is developed to:

\sin(\eta_{max,c}) = \frac{R_E}{R_E + h} \cdot \cos(\epsilon_{min}).    (F.84)

Note that for a circular orbit ρ is constant. So with Eq. (F.19), Eq. (F.81) can be resolved. Since the above modelling neglects the Earth rotation, the formulas are only valid for spacecraft in low Earth orbit where the contact time is relatively brief. As a general recommendation, an orbital altitude of 3000 km is considered a reasonable limit for the validity of the formulas cited within the frame of this work. Up to 3000 km, the movement of the ground station due to Earth rotation amounts to roughly 10 % of the movement of the spacecraft during the contact time, with both covered distances expressed in degrees in the Earth-centric system (values valid for an orbital inclination of 0° and a ground station located on the equatorial plane). Above that orbital altitude, the Earth rotation's contribution is considered too high to be neglected. For more details and exact calculations, please refer to Wertz (ed.) [207].
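Eqs. (F.81) to (F.84) can be sketched as follows; with the parameters used in Annex G (h = 560 km, ε_min = 10°) the result reproduces the contact time of roughly 483 s derived there. R_E and µ are assumed standard values.

```python
import math

R_EARTH = 6378.0   # [km], assumed standard value
MU = 398600.0      # [km^3/s^2], assumed standard value

def contact_time_circular(h_orbit, eps_min_deg):
    """Maximal ground station contact time [s] for a circular LEO,
    Eqs. (F.81) to (F.84). h_orbit [km], eps_min [deg]."""
    a = R_EARTH + h_orbit
    t_orbit = 2.0 * math.pi * math.sqrt(a**3 / MU)        # Eq. (F.19)
    eps = math.radians(eps_min_deg)
    sin_eta = (R_EARTH / a) * math.cos(eps)               # Eq. (F.84)
    lam = math.pi / 2.0 - eps - math.asin(sin_eta)        # Eq. (F.82)
    return t_orbit * lam / math.pi                        # Eq. (F.81)

t_c = contact_time_circular(560.0, 10.0)
assert abs(t_c - 483.2) < 1.0    # value derived in Annex G
```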

F.5.1.2. Derivation of Maximum Ground Station Contact Time for Spacecraft in Elliptical Orbit

For an elliptical orbit ρ varies, depending on the position of the spacecraft in its orbit. Additionally, the velocity of the spacecraft varies along the orbit. So, for the case of an elliptical orbit, the calculation of the contact time t_C,e is deduced at the extreme points, the perigee and apogee of the elliptical orbit. To model the contact time for elliptical orbits, the mean motion ν [rad/s] of the spacecraft is employed:

\nu = \sqrt{\frac{\mu}{a^3}}.    (F.85)

Consequently, the angular velocity of the spacecraft ω [rad/s] is calculated with:

\omega = \nu \cdot \frac{a}{r}.    (F.86)

The angular velocity is maximal in perigee:

\omega_{max} = \nu \cdot \frac{a}{R_E + h_p} = \sqrt{\frac{\mu}{a^3}} \cdot \frac{a}{R_E + h_p} = \sqrt{\frac{\mu}{a}} \cdot \frac{1}{R_E + h_p},    (F.87)

and minimal in apogee:

\omega_{min} = \nu \cdot \frac{a}{R_E + h_a} = \sqrt{\frac{\mu}{a^3}} \cdot \frac{a}{R_E + h_a} = \sqrt{\frac{\mu}{a}} \cdot \frac{1}{R_E + h_a}.    (F.88)


According to Wertz (ed.) [207], the maximal possible contact time between the spacecraft and the ground station(s) in perigee is defined by:

t_{C,e,max,p} = \frac{\lambda_{max,e,p}}{\omega_{max}},    (F.89)

with

\lambda_{max,e,p} = \frac{\pi}{2} - \epsilon_{min} - \eta_{max,e,p}    (F.90)

and

\sin(\eta_{max,e,p}) = \sin(\rho_{max}) \cdot \cos(\epsilon_{min}),    (F.91)

where

\sin(\rho_{max}) = \frac{R_E}{R_E + h_p}.    (F.92)

Accordingly, for the apogee the maximal possible contact time according to Wertz (ed.) [207] is:

t_{C,e,max,a} = \frac{\lambda_{max,e,a}}{\omega_{min}},    (F.93)

with

\lambda_{max,e,a} = \frac{\pi}{2} - \epsilon_{min} - \eta_{max,e,a},    (F.94)

where

\sin(\eta_{max,e,a}) = \sin(\rho_{min}) \cdot \cos(\epsilon_{min})    (F.95)

and

\sin(\rho_{min}) = \frac{R_E}{R_E + h_a}.    (F.96)

Note that the modelling neglects the Earth rotation (as in the circular orbit case, see the section above) and the variation of the spacecraft velocity and ρ along the orbit. Consequently, the above calculations are only valid for small eccentricities [207]. For further details see ibid.
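Eqs. (F.85) to (F.96) can be sketched as follows; since the larger Earth central angle and the lower angular velocity act in the same direction, the apogee contact time exceeds the perigee contact time. R_E and µ are assumed standard values.

```python
import math

R_EARTH = 6378.0   # [km], assumed standard value
MU = 398600.0      # [km^3/s^2], assumed standard value

def contact_time_elliptical(h_p, h_a, eps_min_deg, at_perigee):
    """Maximal contact time [s] at perigee or apogee of an elliptical orbit,
    Eqs. (F.85) to (F.96). Altitudes [km], eps_min [deg]."""
    a = R_EARTH + (h_p + h_a) / 2.0                  # semi-major axis, Eq. (F.60)
    nu = math.sqrt(MU / a**3)                        # mean motion, Eq. (F.85)
    r = R_EARTH + (h_p if at_perigee else h_a)
    omega = nu * a / r                               # Eqs. (F.87)/(F.88)
    eps = math.radians(eps_min_deg)
    sin_eta = (R_EARTH / r) * math.cos(eps)          # Eqs. (F.91)/(F.92), (F.95)/(F.96)
    lam = math.pi / 2.0 - eps - math.asin(sin_eta)   # Eqs. (F.90)/(F.94)
    return lam / omega                               # Eqs. (F.89)/(F.93)

t_p = contact_time_elliptical(500.0, 1000.0, 10.0, at_perigee=True)
t_a = contact_time_elliptical(500.0, 1000.0, 10.0, at_perigee=False)
assert t_p < t_a   # both effects shorten the perigee pass
```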

F.5.2. Application of Parameter Influence Net Method Algorithm on Modelling of Free Storage Capacity

Applying the PINM algorithm for additions, presented in Section 5.2.2, on Eq. (F.75), it is:

p_{C_{free,n}} = \frac{1}{C_{free,n,DP}} \cdot \left(p_{C_{max}} \cdot C_{max,DP} - p_{C_{prod}} \cdot n \cdot C_{prod,DP} + p_{C_{D/L}} \cdot n \cdot C_{D/L,DP}\right).    (F.97)

For Eq. (F.76), it is:

p_{C_{prod}} = \frac{1}{C_{prod,DP}} \cdot \left(p_{C_S} \cdot C_{S,DP} + p_{C_{HK}} \cdot C_{HK,DP}\right).    (F.98)

The percentaged change of C_S, see Eq. (F.77), is obtained by:

p_{C_S} = (1 + p_{R_{prod,S}}) \cdot (1 + p_{\Delta t_p}) - 1,    (F.99)


and similarly for C_HK, see Eq. (F.78), by:

p_{C_{HK}} = (1 + p_{R_{prod,HK}}) \cdot (1 + p_{\Delta t_p}) - 1.    (F.100)

Developing Eq. (F.79) with the PINM rules cited in Section 5.2.2 results in:

p_{\Delta t_p} = \frac{1}{\Delta t_{p,DP}} \cdot \left(p_{t_{Orbit}} \cdot t_{Orbit,DP} - p_{\Delta t_{D/L}} \cdot \Delta t_{D/L,DP}\right).    (F.101)

According to Eq. (F.80), p_C_D/L can be written as:

p_{C_{D/L}} = \left(1 + \frac{1}{R_{D/L,sum,DP}} \cdot \left(p_{R_{D/L,HK}} \cdot R_{D/L,HK,DP} + p_{R_{D/L,S}} \cdot R_{D/L,S,DP}\right)\right) \cdot (1 + p_{\Delta t_{D/L}}) - 1.    (F.102)

F.5.2.1. Application of Parameter Influence Net Method Algorithm on Modelling of Contact Time for Circular Orbits

Applying the PINM rules presented in Section 5.2.2 on Eq. (F.81), it is:

p_{\Delta t_{D/L}} = p_{t_{C,c}} = (1 + p_{t_{Orbit}}) \cdot (1 + p_{\lambda_{max,c}}) - 1.    (F.103)

The percentaged change of the orbital period, p_{t_Orbit}, is obtained with Eq. (F.64) and Eq. (F.54).

Based on Eq. (F.82) and the PINM rules presented in Section 5.2.2, p_λmax,c is derived to:

p_{\lambda_{max,c}} = \frac{1}{\lambda_{DP}} \cdot \left(-p_{\epsilon_{min}} \cdot \epsilon_{min,DP} - p_{\eta_{max,c}} \cdot \eta_{c,DP}\right).    (F.104)

To elaborate further on Eq. (F.83), its different functional levels are carved out with a substitution to:

\sin(\eta_{max,c}) = \sin(\rho) \cdot \cos(\epsilon_{min}) = m = v_1 \cdot l.    (F.105)

So η_max,c can be expressed by:

\eta_{max,c} = \arcsin(m) = \arcsin(v_1 \cdot l).    (F.106)

With Table F.2, the percentaged change of η_max,c is obtained with:

p_{\eta_{max,c}} = \frac{m_{DP}}{\arcsin(m_{DP}) \cdot \sqrt{1 - m_{DP}^2}} \cdot p_m,    (F.107)

with

p_m = (1 + p_{v_1}) \cdot (1 + p_l) - 1.    (F.108)

The parameter p_v1 is obtained with Eqs. (F.47) and (F.48).

With l = cos(ε_min) and Table F.2, p_l is expressed by:

p_l = -\epsilon_{min,DP} \cdot \tan(\epsilon_{min,DP}) \cdot p_{\epsilon_{min}}.    (F.109)


F.5.2.2. Application of Parameter Influence Net Method Algorithm on Modelling of Contact Time for Elliptical Orbits

Applying the PINM rules on Eqs. (F.89) and (F.93) with the substitution k = \frac{1}{\omega_{min/max}}, it is:

p_{t_{C,e,max,p/a}} = (1 + p_k) \cdot (1 + p_{\lambda_{max,e,p/a}}) - 1.    (F.110)

Eqs. (F.104) to (F.109) with h = h_{p/a} yield p_λmax,e,p/a, depending on the position of the spacecraft for which the contact time shall be calculated.

With the substitution w = R_E + h, employed in Section F.2, it is:

k = w \cdot \sqrt{\frac{a}{\mu}},    (F.111)

and consequently:

p_k = (1 + p_a)^{\frac{1}{2}} \cdot (1 + p_w) - 1.    (F.112)

While p_w is obtained with Eq. (F.48) for h = h_{p/a}, p_a is gained with Eq. (F.61).

F.6. Parameter Influence Net - Exemplary Implementation

In addition to Figs. 5.5 and 5.6, the following figures display further extracts of the exemplary graphical implementation of the PINs described in Annexes F.2 to F.5.

Figure F.12 pictures the PIN of the communication system according to its derivation in Annex F.5. An extract of the implementation of the communication system PIN in MS Excel is presented in Fig. F.13.


Figure F.12.: Parameter Influence Net of Modelled Communication System.


Figure F.13.: Exemplary Parameter Influence Net: Extracts of Modelled Communication System Net.

Table F.2.: Percentaged Changes for Trigonometric Functions, Comparison of Exact Calculation and Taylor-Based Approximation and Domains of Definition.

p_sin x — exact: p_{\sin x} = \frac{\sin(x_{DP} \cdot (1 + p_x))}{\sin(x_{DP})} - 1; Taylor approximation: p_{\sin x} = \frac{x_{DP}}{\tan(x_{DP})} \cdot p_x; domain of definition: x_DP ∈ ]0°;180°[.

p_cos x — exact: p_{\cos x} = \frac{\cos(x_{DP} \cdot (1 + p_x))}{\cos(x_{DP})} - 1; Taylor approximation: p_{\cos x} = -x_{DP} \cdot \tan(x_{DP}) \cdot p_x; domain of definition: x_DP ∈ [0°;90°[ ∧ ]270°;360°].

p_arcsin x — exact: p_{\arcsin x} = \frac{\arcsin(x_{DP} \cdot (1 + p_x))}{\arcsin(x_{DP})} - 1 with (1 + p_x) · x_DP ≤ 1; Taylor approximation: p_{\arcsin x} = \frac{x_{DP}}{\arcsin(x_{DP}) \cdot \sqrt{1 - x_{DP}^2}} \cdot p_x; domain of definition: x_DP ∈ ]−1;0[ ∧ ]0;1[.

p_arccos x — exact: p_{\arccos x} = \frac{\arccos(x_{DP} \cdot (1 + p_x))}{\arccos(x_{DP})} - 1 with (1 + p_x) · x_DP ≤ 1; Taylor approximation: p_{\arccos x} = -\frac{x_{DP}}{\arccos(x_{DP}) \cdot \sqrt{1 - x_{DP}^2}} \cdot p_x; domain of definition: x_DP ∈ ]−1;1[.

Table F.3.: Relative Deviation (p_x,Taylor − p_x,exact)/p_x,exact for Common Trigonometric Functions and Evolution of Terms Towards the Point of Discontinuity. [The table lists the relative deviation expressions (F.14) to (F.17) for sin, cos, arcsin and arccos, together with the limits of the involved terms as x_DP approaches the respective point of discontinuity (x_DP → π, π/2, ±1), each of which evaluates to 0.]

G. Hubble Space Telescope Data

Table G.1 summarizes data from the Hubble Space Telescope mission that was employed as exemplary design point data for the implementation of the PINs in Chapter 5.

Derivation of Communication Subsystem Related Data

After its final servicing mission, the Hubble Space Telescope (HST) comprises three 12 Gbit solid state recorders to store the science data [208], i.e. it is assumed that C_max,DP = 36 Gbit. The science data amount per week received on ground is 140 Gbit, see [209]. In accordance with the JUICE mission, see Section D.1, it is assumed that an additional 10 % of HK data is transmitted to ground each week, i.e. 14 Gbit, resulting in a total amount of data of 154 Gbit/week. About twice a day, the HST uses relay satellites of the Tracking and Data Relay Satellite System (TDRSS), which are visible for about 95 % of the orbit [208], to transmit data to ground to the White Sands Test Facility in White Sands, New Mexico [210]. Therefore, the communication models in Section F.5 that assume a direct link between the data producing spacecraft and the ground stations are strictly speaking not representative for the HST. Nevertheless, equivalent parameter values are derived in the following to be used in the Parameter Influence Net for demonstration purposes.

The data production time span Δt_Prod of the HST is assumed to be equal to the orbital period minus time spent in outages of 25 min (worst case) over the South Atlantic Anomaly (see [208]) and shortened by the time spent for slew manoeuvres between targets, assumed to be equal to the LOFT mission, i.e. 5 % of the orbital period, hence 4.8 minutes. The similarity between LOFT and HST is based on the statement that the HST requires 14 min for a slew of 90 degrees according to Nelson et al. [208], which is similar for LOFT, see the slew profile for 60 degrees in Fig. A.5. Following these assumptions, Δt_Prod for the HST results to be 66 min.

Assuming that science and HK data are produced simultaneously and continuously, R_prod,S,DP is derived to be 231.5 kbps and R_prod,HK,DP to be 23.2 kbps.

Δt_D/L,DP, which equals t_C,c,DP, is calculated based on the assumption that the HST downlinks data directly to the ground. In that case, Eqs. (F.81) to (F.84) can be applied. With h = 560 km and the assumption that ε_min,DP = 10°, it is t_C,c,DP = 483.2 s. With this information, it is assumed that R_D/L,S,DP and R_D/L,HK,DP are 8.5 times higher than R_prod,S,DP and R_prod,HK,DP, which is in line with the assumption that the produced data is downlinked to Earth in one ground station pass.
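Two of the derived values can be cross-checked numerically: the science production rate follows from spreading 140 Gbit continuously over one week, and the contact time from Eqs. (F.81) to (F.84) with h = 560 km and ε_min = 10°. R_E and µ are assumed standard values.

```python
import math

# continuous production: 140 Gbit per week, Table G.1: 231.5 kbps
R_prod_S = 140e9 / (7 * 24 * 3600.0)          # [bit/s]
assert abs(R_prod_S - 231.5e3) < 0.1e3

# contact time per pass, Eqs. (F.81) to (F.84)
R_E, MU = 6378.0, 398600.0                    # [km], [km^3/s^2], assumed values
a = R_E + 560.0
t_orbit = 2.0 * math.pi * math.sqrt(a**3 / MU)
eps = math.radians(10.0)
lam = math.pi / 2.0 - eps - math.asin((R_E / a) * math.cos(eps))
t_c = t_orbit * lam / math.pi
assert abs(t_c - 483.2) < 1.0                 # value quoted above

# downlink rate: 8.5 times the production rate, Table G.1: 1967.8 kbps
assert math.isclose(8.5 * 231.5, 1967.8, rel_tol=1e-3)
```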

Table G.1.: Hubble Space Telescope Design Point Data.

Orbit
Solar constant S: 1358 W/m²
Orbital altitude h_Orbit: 560 [208] km
Orbital inclination i: 0 °
Orbital eccentricity e: 0 −
Solar incidence angle α: 0.001 °
Number of orbits to be considered n: 1 −

Battery
Maximum battery energy E_max: 12375 [208, 211] Wh
Battery charging efficiency η_charge: 0.9212 [208, 212] −
Battery discharging efficiency η_discharge: 0.784 [208, 212] −
Battery charging time Δt_c: 1.0045 h
Battery discharging time Δt_d: 0.5931 h

Solar array
Solar array size A_SA: 36.788 [208] m²
Cell coverage efficiency η_cov: 0.8 −
Cell efficiency η_cell: 0.185 [206, 208] −
Yearly degradation of cells d: 2.75 [206] %
S/C lifetime l: 5 yrs
Temperature efficiency η_temp: 0.85 [212] −
MPPT efficiency η_MPPT: 0.95 [208, 212] −

S/C Bus / Payload
Power demand P_User in sunlight: 4000 W
Power demand P_User in eclipse: 485 W

Communication
Mass memory capacity C_max: 36 [208] Gb
Science data production rate R_prod,S: 231.5 kbps
HK data production rate R_prod,HK: 23.15 kbps
Science data downlink rate R_D/L,S: 1967.8 kbps
HK data downlink rate R_D/L,HK: 196.78 kbps
Minimum elevation angle ε_min: 10 °

H. Definitions and Translations of Employed Notions

Table H.1.: Tool and Simulator Quality Model Criteria Definitions.

Criterion Definition

Acceptability Degree of acceptance by the user regarding the usability of the product for his/her needs.

Accessibility Degree to which the user can access and manipulate the internal system processes, e.g. the source code. [122]

Adaptability to user character Degree to which the product is capable of being individualized, i.e. users can modify interaction and presentation of information to suit their individual capabilities and needs. [118]

Biological Perceptibility Capability of the product to be perceptible by the user via its senses. [122]

Clarity Degree to which the display and the arrangement of information on the monitor is clear. [121]

Co-Existence Degree to which a product can perform its required functions efficiently while sharing a common environment and resources with other product(s), without detrimental impact on any other product(s). [69]

Compatibility Degree to which a product can exchange information with other products and/or perform its required functions while sharing the same hardware or software environment. [69]

Conformity with User Expectations Capability of a product to correspond to predictable contextual needs of the user and to commonly accepted conventions. [118]

Controllability Capability of the product to have the pace of the interaction and order of functions adjusted to the user needs. [118]

Documentation Degree to which a product and its functionalities are documented to be learned and looked-up. [122]

Emotional Reliability Capability of a product to evoke the user’s trust into the product.


Familiarization Degree to which a product design allows for an intuitive handling of the product, e.g. via a self-descriptive structure. [122]

Fault Tolerance Degree to which a product operates as intended despite the presence of hardware or software faults. [69]

Functional Appropriate Visualization and Structure Degree to which a product's visualization and structure is adapted to its task. [122]

Functional Completeness Degree to which the set of product functions covers the user’s tasks and objectives. [69]

Functional Correctness Degree to which a product provides the correct results with the needed degree of precision. [69]

Functional Reliability Degree to which a product fulfils a task in a given time span under defined circumstances. [69]

Functional Suitability Degree to which a product fulfils the functional requirements, implying a tailored functional scope and a specified correctness. [69]

Interface Maturity Capability of a product to fulfil the required human-computer-interface reliability. [69]

Interoperability Degree to which two or more products are capable of exchanging information and using the information that has been exchanged. [69]

Maintainability Degree of effectiveness and efficiency with which a product or system can be modified by the intended maintainers. [69]

Manageability Capability of the product to be used according to its purpose in its completeness without difficulties. [122]

Modifiability Degree to which a product or system can be effectively and efficiently modified without introducing defects or degrading the existing product quality. [69]

Modularity Degree to which a system or computer program is composed of discrete components such that a change to one component has minimal impact on other components. [69]

Portability Degree of effectiveness and efficiency with which a product can be transferred from one environment (e.g. hardware, user account) to another. [69]

Readability Degree to which text and symbols are readable on the display without constraints. [121]

Recoverability Degree to which, in the event of an interruption or a failure, a product can recover the data directly affected and re-establish the desired state of the system. [69]

Reusability Degree to which a product or modules of it can be re-used in more than one project. [69]

Self-Descriptiveness Degree to which a product’s functionalities and handling are obvious to the users. [118]

Technical Maturity Capability of a product to fulfil the required functional reliability over time. [69]

Transparency Degree to which the program’s internal processes, on which the functionalities of the product are based, are known through documentation and/or access to the source code. [122]

Table H.2.: Definition of Common User-Centred Design Related Terms According to Standard DIN EN ISO 9241-11 [119] on Ergonomics of Human-System Interaction for Office Work.

Notion Definition

Context of Use Users, tasks, equipment (hardware, software and materials), and the physical and social environments in which a product is used.

Effectiveness Accuracy and completeness with which users achieve specified goals.

Efficiency Resources expended in relation to the accuracy and completeness with which users achieve goals.

Ergonomics / Study of Human Factors Scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance.

Human-Centred Design Approach to design and development that aims to make interactive systems more usable by focusing on the use of the system and applying human factors/ergonomics and usability knowledge and techniques.

Pleasure/Joy of Use Positive emotions/happiness related to the use of a product.

Satisfaction Freedom from discomfort and positive attitudes towards the use of the product.

Usability Extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.

User Person who interacts with the product.

User Experience Person’s perceptions and responses resulting from the use and/or anticipated use of a product, system or service.

User Interface All components of an interactive system (software or hardware) that provide information and controls for the user to accomplish specific tasks with the interactive system.

Table H.3.: Tool and Simulator Quality Model: English-German Translations of Common Notions.

English German

Acceptability Akzeptanz

Accessibility Zugänglichkeit

Adaptability to User Character Individualisierbarkeit

Biological Perceptibility Wahrnehmbarkeit

Clarity Klarheit

Co-Existence Co-Existenz

Compatibility Kombinierbarkeit

Conformity with User Expectations Konformität mit Nutzererwartung

Controllability Steuerbarkeit

Error Tolerance Fehlertoleranz

Documentation Dokumentation

Emotional Reliability Emotionale Zuverlässigkeit

Familiarization Eingewöhnung

Fault Tolerance Fehlertoleranz

Functional Appropriate Visualization and Structure Aufgabenangepasste Visualisierung und Struktur

Functional Completeness Funktionale Vollständigkeit

Functional Correctness Funktionale Richtigkeit

Functional Reliability Funktionale Zuverlässigkeit

Functional Suitability Aufgabenangemessenheit

Interface Maturity Schnittstellenreife

Interoperability Kompatibilität

Maintainability Wartbarkeit

Manageability Handhabbarkeit

Modifiability Modifizierbarkeit

Modularity Modularität

Portability Übertragbarkeit

Readability Lesbarkeit

Recoverability Wiederherstellbarkeit

Reusability Wiederverwendbarkeit

Self-Descriptiveness Selbst-Erklärbarkeit

Technical Maturity Technische Reife

Transparency Transparenz

Usability Gebrauchstauglichkeit

Utility Nützlichkeit

Bibliography

[1] NASA. NASA Systems Engineering Handbook. NASA/SP-2007-6105 Rev1, Washington, D.C., December 2007.

[2] ECSS-E-ST-10. Space engineering - System engineering general requirements, European Cooperation for Space Standardization, ECSS Secretariat, ESA-ESTEC, Requirements & Standards Division, Noordwijk, The Netherlands, 06 March 2009.

[3] Systems Management College, Department of Defense. Systems Engineering Fundamentals. Defense Acquisition University Press, Fort Belvoir, Virginia, 2001.

[4] BKCASE Editorial Board. The Guide to the Systems Engineering Body of Knowledge (SEBoK), v 1.8. Hoboken, NJ: The Trustees of the Stevens Institute of Technology. Accessed 09.01.2018. www.sebokwiki.org. BKCASE is managed and maintained by the Stevens Institute of Technology Systems Engineering Research Center, the International Council on Systems Engineering, and the Institute of Electrical and Electronics Engineers Computer Society, 2017.

[5] ECSS-E-TM-10-21A. Space engineering - System modelling and simulation, European Cooperation for Space Standardization, ECSS Secretariat, ESA-ESTEC, Requirements & Standards Division, Noordwijk, The Netherlands, 16 April 2010.

[6] Robin Blommestijn and Joachim Fuchs. European Space Technology Harmonisation Technical Dossier System Modelling and Simulation Tools. European Space Agency, Issue 2, Rev. 0, 14 November 2011.

[7] . Methoden der System-Entwicklung. Carl Hanser Verlag, München, and John Wiley & Sons GmbH, Frankfurt am Main, 1973.

[8] Alexander Kossiakoff, William N. Sweet, Samuel J. Seymour, and Steven M. Biemer. Systems Engineering Principles and Practice. John Wiley and Sons, Inc., Hoboken, New Jersey, USA, 2011.

[9] http://www.agi.com/products/engineering-tools, last access 20.05.2019.

[10] ECSS-M-ST-10. Space project management - Project planning and implementation, European Cooperation for Space Standardization, ECSS Secretariat, ESA-ESTEC, Requirements & Standards Division, Noordwijk, The Netherlands, 06 March 2009.

[11] Stephan Theil. High Performance Spacecraft Dynamics Simulator. Presentation, http://www.dlr.de/sc/Portaldata/15/Resources/dokumente/WS_120607/ZARM_HPSim_v2Theil.pdf, 12 June 2007.

[12] Valdemir Carrara. An Open Source Satellite Attitude and Orbit Simulator Toolbox for Matlab. In Proceedings of the 17th International Symposium on Dynamic Problems of Mechanics, Natal, Brazil, 22-27 February 2015.

[13] http://tudat.tudelft.nl/, last access 20.05.2019.

[14] http://www-os3.kn.e-technik.tu-dortmund.de/, last access 20.05.2019.

[15] Brian Niehoefer, Sebastian Subik, and Christian Wietfeld. The CNI Open Source Satellite Simulator based on OMNeT++. In 6th International OMNeT++ Workshop, Cannes, France, 05 March 2013.

[16] http://www.gano.name/SSS/index.php, last access 20.05.2019.

[17] http://www.gano.name/shawn/JSatTrak/, last access 20.05.2019.

[18] http://sourceforge.net/projects/sputnixsatellit/files/?source=navbar, last access 20.05.2019.

[19] TERMA A/S. SIMSAT Simulators. Presentation, http://www.terma.com/media/148537/simsat_simulators.pdf, January 2002.

[20] http://www.eurosim.nl/products/systems.shtml, last access 20.05.2019.

[21] http://www.esa.int/Our_Activities/Operations/gse/SIMULUS, last access 20.05.2019.

[22] http://www.ecosimpro.com/products/ecosimpro/, last access 20.05.2019.

[23] Pedro Cobas-Herrero, Borja Garcia-Gutierrez, Ramon Perez-Vara, Raul Avezuela-Rodriguez, and Carmen Gregori de la Malla. An ESA State-of-the-Art Simulation Tool for Space Applications. In 9th International Workshop on Simulation for European Space Programmes, SESP 2006, Noordwijk, the Netherlands, 06-08 November 2006.

[24] http://www.esa.int/TEC/Modelling_and_simulation/SEMAWH8LURE_0.html, last access 20.05.2019.

[25] http://software.nasa.gov/software/GSC-17778-1, last access 20.05.2019.

[26] The GMAT Development Team. General Mission Analysis Tool (GMAT): User Guide. http://gmat.sourceforge.net/docs/R2017a/help-a4.pdf. Version R2017a.

[27] http://www.itrinegy.com/solutions/by-task/satellite-link-simulation, last access 20.05.2019.

[28] http://www.vocality.com/satellite-simulator-3/, last access 20.05.2019.

[29] http://satellite-ns3.com/, last access 20.05.2019.

[30] http://www.mscsoftware.com/product/msc-nastran, last access 20.05.2019.

[31] http://www.esatan-tms.com/products/product.php, last access 20.05.2019.

[32] http://products.office.com/de-de/excel, last access 20.05.2019.

[33] http://www.mathworks.com/products/matlab.html, last access 20.05.2019.

[34] http://www.mathworks.com/products/simulink.html, last access 20.05.2019.

[35] http://www.astos.de/products/astos, last access 20.05.2019.

[36] STK SOLIS. Product Sheet, http://p.widencdn.net/1e3my6/SOLIS-Product-Specsheet.

[37] http://www.psatellite.com/products/sct/, last access 20.05.2019.

[38] Spacecraft Control Toolbox Product Comparison. Product Sheet, http://www.psatellite.com/wp-content/uploads/2016/05/ToolboxComparisonTable.pdf.

[39] http://www.gmv.com/en/Sectors/space/Space_Segment/Satellite_and_mission_simulators.html, last access 20.05.2019.

[40] Colleagues of the Future Programs Department at Airbus Defence and Space GmbH, Friedrichshafen, Germany, and Tanja Nemetzade. Personal communication, several meetings over the year 2013.

[41] Marcel Anklam, Roger Förstner, and Susanne Fugger. Customized Science Payload Simulator for a Particular Mission (ESA’s BepiColombo). In 63rd International Astronautical Congress, Naples, Italy, 1-5 October 2012.

[42] Johannes Benkhoff, Jan van Casteren, Hajime Hayakawa, Masaki Fujimoto, Harri Laakso, Mauro Novara, Paolo Ferri, Helen R. Middleton, and Ruth Ziethe. BepiColombo - Comprehensive exploration of Mercury: Mission overview and science goals. Planetary and Space Science, 58(Issues 1-2):2–20, January 2010.

[43] http://sci.esa.int/bepicolombo/33022-summary/, last access 20.05.2019.

[44] http://sci.esa.int/bepicolombo/47346-fact-sheet/, last access 20.05.2019.

[45] http://sci.esa.int/bepicolombo/60833-esa-pr-28-2018-bepicolombo-blasts-off-to-investigate-mercurys-mysteries/, last access 20.05.2019.

[46] Roger Förstner and Tanja Nemetzade. Personal communication, Meeting, October 2011.

[47] VECTRONIC Aerospace GmbH. BepiColombo Science Payload Simulator Architectural Design Document. BC-VAS-DD-00002, Issue 3, Version 0, 20.04.2012.

[48] VECTRONIC Aerospace GmbH. BepiColombo Science Payload Simulator User Manual. BC-VAS-MA-00002, Issue 5, Version 0, 20.04.2012.

[49] Chadia Abras, Diane Maloney-Krichmar, and Jenny Preece. User-Centered Design. In W.S. Bainbridge, editor, Berkshire Encyclopedia of Human-Computer Interaction, volume 2, 1st edition, pages 763–767. Berkshire Publishing, Great Barrington, Massachusetts, USA, 2004.

[50] ISO 9241-210:2010. Ergonomics of human-system interaction - Part 210: Human-centred design for interactive systems, International Organization for Standardization (ISO), 1st edition, Switzerland, 2010.

[51] Ken Eason. Information technology and organizational change. CRC Press, London, UK, 2005.

[52] Yvonne Rogers, Helen Sharp, and Jenny Preece. Interaction Design: Beyond Human-Computer Interaction. John Wiley & Sons Ltd., Chichester, West Sussex, UK, 3rd edition, 2011.

[53] Donald A. Norman and Stephen W. Draper, editors. User-Centered System Design - New Perspectives on Human-Computer Interaction. Lawrence Erlbaum Associates Inc., Hillsdale, New Jersey, USA, 1986.

[54] John D. Gould and Clayton Lewis. Designing for Usability: Key Principles and What Designers Think. Communications of the ACM, 28, Issue 3:300–311, March 1985.

[55] Donald A. Norman, editor. The Design of Everyday Things. Doubleday, New York, USA, 1988.

[56] Jakob Nielsen. Designing Web Usability: The Practice of Simplicity. New Riders Publishing, Thousand Oaks, CA, USA, 1st edition, 1999.

[57] Nuray Aykin, editor. Usability and Internationalization of Information Technology. Lawrence Erlbaum Associates Inc, Mahwah, NJ, USA, 1st edition, 2005.

[58] The research-based web design & usability guidelines, enlarged/expanded edition. Technical report, U.S. Department of Health and Human Services, U.S. Government Printing office, Washington, 2006.

[59] William Lidwell, Kritina Holden, and Jill Butler. Universal Principles of Design: 125 Ways to Enhance Usability, Influence Perception, Increase Appeal, Make Better Design Decisions, and Teach through Design. Rockport Publishers Inc, Beverly, MA, USA, 2nd edition, 2010.

[60] Tom Tullis and Bill Albert. Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics. Elsevier Inc., Waltham, MA, USA, 2nd edition, 2013.

[61] Jakob Nielsen and Kara Pernice. Eyetracking Web Usability. New Riders, Berkeley, CA, USA, 1st edition, 2013.

[62] Katherine Isbister and Noah Schaffer. Game Usability: Advice from the Experts for Advancing the Player Experience. Morgan Kaufmann Publishers, Burlington, MA, USA, 1st edition, 2008.

[63] http://www.usability-ux.fit.fraunhofer.de/, last access 20.05.2019.

[64] http://www.mitsue.co.jp/english/global_ux/, last access 20.05.2019.

[65] http://www.ischool.berkeley.edu/research/uxresearch, last access 20.05.2019.

[66] http://www.usability.de/, last access 20.05.2019.

[67] http://www.usability.gov/, last access 20.05.2019.

[68] ISO 9000:2005. Quality management systems - Fundamentals and vocabulary, International Organization for Standardization (ISO), 2005.

[69] ISO/IEC 25010:2011. Systems and software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - System and software quality models, International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), 1st edition, Switzerland, 2011.

[70] ISO 8402:1994. Quality management and quality assurance - Vocabulary, International Organization for Standardization (ISO), 1994.

[71] ISO/IEC 9126-1:1991. Software engineering - Product quality, International Organization for Standardization (ISO), 1991.

[72] Nigel Bevan. Quality in use: Meeting user needs for quality. Journal of Systems and Software, 49(1):89–96, December 1999.

[73] Nigel Bevan. Measuring usability as quality of use. Software Quality Journal, 4(2):115–130, 1995.

[74] Nigel Bevan, Jurek Kirakowski, and Jonathan Maissel. What is usability? In Proceedings of the 4th International Conference on Human Computer Interaction, Stuttgart, Germany, September 1991.

[75] http://wirtschaftslexikon.gabler.de/Definition/benutzerfreundlichkeit.html?referenceKeywordName=Usability, last access 20.05.2019.

[76] Sven Heinsen and Petra Vogt, editors. Usability praktisch umsetzen: Handbuch für Software, Web, Mobile Devices und andere interaktive Produkte. Carl Hanser Verlag GmbH und Co. KG, München, 1st edition, 2003.

[77] Alina Horn. Gestaltung von Usability-Tests für raumfahrtspezifische Software. Studienarbeit, ISTA-13-SA-05, Universität der Bundeswehr München, Germany, August 2013.

[78] http://www.nngroup.com/articles/definition-user-experience/, last access 20.05.2019.

[79] Ludwig von Bertalanffy. General Systems Theory: Foundations, Development, Applications. Braziller, New York, 1968.

[80] Eberhardt Rechtin. Systems Architecting of Organizations: Why Eagles Can’t Swim. CRC Press LLC, Boca Raton, Florida, USA, 2000.

[81] International Council on Systems Engineering (INCOSE) SE Handbook Working Group. Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities, 4th edition, INCOSE-TP-2003-002-04. John Wiley and Sons, Inc., Hoboken, New Jersey, USA, October 2015.

[82] Charles Francois. International Encyclopedia of Systems and Cybernetics. K.G. Saur, Muenchen, 2004.

[83] Scott Jackson, Derek Hitchins, and Howard Eisner. What is the Systems Approach? INCOSE Insight, 13(Issue 1):41–43, April 2010.

[84] Hillary G. Sillitto. Design principles for ultra-large-scale systems. In Proceedings of the 20th Annual International Council on Systems Engineering (INCOSE) International Symposium, Chicago, IL, USA, July 2010.

[85] Philip Warren Anderson. More is different. Science, 177(4047):393–396, August 1972.

[86] Yaneer Bar-Yam. Dynamics of Complex Systems. Addison-Wesley, Reading, Massachusetts, 1st edition, 1997.

[87] Warren Weaver. Science and complexity. American Scientist, 36:536–544, 1948.

[88] Sarah A. Sheard and Ali Mostashari. Complexity types: From science to systems engineering. In Proceedings of the 21st Annual International Council on Systems Engineering (INCOSE) International Symposium, 20-23 June 2011, Denver, Colorado, USA, volume 12, pages 295–311, 2011.

[89] Maik Maurer. Complexity Management in Engineering Design - a Primer. Springer-Verlag GmbH Germany, 2017.

[90] Eric C. Honour. Understanding the value of systems engineering. In Proceedings of the 14th Annual INCOSE International Symposium, 2004.

[91] International Council on Systems Engineering (INCOSE) SE Handbook Working Group. Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities, v. 3.2.1, INCOSE-TP-2003-002-03.2.1. International Council on Systems Engineering (INCOSE), San Diego, CA, USA, January 2011.

[92] Michael D. Griffin. Systems engineering and the "two cultures" of engineering. NASA Boeing Lecture, March 28 2007.

[93] Michael Ryschkewitsch, Dawn Schaible, and Wiley Larson. The art and science of systems engineering. Systems Engineering Forum, 3(2):81–100, 2009.

[94] Wiley J. Larson, Doug Kirkpatrick, Jerry Jon Sellers, L. Dale Thomas, and Dinesh Varma (eds.). Applied Space Systems Engineering. Space Technology Series. McGraw-Hill, 2009.

[95] Charles D. Brown. Elements of Spacecraft Design. AIAA Education Series, 2002.

[96] John F. Muratore. The art of systems engineering. Lecture, University of Tennessee Space Institute, Tullahoma, Tennessee, October 16-17, 2008.

[97] ISO 15288. Systems and software engineering - System life cycle processes, International Organization for Standardization (ISO) / International Electrotechnical Commission (IEC), Institute of Electrical and Electronics Engineers (IEEE), 2008.

[98] NASA. Systems Engineering Processes and Requirements, NPR 7123.1. NASA, Washington, D.C., 2007.

[99] Adolf Peter Bröhl and Wolfgang Dröschel, editors. Das V-Modell: Der Standard für die Softwareentwicklung mit Praxisleitfaden. Oldenbourg Wissenschaftsverlag, München, 2nd edition, 1995.

[100] Udo Lindemann. Handbuch Produktentwicklung. Carl Hanser Verlag München, 2016.

[101] Klaus Ehrlenspiel and Harald Meerkamm. Integrierte Produktentwicklung: Denkabläufe, Methodeneinsatz, Zusammenarbeit. Carl Hanser Verlag München, 6th edition, 2017.

[102] Mark W. Maier and Eberhardt Rechtin. The Art of Systems Architecting. CRC Press, Taylor and Francis Group LLC, Boca Raton, Florida, USA, 3rd edition, 2009.

[103] Ben Shneiderman, Catherine Plaisant, Maxine S. Cohen, and Steven M. Jacobs. Designing the User Interface: Strategies for Effective Human-Computer Interaction. Addison-Wesley Longman, Amsterdam, The Netherlands, 5th edition, 2012.

[104] Susanne Fugger and Tanja Nemetzade. Personal communication, Meeting, January 2017.

[105] Tanja Nemetzade and Roger Förstner. Lessons Learned - Development and Application of a User-Centered System Tool for ESA’s LOFT Mission. In 6th International Conference in Systems & Concurrent Engineering for Space Applications, SECESA 2014, Stuttgart, Germany, 8-10 October 2014.

[106] Carlos Corral van Damme. LOFT Observation Plan. European Space Agency, draft, Issue 1, Revision 0, 15.03.2013.

[107] Astrium Satellites GmbH. LOFT Assessment Study System Design Report. LO-AST-TN-2.1, Issue 3.1, 06 August 2013.

[108] http://sci.esa.int/cosmic-vision/50321-juice-is-europe-s-next-large-science-mission/, last access 20.05.2019.

[109] http://sci.esa.int/juice/56165-preparing-to-build-esas-jupiter-mission/, last access 20.05.2019.

[110] http://sci.esa.int/juice/57014-jupiter-mission-contract-ceremony/, last access 20.05.2019.

[111] Eric Ecale. Inputs for JUICE simulator V2. Airbus Defence and Space SAS, JUI-ADST-SYS-TN-000160, Issue 1, 19 May 2016.

[112] Eric Ecale. Statement of work for the JUICE Mission Simulator. Airbus Defence and Space SAS, JUI-ADST-SYS-SOW-000138, Issue 1, 28 January 2016.

[113] Tanja Nemetzade and Roger Förstner. Generalizability of Mission and Satellite Design by Means of Key Variables and Key Indicators as Basis for User-Centered Mission and System Simulation. In 5th International Workshop in Systems & Concurrent Engineering for Space Applications, SECESA 2012, Lisboa, Portugal, 17-19 October 2012.

[114] Jens Christoph Pirzkall. Studie zur technischen Implementierung ausgewählter Raumfahrtmissionen. Bachelorarbeit, ISTA-13-BA-27, Universität der Bundeswehr München, Germany, February 2014.

[115] Stefan Bergler. Assessment of technical similarities of selected space missions. Bachelorarbeit, ISTA-14-BA-26, Universität der Bundeswehr München, Germany, January 2015.

[116] VECTRONIC Aerospace GmbH. LOFT Mission and System Simulator User Manual. EN-VAS01, Version 1, Issue 1, 27.06.2013.

[117] VECTRONIC Aerospace GmbH. GMSS Mission and System Simulator User Manual. EN-VAS_GS-UM, Version 1, Issue 2, December 2013.

[118] ISO 9241-110:2006. Ergonomics of human-system interaction - Part 110: Dialogue principles, Inter- national Organization for Standardization (ISO), 1st edition, Switzerland, 2006.

[119] DIN EN ISO 9241-11:1998. Ergonomische Anforderungen für Bürotätigkeiten mit Bildschirmgeräten, Teil 11: Anforderungen an die Gebrauchstauglichkeit - Leitsätze, International Organization for Stan- dardization (ISO), Beuth-Verlag, Berlin, 1998.

[120] Tanja Nemetzade, Muriel Lemaréchal, Peter Vörsmann, and Roger Förstner. Entwicklung eines Qualitätsmodells für die Nutzerzentrierte Toolentwicklung zur Anwendung in den frühen Satellitenentwurfsphasen. In 62. Deutscher Luft- und Raumfahrtkongress, Stuttgart, Germany, 10-12 September 2013.

[121] Muriel Lemaréchal. Analysis of system simulators in the space sector by means of a quality evaluation scheme. Diplomarbeit, R 1233 D, Technische Universität Braunschweig, Germany, April 2013.

[122] Tanja Nemetzade and Roger Förstner. Bestimmung der Relevanz von Qualitätskriterien für die nutzerzentrierte Toolentwicklung zur Anwendung in den frühen Satellitenentwurfsphasen. In 63. Deutscher Luft- und Raumfahrtkongress, Augsburg, Germany, 16-18 September 2014.

[123] David Kähler. Identifikation und Analyse von kommerziellen und offen zugänglichen Simulatoren in verschiedenen Industriezweigen mit Schwerpunkt in der Raumfahrttechnik. Bachelorarbeit, ISTA-12-BA-21, Universität der Bundeswehr München, Germany, January 2013.

[124] Peter Atteslander. Methoden der empirischen Sozialforschung. Walter de Gruyter Verlag, Berlin, 10th edition, 2003.

[125] Jürgen Bortz and Nicola Döring. Forschungsmethoden und Evaluation für Human- und Sozialwissenschaftler. Springer Verlag, Heidelberg, 4th edition, 2006.

[126] Bill Gillham. Developing a Questionnaire. Continuum International Publishing Group Ltd., London, UK, 2nd edition, 2008.

[127] Rolf Porst. Fragebogen: ein Arbeitsbuch, Studienskripten zur Soziologie. Springer-Verlag, Wiesbaden, Germany, 1st edition, 2008.

[128] Abraham N. Oppenheim. Questionnaire Design, Interviewing and Attitude Measurement. Pinter Publications, 1992.

[129] Frederick Herzberg, Bernhard Mausner, and Barbara Bloch Snyderman. The Motivation to Work. New York, Chapman & Hall London, 2nd edition, 1959.

[130] ESA-ESTEC. Appendix 1 to ESTEC ITT AO/1-7414/13/NL/MH, Statement of Work. System Functional Simulations in the Concurrent Design Process. TEC-SWG_12-560, Issue 1, Revision 0, Noordwijk, The Netherlands, 30 November 2012.

[131] Jacob Nielsen. Usability Inspection Methods. Wiley and Sons, Inc., New York, NY, USA, 1994.

[132] Florian Sarodnick and Henning Brau. Methoden der Usability Evaluation - Wissenschaftliche Grund- lagen und praktische Anwendung. Hofgrefe Verlag, Bern, Schweiz, 3rd edition, 2016.

[133] Michael Richter and Markus Flückiger. Usability Engineering kompakt - Benutzbare Produkte gezielt entwickeln. Springer Vieweg, Springer-Verlag Berlin Heidelberg, 2013.

[134] James Hom. The Usability Methods Toolbox Handbook. http://www.idemployee.id.tue.nl/g.w.m.rauterberg/lecturenotes/usabilitymethodstoolboxhandbook.pdf, last access 18.01.2019.

[135] Jeff Rubin and Dana Chisnell. Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests. Wiley Publishing, Inc., Indianapolis, Indiana, 2nd edition, 2008.

[136] Joseph S. Dumas and Janice Redish. A Practical Guide to Usability Testing. Intellect Books, Bristol, U.K., 2nd edition, 1999.

[137] Jens Christoph Pirzkall and Michael Suttarp. Analyse und Bewertung raumfahrtspezifischer Software gemäß des User Centered Designs. Projektarbeit, ISTA-14-PA-11, Universität der Bundeswehr München, Germany, November 2014.

[138] Tanja Nemetzade and LOFT study team. LOFT Assessment Study LOFT-Simulator. Unpublished, Issue 0.3, 25 February 2013.

[139] Tanja Nemetzade and LOFT study team. Momentum off-loading of reaction wheels with magnetic torquers. Unpublished, 06 June 2013.

[140] Tanja Nemetzade and LOFT study team. Power Model. Unpublished, 06 June 2013.

[141] Tanja Nemetzade. Model Specification for JUICE-Simulator. Unpublished, Issue 0.6, 30 June 2014.

[142] VECTRONIC Aerospace GmbH. JUICE Mission and System Simulator User Manual. EN-VAS_JU-UM, Version 1, Issue 2, October 2014.

[143] VECTRONIC Aerospace GmbH. JUICE Mission and System Simulator Version 3 User Manual. EN- VAS_JU-UM_31, Version 3, Issue 1, September 2016.

[144] Tanja Nemetzade and Roger Förstner. Quantification of the Influence between Satellite Design Parameters for the Support of Satellite Design Decisions. In AIAA Space, San Diego, USA, 10-12 September 2013.

[145] Lennart Rade and Bertil Westergren. Springers Mathematische Formeln: Taschenbuch für Ingenieure, Naturwissenschaftler, Informatiker, Wirtschaftswissenschaftler. Springer-Verlag Berlin, 3rd edition, 2000.

[146] http://products.office.com/de-de/powerpoint, last access 20.05.2019.

[147] Tobias Kleinig. Modellierung von Satellitenparameterzusammenhängen mittels Simulink. Bachelorarbeit, ISTA-14-BA-29, Universität der Bundeswehr München, Germany, February 2015.

[148] Felix Zaumseil. Sensitivity analysis of satellite system design parameters. Masterarbeit, ISTA-13-MA-09, Universität der Bundeswehr München, Germany, August 2013.

[149] Stefan Lipowski. Determination and modeling of parameter dependencies of the attitude and orbit control system of an earth-bound satellite. Masterarbeit, ISTA-14-MA-09, Universität der Bundeswehr München, Germany, September 2014.

[150] Heiner Bubb and Oliver Sträter. Grundlagen der Gestaltung von Mensch-Maschine-Systemen. In Bernhard Zimolong and Udo Konradt, editors, Ingenieurpsychologie, volume Bd. 2 of Wirtschafts-, Organisations- und Arbeitspsychologie, pages 143–180. Hogrefe, Göttingen [u.a.], 2006.

[151] A. Wayne Wymore. Model-based systems engineering: an introduction to the mathematical theory of discrete systems and to the tricotyledon theory of system design. CRC Press, Boca Raton, 1993.

[152] Systems Engineering Vision 2020 Project Team of the International Council on Systems Engineering (INCOSE). Systems engineering vision 2020. Technical report, International Council on Systems Engineering (INCOSE), 2007.

[153] Azad M. Madni and Michael Sievers. Model-based systems engineering: Motivation, current status, and needed advances. In Azad M. Madni, Barry Boehm, Roger G. Ghanem, Daniel Erwin, and Marilee J. Wheaton, editors, Disciplinary Convergence in Systems Engineering Research, pages 311–325. Springer, Cham, 2018.

[154] Troy A. Peterson. Transformation of Systems Engineering into a Model Based Discipline, 2017 Annual INCOSE International Workshop. http://www.omgwiki.org/MBSE/lib/exe/fetch.php?media=mbse:incose_mbse_iw_2017:iw2017_transformation_closing_v1.pdf, last access 20.05.2019.

[155] http://www.omgwiki.org/MBSE/doku.php, last access 20.05.2019.

[156] http://www.incose.org/ChaptersGroups/WorkingGroups/Application/space-systems, last access 20.05.2019.

[157] David Kaslow and Azad M. Madni. Validation and Verification of MBSE Compliant CubeSat Reference Model. In Azad M. Madni, Barry Boehm, Roger G. Ghanem, Daniel Erwin, and Marilee J. Wheaton, editors, Disciplinary Convergence in Systems Engineering Research, pages 381–393. Springer, Cham, 2018.

[158] David Kaslow, Bradley Ares, Philip T. Cahill, Laura Hart, and Rose Yntema. A Model-Based Systems Engineering (MBSE) approach for defining the behaviors of CubeSats. In 2017 IEEE Aerospace Conference, pages 1–14, 2017.

[159] David Kaslow, Grant Soremekun, Hongman Kim, and Sara Spangelo. Integrated Model-Based Systems Engineering (MBSE) Applied to the Simulation of a CubeSat Mission. In 2014 IEEE Aerospace Conference, pages 1–14, 2014.

[160] Systems Engineering Vision 2025 Project Team of the International Council on Systems Engineering (INCOSE). Systems Engineering Vision 2025. Technical report, International Council on Systems Engineering (INCOSE), 2014.

[161] Jeff A. Estefan. Survey of Model-Based Systems Engineering (MBSE) Methodologies. Technical report, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California, USA, 2008.

[162] http://www.omgwiki.org/MBSE/doku.php?id=mbse:usability, last access 20.05.2019.

[163] Joachim Fuchs and Don de Wilde. ESA Virtual Spacecraft Design. Presentation, http://www.vsd-project.org/download/presentations/VSD%20FP%20Intro%20ESA.pdf, 15 May 2012.

[164] Harald Eisenmann, Valter Basso, Joachim Fuchs, and Don de Wilde. ESA Virtual Spacecraft Design. In 4th International Conference in Systems & Concurrent Engineering for Space Applications, SECESA 2010, Lausanne, Switzerland, 13-15 October 2010.

[165] Harald Eisenmann, Joachim Fuchs, Don de Wilde, and Valter Basso. ESA Virtual Spacecraft Design - Demonstration of Feasibility of MBSE approach for European Space Programs. In 5th International Workshop in Systems & Concurrent Engineering for Space Applications, SECESA 2012, Lisboa, Portugal, 17-19 October 2012.

[166] Harald Eisenmann. VSD Final Presentation, Introduction and Overview. Presentation, http://www.vsd-project.org/download/presentations/VSD_P2_FP_2012-05-15_v3.pdf, 15 May 2012.

[167] Jorge Pacios. VSD Final Presentation, Space System Reference Database. Presentation, http://www.vsd-project.org/download/presentations/VSD%20-%20GMV.pdf, 15 May 2012.

[168] Armin Mueller. VSD Final Presentation, SSDE Key Functions. Presentation, http://www.vsd-project.org/download/presentations/VSD_MBSE_FinalPresentation_Videos.pdf, 15 May 2012.

[169] Andre Brito. VSD Final Presentation, Space System Visualization Tool. Presentation, http://www.vsd-project.org/download/presentations/SSVT-presentation-2012-05-FP_novideos.pdf, 15 May 2012.

[170] Joel Rey. Modelling with VSEE: Definition of Guidelines and Exploitation of the Models. YGT final report, http://www.vsd-project.org/download/documents/YGT%20final%20report%20Rey%20V2.pdf, 08 August 2013.

[171] Arne Matthyssen, Merlin Bieze, and Alex Vorobiev. OCDT Deployment, Enhancement and Exploitation. In 5th International Workshop on Systems & Concurrent Engineering for Space Applications, SECESA 2012, Lisboa, Portugal, 17-19 October 2012.

[172] Hans Peter de Koning, Sam Gerene, Ivo Ferreira, Andrew Pickering, Friederike Beyer, and Johan Vennekens. Open Concurrent Design Tool - ESA Community Open Source Ready to Go! In 5th International Workshop on Systems & Concurrent Engineering for Space Applications, SECESA 2012, Lisboa, Portugal, 17-19 October 2012.

[173] http://ocdt.esa.int/login?back_url=https%3A%2F%2Focdt.esa.int%2F, last access 20.05.2019.

[174] http://m.esa.int/Our_Activities/Space_Engineering_Technology/CDF/Open_Concurrent_Design_In_the_CDF, last access 20.05.2019.

[175] ECSS-E-TM-10-25A. Space engineering - Engineering design models data exchange (CDF), European Cooperation for Space Standardization, ECSS Secretariat, ESA-ESTEC, Requirements & Standards Division, Noordwijk, The Netherlands, 10 October 2010.

[176] Sam Gerene. Next Generation Concurrent Design and Engineering. http://old.esaconferencebureau.com/docs/default-source/16c11-secesa-docs/8-2016-10-06-03-next-generation-concurrent-design-and-engineering.pdf?sfvrsn=2, 2016, last access 18.01.2019. 7th International Systems & Concurrent Engineering for Space Applications Conference, SECESA 2016, Madrid, 5-7 October 2016.


[177] Arne Matthyssen. MARVL - Model-based Requirements Verification Lifecycle. http://old.esaconferencebureau.com/docs/default-source/16c11-secesa-docs/01_marvl.pdf?sfvrsn=2, 2016, last access 18.01.2019. 7th International Systems & Concurrent Engineering for Space Applications Conference, SECESA 2016, Madrid, 5-7 October 2016.

[178] http://www.rheagroup.com/news/how-model-based-system-engineering-approach-design-changing-engineering, last access 20.05.2019.

[179] Dominik Quantius, editor. Feasibility Study Post-ISS Scenario-I, Concurrent Engineering Study Report. Technical Report DLR-RY-CE-R018-2015-3, German Aerospace Center (DLR), Institute of Space Systems, System Analysis Space Segment, September 2017.

[180] German Aerospace Center (DLR) e.V. Virtual Satellite User Guide. Version 3.9.0, 2016.

[181] http://www.dlr.de/sc/en/desktopdefault.aspx/tabid-5135/8645_read-8374/, last access 20.05.2019.

[182] Johannes Gross, Christian Messe, and Stephan Rudolph. A Model Based Thermal Systems Engineering Approach. In 5th International Workshop on Systems & Concurrent Engineering for Space Applications, SECESA 2012, Lisboa, Portugal, 17-19 October 2012.

[183] ECSS-E-TM-40-07 Volume 1A to 5A. Space engineering - Simulation modelling platform, European Cooperation for Space Standardization, ECSS Secretariat, ESA-ESTEC, Requirements & Standards Division, Noordwijk, The Netherlands, 25 January 2011.

[184] Larry B. Rainey and Andreas Tolk, editors. Modeling and Simulation Support for System of Systems Engineering Applications. John Wiley & Sons, Inc., Hoboken, New Jersey, 1st edition, 2015.

[185] Mark W. Maier. Architecting Principles for Systems-of-Systems. Systems Engineering, 1(4):267–284, 1998.

[186] Daniel A. DeLaurentis. Understanding Transportation as a System-of-Systems Design Problem. In 43rd AIAA Aerospace Sciences Meeting, Reno, Nevada, 10-13 January 2005. AIAA-2005-0123.

[187] Mo Jamshidi, editor. System of Systems Engineering: Innovations for the 21st Century. John Wiley & Sons, Inc., Hoboken, New Jersey, 2009.

[188] Andrew P. Sage and Christopher D. Cuppan. On the Systems Engineering and Management of Systems of Systems and Federations of Systems. Information-Knowledge-Systems Management, 2:325-345, 2001.

[189] http://www.copernicus.eu/, last access 20.05.2019.

[190] Daniele Gianni, Niklas Lindman, Joachim Fuchs, and Robert Suzic. Introducing the European Space Agency Architectural Framework for Space-Based Systems of Systems Engineering. In Omar Hammami, Daniel Krob, and Jean-Luc Voirin, editors, Complex Systems Design and Management, pages 335-346. Springer, Berlin, Heidelberg, 2012.

[191] http://sci.esa.int/cosmic-vision/, last access 20.05.2019.


[192] Astrium Satellites GmbH. LOFT Assessment Study Final Presentation. Presentation, LO_AST_HO-08_V1.0, July 2013.

[193] LOFT Study Team, European Space Agency. LOFT Mission Requirements Document. SRE-F/2012-076/RQ/MA, Issue 3, Rev. 6, 11 February 2013.

[194] LOFT Science Team, European Space Agency. LOFT Science Requirements Document. SRE-SA/LOFT/2011-001, Issue 1, Rev. 7, 02 February 2012.

[195] European Space Agency. LOFT Assessment Study Report. ESA/SRE(2013)3, December 2013.

[196] http://sci.esa.int/loft/49327-summary/, last access 20.05.2019.

[197] Astrium Satellites GmbH. LOFT Assessment Study Volume 1 Technical Proposal. Astrium GmbH Proposal No.: A.2011-4392-0-1, November 2011.

[198] http://sci.esa.int/juice/50073-science-payload/, last access 20.05.2019.

[199] http://sci.esa.int/juice/50074-scenario-operations/, last access 20.05.2019.

[200] European Space Agency. JUICE Assessment Study Report. ESA/SRE(2011)18, December 2011.

[201] Arnaud Boutonnet. JUICE Consolidated Report on Mission Analysis (CReMA). ESA, WP-578, Issue 1, Revision 2, draft, October 2013.

[202] Herve Camares. AOCS analysis report. Astrium Satellites SAS, JUI-ASFT-SYS-RP-005, Issue 1, Rev 0, 15.09.2013.

[203] European Space Agency. JUICE - Jupiter Icy Moons Explorer, Space Segment Requirements Document (SSRD). JUI-EST-SYS-RS-004, Issue 2, Revision 1, 03.06.2015.

[204] Ignacio Torralba Elipe. JUICE GCO500 Scenario. European Space Agency, JUI-EST-SYS-SP-005, Issue 4.0, 30.11.2014.

[205] Ignacio Torralba Elipe. JUICE GCO500 Power Assessment Guidelines. European Space Agency, JUI- EST-SYS-SP-002, Issue 4.0, 30.11.2014.

[206] James R. Wertz and Wiley J. Larson, editors. Space Mission Analysis and Design. Space Technology Series. Microcosm Press, 3rd edition, 1999.

[207] James R. Wertz, editor. Mission Geometry - Orbit and Constellation Design and Management. Microcosm Press, 1st edition, 2001.

[208] Buddy Nelson, Mel Higashi, Pat Sharp, Peter Leung, Dennis Connolly, Preston Burch, Mindy Deyarmin, Mark Jarosz, Susan Hendrix, Mike McClaire, Rob Navias, Stratis Kakadelis, Ray Villard, Dave Leckrone, Mike Weiss, Lori Tyahla, Roz Brown, and Russ Underwood. Hubble Space Telescope, Servicing Mission 4, Media Reference Guide. Technical Report Rev. 1, NASA, 2009.

[209] http://hubblesite.org/the_telescope/hubble_essentials/quick_facts.php, last access 20.05.2019.

[210] http://hubblesite.org/the_telescope/team_hubble/, last access 20.05.2019.


[211] Gopalakrishna M. Rao. Hubble Space Telescope On-Orbit NiH2 Battery Performance. In 37th Intersociety Energy Conversion Engineering Conference (IECEC '02), 2002.

[212] Wilfried Ley, Klaus Wittmann, and Willi Hallmann, editors. Handbuch der Raumfahrttechnik. Hanser, München, 4th edition, 2011.
