A Standards-based Framework for Test-driven Agile Simulation

Standardbasiertes Framework für die Testgetriebene Agile Simulation

Der Technischen Fakultät

der Friedrich-Alexander-Universität Erlangen-Nürnberg zur Erlangung des Doktorgrades

DOKTOR-INGENIEUR

vorgelegt von Vitali Schneider

aus Schemonaicha

Als Dissertation genehmigt von der Technischen Fakultät der Friedrich-Alexander-Universität Erlangen-Nürnberg

Tag der mündlichen Prüfung: 09.11.2018

Vorsitzender des Promotionsorgans: Prof. Dr.-Ing. Reinhard Lerch

Gutachter: Prof. Dr.-Ing. habil. Reinhard German
           Prof. Dr.-Ing. habil. Dietmar Fey

Abstract

With a view to increasing the efficiency of development processes in the field of software and systems engineering, model-driven techniques are coming into ever more widespread use. On the one hand, abstract graphical models help to master the complexity of the system under development. On the other hand, formal models serve as the source for analysis and automated synthesis of a system. Thereby, model-based transformation engines and generators allow the specifications to be defined platform- and target-language-independently and to be automatically mapped to the desired target platform.

Test-driven development is a promising approach in the field of agile software development. In this method, the development process is based on relatively short iteration cycles with preceding test specifications. Because the actual implementation is consistently carried out in compliance with the previously written tests, this method leads to a higher test coverage at early development stages and thus contributes to the overall quality assurance of the resulting system.

This thesis introduces the concept of Test-driven Agile Simulation (TAS) as a consistent evolution of systems engineering methods through the combination of test- and model-driven development techniques with model-driven simulation. With the help of simulations, performance evaluations and validation of the modeled system can be carried out in the early stages of the development process, even when no program code or fully implemented system is yet available. The primary goal of this approach is to combine the advantages of the above techniques to enable a holistic model-based approach to systems engineering with improved quality assurance. In particular, special attention is thereby paid to the modeling and validation of the overall system, taking into account the effects of communication between its components.

The whole approach is founded upon the widespread and established standards for Model-Driven Architecture (MDA) provided by the Object Management Group (OMG).

Using the OMG's standard modeling language UML in combination with the specialized extension profiles, it is possible to specify requirements, system models, and test models in a uniform, formal, and standard-compliant manner. The creation and presentation of the essential elements of such specifications are largely done with the help of graphical diagrams, such as class, composite structure, state, and activity diagrams. In order to facilitate behavioral modeling using detailed activity diagrams, TAS provides support for the textual activity language Alf, which is also a standard provided by the OMG. UML models can be used at different levels of abstraction for specification as well as for analysis. In the TAS approach, these models are automatically transformed into executable simulation code which can then be executed to ensure primarily the required behavior of the system and the correctness of the tests. In this way, by running tests early on the simulated model, the mutual validation of the system and test specifications is performed. The simulation of the modeled system also provides insights into the expected dynamic behavior of the system in terms of functional as well as non-functional properties.

To support the TAS approach, a versatile integrated tool environment is provided by our framework SimTAny. The framework offers seamless support for modeling, transforming, simulating, and testing UML-based specification models. In addition to the modeling methodology of TAS, the realization of the framework itself is largely based on the standards of the OMG. Thereby, model-based approaches and standardized transformation languages find wide application in different components of SimTAny. Other helpful features of SimTAny include the traceability of requirements across modeled elements and automatically generated code artifacts, as well as the management and design of simulation experiments. The service-oriented architecture of the framework also makes it possible to meet the challenges of distributed development processes. This also simplifies extending the functionality of the framework itself and integrating it into existing development environments and processes.

Kurzfassung

Zur Effizienzsteigerung von Entwicklungsprozessen in der Software- und Systementwicklung werden verstärkt modellgetriebene Techniken eingesetzt. Zum einen helfen die abstrakten grafischen Modelle, die Komplexität des zu entwickelnden Systems zu beherrschen. Zum anderen dienen die formalen Spezifikationen als Basis für die Systemanalyse und automatische Systemimplementierung. Dabei erlaubt der Einsatz von modellbasierten Transformatoren und Generatoren, die Spezifikationen plattform- und zielsprachunabhängig zu definieren und automatisiert auf die gewünschte Zielplattform abzubilden.

Eine vielversprechende Methode in der agilen Softwareentwicklung stellt zudem der testgetriebene Ansatz dar. Bei dieser Methode findet die Entwicklung in relativ kurzen Iterationszyklen mit stets vorgelagerten Testspezifikationen statt. Dadurch, dass die eigentliche Implementierung konsequent unter Beachtung der zuvor geschriebenen Tests erfolgt, führt dieses Verfahren zu höherer Testabdeckung bereits in den Anfangsphasen der Entwicklung und trägt somit zur Qualitätssicherung des resultierenden Systems bei.

Im Rahmen der vorliegenden Arbeit wird das Konzept der Testgetriebenen Agilen Simulation (TAS) als konsequente Weiterentwicklung der Systems-Engineering-Methoden durch die Kombination der test- und modellgetriebenen Entwicklungstechniken mit ebenfalls modellgetriebener Simulation vorgestellt. Mit Hilfe von Simulationen lassen sich frühzeitig Leistungsbewertung und Validierung des modellierten Systems durchführen, selbst in den früheren Phasen des Entwicklungsprozesses, in denen noch kein Programmcode oder ein fertig implementiertes System vorhanden ist. Das primäre Ziel dieses Ansatzes ist es, die Vorteile der genannten Techniken zu kombinieren, um letztendlich eine durchgehend modellgestützte Vorgehensweise zum Systems Engineering mit verbesserter Qualitätssicherung zu ermöglichen. Ein besonderes Augenmerk wird dabei auf die Modellierung und Validierung des Gesamtsystems gelegt, wobei insbesondere Effekte der Kommunikation zwischen den Komponenten berücksichtigt werden.

Der gesamte Ansatz stützt sich grundsätzlich auf die weitverbreiteten und etablierten Standards der Object Management Group (OMG) für Modellgetriebene Software-Architektur (MDA). Mit der von der OMG standardisierten Modellierungssprache UML in Kombination mit den spezialisierten Erweiterungsprofilen können so Anforderungen, Systemmodelle sowie Testmodelle einheitlich, formal und standardkonform spezifiziert werden. Die Erstellung und Darstellung der wesentlichen Elemente solcher Spezifikationen erfolgt dabei zum größten Teil mit Hilfe von grafischen Diagrammen, wie zum Beispiel Klassen-, Kompositionsstruktur-, Zustands- und Aktivitätsdiagrammen. Zur Erleichterung der Verhaltensmodellierung mit detaillierten Aktivitätsdiagrammen unterstützt TAS außerdem die ebenfalls von der OMG standardisierte textuelle Aktivitätssprache Alf.

UML-Modelle lassen sich auf unterschiedlichen Abstraktionsebenen sowohl zur Spezifikation als auch zur Analyse einsetzen. In dem TAS-Ansatz werden diese Modelle automatisiert in einen ausführbaren Simulationscode transformiert und zur Ausführung gebracht, um in erster Linie das geforderte Verhalten des Systems sowie die Korrektheit der Tests sicherzustellen. Auf diese Weise, durch die frühzeitige Ausführung von Tests auf dem simulierten Modell, wird die gegenseitige Validierung der System- und Testspezifikation durchgeführt. Durch die Simulation des modellierten Systems lassen sich außerdem Erkenntnisse über das zu erwartende dynamische Verhalten des Systems hinsichtlich funktionaler wie auch nicht-funktionaler Eigenschaften gewinnen.

Eine vielseitige integrierte Werkzeugumgebung für den TAS-Ansatz stellt unser Framework SimTAny zur Verfügung. Das Framework bietet eine nahtlose Unterstützung für die Modellierung, Transformation, Simulation und das Testen von UML-basierten Spezifikationsmodellen. Neben der Modellierungsmethodik von TAS stützt sich auch die Realisierung des Frameworks selbst weitgehend auf die Standards der OMG. Eine breite Anwendung finden dabei modellbasierte Ansätze und standardisierte Transformationssprachen. Zu den weiteren hilfreichen Features von SimTAny zählen zudem die Nachverfolgbarkeit von Anforderungen durch Modelle bis hin zu den automatisch generierten Codeartefakten sowie die Verwaltung und das Design von Simulationsexperimenten. Durch die serviceorientierte Architektur des Frameworks lässt sich außerdem den Herausforderungen in verteilten Entwicklungsprozessen begegnen. Die Erweiterbarkeit des Frameworks sowie dessen Integration in bestehende Entwicklungsumgebungen und -prozesse werden dadurch erheblich vereinfacht.

Contents

Abstract
Kurzfassung
Contents

1 Introduction
   1.1 Motivation
   1.2 Contributions
   1.3 Outline

2 Fundamentals and Related Work
   2.1 Modeling Standards
       2.1.1 MDA
       2.1.2 MOF
       2.1.3 UML
       2.1.4 SysML
       2.1.5 MARTE
       2.1.6 Alf
       2.1.7 UTP
   2.2 Transformation Languages
       2.2.1 QVT
       2.2.2 MOFM2T
   2.3 Related Work
       2.3.1 Relevant Research Projects and Tools
       2.3.2 Comparison with the Suggested Approach

3 Test-driven Agile Simulation
   3.1 Motivation
   3.2 TAS Development Process
       3.2.1 Requirements Engineering
       3.2.2 Modeling
       3.2.3 Static Validation
       3.2.4 Transformation
       3.2.5 Simulation
       3.2.6 Dynamic Validation
       3.2.7 Implementation and Test
   3.3 Required Features
       3.3.1 Unified Modeling based on Standards
       3.3.2 Validation and Analysis
       3.3.3 Model Transformations
       3.3.4 Traceability
       3.3.5 Service Oriented Architecture
   3.4 Summary

4 Modeling Methodology
   4.1 Challenges of Combining UML Profiles
       4.1.1 MARTE and SysML
       4.1.2 Alf and MARTE
       4.1.3 SysML and UTP
   4.2 Modeling with OMG Standards
       4.2.1 Running Example
       4.2.2 Model Packaging
       4.2.3 Requirements Modeling
       4.2.4 Data Modeling
       4.2.5 Test Modeling
       4.2.6 Structure Modeling
       4.2.7 Legacy Libraries
       4.2.8 Behavior Modeling
       4.2.9 Extra-functional Properties
       4.2.10 Analysis
       4.2.11 Traceability

5 SimTAny Framework
   5.1 Main Features and Tools
       5.1.1 SimTAny User Interface
       5.1.2 Modeling
       5.1.3 Transformation
       5.1.4 Simulation
       5.1.5 Design and Management of Experiments
       5.1.6 Validation
       5.1.7 Analysis
       5.1.8 Traceability
   5.2 Generation of Simulation Code
       5.2.1 Transformation Process
       5.2.2 M2M Transformation of Test Models
           5.2.2.1 Abstract Syntax
           5.2.2.2 Transformation Rules
           5.2.2.3 Examples
       5.2.3 M2M Transformation of MARTE Timing Annotations
           5.2.3.1 Abstract Syntax
           5.2.3.2 Transformation Rules
           5.2.3.3 Examples
       5.2.4 M2T Transformation to Simulation Code
           5.2.4.1 MOFM2T Templates
           5.2.4.2 Transformation of State Machines
           5.2.4.3 Additional Concepts
   5.3 Action Language Support
       5.3.1 Embedded Pop-up Editors
           5.3.1.1 Transition Editor
           5.3.1.2 State Editor
       5.3.2 Transformation of Alf Specifications
       5.3.3 Capabilities and Limitations
   5.4 Software Architecture
       5.4.1 Overview
       5.4.2 Feature Components
           5.4.2.1 SimTAny Core
           5.4.2.2 DOE
           5.4.2.3 Traceability
           5.4.2.4 Papyrus Extensions
           5.4.2.5 OMNeT++ Support
       5.4.3 Distributed Service-oriented Architecture
           5.4.3.1 ModelBus
           5.4.3.2 Interaction Pattern
           5.4.3.3 Distributed Development Process
   5.5 Capabilities and Limitations
       5.5.1 Object of Study
       5.5.2 Modeling
       5.5.3 Transformation to Simulation Code
       5.5.4 Simulation Experiments and Results

6 Conclusions
   6.1 Achievements
   6.2 Suggestions for Future Work

A Model Diagrams
B MARTE Profile Diagrams
C SimTAny

List of Acronyms
List of Figures
List of Tables
Bibliography

CHAPTER 1

Introduction

1.1 Motivation

It is indisputable that a wide range of computerized information systems has long since become a significant part of our professional activities as well as of our private lives. There is hardly any domain that does not already benefit from the support provided by such systems: from entertainment, telecommunication, or automotive applications to appliances in health care or defense. The continuously increasing complexity of computer systems on the one hand, and the general requirements for quality assurance combined with minimizing costs and reducing time to market for new products on the other hand, call for effective development methodologies.

Model-Driven Engineering (MDE) [44] and Test-Driven Development (TDD) [9] are two promising approaches to address these challenges. Thereby, MDE focuses on the utilization of models as formal specifications and fundamental artifacts for automated code generation, which primarily help to deal with complexity and to increase productivity during the engineering of complex systems. In contrast, the central aspect of TDD is to ensure the quality of the resulting products by early testing activities. The experience in systems engineering gained over the past years shows that errors and poor decisions in design are very difficult to resolve, and the costs of fixing them rise dramatically the later an error is detected [12, 71]. Therefore, it is especially important to perform verification and validation (V&V) [11] in the first design phases in order to identify drawbacks as early as possible. Besides functional defects in design, it is just as important to evaluate the design with regard to non-functional properties and, above all, to the performance of the system under development.


In principle, there are two techniques available to perform such evaluations before the system is implemented: analytic methods and simulation. In combination with formal tests, these methods are furthermore particularly suitable for comparing different design alternatives in order to eliminate poor or faulty designs at an early stage and to focus on the most promising solutions from the beginning. The need to face potential performance problems at early design stages using model-based approaches has been recognized as one of the most important factors for the cost-effective and highly productive development of complex software systems [15, 49]. Nowadays, MDE as well as TDD techniques have already gained a certain level of recognition. As recent studies show [38, 53], they are practiced across a wide range of industries, even though to varying degrees. Thereby, it becomes obvious that the standards provided by the Object Management Group (OMG) [1] find broad application, especially its Unified Modeling Language (UML) [61]. In addition to UML, several specialized extension profiles for UML as well as model transformation languages have been standardized by the OMG to cover specific modeling and synthesis issues during the development process.

1.2 Contributions

This work is largely based on the research of Isabel Dietrich [19], which introduced methods for generating discrete event simulations from standard-compliant UML models and for combining them with testing to enable validation on the model level. Those methods actually provide an important foundation for our approach called Test-driven Agile Simulation (TAS), which extends them and turns them into a more holistic approach for model-driven software and systems engineering. Besides a conceptual description of our approach, we also provide a framework named SimTAny in order to facilitate the practical application of the presented approach. This framework may be considered a consistent evolution of the Syntony framework, which was also contributed by Isabel Dietrich. We redesigned most parts of Syntony with regard to the current state of the art and extended it with additional functionality. Thereby, crucial importance was attached to conformity with open standards in order to provide a future-proof solution.


Figure 1.1 summarizes our contributions and illustrates the functionality of SimTAny while highlighting our amendments to the original framework implementation. In particular, our contributions are as follows.

• TAS approach. We contribute an approach called TAS that may be considered an extension to common software and systems engineering techniques. The approach combines established model-driven engineering, simulation, and testing techniques within an iterative development process in order to increase the overall quality of the developed system by improving the quality of the involved specification models. The central idea of TAS is to enable mutual validation of the model-based system and test specifications by means of simulation at the very early stages of the development process. For this purpose, the TAS approach implies an automated transformation of specification models into executable discrete event simulations. Among other things, it also considers the aspects of test-driven development, requirements traceability, and integration into distributed development processes.

• Standard-based modeling methodology. An important foundation of TAS is a modeling methodology which enables the seamless specification of requirements and system architecture, as well as of test and analysis models. Thus, in this thesis, we suggest a modeling methodology that is solely based on accepted OMG standards. Following the approach suggested for Syntony, we utilize UML as a common underlying modeling language and make use of several specialized extension profiles provided by the OMG. These include in particular: Modeling and Analysis of Real-time and Embedded systems (MARTE) [55] for non-functional properties and analysis purposes, as well as the UML Testing Profile (UTP) [57] for test specifications. Additionally, we added support for the System Modeling Language (SysML) [60] for modeling requirements and basic system structures. Furthermore, we apply the Action Language for Foundational UML (Alf) [56] to allow for a unified, standard-conform, platform-independent, and at the same time compact and efficient definition of actions and expressions in our models.

• SimTAny framework. With our SimTAny framework, we offer a versatile and widely standards-based tool environment for the modeling, simulation, and validation of standard-conform model specifications.


For this purpose, we combine relevant approved open-source tools with our newly developed components in an integrated environment based on a service-oriented architecture (SOA) and the popular Eclipse platform¹. We utilize an existing modeling tool, a simulation engine, and a tool for statistical computing introduced in Syntony. Our own enhancements include, among others:

– a modeling tool extension to provide additional editing support for Alf;
– a component for the static validation of models;
– a transformation component for the standard-conform generation of simulation code;
– a component for collecting and visualizing traceability information;
– a model-based component for design, management, and control of experiments; and
– a service-oriented design of software components.

Thus, compared to the previous realization of Syntony, we completely reimplemented the transformation process for simulation code generation, adjusting it to the standards for model-to-model and model-to-text transformations, QVT [62] and MOFM2T [54], respectively. At the same time, we extended and adapted the transformation rules to complete the support of the applied standards. Additionally, we provide redesigned components for integration and process flow control. For most of our components, we apply model-based and SOA techniques in order to make our framework more flexible for further amendments. Although initially built upon Syntony, we decided to choose a different name for our framework to avoid naming conflicts, since the name “Syntony” has already been used in several other domains.

• Case Study. Throughout our work we illustrate the applicability of SimTAny and the underlying TAS approach on an example monocular vision system for autonomous approach and landing. This system is built upon a low-budget micro aerial vehicle (MAV) with limited computing power and sensor capabilities. First, we show how several aspects of this system can be modeled using the suggested UML-based modeling methodology. Then we demonstrate how the system can be analyzed and validated by means of SimTAny. Thereby, we concentrate on the holistic design of the system with a special focus on the effects of communication between its components.

¹ http://www.eclipse.org/


Figure 1.1: Overview of SimTAny functionality (adapted from [19, Figure 1.3])

1.3 Outline

The remainder of this thesis is structured as follows: Chapter 2 covers the fundamental standards and techniques applied in this thesis and outlines the related work. In Chapter 3, we describe the concept of our TAS approach. We introduce its main features and then discuss the benefits and challenges of this approach. Chapter 4 demonstrates the suggested standard-based modeling methodology. We first outline the problems caused by the intersections between the different modeling standards used and present strategies for avoiding these interferences. Using an example system for the autonomous approach and landing of a quadrotor MAV, we further illustrate the modeling paradigms we use for the specification of requirements, system architecture, system behavior, and test models. Chapter 5 contains details about our SimTAny framework. It offers an overview of the main features of SimTAny and some implementation specifics, including its software architecture and extensibility. We also describe the capabilities and limitations of the current realization. In the concluding Chapter 6, we summarize the main results of this thesis and provide some suggestions for future research.


CHAPTER 2

Fundamentals and Related Work

The first two sections of this chapter cover the fundamental standards and techniques applied in our work. They explain the background of this thesis and facilitate the understanding of later chapters. The third section outlines related work on frameworks for model-driven engineering. In particular, we focus on recent comparable solutions related to the objectives of this thesis.

2.1 Modeling Standards

The variety of standards provided by the standardization consortium OMG allows for more comprehensive modeling and model-based approaches. Its modeling standards enable visual and formal specifications in different fields of engineering, including object-oriented software engineering (UML), systems engineering (SysML), business process modeling (BPMN), and service-oriented architecture modeling (SoaML), to mention just a few examples. Besides that, OMG offers standards for model interchange (XMI) as well as for model transformations (QVT, MOFM2T) which support model-driven development. All these standards are freely available and are created by a set of member companies of the OMG that include both industry and research organizations. This fact contributes to their wide acceptance in practice. Although OMG only provides the specifications of such standards without offering implementations directly, the specification process of the OMG implies the realization of a product for each finalized standard by the involved members. Thus, the availability of working and broadly used products that support these standards was an additional argument in favor of using the OMG standards for TAS.


In the following, we give a brief overview of the relevant modeling standards and methods applied in the context of TAS. This section aims at familiarizing the reader with the basic vocabulary, main concepts, and relations of the standards used. It is not intended to discuss each individual standard here; full particulars are given in the rather extensive standard specifications and the relevant secondary literature that is available.

2.1.1 MDA

The Model-Driven Architecture (MDA) [58][87] is a software development paradigm provided by the OMG which offers basic concepts and techniques for the model-focused design and implementation of software systems. Thereby, the following definitions are used in the context of MDA:

System The concepts of MDA are focused on information processing systems, either existing or planned, which “may include anything: a system of hardware, software, [...] some combination of parts of different systems, a federation of systems - each under separate control, a program in a computer, a system of programs, a single computer, a system of computers, a computer or system of computers embedded in some machine, etc.” [58, p. 5]

Model “A model in the context of MDA is information selectively representing some aspect of a system based on a specific set of concerns. The model is related to the system by an explicit or implicit mapping...” [58, p. 5]. A model can be specified using several notations and formats. It may consist of expressions in a modeling language or text in a natural language.

Platform “A platform is the set of resources on which a system is realized. This set of resources is used to implement or support the system...” [58, p. 9]. A platform may include operating systems, programming languages, databases, user interfaces, middleware solutions, etc. [87, p. 4]

Architecture “The architecture of a system is a specification of the parts and connectors of the system and the rules for the interactions of the parts using the connectors [81]. Within the context of MDA these parts, connectors and rules are expressed via a set of interrelated models.” [87, pp. 3-4]


In order to overcome the complexity of modern software systems and targeting portability, interoperability, and reusability of the specification models, MDA suggests creating models at different abstraction levels. In particular, it recommends:

1. to start with a Computation Independent Model (CIM) that focuses on the domain or context of the system describing what the system is expected to do without specifying any constructional details,

2. to derive a Platform Independent Model (PIM) from CIM by adding architectural and operational aspects that can be abstracted from the concrete platform(s),

3. to derive a Platform Specific Model (PSM) from PIM where technical details are added that relate to the usage of a specific platform and allow for implementing the system.

Each of these models can actually consist of several layers or views which focus on specific aspects and can be targeted at different stakeholders of the development process, like domain experts, software architects, developers, testers, etc. The MDA approach envisages that the transformations between models are performed automatically by appropriate software tools. In principle, it is intended and actually possible to derive formal and precise PSMs at least from PIMs and even to generate a working implementation (source code) fully automatically. However, specialized translators are required that then consider the individual characteristics of each target platform. In general, modeling and model transformation are the key steps of the development process in terms of MDA. Each step of the MDA approach is thereby supported by the corresponding OMG standards, which allows for seamless connectivity and interoperability during the overall development process. Typical is the application of OMG standards like UML or MOF for model or metamodel specifications, respectively. Consequently, MOF QVT is the language of first choice for specifying mapping rules for model-to-model transformations. MOFM2T, in turn, is applied for the generation of textual artifacts from models. XMI facilitates the interchange of models via XML documents. Before we introduce the related standards for modeling and transformation languages in the next sections, we want to clarify the relationship between MDA and MDE, the development paradigm mentioned above in Section 1.1.


On this, we agree with [83, pp. 138-139] that MDE (model-driven engineering) has a wider scope than MDA (model-driven architecture). As the “engineering” in MDE suggests, it addresses all aspects of the software engineering process, while MDA is primarily focused on its design and implementation phases. In particular, MDE may concern additional tasks like model-based requirements engineering or the model-based evolution of the system, which are beyond the original scope of MDA. Therefore, MDE can be seen as a more general engineering approach and as a superset of MDA, whereas MDA provides OMG's specific vision of most aspects of MDE, supported by OMG's own standards.

2.1.2 MOF

The Meta Object Facility (MOF) [63] of OMG is an open and platform-independent metadata framework for the development and interoperability of metamodel-based solutions. In a similar way as metadata provides information about other data, a metamodel is primarily a model of a model. Moreover, a metamodel specifies modeling elements and describes the syntax rules for forming a valid model. MOF relates to the four-layered modeling approach of OMG illustrated in Figure 2.1. In the first place, MOF offers a metamodel which provides a modeling language and rules for the specification of other metamodels. This also includes the metamodel of MOF itself, because this metamodel is used to model itself. From that point of view, the MOF metamodel can be seen as the top-level meta-metamodel, also called the M3-model. The metamodels that instantiate the MOF metamodel are called M2-models. Typical examples of such metamodels are the well-known UML language and its extension profiles. Additionally, any kind of custom Domain-specific Modeling Language (DSML) or metadata can be defined by means of MOF. Models that are created in conformance with the metamodels of layer M2 are called M1-models. These are instances of the metamodels of the upper layer, like for example models written in UML. Such M1-models typically describe the objects of the real world, which correspond to layer M0.

Figure 2.1: Layered MOF-based metamodeling architecture of MDA

Furthermore, MOF provides a couple of generic services that are applicable to arbitrary models and metamodels across the layers M3 to M1. For instance, it specifies generic reflection operations, allowing navigation through models and from any model element to its metaobject. Another important service is the serialization capability, which allows the universal interchange of models via the related XML Metadata Interchange (XMI) standard. Due to such services and the fact that the common metaobjects are reused in both the MOF meta-metamodel and the UML metamodel specifications, it has recently become possible to model metamodels using simple UML class diagrams. This means that any UML-conform modeling tool can be applied to model a metamodel (e.g. of some custom DSML) and to exchange it using the common XMI format. The listed characteristics of MOF make it a key foundation for OMG's MDA approach (see also [58, p. 14]). Thus, among others, MOF serves as the basis for the metamodel definitions of all OMG languages applied in the context of MDA. Furthermore, it facilitates interoperability between different modeling and model transformation tools.
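As a rough, hedged illustration of what such an XMI interchange document looks like (the model content is invented, and the namespace URIs are abbreviated rather than quoted from a particular specification version), a minimal UML model containing a single class might be serialized as follows:

<?xml version="1.0" encoding="UTF-8"?>
<uml:Model xmi:id="_model" name="ExampleModel"
    xmlns:xmi="http://www.omg.org/spec/XMI/..."
    xmlns:uml="http://www.omg.org/spec/UML/...">
  <!-- a single class with one public attribute -->
  <packagedElement xmi:type="uml:Class" xmi:id="_sensor" name="Sensor">
    <ownedAttribute xmi:id="_sensor_id" name="id" visibility="public"/>
  </packagedElement>
</uml:Model>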

2.1.3 UML

The Unified Modeling Language (UML) is one of the core standards provided by the OMG and is also accepted as an ISO standard (ISO/IEC 19505¹). The UML standard is primarily intended to be used for the specification, visualization, analysis, and implementation of software-based systems [61]. It offers a general-purpose modeling language applicable to various domains. In particular, as one of the recent studies [51] shows, UML and UML-based modeling languages play a leading role in the field of model-based software engineering. This is certainly also a consequence of the harmonized interaction with other MDA-related OMG standards, such as XMI, MOF, MOFM2T, QVT, etc., which enable interoperability between modeling tools and provide support for the entire MDA process.

¹ https://www.iso.org/standard/52854.html


Figure 2.2: The taxonomy of UML 2.5 diagrams (based on [61, Figure A.5])

At the time of this writing, the current version of the UML specification is 2.5, released in June 2015. However, most of the available modeling concepts have existed for more than a decade, since version 2.0 of UML was released in 2005. As illustrated in Figure 2.2, the current UML standard specification defines 14 diagram types for the graphical representation of the elements in a UML model. Divided into two major kinds, structural diagrams and behavior diagrams, they allow a modeler to represent structural and behavioral aspects of the system under design from different perspectives. It should be emphasized that diagrams always provide only partial views on the elements of the model, and that a number of concepts have no explicit graphical representation at all and therefore cannot be visualized in standard diagrams.

Structure Diagrams

The static structures and relationships between different architectural elements of a model can be represented by means of seven structural diagrams. A class diagram shows data types, classes, and interfaces as well as their features

(properties, operations, etc.), relationships (generalizations, realizations, associations, dependencies, etc.), and constraints. An object diagram can be used to represent instance specifications (objects) of classes and interfaces with concrete value specifications of their features (slots), as well as instances of their associations (links), relating to a particular state of the running system. A component diagram shows the composition of a system based on components while abstracting from details of their inner structure (classes and objects). With the help of a package diagram, the packaging structure of a model can be represented. Packages serve for generic structuring and can contain packageable elements, such as classes, components, etc., providing a namespace for them. A composite structure diagram can be used to show the internal structure of a classifier, such as, for example, a class or collaboration. Using a deployment diagram, the distribution (deployment) of artifacts over nodes (deployment targets) can be visualized. For instance, artifacts that implement (manifest) a system's software components can be deployed to hardware components. A profile diagram further allows representing custom stereotype definitions with tagged values and constraints (see below).

Behavior Diagrams

Behavior diagrams can be used to represent the dynamic behavior of active objects of a system from different perspectives. Based on series of actions, states, or interactions, behavior diagrams can describe changes occurring over time, for example, in class objects and their methods, and also in global collaborations or activities. An activity diagram shows control or object flows as a sequence of actions coordinated with conditions. A state machine diagram (also called a state diagram) represents all possible states of a part of the system along with (conditional) state transitions which describe its reaction to external and internal events. There are four kinds of interaction diagrams in UML. The most common, sequence diagrams, can show the interactions between a number of objects, parts, or external actors (represented as lifelines) as sequences of messages. The other three diagram types, communication diagrams, interaction overview diagrams, and timing diagrams, focus respectively on the communication architecture, the overview of a control flow, and the precise time schedule of exchanged messages and internal state changes. With the help of a use case diagram, it is further possible to provide a rather static view of the actions (use cases) that a system or a part of it can perform, focusing on the relationships between the actions, the system's parts, and external actors.

Extension Profiles Mechanism

In order to allow adaptation of the general-purpose UML metamodel for specific domains, platforms, or methods, the UML standard provides a generic, lightweight extension mechanism via so-called profiles. Profiles can be applied to any package of a UML model or to the whole model. A profile can provide definitions of custom stereotypes, tagged values, and constraints to extend or adapt the semantics of the standard UML metamodel elements. Stereotypes are specific metaclasses that extend the standard ones and can be applied to the corresponding model elements. In UML diagrams, stereotypes are typically represented as textual annotations on the model elements to which they are applied, where the name of a stereotype is surrounded by a pair of angle brackets, for example, «CustomStereotype». However, icons can be attached to a stereotype to define an explicit graphical presentation for an annotated model element. Tagged values are metaattributes of stereotypes that allow the definition of specialized properties. Additional constraints may be defined for a profile to ensure that the model is well-formed when the profile is applied. Besides the standard profile with a set of predefined standard stereotypes provided with UML, there are a number of standardized profiles available for different domains. In the subsequent sections, we briefly introduce several profiles that are relevant for this thesis.

Literature For Further Reading

Although there is a huge number of books and tutorials about UML available, the most complete and detailed description is provided by the UML standard specification [61]. Version 2.5 of the specification has even become more readable. Nevertheless, for a quick introduction to UML we can recommend the classic book by Russ Miles, “Learning UML 2.0” [70]. For the more recent versions 2.4 and 2.5 of UML, there are a few books in German [69, 43] that also prepare the reader for the practical application of UML.

2.1.4 SysML

The System Modeling Language (SysML) [60] is an OMG standard intended to support the design of complex systems.


SysML was initiated by the International Council on Systems Engineering (INCOSE) in 2001 as a standard language for systems engineering based on UML. The first OMG specification of the standard was published in September 2007. The most recent version is 1.5, which was released in May 2017 during the writing of this thesis. Although version 1.4 of SysML, dated August 2015, was the version used in our work, the main modeling concepts described are still compatible with the latest release. Recently, SysML has also been accepted as an ISO standard (ISO/IEC 19514:2017²). While reusing a subset of UML, SysML provides additional extensions, for example, to allow explicit modeling of requirements and continuous systems. Technically, SysML is defined as an extension profile of UML which comes along with a few new modeling libraries that provide some reusable model elements, as well as with new diagram notations that amend the diagram notations reused from UML. The SysML diagram taxonomy is shown in Figure 2.3. Due to the fact that SysML references only a subset of UML, not all UML diagram types are used in SysML. It reuses the UML package diagram, use case diagram, sequence diagram, and state machine diagram as is without changes, whereas other SysML diagram types are either modified from UML or newly added.

Figure 2.3: The taxonomy of SysML diagrams (based on [60, Figure A.1])

² https://www.iso.org/standard/65231.html


Thus, activity diagrams are extended to support continuous functions and Enhanced Functional Flow Block Diagrams (EFFBDs), which are often used for systems engineering. The block definition diagram is similar to the UML class diagram, with blocks, based on UML classes, as the main elements for the decomposition and structuring of a system in SysML. The internal block diagram is thereby an extension of the UML composite structure diagram type. SysML also defines two new diagram types: the requirements diagram and the parametric diagram. The requirements diagram type supports the representation of requirements in graphical or tabular notation, allowing the visualization of composition, dependency, and derivation relationships between requirements as well as of traceability, satisfaction, and verification of requirements by other model elements. A parametric diagram can represent parametric relationships (aka constraints) between properties of blocks. This can be used to express mathematical equations, useful for instance when performing system analysis.

For more information about SysML we refer to the standard's specification [60]. For a quick overview of the language, as well as for detailed practical guidance on the application and adoption of SysML for model-based systems engineering, we can recommend the book “A Practical Guide to SysML” by Friedenthal et al. [26]. Another readable book in German is the third edition of “Systems Engineering mit SysML/UML” by Tim Weilkiens [93]. The book, which is written by a member of the OMG group and a co-author of the SysML specification, provides an introduction to the current version of SysML and shows its practical application. The author first describes the fundamental modeling elements of UML and then the extensions provided by SysML, explaining how both languages can be used in combination. The first edition of the book is also available in English; it was actually the first English book on SysML.

2.1.5 MARTE

The OMG provides a UML extension profile for Modeling and Analysis of Real-time and Embedded systems (MARTE) [55] in order to add additional capabilities to UML. On the one hand, the profile provides support for the detailed specification of real-time and embedded systems, facilitating their model-driven development. On the other hand, it adds annotation facilities required for model-based analysis. Thus, MARTE can be used to annotate models with qualitative as well as quantitative information to allow for the analysis or validation of system properties, such as performance or schedulability.


Figure 2.4: Package diagram of MARTE sub-profiles (based on [55, Figure 6.4])

As depicted in Figure 2.4, the MARTE profile is structured as a set of sub-profiles clustered in four packages for different modeling and analysis aspects. There are five sub-profiles that form the foundation of MARTE, providing type definitions and core model elements for common concepts, among others for time (Time), non-functional properties (NFP), generic resources (GRM), and allocations (Alloc). The sub-profiles of the MARTE_DesignModel package cover the main modeling aspects:

• GCM offers additional stereotypes of MARTE's generic component model, which refines the UML model of composite structure by supporting client-server-like and data-flow-like communication schema definitions.


• HLAM supports high-level application modeling concepts related in the first instance to behavior, communication, parallelism, and concurrency. The most central concept is the real-time unit (RtUnit) that extends the concept of UML active object by adding additional features for controlling the owned schedulable resources, concurrent behavior invocations, and a central queue for received messages.

• SRM and HRM together provide a set of detailed resources to allow for the co-modeling of software and hardware platforms. Thereby, SRM focuses on modeling software resources and services to describe multitasking application programming interfaces, whereas HRM provides stereotypes for modeling hardware resources, allowing for their logical classification and physical specification.

The package MARTE_AnalysisModel consists of three sub-profiles. GQAM provides the fundamental concepts for generic quantitative analysis modeling, which are specialized for the domains of performance and schedulability analysis in the respective sub-profiles PAM and SAM. Among others, the concern of these sub-profiles is to offer stereotypes for the description of:

• the context of analysis, which determines the system or part of it to be analyzed and allows for the definition of global input and output parameters;

• the workload of the system, which is typically expressed by the events generated from the system’s environment;

• the scenario concepts related to the system execution behavior, which is triggered by corresponding workload events and may be composed of sub-scenarios called steps.

Finally, the package MARTE_Annexes contains the annex sub-profiles defined in MARTE as well as a set of predefined model libraries. The sub-profile RSM allows for repetitive structure modeling, i.e., describing structures or topologies that are composed of a multiplicity of structural elements and links in a compact way. VSL stands for Value Specification Language and is the expression language of MARTE used for the value specification of properties, constraints, and stereotype attributes, especially those typed with MARTE NFP types. The VSL sub-profile primarily contains stereotypes related to the definition of extensive data types and the declaration of variables.

A collection of predefined data types, together with the operations supported on them, is provided by the set of MARTE model libraries. These data types are used intensively throughout the MARTE standard and cover generic primitive and collection types as well as different NFP, time, and resource types. Like UML and SysML, MARTE is also an OMG standard. Its first release was published in 2009 and was intended to replace the existing UML profile for Schedulability, Performance and Time (SPT), which suffered from a lack of compatibility with UML 2.0 and other OMG standards. The current version 1.1 of the MARTE standard, which is used in this thesis, has been available since 2011. Although the relevant concepts of the standard are discussed in later sections of this thesis, for additional information or for detailed guidance on the use of MARTE we refer to the standard specification [55]. Since the current standard specification exceeds 700 pages and is rather structured as a reference guide, a more user-friendly introduction to MARTE can also be found in [78].
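To give a flavor of how these NFP types and VSL expressions appear in practice, the following is a small, hedged sketch: the stereotypes and their attributes (hostDemand of a GQAM/PAM step, pattern of a workload event) are taken from the sub-profiles described above, whereas the element names and numeric values are invented for illustration.

«PaStep» processImage
    hostDemand = (value = 12.5, unit = ms, statQ = mean, source = est)

«GaWorkloadEvent» frameArrival
    pattern = periodic(period = (value = 40.0, unit = ms))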

2.1.6 Alf

The Action Language for Foundational UML (Alf) [56] is an OMG standard that specifies a textual notation for UML modeling elements. As indicated by its name, the primary goal of Alf is to provide a textual action language for specifying executable behaviors within UML models. Therefore, code written in the Alf syntax can typically be applied as the body of UML OpaqueBehavior, OpaqueAction, or OpaqueExpression elements to define executable behaviors, actions, or expressions, respectively. Alternatively, the Alf code can also be directly transformed into the corresponding UML notation using the abstract syntax and mapping specification provided with the standard. With the help of Alf, it is further possible to represent structural modeling elements, which allows for the representation of nearly an entire UML model instance in Alf. In this regard, it should be noted that the Alf syntax covers only a subset of UML known as Foundational UML or fUML [64]. The execution semantics of Alf are defined in the fUML specification and apply to Alf due to its complete mapping to the abstract syntax of fUML.

Syntactically, Alf closely resembles the common programming languages Java and C++, which are actually often used as action languages for UML in practice. The syntax of Alf supports most of the typical control statements, like if, switch, for, while, etc., and operators known from these programming languages. In addition, Alf also adopts the notations for accessing model elements and for dealing with collections introduced in the Object Constraint Language (OCL) [59], a further OMG standard.


For the purpose of example, some capabilities of the Alf notation are illustrated below in Listing 2.1 for the Quicksort activity from the standard specification.

Listing 2.1: Example of defining an activity using Alf notation (from [56, p. 375])

activity Quicksort(in list: Integer[0..*] sequence): Integer[0..*] sequence {
    if (list->isEmpty()) {
        return null;
    }
    x = list[1];
    list->removeAt(1);
    return Quicksort(list->select a (a < x))->including(x)
        ->union(Quicksort(list->select b (b >= x)));
}

Alf uses an implicit type system with static type checking during the transformation/mapping of Alf code. It basically supports the primitive types imported from UML, like Boolean, Integer, String, and UnlimitedNatural. It must be noted that support for floating-point values and the corresponding UML type Real was first introduced in the most recent version 1.1 of the Alf standard specification, which was released shortly before finishing this thesis. Besides the standard UML types, Alf can of course work with all the types defined in the surrounding model. Additionally, the Alf standard also defines a model library that includes additional primitive types and templated collection types as well as a large set of functions used to operate on these types. Among others, the Alf standard model library contains the predefined types Queue, Set, List, Map, etc. Furthermore, a profile is distributed with the model library for Alf that contains a single stereotype «TextualRepresentation». This stereotype can be applied to a comment to indicate that it contains a textual representation of the element to which the comment is attached. Typically, when a model element is mapped from a textual notation written in Alf, its original textual representation is considered to be attached as a comment. However, the stereotype itself is also generally applicable due to the provided language attribute, which is intended to hold the name of the notation language used. Since the Alf language is relatively new and not yet widely used in practice, there are only few publications available besides the official standard specification [56]. For further reading we can also suggest [33], [77], and [76].


2.1.7 UTP

In order to address the domain of test modeling, OMG provides an additional UML Testing Profile (UTP) [57]. This profile extends UML by a set of stereotypes to describe a test model and principally offers an extensive testing language. The basic concepts of UTP are test context, test case, test component, system under test (SUT), and data pool, which are represented by the corresponding stereotypes. These individual concepts are briefly discussed in the following.

A test context, also referred to as a test suite, describes, in the first instance, the configuration of a test execution environment (the so-called test configuration), providing its structural architecture. This includes the composition of the test environment from its parts, which are in particular test components connected to the SUT. Since the stereotype «TestContext» is applicable both to UML structural classifiers and to behavioral classifiers, a test configuration can be represented by means of UML composite structure diagrams. For different types of test, i.e. unit, integration, or system test, the SUT may correspond to a single system component, a set of components, or the whole system, respectively. The «SUT» stereotype is applicable to a property, which is typically a part of the test context, while its type or classifier is considered to be an element of the system model. During test execution, all interactions with the SUT are presumed to run over its public interface operations, ports, and signals, since the SUT is regarded as a black box.

A test component acts as a trigger of a test and is responsible for generating stimuli on the SUT and for evaluating the received replies. Test components can be connected to the SUT or to other test components they are interacting with. The stereotype «TestComponent» can be applied to a UML class or any derived element, e.g. a component, normally representing an active class.

A test case describes the behavior of a test, i.e. the trigger behavior of the involved test components and the expected reaction of the SUT. Each test case is always related to a test context that defines the environment in which it can be executed. This relation is realized by defining test cases as owned operations or behaviors of a test context classifier annotated with «TestCase». The specification of a test case is typically given as a sequence of interactions between the SUT and test components with possible alternatives and loops. Furthermore, UTP provides additional stereotypes for the behavioral elements to express log, validation, termination, and timer-related actions.


Principally, any kind of UML behavior diagram could be used for the behavior specification of a test case. However, sequence diagrams and state machine diagrams have prevailed in practice. Optionally, a test context can also specify a classifier behavior that acts as a test controller for determining the order and conditions of the test cases to be executed.

In order to express the test data used, for example, in a test case's stimuli and responses, UTP mainly provides three stereotypes: «DataPool», «DataPartition», and «DataSelector». A data pool defines a container for allowable values: either explicit values or sets of equivalence classes (so-called data partitions). It can be modeled as any UML classifier and must be associated with either a test context or test components. A data partition is also a classifier and represents a set of values which should be treated equally, e.g. allowed input values of stimuli that are expected to result in the same response of the SUT, or possible return values identifying a valid response. A data partition is always associated with a data pool. A data selector is an operation that implements some data selection strategy to select concrete values from a data pool or partition.

The UTP standard was first released in 2005, and version 1.2 has been available since March 2013. Currently, a new version of the standard (UTP 2.0) is in the process of standardization. Besides the official standard specification [57], we can recommend two books, “Model-Driven Testing: Using the UML Testing Profile” by Baker et al. [8] and “Basiswissen modellbasierter Test” by Roßner et al. [68], for an in-depth introduction into the application of UTP for model-based testing.

2.2 Transformation Languages

Model transformations play a central role in model-driven approaches such as MDA, in which models serve as primary artifacts for specification and design at different abstraction levels and are then, for example, used as a source for the automated generation of specific artifacts. The intended applications of model transformations include, among others, the generation of models or other artifacts, mapping and synchronization between models, creation of different views on a system, and model evolution tasks like model refactoring [18, 52]. In general, model transformations can be categorized as model-to-model (M2M) or model-to-text (M2T) transformations. Whereas M2M transformations involve manipulations on the level of models (whose source and target models may conform to different metamodels or to different abstraction levels of the same metamodel), M2T transformations support the generation of textual artifacts, such as code or documentation.

Although a large number of model transformation approaches, languages, and tools have been proposed in the past, the decisive contribution to this field was made by the OMG, which has brought out two standards: QVT for M2M and MOFM2T for M2T transformations. These standards, which are supported by several tools and are part of the official MDA approach, provide a basis for our work and are therefore briefly introduced in the subsequent sections.

2.2.1 QVT

With the aim of providing a language for defining model transformations between MOF-based metamodels, a standard called Query/View/Transformation (QVT) [62] was released by the OMG in 2008. At the time of writing this thesis, version 1.3 of the standard, dated June 2016, was available. Building on other well-established OMG standards, MOF and OCL, QVT provides formal means to create query-based views of MOF-based models and to express relationships (i.e. mappings on the lower abstraction layer, relations or transformations on the higher layers) between source and target models. As illustrated in Figure 2.5, the abstract syntax of QVT is defined as a MOF-conform metamodel, and concrete mappings are specified on the basis of the source and target metamodels. These mappings are then executed by a transformation engine for concrete instances of those models, querying input models and producing output models, though transformations can also be specified for several input and/or output models.

[Figure: on layer M3 the MOF meta-metamodel; on layer M2 the source metamodel, QVT, and the target metamodel, each conforming to MOF; on layer M1 the source model, mapping model, and target model, each conforming to its metamodel; the mapping is executed on a transformation engine]
Figure 2.5: Concept of QVT model-to-model transformations (adapted from [50, Fig. 6])


The QVT specification actually defines three transformation sublanguages on different abstraction layers: Core (QVTc), Relations (QVTr), and Operational Mappings (QVTo). Core and Relations are two declarative languages that allow transformations to be defined as a set of bidirectional relations or mappings among models. The Core language provides a relatively simple and explicit syntax, whereas the Relations language offers a more user-friendly syntax at a higher abstraction level with implicit traces between the model elements involved. The Operational Mappings language extends both declarative languages to allow for the imperative specification of mappings in a more procedural style. An important difference to the declarative languages is that QVTo transformations are always unidirectional. As QVTo is primarily used in this work, the following explanations only refer to Operational Mappings. Listing 2.2 presents an example of defining a model transformation using the basic QVTo syntax. An operational transformation in QVTo specifies a signature indicating the set of involved input and output models (line 1). Transformations as well as helper queries can be organized in libraries and included in the current transformation by using the keywords access or extends (lines 2-3). With a special mapping main (line 4), each transformation defines a starting point for execution. Other mappings or transformations can be invoked from the current one using the map keyword for mappings (line 8) or by calling the transform() operation on a transformation instance (line 6).

Listing 2.2: Example of defining a QVTo transformation
1  transformation UMLToOut(in uml : UML, out outM : OUTMM)
2      access transformation SimplifyUML(in UML, out UML),
3      extends Uml2General(UML);
4  main() {
5      var tmp : UML;
6      var retcode := (new SimplifyUML(uml, tmp))->transform();
7      if (not retcode.failed()) {
8          tmp.objectsOfType(Package)->map packageToFolder();
9      } else raise "UmlTransformationFailed";
10 }
11
12 mapping UML::Package::packageToFolder() : OUTMM::Folder {
13     name := "Folder " + self.name;
14 }


The QVTo language allows the definition of local and global variables (lines 5-6) and contains typical imperative constructs, such as loops, conditions, branches, etc. In the example presented, a transformation is defined that operates on any UML model as input and generates a model of a different metamodel called OUTMM. In the body of the main mapping, an external transformation is first invoked to perform some simplifications on the input model (line 6); then all UML Package elements of the model are selected (line 8), and for each of them the mapping operation packageToFolder (lines 12-14) is called, which creates a Folder element of the target metamodel. Inside the packageToFolder mapping, the properties of the created Folders are assigned appropriate values. Besides its very expressive syntax for the definition of model transformations, QVT provides the possibility to execute external code. This mechanism (also referred to as the black box mechanism) allows reusing already existing transformation or querying libraries that may be implemented in any programming language. It introduces additional flexibility into the organization of model transformations. However, this mechanism also carries some potential risks, since the access to and manipulations of model elements are opaque to the transformation engine and beyond its control in that case. For further information on QVT, we refer to the standard's specification in [62].
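To illustrate how helper queries could be organized in a reusable library, the following sketch shows a hypothetical QVTo library containing a single query over the UML metamodel; the library and query names are purely illustrative assumptions, and the required modeltype declaration is omitted for brevity.

    library UmlQueries;   -- hypothetical library name; modeltype declaration omitted

    -- Returns the names of all operations owned directly by a class.
    query UML::Class::ownedOperationNames() : Sequence(String) {
        return self.ownedOperation->collect(name);
    }

Such a library could then be made accessible to a transformation like the one in Listing 2.2 by means of the access keyword.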

2.2.2 MOFM2T

MOF Model to Text Transformation Language (MOFM2T) [54] is the standard provided by the OMG for M2T transformations. It defines a template-based language that can be used to transform MOF-conform input models into any kind of textual artifact, such as source code, configuration scripts, reports, or documents. The core language elements are template, query, and module. A template contains textual constructs of the target language with integrated placeholders for dynamic data that have to be generated from models. These placeholders contain expressions that are evaluated to the corresponding textual fragments while accessing data from the input elements (template parameters). The type of a template parameter is either specified by the metamodel used or relates to a default type from OCL, like String, Boolean, Integer, or OclAny (the basic supertype). The expression language of MOFM2T provides basic control structures, such as for, if-else, and let, and allows invocations of other templates and queries. Thereby, queries encapsulate complex or frequently used expressions and are usually applied for selecting specific model elements and/or extracting data values from them.


The resolved values are then converted to an appropriate string representation by means of the expression language and the adapted OCL String library. Queries and templates are organized in modules and may be specified as public, protected, or private elements of a module. Modules can import other modules to access their public elements, or modules can be extended to override public and protected templates, thus supporting reuse and inheritance of code and allowing for the customization of a module's behavior. Furthermore, templates can be overloaded for different types of input elements or parameter settings and can have guard conditions. A guard condition permits the execution of a template only in specific situations and enables the selection of a particular template in case of template overloading. Finally, the output of a template can be directed to a file identified by a resource URI. Writing to a file can be performed in two modes: append or overwrite. An example illustrating the usage of the basic language constructs is presented in Listing 2.3. For more detailed information on MOFM2T, we recommend reading the standard's specification in [54], which, in contrast to other OMG standards, does not exceed 25 pages excluding the examples.

Listing 2.3: Example of defining a MOFM2T transformation (adapted from [54, p. 5])
[query public allOperations(c : Class) : Set(Operation) =
    c.operation->union(c.superClass->select(sc | sc.isAbstract = true)
        ->iterate(ac : Class; os : Set(Operation) = Set{} |
            os->union(allOperations(ac)))) /]

[template public classToJava(c : Class) ? (c.isAbstract = false)]
[file ('file:\\' + c.name + '.java', false, c.id + 'impl')]
class [c.name/] {
    // Attribute declarations
    [for (a : Attribute | c.attribute)][a.type.name/] [a.name/];[/for]

    // Operation declarations
    [operationToJava(allOperations(c))/]
}
[/file]
[/template]

[template public operationToJava(o : Operation)]
[o.type.name/] [o.name/]([for (p : Parameter | o.parameter)
    separator(', ')][p.type/] [p.name/][/for]) {};
[/template]


2.3 Related Work

Since model-driven engineering techniques based on OMG standards, and above all on UML, have become increasingly important, a number of tools and projects have been established in this field. There is a variety of UML editors with integrated code generation capabilities. Besides this, different approaches can be found in the literature that aim to allow for the verification or performance analysis of UML-based system specifications. A good summarizing evaluation of the relevant approaches and of some tools that were available a few years ago is given in [19] and [49]. As claimed by both authors, their capabilities are still not sufficient or not generic enough for systematic, standard-compliant modeling and analysis of system specifications. In the meantime, however, several advancements have been made and new projects have been launched with the aim to contribute to this end. We discuss the individual methodological approaches and the current state of practice in the field of model-driven engineering in detail in Chapter 3 while introducing the concept of our TAS approach. The purpose of this section is, however, to provide an updated overview of recent comparable projects and currently available tools related to the objectives of this thesis. Thus, the focus of this overview lies on those projects and tools that make a substantial contribution towards an integrated model-driven systems engineering process, allowing system specification based on common modeling standards as well as simulation-based analysis and/or verification of specification models. At the end of this section, we summarize the similarities and differences between the existing solutions and the suggested one.

2.3.1 Relevant Research Projects and Tools

MADES

The EU-funded FP7 project MADES [2, 67, 66] ran in the period 2007-2013 and aimed at improving the productivity of developing real-time embedded systems for the avionics and surveillance industries by introducing model-driven techniques. The modeling approach proposed by MADES is based on the UML profiles SysML and MARTE. In order to avoid intersections between both profiles and to simplify the design process, a dedicated MADES language has been defined as a subset of SysML and MARTE. The language introduces an additional set of proprietary stereotypes for specifying further semantics that are needed for the underlying model transformations.


In addition, specialized diagrams, for example for requirements or hardware/software specifications, have been defined. The proposed tool chain of MADES allows the generation of hardware descriptions from the modeled architecture specification as well as of platform-specific software for the modeled target architecture. The formal verification of the design models, including functional and non-functional properties, is primarily performed by means of model checking. For this purpose, scripts are generated for the verification tool Zot3 by transforming the notations used in MADES diagrams into temporal logic. The advertised simulation capabilities of the proposed verification framework are based on the evaluation of the temporal logic formulas, which represent the behavior of the system model, with respect to input variable traces generated from a corresponding environmental model, which is typically described by differential equations. The result of such a simulation is a trace, i.e. a sequence of variable values over a time interval, that satisfies both the system and the environmental model, if any could be found.

COMPLEX

Model-driven techniques have also been applied within the COMPLEX project [3, 32, 36], another EU-funded FP7 project, to address the problem of platform-based design space exploration (DSE) of embedded HW/SW systems. The most important outcomes of the project are a reference framework and a design flow concept that aim to enable rapid virtual prototyping of such systems under consideration of timing and power aspects. The suggested design methodology is based on UML and MARTE for the specification of application, architecture, and platform description models, including non-functional properties. The functional behavior is, however, assumed to be written in sequential C/C++ code. For the purpose of design space description, a custom UML extension profile has been proposed, whereas the standard UTP profile was applied to describe the system's environment and the associated interaction scenarios between environment components and the system. Based on the UML specification models and C/C++ functional descriptions, executable SystemC simulations can be generated by the proposed COMPLEX Eclipse Application (CEA) tool, which then allow different system designs to be analyzed for a set of use cases, e.g. using various platforms, alternative HW/SW mappings, and power management strategies. The CEA tool has been deliberately built on the Eclipse platform to provide an integrated and extendable framework based on open modeling standards and tools available for Eclipse.

3 https://github.com/fm-polimi/zot

In addition to the support for the generation of executable performance models, the CEA tool implements several analysis engines, which are primarily used to check the compliance of UML-based specification models with the COMPLEX modeling concept.

ANSYS SCADE

The ANSYS SCADE4 product family provides a full-featured integrated model-driven development environment targeting the development of critical embedded software and systems. It supports the entire development workflow, from requirements engineering and design, through model checking, virtual prototyping, and simulation, up to automated code generation and test execution. The core of the product family is the formal SCADE language, which is based on decision diagrams, synchronous data flows, and state machines. This language is well suited for the specification of deterministic, safety-critical embedded control applications. At a higher abstraction level, the modeling of complete system architectures can be achieved with SysML in the SCADE System tool. A SysML system model is automatically transformed to SCADE, enabling synchronization with the specification of software subsystem components made in the SCADE Suite tool. With ANSYS Simplorer and SCADE Test, additional tools are available for verification and validation. Whereas ANSYS Simplorer allows modeling and simulation of system prototypes, SCADE Test provides a comprehensive testing environment for the creation, maintenance, and execution of test cases.

Sparx Enterprise Architect

A commercial UML modeling tool that finds broad application in industry is Sparx Enterprise Architect5. This tool is based on UML 2.5 and provides built-in support for several standards, first of all for SysML, BPMN, and SoaML. Furthermore, Enterprise Architect covers many aspects of model-driven software development, including requirements modeling, code and document generation, reverse engineering, debugging, traceability, and extensive project management support. A proprietary solution is available to model and manage test scenarios. The tool also offers some basic model validation capabilities to check the validity of a UML model against predefined UML rules and model-specific constraints that can be defined in OCL.

4 http://www.esterel-technologies.com/products
5 http://www.sparxsystems.eu/


The advertised simulation features of Enterprise Architect can be used to examine the behavior of individual behavioral diagrams, such as UML activity, interaction, and state machine diagrams, also supporting BPMN process and SysML parametric diagrams. Although the product is based on several OMG standards and provides an extendable model-driven generation framework, we found in it a large number of proprietary solutions and a lack of strict adherence to the standards.

IBM Rational Rhapsody

IBM Rational Rhapsody6 is a popular commercial product family that provides a model-driven development environment for systems engineers and embedded software developers. Rhapsody's editors allow for extensive and rapid modeling with UML and SysML, while also supporting the application of predefined profiles, such as MARTE, or the creation of custom profiles for DSMLs. Requirements modeling and analysis, model checking, code generation, as well as traceability features are included in the individual tools. Beyond this, Rhapsody provides facilities to simulate system designs by means of built-in model execution or through integration with the simulation tool Simulink7. Using a tool add-on called Test Conductor, it is further possible to apply concepts of the UTP profile for model-based test specification. This add-on also allows for the generation, execution, and management of test cases, which are executable on the generated implementation code or even on instrumented code that allows for model simulation. However, similar to Enterprise Architect, the Rhapsody products are heavily loaded with proprietary solutions. For example, Rhapsody provides its own implementation of OMG modeling standards, such as SysML and UTP, which does not fully and strictly comply with the official standards. Furthermore, Rhapsody does not yet support the standardized action language Alf, although this was planned, and offers its own action language instead.

Syntony

The lack of compliance with common modeling standards, proprietary input formats and simulation cores, and, as a result, the poor exchangeability and interoperability between model-based tools have already been criticized in [19].

6 https://www-03.ibm.com/software/products/en/ratirhapfami
7 https://www.mathworks.com/products/simulink.html

As a consequence of these drawbacks, the tool called Syntony has been developed, which was aimed at enabling an "integrated process for the modeling and simulation of systems that is fully compliant to the UML 2 standard" [19, p. 70]. The main contribution of Syntony was a transformation framework for the automated generation of discrete-event simulations for OMNeT++ [89] from standard-compliant UML models annotated with MARTE and UTP. In doing so, Syntony enables the simulation and analysis of system models considering performance aspects, while also allowing for their early validation and testing by applying the method of unit testing to simulated UML models. Since the Alf standard was not yet released at that time, Syntony provides its own action language called Casual for the textual representation of activity diagrams, along with an associated compiler to transform this notation into standard-conform UML activities. Additional features for the automated generation of simulation experiments, simulation control, and statistical analysis of simulation results were also introduced in Syntony.

2.3.2 Comparison with the Suggested Approach

With regard to the work discussed above, it can be seen that, even after years of development, significant gaps in the consistency as well as in the practical implementation of the methods still exist. We intend to close these gaps by providing the TAS approach and developing our SimTAny framework strictly based on common standards. In Table 2.1, we summarize the main capabilities of the presented tools in terms of the applied model-driven engineering methods and standards with a view to providing a comparison with SimTAny. Although the importance of common modeling standards, first of all UML and SysML, is uncontroversial and most of the tools are already built on them or provide appropriate interfaces, as in the case of SCADE, the application of the standards is often limited to individual engineering aspects, and proprietary solutions are predominant. Thus, the consistent application of MARTE, e.g. for annotating performance-relevant, timing, or analysis aspects, is only supported by MADES, COMPLEX, and Syntony, albeit to differing extents. UTP is only used by Syntony and by an add-on for Rhapsody for the systematic, model-based specification of test suites that serve as the source for formal validation on the model level with Syntony and additionally on the code level with Rhapsody. Although COMPLEX also uses UTP, its application is limited to the description of environment and interaction scenarios in order to derive stimuli for simulated systems, but there is no information about observing and validating system responses.


Table 2.1: Capabilities overview of SimTAny and the relevant tools

Tool      | Modeling Languages                      | Action Language | Transformation Language(s) | Validation Methods
MADES     | UML, SysML, MARTE (specific notation)   | -               | Epsilon8                   | model checking
COMPLEX   | UML, MARTE, UTP                         | C               | MOFM2T                     | model checking; DES with SystemC; design space exploration (excluding behavior)
SCADE     | proprietary (SysML interface available) | proprietary     | proprietary                | model checking; model execution (co-simulation possible); testing on generated code
Rhapsody  | UML, SysML (MARTE and UTP available)    | proprietary     | proprietary, Java          | model execution (co-simulation with Simulink possible); testing on generated code; testing on simulated model
Syntony   | UML, MARTE, UTP                         | proprietary     | Java AOP                   | DES with OMNeT++; testing on simulated model
SimTAny   | UML, SysML, MARTE, UTP                  | Alf             | QVT, MOFM2T                | static validation; DES with OMNeT++; testing on simulated model; design of experiments

All the tools either provide their own action languages or just allow the use of higher-level programming languages like C++ or Java. The common Alf standard is not yet supported by the relevant tools. For the early validation of model specifications, most of the tools apply static model checking and/or dynamic model execution methods. However, the advertised simulation capabilities of most tools are usually quite limited. The integrated simulation engines typically allow more or less trivial model executions in the manner of animating individual behavior diagrams. Due to the integration of full-featured simulation frameworks, most tools provide, in principle, the ability to perform complex simulation studies of complete systems. Nevertheless, a standard-conform way for the definition of randomness, non-functional aspects, and extensive experiment scenarios on the level of models using MARTE was only supported by Syntony at that time.

8 http://www.eclipse.org/epsilon/

In accordance with the approach of model-driven development, all tools provide automated generation of implementation and/or simulation artifacts from models, such as code, scripts, configuration specifications, etc. A significant drawback of most of the tools presented is, however, that proprietary transformation engines or custom languages are used for generation. Even in the case of flexible extension mechanisms, potential adaptations or extensions of the transformation rules are restricted to the particular tool. One exception is the COMPLEX project, which utilizes the standardized transformation language MOFM2T. However, the transformation rules of COMPLEX do not cover the system behavior. Since we believe that the ubiquitous application of common and open standards is required to make the TAS approach extendable, universally applicable, and future-proof, we decided to build upon the approach that has been suggested for Syntony and to reimplement large parts of the framework in order to integrate the current standards. Thus, in SimTAny we added support for the SysML and Alf modeling standards. In combination with MARTE and UTP, this allows for a uniform design approach, where all the specifications regarding requirements, system and software design, analysis, as well as testing can be made in a standard-conform way using a single modeling paradigm based on UML. We implemented the complete transformation process of simulation code generation based on the open standard transformation languages MOFM2T and QVT. As a result, the implemented transformation rules become reusable, and future extensions and adaptations are easily possible for everyone. Furthermore, in comparison to Syntony, SimTAny implements additional features, enabling, for example, static model validation or complete traceability between requirements, related model elements, and generated code. These features are intended to increase the quality and efficiency of the development process and are an indispensable part of the functionality expected of a comprehensive development environment, as is the case in SCADE and Rhapsody. An outstanding feature of SimTAny is, furthermore, a model-based component for the design, generation, execution, and management of experiments that facilitates extensive and structured simulation studies. Finally, our framework is designed on a service-oriented architecture to simplify its integration into a distributed development environment, while also making it easier to adapt and extend its functionality, for example to provide support for different simulation engines.


CHAPTER 3

Test-driven Agile Simulation

In this chapter, a theoretical overview of the concept of Test-driven Agile Simulation (TAS) is presented. We start with the motivation of the suggested approach in Section 3.1, describing the intention behind it, its central idea, and the main objectives. The concept of TAS is then presented in more detail in Section 3.2, in which we describe our vision of an improved systems development process. In Section 3.3, we then give an overview of the most important features which are required to achieve the presented objectives. Finally, in Section 3.4, we summarize the main aspects of TAS and discuss the potential advantages as well as challenges of this approach. Parts of this chapter are based on our previous publications at several workshops and conferences [20, 73, 74, 75].

3.1 Motivation

The continuously increasing complexity of software and embedded systems makes advanced engineering techniques indispensable for the successful and effective development of such systems. This is not a new insight: during the last decades, many formal methods, languages, and tools have already been developed and put into practice.

Development Processes

Nowadays, a variety of formal models exist that describe the stages of the development process and the order in which they have to be carried out. Generally, the development of complex software-dominated systems consists of a series of design, implementation, as well as verification and validation (V&V) phases that typically start with a requirements definition, followed by several specification, programming, and testing steps.

Depending on the aims and circumstances of the development project, different process models might be suitable for the specific situation. An extensive discussion of these process models, along with other aspects of software engineering, can be found in [71] or [83]. Just to mention a few of the most popular models:

• V-Model describes a sequential process focused on intensive V&V of each phase. It begins with a detailed requirements specification, followed by system and component design with subsequent implementation and testing. It considers the verification that each step is correctly derived from the foregoing one and includes extensive validation by performing unit, integration, system, and acceptance tests after coding is concluded. Since each phase needs to be completed and verified before the next one starts, this approach is especially suitable for well-equipped projects with high demands on quality and initially clear requirements.

• Iterative Model consists of repeating cycles of specification, implementation, and testing. Starting with a rough prototype, the system is then reviewed and improved in the following iteration cycles. This method enables the rapid development of first prototypes and high flexibility during the project. Therefore, projects with uncertain or changing requirements might be conducted based on this model.

• Agile Model combines the iterative approach with the incremental one, where only a part of the system, one or a few features, is considered in each iteration. This approach provides the highest flexibility and at the same time reduces the effort for planning and documentation, particularly at the beginning of the project. Due to frequent and rapid cycles, the agile development model is well suited for smaller teams of experienced developers and for projects in which many requirement adjustments and changes can be expected.

Test-driven Development

Each development process model attributes particular importance to tests as a means of verification and validation. However, the intensity of the tests and their place in the process vary from model to model. For instance, the V-Model [4] considers an explicit segregation between distinct test levels, such as unit, component, integration, system, and acceptance tests, all performed after the implementation, whereas agile methods apply more implicit techniques.

One of the approaches often used in combination with agile methods is Test-Driven Development (TDD) [9]. TDD was originally introduced as a part of agile methods, but it is also applicable to other development processes [83, p. 221]. In contrast to classical software testing, the idea of TDD is to develop test specifications as early as possible, in parallel with or even before coding. Moreover, the test specifications are meant to drive design and programming. The main advantages of TDD are:

• Better understanding. It helps to clarify the requirements, parameters, and constraints for the feature under development before starting the implementation.

• Code coverage. In principle, each implemented feature has to be covered by associated tests, which increases the level of confidence in the code.

• Less debugging. Writing and executing tests as soon as possible leads to early and easier detection of defects, so that programmers spend less time debugging on the code level.

As recent studies show (for instance [40], [53], or [14]), TDD has a positive effect on the quality of the resulting product, but it usually decreases productivity due to the higher initial effort for test specification and the greater number of tests. Therefore, the expense of test specification as well as of test execution should be kept to a minimum to increase the benefit of TDD.

Model-driven Engineering

In order to deal with the increasing complexity of systems, the application of structured design methods is essential. Approaches based on formal modeling languages like UML have now gained in popularity. At the moment, MDE [44] is one of the promising approaches; here, formal specification models are used as primary artifacts throughout the whole engineering process. By offering manageable models, due to several abstraction levels and graphical modeling, this approach facilitates the design of complex systems. The application of such models also leads to improved portability, maintainability, and interoperability during the engineering phases. Furthermore, detailed architectural and behavioral specifications enable automated code generation, allowing developers to focus on a proper design rather than on the underlying implementation technology.


All in all, MDE therefore helps to increase the productivity of the development process [38]. As mentioned in Section 2.1, MDA is the OMG's specific vision of MDE, based on a set of further OMG standards.

Analysis and Simulation

So far, numerous efforts have been made to develop techniques for deriving quality of service (QoS) properties from model specifications in order to allow predictive assessments of design solutions, first of all with regard to their performance. In principle, the existing techniques can be classified as analytic, simulation-based, or hybrid, i.e. a combination of both [39]. A basic problem, however, is that different modeling paradigms are typically used for design models and for analysis models [49]. In this context, the recent trend is to apply UML-based models for design and to use automated transformation techniques to derive either an analytical or a simulation model. For this purpose, the UML models are often annotated, e.g. with stereotypes from the MARTE profile, to express the relevant non-functional properties. In [45], for instance, the authors propose a framework to transform UML models into Stochastic Reward Nets (SRNs). These SRNs are then evaluated analytically with the software package SHARPE1 to obtain performability metrics. Such analytical approaches provide valuable insight into the system under development. In practice, however, their applicability is usually limited to relatively simple and standardized systems or to isolated system parts [49, p. 23]. In contrast, simulation-based techniques appear to be more promising for the evaluation of complex systems, especially for heterogeneous and communicating systems. As shown in [19] and [49], UML-based design models can be used as a source for quality assessments by means of discrete-event simulations derived from such models. Additionally, in [19] the author also addresses the validation of system designs by simulation-based testing.

Towards Test-driven Agile Simulation

Summarizing the research findings quoted above, we are convinced that the combination of model-driven engineering and test-driven development techniques can help to achieve improved overall quality during the development process along with increased productivity.

1 http://sharpe.pratt.duke.edu

This new approach is intended to contribute an inexpensive and agile technique for systems engineering which includes seamless model-based design of all relevant engineering aspects, early detection of design errors, as well as quantitative assessments and performance estimates at the early stages of the development process.

We see a lot of potential in utilizing simulation techniques to analyze and validate specification models. For that purpose, our approach considers the automated derivation of executable discrete-event simulations from both the system and the test models, which in turn are derived from common requirements. By means of simulation, potential drawbacks and bottlenecks in the design can be identified even prior to expensive implementation and testing on a real system. Moreover, the investigation and comparison of alternative designs and solutions at the model level is facilitated when applying simulation. Furthermore, early validation of the specification models, by executing tests on a simulated system, also helps to reduce development risks and to increase the quality of the resulting product.

3.2 TAS Development Process

Following the basic idea of TAS presented in the previous section, our approach makes provisions for various aspects of the model-driven systems development process. Thereby, we combine several best practices from software and systems engineering as well as from simulation-based quality assessment and performance estimation.

In order to gain a more profound understanding of the suggested approach, we start with an overview of the overall concept of TAS by describing our vision of an iterative development process. As highlighted in Figure 3.1, the development process of TAS covers, in the first place, modeling, transformation, simulation, and validation in order to enable early quality assurance at the level of specification models. It also relates to the widely known techniques of requirements engineering, generation of the implementation code, and testing on an implemented system, although these aspects are not treated to their full extent in this thesis.


[Figure: TAS development cycle from requirements engineering (requirements) via modeling (system model and usage/test model, static validation) and transformation (generation) to simulation (simulation model and simulation test suite, dynamic validation) and implementation and test (system and executable test suite, validation)]
Figure 3.1: Concept of Test-driven Agile Simulation (adapted from [73])

3.2.1 Requirements Engineering

As demanded by any formal development process model, each development cycle shall start with collecting and specifying requirements. In the case of an iterative development process as assumed for TAS, this applies to each iteration, whereby, besides adding new requirements, the existing requirements may need to be adapted or refined according to the results of the previous iteration. In this step, functional as well as non-functional requirements have to be obtained and specified for the features to be realized. Functional requirements describe the features or services the system must provide, along with rules for how the system shall behave in particular situations. Non-functional requirements2, including among others timing, performance, and availability limitations, are typical constraints on those services or on the system as a whole. The level of detail and the notation used for the requirements description may vary depending on the type of system being developed and the kind of requirement, e.g. user, system, or software requirements, functional or non-functional, etc. In most cases, the specification of requirements is written in natural language, supplemented by appropriate diagrams, tables, or mathematical formulas.

2Also called extra-functional requirements

However, there are many attempts to introduce formal and semi-formal notations for requirements specification in order to avoid the problems caused by using natural languages and to enable the automated derivation and validation of requirements (see for example [29, 85, 90]). One of the most promising approaches to this task has recently been presented in [10]. Such techniques would certainly contribute to our approach and could actually be integrated easily, but they are still being researched and have not yet found wide application in practice (see also [83, p. 96]). Many useful recommendations on writing requirements and on requirements engineering in general can be found, for instance, in [83, p. 82f.]. In our approach, however, we focus on representing the requirements based on a common modeling paradigm, regardless of how formally they have to be written. As will be demonstrated by an example in Section 4.2.3, it is actually possible to specify requirements and their relations directly in a model by means of SysML. Alternatively, they can also be imported into a model from classical requirements management tools (see for example [22]).

3.2.2 Modeling

In the next step, our approach suggests deriving system and test specifications independently and in parallel to one another, based on their common requirements. These specifications can either be derived manually by the engineers or, in the case of formal requirements, obtained by automated transformation methods as mentioned above.

Test Model

We recommend starting with the specification of a test model as early as possible. Ideally, test specifications should even drive the system development, since they contribute to the clarification of the requirements and the intended usage of the system or its parts, on the one hand, and help to identify interactions and interfaces between system components, on the other hand. Furthermore, due to the early consideration of testability aspects, the plausibility of the requirements is already checked while developing a test model. Should it, for instance, not be possible to derive a test case from a functional or non-functional requirement in this way, the requirement must be reviewed again [71, p. 152].


Test cases are the key behavioral information contained in a test model. Besides that, the test model has to include test-relevant structural information and test data (see Section 2.1.7). Commonly, a test specification starts with setting up a test context, which then serves as a container for a set of test cases. Particular test cases can, of course, be modeled individually. However, due to the system's complexity and the often unlimited number of possible test sequences, it is advisable to apply automatic test generation techniques to derive useful test cases from more abstract test or usage models. In Section 4.2.5, we provide additional details on the modeling of test specifications using the suggested modeling methodology.

System Model

Based on the specified requirements, the system model can be established largely in parallel with the corresponding test model. As already mentioned, it makes sense, however, to also use the findings of the initial test specification activities in order to clarify these requirements. The definition of data types, interfaces, and main architectural elements, for instance, should therefore be performed hand in hand with test modeling, since they are involved in both models. In contrast, the detailed translation of functional and non-functional requirements into concrete behaviors and parameters should occur independently in the system and test models in order to allow their utilization for mutual validation. The system model provides a formal specification as well as documentation of the system's architecture and behavior. In the end, it should satisfy all requirements. Furthermore, it should contain all the details required to derive the implementation code in terms of Model-Driven Development (MDD), on the one hand, and an executable simulation model as required for our approach, on the other hand.

3.2.3 Static Validation

In order to detect design and modeling errors as soon as possible and thus to increase the efficiency of the development process itself, it is reasonable to perform V&V activities already at the modeling level. In this way, the static structures of the involved models can be verified at very early stages of the design phase, even when an executable simulation or system is not yet available. In addition to the known verification methods, such as metrics collection and model checking (see [65, 7]), system and test models that have been derived independently from the same requirements can also serve for their mutual validation.

For this purpose, several constraints can be specified and examined relating to static modeling aspects, such as structure, naming conventions, well-formedness, and traceability to requirements. Since such examinations on models are easy to apply and can be run within a short period of time, they can be performed frequently during the design phase, almost after every significant model modification.
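To illustrate what such a statically checkable rule may look like, the following OCL invariant sketches a hypothetical naming convention on the level of the UML metamodel; the concrete rule and its name are illustrative assumptions and not part of a fixed TAS rule set.

    context UML::Class
    -- Hypothetical convention: operation names must start with a lower-case letter.
    inv OperationNamesStartLowerCase:
        self.ownedOperation->forAll(op |
            op.name <> null implies
            op.name.at(1) = op.name.at(1).toLowerCase())

Analogous constraints could, for example, demand that every element of the test model be traceable to at least one requirement.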

3.2.4 Transformation

When the specification models are sufficiently detailed to consider an analysis of their dynamic behavior, the next step intended by our approach is the transformation of the models into executable simulations. This involves the automated generation of simulation artifacts from both the system and the test models for the purpose of validation and performance analysis, as described in the subsequent sections. The concrete manifestation of the generated artifacts can vary depending on the target simulation engine that is applied. Therefore, we use the more general terms simulation model and simulation test suite to label the artifacts derived from a system model and a test model, respectively. Thereby, the latter case has to be differentiated from the test suites generated for real tests on the implemented system, since a simulation test suite runs on the simulated system.
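To make the idea of such a generation step more concrete, the following fragment sketches, purely as an illustration, what a MOFM2T template might look like if OMNeT++ were chosen as the target simulation engine; the template name, the output path, and the generated NED skeleton are hypothetical and do not reproduce the actual SimTAny transformation rules.

    [template public classToNedModule(c : Class)]
    [file ('gen/' + c.name + '.ned', false)]
    // Simple module skeleton generated for the system component [c.name/]
    simple [c.name/]
    {
        gates:
            inout g;
    }
    [/file]
    [/template]

A corresponding template derived from the test model would emit the simulation code of the test components in the same manner.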

3.2.5 Simulation

In the first instance, the simulation model derived from a system model can be utilized to examine the functionality of the designed system; for this purpose, the generated simulation model represents all modeled components of the system with their reproduced behaviors. Running a simulation can be very helpful to gain deep insight into the expected dynamic behavior of the system under development. Since most simulation tools provide detailed event traces or even enable interactive or stepwise execution, simulation is also particularly suitable for debugging. Another special advantage of using simulation is its applicability to a predictive analysis of the system's performance characteristics. To this end, various predefined measurements can be collected during a simulation run and analyzed afterwards or even live during the execution. A comprehensive statistical analysis of the collected results, however, is often still required. Thus, in order to facilitate this task, our framework aims to provide support for the convenient calculation and visualization of some basic measures (see Section 5.1.7).


In this way, it is possible to check whether the designed system achieves the expected performance and to detect potential bottlenecks. The search for optimal design solutions and the analysis of different conditions, which can most easily be modeled as specific parameter configurations, are other promising opportunities driven by simulation. For this purpose, simulation tools can provide fundamental support for parameter variation and for the comparison of different solutions. We also address this aspect in Section 5.1.5 in order to enable a more extensive, model-based design of simulation experiments.

3.2.6 Dynamic Validation

In addition to the isolated simulation of the modeled system, the test suites generated from the test model can serve for the dynamic validation of the involved models by executing test cases on the simulated system model. A generated simulation test suite largely consists of the simulation code of the test components for each test case. Thereby, the simulated behavior of a test component has to be derived according to the specification of the individual test case in which it occurs. While the behavior of the test components is derived from the test cases of the test model, the behavior of the SUT is embedded in the simulation code generated from the system model, which allows for their mutual validation. A test component is generally responsible for the creation of stimuli for the SUT and for the evaluation of its responses. These responses can be compared with the expected results or evaluated with respect to the constraints specified in the corresponding test case, based on which each test component has to determine its local test verdict. The overall verdict of the test case is then composed of the local verdicts of all the involved test components. In the first instance, a failed or inconclusive test case indicates discrepancies between the system and the test model. It may, however, also be an indication of inconsistencies in the original requirements model and calls for a closer inspection of these models in any case. The usage of simulation can significantly facilitate the process of debugging. Similar to software debugging, simulation enables a deeper investigation into the behavior of the designed system and test models, even considering the effects of hardware and communication media when necessary. Thereby, a special benefit of simulation is the ability to control the environmental influences and thus to make procedures like fault injection easier. Generally, simulation enables both trace-based and interactive debugging, provided that the simulation tool used supports it, which we actually expect for our TAS approach.


3.2.7 Implementation and Test

As soon as the specification models have been validated and the quality as well as the performance of the selected design have been evaluated as sufficient by means of simulation, the implementation of the designed feature or system and its subsequent testing can be considered. According to the idea and the ever-growing practice of MDE, the implementation code, and also concrete executable test suites, can to a large extent be generated automatically from the corresponding models. Using validated models along with approved tools for automated model transformation and code generation helps to reduce the costs and risks of programming. Furthermore, solutions exist that enable the automated execution of tests on the implemented system. These aspects have already been extensively covered in the past by research and industry. Hence, broad scientific and methodological expertise along with a wide range of tools is already available. The most common modeling language to be mentioned in this context is, of course, UML. As shown in Chapter 4, our modeling approach is largely based on UML and related standards. Thus, many of the existing tools can also be applied within the framework of TAS. Moreover, similar to the generation of the simulation code that will be presented later in Section 5.2.4, code generators for specific domains and platforms can also be created. However, the mentioned aspects of implementation and test code generation with testing on a real system would go beyond the scope of this thesis. For further reading, we recommend, among others, [8], [13], and [47].

3.3 Required Features

As mentioned in the previous section, the TAS approach primarily covers such aspects of the model-driven development process as modeling, model-based analysis and validation, simulation code generation, and simulation-based testing. Besides that, TAS also addresses integrability and extendability for different domains and development environments. In the following sections, we introduce the main features required for TAS (see Figure 3.2).


[Figure: the main required features grouped around the specification models (requirements, system, tests, experiments, traces): modeling, refactoring (M2M), simulation code generation (M2T), implementation code generation (M2T), traceability, and validation and analysis services (static as well as dynamic via simulation)]
Figure 3.2: Main required features for Test-driven Agile Simulation

3.3.1 Unified Modeling based on Standards

As depicted in Figure 3.2, models play the role of the primary artifacts for most engineering tasks in the TAS approach, in accordance with the principles of MDE. In order to enable the seamless modeling of various aspects in the context of the suggested development process for TAS, a powerful and highly versatile modeling language is required. A common language is needed to facilitate the exchange and transformation of models between different process phases and to simplify communication in complex development projects and distributed teams, since models in MDE serve as specification and documentation at the same time. Despite the fact that DSMLs are quite often used in practice because of their reduced complexity and precise tailoring to the domain, we believe that, for the time being, UML is the only modeling language that is applicable for TAS in several development stages. Due to its general and extendable nature, UML is particularly suitable for requirements, system, simulation, and test modeling, which are all addressed by the TAS approach. UML is a standardized, widely known, and practically approved modeling language which enjoys broad acceptance by both industry and academia. Therefore, we decided to base our approach by default on UML. As is shown in detail in Chapter 4, we propose a UML-based modeling methodology which utilizes a combined subset of several standardized extension profiles like SysML, MARTE, and UTP for the specification of the different aspects that are relevant in the scope of TAS.

In particular, we apply SysML to specify requirements and the traces between them and other model elements. A number of stereotypes originating from several sub-profiles of MARTE are used, for instance, to characterize analysis aspects and non-functional properties, to introduce non-determinism, and to describe hardware/software allocations. The UTP profile is furthermore used to express parts of the test model, like test contexts, test components, and test cases. Additionally, our modeling methodology involves the application of the textual action specification language Alf, first of all for the efficient definition of behaviors; Alf also comes with a modeling library that offers a useful set of collection types and operations. The combination of different specialized profiles, however, entails several challenges. The main difficulties are primarily caused by the partial semantic and syntactic intersections between the different profiles. In order to overcome these challenges, we suggest a strategy of selectively combining proper subsets of the profiles in Section 4.1.
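To give an impression of this textual notation, the following minimal Alf activity sketch illustrates how a simple behavior could be written; the activity name and its purpose are purely illustrative and the example is not taken from an actual TAS model.

    // Hypothetical Alf activity summing up a sequence of integer measurements.
    activity TotalDelay(in samples: Integer[*]): Integer {
        let sum: Integer = 0;
        for (s in samples) {
            sum = sum + s;
        }
        return sum;
    }

Such textual behaviors can be attached to the modeled classifiers instead of drawing detailed activity diagrams, which is particularly convenient for algorithmic parts of the system and test models.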

3.3.2 Validation and Analysis

The fundamental objectives of TAS are to enhance the overall quality of the system under development and to increase the efficiency of the development process by enabling the verification, validation, and analysis of the model-based specifications at the very early design stages. In this context, our approach distinguishes between static and dynamic investigations.

Static Validation In the first instance, static verification can be used to check whether the specification models are syntactically correct and satisfy the structural constraints, in order to detect inconsistencies or accidental errors in the models. Thus, in addition to checking well-formedness rules, like those given by the UML metamodel, our approach also considers the mutual relations between the models and their relations to the common requirements (see Section 3.2.3). Thereby, the approach implies an extensible and adaptable set of additional rules, which offers a certain flexibility in adding new rules or replacing existing ones with regard to the specific application domain or the collected experience of the developer team.

Dynamic Validation In contrast, dynamic validation aims to inspect the behavior modeled in the system and the corresponding test models by performing test cases derived from the test model on the executable system model. The information obtained in this way serves for the validation of the involved models, i.e. it checks whether the designed system and tests behave as required.


As already mentioned, our approach makes use of transformation and simulation techniques to achieve this task. The important target of the TAS approach in this context is to enable the management and execution of the modeled tests, including the evaluation of test verdicts and test coverage statistics. Furthermore, due to the applied simulation, TAS is intended to support debugging by providing step-wise execution and detailed event traces.

Dynamic Analysis Besides structural and behavioral aspects, the aim is to enrich specification models with performance-relevant aspects along with the corresponding non-functional requirements. In contrast to plain model execution techniques, a fully-fledged simulation of a system enables the analysis of its expected behavior and performance, taking into account different conditions and alternative design solutions. This task has to be supported by adequate provisions for experimental design and statistical results analysis, which are also an essential part of TAS.
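As a simple example of the statistical part (a standard textbook construction, not specific to TAS), the mean of an output measure observed in n independent simulation replications can be reported together with a confidence-interval half-width; the Student-t quantile is passed in explicitly here to keep the sketch free of library dependencies:

    // Sketch: mean and confidence-interval half-width over n independent
    // simulation replications; the Student-t quantile (e.g. t_{n-1, 0.975}
    // for a 95% interval) is supplied by the caller.
    class ReplicationStatistics {

        static double mean(double[] y) {
            double sum = 0.0;
            for (double v : y) {
                sum += v;
            }
            return sum / y.length;
        }

        static double confidenceHalfWidth(double[] y, double tQuantile) {
            double m = mean(y);
            double squaredDeviations = 0.0;
            for (double v : y) {
                squaredDeviations += (v - m) * (v - m);
            }
            double s = Math.sqrt(squaredDeviations / (y.length - 1)); // sample standard deviation
            return tQuantile * s / Math.sqrt(y.length);
        }
    }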

Static Analysis In addition to the simulation-based evaluation, formal analytic methods can be applied for static analysis, yielding precise assertions about the performance and quality of the system under design without executing it in any way. As an example, typical metrics for system availability, like blocking or fault probability, as well as for performance, like mean response times, throughput, and utilization, can thus be computed based on models [86]. Most of the existing analytical methods, however, require a higher level of abstraction and are usually limited to relatively simple systems, as mentioned above. Since the focus of TAS was originally placed on simulation-based techniques, the aspects of static analysis were out of the scope of this work and are only mentioned here for the sake of completeness. Nevertheless, a large number of approaches and tools are available for this kind of analysis that can be integrated within TAS in order to provide additional guarantees for the accuracy and reliability of results obtained via simulation. Against this backdrop, the most promising approaches applicable in the context of TAS are those based on the same modeling notation, namely UML and its standardized extension profiles. First of all, we would like to emphasize the novel approaches proposed in [37, 45, 49], which correspond very well to the modeling and transformation techniques (see Sections 4.2 and 5.2) suggested for TAS. The implementation of the approach mentioned in [37] has actually been tried out and already integrated within SimTAny.


3.3.3 Model Transformations

Automated transformation of models is one of the core features of TAS. On the one hand, model-to-model (M2M) transformations are, for instance, needed to support static analysis and validation on the basis of specification models. On the other hand, model-to-text (M2T) transformations enable the automated generation of simulation and implementation code. Another important aspect to be mentioned in the context of M2M transformations is the general possibility to use transformation rules for refactoring and refining the model specifications. This kind of transformation is typically endogenous (for the classification see [17]), i.e. applied to models which instantiate the same metamodel. In the first case, analogous to classical source code refactoring, model transformations primarily aim to improve the design and are performed on models of the same abstraction level. In the case of refinement, the main objective is to derive a more specific model from an abstract one. Typical scenarios are when the transformation adds domain- or platform-specific elements to a general model or generates the obligatory content of superordinate elements according to predefined modeling patterns or rules. For several reasons we believe that the TAS approach should principally build on the common OMG standards for model transformations, namely MOFM2T and QVT (see Section 2.2). In particular, the most important reasons are:

• These standards provide transformation languages with a straightforward and partly similar syntax for the definition of transformation rules. In the case of QVT, the rules can be specified on a higher abstraction level by means of relations between the source and target metamodels, whereas MOFM2T enables the use of code templates in the target language.

• They ideally correspond to the UML-based modeling methodology applied for TAS, not least thanks to the fact that both UML and the transformation languages originate from OMG.

• Standard-conformant, open-source implementations of transformation frameworks are available for both standards, which drastically reduces the effort required to create working transformations.


• In the meantime, the standards and their implementations have become widely known and proven in practice. Therefore, development teams may already be familiar with these techniques or can learn relatively easily how to apply and adjust them to specific project needs.

In contrast to general-purpose programming languages, the standard transformation languages are of course especially well-suited for accessing model elements and performing modifications on them. Using such specialized languages, accompanied by the available reference framework implementations with their editors and syntax checkers, additionally reduces the chance of introducing errors when creating the transformation rules themselves and thus contributes to the overall quality of the development process. All in all, the application of OMG standards also opens up a perspective for simple integration of TAS into model-driven development processes that follow these standards. The definition of transformation rules for the reliable mapping of structural, behavioral, and analytical elements of a UML model to their appropriate representations in a target environment, e.g. an evaluation, simulation, or implementation platform, poses one of the biggest challenges for TAS. The variety of elements offered by UML and the related extension profiles, as well as the flexibility of their application and interpretation, make this task even more difficult. Thus, a harmonized modeling methodology (see Chapter 4) is required to establish conventions for a clear interpretation during the transformation process. The most important aspect of model transformations for TAS that is covered in this thesis relates to the generation of executable simulation models. Due to the time-discrete nature of most computing systems, which also complies with the original semantics of UML, discrete-event simulation (DES) tools are particularly suitable for our purposes. Therefore, in our approach we principally focus on supporting the transformation of time-discrete models for discrete-event simulators (see Section 5.2). Nevertheless, time-continuous aspects can also be represented in UML by means of the SysML extension profile and transformed to appropriate simulation tools, as shown for instance in [41]. Additional provisions offered in this thesis include model refactoring by means of M2M transformations as well as traceability data extraction. Details about the implementation of these tasks with the help of QVT are presented in Section 5.2.
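For illustration, the following Java sketch shows how a QVT Operational transformation could be invoked programmatically via the Eclipse QVTo runtime; the transformation URI and model contents are placeholders, and the transformations actually used by SimTAny are described in Section 5.2.

    import java.util.Collections;

    import org.eclipse.emf.common.util.Diagnostic;
    import org.eclipse.emf.common.util.URI;
    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.m2m.qvt.oml.BasicModelExtent;
    import org.eclipse.m2m.qvt.oml.ExecutionContextImpl;
    import org.eclipse.m2m.qvt.oml.ExecutionDiagnostic;
    import org.eclipse.m2m.qvt.oml.ModelExtent;
    import org.eclipse.m2m.qvt.oml.TransformationExecutor;

    public class QvtoSketch {
        // Runs a (placeholder) QVTo transformation that maps a UML source model
        // to a simulation-specific target model (illustrative only).
        static ModelExtent transform(EObject umlRoot) {
            URI uri = URI.createURI("platform:/plugin/example/transforms/uml2sim.qvto"); // placeholder
            TransformationExecutor executor = new TransformationExecutor(uri);

            ModelExtent input = new BasicModelExtent(Collections.singletonList(umlRoot));
            ModelExtent output = new BasicModelExtent();

            ExecutionDiagnostic result = executor.execute(new ExecutionContextImpl(), input, output);
            if (result.getSeverity() != Diagnostic.OK) {
                throw new IllegalStateException(result.getMessage());
            }
            return output; // contains the generated target model elements
        }
    }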


3.3.4 Traceability

One of the essential features for accomplishing high productivity with the model-driven engineering approach applied in TAS is traceability. Thorough knowledge about all relationships, so-called trace links3, between the artifacts created during the engineering process is crucial for the development of complex systems, especially for effective refactoring, validation, and debugging. For instance, such trace links can contain information about the relationships and dependencies between and across requirements, elements of the system and test models, as well as derived implementation artifacts like generated source code. In general, model-based traceability analysis can help to determine the impact of an element on other parts of the specification models. Coverage and traceability metrics can further be calculated to assist in the identification of gaps in complex specifications. In order to enable efficient management and analysis of traceability information, it is necessary to have explicit and fine-grained trace links for all relevant artifacts, ideally collected in one place. In the context of MDE, however, this can be quite a challenging task, since trace artifacts are typically heterogeneous (for instance, model elements versus implementation code) and can be distributed across various models, which may even conform to different metamodels, and tools (see also [27]). Besides, it can be very difficult to keep this information consistent after each modification and transformation. To address these problems, TAS provides a pragmatic approach which collects the composed traceability information in a dedicated model that preserves references to the original artifacts. Such a traceability model largely consists of references (trace links) to the relevant elements of the requirements, system, and test models. Trace links in the UML-based source models, as considered by TAS, are typically represented as associations annotated with special stereotypes from the SysML profile. In addition, this model is enriched with traceability links to the generated implementation and simulation artifacts derived from the specification models. The relatively compact traceability model enables easier visualization of and navigation across traceability information from different modeling domains. Furthermore, traceability metrics can be easily calculated based on this model, and potential gaps, like unsatisfied or untested requirements, can be identified.

3 Besides the term trace links, other terms like traceability links or traces are often used interchangeably in the literature. See [30] for definitions of traceability-related terms.


In Section 5.1.8 we demonstrate how this feature is realized in our SimTAny framework.
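To illustrate the idea, the following sketch reduces a trace link to a typed reference between two artifact identifiers and derives a simple coverage metric from a set of such links; the types are invented for this illustration and do not reflect the traceability model used by SimTAny (see Section 5.1.8).

    import java.util.List;

    // Hypothetical sketch (types invented for illustration): a trace link as a
    // typed reference between two artifacts, plus a simple coverage metric.
    class TraceLink {
        final String kind;      // e.g. "satisfy", "verify", "generatedFrom"
        final String sourceId;  // identifier of the dependent artifact
        final String targetId;  // identifier of the referenced artifact (e.g. a requirement)

        TraceLink(String kind, String sourceId, String targetId) {
            this.kind = kind;
            this.sourceId = sourceId;
            this.targetId = targetId;
        }
    }

    class CoverageSketch {
        // Fraction of requirements referenced by at least one "verify" trace link.
        static double testedRequirementRatio(List<String> requirementIds, List<TraceLink> links) {
            if (requirementIds.isEmpty()) {
                return 1.0;
            }
            long covered = requirementIds.stream()
                    .filter(req -> links.stream()
                            .anyMatch(l -> "verify".equals(l.kind) && req.equals(l.targetId)))
                    .count();
            return (double) covered / requirementIds.size();
        }
    }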

3.3.5 Service-Oriented Architecture

For the previously outlined basic features of the suggested TAS approach, comprehensive tooling support is essential. It is, of course, desirable for modeling as well as automated model transformations, simulation, and analysis to be accessible from one integrated tool environment running on a single workstation. In the case of large development teams and distributed processes, or for performance reasons, it is, however, often necessary to provide designated functionality on powerful workstations and to make it accessible over the network. For example, the execution of a substantial number of simulation experiments or tests for complex system models can be very time- and resource-consuming or may have to meet special system requirements. Moreover, to enable the prospective integration of the TAS approach within an existing development environment, interoperability and loose coupling of the supporting tools via ubiquitous standards are required. In view of these challenges, various technologies have already been established to enable tool integration. Primarily, approaches based on the Service-Oriented Architecture (SOA) and Web Services (WS)4 have become widely used as a standardized way of integrating and interoperating between different software applications (see for instance [42]). With SOA, a system is built according to an architectural model which considers the distribution of the system's functionality as a set of coarse-grained, loosely coupled processing units, so-called services, that can be published, discovered, and invoked via well-defined interfaces. SOA itself stands for an architectural style that is independent of any technology platform, whereas Web Services can be seen as an implementation of SOA which provides concrete technology based on open standards. These standards include the Web Services Description Language (WSDL) for interface definitions, the Simple Object Access Protocol (SOAP) for service access and message format specification, and the Hypertext Transfer Protocol (HTTP) for the transport of messages serialized with the Extensible Markup Language (XML). Compared to other common technologies like CORBA, DCOM, EJB, or Java RMI, the paradigm of loosely coupled services along with the application of ubiquitous standards provides several advantages, especially for heterogeneous systems regarding platform independence as well as cross-platform interoperability and deployment.

4 https://www.w3.org/TR/ws-arch/

Figure 3.3: Service-oriented architecture for TAS (reproduced from [74]). [The figure shows three levels connected via Web Services (WS): a user front-end with the modeling tool and a rich-client GUI; a core level with the model repository and common registry, traceability, logging, and notification services; and a services level with transformation, analysis, test data generation, test case generation, simulation/test case execution, and validation services.]

Therefore, we suggest setting up the implementation of TAS based on the SOA paradigm. We propose an architectural design for TAS which separates the tool environment into three levels: user front-end, core, and services, as illustrated in Figure 3.3. The core level of this design consists of a central model repository with several common services, like registry, notification, traceability, and logging services, that are used by all other components. The client application provided for the user, i.e. the user front-end, can in the most basic case include only the modeling tool and the GUI (graphical user interface) components for accessing the remote services running on dedicated workstations. The concrete partitioning may depend on the specific development process and the applied tools. In general, however, the following tasks in the context of TAS can reasonably be provided as autonomous services: analysis and calculation of statistics; model transformations, e.g. for (simulation) code generation (including the eventual compilation of generated code); and simulation and test execution for dynamic analysis and validation based on simulations. Additionally, services
such as those for test data and test case generation as well as for automated testing on a real system, which are not treated in this thesis, could also be established. The communication between individual components, i.e. between service consumers and providers, can thereby be realized using open standards and well-defined interfaces according to the Web Service architecture. The realization of the suggested architecture in our framework SimTAny is presented later in Section 5.4. Further below, in Section 5.4.3.3, we also present a distributed development process in the context of TAS as supported by SimTAny, which illustrates how the services of TAS can be remotely invoked by the experts. This is particularly advantageous in the case of distributed development teams working at different locations. For more in-depth reading on topics related to SOA and Web Services we suggest, for example, the works of [42] and [24]. The adoption of service-oriented principles for simulation and testing has also been addressed in several publications. For instance, in [34] the authors describe the specification of simulation software as a black-box service accessible over a network. A service-oriented and distributed simulation framework has been presented in [88], which also relates to the idea of automated code generation and validation via testing based on model checking. Despite also building on SOA and Web Services, however, the underlying modeling methodologies applied in the mentioned works differ from the standards intended for TAS.
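As an example of what such a well-defined service interface could look like, the following hypothetical JAX-WS service contract exposes simulation and test execution as remote operations, from which a WSDL description can be generated; all operation names and parameters are invented for this sketch.

    import javax.jws.WebMethod;
    import javax.jws.WebService;

    // Hypothetical service contract for a remote simulation/test-execution service
    // (operation names and parameters are invented for this sketch).
    @WebService
    public interface SimulationService {

        // Uploads a generated simulation model and returns a handle for later calls.
        @WebMethod
        String deployModel(byte[] simulationArchive);

        // Runs the given number of replications and returns an experiment identifier.
        @WebMethod
        String runExperiment(String modelHandle, int replications, double stopTime);

        // Retrieves aggregated results (e.g. test verdicts, statistics) as XML.
        @WebMethod
        String fetchResults(String experimentId);
    }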

3.4 Summary

To summarize, the most important aspects and benefits as well as the challenges of the suggested TAS approach are briefly recapitulated. Our approach combines different techniques, namely model-driven engineering, test-driven development, and model-driven simulation, in order to provide a favorable and agile technique for quality assurance during the development process. We propose a holistic modeling methodology for TAS based on one common language, UML, and its standardized extension profiles to enable seamless modeling of all relevant engineering aspects, including requirements, system design, test specification, and analysis. Thereby, we call for the largely independent and even test-driven creation of the system and test models based on the common requirements in order to achieve their mutual validation.


By deriving executable simulations from specification models, the approach facilitates automated analysis and validation of the modeled system at early stages of the process. It helps to detect design errors, potential drawbacks, and performance bottlenecks in both the system and the test specification, even prior to expensive implementation and testing on a real system. Furthermore, the simulation enables a relatively simple investigation and comparison of alternative design solutions, which, in combination with the early validation of the specification models, contributes to reducing development risks. Since models are used as central artifacts during the entire development process, the use of a common standardized modeling language has several advantages. First of all, it simplifies the communication between team members and ensures a joint understanding of definitions and relations across different disciplines. It saves learning and tooling costs due to the widespread knowledge of UML, the possibility of using only one modeling tool, and the high availability of professional and good open-source tools. The application of standardized techniques for model transformations and code generation, as explicitly intended for TAS, serves the effective and efficient creation of transformation rules. Additionally, common standards for modeling as well as for transformations reduce the dependency on a specific tool manufacturer and make it possible to reuse available or future third-party contributions in that field, which all in all helps to keep development costs low. A homogeneous modeling approach also simplifies establishing traceability between relevant model parts. The compact representation of traceability information in an additional model enriched with links to the generated artifacts further facilitates efficient coverage analysis and impact assessment. Furthermore, the suggested SOA-based architecture makes provisions for scalability improvements and for the integrability of TAS into distributed development processes. Along with the standards suggested for service description and interaction, this architecture enables a loose coupling of heterogeneous tools that can be applied in different development environments for the tasks identified by TAS. Without the need to eliminate the heterogeneity of the involved tools, it is thus easier to combine them within a distributed environment. One of the additional benefits of the applied model-driven techniques in combination with the service-oriented design is the automation of tasks such as model transformation, code generation, as well as validation and tracing of requirements. Consequently, this automation leads to an increasing agility in the development
process. Since modifications are easily possible on the specification models, and a new design can be efficiently analyzed and validated, it allows for implementing an agile and iterative approach which is able to react to changing requirements. On the other hand, due to the service-oriented architecture, such an approach is also flexible in adapting to changes in the project and development environment. The suggested approach, however, faces a variety of challenges. The complexity of the underlying modeling language poses one of the biggest challenges for TAS. The expected standard conformity and applicability for different engineering aspects actually make it necessary to combine several specialized modeling languages, i.e. extension profiles on the basis of UML. These profiles provide a large number of modeling elements, in some instances with complex relations or semantic variation points. Although we solely apply languages provided by the same standardization consortium, namely OMG, there are still some interferences between them. All this urgently necessitates additional clarifications and the specification of an appropriate modeling methodology. Based on this methodology, the definition of transformation rules for the effective generation of executable simulations is another big challenge for TAS. Taking into account the standard-conformant modeling, this requires an appropriate mapping of the model elements to the platform-specific implementation for the target simulation engine. In addition, the realization of a framework that is intended to support the TAS approach poses a practical challenge. It requires different aspects, like modeling, transformations, simulation, analysis, testing, and traceability, to be integrated into such a framework. In order to benefit from advanced tools that may already exist for those special issues, and since the available tools are very heterogeneous, an integration solution on the basis of common standards is desirable. In the subsequent chapters we address these challenges by providing a consolidated modeling methodology and an integrated tool environment.

CHAPTER 4

Modeling Methodology

As identified above, one of the key features required for the TAS approach is the possibility of expressing various engineering aspects based on a common modeling paradigm. Moreover, we already stated that it is crucial to have a powerful and unified modeling language in order to facilitate the exchange and transformation of the involved models as well as to simplify the communication throughout the entire development lifecycle. In this chapter, we present the suggested standards-based modeling methodology for TAS. As previously mentioned, the basis for our approach is the unified modeling paradigm of UML enriched by several extension profiles standardized by OMG. Therefore, in Section 4.1, we outline the challenges that arise from the combined usage of different profiles and present strategies for how to avoid