FP7 ICT STREP Project

Deliverable D3.3

Final Learn PAd Metamodels and Implementation of Model Transformations for Managing Business Processes Models in Public Administrations

http://www.learnpad.eu

LaTeX template v. 0.5

Project Number : FP7-619583
Project Title : Learn PAd Model-Based Social Learning for Public Administrations

Deliverable Number : D3.3
Title of Deliverable : Final Learn PAd Metamodels and Implementation of Model Transformations for Managing Business Processes Models in Public Administrations
Nature of Deliverable : Report
Dissemination level : Public
Licence : Creative Commons Attribution 3.0 License
Version : 4.0
Contractual Delivery Date : 31 January 2016
Actual Delivery Date : 29 January 2016
Contributing WP : WP3
Editor(s) : Alfonso Pierantonio
Author(s) : Francesco Basciani (UDA), Nesat Efendioglu (BOC), Jovaldas Januškevičius (NME), Alfonso Pierantonio (UDA), Gianni Rosa (UDA), Jean Simard (XWIKI), Barbara Thönssen (FHNW)
Reviewer(s) : Guglielmo De Angelis (CNR), Stefania Gnesi (CNR)

Abstract This deliverable reports on the tooling chain and related architecture that have been designed and implemented in the context of the Learn PAd project. In particular, a transformation platform has been developed to underpin the proposed informative learning approach: starting from an enriched business process model, an associated wiki structure is automatically generated. To this end, a final Learn PAd metamodel and associated modeling environments have been defined starting from the preliminary metamodel given in the previous WP3 deliverable D3.2.

Keyword List Metamodel, Conceptual Metamodel, Platform Specific Metamodel, Domain-Specific Modeling Language, MOF, UML, BPMN, CMMN, BMM, Model-Driven Engineering, Model-Driven Development, Weaving Model, Metamodel Stack, Organizational Metamodel, Competency Metamodel, Document and Knowledge Metamodel, Metamodel composition, Model Transformation, Model-2-Model Transformation, Model-2-Text Transformation, Ecore, Modeling Framework

Document History

Version - Changes - Author(s)
0.1 - Setting of the Template - Guglielmo De Angelis
0.2 - ToC Proposal - Alfonso Pierantonio
0.3 - ToC Revised - Alfonso Pierantonio
1.0 - First sections inserted - Francesco Basciani, Alfonso Pierantonio, Gianni Rosa
1.1 - Added Contribution From BoC - Nesat Efendioglu, Alfonso Pierantonio
1.2 - Added Contribution From NME - Jovaldas Januškevičius, Alfonso Pierantonio
1.3 - Added Contribution From XWIKI and FHNW - Jean Simard, Alfonso Pierantonio, Barbara Thönssen
2.0 - Refined Contents for the Internal Release - Francesco Basciani, Nesat Efendioglu, Jovaldas Januškevičius, Alfonso Pierantonio, Gianni Rosa
3.1 - Addressed Comments from Stefania Gnesi - Francesco Basciani, Alfonso Pierantonio, Gianni Rosa
3.2 - Addressed Comments from Guglielmo De Angelis - Francesco Basciani, Alfonso Pierantonio, Gianni Rosa
3.3 - Candidate Release - Francesco Basciani, Alfonso Pierantonio, Gianni Rosa

Document Reviews

Release - Date - Ver. - Reviewers - Comments
ToC - Dec 2, 2015 - 0.3 - Francesco Basciani, Gianni Rosa
Draft - Dec 27, 2015 - 1.3 - Francesco Basciani, Gianni Rosa
Internal - Jan 13, 2016 - 2.0 - Guglielmo De Angelis, Stefania Gnesi - Detailed review reports are available on the internal wiki.
Candidate Final - Jan 25, 2016 - 3.3 - Antonia Bertolino, Guglielmo De Angelis

Glossary, acronyms & abbreviations

Item - Description
ATL - Atlas Transformation Language
BP - Business Process
BMM - Business Motivation Model
BPMN - Business Process Model and Notation
CD - Class Diagram
CM - Competency Metamodel
CMMN - Case Management Model and Notation
DKM - Document and Knowledge Metamodel
EA - Enterprise Architecture
EMF - Eclipse Modeling Framework
ETL - Epsilon Transformation Language
KPI - Key Performance Indicator
LCMM - Learn PAd Conceptual Metamodel
LPIMM - Learn PAd Platform-Independent Metamodel
LPSMM - Learn PAd Platform-Specific Metamodel
MOF - Meta-Object Facility
MT - Model Transformation
OM - Organization Model
OMG - Object Management Group
UML - Unified Modeling Language

Table Of Contents

List Of Tables
List Of Figures
List of Listings
1 Introduction
  1.1 Structure of this deliverable
2 Model-Driven Engineering and Model Transformation
  2.1 Model-Driven Engineering
  2.2 Model Transformations
    2.2.1 Classification
    2.2.2 Languages
  2.3 Discussion
3 Learn PAd Modelling Architecture
  3.1 A transformational architecture
  3.2 Current status of maturity
4 Modeling Environment
  4.1 Learn PAd Final Metamodel
  4.2 ADOxx Approach
    4.2.1 Conceptualization Process
    4.2.2 Generic Modelling Method Framework
    4.2.3 Development on ADOxx
  4.3 Learn PAd Modeling Environment (based on ADOxx)
    4.3.1 The Learn PAd Modelling Language
    4.3.2 Mechanisms & Algorithms
  4.4 No Magic Approach
    4.4.1 Domain Specific Language (DSL) engine for MagicDraw customization
    4.4.2 Customization principles of MagicDraw
    4.4.3 Example Learn PAd DSL elements creation
  4.5 Learn PAd Modeling Environment (based on MD)
5 Learn PAd Transformation Platform
  5.1 EMF/Ecore
    5.1.1 Model-to-model integration: ATL
    5.1.2 Model-to-code integration:
  5.2 Architecture
    5.2.1 Pre-processing
    5.2.2 Model Transformation Environment
  5.3 Transformations
    5.3.1 M2M ad-hoc ADOxx (XSD) to ADOxx (EMF)
    5.3.2 M2M ADOxx (EMF) to XWIKI (EMF)
    5.3.3 M2T Transformation from XWIKI (EMF) to XWIKI (XML)
    5.3.4 M2M Transformation from MD LPAD (EMF) to ADOxx (EMF)
6 Conclusions
Annex A Metamodel Diagrams
Annex B Listings
  Annex B.1 Maven Library Specification
Bibliography

List Of Tables

Table 3.1: Technology Readiness Levels in the European Commission
Table 3.2: Learn PAd components readiness levels
Table 4.1: Basic customization concepts

List Of Figures

Figure 1.1: Objectives
Figure 2.1: The four layer metamodeling architecture
Figure 2.2: Basic Concepts of Model Transformation
Figure 2.3: QVT Architecture
Figure 2.4: Fragment of a declarative ATL transformation
Figure 3.1: The Learn PAd Architecture
Figure 4.1: Overview Metamodel
Figure 4.2: OMILAB Lifecycle ([27])([28])
Figure 4.3: Modelling Method Framework Based on [30]
Figure 4.4: ADOxx Development Approach
Figure 4.5: ADOxx Metamodel mapping
Figure 4.6: ALL Class Definition
Figure 4.7: ALL Relation Class Definition
Figure 4.8: ALL GraphRep Definition
Figure 4.9: Modeltype definition ALL example
Figure 4.10: ADOxx Metamodel
Figure 4.11: Realized Learn PAd Modelling Environment Meta Model Stack
Figure 4.12: Sample Business Motivation Model
Figure 4.13: Sample BPMN Model
Figure 4.14: Sample Case Management Model
Figure 4.15: Sample Organization Model
Figure 4.16: Sample Documents and Knowledge Model
Figure 4.17: Sample KPI Model

Figure 4.18: Sample Business Process Constraint Model
Figure 4.19: MR Competence Model and Details for a Competence
Figure 4.20: Competency Profiles for Required and Acquired Competencies
Figure 4.21: Sample Model Set Overview Model
Figure 4.22: Learn PAd profile structure
Figure 4.23: Stereotype properties
Figure 4.24: Specification window Competency element
Figure 4.25: Custom Learn PAd diagrams
Figure 4.26: Example of applying customization rules to DSL element
Figure 4.27: DSL element properties
Figure 4.28: Properties of stereotype Knowledge Product
Figure 4.29: MagicDraw customization process
Figure 4.30: Components of DSL customization creation
Figure 4.31: Example of profile customization diagram
Figure 4.32: Learn PAd profile usage as Module in project
Figure 4.33: Business Architecture Diagrams
Figure 4.34: Learn PAd specific diagrams
Figure 4.35: BMM diagram
Figure 4.36: BPMN diagram
Figure 4.37: Competency diagram
Figure 4.38: Document and Knowledge diagram
Figure 4.39: Organization Structure diagram
Figure 5.1: Simplified Ecore Meta-model
Figure 5.2: Overview of ATL transformational approach
Figure 5.3: LPMT Component Diagram
Figure 5.4: LPMT Sequence Diagram

Figure 5.5: LPMT Activity Diagram
Figure 5.6: LPMT Sequence Diagram Model Transformation Environment
Figure 5.7: ADOXX Conformance
Figure 5.8: ADOXX Source Metamodel
Figure 5.9: ADOxx DTD to XSD transformation
Figure 5.10: ADOxx XSD to Ecore metamodel transformation
Figure 5.11: XWIKI Target Metamodel
Figure 5.12: Source and Target Metamodel definition
Figure 5.13: Fragment of Target Model (WebHome)
Figure 5.14: Magic Draw BPMN fragment metamodel
Figure 5.15: Magic Draw to ADOXX ATL transformation example
Figure Annex A.1: final BPMN Metamodel
Figure Annex A.2: final CMMN Metamodel
Figure Annex A.3: final BMM Metamodel
Figure Annex A.4: final Competency Metamodel
Figure Annex A.5: final Document and Knowledge Metamodel
Figure Annex A.6: final Organizational Structure Metamodel
Figure Annex A.7: final KPI Metamodel

List of Listings

5.1 Class constructor for metamodel initialization
5.2 Entrypoint Rule
5.3 Init base WebHome
5.4 Generate model WebHome
5.5 Generate Instances
5.6 Generate links between classes
5.7 Generate Object2Object weaving
5.8 Generate Model2Object weaving
5.9 Example of a Page XML file for XWiki
5.10 Example of a Object XML file for XWiki
5.11 Example of the result of Acceleo transformation for 2 set of models epbr and unico with 2 pages in each and 1 object in each page
5.12 Source metamodel
5.13 Template WebHome
5.14 Template Objects
5.15 Process and SubProcess to Model transformation fragment
5.16 Process and SubProcess to Model transformation fragment
5.17 Connector transformation fragment

1 Introduction

This deliverable reports on the tooling chain and related architecture that have been designed and implemented within the Learn PAd project to support the adopted learning paradigms. In particular, the project fosters an informative learning approach besides the learning-by-doing paradigm. This enables civil servants to learn by accessing and studying the business process models and related material. However, the enriched business process models might not convey enough information to support, on the one hand, the enactment of the represented Public Administration (PA) process and, on the other hand, the training of the civil servant who is assigned to the tasks. Thus, it is of great relevance to be able to trace and relate the Learn PAd models, and the informative artifacts that structure and represent information, to the specific tasks to which they refer. To this end, we have devised, designed, and implemented a tooling chain that, starting from a Learn PAd model, i.e., an enriched process model, translates it into a wiki structure by cross-linking the model structures into articles in the wiki. In order to bridge the abstraction distance between the Learn PAd models and the wiki structures, state-of-the-art techniques and tools have been employed, such as the Eclipse Modeling Framework, which made it possible to map models between the related modeling languages by means of specialized languages and translation engines, as illustrated in the sequel of the deliverable. This conforms to what is prescribed for Task 3.3 and Deliverable D3.3 (from the Learn PAd DoW):

Task 3.3: Develop a set of EMF/Ecore [54] based meta-models that capture the previously identified elements and that allow a reasonable description of the considered business processes, their context, and the representation of their evolution over time. Moreover, such meta-models will be instantiated on a set of representative business processes. In this way, it will be possible to assess the expressiveness of the developed meta-models and identify unforeseen requirements that in case should be added or refined in order to support learning. There will be close link to the Open Models Initiative (www.openmodels.at) to freely distribute the metamodels.

Deliverable D3.3: The generation of e-learning artifacts out of specified business processes will be performed by means of horizontal model-transformations that will be specifically developed by using specific model transformation languages (e.g., ATL, and ETL). Moreover, in the context of Learn PAd there will be the need for techniques introducing automation in the management of artifacts that have to be kept consistent to each other. In this respect we intend to tackle the problem by conceiving advanced model-driven techniques able to keep aligned different views (i.e., models specified at the same level of abstraction) and to manage multiscale models (i.e., models in which parts of the system are specified at different level of detail).

The objectives are illustrated in Figure 1.1, which depicts how the goals of deliverable D3.3 build on the intermediate results achieved in the previous deliverables and tasks.

1.1. Structure of this deliverable

The deliverable is organized as follows. In Chapter 2 the methodology that has been followed is presented; in particular, a brief catalog of model transformation approaches is presented and discussed. In Chapter 3 the Learn PAd Architecture is introduced, whereas in Chapter 4 the Learn PAd Modelling Environment is described. Chapter 5 introduces the transformation platform underpinning the tooling chain utilized for the generation of the wiki system. Finally, Chapter 6 draws some conclusions. Moreover, Annex A and Annex B report, respectively, all the class diagrams formalizing the final Learn PAd metamodel and additional code listings.

Figure 1.1: Objectives

2 Model-Driven Engineering and Model Transformation

In this chapter, we present the main concepts characterizing Model-Driven Engineering and how model transformations allow its full potential to be realized for both the end user and the transformation developer [58]. In particular, an overview of the current approaches is presented.

2.1. Model-Driven Engineering

In recent years, Model-Driven Engineering [51] (MDE) has taken a leading role in advancing a new paradigm shift in software development. Leveraging models to a first-class status is at the core of this methodology. In particular, MDE proposes to extend the formal use of modelling languages in several interesting ways by adhering to the "everything is a model" principle [8]. Domains are analysed and engineered by means of metamodels, i.e., coherent sets of interrelated concepts. A model is said to conform to a metamodel, or in other words, it is expressed in terms of the concepts formalized in the metamodel; constraints are expressed at the metalevel, and model transformations are applied to produce target models out of source ones. Summarizing, these constitute a body of inter-related entities pursuing a common scope, as in a (modeling) ecosystem [17]. The objective is to increase productivity and reduce time-to-market by enabling the development of complex systems by means of models defined with concepts that are much less bound to the underlying implementation technology and are much closer to the problem domain. This makes the models easier to specify, understand, and maintain [53], helping the understanding of complex problems and their potential solutions through abstractions.

The concept of Model Driven Engineering emerged as a generalization of the Model Driven Architecture (MDA) proposed by OMG in 2001 [48]. Kent [31] defines MDE on the basis of MDA by adding the notion of software development process and modeling space for organizing models. Favre [20] proposes a vision of MDE where MDA is just one possible instance of MDE, implemented by means of a set of technologies defined by OMG (MOF [45], UML [2], XMI [44], etc.) which provided a conceptual framework and a set of standards to express models, metamodels, and model transformations. Even though MDA and MDE rely on models that are considered "first class citizens", there is no common agreement about what a model is. In [52] a model is defined as "a set of statements about a system under study". Bézivin and Gerbé in [7] define a model as "a simplification of a system built with an intended goal in mind. The model should be able to answer questions in place of the actual system". According to Mellor et al. [38], a model "is a coherent set of formal elements describing something (e.g. a system, bank, phone, or train) built for some purpose that is amenable to a particular form of analysis", such as communication of ideas between people and machines, test case generation, transformation into an implementation, etc. The MDA guide [41] defines a model of a system as "a description or specification of that system and its environment for some certain purpose. A model is often presented as a combination of drawings and text. The text may be in a modeling language or in a natural language".

In MDE models are not considered mere documentation but precise artifacts that can be understood by computers and automatically manipulated. In this scenario metamodeling plays a key role. It is intended as a common technique for defining the abstract syntax of models and the interrelationships between model elements.

Learn PAd FP7-619583 3 conformsTo conformsTo conformsTo conformsTo Level

M3 meta-metamodel MOF EBNF XSD

conformsTo conformsTo conformsTo conformsTo conformsTo conformsTo conformsTo conformsTo Pascal XSD XSD M2 Metamodel UML SPEM CWM grammar grammar Schema S1 Schema S2

conformsTo conformsTo conformsTo conformsTo

UML Java XML M1 model Model Program P Document

describedBy describedBy describedBy describedBy

real Execution M0 instance Data System of P

Figure 2.1: The four layer metamodeling architecture lationships between model elements. metamodeling can be seen as the construction of a collection of “concepts” (things, terms, etc.) within a certain domain. A model is an abstraction of phenomena in the real world, and a metamodel is yet another abstraction, highlighting properties of the model itself. A model is said to conform to its metamodel like a program conforms to the grammar of the programming language in which it is written [8]. In this respect, OMG has introduced the four-level architecture shown in Fig. 2.1. At the bottom level, the M0 layer is the real system. A model represents this system at level M1. This model conforms to its metamodel defined at level M2 and the metamodel itself conforms to the metametamodel at level M3. The metametamodel conforms to itself. OMG has proposed MOF [45] as a standard for specifying metamodels. For example, the UML metamodel is defined in terms of MOF. A supporting standard of MOF is XMI [44], which defines an XML-based exchange format for models on the M3, M2, or M1 layer. In EMF [18], Ecore is the provided language for specifying metamodels. This metamodeling architecture is common to other technological spaces as discussed by Kurtev et al. in [5]. For example, the organization of programming languages and the relationships between XML documents and XML schemas follows the same principles described above (see Fig. 2.1). In addition to metamodeling, model transformation is also a central operation in MDE. While technolo- gies such as MOF [45] and EMF [18] are well-established foundations on which to build metamodels, there is as yet no well-established foundation on which to rely in describing how we take a model and transform it to produce a target one. In the next section more insights about model transformations are given and after a brief discussion about the general approaches, the attention focuses on some of the today’s available languages.

2.2. Model Transformations

The MDA guide [41] defines a model transformation as "the process of converting one model to another model of the same system". Kleppe et al. [32] define a transformation as the automatic generation of a target model from a source model, according to a transformation definition. A transformation definition is a set of transformation rules that together describe how a model in the source language can be transformed into a model in the target language. A transformation rule is a description of how one or more constructs in the source language can be transformed into one or more constructs in the target language.

Rephrasing these definitions by considering Fig. 2.2, a model transformation program takes as input a model conforming to a given source metamodel and produces as output another model conforming to a target metamodel. The transformation program, composed of a set of rules, should itself be considered a model. As a consequence, it is based on a corresponding metamodel, that is, an abstract definition of the transformation language used.

Many languages and tools have been proposed to specify and execute transformation programs. In 2002 OMG issued the Query/View/Transformation request for proposal [43] to define a standard transformation language. Even though a final specification was adopted at the end of 2005, the area of model transformation continues to be a subject of intense research.

Figure 2.2: Basic Concepts of Model Transformation

Over the last years, in parallel to the OMG process, a number of model transformation approaches have been proposed by both academia and industry. The paradigms, constructs, modeling approaches, and tool support distinguish the proposals, each with a certain suitability for a certain set of problems. In the following, a classification of today's model transformation approaches is briefly reported; then some of the available model transformation languages are described separately. The classification is mainly based upon [14] and [55].

2.2.1. Classification

At the top level, model transformation approaches can be divided into model-to-model and model-to-text approaches. The distinction is that, while a model-to-model transformation creates its target as a model conforming to the target metamodel, the target of a model-to-text transformation essentially consists of strings. In the following, some classifications of model-to-model transformation languages discussed in [14] are described.

Direct manipulation approach. It offers an internal model representation and some APIs to manipulate it. It is usually implemented as an object-oriented framework, which may also provide some minimal infrastructure. Users have to implement transformation rules, scheduling, tracing and other facilities mostly from scratch, in a programming language.

Operational approach. It is similar to direct manipulation but offers more dedicated support for model transformations. A typical solution in this category is to extend the utilized metamodeling formalism with facilities for expressing computations. An example would be to extend a query language such as OCL with imperative constructs. Examples of systems in this category are QVT Operational mappings [46], XMF [65], MTL [61] and Kermeta [40].

Relational approach. It groups declarative approaches in which the main concept is mathematical relations. In general, relational approaches can be seen as a form of constraint solving. The basic idea is to specify the relations among source and target element types using constraints that in general are non-executable. However, declarative constraints can be given executable semantics, such as in logic programming where predicates can be used to describe the relations. All of the relational

approaches are side-effect free and, in contrast to the imperative direct manipulation approaches, create target elements implicitly. Relational approaches can naturally support multidirectional rules. They sometimes also provide backtracking. Most relational approaches require strict separation between source and target models, that is, they do not allow in-place updates. Examples of relational approaches are QVT Relations [46] and those enabling the specification of weaving models (like AMW [36]), which aim at defining rigorous and explicit correspondences between the artifacts produced during system development [11]. Moreover, in [23] the application of logic programming has been explored for this purpose. Finally, in [12, 19] bidirectional transformations are given semantics by means of Answer Set Programming [22] to address the problem of non-determinism in non-bijective transformations.

Hybrid approach. It combines different techniques from the previous categories, like ATL [24] and ETL [33] that wrap imperative bodies inside declarative statements.

Graph-transformation based approach. It draws on the theoretical work on graph transformations. When describing a model transformation by graph transformation, the source and target models have to be given as graphs. Performing model transformation by graph transformation means taking the abstract syntax graph of a model and transforming it according to certain transformation rules. The result is the syntax graph of the target model. More precisely, graph transformation rules have an LHS and an RHS graph pattern. The LHS pattern is matched in the model being transformed and replaced by the RHS pattern in place. In particular, LHS represents the pre-condition of the given rule, while RHS describes the post-condition. LHS ∩ RHS defines the part which has to exist to apply the rule, but which is not changed. LHS − (LHS ∩ RHS) defines the part which shall be deleted, and RHS − (LHS ∩ RHS) defines the part to be created.

AGG [56] and AToM3 [15] are systems directly implementing the theoretical approach to attributed graphs and transformations on such graphs. They have built-in fixpoint scheduling with non-deterministic rule selection and concurrent application to all matching locations, and they rely on implicit scheduling by the user. The transformation rules are unidirectional and in-place. Systems such as VIATRA2 [60] and GReAT [4] extend the basic functionality of AGG and AToM3 by adding explicit scheduling. VIATRA2 users can build state machines to schedule transformation rules, whereas GReAT relies on data-flow graphs. Another interesting means for transforming models is given by triple graph grammars (TGGs), which have been introduced by Schürr [34]. TGGs are a technique for defining the correspondence between two different types of models in a declarative way. The power of TGGs comes from the fact that the relation between the two models cannot only be defined, but the definition can be made operational so that one model can be transformed into the other in either direction; even more, TGGs can be used to synchronize and to maintain the correspondence of the two models even if both of them are changed independently of each other, i.e., TGGs work incrementally. The main tool support for TGGs is Fujaba1, which provided the foundation for MOFLON2.
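For reference, the rule-application conditions used by these graph-based approaches (the LHS/RHS intersection and differences described above) can be restated in display form as follows; this merely summarizes the conditions already given in the text:

\[
\begin{aligned}
\textit{preserved} &= LHS \cap RHS,\\
\textit{deleted} &= LHS - (LHS \cap RHS),\\
\textit{created} &= RHS - (LHS \cap RHS).
\end{aligned}
\]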

Rule based approach. Rule-based approaches allow one to define multiple independent rules of the form guard => action. During the execution, rules are activated according to their guard and not, as in more traditional languages, based on direct invocation [57]. When more than one rule is fired, a more or less explicit management of such conflicting situations is provided; for instance, in certain languages a runtime error is raised. Besides the advantage of having an implicit matching algorithm, such approaches permit encapsulating fragments of transformation logic within the rules, which are self-contained units with crisp boundaries. This form of encapsulation is preparatory to any form of transformation composition [62].
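As a purely illustrative example of a guarded rule, the following ATL fragment (the language itself is presented in Section 2.2.2; the BP and Wiki metamodel names and the attribute names are invented here for the sake of the example) fires only for those source elements that satisfy the OCL guard:

    rule PublicTask {
        from
            -- the Boolean expression in parentheses is the guard
            t : BP!Task (t.isPublic and t.name <> '')
        to
            p : Wiki!Page (
                title <- t.name
            )
    }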

1 http://www.fujaba.de
2 http://www.moflon.org

Figure 2.3: QVT Architecture

2.2.2. Languages

In this section, some of the languages mentioned above are described individually. The purpose of the description is to provide the reader with an overview of some existing model transformation languages.

QVT In 2002 OMG issued the QVT RFP [43] describing the requirements of a standard language for the specification of model queries, views, and transformations according to the following definitions:

• A query is an expression that is evaluated over a model. The result of a query is one or more instances of types defined in the source model, or defined by the query language. The Object Constraint Language (OCL 2.0) [47] is the query language used in QVT;

• A view is a model which is completely derived from a base model. A view cannot be modified separately from the model from which it is derived, and changes to the base model cause corresponding changes to the view. If changes are permitted to the view then they modify the source model directly. The metamodel of the view is typically not the same as the metamodel of the source. A query is a restricted kind of view. Finally, views are generated via transformations;

• A transformation generates a target model from a source one. If the source and target metamodels are identical, the transformation is called endogenous. If they are different, the transformation is called exogenous. A model transformation may also have several source models and several target models. A view is a restricted kind of transformation in which the target model cannot be modified independently from the source model. If a view is editable, the corresponding transformation must be bidirectional in order to reflect the changes back to the source model.

A number of research groups have been involved in the definition of QVT, whose final specification was reached at the end of November 2005 [46]. The abstract syntax of QVT is defined in terms of a MOF 2.0 metamodel. This metamodel defines three sublanguages for transforming models. OCL 2.0 is used for querying models. Creation of views on models is not addressed in the proposal. The QVT specification has a hybrid declarative/imperative nature, with the declarative part forming the framework for the execution semantics of the imperative part. Referring to Fig. 2.3, the layers of the declarative part are the following:

• A user-friendly Relations metamodel which supports the definition of complex object pattern matching and object template creation;

• A Core metamodel defined using minimal extensions to EMOF and OCL.

By referring to [46], a relation is a declarative specification of the relationships between MOF models. The Relations language supports complex object pattern matching, and implicitly creates trace classes and their instances to record what occurred during a transformation execution. Relations can assert that other relations also hold between particular model elements matched by their patterns. Finally, the Relations language has a graphical syntax. Concerning the Core, it is a small model/language which only supports pattern matching over a flat set of variables by evaluating conditions over those variables against a set of models. It treats all of the model elements of source, target and trace models symmetrically. It is as powerful as the Relations

language and, because of its relative simplicity, its semantics can be defined more simply, although transformation descriptions using the Core are therefore more verbose. In addition, the trace models must be explicitly defined, and are not deduced from the transformation description, as is the case with Relations. The Core model may be implemented directly, or simply used as a reference for the semantics of Relations, which are mapped to the Core using the transformation language itself. To better clarify the conceptual link between the Relations and Core languages, an analogy can be drawn with the Java architecture, where the Core language is like Java byte code and the Core semantics is like the behavior specification for the Java Virtual Machine. The Relations language plays the role of the Java language, and the standard transformation from Relations to Core is like the specification of a Java compiler producing byte code.

Sometimes it is difficult to provide a complete declarative solution to a given transformation problem. To address this issue, QVT proposes two mechanisms for extending the declarative languages Relations and Core: a third language called Operational Mappings and a mechanism for invoking transformation functionality implemented in an arbitrary language (Black Box). The Operational Mappings language is specified as a standard way of providing imperative implementations. It provides OCL extensions with side effects that allow a more procedural style, and a concrete syntax that looks familiar to imperative programmers. A transformation entirely written using Operational Mappings is called an "operational transformation". The Black Box mechanism makes it possible to "plug in" and execute external code. This permits implementing complex algorithms in any programming language and reusing already available libraries.

AGG AGG [56] is a development environment for attributed graph transformation systems supporting an algebraic approach to graph transformation. It aims at specifying and rapidly prototyping applications with complex, graph-structured data. AGG supports typed graph transformations including type inheritance and multiplicities. It may be used (implicitly in "code") as a general-purpose graph transformation engine in high-level Java applications employing graph transformation methods. The source, target, and common metamodels are represented by typed graphs. Graphs may additionally be attributed using Java code. Model transformations are specified by graph rewriting rules that are applied non-deterministically until none of them can be applied anymore. If an explicit application order is required, rules can be grouped in ordered layers. AGG features rules with negative application conditions to specify patterns that prevent rule executions. Finally, AGG offers validation support, that is, consistency checking of graphs and graph transformation systems according to graph constraints, critical pair analysis to find conflicts between rules (which could lead to a non-deterministic result), and checking of termination criteria for graph transformation systems. The available tool support provides graphical editors for graphs and rules and an integrated textual editor for Java expressions. Visual interpretation and validation of transformations are also supported.

ATL ATL (ATLAS Transformation Language) [24] is a hybrid model transformation language containing a mixture of declarative and imperative constructs. The former allows dealing with simple model transformations, while the imperative part helps in coping with transformations of higher complexity. ATL transformations are unidirectional, operating on read-only source models and producing write-only target models. During the execution of a transformation, source models may be navigated but changes are not allowed; target models cannot be navigated. ATL transformations are specified in terms of modules. A module contains a mandatory header section, an import section, and a number of helpers and transformation rules. The header section gives the name of a transformation module and declares the source and target models (e.g., see lines 1-2 in Fig. 2.4). The source and target models are typed by their metamodels. The keyword create indicates the target model, whereas the keyword from indicates the source model. In the example of Fig. 2.4 the target model bound to the variable OUT is created from the source model IN. The source and target metamodels, to which the source and target models conform, are PetriNet [49] and PNML [9], respectively.

Learn PAd FP7-619583 8 1 module PetriNet2PNML; 2 create OUT : PNML from IN : PetriNet; 3 ... 4 rule Place { 5 from 6 e : PetriNet!Place 7 --(guard) 8 to 9 n : PNML!Place 10 ( 11 name <- e.name, 12 id <- e.name, 13 location <- e.location 14 ), 15 name : PNML!Name 16 ( 17 labels <- label 18 ), 19 label : PNML!Label 20 ( 21 text <- e.name 22 ) 23 }

Figure 2.4: Fragment of a declarative ATL transformation

Helpers and transformation rules are the constructs used to specify the transformation functionality. Declarative ATL rules are called matched rules. They specify relations between source patterns and target patterns. The name of a rule is given after the keyword rule. The source pattern of a rule (lines 5-7, Fig. 2.4) specifies a set of source types and an optional guard given as a Boolean expression in OCL. A source pattern is evaluated on a set of matches in the source models. The target pattern (lines 8-22, Fig. 2.4) is composed of a set of elements. Each of these elements (e.g., the one at lines 9-14, Fig. 2.4) specifies a target type from the target metamodel (e.g., the type Place from the PNML metamodel) and a set of bindings. A binding refers to a feature of the type (i.e., an attribute, a reference or an association end) and specifies an expression whose value is used to initialize that feature. In some cases complex transformation algorithms may be required, and it may be difficult to specify them in a purely declarative way. To address this issue, ATL provides two imperative constructs: called rules and action blocks. A called rule is a rule invoked by other rules like a procedure. An action block is a sequence of imperative instructions that can be used in either matched or called rules. The imperative statements in ATL are the well-known constructs for specifying control flow such as conditions, loops, assignments, etc.
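To make these constructs concrete, the following fragment is a purely illustrative sketch that reuses the PetriNet and PNML metamodels of Fig. 2.4; the helper and rule names, as well as the idea of materializing a log entry as a PNML!Label, are invented for the example. It shows a helper, a matched rule with an action block, and a called rule invoked from that block:

    -- helper: a derived, side-effect-free attribute over source elements
    helper context PetriNet!Transition def : displayName : String =
        'transition_' + self.name;

    -- matched rule with an imperative action (do) block
    rule Transition {
        from
            t : PetriNet!Transition
        to
            n : PNML!Transition (
                name <- t.displayName
            )
        do {
            -- explicit invocation of a called rule
            thisModule.Log(t.name);
        }
    }

    -- called rule: no source pattern, invoked like a procedure
    rule Log(msg : String) {
        to
            l : PNML!Label (
                text <- msg
            )
    }

Helpers behave like side-effect-free functions attached to a (meta)type, whereas the do block and the called rule reintroduce explicit, procedure-like control where a purely declarative formulation would be awkward.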

ETL Similarly to ATL, ETL [33] (Epsilon Transformation Language) is a hybrid model transformation language that has been developed atop the infrastructure provided by the Epsilon model management platform3. By building on Epsilon, ETL achieves syntactic and semantic consistency and enhanced interoperability with a number of additional languages, also built atop Epsilon, which target tasks such as model-to-text transformation, model comparison, validation, merging and unit testing. ETL enables the specification of transformations that can transform an arbitrary number of source models into an arbitrary number of target models. ETL transformations are given in terms of modules. An ETL module can import a number of other ETL modules; in this case, the importing ETL module inherits all the rules and pre/post blocks specified in the modules it imports (recursively).

3http://www.eclipse.org/epsilon/

GReAT GReAT [4] (Graph Rewriting and Transformation Language) is a graph-transformation language that supports the high-level specification of complex model transformation programs. In this language, one describes the transformations as sequenced graph rewriting rules that operate on the input models and construct an output model. The rules specify complex rewriting operations in the form of a matching pattern and a subgraph to be created as the result of the application of the rule. The rules i) always operate in a context that is a specific subgraph of the input, and ii) are explicitly sequenced for efficient execution. The rules are specified visually using a graphical model builder tool. GReAT can be divided into three distinct parts:

• Pattern specification language. This language is used to express complex patterns that are matched to select elements in the current graph. The pattern specification language uses a notion of cardinality on each pattern vertex and each edge;

• Graph transformation language. It is a rewriting language that uses the pattern language described above. It treats the source model, the target model and temporary objects as a single graph that conforms to a unified metamodel. Each pattern object's type conforms to this metamodel and only transformations that do not violate the metamodel are allowed. At the end of the transformation, the temporary objects are removed and the two models conform exactly to their respective metamodels. Guards to manage the rule applications can be specified as boolean C++ expressions;

• Control flow language. It is a high-level control flow language that can control the application of the productions and allows users to manage the complexity of the transformations. In particular, the language supports a number of features: (i) Sequencing, rules can be sequenced to fire one after another, (ii) Non-Determinism, rules can be specified to be executed "in parallel", where the order of firing of the parallel rules is non-deterministic, (iii) Hierarchy, compound rules can contain other compound rules or primitive rules, (iv) Recursion, a high-level rule can call itself, (v) Test/Case, a conditional branching construct that can be used to choose between different control flow paths.

VIATRA2 VIATRA2 [60] is an Eclipse-based general-purpose model transformation engineering framework intended to support the entire life-cycle for the specification, design, execution, validation and maintenance of transformations within and between various modelling languages and domains. Its rule specification language is a unidirectional transformation language based mainly on graph transformation techniques that combines graph transformation and Abstract State Machines [10] into a single paradigm. More precisely, in VIATRA2 the basic concept for defining model transformations is the (graph) pattern. A pattern is a collection of model elements arranged into a certain structure fulfilling additional constraints (as defined by attribute conditions or other patterns). Patterns can be matched on certain model instances, and upon successful pattern matching, elementary model manipulation is specified by graph transformation rules. There is no predefined order of execution of the transformation rules. Graph transformation rules are assembled into complex model transformations by abstract state machine rules, which provide a set of commonly used imperative control structures with precise semantics. This allows VIATRA2 to be classified as a hybrid language, since the transformation rule language is declarative but the rules cannot be executed without an execution strategy specified in an imperative manner. Important specification features of VIATRA2 include recursive (graph) patterns, negative patterns with arbitrary depth of negation, and generic and meta-transformations (type parameters, rules manipulating other rules) for providing reuse of transformations [59].

2.3. Discussion

Model transformation languages are specialized tools that can be adopted according to a number of different (internal and external) criteria. An aspect which is often neglected is related to pragmatic qualities, including maturity, support, and industrial adoption.

In this respect, the set of languages considered above can be restricted to those that have already demonstrated effectiveness on the practical side of software development. The ATL language is supported by a set of development tools built on top of the Eclipse environment: a compiler, a virtual machine, an editor, and a debugger. ATL allows both imperative and declarative approaches to be used in transformation definitions depending on the problem at hand. ATL is currently used or evaluated at hundreds of academic and industrial sites. The language is part of the M2M Eclipse project. The current state of the ATL tools already allows solving non-trivial problems. This is demonstrated by the increasing number of implemented examples and by the interest shown by the rapidly growing ATL user community, which provides valuable feedback. Even more interestingly, the ATL language has industrial and commercial support provided by the Obeo4 company. Last but not least, the team at the University of L'Aquila, which developed the Ecore model transformations, has gained over the years considerable experience in using ATL and related tools. For these reasons, the ATL language has been adopted, and the results proved it to be the right choice for this kind of application.

4http://www.obeo.fr

3 Learn PAd Modelling Architecture

In order to obtain a wiki structure starting from a Learn PAd model, the Learn PAd project relies on a tooling chain capable of automatically generating wiki systems starting from enriched process models. To this end, a number of different notations and platforms need to be consistently bridged. In this chapter, a description of the conceptual (transformational) architecture is given.

3.1. A transformational architecture

One of the main aspects of Model-Driven Engineering [50] (MDE) is to consider models as first-class entities. In fact, besides their descriptive nature, models assume a prescriptive role: a prescriptive model is a representation of a system intended to be built [6] and therefore it is a formal artifact that can be automatically processed by a computer program. As described in the previous chapter, model operations can be defined in terms of transformation programs written in specialized languages, such as the ATL language. However, in order to map a model conforming to a source metamodel into a model conforming to a target metamodel, both models and metamodels must be homogeneous, i.e., given in the same technical space, where "a technical space is a model management framework with a set of tools that operate on the models definable within the framework" [35].

In the context of the Learn PAd project, several technical spaces are identifiable. This represents a major challenge because, while the informative content of the various models is comparable, the way they are represented is based on different formats and standards. Bridging the different notations presents intrinsic difficulties whenever the artifacts do not belong to the same technical space, regardless of their content. For instance, a Learn PAd model produced by means of ADOxx is not directly comparable to the same model produced with MagicDraw, although they represent the same business process with the same informative content. The direct consequence is that the transformation programs that manipulate the ADOxx-based Learn PAd models cannot be applied to MagicDraw-based Learn PAd models, although the models represent the same kind of structures and systems.

In order to minimize the number of model transformations, avoid inconsistencies and reduce information erosion, the transformational architecture in Fig. 3.1 has been devised. In particular, the architecture accommodates the ADOxx platform, MagicDraw, the Eclipse Modeling Framework and the XWiki system1. Since none of the mentioned systems adhere to the same standards, they have been "connected" by means of model transformations that have been realized with ad-hoc techniques. In particular, the platforms and the related metamodels involved in the tooling chain are the following:

– ADOxx Modeling Environment: it represents the modeling environment for the Learn PAd metamodel realized on the ADOxx generic modeling platform; the models authored by means of this modeling tool can be serialized in a proprietary XML-based exchange format. The metamodels given in ADOxx are

• the (ADOxx-based) Learn PAd final Metamodel and
• the ADOxx metamodel.

– MagicDraw Modeling Environment: it is similar to the ADOxx Modeling Environment apart from the fact that it has been implemented on top of MagicDraw; the models authored with it can be serialized into the XMI [44] standard. The metamodel given in MagicDraw is

• the (MagicDraw-based) Learn PAd final Metamodel.

– Eclipse Modeling Framework: it is one of the most supported and widely used modeling platforms in both academia and industry; it consists of plenty of tools and transformation languages. For this reason, it has been adopted in order to take advantage of the most advanced model transformation frameworks; artifacts are represented in the EMF/Ecore format and serialized/deserialized in XMI. The metamodels given on this platform are

• the Ecore version of the ADOxx-based Learn PAd final Metamodel (MM_ADOxx),
• the Ecore version of the MagicDraw-based Learn PAd final Metamodel (MM_MD), and
• the XWiki metamodel (MM_XWIKI).

– XWiki: it is the platform on which the outcome of the tooling chain is deployed. No metamodels are given on this platform (also because XWiki is not a modeling framework); however, an XWiki XML schema is given to represent the XML documents generated by the tooling chain.

Figure 3.1: The Learn PAd Architecture

1 It is worth noting how XWiki is not per se a technical space. However, the final product of the Learn PAd tooling chain is an XWiki model that is in turn transformed into an XML document conforming to the XWiki schema. In this respect, it is considered part of the overall architecture.

The transformation chain takes the Learn PAd models from the ADOxx and MagicDraw platforms and transforms them into the corresponding Learn PAd models in EMF. In order to do that, the models must undergo some pre-processing that accounts for some consistency-restoring actions and makes the models XMI compliant. At this stage, the processed models can be handled on the EMF platform. Unfortunately, the metamodeling architecture of ADOxx is substantially different from that of most generic modeling platforms, including MagicDraw and EMF. Thus, although the ADOxx-based and the MagicDraw-based Learn PAd models represent the same enriched business process, they are syntactically different due to the intrinsic differences between the platforms they originate from. However, a model-to-model transformation can map any MagicDraw-based Learn PAd model into the corresponding ADOxx version. At this point, it is possible to generate XWiki models from which XML documents can be instantiated.

While this workflow is conceptually straightforward, the notational discrepancies between the technical spaces represent a major obstacle that must be addressed in several respects. In particular,

– the pre-processing is quite complex and takes into account different peculiarities of the modeling environments;

– the abstraction gap between the Learn PAd models (regardless of whether they are MagicDraw- or ADOxx-based) required a relatively complex model-to-model transformation in ATL.

Technology Readiness Level - Description
TRL 1 - basic principles observed
TRL 2 - technology concept formulated
TRL 3 - experimental proof of concept
TRL 4 - technology validated in lab
TRL 5 - technology validated in relevant environment (industrially relevant environment in the case of key enabling technologies)
TRL 6 - technology demonstrated in relevant environment (industrially relevant environment in the case of key enabling technologies)
TRL 7 - system prototype demonstration in operational environment
TRL 8 - system complete and qualified
TRL 9 - actual system proven in operational environment (competitive manufacturing in the case of key enabling technologies; or in space)

Table 3.1: Technology Readiness Levels in the European Commission

All the details about both the metamodels and the transformations represented in this architecture are described in the rest of this deliverable.
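As a first flavour of how these bridges are expressed in practice, the sketch below shows only the shape of the ATL module headers for two steps of the chain. The module and metamodel identifiers are purely illustrative (each header would of course live in its own ATL module); the actual transformations, rules and naming are detailed in Chapter 5.

    -- MagicDraw-based Learn PAd models to ADOxx-based Learn PAd models
    module MD2ADOxx;
    create OUT : ADOxx from IN : MDLPAD;

    -- ADOxx-based Learn PAd models to XWiki models
    module ADOxx2XWiki;
    create OUT : XWIKI from IN : ADOxx;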

3.2. Current status of maturity

The architecture presented above consists of a number of components. In order to provide the interested reader with accurate information about the overall technical readiness, a catalog of the components with their technology readiness level is reported. To this end, we considered the Technology Readiness Levels defined by the European Commission [13], which prescribe how different maturity levels can be assigned to technical artifacts, as reported in Table 3.1. Thus, the developed components have been, in turn, classified and reported in Table 3.2; a brief motivation for the assigned level is also given. Moreover, the table also contains some relevant third-party systems, e.g., the ATL language engine, that have been employed for the realization of the components. It is worth noting that the maturity expressed by each component refers to the date of publication of this deliverable and is therefore subject to enhancement during the next months until the project is finalized.

Learn PAd FP7-619583 15 Component TRL Description ADOxx Modeling Environment 4 The modeling environment defined upon ADOxx for au- thoring Learn PAd models with the proper diagrammatic concrete syntax Magic Draw Modeling Environ- 2 The modeling environment defined upon Magic Draw for ment authoring Learn PAd models with the proper diagram- matic concrete syntax EMF Core 9 Eclipse Modelling Framework XWiki 9 XWiki is a professional wiki that has powerful extensibil- ity features such as scripting in pages, extensions and a highly modular architecture. ATL 8 Atlas Transformation Language Acceleo 9 Templating Engine Pre-processing (ADOxx) 4 The pre-processing consists in procedural operation able to refine the exported ADOxx models into EMF en- vironment ADOxx (DTD) to ADOxx (XSD) 7 The DTD ADOxx metamodel has been translated by transformation means of an OMG Python procedure into the corre- sponding XSD schema Model-to-Model transformation: 4 The models are automatically transformed in XWIKI into ADOxx (EMF) to XWIKI (EMF) the EMF environment Model-to-Text transformation: 4 The XWIKI models are transformed in XML code in order XWIKI (EMF) to XWIKI (XML) to be imported in the XWIKI environment Pre-processing (Magic Draw) 3 The pre-processing consists in procedural operation able to refine the exported Magic Draw models into EMF environment Model-to-Model transformation: 2 The exported models in Magic Draw are transformed in MD LPAD (EMF) to ADOxx the notation of ADOxx in order to exploit the validated (EMF) and tested transformation chain

Table 3.2: Learn PAd components readiness levels

4 Modeling Environment

In this chapter, we describe the modeling environments devised in the Learn PAd project. Such environments represent fully-fledged tools tailored around the Learn PAd Final Metamodel. Their purpose is to offer a wide range of functionalities to the modeler who wants to design and manage extended (in the sense of Learn PAd) process models. Complex functionalities, such as consistency checking, export/import serialization, and model and persistency management, are provided, as well as the implementation of the final metamodel through its graphical notation. In particular, two different environments have been designed and implemented, one for each modeling platform in the project, i.e., the ADOxx1 and MagicDraw2 platforms. Before describing the methods and the modeling environments for each platform, we briefly introduce the Learn PAd final Metamodel, which is a refinement of the Learn PAd Platform Independent Metamodel (LPIMM) developed during Task 3.2 and described in deliverable D3.2 [37].

4.1. Learn PAd Final Metamodel

As already mentioned, a detailed description of the Learn PAd modeling notations3 is provided in deliverable D3.2 [37]. In this section, we report a brief description of how the LPIMM metamodel has been refined, extended, and amended in order to be implemented into the corresponding Learn PAd modeling environments. Such environments represent the modeling tools for designing and managing the extended Learn PAd process models and let the modeler interact with the whole Learn PAd platform. In particular, in this section we will focus on the abstract syntax given in terms of metamodels. The corresponding concrete syntax, i.e., the graphical notations, and the related modeling environments are described in Sect. 4.3 and Sect. 4.5 for the ADOxx and MagicDraw platforms, respectively.

Most of the concepts of the Learn PAd final Metamodel have already been presented in deliverable D3.2 [37]. Nevertheless, a number of changes have been necessary due to new insights and requirements emerging from the domain, and the necessity to better accommodate the modeling structures in the modeling tools (provided that the modeling expressiveness remained unchanged for the scope of the project). It is worth noting how, at this stage of the project, the metamodel changes did not give rise to any co-evolution problem [16], since the number of models was limited and they were mainly used for assessing the method and providing hints for the definition of the metamodels.

In order to better understand the differences between the LPIMM and the Learn PAd final Metamodel, we adopted a visualization technique inspired by that proposed by Ohst et al. in [42]. The technique permits an intuitive visualization of the differences between two UML class diagrams, where deleted and newly added elements are colored in red and green, respectively, leaving the unchanged elements in a neutral color4. Without loss of generality, we used tables instead, where red and green cells denote deleted and added elements, since the modifications, which have been operated on the original

1 http://www.adoxx.org
2 http://www.nomagic.com/products/magicdraw.html
3 The terms modeling notation, modeling language, model kind, and metamodel are used interchangeably to denote the linguistic structures used for describing terminal models.
4 Please note that the technique does not easily permit the representation of more complex changes like moving an element, which is approximated by a deletion and an addition.

In the rest of the section we describe the differences between the corresponding component metamodels (as given in [37]) of the LPIMM metamodel and of the Learn PAd final Metamodel.

Business Process Metamodel

Business Process Model and Notation [63] (BPMN) is a method of illustrating business processes in procedural terms. BPMN was originally conceived and developed by the Business Process Management Initiative (BPMI) and is currently maintained by the Object Management Group (OMG). BPMN provides a standard, easy-to-read way to define and analyze public and private business processes, with a notation that is readily understandable by management personnel, analysts and developers. The original intent of BPMN was to help bridge the communication gaps that often exist between the various departments within an organization or enterprise. In the context of the Learn PAd project the modeling of business processes is a pivotal activity; thus, BPMN represents one of the main cornerstones of the Learn PAd final Metamodel. As mentioned above, a detailed description of the LPIMM metamodel (which comprises a revised version of BPMN) is given in deliverable D3.2 [37], where a detailed description of the concepts is given together with their representation in terms of UML class diagrams. However, in order to accommodate the modeling elements given by the LPIMM metamodel and the additional requirements arising from the task of implementing the complete modeling environments, the final version presents some differences, as denoted in the following table.

The overall diagram of the metamodel as implemented in ADOxx is reported in Fig. A.1 in Annex A.

Case Management Model And Notation

Processes cannot always be deterministically specified; sometimes they present uncertainties. Nevertheless, this uncertainty can be adequately described in order to avoid confusion and misinterpretations. Similarly to the process concept in BPMN, the concept of case represents a proceeding that involves actions taken regarding a subject in a particular situation to achieve a desired outcome. Traditional examples come from the legal and medical worlds, where a legal case involves the application of the law to a subject in a certain fact situation, and a medical case involves the care of a patient in the context of a medical history and current medical problems. In essence, a case allows humans to do work in a more or less structured way in order to achieve something. Any individual case may be resolved in a completely ad-hoc manner. But as experience grows in resolving similar cases over time, a set of common practices can be defined for cases. In the context of Learn PAd, cases can be specified by means of the Case Management Model and Notation [3] (CMMN); similarly to BPMN, the comparison between the version present in the LPIMM metamodel and that in the Learn PAd final Metamodel is reported in the following table.

The complete metamodel is represented in the class diagram given in Fig. A.2 in Annex A.

Business Motivation Model

The Business Motivation Model (BMM) is about motivation. Whenever a given procedure or approach is chosen by an organization, it is important to motivate the choice by describing what results the approach is meant to achieve. A cornerstone of any work addressing motivation is the organization's vision and its action plans for how to realize it, i.e., its mission. Both the vision and the mission are refined into a number of concepts, which form the modeling notation. The comparison between the final version of BMM and the one in the LPIMM metamodel is given by the following table.

The complete component metamodel is given in Fig. A.3 in Annex A.

Competency Metamodel

Being able to denote and manage the learners' level of competencies is one of the most important aspects of the Learn PAd project. In complex organizations, competencies are often described within job descriptions but not defined in specific models, leaving to the modeler the responsibility of consistently managing them. The Learn PAd Competency Model permits the explicit modeling of this aspect. The current version of the competency metamodel is based on a simplification of the framework of the European Committee for Standardization, CEN WS-LT LTSO (Learning Technology Standards Observatory)5. Analogously to the previous component metamodels, the differences between the preliminary version in LPIMM and the final one are represented in the following table.

A diagrammatic representation of the competency metamodel is given in Fig. A.4 in Annex A.

Document and Knowledge Metamodel

Harnessing the possibility of an explicit knowledge representation is of enormous relevance for the process. It fosters different learning paradigms and can be useful in simulation activities as well. This metamodel permits specifying knowledge models that contain documents, knowledge products and knowledge resources, which are utilized in the processes (as input to or output from activities, etc.). In addition, knowledge models are structured and can contain (sub)models. Also for this metamodel we can provide a comparison with the LPIMM version, as described by the following table.

5 CEN WS-LT Learning Technology Standards Observatory. URL: http://www.cen-ltso.net/

The diagrammatic counterpart is given in Fig. A.5 in Annex A.

Organizational Structure Metamodel

Organization models describe the structure of an organization. Such models are typically hierarchical, in order to illustrate the detailed structure of a working environment. The differences with the preliminary version in LPIMM are given in the following table.

The class diagram describing the metamodel can be found in Fig. A.6 in Annex A.

KPI Metamodel

A Key Performance Indicator (KPI) is a performance measurement. Therefore, if properly defined, KPIs can evaluate the success of an organization or of a particular activity in which it engages. Often success is simply the repeated, periodic achievement of some level of operational goal, and sometimes success is defined in terms of making progress toward strategic goals. Thus, KPIs can be successfully employed in process models in order to assess the performance of activities and processes. The KPI metamodel permits the explicit modeling of performance indicators to be related with business models and the corresponding business motivations. As for the previous component metamodels, also in this case we provide a comparison with the corresponding metamodel in LPIMM.

The diagrammatic representation of the KPI metamodel is given in Fig. A.7 in Annex A.

Overview Metamodel

The Overview Metamodel is an auxiliary metamodel introduced for the first time in the Learn PAd final Metamodel, i.e., it was not present in the LPIMM metamodel. It is motivated by the necessity of linking together all the model kinds: the overview models serve as glue to consistently keep together all the relevant information complementing a process model. Its representation is illustrated in Fig. 4.1.

Figure 4.1: Overview Metamodel

4.2. ADOxx Approach

4.2.1. Conceptualization Process

The conceptualization process, based on the OMiLAB Lifecycle, consists of five phases:

Figure 4.2: OMILAB Lifecycle ([27])([28])

1) Create Phase: in this phase the system under study, the intended application scenarios and the derived requirements are investigated. Typical support consists of pen and paper and common instruments of application specification and requirements analysis. A conceptual meta model describing the main concepts and relevant standards is recommended.

2) Design Phase: this phase specifies the modelling language with its required syntax, semantics and notation. Hence the so-called Platform Independent meta model is specified, and mechanisms and algorithms are described, indicating the intended functionality of the modelling tool.

3) Formalization Phase: the conceptual meta model must be transformed into software; hence, before starting with the implementation, the Platform Independent Meta Model must be checked for formal correctness. This can be performed mathematically (e.g., using FDMM ([21])), semantically (e.g., using RDF (W3C 2014)) or via rapid prototyping (e.g., using ADOxx.org - http://www.adoxx.org).

4) Development Phase: this phase transforms the platform independent meta model into a platform specific one and hence implements it on a meta model platform to realize the modelling tool.

5) Deployment Phase: this phase is concerned with the packaging, installation and deployment of the modelling tool.

This generic development methodology proposed by OMiLAB is instantiated for the development of the Learn PAd Modelling Environment in the following form:

1) Create Phase: in this phase the domain and scope of the modelling framework are determined and the classes and the class hierarchy are defined.

- Determining the scope by analysing business scenarios and deriving and acquiring requirements.
- State-of-the-art surveys and literature research on existing modelling languages and ontologies in order to ensure the coverage of existing material.

- Continuous adaptation and feedback through typical collaboration instruments such as physical meetings, Internet workshops, publications, presentations and collaborative development.

2) Design Phase and Formalization Phase: in the case of Learn PAd these two phases are combined using a rapid prototyping approach; 29 iterations have been made, which were exchanged among developers and internally released for evaluation.

1) We used the Modelling Method Design Environment ([27], [26]) to design the Learn PAd Modelling Method.
2) On ADOxx.org, rapid prototypes indicating the intention and the scope of a solution are implemented. This platform enables a quick development of prototypes, and hence enables continuous feedback on the meta model design.
3) The rapid prototypes are presented, discussed and feedback is provided.

3) Development Phase: in this phase the rapid prototype on the open and public platform ADOxx.org is transformed into the Learn PAd Modelling Environment Community Edition, released to end-users, as well as into the closed and commercial Learn PAd Modelling Environment on ADOxxNP within BOC.

4) Deployment Phase: in this phase we released 5 versions (3 major, 2 minor) of the open-to-use edition (the so-called academic version) of the Learn PAd Modelling Environment to Learn PAd end-users and the academic community through the Learn PAd Developer Space on ADOxx.org. The commercial edition is deployed on the Web within BOC. The commercial edition does not yet have all the features of the academic version; the features implemented and tested in the academic version are being transferred into the commercial edition.

The OMiLAB approach allows going back and forth between individual steps. As mentioned before there have been 29 iterations of rapid prototyping cycles for creation and design, while there have been 5 iterations for development and deployment.

4.2.2. Generic Modelling Method Framework

Conceptual modelling is a form of knowledge representation with the aim of observing relevant parts of the real world. Conceptual models have become commonplace, as their simplified view makes it possible to focus on the relevant aspects and, thanks to abstraction, enables IT-based support such as visualization, querying, simulation and transformation. The term “model” has an extremely ambiguous nature and is hence interpreted here with the meaning discussed in the feasibility study of the Open Models Laboratory ([29]), where a model is “a representation of either reality or vision” ([64]) created “for some certain purpose” (OMG 2003). Conceptual models belong to the family of linguistic models that use an available set of pre-defined descriptions to create a model, and enrich purely textual models (such as mathematical formulas) with diagrammatic notations. The research community Open Models Initiative Laboratory (OMiLAB) proposes a generic modelling method specification framework ([30]) that identifies all the relevant parts that need to be considered for conceptual modelling. The generic framework introduced in Figure 4.3 enables the specification of conceptual models. The framework considers three building blocks: (1) the modelling language, which is most prominently associated with conceptual models, as the available concepts to be used in such models are pre-defined according to their semantics, their syntax and their graphical notation; (2) the modelling procedure, which defines the stepwise usage of the modelling language and hence is not always available, meaning that there are modelling languages that do not have a pre-defined way of usage but leave the modeller freedom during model construction;

Figure 4.3: Modelling Method Framework Based on [30]

Finally, (3) mechanisms and algorithms enable the computer-based processing of models and hence provide IT support for the aforementioned modelling scenarios – specification, execution support, knowledge representation and evaluation. These three main building blocks are composed at different levels in the form of a modelling technique or a modelling method of a conceptual modelling approach. Although there is a discussion on the different terms, this classification helps to distinguish the different approaches. The traditional Entity Relationship (ER) diagram, for example, has a modelling language, a modelling procedure and algorithms that enable the transformation of a model into a relational database schema. UML, in contrast, has an expressive modelling language but no modelling procedure explaining step by step how to create a model. OWL defines its concepts in the form of a modelling language and provides extensive algorithms for ontology inference, but does not provide a procedure for defining a model. All conceptual modelling approaches, hence also the Learn PAd Modelling Method and its corresponding modelling environment, the Learn PAd Modelling Environment, can be described with the aforementioned framework.

4.2.3. Development on ADOxx

The development of modelling methods with ADOxx can be done in two ‘styles’: (1) the interactive style and (2) the programming style (see Figure 4.4).

1) Interactive style: this approach is supported by the ADOxx ‘Development Toolkit’, which provides functionalities like ‘Modeling Language Management’, ‘GraphRep Notation Editor’, ‘AttrRep Notation Editor’, ‘Modeltype/View configuration’, ‘Library validation’, and ‘External Coupling’, that help the developer.

2) Programming style: this approach is carried out by implementing in the so-called ADOxx Library Language (ALL) (see Figure 4.5, Figure 4.6 and Figure 4.7) with any text editor. In order to compile the ALL source into the binary language one can use the ALL2ABL service, which is provided at www.adoxx.org.

Figure 4.4: ADOxx Development Approach

In both cases the elements of the components (a) modelling language and (b) mechanisms and algorithms, as defined in the modelling method framework (Figure 4.3), have to be implemented; they are described below. For the further documentation the ‘Programming style’ case is chosen.

Figure 4.5: ADOxx Metamodel mapping

Figure 4.6: ALL Class Definition

Figure 4.7: ALL Relation Class Definition

a. Modelling Language Implementation

The modelling language consists of:
- Notation: the representation of a modelling construct (e.g. graphical, see Figure 4.8).
- Syntax: the specification of a modelling construct.
- Semantics: the definition of the meaning of a modelling construct.

Figure 4.8: ALL GraphRep Definition

The above-mentioned modelling constructs can be:
1) Classes: for the implementation of classes in ADOxx we distinguish between the following categories/types of classes:
- Pre-defined abstract classes derived from the ADOxx meta model classes (Figure 4.8): pre-defined abstract classes are classes that are provided by ADOxx with a given semantics and a basic syntax in the form of attributes. They can be used to pass the pre-defined syntax and attributes on to either self-defined abstract classes or concrete classes (e.g., in the Learn PAd case the class ‘Task’, see Figure 4.6).
- Abstract classes: abstract classes are self-defined classes enabling the structuring of the meta model; they define syntax in the form of attributes and semantics, which are inherited by sub-classes.
- (Concrete) classes: classes inherit the semantics and the attributes from the pre-defined abstract class and additionally - in case of inheritance - from the abstract class, and enable the realisation of a concrete meta model. The implementation of the mapping of the concrete classes to the ADOxx metamodel in ALL is shown in Figure 4.5.
2) Relation classes: relation classes define the relationships between classes (abstract/concrete). They are defined by their source and target class, their cardinality, and their attributes, as shown in Figure 4.7.

3) Modeltypes: modeltypes represent meaningful combinations of classes and relation classes, used as views by the modeller to create models. Figure 4.9 shows an example of the definition of a modeltype with the ALL language.

4) Attributes: attributes are properties of a modelling construct such as a model, object or relation. Each attribute has a type and a value. The definition of the attributes is shown in Figure 4.5 and Figure 4.6.

The relations and dependencies of these modelling constructs are defined in the meta2model of the meta modelling language (see Figure 4.10).

Figure 4.9: Modeltype definition ALL example

Mechanisms and Algorithms Implementation

The following implementation types are provided by the ADOxx platform to add functionalities to the modelling method:
- Core Functions for Model Manipulation: this functionality is provided by the platform without any further configuration or implementation needed.

- Configuration of ADOxx Functionality6: functionality provided on platform level and possibility for adaptation through configuration.

- External Coupling ADOxx Functionality7: extending functionality through AdoScript and query implementation using the extension capabilities of the platform.

- Add-On Implementation8: integration support to use functionality developed externally to ADOxx.

6https://www.adoxx.org/live/configuration-of-adoxx-functionality 7https://www.adoxx.org/live/external-coupling-adoxx-functionalty 8https://www.adoxx.org/live/add-on-implementation

Figure 4.10: ADOxx Metamodel

4.3. Learn PAd Modeling Environment (based on ADOxx)

This section introduces the Learn PAd modelling environment based on ADOxx®, which realizes the human-interpretable graphical and textual models for model-based social learning for public administration. Since the modelling procedures based on the evaluation scenarios have been introduced in D8.2, in this deliverable we present the Learn PAd Modelling Language and then the Mechanisms & Algorithms, which provide visualization, querying and transformation functionalities for the models created with the Learn PAd Modelling Language.

4.3.1. The Learn PAd Modelling Language

Figure 4.11 depicts the realized Learn PAd Modelling Stack, which was introduced in D3.2. In the following we briefly introduce only those Model Types from the modelling stack which have been utilized during the demonstration and evaluation activities so far.

Figure 4.11: Realized Learn PAd Modelling Environment Meta Model Stack

Business Motivation Model

Fundamental to the Business Motivation Model (BMM) is the notion of “motivation”. If an enterprise prescribes a certain approach for its business activity, it ought to be able to say “why”; that is, what result(s) the approach is meant to achieve. Sometimes it is difficult to uncover such motivation, especially in operations that have been going on for some time.

Figure 4.12: Sample Business Motivation Model

For cases in public administration, this BMM Model Type allows one to encapsulate in the modelling environment precisely what the vision and goals are, the discrete and progressive steps of how they may be attained and, crucially, how to assess whether they have been achieved or not.

Business Process Model

This metamodel defines a minimal subset of BPMN 2.0 that Learn PAd intends to use for describing business processes in the public administration domain for learning purposes. The subset is selected based on the practical experience of Learn PAd partners working with public administrations and on the Learn PAd requirements.

Figure 4.13: Sample BPMN Model

Case Management Model

Case management requires a modelling notation which can express the essential flexibility that human case workers, especially knowledge workers, require for run-time planning of the selection of tasks for a case, run-time ordering of the sequence in which the tasks are executed, and ad-hoc collaboration with other knowledge workers on the tasks. The CMMN specification defines a common meta-model and notation for modelling and graphically expressing a case. The specification is intended to capture the common elements, while also taking into account current research contributions on case management. Because BPMN does not deal with knowledge-intensive tasks, CMMN is used for modelling knowledge-intensive (sub)processes in Learn PAd. For Learn PAd purposes the CMMN metamodel has been adapted and reduced in complexity by introducing three additional concepts (PlanElement, Rule and CaseActivity).

Figure 4.14: Sample Case Management Model

Organization Model

Organization models describe the structure of an organization (organization chart). In Learn PAd, organizational structure models can be built hierarchically using organizational sub-models to, e.g., illustrate the detailed structure of a working environment.

Figure 4.15: Sample Organization Model

Documents and Knowledge Model

Knowledge models contain documents (templates), knowledge products and knowledge resources, which are utilized in the processes (as input to or output from activities, etc.). Knowledge models can be built hierarchically using document sub-models to, e.g., illustrate a detailed structure of documents.

Figure 4.16: Sample Documents and Knowledge Model

KPI Models

Key Performance Indicators (KPIs) are seen as a virtualisation instrument enabling the conceptualisation of relevant parts of the concrete instances of the production processes. A performance indicator is a measurement of the success of a given organization or of an activity in which it engages. Thus, KPIs can be successfully employed in process models in order to assess the performance of activities and processes.

Figure 4.17: Sample KPI Model

Business Process Constraints Model Type

The Business Process Constraint Model gives the possibility to perform compliance checking, as specified in deliverable D4.1, using the extended Compliance Rule Graphs (eCRG) formalism. The compliance properties defined involve the control flow, the data flow and the resource perspectives. The model consists of compliance rules, which may: impose constraints on the control flow schema of process models, constrain the data to be managed, require certain types of activities to be present in a process model, or enforce access control policies. Regarding the control flow, it is possible to define temporal constraints on task sequences; in particular, it is possible to model the occurrence (or not) of a task that is eventually or definitely followed by the occurrence (or not) of another task in the future. Constraints about data are specified using the Document (Constraints) object. This object can be related to a Task object in order to constrain the association of the document with the task. The Role, Organizational Unit and User constraints, finally, give the possibility to create compliance rules on the organizational model and on access control policies. In particular, it is possible to relate these three objects together in order to define rules, and to relate one of these objects to a task in order to define access and execution policies. Figure 4.18 shows an example of this model.

Figure 4.18: Sample Business Process Constraint Model

Competency Model Type

The Competency Model Type is realized in Learn PAd based on the recommendations of the European Committee for Standardization, CEN WS-LT LTSO9, driven by 9 international institutions and organizations involved in the standardization of e-learning technologies. (Refer to D3.2 for a class diagram of the Competency Model of the LPIMM, and to D5.1 for details on the ontological representation.) Furthermore, the EQF (European Qualifications Framework) is used to determine the object type ’competency’, comprising descriptions of competence, skills and knowledge along with their levels. With this, competencies modelled in Learn PAd become understandable and comparable. Marche Region initiated the competency model based on the EQF, which has then been cooperatively enriched by FHNW and Marche Region. The competency model contains all competencies required by an organization to reach its goals, be it on the strategic or operational level, be it on the organization or team level, be it for a role or for specific tasks.

Figure 4.19: MR Competence Model and Details for a Competence

To determine what competencies are required for what, and on what level, competency profiles are created. A competency profile is a specific type of document (defined in the Document and Knowledge meta model) used to group the competencies that are required - for example, needed to fill a role or meet the objectives of an organizational unit - and those acquired by individuals, i.e., by PA staff. Assume a new employee is assigned the role of SUAP officer; then the required competencies of this employee are the ones determined by the role (and possibly additional ones determined by the organizational unit she belongs to or by specific tasks she has to perform). However, the acquired competencies this employee has when she starts the job might not be totally equal to the required ones or might be on a lower EQF level. The difference between the acquired and required competencies determines the individual learning goal.

9Learning Technology Standards Observatory / URL: http://www.cen-ltso.net/Main.aspx (retrieved: 14.9.2015)

Figure 4.20: Competency Profiles for Required and Acquired Competencies

Model Set Overview Model Type

This model type is utilized to define scenario-specific model sets to be pushed into the Learn PAd Core Platform and to be transformed. In this model type there is only one concept, called “model set”, which persists the model set id, the version, a short description of the model set and pointers to every individual model relevant for the given scenario. Another goal of this model type is to enable the reuse of models in different scenarios and to give the possibility to define/check the required models for a given scenario. Besides those, this model type allows persisting model-set-specific feedbacks and patches retrieved from the Collaboration Workspace through the Learn PAd Core Platform.
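To make the shape of this concept concrete, the following is a minimal, purely illustrative sketch of the information a model set carries; the type and field names are hypothetical and do not reflect the actual ADOxx implementation.

import java.net.URI;
import java.util.List;

// Illustrative data holder only: names are hypothetical, not part of the Learn PAd code base.
public record ModelSet(
        String id,            // unique model set id generated by the platform
        String version,
        String description,   // short description of the model set
        List<URI> models      // pointers to the individual models relevant for the scenario
) {}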

Figure 4.21: Sample Model Set Overview Model

4.3.2. Mechanisms & Algorithms

Semantic Lifting

Semantic lifting allows integrating the human-interpretable models with the machine-interpretable ontology built in WP5, which is required by the Learn PAd Recommender system. The Semantic Lifting Mechanism is a generic mechanism, which means it works on each meta-model and hence can be applied to every model type. Semantic lifting is a form of loosely coupled model weaving, where concepts of a business process – e.g., tasks – are semantically lifted. The semantic lift is implemented by annotating the concept with an ontological concept. Hence, each object in a business process model can optionally be annotated with an ontology concept.

Push Model Set to Learn PAd Core Platform and Start Verification and Retrieve Results

Using the Model Set Overview Model Type it is possible to push a model set to the Learn PAd Platform. All the models in the model set are first exported as XML, images, image maps and (only for the business process model type) as BPMN 2.0, and are subsequently zipped and sent to the Learn PAd Platform. A unique ID for each model set is automatically generated and is used to reference the model set inside the platform and to retrieve it from other components such as the Verification Component.
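The export-and-push step can be pictured with the following minimal Java sketch. It only illustrates the general idea (zip the exported artifacts, send the archive, keep the generated id); the endpoint layout, the client-side id generation and the file handling are hypothetical placeholders and not the actual Learn PAd API.

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.UUID;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ModelSetPusher {

    // Zips the exported artifacts (XML, images, image maps, BPMN 2.0) found in exportDir
    // and sends the archive to the platform. Endpoint and id generation are illustrative.
    public static String push(Path exportDir, URI platformEndpoint) throws IOException, InterruptedException {
        String modelSetId = UUID.randomUUID().toString();
        Path zip = Files.createTempFile("modelset-" + modelSetId, ".zip");

        List<Path> artifacts;
        try (var files = Files.walk(exportDir)) {
            artifacts = files.filter(Files::isRegularFile).toList();
        }
        try (ZipOutputStream out = new ZipOutputStream(Files.newOutputStream(zip))) {
            for (Path f : artifacts) {
                out.putNextEntry(new ZipEntry(exportDir.relativize(f).toString()));
                Files.copy(f, out);            // add each exported file to the archive
                out.closeEntry();
            }
        }

        HttpRequest request = HttpRequest.newBuilder(platformEndpoint.resolve("modelsets/" + modelSetId))
                .header("Content-Type", "application/zip")
                .PUT(HttpRequest.BodyPublishers.ofFile(zip))
                .build();
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.discarding());
        return modelSetId;                     // the id used to reference the model set later
    }
}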

The Modelling Environment can then start a model set verification of a previously pushed model set. The list of all available verification types provided by the Learn PAd Platform is displayed to the modeller and, once the verification is started, a unique verification id is associated with the model set. This verification id is used to retrieve the verification results once the verification is completed. The results are shown directly in the Modelling Environment in a user-friendly way.

Retrieve Feedback and Patches from XWiki

A user of the Learn PAd platform can provide feedback and apply patches on a model or on a specific object of a model in order to improve it or to report a mistake. The modeller can retrieve these feedbacks from the Modelling Environment and address or reject them. Every feedback takes the form of a comment box associated with the object involved.

People-like View

In order to ease the interpretation of business processes, a so-called people-oriented view has been introduced that enables switching from a business process in the traditional graphical notation to a new graphical notation where icons graphically describe the nature of each activity. This is achieved by a semantic lifting of each concept, i.e., by relating a model object with an ontological description. A list of explanatory graphical icons is also annotated to the same ontological description. Hence, when switching to the people-like view, the images that are annotated with the model object are included in the new graphical description.

Bar-Display

The modelling environment allows a convenient overview of certain intermodel references and attributes of objects in the model. These are displayed in a format whereby the modelling area is separated from the bars where the values of the attributes are displayed. For any given Task or sub-process object in the model, the Bar Display View complements the modelling procedure by allowing an overview of the intermodel references that the object may have. These are presented as clickable links in line with the object itself, but out of the way of the modelling area, affording the user a quick means to follow these links without having to open the notebook.

4.4. No Magic Approach

The Learn PAd project supports process-driven collaborative knowledge sharing and process improvement on the user-friendly basis of wiki pages, together with guidance based on formalized models. The platform supports both an informative learning approach based on enriched business process models and a procedural learning approach based on simulation and monitoring (learning by doing). As the Learn PAd platform is based on process-driven collaborative knowledge sharing, it needs to be integrated with a modelling environment. The modelling environment is used to create different kinds of models that help to describe and share know-how: Business Process, Competency, Document and Knowledge, Organization, Key Performance Indicator, BMM and CMMN models. Learn PAd has its own metamodel, which means that the modelling tool must be customizable to support the metamodel defined by Learn PAd. MagicDraw is one of the modelling tools chosen to implement the Learn PAd metamodel and to adapt it for creating the aforementioned models. MagicDraw itself is designed to model all aspects of a system or of a selected business domain: business processes, requirements, static structure, deployment structure, activities, states, detailed action sequences, and much more. MagicDraw as a modelling environment supports the UML 2 metamodel and the latest XMI standard for data storage. MagicDraw allows extending UML by adding custom stereotypes, constraints, tagged values and even a new appearance of a model element in a diagram (GIF images can be attached to specific stereotypes for a different display). Different fill colors and fonts may be used for this purpose as well. A custom appearance of diagram elements can be defined using SVG or bitmap images. The aforementioned features allow MagicDraw to be customized and adapted for the modelling purposes of the Learn PAd project. The following sections explain the principles of MagicDraw tool customization.

4.4.1. Domain Specific Language (DSL) engine for MagicDraw customization

MagicDraw provides a Domain Specific Language (DSL) engine for adapting domain-specific profiles in order to create specialized customizations, real-time semantic rules, custom elements and other domain-bound modelling concepts. In other words, modellers can create a specialized domain-specific tool and hide the underlying UML infrastructure. MagicDraw provides a model-driven approach to customize modelling notations relying on UML profiling. The MagicDraw DSL customization engine is able to process user-defined rules for DSL elements and reflect them in the MagicDraw GUI and diagram behaviour. UML profiles and custom diagrams allow users to extend standard UML 2 to address a specific problem domain. Therefore it is possible to easily create a modelling environment with a custom toolbar for managing custom elements. Additional functionalities can be provided by means of the MagicDraw Open API, which permits the modellers to include new design patterns, custom elements, metrics, transformations and other functionalities realized as plugins supported by the modelling tool. An example of the Learn PAd profile with custom stereotypes and customization elements created for the competency model is shown in Fig. 4.22. The main purpose of a DSL is to provide a conceptual modelling layer which abstracts from UML, so that the DSL elements appear in MagicDraw either as standard elements or as elements of a new type. The specification of DSL elements may contain custom properties defined in a stereotype. For example, a new element Competency was created with the stereotype Competency, and this stereotype has special properties that were defined in the Learn PAd metamodel, see Fig. 4.23. The example shows the stereotype properties Competency Level and Competency Required For Activity, the latter being a custom relation between Competency model elements and BPMN model elements. A new element Competency with the defined properties is created after applying this stereotype to a UML class; an example of the Competency element is shown in Fig. 4.24. Modelling tool customization allows using all UML element properties. During customization the modeller has the ability to select which UML properties will be used in the DSL element Specification window. This means that during the creation of custom elements it is possible to choose the scope of standard UML properties and to use or hide them for the new element. Stereotypes used in a customization define how an existing metaclass may be extended, and they enable the use of platform- or domain-specific terminology or notation in place of, or in addition to, the one used for the extended metaclass.

Figure 4.22: Learn PAd profile structure

Figure 4.23: Stereotype properties

Figure 4.24: Specification window Competency element

When a stereotype is applied to a model element, the values of its properties may be referred to as tagged values. Generally, stereotype properties become visible in an element’s Specification window only after the stereotype is applied to the element. However, it is possible to customize stereotype properties to be visible in the element’s Specification window even if the stereotype is not yet applied to the element. This feature allows specifying some domain-specific properties for standard UML elements and is also used in the Learn PAd profile. The Learn PAd metamodel also requires creating specific diagrams for Learn PAd. MagicDraw allows creating specific diagram types for Learn PAd by using the Customize Diagram Wizard. This is a powerful engine that allows creating custom elements in the diagram toolbar, custom symbol styles, and other customizations. It is possible to change the properties of existing diagrams (Edit function) or to create a brand new diagram type (Create function). Diagram customization descriptors are saved into a separate file for every diagram, so it is possible to exchange these customizations with partners or colleagues (use the Import or Export function). The diagrams created for Learn PAd are shown in Fig. 4.25.

Figure 4.25: Custom Learn PAd diagrams

The DSL engine allows creating special customizations that support the Learn PAd metamodel concepts. The next section provides a more detailed explanation of how to create customizations for Learn PAd modelling with MagicDraw.

4.4.2. Customization principles of MagicDraw

First of all, we introduce the basic customization concepts, which can be explained using a common glossary for customization. The basic customization concepts are listed in Tab. 4.1.

Table 4.1: Basic customization concepts

In the following figure (see Fig. 4.26), you can see a detailed example of how customization works in MagicDraw and how customization data is passed to DSL elements through stereotypes. In the figure, you can see the customization element, the stereotype, the class element, the DSL element and the relations between them. The KnowledgeProduct stereotype element is set as the customization target in the KnowledgeProductCustomization customization element.

Figure 4.26: Example of applying customization rules to DSL element

In addition, the KnowledgeProduct stereotype is applied to the class element. The customization data from the KnowledgeProductCustomization element is passed to the class element. Thus, the class element becomes a DSL element. The DSL element properties appear in the Specification window, in the Properties panel of the DSL element, as regular properties, see Fig. 4.27.

Figure 4.27: DSL element properties

The DSL element properties are defined in the KnowledgeProduct stereotype. All these properties are displayed in the Specification window of the DSL element after the stereotype is applied to a standard UML class. All the properties of the stereotype KnowledgeProduct can be seen in Fig. 4.28. This approach is used to customize the MagicDraw modelling environment and to create all the specific concepts for Learn PAd. A more detailed explanation and examples of how to customize the MagicDraw tool, how to extend the standard UML profile and how to create new elements can be found in the MagicDraw online documentation: http://docs.nomagic.com/display/MD183/UML+Profiling+and+DSL+Guide.

Figure 4.28: Properties of stereotype Knowledge Product

4.4.3. Example Learn PAd DSL elements creation

It is recommended to follow the customization creation process shown in Fig. 4.29:

Figure 4.29: MagicDraw customization process

As shown in the process, the best practice is to create the MagicDraw customization elements separately from the Learn PAd model and to use them as an external UML profile that provides special properties to the standard MagicDraw tool. A short description of the MagicDraw customization process follows:

– Create Profile Project – in order to start the customization, first of all a separate project needs to be created where all the modelling tool customization elements will be placed. DSL elements must be placed in a package of type Profile;

– Create Customization Element – the customization begins with creating the following components (see Fig. 4.30):

• Profile diagram – usually used for creating the customization; it is easier to create the customization in a diagram. The diagram is also used to visualize the customization and to show the relations between stereotypes in a visual form (see Fig. 4.31);
• Stereotype – used to apply customization data to a standard UML element. Standard UML elements are extended with the properties of a stereotype when this stereotype is applied to the element. Examples of stereotypes are shown in Fig. 4.31;
• Customization element – a specific element that contains the customization data and specifies the customization target. Examples of customization elements are shown in Fig. 4.31.

– Create Custom Diagram – in order to create a custom diagram, custom stereotypes are needed first. In Fig. 4.31 you can see the stereotypes that are used to create the Document and Knowledge diagram for Learn PAd. Custom diagrams create new elements that have the properties defined by the stereotypes. Custom diagrams are created as separate descriptors that can be imported and exported;

– Add profile to project – the customized elements can be used in the modelling environment after adding the Learn PAd profile to every new project. The profile can be added to a selected project as an external module.

Figure 4.30: Components of DSL customization creation

Figure 4.31: Example of profile customization diagram

4.5. Learn PAd Modeling Environment (based on MD)

The Learn PAd profile for MagicDraw was created with MagicDraw using its DSL engine. It allows creating models with the properties and relations defined in the Learn PAd metamodel. The profile is created as a separate project and must be added to a project as an external module, see Fig. 4.32:

Figure 4.32: Learn PAd profile usage as Module in project

Overall, the Learn PAd profile provides the ability to create the following models:

– Business Process Model (BPMN)

– Business Motivation Model (BMM)

– Competency Model

– Document and Knowledge Model

– Organizational Structure Model

– Key Performance Indicator Model (KPI)

The Learn PAd profile for MagicDraw does not support the Case Management Model and Notation at the moment. In order to create Learn PAd models, the modeller needs to choose the appropriate diagrams. Business Process, Business Motivation and Organization Structure diagrams can be accessed through the menu Create Diagram > Business Architecture Diagrams, see Fig. 4.33:

Figure 4.33: Business Architecture Diagrams

Competency, Document and Knowledge and KPI diagrams can be accessed through the menu Create Diagram > Learn PAd, see Fig. 4.34:

Figure 4.34: Learn PAd specific diagrams

Creating Business Motivation Model (BMM)

For BMM model creation the modeller needs to create a diagram from the menu Diagram > Business Architecture Diagrams > Business Motivation Diagram, see Fig. 4.33. After creating the BMM diagram the modeller is able to create BMM model elements and define their specification. The diagram allows creating all the BMM model elements defined in the Learn PAd metamodel, see Fig. 4.35. After creating an element in the diagram, the modeller needs to open the element’s specification window and enter additional information for the element, see Fig. 4.35.

Figure 4.35: BMM diagram

Creating BPMN diagram

For BPMN model creation the modeller needs to create a diagram from the menu Diagram > Business Architecture Diagrams > BPMN Process Diagram, see Fig. 4.33. After creating the BPMN diagram the modeller is able to create BPMN model elements and define their specification. The diagram allows creating all the BPMN model elements defined in the Learn PAd metamodel, see Fig. 4.36. After creating an element, the modeller needs to open the element’s specification window and enter additional information for the element, see Fig. 4.36.

Figure 4.36: BPMN diagram

Creating Competency diagram

For Competency model creation the modeller needs to create a diagram from the menu Diagram > Learn PAd > Competency Diagram, see Fig. 4.34. After creating the Competency diagram the modeller is able to create Competency model elements and define their specification. The diagram allows creating all the Competency model elements defined in the Learn PAd metamodel, see Fig. 4.37. After creating an element, the modeller needs to open the element’s specification window and enter additional information for the element, see Fig. 4.37.

Figure 4.37: Competency diagram

Creating Document and Knowledge diagram

For Document and Knowledge model creation the modeller needs to create a diagram from the menu Diagram > Learn PAd > Document and Knowledge Diagram, see Fig. 4.34. After creating the Document and Knowledge diagram the modeller is able to create Document and Knowledge model elements and define their specification. The diagram allows creating all the Document and Knowledge model elements defined in the Learn PAd metamodel, see Fig. 4.38. After creating an element, the modeller needs to open the element’s specification window and enter additional information for the element, see Fig. 4.38.

Figure 4.38: Document and Knowledge diagram

Creating Organization Structure Diagram

For Organization Structure model creation the modeller needs to create a diagram from the menu Diagram > Business Architecture Diagrams > Organization Structure Diagram, see Fig. 4.33. After creating the Organization Structure diagram the modeller is able to create Organization Structure model elements and define their specification. The diagram allows creating all the Organization Structure model elements defined in the Learn PAd metamodel, see Fig. 4.39. After creating an element, the modeller needs to open the element’s specification window and enter additional information for the element, see Fig. 4.39.

Creating KPI diagram

For KPI model creation the modeller needs to create a diagram from the menu Diagram > Learn PAd > KPI Diagram, see Fig. 4.34. After creating the KPI diagram the modeller is able to create KPI model elements and define their specification.

Figure 4.39: Organization Structure diagram

The diagram allows creating all the KPI model elements defined in the Learn PAd metamodel. After creating an element, the modeller needs to open the element’s specification window and enter additional information for the element.

5 Learn PAd Transformation Platform

In this chapter we describe the Learn PAd Transformation Platform that, starting from a graphical representation of a business process, allows automatically creating the corresponding process wiki pages. In particular, the result of the process described in this architecture is an XWiki structure containing files and folders in a well-defined order. To this end, we adopt state-of-the-art techniques and tools provided by the Eclipse Modeling Framework (EMF). In this way it is possible to

– transform the outcome of the diagrammatic modeling stage (typically an XML document) into an XWiki structure representing the process and

– factorize model operations and enhance relevant quality factors such as maintainability and extensibility.

In the next section we introduce the Eclipse Modeling Framework (EMF)1; more specifically we present the components that have been isolated and integrated into the Learn PAd platform in order to have the modeling facilities needed for our purposes. EMF is part of the Eclipse project, whose goal is to provide a highly integrated tool platform. It has a core project which includes a generic framework for tool integration and a Java development environment. Other projects extend the core framework to support specific kinds of tools and development environments. The development work in Eclipse is divided into numerous top-level projects, including the Eclipse Project, the Modeling Project, the Tools Project, and the Technology Project among others. In particular, the Eclipse Project contains the core components needed for the development using Eclipse. In addition, the Eclipse Modeling Project is the focal point for the evolution and promotion of model-based development technologies.

5.1. EMF/Ecore

The Eclipse Modeling Framework (EMF) is an Eclipse-based modeling framework and represents the core of the Eclipse Modeling Project. It is composed of a set of plug-ins which can be used to define a data model and to generate code or other output based on this model. A data model, sometimes also called a domain model, represents the data you want to work with. For example, if you develop an online flight booking application, you might build your domain model with objects like Person, Flight, Booking, etc. A good practice is to model the data model of an application independently of the application logic or user interface. This approach leads to classes with almost no logic and a lot of properties, e.g., Person would have the properties firstName, lastName, Address, etc. EMF distinguishes between the metamodel and the actual model: the metamodel describes the model structure, while an actual model is a concrete instance of this metamodel. EMF allows the developer to create the metamodel via different means, e.g., XMI, Java annotations, UML or an XML schema. It also allows persisting the data model; the default implementation uses a data format called XML Metadata Interchange (XMI), a standard for exchanging meta-data information via the Extensible Markup Language (XML). It is a direct serialization of Ecore and does not add any extra information.

1https://eclipse.org/modeling/emf/

From the information stored in the EMF model specification, described in XMI, EMF provides tools and runtime support to produce a set of Java classes for the model, a set of adapter classes that enable viewing and command-based editing of the model, and a basic editor. Models can be specified using annotated Java, UML, XML documents, or modeling tools, and then imported into EMF. With these characteristics, EMF provides the foundation for interoperability with other EMF-based tools and applications. A typical use case for EMF is to specify a metamodel which represents the domain model of your application and to use EMF functionalities to generate the corresponding Java implementation classes from this model. The output generation is not limited to Java classes, and the EMF framework supports extending the generated code by hand. Alternatively, the EMF model (which holds real data based on the model structure) can also be used to generate output, or it can be interpreted at runtime within an application. EMF consists of three fundamental pieces:

– EMF: the core EMF framework includes a meta model (Ecore) for describing models and runtime support for the models, including change notification, persistence support with default XMI serialization, and a very efficient reflective API for manipulating EMF objects generically.

– EMF.Edit: the EMF.Edit framework includes generic reusable classes for building editors for EMF models. It provides:

• content and label provider classes, property source support, and other convenience classes that allow EMF models to be displayed using standard desktop (JFace) viewers and property sheets;
• a command framework, including a set of generic command implementation classes for building editors that support fully automatic undo and redo.

– EMF.Codegen: the EMF code generation facility is capable of generating everything needed to build a complete editor for an EMF model. It includes a GUI from which generation options can be specified, and generators can be invoked. The generation facility leverages the JDT (Java Development Tooling) component of Eclipse.

Three levels of code generation are supported:

– Model: provides Java interfaces and implementation classes for all the classes in the model, plus a factory and package (meta data) implementation class.

– Adapters: generates implementation classes that adapt the model classes for editing and display.

– Editor: produces a properly structured editor that conforms to the recommended style for Eclipse EMF model editors and serves as a starting point from which to start customizing.

All generators support regeneration of code while preserving user modifications, and they can be invoked either through the GUI or headless from the command line. To summarize, with EMF it is possible to explicitly define the domain model, which provides clear visibility of the model. The code generator for EMF models can be adjusted in its default settings. EMF provides change notification functionality in case of model changes. EMF generates interfaces and a factory to create user objects; therefore, it helps to keep the application clean from the individual implementation classes. Another advantage is that the Java code can be regenerated from the model at any moment.
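To give a feel for the reflective API and the dynamic use of EMF mentioned above, the following is a minimal sketch; the Person/firstName names come from the flight-booking example earlier in this section and the package nsURI is illustrative. It builds a tiny metamodel on the fly and manipulates a dynamic instance without any generated code.

import org.eclipse.emf.ecore.EAttribute;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.emf.ecore.EcoreFactory;
import org.eclipse.emf.ecore.EcorePackage;

public class ReflectiveEmfDemo {
    public static void main(String[] args) {
        EcoreFactory factory = EcoreFactory.eINSTANCE;

        // A tiny metamodel defined on the fly: a Person class with a firstName attribute.
        EClass person = factory.createEClass();
        person.setName("Person");
        EAttribute firstName = factory.createEAttribute();
        firstName.setName("firstName");
        firstName.setEType(EcorePackage.Literals.ESTRING);
        person.getEStructuralFeatures().add(firstName);

        EPackage pkg = factory.createEPackage();
        pkg.setName("booking");
        pkg.setNsPrefix("booking");
        pkg.setNsURI("http://www.example.org/booking"); // illustrative namespace URI
        pkg.getEClassifiers().add(person);

        // No generated Java classes are needed: the package factory creates a dynamic
        // instance and the reflective API reads and writes its features generically.
        EObject somebody = pkg.getEFactoryInstance().create(person);
        somebody.eSet(firstName, "Ada");
        System.out.println(somebody.eGet(firstName)); // prints "Ada"
    }
}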

Ecore

The model used to represent models in EMF is called Ecore, which is itself an EMF model and thus its own metamodel (i.e., Ecore is defined in terms of itself).

Ecore, and its XMI serialization, is the centre of the EMF world, and a simplified subset of its metamodel is shown in Figure 5.1 2. Four Ecore

Figure 5.1: Simplified Ecore Meta-model

classes are needed to represent our model:

- EClass is used to represent a modeled class. It has a name, zero or more attributes, and zero or more references.

- EAttribute is used to represent a modeled attribute. Attributes have a name and a type.

- EReference is used to represent one end of an association between classes. It has a name, a boolean flag to indicate if it represents containment, and a reference (target) type, which is another class.

- EDataType is used to represent the type of an attribute. A data type can be a primitive type like int or float or an object type like java.util.Date.

An Ecore model can be created from any of at least three sources: a UML model, an XML Schema3, or annotated Java interfaces. The most important benefit of EMF, as with modeling in general, is the boost in productivity that results from automatic code generation: starting from an Ecore model, Java implementation code and, optionally, other forms of the model can be generated. This generation is driven by the genmodel file, which is also part of the metamodel and contains additional information for the code generation, e.g., the path and file information. The genmodel file also contains the control parameters for how the code should be generated.
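Complementing the dynamic example above, the following sketch illustrates the four Ecore classes just listed: it builds the flight-booking metamodel programmatically (EClass, EAttribute with the EString EDataType, and a containment EReference) and serializes it as a .ecore (XMI) file. The package nsURI and the file name are illustrative, and the snippet is independent of the Learn PAd metamodels.

import java.io.IOException;
import java.util.Collections;

import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.EAttribute;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.emf.ecore.EReference;
import org.eclipse.emf.ecore.EcoreFactory;
import org.eclipse.emf.ecore.EcorePackage;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.ecore.xmi.impl.EcoreResourceFactoryImpl;

public class BookingMetamodel {
    public static void main(String[] args) throws IOException {
        EcoreFactory f = EcoreFactory.eINSTANCE;

        // EClass + EAttribute: a Person with a firstName of EDataType EString.
        EClass person = f.createEClass();
        person.setName("Person");
        EAttribute firstName = f.createEAttribute();
        firstName.setName("firstName");
        firstName.setEType(EcorePackage.Literals.ESTRING);
        person.getEStructuralFeatures().add(firstName);

        // EClass + containment EReference: a Booking that contains Persons.
        EClass booking = f.createEClass();
        booking.setName("Booking");
        EReference passengers = f.createEReference();
        passengers.setName("passengers");
        passengers.setEType(person);
        passengers.setContainment(true);
        passengers.setUpperBound(-1); // unbounded multiplicity
        booking.getEStructuralFeatures().add(passengers);

        // The EPackage groups the classifiers; nsURI and file name are illustrative.
        EPackage pkg = f.createEPackage();
        pkg.setName("booking");
        pkg.setNsPrefix("booking");
        pkg.setNsURI("http://www.example.org/booking");
        pkg.getEClassifiers().add(person);
        pkg.getEClassifiers().add(booking);

        // Serialize the metamodel as a .ecore file (XMI).
        ResourceSet rs = new ResourceSetImpl();
        rs.getResourceFactoryRegistry().getExtensionToFactoryMap()
          .put("ecore", new EcoreResourceFactoryImpl());
        Resource resource = rs.createResource(URI.createFileURI("booking.ecore"));
        resource.getContents().add(pkg);
        resource.save(Collections.emptyMap());
    }
}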

In the following is the list of the minimum required libraries, in Maven POM file format4, needed to integrate EMF into the Learn PAd platform:

...
<dependencies>
  <dependency>
    <groupId>org.eclipse.core</groupId>
    <artifactId>runtime</artifactId>
    <version>3.9.100-v20131218-1515</version>
  </dependency>
  <dependency>
    <groupId>org.eclipse.emf</groupId>
    <artifactId>ecore</artifactId>
    <version>2.10.0</version>
  </dependency>

2 This diagram only shows the parts of Ecore needed to provide a glimpse of it, so we avoid showing base classes. For example, in the real Ecore metamodel the classes EClass, EAttribute, and EReference share a common base class, ENamedElement, which defines the name attribute that here we have shown explicitly in the classes themselves.
3 There is an important advantage to using XML Schema to define a model: given the schema, instances of the model can be serialized to conform to it. In addition to simply defining the model, the XML Schema approach also specifies something about the persistent form of the instances.
4 https://maven.apache.org

Learn PAd FP7-619583 55 14 15 org.eclipse.emf 16 ecore.xmi 17 2.10.0 18 19 20 org.eclipse.emf 21 common 22 2.10.0 23 24 25 ...

Once EMF is integrated in the platform, different kinds of transformations can be performed: for example, both model-to-model and model-to-code transformations, or a combination of them. This leads to an improvement in modularity: instead of writing a single big transformation, it can be divided into smaller ones, which also improves the maintainability of the overall process. In the following we will see how we integrated both the model-to-model and the model-to-code transformations into the EMF environment.

5.1.1. Model-to-model integration: ATL

ATL (ATLAS Transformation Language)5 [25] is a hybrid model transformation language that allows both declarative and imperative constructs to be used in transformation definitions. ATL is applied in a transformational pattern shown in Figure 5.2. In this pattern a source model Ma is transformed into a target model Mb according to a transformation definition mma2mmb.atl written in the ATL language.

Figure 5.2: Overview of ATL transformational approach

The transformation definition is itself a model. The source and target models and the transformation definition conform to their metamodels MMa, MMb and ATL, respectively; the metamodels in turn conform to the MOF meta-metamodel [39]. ATL transformations are unidirectional, operating on read-only source models and producing write-only target models: during the execution of a transformation the source model may be navigated but not changed, while the target model cannot be navigated at all. A bidirectional transformation is implemented as a pair of transformations, one for each direction.

5https://eclipse.org/atl/

The model-to-model ATL transformation is integrated in the Learn PAd platform by including the libraries given in the corresponding fragment of the Maven specification in Annex B.1. Our goal was to create a simple standalone Java class that can be reused to run any ATL (EMF) model-to-model transformation programmatically. The class has a constructor, reported in Listing 5.1, that initializes some transformation options and, at the same time, registers the Ecore metamodel. Registering all the metamodels involved in the transformation is mandatory in order to make EMF aware of them. When a metamodel is registered, the actual file (.ecore) is parsed into an EPackage that is stored in EMF's package registry (essentially a hash map), using its nsURI as key. Once EMF knows about a registered metamodel, we are able to handle the models conforming to it.

public ATLTransformation() throws IOException {
    options = new HashMap();
    options.put("supportUML2Stereotypes", "false");
    options.put("printExecutionTime", "true");
    options.put("OPTION_CONTENT_TYPE", "false");
    options.put("allowInterModelReferences", "false");
    options.put("step", "false");
    Resource.Factory.Registry.INSTANCE.getExtensionToFactoryMap().put("ecore", new EcoreResourceFactoryImpl());
}
Listing 5.1: Class constructor for metamodel initialization
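As a complement to the constructor above, the following minimal sketch (the file name and helper name are illustrative, not taken from the Learn PAd code base) shows what registration amounts to conceptually: the .ecore file is loaded, its root EPackage is retrieved, and the package is put into EMF's global registry keyed by its nsURI.

import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.ecore.xmi.impl.EcoreResourceFactoryImpl;

public class MetamodelRegistration {
    public static void register(String ecorePath) {
        // Make sure .ecore files can be parsed.
        Resource.Factory.Registry.INSTANCE.getExtensionToFactoryMap()
                .put("ecore", new EcoreResourceFactoryImpl());

        // Load the metamodel and store its root EPackage in the global registry.
        ResourceSet rs = new ResourceSetImpl();
        Resource r = rs.getResource(URI.createFileURI(ecorePath), true);
        EPackage pkg = (EPackage) r.getContents().get(0);
        EPackage.Registry.INSTANCE.put(pkg.getNsURI(), pkg);
    }
}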

After the Ecore metamodel registration, the set function creates the execution environment and loads all the metamodels needed by the transformation. The execution environment is created from the resources provided as input to the function: the input model, the input metamodel, the output metamodel, the ATL transformation (the so-called modules), and two strings indicating the tags used within the transformation.

private void set(String model_in, String metamodel_in, String metamodel_out, String modules, String inTag, String outTag) throws ATLCoreException {
    ModelFactory factory = new EMFModelFactory();
    EMFInjector injector = new EMFInjector();
    this.inmodelMetamodel = factory.newReferenceModel();
    injector.inject(this.inmodelMetamodel, metamodel_in);
    this.outmodelMetamodel = factory.newReferenceModel();
    injector.inject(this.outmodelMetamodel, metamodel_out);
    this.outModel = factory.newModel(this.outmodelMetamodel);
    this.inModel = factory.newModel(this.inmodelMetamodel);
    injector.inject(this.inModel, model_in);

    this.modules = modules;
    this.inTag = inTag;
    this.outTag = outTag;
}

The getModulesList function returns the modules present in the transformation file (.atl). Such a file may contain several modules, each of them with the mappings that make the actual transformation possible. Then, each module is compiled into a separate ASM file; the ASM file is the intermediate file actually used to perform the transformation.

private InputStream[] getModulesList() throws IOException {
    InputStream[] modules = null;
    String[] moduleNames = this.modules.split(",");
    modules = new InputStream[moduleNames.length];
    for (int i = 0; i < moduleNames.length; i++) {
        String asmModulePath = new Path(moduleNames[i].trim())
            .removeFileExtension()
            .addFileExtension("asm")
            .toString();
        System.out.println(asmModulePath);

        Atl2006Compiler compiler = new Atl2006Compiler();
        compiler.compile(new FileInputStream(new File(moduleNames[i])), asmModulePath);

        modules[i] = new FileInputStream(asmModulePath);
    }
    return modules;
}

The doTransformation function performs the actual transformation, using the getModulesList function to create, and then use, the ASM files.

private Object doTransformation(IProgressMonitor monitor) throws ATLCoreException, IOException, ATLExecutionException {
    ILauncher launcher = new EMFVMLauncher();
    List inputStreamsToClose = new ArrayList();
    Map launcherOptions = getOptions();
    launcher.initialize(launcherOptions);
    launcher.addInModel(this.inModel, "IN", this.inTag);
    launcher.addOutModel(this.outModel, "OUT", this.outTag);
    InputStream[] modulesStreams = getModulesList();
    inputStreamsToClose.addAll(Arrays.asList(modulesStreams));
    Object result = launcher.launch("run", monitor, launcherOptions,
        (Object[]) modulesStreams);
    for (InputStream inputStream : inputStreamsToClose) {
        inputStream.close();
    }
    return result;
}

This Java class is used to perform all the ATL transformations present in the Learn PAd platform: ADOxx2XWiki and MagicDraw2ADOxx (see Section 5.3.2). The result of an ATL transformation is a model, serialized in XMI, conforming to the output metamodel provided as input.
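For illustration only, assuming the class exposes a public wrapper around the set and doTransformation methods shown above (here called transform, a hypothetical name, as are all file paths), invoking the ADOxx2XWiki transformation could look like this:

// Hypothetical facade method and paths; the actual API of the platform may differ.
ATLTransformation atl = new ATLTransformation();
atl.transform(
    "models/input.xmi",              // input model (ADOxx)
    "metamodels/adoxx.ecore",        // input metamodel
    "metamodels/xwiki.ecore",        // output metamodel
    "transformations/ado2xwiki.atl", // ATL module(s)
    "ADOXX", "XWIKI");               // tags used inside the transformation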

5.1.2. Model-to-code integration: Acceleo

Acceleo is an open source code generator developed within the Eclipse community, implementing the OMG's (see note 6) MOFM2T (Model-to-Text, see note 7) specification. Acceleo has been created with the objective of providing the best possible tooling for code generation. As such, it possesses several key features, like an editor with syntax highlighting, error detection, completion, refactoring, etc., in order to help developers handle the lifecycle of their code generators. With its template-based approach, Acceleo can generate code for any kind of language: if you can write it, Acceleo can generate it (see note 8). Acceleo does not lock you inside the Eclipse environment; generators can easily be built and run outside of Eclipse. As we did for the model-to-model transformation, we created a class for executing the Acceleo transformation in a standalone environment. To do that, we imported the following libraries, in Maven POM file format:

  ...
  <dependency>
    <groupId>org.eclipse.acceleo</groupId>
    <artifactId>org.eclipse.acceleo.maven</artifactId>
    <version>3.5.0-SNAPSHOT</version>
  </dependency>
  <dependency>
    <groupId>org.eclipse.acceleo</groupId>
    <artifactId>org.eclipse.acceleo.engine</artifactId>
    <version>3.5.0-SNAPSHOT</version>
  </dependency>
  ...

6 http://www.omg.org
7 http://www.omg.org/spec/MOFM2T/1.0/
8 http://help.eclipse.org/mars/index.jsp?topic=%2Forg.eclipse.acceleo.doc%2Fpages%2Findex.

The Acceleo standalone Java class is composed of different functions. The first of them registers in EMF the metamodels involved in the model-to-code transformation, namely the Ecore and XWiki metamodels:

Resource.Factory.Registry.INSTANCE.getExtensionToFactoryMap().put("ecore", new EcoreResourceFactoryImpl());
Resource.Factory.Registry.INSTANCE.getExtensionToFactoryMap().put(IAcceleoConstants.EMTL_FILE_EXTENSION, new EMtlResourceFactoryImpl());

EPackage.Registry.INSTANCE.put(XwikiPackage.eNS_URI, XwikiPackage.eINSTANCE);
Resource.Factory.Registry.INSTANCE.getExtensionToFactoryMap().put("*", XwikiFactory.eINSTANCE);
super.registerResourceFactories(resourceSet);

Then the execute function actually performs the Acceleo transformation: it takes as input the Acceleo template (see Section 5.3.4) and, according to its rules, produces the resulting XWiki structure.

public void execute(String modelPath, String resultFolderPath) {
    URI modelURI = URI.createFileURI(modelPath);
    File folder = new File(resultFolderPath);
    List arguments = new ArrayList();
    Generate generator;
    try {
        generator = new Generate(modelURI, folder, arguments);
        generator.doGenerate(new BasicMonitor());
    } catch (IOException e) {
        e.printStackTrace();
    }
}

The result of the Acceleo transformation is the creation of the XWiki structure; more specifically, this transformation creates files and folders in a well-defined order.
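Assuming the standalone class described above is called AcceleoTransformation (a hypothetical name, as are the paths below), generating the XWiki structure from the XMI model produced by the ADOxx2XWiki transformation could look as follows:

// Hypothetical class name and paths, for illustration only.
AcceleoTransformation acceleo = new AcceleoTransformation();
acceleo.execute("models/output-xwiki.xmi", "generated/xwiki-structure");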

5.2. Architecture

The system architecture, illustrated in Figure 5.3, is composed of two subsystems:

- Pre-processing;
- Model Transformation Environment.

Each of them represents a phase of the overall process that, starting from an XML representation of the modeled business process, creates the XWiki structure from which the wiki pages will be generated. As described in Figure 5.4, the process receives as input an XML file coming from a graphical modeling tool, namely ADOxx or Magic Draw. The first step enables the pre-processing phase, which is composed of different activities (see Figure 5.5) depending on the type of XML file provided as input (ADOxx or Magic Draw). At the end of the pre-processing phase, an XMI file conforming to the corresponding metamodel (ADOxx or Magic Draw) is created, so that all the processing techniques made available by EMF can be applied to it. More specifically, the model-to-model and model-to-code transformations provided by the MTE component are enabled.

5.2.1. Pre-processing

In Figure 5.5 all the operations composing the flow of the pre-processing subsystem are described. These operations pre-process the XML (see note 9) input file in order to obtain a valid XMI file as a result.

Figure 5.3: LPMT Component Diagram

Figure 5.4: LPMT Sequence Diagram

9 In this section we refer to both XMI and XML simply as XML since, as already seen in Section 5.1, the XML Metadata Interchange (XMI) is an Object Management Group (OMG) standard for exchanging metadata information via the Extensible Markup Language (XML); XMI is thus a specific application of XML.

Whatever the type of the file to pre-process, ADOxx or Magic Draw, the first operation is to create a copy of the input file, so that all subsequent changes are made to the copy, leaving the original file unchanged. Then a validity check is performed, i.e., the XML file is checked for well-formedness. If the file is not valid, some repair procedures are activated, such as:

- XML root node replacement;

- XML root node namespace replacement;

- XML tag addition/deletion;

- XML tag repair (e.g., when opened tags are never closed).

A valid XML file is mandatory because the system has to parse it with a Java SAX parser (see note 10). This parsing is necessary in order to create an XMI file (see Section 5.1), namely the Ecore model representation conforming to the ADOxx or Magic Draw metamodel. After these common operations have been carried out, the pre-processing phase becomes specific to the file type provided as input.
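The well-formedness check mentioned above can be sketched as follows; this is a minimal example using the standard SAX parser, and the actual repair logic of the platform is more elaborate:

import java.io.File;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.helpers.DefaultHandler;

public class WellFormednessCheck {
    // Returns true if the copied input file is well-formed XML.
    public static boolean isWellFormed(File xmlCopy) {
        try {
            SAXParserFactory.newInstance().newSAXParser()
                    .parse(xmlCopy, new DefaultHandler());
            return true;
        } catch (Exception e) {
            // A SAXException or IOException means one of the repair steps above is needed.
            return false;
        }
    }
}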

ADOxx Repair

Let us examine the case where the input is an ADOxx XML file. The goal of this phase is to create a valid XMI model file conforming to the ADOxx metamodel. The first operation is the header replacement: a model conforming to the ADOxx metamodel needs a root tag carrying a set of mandatory properties, so this operation replaces the original header exported by ADOxx with a new one that declares those properties. To keep the model well-formed, the corresponding closing tag at the end of the file must be updated consistently with the new header.

10https://docs.oracle.com/javase/tutorial/jaxp/sax/parsing.html

After that, the subsequent operation concerns the model tag format. The ADOxx metamodel prescribes a specific form for the tag names of its models, which differs from the form found in the input XML file. To create an XMI model conforming to the ADOxx metamodel, the system has to parse the XML, retrieve all the tag names, and then rewrite them in the prescribed form for both opening and closing tags. However, attention must be paid to tag names that are substrings of other tag names: a naive tag name replacement in these cases could lead to inconsistent results. For this reason the order of the tag name replacements is important; that is why, after retrieving the tag names, we sort them by string length in descending order. The XML file is then parsed again to find both the opening and closing tags, and the tag name replacement is performed starting from the longest name down to the shortest. As a result of this phase, the system creates an XMI file, i.e., a model conforming to the ADOxx metamodel, which is passed to the Model Transformation Environment ready for processing.
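The length-descending replacement order can be sketched in Java as follows; the renameTag function is a hypothetical placeholder, since the exact ADOxx target tag form is not reproduced in this document:

import java.util.Comparator;
import java.util.List;

public class TagRenamer {
    // Rewrites every tag name, longest first, so that a name that is a substring
    // of another name cannot corrupt tags that have already been renamed.
    public static String renameAll(String xml, List<String> tagNames) {
        tagNames.sort(Comparator.comparingInt(String::length).reversed());
        for (String tag : tagNames) {
            xml = xml.replace("<" + tag, "<" + renameTag(tag))
                     .replace("</" + tag + ">", "</" + renameTag(tag) + ">");
        }
        return xml;
    }

    // Hypothetical placeholder for the ADOxx-specific tag form.
    private static String renameTag(String tag) {
        return tag.toUpperCase();
    }
}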

Magic Draw Repair

As for the ADOxx repair, some operations have to be performed on the Magic Draw XML file in order to obtain a valid XMI model file. The difference with respect to ADOxx is that the Magic Draw tool environment directly exports an XMI file. On the other hand, the problem with this file is the presence of a huge number of stereotypes that are not useful for our purposes, so the main operation is to clean the XMI file from these stereotypes. For lack of space we omit the complete XMI; its root tag declares many namespaces, most of which must be eliminated.


In particular, the stereotype tags have to be deleted together with all their sub-tags and related attributes.

For the proper disposal of these tags, their namespace declarations must also be removed from the root node (<xmi:XMI>). After that, the XMI file is parsed in order to retrieve all the tags to delete and to remove them, together with all their children, recursively.
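A minimal sketch of this recursive clean-up, assuming the stereotype elements to drop can be recognized by their namespace prefixes (the concrete prefixes are not listed in this document), could be:

import java.util.Set;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class StereotypeCleaner {
    // Removes every child element whose namespace prefix is in 'prefixes';
    // removing an element drops its whole subtree. Other elements are visited recursively.
    public static void removeByPrefix(Element element, Set<String> prefixes) {
        NodeList children = element.getChildNodes();
        for (int i = children.getLength() - 1; i >= 0; i--) {
            Node n = children.item(i);
            if (n instanceof Element) {
                Element child = (Element) n;
                if (child.getPrefix() != null && prefixes.contains(child.getPrefix())) {
                    element.removeChild(child);
                } else {
                    removeByPrefix(child, prefixes);
                }
            }
        }
    }
}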

5.2.2. Model Transformation Environment

The Model Transformation Environment (MTE) component uses the techniques and operations made available by the EMF environment (see Section 5.1) and is composed of the following components:

- Metamodels Corpus (MC);

- MagicDraw2ADOxx Transformation (model-to-model transformation);

- ADOxx2XWiki Transformation (model-to-model transformation);

- model-to-code Transformation.

The MC component provides all the metamodels needed to perform the transformations involved in the whole Model Transformation Environment:

- Ecore meta-model;

- ATL meta-model;

- ADOxx meta-model;

- MagicDraw meta-model;

- XWiki meta-model.

In our implementation all of these metamodels are available in the form of Java classes and interfaces derived from the genmodel (see Section 5.1), so that they can be used programmatically. The MTE component takes as input a valid XMI file, and different activities are performed (Figure 5.5) depending on its type.
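Because the metamodels are available as generated Java classes, their registration does not require loading any .ecore file; a sketch is given below. The AdoxxPackage and MagicdrawPackage names are hypothetical, while XwikiPackage appears in the code shown in Section 5.1.2.

import org.eclipse.emf.ecore.EPackage;

public class CorpusRegistration {
    public static void registerAll() {
        // Generated package literals expose the nsURI and the singleton EPackage instance.
        EPackage.Registry.INSTANCE.put(XwikiPackage.eNS_URI, XwikiPackage.eINSTANCE);
        EPackage.Registry.INSTANCE.put(AdoxxPackage.eNS_URI, AdoxxPackage.eINSTANCE);           // hypothetical name
        EPackage.Registry.INSTANCE.put(MagicdrawPackage.eNS_URI, MagicdrawPackage.eINSTANCE);   // hypothetical name
    }
}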

Model-to-model transformation: ADOxx2XWiki

If the XMI file conforms to the ADOxx metamodel, then the ADOxx2XWiki model-to-model transformation is activated. More precisely, all the operations needed to enable an ATL transformation are performed (see Figure 5.6). The first operation is to prepare the Java class whose task is to perform the ATL transformation (see Section 5.1.1); to this end, a request for the needed metamodels is made to the MC component, which responds with the following resources in order to enable the ADOxx2XWiki transformation:

- Ecore meta-model;

- ADOxx meta-model;

- XWiki meta-model.

ADOxx2XWiki.atl is the actual ATL file containing all the rules used for the transformation. Starting from this file, as seen in Section 5.1.1, a compilation phase is run in order to generate a new intermediate ASM assembler file. An assembler file has the extension .asm and contains the compiled code of the corresponding ATL file, in this case ADOxx2XWiki.asm. The successful creation of the ASM file is also an indication of the syntactic correctness of the ATL transformation. Once this file and all the needed metamodels are available, the model-to-model transformation is automatically and programmatically performed and a new model, an XMI file conforming to the XWiki metamodel, is created.

Model-to-code transformation

The last transformation involved in the overall process is the model-to-code transformation. Specifically, this is an Acceleo transformation that takes as input the XMI file conforming to the XWiki metamodel, produced by the ADOxx2XWiki model-to-model transformation (see Section 5.2.2.1), and creates the expected XWiki structure, i.e., files and folders arranged in a specific order. This component requires the XWiki metamodel, which is provided by the MC component. Similarly to the model-to-model transformation, this model-to-code transformation also involves the creation of an intermediate compiled file: in an Acceleo transformation the rules are written in a template file, the Model to Text Language (MTL) file, i.e., an Acceleo module, and the compiled file produced from this module is the EMTL file.

Model-to-model transformation: MagicDraw2ADOxx

If the XMI file conforms to the MagicDraw metamodel, the MagicDraw2ADOxx transformation is enabled. This model-to-model transformation creates a model conforming to the ADOxx metamodel; in this way we reuse both the existing model-to-model and model-to-code transformations. Indeed, once a model conforming to ADOxx is obtained, the system is able to produce a model conforming to the XWiki metamodel (see Section 5.2.2.1) and consequently to perform the model-to-code transformation (see Section 5.2.2.2). As depicted in Figure 5.6, this component requests the needed metamodels from the MC component:

- Ecore meta-model;

- ADOxx meta-model;

- MagicDraw meta-model.

MagicDraw2ADOxx.atl is the actual transformation file containing all the rules to produce a model conforming to ADOxx starting from a model conforming to the MagicDraw metamodel. After the ASM file creation, the ATL transformation is performed and the XMI file is produced as a result. Once this file is created, both the ADOxx2XWiki model-to-model transformation (see Section 5.2.2.1) and the model-to-code transformation (see Section 5.2.2.2) are enabled sequentially.

Figure 5.5: LPMT Activity Diagram

Figure 5.6: LPMT Sequence Diagram Model Transformation Environment

5.3. Transformations

In this section the concrete transformations are described. In particular, for each of them we provide an analysis of the involved metamodels (which differ from the metamodel in Sec. 4.1) and the rationale behind the transformation.

5.3.1. M2M ad-hoc ADOxx (XSD) to ADOxx (EMF)

ADOxx Metamodel

The ADOxx modelling environment permits modelers to describe how processes, actors, means, goals, resources and KPIs in a Public Administration work. The resulting model contains the knowledge and the procedures of the entire administration, which must be imported into the Learn PAd platform in order to fulfil the learning objectives set in the project. Such a model, according to the Learn PAd metamodel defined in [37], conforms to the ADOxx Meta-model (see Fig. 5.7). The conformance check is done at the ADOxx platform level by means of libraries that ensure well-formed models.

Figure 5.7: ADOXX Conformance

In order to define the mappings for bridging ADOxx and EMF/Ecore, the ADOxx meta-model, depicted in Fig. 5.8, has been inspected in depth and reverse engineered. It is a universal (graph-based) meta-model, agnostic of the domain-specific language: any model (e.g., LPAD, BPMN2, or even UML) can be represented with it. In the following, an analysis of the artifact is given in order to better understand the data involved in the transformation chain.

Class Name: ADOXMLType

Super class:

Description: This is the root element of the exported model from the ADOXX modelling tool.

Attributes:
- mODELS (MODELSType, 0..*): This relationship contains the model types exported from the modelling tool.
- mODELGROUPS (MODELGROUPSType, 0..*): This relationship contains the groups of models.

Class Name: MODELSType

Super class:

Figure 5.8: ADOXX Source Metamodel

Description: The model element MODELSType collects the list of exported models.

Attributes:
- mODEL (MODELType, 1..*): The collection of MODELType elements in the model.

Class Name: MODELType

Super class:

Description: MODELType defines the model kind to be processed in the learning environment. It can be one of: Overview Model, BPMN, BMM, CMMN, Document and Knowledge Model, Competency Model and KPI.

Attributes:
- id (String): The model id provided by ADOXX.
- modeltype (String): The modeltype defines the model kind (one of the above defined models).
- name (String): The name of the model.
- version (String): The version of the model.
- cONNECTOR (CONNECTORType, 0..*): The reference to the CONNECTORType (intra-model relationship).
- mODELATTRIBUTES (MODELATTRIBUTESType, 1..1): The reference to the MODELATTRIBUTESType.
- iNSTANCE (INSTANCEType, 0..*): The reference to the INSTANCEType.

Class Name: MODELATTRIBUTESType

Super class:

Description: The MODELATTRIBUTESType is the model element that gathers all the ATTRIBUTEType elements of the model.

Attributes:
- aTTRIBUTE (ATTRIBUTEType, 0..*): The relationship with the attributes.

Class Name: ATTRIBUTEType

Super class:

Description: Contains several pieces of information used by the ADOXX environment to represent, draw and manage the model. Moreover, it carries the 'Model Set ID' of the model used in the XWiki.

Attributes:
- name (String): The name of the attribute.
- type (String): The type of the attribute.
- value (String): The value of the attribute.

Class Name: INSTANCEType

Super class:

Description: The INSTANCEType represents every entity defined in the Learn PAd metamodel according to deliverable [37].

Attributes:
- class (String): The instance class name.
- id (String): The object id provided by the ADOXX modelling tool.
- name (String): The instance name.
- aTTRIBUTE (ATTRIBUTEType, 0..*): The related ATTRIBUTEType.
- iNTERREF (INTERREFType, 0..*): The related INTERREFType.

Class Name: INTERREFType

Super class:

Description: The INTERREFType model element collects the inter-model references from the source INSTANCEType to a target model element defined in another model kind. The INTERREFType is the representation of the model weaving defined in [37].

Attributes:
- name (String): A descriptive name.
- iREF (IREFType, 0..*): The relationship with the IREFType.

Class Name: IREFType

Super class:

Description: The IREFType contains the information about inter-model or intra-model relations.

Attributes:
- tclassname (String): The instance class name.
- tmodelname (String): The model name.
- tmodeltype (String): The modeltype defines the model kind.
- tmodelver (String): The linked model version.
- tobjname (String): The target object name.
- type (String): The type is one of the constants 'modelreference' for inter-model relations or 'objectreference' for intra-model relations.

Class Name: CONNECTORType

Super class:

Description: CONNECTORType is the model element that provides the intra-model relationship between two instances, referenced respectively by 'fROM' and 'tO'.

Attributes:
- class (String): The class name in ADOXX.
- id (String): The id provided by the modelling environment.
- aTTRIBUTE (ATTRIBUTEType, 0..*): The related ATTRIBUTEType.
- iNTERREF (INTERREFType, 0..*): The related INTERREFType.
- fROM (FROMType, 1..1): The source instance.
- tO (TOType, 1..1): The target instance.

Class Name: FROMType

Super class:

Description: FROMType collects the information that refers to a source instance by means of a 'logical' link.

Attributes:
- class (String): The class name.
- instance (String): The instance name.

Class Name: TOType

Super class:

Description: TOType collects the information that refers to a target instance by means of a 'logical' link.

Attributes:
- class (String): The class name.
- instance (String): The instance name.

Class Name: MODELGROUPSType

Super class:

Description: MODELGROUPSType defines a grouped collector of model kinds.

Attributes:
- mODELGROUP (MODELGROUPType, 1..*): The reference to the grouped models.

Class Name: MODELGROUPType

Super class:

Description: MODELGROUPType defines a group of model kinds.

Attributes:
- name (String): The name of the group.
- mODELGROUP (MODELGROUPType, 0..*): This reference permits a hierarchical structure.
- mODELREFERENCE (MODELREFERENCEType, 0..*): The reference to the models in the group.

Class Name: MODELREFERENCEType

Super class:

Description: MODELREFERENCEType defines the 'logical' connection to the model kind.

Attributes:
- libtype (String): The ADOXX library, 'bp'.
- modeltype (String): The model type: Overview Model, BPMN, BMM, CMMN, Document and Knowledge Model, Competency Model or KPI.
- name (String): The name of the model.
- version (String): The version of the model.

The ADOxx export procedure provides us with: i) the XML file that gathers the whole data model (models, graphical notations, information system) in ADOxx, and ii) the DTD file that expresses the grammar and the constraints which must be respected by the exported model. As described in Sec. 5.2, the transformation chain in Learn PAd works on the EMF environment; therefore, both artefacts (XML and DTD) have to be transformed into the Ecore format in order to be automatically processable. In this section, we describe how the ADOxx metamodel was transformed from the initial DTD to the final Ecore format by means of the following two operations:

1) the DTD ADOxx metamodel has been translated into a corresponding XSD schema by means of a conversion script. In particular, in Fig. 5.9, the source DTD model adoxml31.dtd is passed as input to the dtd2xsd.pl procedure, which produces as output the converted XSD model according to the encoding rules for elements and attributes defined in Tab. 5.15 and Tab. 5.16, respectively.

2) the generated XSD schema is the input of the EMF import wizard, whose result is the ADOxx Ecore metamodel (see Fig. 5.10). A programmatic sketch of this second step is given below.
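The same XSD-to-Ecore step performed by the EMF wizard can also be scripted; a minimal sketch using the EMF XSD importer (org.eclipse.xsd.ecore.XSDEcoreBuilder, assuming the org.eclipse.xsd bundle is on the classpath) is:

import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.xsd.ecore.XSDEcoreBuilder;

public class Xsd2Ecore {
    public static void main(String[] args) {
        XSDEcoreBuilder builder = new XSDEcoreBuilder();
        // Derives the EPackages corresponding to the XSD produced from adoxml31.dtd.
        for (Object o : builder.generate(URI.createFileURI("adoxml31.xsd"))) {
            if (o instanceof EPackage) {
                System.out.println("Generated EPackage: " + ((EPackage) o).getName());
            }
        }
    }
}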

Figure 5.9: ADOxx DTD to XSD transformation

Tab. 5.15: Encoding elements (Source DTD to Target XSD)

Tab. 5.16: Encoding attributes (Source DTD to Target XSD)

Figure 5.10: ADOxx XSD to Ecore metamodel transformation

5.3.2. M2M ADOxx (EMF) to XWIKI (EMF)

XWIKI Metamodel

The XWIKI platform provides a general information system able to deal with a multitude of interests. In order to satisfy a wide range of users and functionalities, its metamodel is quite general: it includes 32 meta-classes, of which only a subset was used in the transformation chain for Learn PAd purposes. As discussed in Sec. 5.2, the transformation chain in Learn PAd consists of several steps in order to obtain the final source code; among them, we have two transformations: i) Model2Model and ii) Model2Code. The XWIKI metamodel plays a key role at this stage since it is the target metamodel of the former transformation and the source metamodel of the latter. In Fig. 5.11 the above-mentioned artifact is depicted; in the following, we provide an analysis of the model elements involved in the chain.

Figure 5.11: XWIKI Target Metamodel

Class Name: Document Root

Super class:

Description: The Document Root model element is the root of the XWIKI metamodel. It contains the definitions of the pages, the properties and the objects that belong to the model.

Attributes:
- page (Page, 0..*): A containment field that gathers all the pages defined in the XWIKI.
- object (Object, 0..*): A containment field for the defined objects.
- property (Property, 0..*): A containment field for the properties defined in the platform.

Class Name: ObjectSummary

Super class:

Description: The ObjectSummary model element is responsible for gathering the information used by the controller that manages pages in the XWIKI environment. In particular, it describes how the object is identified, numbered and managed by the learning platform.

Attributes:
- classname (String): The classname value determines the controller that manages the object. Depending on it, we can have: i) 'LPCode.ModelSetClass' to manage a set of models (for instance the 'WebHome' may contain several model kinds as defined in [37]); ii) 'LPCode.ModelClass', responsible for managing the view of a specific model kind; iii) 'LPCode.BaseElementClass', able to manage a view of a specific instance of an element in a model kind, e.g., the class Activity in BPMN; and finally iv) 'LPCode.LinkClass' for the management of intra- and inter-model Learn PAd references.
- number (Int): A field that identifies the instances in the XWIKI environment.
- wiki (String): As decided by the Learn PAd architecture, this field contains a static parameter whose value is 'xwiki'.
- space (String): This field contains the modelset id provided by the ADOxx model.
- pageName (String): The page name of the processed model element.

Class Name: Object

Super class: ObjectSummary

Description: The Object model element inherits the properties defined in ObjectSummary. It defines the collector of the properties and resources contained in a page.

Attributes:
- property (Property, 0..*): A sequence of related properties.

Class Name: Property

Super class:

Description: The Property model element permits defining the meta-elements used in the object.

Attributes:
- name (String): According to the learning infrastructure, the allowed values are: id, modelid, type, name and documentation.
- type (String): Defines the type of the property.
- value (String): Defines the value of the property.

Class Name: PageSummary

Super class:

Description: The PageSummary model element is responsible for gathering the information and the link structure of the pages displayed in the XWIKI environment.

Attributes:
- title (String): The title of the page.
- parent (String): The parent page.
- wiki (String): As decided by the Learn PAd architecture, this field contains a static parameter whose value is 'xwiki'.
- space (String): This field contains the modelset id provided by the ADOxx model.
- pageName (String): The page name of the processed model element.
- content (String): The learning content to be displayed to the civil servant.

Class Name: Page

Super class: PageSummary

Description: The Page model element is responsible for the management of the learning content used by the civil servant.

Attributes:
- content (String): The learning content to be displayed to the civil servant.

The models exported by the ADOxx Modelling Environment, after an appropriate refinement as discussed in Sec. 5.2, must be transformed into the XWIKI format. The domain translation is made by an automatic ATL (see note 11) Model2Model transformation (see note 12). In ATL, model element variables are referred to by means of the notation metamodel!class, in which metamodel identifies (through its name) one of the metamodels handled by the transformation. In Fig. 5.12(a), the code snippet defines respectively the source ADOxx (Fig. 5.12(b)) and target XWIKI (Fig. 5.12(c)) metamodels involved in the transformation. The model elements can only be generated by means of the ATL rules (either matched or called rules). Generating a new model element consists in initializing its different features, which can be either attributes or references. The assignments are operated by means of the bindings of the target pattern model elements. The code excerpt in List. 5.2 shows the entrypoint called rule ModelOverview (line 1).

Figure 5.12: Source and Target Metamodel definition

The rule is implicitly invoked at the beginning of the transformation execution, once the module initialization phase has completed. It sets i) the modelSet used to assign the ModelSetId in the model elements generated in the rest of the transformation (line 7); ii) the filtered models to take into account as defined in the Overview Model (line 10); and, finally, iii) it invokes the rule initWebHome (line 13). The code specified within the imperative block makes the pointed class (modelSet) available in the whole context of the ATL module; this means that it remains accessible for further computation during the transformation.

11http://eclipse.org/atl 12https://github.com/LearnPAd/learnpad/blob/master/lp-collaborative-workspace/ lp-cw-bridge/lp-cw-bridge-transformer/resources/transformation/ado2xwiki.atl

1 entrypoint rule ModelOverview() {
2   using {
3     ...
4   }
5   do {
6     --set the modelSet
7     thisModule.modelSet <- modelSet;
8
9     --set the models to take into account
10     thisModule.sourceModels <- thisModule.loadSourceModels();
11
12     --create the WebHome
13     thisModule.initWebHome();
14   }
15 }
Listing 5.2: Entrypoint Rule

The called rule initWebHome, reported in List. 5.3, yields, as depicted in Fig. 5.13: i) the Document Root that contains each model element generated in XWIKI (lines 2-5); ii) the class Page that represents the WebHome in the Learn PAd platform (lines 7-14); and iii) the Object and Properties that allow the XWIKI environment to manage the aforementioned Page (lines 15-28). The Object class LPCode.ModelSetClass has three properties: id, name and documentation; in the aforementioned fragment, the definition of the Property id is shown (lines 23-27). The code enclosed in the do section assigns the values of i) the documentRoot and ii) the list of model kinds to be translated into XWIKI, both used throughout the entire transformation.

Figure 5.13: Fragment of Target Model (WebHome)

1rule initWebHome() { 2 to t: XWIKI!DocumentRoot ( 3 object <- msc, 4 page <- Sequence{}->append(p) 5 ), 6 -- WebHome 7 p: XWIKI!Page ( 8 title <- ’Home’, 9 parent <- ’Main.WebHome’, 10 wiki <- ’xwiki’, 11 space <- thisModule.getModelSetId(), 12 name <- ’WebHome’, 13 content <- ’{{include reference="LPCode\\.ModelSetWebHome"/}}’ 14 ), 15 msc:XWIKI!Object ( 16 className <- ’LPCode.ModelSetClass’, 17 number <- 0, 18 wiki <- ’xwiki’, 19 space <- thisModule.getModelSetId(), 20 pageName <- ’WebHome’,

21 property <- Sequence{msc_p1, msc_p2, msc_p3, msc_p4, msc_p5, msc_p6, msc_p7} 22 ), 23 msc_p1:XWIKI!Property ( 24 name <- ’id’, 25 type <- ’String’, 26 value <- thisModule.getModelSetId() 27 ), 28 ... 29 do { 30 thisModule.documentRoot <- t; 31 thisModule.instances <- thisModule.loadInstances; 32 } 33} Listing 5.3: Init base WebHome

The matched rules listed in the following complete the semantics of the transformation. In this case, the generated target model elements are obtained from the source; therefore, we have to specify i) which source model element must be matched by means of a pattern; ii) the number and the type of the generated target model elements; and iii) the way these elements must be initialized from the matched source. In List. 5.4, the rule generates the WebHome elements (page, object and properties) (lines 4-12) for each matched ADOXX!MODELType element contained in the thisModule.sourceModels variable (line 2). The generated Object class LPCode.ModelClass has four properties: id, name, type and documentation. At the end, the imperative do section assigns the generated pages and objects respectively to documentRoot.page (line 14) and documentRoot.object (line 15).

1rule MODELType2WebHome { 2 from s:ADOXX!MODELType(thisModule.sourceModels.includes(s)) 3 ------mod.XXXXX ------4 to p: XWIKI!Page ( 5 title <- s.name.escapeXML, 6 parent <- ’WebHome’, 7 wiki <- ’xwiki’, 8 space <- thisModule.getModelSetId(), 9 name <- s.id, 10 content <- ’’ 11 ), 12 ... 13 do { 14 thisModule.documentRoot.page <- p; 15 thisModule.documentRoot.object <- mc; 16 } 17} Listing 5.4: Generate model WebHome

Whilst the MODELType2WebHome rule generates the WebHome for each model kind, the code excerpt in List. 5.5 shows how a model element of type ADOXX!INSTANCEType is translated into an XWIKI Page. The INSTANCEType model element in the ADOxx metamodel (see Sec. 5.3.1.1) is able to represent any concept that belongs to the parent model kind (i.e., Task, Activity, StartEvent for BPMN; CompetenceSet, Profile for the Competency Model, and so on); therefore, typing information is encoded into attributes (mainly) as strings. The flexibility of this encoding permits us to define only one transformation rule for all the concepts in the Learn PAd metamodel, making the transformation very lightweight and flexible. Going into detail, the rule generates an XWIKI Page characterized by a title (line 5), the parent model kind represented by means of the Id attribute (line 6), the space that is the identifier of the Main.WebHome (line 8), and the name that is the page Id (line 9). The Object class LPCode.BaseElementClass has five properties: modelid, id, name, type and documentation. Among these, the most relevant is bec_p4 (line 12) because it manages the documentation in terms of learning content in the Learn PAd platform (lines 12-16).

1 rule INSTANCEType2Page {
2   from s:ADOXX!INSTANCEType
3   ------ obj.XXXXX ------
4   to t_p: XWIKI!Page (
5     title <- s.name.escapeXML,
6     parent <- s.refImmediateComposite().id.escapeId, --modelsetid
7     wiki <- 'xwiki',
8     space <- thisModule.getModelSetId(),
9     name <- s.id
10   ),
11   ...
12   bec_p4:XWIKI!Property (
13     name <- 'documentation',
14     type <- 'TextArea',
15     value <- thisModule.getDocumentationFromInstance(s).escapeXML
16   )
17 }
Listing 5.5: Generate Instances

The fragment reported in List. 5.6 is responsible for creating the link classes that allow the browsing of the wiki pages. The matched rule CONNECTORType2Object takes the CONNECTORType model element (line 2) and instantiates, in the XWIKI target model, two kinds of Object able to manage respectively the outgoing and the incoming link. The first generated Object (lines 3-10) manages the outgoing link by means of the attribute className, set to LPCode.LinkClass (line 4). The source class is stored in the attribute pageName and is identified by means of the Id provided by the ATL helper getConnectorSourceId (line 8). Each source class may have more than one link; therefore the number (line 5) identifies the object among the others. This number is incremented in the imperative do block by the helper incLinkClassNumber (line 34). The lco_p3 Property (lines 12-16), whose name is uri (line 13), defines the target class by means of the Id provided by the helper getConnectorTargetId (line 15). The other properties are: i) the id of type "String", which is the connector identifier, and ii) the type of type "StaticList", which specifies the link direction (outgoing). The discussed fragment (lines 3-16) concerns the creation of the outgoing link; without loss of detail, the same operations are performed for the incoming link (lines 17-30), exchanging the target elements with the respective source ones.

1 rule CONNECTORType2Object {
2   from c:ADOXX!CONNECTORType
3   to lco:XWIKI!Object (
4     className <- 'LPCode.LinkClass',
5     number <- thisModule.getNumberFromConnector('SOURCE', c),
6     wiki <- 'xwiki',
7     space <- thisModule.getModelSetId(),
8     pageName <- thisModule.getConnectorSourceId(c),
9     property <- Sequence{lco_p1, lco_p2, lco_p3}
10   ),
11   ...
12   lco_p3:XWIKI!Property (
13     name <- 'uri',
14     type <- 'String',
15     value <- thisModule.getConnectorTargetId(c)
16   ),
17   lci:XWIKI!Object (
18     className <- 'LPCode.LinkClass',
19     number <- thisModule.getNumberFromConnector('TARGET', c),
20     wiki <- 'xwiki',
21     space <- thisModule.getModelSetId(),
22     pageName <- thisModule.getConnectorTargetId(c),
23     property <- Sequence{lci_p1, lci_p2, lci_p3}
24   ),

25   ...
26   lci_p3:XWIKI!Property (
27     name <- 'uri',
28     type <- 'String',
29     value <- thisModule.getConnectorSourceId(c)
30   )
31   do {
32     thisModule.documentRoot.object <- lco;
33     thisModule.documentRoot.object <- lci;
34     thisModule.linkClassNumberMap <- thisModule.incLinkClassNumber(thisModule.getConnectorSourceId(c));
35     thisModule.linkClassNumberMap <- thisModule.incLinkClassNumber(thisModule.getConnectorTargetId(c));
36   }
37 }
Listing 5.6: Generate links between classes

In [37] weaving models are defined in order to specify correspondences between modelling elements belonging to different metamodels. In the transformation we have two kinds of inter-model weaving connections:

– Object2Object between two meta-classes;

– Object2Model between a meta-class and a model-kind.

In List. 5.7 a fragment of the Object2Object transformation is given. The code is quite straightforward and similar to that illustrated and discussed for List. 5.6; in the following, we therefore focus only on the code that differs from the aforementioned one. In particular, the transformation rule IREFTypeObjRef2Object, in its from section, filters the model elements of type IREFType whose attribute type equals "objectreference" (line 2). The to section generates two Objects with className "LPCode.LinkClass", each able to manage respectively the outgoing and the incoming inter-model weaving connection. The outgoing-weaving direction is managed by the first Object (lines 3-7): the pageName attribute contains the referenced source class Id (line 6) provided by the helper getIrefSourceId, while the Property lco_p3 (lines 9-13), whose name is set to "uri" (line 10), defines the target instance of the link; the helper getIrefTargetInstanceId provides the Id of type "String" to be stored in the value attribute. The incoming-weaving direction is managed by the second Object (lines 14-18): in contrast with the outgoing weaving, the pageName attribute contains the referenced target class Id (line 17) provided by the helper getIrefTargetInstanceId, while the Property lci_p3 (lines 20-24), whose name is set to "uri" (line 21), defines the source instance of the link; the helper getIrefSourceId provides the Id of type "String" to be stored in the value attribute. The other defined properties are: i) the id of type "String", which is the iref identifier, and ii) the type, which specifies the link direction and can have the value outgoing-weaving or incoming-weaving.

1rule IREFTypeObjRef2Object { 2 from i:ADOXX!IREFType(i.type=’objectreference’) 3 to lco:XWIKI!Object ( 4 className <- ’LPCode.LinkClass’, 5 ... 6 pageName <- thisModule.getIrefSourceId(i), 7 ), 8 ... 9 lco_p3:XWIKI!Property ( 10 name <- ’uri’, 11 type <- ’String’, 12 value <- thisModule.getIrefTargetInstanceId(i) 13 ), 14 lci:XWIKI!Object ( 15 className <- ’LPCode.LinkClass’, 16 ... 17 pageName <- thisModule.getIrefTargetInstanceId(i),

18 ), 19 ... 20 lci_p3:XWIKI!Property ( 21 name <- ’uri’, 22 type <- ’String’, 23 value <- thisModule.getIrefSourceId(i) 24 ) 25 do { 26 thisModule.documentRoot.object <- lco; 27 thisModule.documentRoot.object <- lci; 28 ... 29 } 30} Listing 5.7: Generate Object2Object weaving

The Object2Model transformation rule differs from the Object2Object one only in its from section. As shown in the excerpt in List. 5.8, the source model element IREFType (line 2) is filtered by means of the attribute type being equal to "modelreference".

1rule IREFTypeModRef2Object { 2 from i:ADOXX!IREFType((i.type=’modelreference’) and (not thisModule.getIrefTargetModelId(i). oclIsUndefined()) ) 3 ... 4} Listing 5.8: Generate Model2Object weaving

5.3.3. M2T Transformation from XWIKI (EMF) to XWIKI (XML)

XWIKI XML

This transformation creates one XML file for each Page and each Object of the model. An example of a Page XML file is given in Listing 5.9 and an example of an Object XML file in Listing 5.10.

1 2 3Main.WebHome 4Home 5{{include reference="LPCode\.ModelSetWebHome" /}} 6 Listing 5.9: Example of a Page XML file for XWiki

1 2 3LPCode.ModelSetClass 4epbr 5Set of models 6 Listing 5.10: Example of a Object XML file for XWiki

All these files are organized in a structure reflecting XWiki page organization. In an XWiki instance, you can have multiple wikis; each of them will contain pages which are organized per space. Finally, a page can contain zero, one or more objects (identified by its class, for example LPCode.ModelClass and its index). Note that by convention, a specific page called WebHome is also created as the entry point of the space this page is in.

As expressed in the Acceleo template, each page is named after a unique ID; all these pages are stored in a dedicated space for each set of models (also identified by an ID). Everything is stored in a single wiki called xwiki. For example, Listing 5.11 shows two sets of models (identified by epbr and unico), each with two pages containing a few objects.

1|-- xwiki 2|-- epbr 3| |-- mod.25873 4| | |-- objects 5| | | |-- LPCode.ModelClass 6| | | |-- 0 7| | | |-- __object.xml 8| | |-- __page.xml 9| |-- WebHome 10| |-- objects 11| | |-- LPCode.ModelSetClass 12| |-- 0 13| |-- __object.xml 14| |-- __page.xml 15|-- unico 16|-- mod.26725 17| |-- objects 18| | |-- LPCode.ModelClass 19| | |-- 0 20| | |-- __object.xml 21| |-- __page.xml 22|-- WebHome 23|-- objects 24| |-- LPCode.ModelSetClass 25| |-- 0 26| |-- __object.xml 27|-- __page.xml Listing 5.11: Example of the result of Acceleo transformation for 2 set of models epbr and unico with 2 pages in each and 1 object in each page
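The directory and file naming convention shown in Listing 5.11 can be summarized by the following Java sketch (the variable values are taken from the example above and are purely illustrative):

import java.nio.file.Path;
import java.nio.file.Paths;

public class XWikiLayout {
    public static void main(String[] args) {
        String space = "epbr";                  // one space per set of models
        String pageName = "mod.25873";          // page named after a unique ID
        String className = "LPCode.ModelClass"; // class of the attached object
        int number = 0;                         // index of the object within the page

        Path pageFile = Paths.get("xwiki", space, pageName, "__page.xml");
        Path objectFile = Paths.get("xwiki", space, pageName,
                "objects", className, String.valueOf(number), "__object.xml");

        System.out.println(pageFile);
        System.out.println(objectFile);
    }
}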

This is the current status of the Acceleo transformation; however, it may evolve in the future as bugs are found or improvements become necessary. The transformation discussed in Sec. 5.3.2 generates as output the XWIKI EMF artifact. In this section, we discuss the M2C transformation that generates the learning content (XML files) for the Learn PAd platform (see note 13). The transformation takes as input the model generated by the M2M transformation (see Sec. 5.3.2), and yields the resulting output by means of Acceleo (see note 14), a template-based transformation engine. The engine works on the mtl input file that embodies the module (see List. 5.12), which is parametrized with the source URI of the XWIKI meta-model (http://www.xwiki.org). Such a module contains two kinds of elements: i) templates, which generate the code, and ii) queries, which are used to encapsulate complex expressions.

1[module generate(’http://www.xwiki.org’)] Listing 5.12: Source metamodel

Templates are the most used structure in our transformation as they directly generate the code. They have: i) a visibility (public | protected | private) with a scope similar to that of object-oriented programming languages; ii) the name of the template; and finally iii) the parameters, declared following the convention name : type, where the types include both the source model elements (e.g., Page, Object, Property) and the default OCL ones like Boolean, String, Integer, OclAny. The template that generates the Page file structure is given in List. 5.13. The template (line 1), as well as those discussed below, uses two kinds of expressions to generate code: i) the Acceleo programming language constructs and ii) the XML code template. This excerpt generates, for each instance p : Page in the model (line 4), the file __page.xml (line 5) in a well-formed path according to XWIKI needs. Each generated page contains the XML namespace (line 7), the parent item of the current object (line 8), the title (line 9) and the learning content (line 10).

13 https://github.com/LearnPAd/learnpad/blob/master/lp-collaborative-workspace/lp-cw-bridge/lp-cw-bridge-transformer/src/main/java/eu/learnpad/transformations/model2text/main/generate.mtl
14 https://eclipse.org/acceleo/

1[template public webHome(doc : DocumentRoot)] 2[comment @main/] 3 4 [for (p : Page | Page.allInstances())] 5 [file (’//’+p.wiki+’//’+p.space+’//’+p.name+’//__page.xml’, false, ’UTF-8’)] 6 7 8 [p.parent/] 9 [p.title/] 10 [p.content/] 11 12 [/file] 13 [/for] 14 15 [for (o : Object | Object.allInstances())] 16 [file (’//’+o.wiki+’//’+o.space+’//’+o.pageName+’//objects//’+o.className+’//’+o.number+’//__object .xml’, false, ’UTF-8’)] 17 [getObjectContent(o)/] 18 [/file] 19 [/for] 20[/template] Listing 5.13: Template WebHome

As for the pages, the template generates, for each instance o : Object (line 15), the file __object.xml (line 16) and calls, as if it were a procedure, the template getObjectContent (line 17). List. 5.14 shows how the Object and Properties in a WIKI page are produced. In particular, the template creates the Object tag with its namespace (line 3), sets the className (line 4) and then, for each Property (line 5) whose name is not equal to "folder_id" (line 6), creates the corresponding tag with attributes such as name, type and value.

1[template private getObjectContent(obj:Object)] 2 3 4 [obj.className/] 5 [for (s : Property | obj.property)] 6 [if (not s.name.equalsIgnoreCase(’folder_id’))] 7 [s.value/] 8 [/if] 9 [/for] 10 11[/template] Listing 5.14: Template Objects

5.3.4. M2M Transformation from MD LPAD (EMF) to ADOxx (EMF)

In this section, we describe the model-to-model transformation written in ATL that maps models conforming to the MagicDraw-based Learn PAd final Metamodel to the ADOxx-based Learn PAd final Metamodel; both metamodels belong to the EMF technical space. The most interesting aspect of this transformation is that it does not have to bridge any abstraction gap, i.e., the two metamodels have the same informative content and can therefore be considered isomorphic in this respect. Nevertheless, it is necessary to provide a transformation capable of translating models from the former metamodel to the latter. For instance, the transformation must take the model in Fig. 5.15(a) (conforming to the metamodel in Fig. 5.14) and transform it into the model in Fig. 5.15(b) (conforming to the metamodel in Fig. 5.8). The transformation is outlined in the listings reported below. Since the source and target metamodels are isomorphic, the transformation performs only a notational translation without adding or deleting any element. The source metaclasses, such as FlowElementsContainer (see the abstract rule starting at line 1 in Listing 5.15) and FlowElement (see the abstract rule starting at line 1 in Listing 5.16), are mapped onto the MODELType and INSTANCEType elements, respectively. Similarly, any instance of the metaclass Association in the MagicDraw-based Learn PAd final Metamodel is translated into an instance of the CONNECTORType container in the ADOxx metamodel (see the rule at line 1 in Listing 5.17).

Figure 5.14: Magic Draw BPMN fragment metamodel

1abstract rule FlowElementsContainer2ModelType { 2 from s: MD!FlowElementsContainer 3 to t: ADOXX!MODELType ( 4 id <- s.id, 5 name <- s.name 6 ) 7} 8 9rule Process2ModelType extends FlowElementsContainer2ModelType { 10 from s: MD!SubProcess 11 to t: ADOXX!MODELType () 12 do { 13 thisModule.model2Instance(t); 14 } 15} 16 17rule SubProcess2ModelType extends FlowElementsContainer2ModelType { ... } 18rule AdHocSubProcess2ModelType extends FlowElementsContainer2ModelType { ... } Listing 5.15: Process and SubProcess to Model transformation fragment

Figure 5.15: Magic Draw to ADOXX ATL transformation example

1abstract rule FlowElement2Instance { 2 from s:MD!FlowElement 3 to t:ADOXX!INSTANCEType ( 4 id <- s.id, 5 name <- s.name, 6 class <- s.oclType().toString() 7 ) 8} 9 10rule Activity2Instance extends FlowElement2Instance { 11 from s: MD!Activity 12 to t: ADOXX!INSTANCEType () 13} 14 15rule CallActivity2Instance extends FlowElement2Instance { ... } 16rule Event2Instance extends FlowElement2Instance { ... } 17rule ThrowEvent2Instance extends FlowElement2Instance { ... } 18rule EndEvent2Instance extends FlowElement2Instance { ... } 19rule StartEvent2Instance extends FlowElement2Instance { ... } 20rule BoundaryEvent2Instance extends FlowElement2Instance { ... } 21rule IntermediateCatchEvent2Instance extends FlowElement2Instance { ... } 22rule Task2Instance extends FlowElement2Instance { ... } 23rule ManualTask2Instance extends FlowElement2Instance { ... } 24rule ServiceTask2Instance extends FlowElement2Instance { ... } 25rule UserTask2Instance extends FlowElement2Instance { ... } 26rule Gateway2Instance extends FlowElement2Instance { ... } 27rule InclusiveGateway2Instance extends FlowElement2Instance { ... } 28rule EventBasedGateway2Instance extends FlowElement2Instance { ... } 29rule ComplexGateway2Instance extends FlowElement2Instance { ... }

30rule ExclusiveGateway2Instance extends FlowElement2Instance { ... } 31rule ParallelGateway2Instance extends FlowElement2Instance { ... } 32rule DataObject2Instance extends FlowElement2Instance { ... } Listing 5.16: FlowElement to Instance transformation fragment

1rule Association2Connector { 2 from s:MD!Association 3 to t:ADOXX!CONNECTORType ( 4 id <- s.id, 5 fROM <- t1, 6 tO <- t2 7 ), 8 t1:ADOXX!FROMType( 9 class <- s.sourceRef.oclType().toString(), 10 instance <- s.sourceRef.toString() 11 ), 12 t2:ADOXX!TOType( 13 class <- s.targetRef.oclType().toString(), 14 instance <- s.targetRef.toString() 15 ) 16} Listing 5.17: Connector transformation fragment

6 Conclusions

In this deliverable, we described the tooling chain that transforms Learn PAd models into wiki structures. The most difficult part was that the chain had to cross several technical spaces: the models edited in the modeling environments (i.e., the ADOxx and MagicDraw modeling environments) have to be processed in order to make them comply with the XMI standard and, therefore, be processable on the EMF platform. The EMF platform is one of the most widely industry-adopted modeling frameworks because of the availability of a wide range of tools and languages. In particular, the ATL model transformation language has been used for generating XWiki models (from which the corresponding XML code is obtained) starting from the Learn PAd models. In order to take advantage of the EMF platform, it has been necessary to embed it into the Learn PAd platform. This required identifying the necessary EMF components, which have been, in turn, wrapped in order to expose the overall functionality by means of well-defined APIs. The EMF embedding into the Learn PAd platform has been the starting point for realizing the tooling chain, which consists of both ad-hoc operations and standard model-to-model transformations. It is worth noting that the model transformations have been defined starting from the ADOxx-based Learn PAd final Metamodel. This is due to the fact that the ADOxx metamodels were made available in the project before the MagicDraw ones. Later, once the MagicDraw-based Learn PAd metamodel was also ready, a bridge from it to the ADOxx one has been realized as a model transformation in ATL.

Annex A Metamodel Diagrams

Figure Annex A.1: final BPMN Metamodel

Figure Annex A.2: final CMMN Metamodel

Figure Annex A.3: final BMM Metamodel

Figure Annex A.4: final Competency Metamodel

Figure Annex A.5: final Document and Knowledge Metamodel

Figure Annex A.6: final Organizational Structure Metamodel

Figure Annex A.7: final KPI Metamodel

Annex B Listings

Annex B.1. Maven Library Specification

...
  <dependency>
    <groupId>org.eclipse.m2m.atl</groupId>
    <artifactId>driver.emf4atl</artifactId>
    <version>3.4.0</version>
  </dependency>
  <dependency>
    <groupId>org.eclipse.m2m.atl</groupId>
    <artifactId>common</artifactId>
    <version>3.5.0</version>
  </dependency>
  <dependency>
    <groupId>org.eclipse.m2m.atl</groupId>
    <artifactId>core.emf</artifactId>
    <version>3.4.0</version>
  </dependency>
  <dependency>
    <groupId>org.eclipse.m2m.atl</groupId>
    <artifactId>core</artifactId>
    <version>3.4.0</version>
  </dependency>
  <dependency>
    <groupId>org.eclipse.m2m.atl</groupId>
    <artifactId>emftvm</artifactId>
    <version>3.6.0</version>
  </dependency>
  <dependency>
    <groupId>org.eclipse.m2m.atl</groupId>
    <artifactId>emftvm.compiler</artifactId>
    <version>3.4.100.201308012235</version>
  </dependency>
  <dependency>
    <groupId>org.eclipse.m2m.atl</groupId>
    <artifactId>engine</artifactId>
    <version>3.4.0</version>
  </dependency>
  <dependency>
    <groupId>org.eclipse.m2m.atl</groupId>
    <artifactId>engine.emfvm</artifactId>
    <version>3.4.0</version>
  </dependency>
  <dependency>
    <groupId>org.eclipse.m2m.atl</groupId>
    <artifactId>engine.emfvm.launch</artifactId>
    <version>3.4.0</version>
  </dependency>

  <dependency>
    <groupId>org.eclipse.m2m.atl</groupId>
    <artifactId>engine.vm</artifactId>
    <version>3.4.0</version>
  </dependency>
  <dependency>
    <groupId>org.eclipse.m2m.atl</groupId>
    <artifactId>dsls</artifactId>
    <version>3.4.0</version>
  </dependency>
...

Bibliography

[1] The Metamodel Refactorings Catalog. http://www.metamodelrefactoring.org. Accessed: 2015-12-12.

[2] OMG Unified Modeling Language, 2011. Version 2.4.1.

[3] OMG. Case Management Model and Notation (CMMN), V 1.0. Technical report, Object Management Group OMG, 2013.

[4] A. Agrawal, G. Karsai, Z. Kalmar, S. Neema, F. Shi, and A. Vizhanyo. The Design of a Language for Model Transformations. Journal of Software and System Modeling, 2005.

[5] M. Aksit, I. Kurtev, and J. Bézivin. Technological Spaces: an Initial Appraisal. International Federated Conf. (DOA, ODBASE, CoopIS), Industrial Track, Los Angeles, 2002.

[6] J. Bézivin. On the unification power of models. Software & Systems Modeling, 2005.

[7] J. Bézivin and O. Gerbé. Towards a Precise Definition of the OMG/MDA Framework. In Automated Software Engineering (ASE 2001), pages 273–282, Los Alamitos CA, 2001. IEEE Computer Society.

[8] Jean Bézivin. On the unification power of models. Software & Systems Modeling, 4(2):171–188, 2005.

[9] Jonathan Billington, Søren Christensen, Kees M. van Hee, Ekkart Kindler, Olaf Kummer, Laure Petrucci, Reinier Post, Christian Stehno, and Michael Weber. The Petri Net Markup Language: Concepts, Technology, and Tools. In ICATPN, pages 483–505, 2003.

[10]E.B orger¨ and R. Stark.¨ Abstract State Machines - A Method for High-Level System Design and Analysis. Springer-Verlag, 2003.

[11] A. Cicchetti and D. Di Ruscio. Decoupling Web Application Concerns through Weaving Operations. Science of Computer Programming, 70(1):62–86, 2008.

[12] Antonio Cicchetti, Davide Di Ruscio, Romina Eramo, and Alfonso Pierantonio. JTL: a bidirectional and change propagating transformation language. In B. Malloy, S. Staab, and M. van den Brand, editors, 3rd International Conference on Software Language Engineering (SLE 2010), number 6563 in LNCS, pages 183–202. Springer, Heidelberg, October 2010.

[13] European Commission. Technology readiness levels (TRL). HORIZON 2020 – WORK PRO- GRAMME 2014-2015 General Annexes, Extract from Part 19 - Commission Decision C(2014) 4995.

[14] K. Czarnecki and S. Helsen. Feature-based Survey of Model Transformation Approaches. IBM Systems J., 45(3), June 2006.

Learn PAd FP7-619583 97 [15] J. de Lara and H. Vangheluwe. AToM3: A tool for multi-formalism and meta-modelling. In Ralf-Detlef Kutsche and Herbert Weber, editors, FASE, volume 2306 of LNCS, pages 174–188. Springer, 2002.

[16] Davide Di Ruscio, Ludovico Iovino, and Alfonso Pierantonio. Coupled Evolution in Model-Driven Engineering. IEEE Software, 29(6):78–84.

[17] Davide Di Ruscio, Ludovico Iovino, and Alfonso Pierantonio. Evolutionary togetherness: How to manage coupled evolution in metamodeling ecosystems. In ICGT, volume 7562, pages 20–37. Springer, 2012.

[18] Eclipse. Eclipse Modeling Framework (EMF), 2005. http://www.eclipse.org/emf.

[19] Romina Eramo, Alfonso Pierantonio, and Gianni Rosa. Managing uncertainty in bidirectional model transformations. In Proceedings of the 2015 ACM SIGPLAN International Conference on Software Language Engineering, pages 49–58. ACM, 2015.

[20] J-M. Favre. Towards a Basic Theory to Model Model Driven Engineering. WiSME 2004.

[21] Hans-Georg Fill, Timothy Redmond, and Dimitris Karagiannis. Fdmm: A formalism for describing adoxx meta models and models. 2012.

[22] M. Gelfond and V. Lifschitz. The Stable Model Semantics for Logic Programming. In Robert A. Kowalski and Kenneth Bowen, editors, Proceedings of the Fifth Int. Conf. on Logic Programming, pages 1070–1080, Cambridge, Massachusetts, 1988. The MIT Press.

[23] A. Gerber, M. Lawley, K. Raymond, J. Steel, and A.Wood. Transformation: The Missing Link of MDA. In 1st International Conference on Graph Transformation.

[24] Fred´ eric´ Jouault and Ivan Kurtev. Transforming Models with ATL. In Jean-Michel Bruel, editor, MoDELS Satellite Events, volume 3844 of LNCS, pages 128–138. Springer-Verlag, 2005.

[25] Fred´ eric´ Jouault and Ivan Kurtev. Transforming models with atl. In satellite events at the MoDELS 2005 Conference, pages 128–138. Springer, 2006.

[26] Dimitris Karagiannis. Modelling Method Design Environment. 2014.

[27] Dimitris Karagiannis. Agile modeling method engineering. Panhellenic Conference on Informatics, pages 5–10, 2015.

[28] Dimitris Karagiannis. Agile Modeling Method Engineering. Proceedings of the 19th Panhellenic Conference on Informatics, 2015.

[29] Dimitris Karagiannis, Wilfried Grossmann, and Peter Hofferer.¨ Open model initiative: A feasibility study. University of Vienna, Department of Knowledge Engineering (September 2008), 2008.

[30] Dimitris Karagiannis and Harald Kuhn.¨ Metamodelling platforms. In EC-Web, volume 2455, page 182, 2002.

[31] Stuart Kent. Model driven engineering. In Integrated formal methods, pages 286–298. Springer, 2002.

[32] A. Kleppe and J. Warmer. MDA Explained. The Model Driven Architecture: Practice and Promise. Addison-Wesley, 2003.

[33] Dimitrios Kolovos, Richard Paige, and Fiona Polack. The epsilon transformation language. In Antonio Vallecillo, Jeff Gray, and Alfonso Pierantonio, editors, Theory and Practice of Model Trans- formations, volume 5063 of Lecture Notes in Computer Science, pages 46–60. Springer Berlin / Heidelberg, 2008.

Learn PAd FP7-619583 98 [34] Alexander Konigs and Andy Schurr. Tool Integration with Triple Graph Grammars - A Survey. Electronic Notes in Theoretical Computer Science, 148:113–150, 2006.

[35] Ivan Kurtev, Jean Bezivin,´ Fred´ eric´ Jouault, and Patrick Valduriez. Model-based DSL frameworks. ACM, October 2006.

[36] M.Didonet Del Fabro, J.Bezivin, F. Jouault, E. Breton, and G.Gueltas. AMW: A generic Model Weaver. In Int. Conf. on Software Engineering Research and Practice (SERP05), 2005.

[37] Knut Hinkelmann Jovaldas Januskevicius Alfonso Pierantonio Gianni Rosa Darius Silingas Bar- bara Thonssen¨ Sabrina Tonon Robert Woitsch Mehmet Albayrak, Nesat Efendioglu. Design and Initial Implementation of Metamodels for Describing Business Processes in Public Administrations. Deliverable D3.2 - EU FP7 Project Learn PAd.

[38] S. J. Mellor, A. N. Clark, and T. Futagami. Guest Editors’ Introduction: Model-Driven Development. IEEE Software, 20(5):14–18, 2003.

[39] OMG MOF. Omg meta object facility (mof) specification v1. 4, 2002.

[40] P-A. Muller, F. Fleurey, and J-M. Jez´ equel.´ Weaving Executability into Object-Oriented Metalan- guages. In ACM/IEEE 8th International Conference on Model Driven Engineering Languages and Systems, pages 264–278, Montego Bay, 2005.

[41] Object Management Group (OMG). MDA Guide version 1.0.1, 2003. OMG Document: omg/2003- 06-01.

[42] Dirk Ohst, Michael Welle, and Udo Kelter. Differences between versions of UML diagrams. In the 9th European software engineering conference held jointly with 10th ACM SIGSOFT international symposium, pages 227–236, New York, New York, USA, 2003. ACM Press.

[43] OMG. MOF 2.0 Query/Views/Transformation RFP, 2002. OMG document ad/2002-04-10.

[44] OMG. XMI Specification, v1.2, 2002. OMG Document formal/02-01-01.

[45] OMG. Meta Object Facility (MOF) 2.0 Core Specification, OMG Document ptc/03-10-04. http://www.omg.org/docs/ptc/03-10-04.pdf, 2003.

[46] OMG. MOF QVT Final Adopted Specification, 2005. OMG Adopted Specification ptc/05-11-01.

[47] OMG. OCL 2.0 Specification, 2006. OMG Document formal/2006-05-01.

[48] MDA OMG. Guide version 1.0. 1. Object Management Group, 62, 2003.

[49] Venkatramana N Reddy, Michael L Mavrovouniotis, Michael N Liebman, et al. Petri net represen- tations in metabolic pathways. In ISMB, volume 93, pages 328–336, 1993.

[50] D. C. Schmidt. Guest Editor’s Introduction: Model-Driven Engineering. IEEE Computer, 39(2):25– 31, 2006.

[51] Douglas C Schmidt. Guest editor’s introduction: Model-driven engineering. Computer, 39(2):0025– 31, 2006.

[52] Ed Seidewitz. What Models Mean. IEEE Software, 20(5):26–32, September/October 2003.

[53] B. Selic. The Pragmatics of Model-driven Development. IEEE Software, 20(5):19–25, 2003.

[54] Dave Steinberg, Frank Budinsky, Ed Merks, and Marcelo Paternostro. EMF: eclipse modeling framework. Pearson Education, 2008.

Learn PAd FP7-619583 99 [55] G. Taentzer, K. Ehrig, E. Guerra, J. de Lara, L. Lengyel, T. Levendovszky, U. Prange, D. Varro,´ and S. Varro-Gyapay.´ Model Transformation by Graph Transformation: A Comparative Study. In ACM/IEEE 8th International Conference on Model Driven Engineering Languages and Systems, Montego Bay, Jamaica, October 2005.

[56] Gabriele Taentzer. AGG: A graph transformation environment for modeling and validation of soft- ware. In John L. Pfaltz, Manfred Nagl, and Boris Bohlen,¨ editors, AGTIVE, volume 3062 of LNCS, pages 446–453. Springer, 2003.

[57] L. Tratt. Model transformations and tool integration. Jour. on Software and Systems Modeling (SoSyM), 4(2):112–122, May 2005.

[58] Laurence Tratt. Model transformations and tool integration. Software & Systems Modeling, 4(2):112–122, 2005.

[59] D. Varro´ and A. Pataricza. Generic and Meta-Transformations for Model Transformation Engineer- ing. In International Conference on the Unified Modeling Language, pages 290–304, 2004.

[60] D. Varro,´ G. Varro,´ and A. Pataricza. Designing the automatic transformation of visual languages. Science of Computer Programming, 44(2):205–227, August 2002.

[61] D. Vojtisek and J-M. Jez´ equel.´ MTL and Umlaut NG: Engine and Framework for Model Transfor- mation. http://www.ercim.org/publication/Ercim News/enw58/vojtisek.html.

[62] Dennis Wagelaar, Massimo Tisi, Jordi Cabot, and Fred´ eric´ Jouault. Towards a general composition semantics for rule-based model transformation. In MoDELS, pages 623–637, 2011.

[63] Stephen A White. Introduction to bpmn. IBM Cooperation, 2(0):0, 2004.

[64] Bentley Whitten and LD Bentley. Dittman. 2004. Systems analysis and design methods.

[65] Xactium. Xmf-mosaic. http://xactium.com.

Learn PAd FP7-619583 100