Model-based interface framework : an extensible framework for automatic generation of test cases and wrappers

Citation for published version (APA): Martinez Marquez, A. D. (2016). Model-based interface framework : an extensible framework for automatic generation of test cases and wrappers. Technische Universiteit Eindhoven.

Document status and date: Published: 28/09/2016


Model-based interface framework

Aldo D. Martinez September 2016


Model-based interface framework An extensible framework for automatic generation of test cases and wrappers

Eindhoven University of Technology Stan Ackermans Institute / Software Technology

Partners

FEI Company Eindhoven University of Technology

Steering Group Andrei Radulescu Paul Janson Tim Willemse

Date September 2016

Document Status Public

The design described in this report has been carried out in accordance with the TU/e Code of Scientific Conduct.

Contact: Eindhoven University of Technology, Department of Mathematics and Computer Science, MF 5.097b, P.O. Box 513, NL-5600 MB, Eindhoven, The Netherlands, +31 40 247 4334
Published by: Eindhoven University of Technology, Stan Ackermans Institute
Printed by: Eindhoven University of Technology, UniversiteitsDrukkerij
SAI report no.: Eindverslagen Stan Ackermans Instituut; 2016/043

Abstract: Automation through model-based development is becoming more and more popular among companies with large software systems. Due to the innate complexity of these systems, a significant amount of time has to be invested in development activities such as maintenance, updating, and testing. These activities imply repetition and tend to be error prone. This report describes a model-based framework project for modeling software interfaces with a high-level, homogeneous representation that can later be used for the automatic generation of test cases and wrappers. A formal modeling language is defined based on the analysis of implemented software interfaces at FEI, focusing on the description of the main elements and behavior of an interface and removing technology-related values. A testing algorithm is proposed in which test cases are generated based on state transitions of the interface model, described through a state machine. All these features have been implemented in a prototype that demonstrates the feasibility of testing and wrapping automation.

Keywords Model-based development, state machine testing, behavioural testing, model-based testing, black-box testing, software quality, wrapping technologies, Domain Specific Languages, code generators, modeling languages, testing automation, model transformations, software technology, PDEng

Preferred reference: Model-based interface framework: An extensible framework for automatic generation of test cases and wrappers. SAI Technical Report, September 2016. (Eindverslagen Stan Ackermans Instituut; 2016/043)

Partnership This project was supported by Eindhoven University of Technology and FEI Company.

Disclaimer Endorsement: Reference herein to any specific commercial products, processes, or services by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply endorsement, recommendation, or favoring by the Eindhoven University of Technology or FEI Company. The views and opinions of authors expressed herein do not necessarily state or reflect those of the Eindhoven University of Technology or FEI Company, and shall not be used for advertising or product endorsement purposes.

Disclaimer Liability: While every effort will be made to ensure that the information contained within this report is accurate and up to date, Eindhoven University of Technology makes no warranty, representation or undertaking, whether expressed or implied, nor does it assume any legal liability, whether direct or indirect, or responsibility for the accuracy, completeness, or usefulness of any information.

Trademarks: Product and company names mentioned herein may be trademarks and/or service marks of their respective owners. We use these names without any particular endorsement and without intent to infringe the copyright of the respective owners.

Copyright Copyright © 2016. Eindhoven University of Technology. All rights reserved. No part of the material protected by this copyright notice may be reproduced, modified, or redistributed in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage or retrieval system, without the prior written permission of the Eindhoven University of Technology and FEI Company.

Foreword

Within FEI, we partition the software into server software, providing the microscope control and safety, and applications, implementing advanced functionalities and user interfaces. We formalize the server interfaces as the contract between the server and the applications. We have defined this project to automate the tedious, time-consuming activities around the interfaces:
─ Tests, ensuring the interface correctness while the software evolves, and
─ Wrappers, bridging the technology boundaries between the server (C++), the interface (COM), and the applications (C++, C#, Python).

Fortunately, Aldo has shown interest in this project and has picked it for his graduation thesis. After an initial exploration phase, we have chosen to develop our own language to have the full flexibility of concisely covering our requirements. Aldo has quickly become familiar with both the FEI server technologies and Xtext/Xtend, in which the Model-Based Interface Framework has been developed. He has created a language for modelling interfaces, capturing both their semantics and behavior. Further, he created language extensions targeting the specifics of testing and wrapper generation. Aldo has created a test framework providing full functional coverage. The wrapper generators can cover not only technology translations, but also other interface artifacts, such as multi-client arbitration or tracing.

We have run the project in an agile manner. Aldo has constantly kept a running version of the framework, over which we have iterated multiple times, defining and (re)prioritizing tasks as we have learned more. We have aimed at maximizing the types of features we could cover with the framework, and applying them to concrete server interfaces with an increasing level of complexity. Aldo ensured the project ran smoothly, was able to adapt to changing requirements, and successfully tackled all challenges arising over time.

The results are very promising. We have involved a team in a test trial, which has shown that the framework is easy to use and complete enough to enable modelling and testing interfaces in a short time. A few missing features were identified, which can easily be added to the framework. Surprisingly, the main challenge faced during the trial was actually not related to the framework, but to modelling and testing the behavior of one of their own legacy interfaces.

Within FEI, we plan to further extend the framework, make it part of our build and test processes, and have it used by all our teams. With the increased productivity and increased test coverage it provides, we believe it will quickly be adopted by our teams, which will be able to spend more time on features and to guarantee the quality of their interfaces.

PROJECT MANAGER Dr. Andrei Radulescu September 2016

Preface

This document summarizes the “Model-based interface framework: An extensible framework for automatic generation of test cases and wrappers” project. The project addresses the challenges of testing and wrapper automation by using model-based technologies and applying them to a highly complex software system.

The project has been executed by Aldo D. Martinez from the Stan Ackermans Institute, Software Technology Programme of the Eindhoven University of Technology. This project is the nine-month final assignment of the aforementioned two-year Professional Doctorate in Engineering (PDEng) programme, known by its Dutch name as Ontwerpers Opleiding Technische Informatica (OOTI). This project has been carried out within FEI Company in Eindhoven.

This document is primarily intended for readers with a technical background in disciplines, such as automation, model-based techniques, testing, wrapping, domain specific languages, derivative text-based languages, and general software engineering. However, no specialized knowledge in these disciplines is needed.

Readers with a non-technical background, or those who are interested in the basis and results of this project, should read Chapters 1 to 5. These chapters introduce and explain the essential points on which the system is founded and the goals we were aiming for with this project.

Readers with an interest in the implemented solution, how it was designed and planned, and the main development decisions, should read Chapters 6 to 10. These chapters cover the requirements, high-level architecture, the solution’s design, and the project verification and validation. In particular, Chapters 7 to 9 include the main topics regarding the implementation of a domain specific language as a model-based solution for automating testing and wrapper generation.

Additionally, Chapters 12, 13, and 14 cover the conclusions and results achieved through the project, the project management, and a retrospective from the perspective of the author.

Aldo D. Martinez September 2016

Acknowledgements

I heartily want to express my gratitude to all the people that helped me, guided me, led me, and supported me during this project. Without them, the success of this project would not be possible.

First of all, I am deeply grateful to the people at FEI Company who made this project possible. Special thanks are given to Andrei Radulescu and Paul Janson, my supervisors and mentors at FEI. They made my participation in the company possible and guided me throughout the whole project. Their support, enthusiasm, and collaboration have been essential for the success of this project. Their valuable advice and opportune suggestions helped me to further develop myself professionally. I would also like to thank all the FEI engineers who supported me and collaborated with the project during these nine months. Martijn Kabel and Paul van Gorp helped me during my integration into the company. David van Luijk, Tim Bertholet, Santosh Zope, Vasily Galkin, Vladimir Ivin, Laurens Oosterhof, and Davy Kox supported me and provided material and knowledge for understanding FEI interfaces. Andrea Pasqualini, Peter van Merkerk, Erwin de Groot, and Joost Dierkse introduced me to the current wrapper technologies and implementations.

I would like to express my gratitude to my university supervisor Tim Willemse, who always gave pertinent advice and valuable feedback on my work. With his expertise in model-driven development and testing, he opened my vision to new topics and subjects that I added to my work in order to improve it. Our communication was excellent and he was always there to answer my questions.

I am deeply grateful to a lot of people within the OOTI program and TU/e who supported me during the last two years. Special thanks to Ad Aerts, Maggy de Wert, Desiree van Oorschot, and all the OOTI coaches for all the advice, reviews, and fruitful talks.

Furthermore, I would like to thank my friends from the OOTI program, during the past two years we shared experiences, culture, knowledge, and we also helped each other to make this journey a success.

Last but not least, I have been able to reach this point in my life thanks to my family. They have always given me continuous support to follow my dreams and make them possible.

Aldo D. Martinez September 2016

Executive Summary

Repetitive activities tend to be tedious, mechanical, and error prone. Additionally, these activities consume a significant amount of time, decreasing development efficiency. Removing repetitive activities is not an option when these activities are related to essential elements of quality (creating test cases) and code interoperability (creating wrappers). The recent advances in automation and model-based technologies present an alternative to reduce time consumption while ensuring the quality of the artifacts produced by these repetitive activities. This project focuses on investigating and prototyping a possible solution in the domain of automating test case and wrapper generation through model-based technologies.

Ensuring interface behavior and avoiding repetitive activities is of great importance to FEI. FEI invests significant time in manually creating tests for validating the interfaces’ behavior. In this project, we have investigated the feasibility of automating test generation by using a model-based approach. We have conducted a study of the FEI IOM interfaces in which we analyzed interface behavior and testing techniques, and then implemented our results in a prototype that automatically generates tests based on a behavioral model of the interface. We have developed an algorithm that generates test cases for an interface depending on the modeled interface behavior. The model contains a state machine and a simple interface description.

On the code interoperability domain, we have studied the current implemented approaches for wrappers at FEI and have defined an automation process. The automation process requires a wrapper model which is defined by the wrapping technology (boost.python and COM-ATL have been researched). This wrapper model together with the simple interface model allow us to generate wrapper files ready to be compiled and used within the FEI software system.

This report presents the results of the previous two subjects (testing and wrappers) through a model-based solution that is extendable, easy to use, and flexible. This is achieved by the implementation of a component-based architecture with a modular separation of concerns keeping a clear distinction between the modeling language and the code generators. For modeling the interface and artifact generation, we have used Xtext/Xtend, a technology that is becoming the de-facto standard for model-based frameworks in academia and industry.

The prototype is a first instance of a framework that validates the automation of test case and wrapper generation through the definition of models (interface and artifact). The models have proven to be easy to read and create in the initial trials (with FEI developers). The generated artifacts (test cases and wrappers) are easily integrated and validated within the FEI code base.

As a prototype and a first approach to an automation framework, the results are very promising and the investigation can now move to the next phase, which is the extension of the framework to cover a larger number of interfaces and engineering work to improve the current features of the solution.

Table of Contents

Foreword ...... i

Preface ...... iii

Acknowledgements ...... v

Executive Summary ...... vii

Table of Contents ...... ix

List of Figures ...... xiii

List of Tables ...... xv

1. Introduction ...... 1
1.1 About FEI ...... 1
1.2 Transmission Electron Microscope ...... 1
1.3 Model-based interface framework ...... 2
1.4 Model, behavior, and test-based development ...... 3
1.5 Outline ...... 4

2. Stakeholder Analysis ...... 5
2.1 FEI ...... 5
2.2 Eindhoven University of Technology (TU/e) ...... 6

3. Problem Analysis ...... 9
3.1 Complex software systems ...... 9
3.2 Ensuring software quality through testing ...... 10
3.3 Modeling interfaces ...... 10
3.4 Black-box testing ...... 11
3.5 Problem statement ...... 11
3.6 Design Opportunities ...... 12

4. Domain Analysis ...... 13
4.1 Microscope software system architecture ...... 13
4.2 Instrument Object Model ...... 14
4.3 COM interfaces ...... 14
4.4 Synchronous versus asynchronous ...... 14
4.5 Events ...... 15

4.6 Interface behavior ...... 16
4.7 Using interfaces ...... 17

5. Feasibility Analysis ...... 19
5.1 Challenges and alternatives ...... 19
5.1.1. Model abstraction level ...... 19
5.1.2. Standardizing testing and wrapping ...... 19
5.1.3. Project deployment ...... 19
5.1.4. Technology support for project requirements ...... 20
5.1.5. Structural difference in the interfaces ...... 20
5.1.6. Automation of repetitive work ...... 20

6. System Requirements ...... 21
6.1 Requirement gathering process ...... 21
6.2 Main requirements ...... 21
6.2.1. Main use cases ...... 23
6.2.2. Feature 1: Modeling interface definition ...... 25
6.2.3. Feature 2: Test case generation ...... 26
6.2.4. Feature 3: Interface wrapper generation ...... 26
6.2.5. Feature 4: Providing interface documentation ...... 27
6.3 Non-functional requirements ...... 28
6.3.1. Extensibility ...... 28
6.3.2. Ease of use ...... 29
6.3.3. Flexibility / Configurability ...... 29
6.3.4. Effectiveness ...... 30

7. System Architecture ...... 31
7.1 High level elements ...... 31
7.2 Modeling approach decision ...... 32
7.3 Component-based architecture ...... 33
7.4 The modeling language ...... 34
7.5 The editor ...... 36
7.6 The parser ...... 36
7.7 The generator ...... 37
7.7.1. The artifact generators ...... 38
7.8 The transformation view ...... 38

8. System Design ...... 41
8.1 Introduction ...... 41
8.2 The metamodel layer ...... 42
8.2.1. Grammar ...... 42
8.2.2. Process view ...... 47
8.2.3. Technology perspective ...... 50
8.3 The interpretation layer ...... 51
8.4 The generation layer ...... 51
8.4.1. The testing algorithm ...... 51
8.4.2. The wrapping strategy ...... 53

8.4.3. The generator ...... 54
8.5 Layer interactions ...... 58

9. Implementation ...... 61
9.1 Introduction ...... 61
9.2 Modeling ...... 61
9.2.1. Core modelling language aspects ...... 62
9.2.2. State machine implementation ...... 63
9.2.3. The custom method implementation ...... 63
9.2.4. Visualization of the modeling feature ...... 64
9.3 Artifact generation ...... 66
9.3.1. Artifacts technologies ...... 66
9.3.2. The generation gap pattern ...... 66
9.3.3. Testing implementation ...... 67
9.3.4. Wrapper implementation ...... 73
9.4 Executing artifacts ...... 73

10. Verification & Validation ...... 77 10.1 Introduction ...... 77 10.2 Testing with FEI interfaces...... 77 10.3 Trials with end users ...... 80 10.3.1. Quality requirements ...... 81 10.4 Capabilities and limitations...... 82 10.4.1. State machine transition complexity...... 82 10.4.2. Asynchronous calls ...... 83 10.4.3. Time performance of produced artifacts ...... 83 10.4.4. Maturity of testing ...... 83 10.4.5. Interface implementation differences ...... 83 10.4.6. Microscope configurations ...... 84 10.4.7. Model limitations...... 84 10.4.8. Logging and error detection ...... 84

11. Deployment ...... 85
11.1 Deployment view ...... 85
11.2 Deployment plan within FEI ...... 86
11.2.1. Deploying for usage ...... 86
11.2.2. Updating and providing maintenance ...... 86
11.2.3. Integrating within FEI build server ...... 87

12. Conclusions ...... 89
12.1 Results & lessons learned ...... 89
12.2 Future work ...... 90

13. Project Management ...... 93
13.1 Way of working ...... 93
13.2 Work-Breakdown Structure ...... 93
13.3 Project Planning ...... 94

13.4 Risk Analysis ...... 95
13.5 Project execution ...... 96

14. Project Retrospective ...... 99
14.1 Reflection ...... 99
14.2 Design opportunities revisited ...... 100

Glossary ...... 103

Bibliography ...... 107
References ...... 107
Additional Reading ...... 108
Technology references ...... 108

Appendix A: Model-based approaches ...... 109
Proposal no. 1 – Pros, cons, and effort activities ...... 114
Proposal no. 2 – Pros, cons, and effort activities ...... 115

Appendix B: The Technology decision...... 118

Appendix C: The modeling language grammar ...... 120

Appendix D: Model examples ...... 125

About the Author ...... 129

List of Figures

Figure 1 – FEI current microscope product portfolio ...... 1
Figure 2 – FEI Transmission Electron Microscope ...... 2
Figure 3 – Internal composition of a TEM column ...... 9
Figure 4 – General view of the system under test ...... 11
Figure 5 – High-level representation of microscope software architecture ...... 13
Figure 6 – Microscope PC representation ...... 13
Figure 7 – Server exposing IOM interfaces of specific instruments ...... 14
Figure 8 – Synchronous and asynchronous example ...... 15
Figure 9 – Example of a COM Event interface ...... 16
Figure 10 – COM interface definition – methods and state enumeration ...... 16
Figure 11 – Common workflow for using an interface ...... 17
Figure 12 – Main use cases ...... 22
Figure 13 – Main features for the MBIF project ...... 25
Figure 14 – Main elements derived from the requirements ...... 31
Figure 15 – Component-based architecture for our project ...... 35
Figure 16 – Modeling language component ...... 36
Figure 17 – Editor component ...... 37
Figure 18 – Parser component ...... 37
Figure 19 – Generator component ...... 39
Figure 20 – Layered view architecture ...... 40
Figure 21 – Layered view architecture ...... 41
Figure 22 – Main elements in modeling grammar ...... 43
Figure 23 – Interface grammar main structure and enumerations ...... 44
Figure 24 – Testing grammar main structure and enumerations ...... 46
Figure 25 – Wrapper grammar main structure and enumerations ...... 46
Figure 26 – Documentation grammar main structure and enumerations ...... 47
Figure 27 – Sequence flow for textual model generation ...... 48
Figure 28 – Internal structure of the modeling language package – Validator class only shows two examples of validation rules ...... 49
Figure 29 – Creating the semantic mapper ...... 50
Figure 30 – Technology chosen for the metamodel layer ...... 50
Figure 31 – Interface model (defined using our grammar) including a state machine definition (visually represented in the right) ...... 52
Figure 32 – Additional transitions (red) that are not described in the interface model ...... 53
Figure 33 – Resulting test table ...... 53
Figure 34 – Wrapper process activities ...... 54
Figure 35 – Generator package structural view ...... 56
Figure 36 – Sequence diagram presenting the interaction in the generation layer for test case generation ...... 57
Figure 37 – File template sections ...... 59
Figure 38 – Full sequence for generating test cases including the three layers ...... 60
Figure 39 – Example of the implementation of an interface state machine ...... 64
Figure 40 – A fragment of the CMOS protector interface model in which custom methods are defined and then used in the state machine transitions ...... 65
Figure 41 – Example of IDE editor showing feedback to the end user ...... 65
Figure 42 – Generation gap pattern explained by applying inheritance ...... 67
Figure 43 – The process of creating test cases for a specific testing framework ...... 67
Figure 44 – The logic behind the parameterized testing strategy. In this case specific definitions are provided by our test table in the form of methods/triggers to execute ...... 68
Figure 45 – Example of the skeleton of a test suite implementation ...... 70
Figure 46 – Example of the skeleton of an interface handler ...... 71
Figure 47 – Test case execution workflow – IStemDetector example ...... 72
Figure 48 – Generated artifacts for the boost.python wrapper generation ...... 74

Figure 49 – The global view of the process for artifact execution ...... 75
Figure 50 – Test suite execution for the IStemDetector interface ...... 79
Figure 51 – Deployment view of our solution prototype from the usage perspective ...... 85
Figure 52 – The incremental-iterative approach ...... 93
Figure 53 – Work-breakdown structure of the project ...... 94
Figure 54 – Global view of the 9-month period planning and high-level activities ...... 95
Figure 55 – General view of the main components that our solution required ...... 109
Figure 56 – Our solution deployed with a generic model-based tool. A generic model-based tool is the kind of tool that only generates implementation code from a common model ...... 110
Figure 57 – Our solution deployed with a DSL approach. In this case the DSL extends our solution to develop generators for specific artifact generation ...... 111
Figure 58 – Our solution deployed with a model-based testing tool. The tool takes responsibility for generating and executing test cases ...... 112
Figure 59 – Solution proposal no. 1: DSL + MBT tool approach. The DSL takes responsibility for interface modeling and artifact generation regarding documentation and wrappers. The MBT tool takes care of creating, generating, executing, and reporting test cases ...... 113
Figure 60 – Proposal no. 2: The DSL approach. All the modeling and code generation is covered by developing a DSL while test compilation, execution, and result management is an independent process led by the end user. Accepted proposal ...... 117
Figure 61 – Comparison table between DSL tools ...... 119
Figure 62 – Main interface elements ...... 120
Figure 63 – The state machine grammar ...... 121
Figure 64 – The operations grammar ...... 122
Figure 65 – The testing grammar ...... 123
Figure 66 – The wrapper (python wrappers) grammar ...... 124
Figure 67 – The ISource interface ...... 125
Figure 68 – A test model for the ISource interface ...... 126
Figure 69 – A python wrapper model for the ISource interface ...... 126
Figure 70 – The IDetector interface ...... 127
Figure 71 – A test model for the IDetector interface ...... 127

List of Tables

Table 1 – High-level project management ...... 5
Table 2 – Project management group ...... 5
Table 3 – Knowledge sources – technical group – potential users ...... 6
Table 4 – TU/e Stakeholders ...... 6
Table 5 – Model interface use case specification ...... 23
Table 6 – Model behavior use case specification ...... 23
Table 7 – Model test settings use case specification ...... 23
Table 8 – Model wrapper settings use case specification ...... 23
Table 9 – Generate test cases use case specification ...... 23
Table 10 – Generate wrappers use case specification ...... 24
Table 11 – Generate documentation use case specification ...... 24
Table 12 – Functional requirements for the modeling interface definition feature ...... 25
Table 13 – Test case generation functional requirements ...... 26
Table 14 – Interface wrapper generation requirements ...... 27
Table 15 – Providing up-to-date and complete documentation requirements ...... 27
Table 16 – Extensibility requirement description ...... 28
Table 17 – Ease of use requirement description ...... 29
Table 18 – Flexibility / Configurability requirement description ...... 29
Table 19 – Effectiveness requirement description ...... 30
Table 20 – Elements to model through the DSL language ...... 62
Table 21 – Testing the solution for generating test cases ...... 77
Table 22 – State machine coverage results ...... 78
Table 23 – Testing the solution for generating wrappers ...... 79
Table 24 – Testing the solution with end users ...... 80
Table 25 – Global results of the trials with end users ...... 80
Table 26 – Testing the solution results from the quality requirements ...... 81
Table 27 – Risk table ...... 95

1. Introduction

This chapter introduces FEI as a company and Transmission Electron Microscopes (TEM) produced by them, followed by the context definition in terms of methodologies and approaches related to the project. A brief explanation of how these elements match the FEI situation is introduced. The next section (1.4) states the main goal and high level objectives for this project. Finally, the outline section gives a brief overview of what is discussed in the rest of the document.

1.1 About FEI

FEI products in the microscopy field are primarily scanning electron microscopes (SEM), transmission electron microscopes (TEM), as well as focused ion beam (FIB) and dual beam (FIB/SEM) systems. All FEI products are employed in a wide range of applications that contemplate areas such as materials science, semiconductors, oil and gas, life sciences, minerals and mining, as well as industrial manufacturing [1].

Figure 1 – FEI current microscope product portfolio

FEI is a leading technology company that designs, manufactures and supports transmission-electron microscopes providing ultra-high resolution to sub-Angstrom levels. Software plays a key role in controlling and ensuring a safe operation of the microscope, and in creating a range of workflows and applications. FEI’s vision is creating nanoscale imaging and analysis workflows that enable existing and new customers to innovate in science, technology, and production.

1.2 Transmission Electron Microscope

A TEM is a type of microscope that transmits a beam of electrons through a sample (specimen) and projects the result onto a fluorescent screen. The specimen usually has to be specially prepared and held inside a vacuum chamber from which the air has been pumped out. A TEM produces a high-resolution, black and white image from the interaction that takes place between the prepared sample and the electrons in the vacuum chamber.

Figure 2 – FEI Transmission Electron Microscope

A TEM is built up from several different hardware elements. The software implementation for controlling these hardware elements is managed by a server. This server offers interfaces as the means to use the hardware and software functionalities. This project intends to work with those interfaces. The following is the representative set of interfaces employed during the project:
• Stem detectors: Detect direct or scattered electrons bounced from the specimen surface.
• Field emission gun: Provides an intense beam of high-energy electrons.
• CMOS protector: Protects a sensitive camera from being exposed to a direct electron beam with a high dose.
• Vacuum chamber: Creates and supports the vacuum inside the microscope to minimize the interaction between the electrons in the beam and particles in the column that are not part of a specimen.

The FEI software server has a complex structure that contains the required elements to control the configurable hardware elements and to offer functionality needed by the applications running on the microscope. To achieve high quality of the features offered by the microscopes, testing is crucial in the software process.

Microscopes by themselves are extremely complex machines containing high cost hardware. It is mandatory that the interaction between software and hardware works in a manner that ensures proper functioning with a minimum risk of damage to the hardware. Effective and efficient testing of the software is an absolute necessity to guarantee reliability.

1.3 Model-based interface framework

Testing and automation are the main keywords for this project. Currently, unit and system tests are written manually. Unit tests are run by the build server during continuous/nightly builds. System tests are invoked on a selected configuration running on virtual machines that simulate a microscope, allowing the execution of tests that depend on specific behavior. Testing is a process that happens after interface implementation and is not always performed by the developer who coded the interface, which increases the complexity of test definition due to miscommunication about how the interface functions. An inherent issue with manually written tests is that achieving full coverage of an interface's behavior becomes complex as the interface extends its interaction with other parts of the system. Moreover, maintaining manually written tests becomes costlier as the system grows. Nevertheless, all interfaces share a common point: the definition of a state

machine that describes the expected behavior of the interface. Having this state machine as an input, FEI envisions automating test case generation.

A typical situation in highly complex systems is missing or out-of-date documentation. Especially in systems under constant evolution and with personnel changes, keeping the existing documentation up to date is a high priority. This is a second process in which automation is essential.

Finally, in complex systems it is often required to make the functionality available to different technologies and programming languages to support a variety of clients. Adapters and wrappers are software elements that make achieving this goal possible. However, creating different adapters is a repetitive activity. In addition, existing adapters are a maintenance responsibility, which can be removed by automation.

The model-based interface framework (MBIF) project aims to improve the quality of the FEI interfaces by:
• Guaranteeing their behavior through testing
• Providing up-to-date and complete documentation
• Generating wrappers to other target languages automatically

A parallel goal of the project is to demonstrate the feasibility of using specialized automation technology oriented toward model-based development in order to achieve the previously introduced main goals.

1.4 Model, behavior, and test-based development

The MBIF project is based on the combination of existing methodologies applied to the specific FEI software infrastructure. Those methodologies are:

Model-based software development (MBSD) A model is an abstraction of a system [4] and the key element in MBSD approaches. MBSD is focused on achieving code reuse as well as performing maintenance and development by employing software modeling technologies. Models are created to serve particular purposes and usually represent a simpler human understandable representation of a system or a software component. One of the goals of MBSD is the automatic generation of code (including source code, wrappers, and adapters), documentation, and other artifacts according to the model description.

Behavior-based testing (BBT) This is a specialization of test-driven development that focuses on monitoring specific expected interactions [5]; these interactions occur when certain methods/functions are executed and can involve multiple objects. BBT has been developed as an option to validate that a system is behaving as expected. As a result, BBT is strongly centered on testing an existing implementation by describing steps/sequences and their expected output [6].
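To make the idea concrete, the following is a minimal, hypothetical sketch of behavior-based testing in Python: unittest.mock is used to verify the expected interactions (the method calls) and their outcome. The interface name and methods are illustrative placeholders, not the actual FEI API.

```python
# Minimal sketch of behavior-based testing with Python's unittest.mock.
# IStemDetector and its methods are hypothetical placeholders.
from unittest import TestCase, main
from unittest.mock import Mock, call


class InsertDetectorBehaviorTest(TestCase):
    def test_insert_triggers_expected_interactions(self):
        detector = Mock(name="IStemDetector")
        detector.GetState.return_value = "Inserted"

        # Exercise the expected sequence (the "behavior" under test).
        detector.Insert()
        state = detector.GetState()

        # Verify the interactions and their outcome, not the implementation.
        detector.assert_has_calls([call.Insert(), call.GetState()])
        self.assertEqual(state, "Inserted")


if __name__ == "__main__":
    main()
```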

Model-based testing (MBT) This approach is derived from model-based techniques and covers modeling, designing, generating, and executing artifacts to perform software or system testing [7]. In this approach, the system under test (SUT), or a part of it, is modeled, including its behavior and the testing strategies/environment. MBT aims to keep development and testing time in balance by automating test generation/execution [8]. MBT incorporates different testing strategies that allow variety in the test case generation for a SUT.

All these techniques aim to automate code/artifact generation to maximize coverage and, thus, ensure code quality with minimal effort. Even though they are relatively new in the field, they are becoming more and more popular among complex and/or large software systems. Commonly, for these software systems, developing new code

or testing existing code is a hard task due to the interconnections that are created as the amount and complexity of code increases. Therefore, a model-based solution will speed up the development process and will increase the understanding of the system by adding an abstraction level based on the domain of the system.

1.5 Outline

The next chapter introduces the main stakeholders from the involved parties (TU/e and FEI) and gives a brief analysis of their role in the project. Their goals, interests, and influence on the project are presented. Following this analysis, Chapter 3 presents the problem analysis, which describes in detail the aspects and expectations of the project. After this, Chapter 4 presents a domain analysis in which the current software situation at the company is examined in order to define how our project fits and to establish boundaries and limitations.

The next two chapters go into the formal requirements of our system. Chapter 5 presents a feasibility analysis of our problem by defining potential issues and risks that may arise during the lifespan of the project. Mitigation and contingency strategies are also documented. Chapter 6 states the formal system requirements for the project based on the company’s requirements and the performed analysis in previous chapters. Functional and non-functional requirements are defined in this chapter.

The following three chapters offer a view of the solution for this project. First, Chapter 7 addresses the system architecture that contains all the high level elements developed and included for our solution. Chapter 8 shows the system design based on the previously chosen architecture and finally, Chapter 9 goes into the specific implementation details of our solution. This chapter also explains the links between architecture, design, and implementation. Design decisions regarding technology and software are also covered.

Chapters 10, 11, and 12 are mostly focused on the results of the project. Chapter 10 discusses the methods deployed for validation and verification of our solution and how they connect to the original requirements and expectations. Chapter 11 elaborates on the deployment plan for our solution and Chapter 12 presents the conclusions and results of this project.

The last two chapters are focused on the management of the project. Chapter 13 addresses the process executed during the nine months of the project and Chapter 14 presents the project retrospective and a reflection from the author’s point of view.

2. Stakeholder Analysis

In this chapter, we present stakeholders involved in the project and their interests and goals. As a starting point, we have FEI Company and Eindhoven University of Technology as the parties involved, each of them with specific interests towards the project. The following sections present each party in detail and introduce the stakeholders and people involved.

2.1 FEI

FEI as a company is the owner and initiator of the project. They represent the source of knowledge, requirements, and expectations for this work.

FEI aims at ensuring a high quality of their software interfaces because interfaces play a key role providing access to applications for controlling the microscope and acquiring images.

To minimize the effort of creating and maintaining tests, wrappers, and documentation, FEI envisions employing a model-based technology to cover this part of the development process.

The company stakeholders can be grouped in three categories:

1) High-level project management: These are stakeholders with enough influence to make significant changes during the project.

Table 1 – High-level project management
Martijn Kabel (Platform Software Manager): Monitoring progress of the project and planning the project's future and deployment.
Paul van Gorp (Control Server Software Manager): Monitoring progress of the project and providing support for mitigating risks and threats.

2) Project management group: Members of the company that influence the project directly. Requirements, scope, modifications, boundaries, and functionalities were defined with them.

Table 2 – Project management group
Andrei Radulescu (Software Technical Lead): Key stakeholder with the role of project supervisor and owner. Managing project progress and defining priorities for requirements. As a software architect, he contributes with knowledge and defines the paths to follow during the project development. Active Progress Steering Group (PSG) member and involved in major decisions.
Paul Janson (Software Engineer): Project mentor. Responsible for monitoring daily progress of the project and connecting with other related stakeholders. Provided expertise in testing topics and influenced daily decisions about the direction of the project. Active PSG member and involved in major decisions.

3) Knowledge sources – technical group – potential users: This group covers stakeholders who provide information during the research phase about FEI interfaces and topics related to the project, as well as testers of the project and potential final users.

Table 3 – Knowledge sources – technical group – potential users
David van Luijk (Detectors Software), Tim Bertholet (FEG Software), Santosh Zope (Optics Software), Vasiliy Galkin (Server Software), Vladimir Ivin (HT Software), Laurens Oosterhof (Camera Software), Joost Dierkste (COM wrappers), Peter van Merkerk (Python wrappers), Davy Kox (Motion – IStage interface): Different developers and members across FEI software teams that collaborated with the project by providing specific information about interface behavior. Those interfaces were the core element for defining and building a common testing framework as the solution for the project.
Andrea Pasqualini (Software Architect): Responsible for providing requirements and information about the wrapper generation process. Involved in decisions related to this topic.
IOM Interface Developers: They will play the role of final users once the project is extended to real usage. For the scope of the project, they are testers of the solution.

2.2 Eindhoven University of Technology (TU/e)

As an educational program, the Professional Doctorate in Engineering (PDEng) in Software Technology (ST) is conducted and assessed by the Eindhoven University of Technology. TU/e dictates certain standards that have to be met. Those standards are mainly related to the design process, project management, and project implementation.

Table 4 – TU/e Stakeholders
Ad Aerts (Program Director of the PDEng in Software Technology): Ensuring that the quality of the project artifacts and deliverables is in line with the program standard to grant the PDEng in Software Technology degree.
Tim Willemse (TU/e Supervisor): Monitoring the process of the project and involved in decisions regarding the quality of the assignment and documentation according to the technical report specification.
Aldo D. Martinez (PDEng Candidate): Coordinating, designing, and developing the project are his main tasks within the project timeframe. In parallel, responsible for matching the project results with company and university standards.

3. Problem Analysis

In this chapter, we introduce a detailed analysis of the domain by presenting the essential aspects needed for a formal definition of the problem statement. The chapter is mainly focused on addressing the common problems of ensuring quality through testing in highly complex systems. Specific concepts from the company's software system are also introduced in order to make a clear problem statement.

3.1 Complex software systems

Software in industry is growing every day. The necessity of connecting hardware through software applications and offering the largest number of features with acceptable quality is a high-priority demand. As a consequence, the software behind these applications is becoming complex by connecting all the involved parts. Additionally, the complexity tends to grow every year the system is developed because of new additions in terms of hardware and technology. The following are the most common ways in which a system becomes complex:
• Complexity by definition: The system solves a problem that is complex by itself, such as biological systems.
• Complexity by legacy: Previous developments of the system have collected old code, technologies, and design patterns. Technology and good practices change over time.
• Complexity by hardware: New hardware elements and/or software features for the existing ones are added over time.
• Complexity by development location: Different teams, companies, and people from different countries have developed parts of the system.
• Complexity by interdependencies: Parts of the system depend on inputs from other components, hardware, or systems.

Figure 3 – Internal composition of a TEM column

Independent of the reason, complexity affects every phase of the software development cycle, producing the necessity of specific processes to ensure software quality and reduce time costs. As Figure 3 shows (which only depicts a few of the TEM elements), FEI products have an inherent complexity due to the amount and configuration of internal hardware they contain. Each hardware element has its own software implementation that specifies connection, acquisition, and expected behavior. In terms of software, these elements are represented as interfaces. This project is primarily focused on ensuring the quality of implemented interfaces through testing and it aims for automation of this process. In parallel, interfaces persist over time, which leads to loss of knowledge or understanding if they are not properly documented; enriching the interface quality by providing documentation is therefore intended. Finally, due to the interdependency complexity, interfaces require availability for different programming languages and technologies; generating adapters and wrappers for specific target technologies is required.

3.2 Ensuring software quality through testing

Testing is a process in which software is exposed to a set of situations with the intention of finding defects. Testing is also stated as the process of validating and verifying that a software, application, or product meets the business, technical, and functional requirements as well as works as expected. Finally, testing is a subset of quality assurance because ensuring functional requirements provides metrics to determine the software quality.

For this project, testing is the core element for ensuring quality, but testing by itself has several different approaches that are completely valid in terms of improving quality. This is the main reason why it is essential to set a scope for the project in terms of testing. As we mentioned in the previous section, the project is centered on ensuring the quality of software interfaces, which are part of the total FEI system. Interfaces are defined in terms of methods as well as behavior, and for our problem analysis, testing the behavior of the interface is the main focus.

For the sake of interface behavior testing, we have to introduce a new element that is also essential to our problem: defining the interface. Previously, we stated that this project aims at ensuring the quality of implemented interfaces; in order to meet that goal, we have to consider modeling or defining the interface as a necessary step in our solution.

3.3 Modeling interfaces

A model is an abstract representation of an object and for our specific problem a model is a representation of an existing software interface. The necessity of modeling the interfaces leads our project into the model-based development field. Even though several notations for modeling interfaces are available on the market, e.g., UML, picking the proper one for the context of this project depended on the testing necessities and existing interfaces. Three main elements were considered for the decision:
• Modeling methods: Interfaces are composed of methods with specific signatures and parameters. For this project, methods are considered triggers for the interface behavior. In addition, wrappers and adapters are built around these modeled methods as a bridge for using them with different technologies.
• Modeling behavior: The expected behavior of the interface is represented by a state machine. This behavior represents the expected calls and states as well as interactions amongst interfaces we want to ensure through testing.
• Modeling specific settings: A last element to consider is the possibility to model specific settings that allow or complement the solution. Specific values for testing, documentation, and wrapper generation are included in this element.

A final consideration for the modeling is to clarify that the ultimate goal was not to model all the elements that comprise an existing interface, but to model all the elements necessary to describe the interface behavior as well as the testing and wrapper generation. Modeling the full interfaces would be redundant considering that such a definition already exists in an Interface Definition Language (IDL). An IDL is a specification language used to describe a software component's application programming interface (API).
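As an illustration only, the sketch below captures two of these modeled elements (the methods and the behavioral state machine) as plain Python data structures. It is not the modeling language developed in this project (which is an Xtext-based DSL, see Chapters 7 to 9), and the example interface, its states, and its transitions are hypothetical.

```python
# Hypothetical sketch of what the interface model captures: methods
# (triggers) and a behavioral state machine. Plain Python data, not the
# project's actual Xtext grammar.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Method:
    name: str
    parameters: List[str] = field(default_factory=list)
    returns: str = "void"


@dataclass
class Transition:
    source: str                   # state before the trigger
    trigger: str                  # method (possibly of another interface) that fires
    target: str                   # expected state after the trigger
    guard: Optional[str] = None   # optional condition on the transition


@dataclass
class InterfaceModel:
    name: str
    states: List[str]
    methods: List[Method]
    transitions: List[Transition]


# Example instance, loosely inspired by the STEM detector interface.
stem_detector = InterfaceModel(
    name="IStemDetector",
    states=["Retracted", "Inserted"],
    methods=[Method("Insert"), Method("Retract"), Method("GetState")],
    transitions=[
        Transition("Retracted", "Insert", "Inserted"),
        Transition("Inserted", "Retract", "Retracted"),
    ],
)
```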

3.4 Black-box testing

After a model of the interface has been defined, the next element to consider in the analysis of our problem is the testing method. For this, it is important to introduce a general view of the system as Figure 4 shows.

[Figure 4 depicts the client (test case) calling interface methods on the server, which contains the interface logic and controls the hardware (simulator); method responses return to the client, and the client can also influence the simulator.]

Figure 4 – General view of the system under test

The client (in this case, our application/test case) has access to the interface that is implemented on the server side. The interface offers its methods to the clients that connect to the server, but without exposing the real implementation of the interface. This situation matches the definition of black-box testing [3], in which a system is tested by providing inputs and analyzing the outputs without taking implementation details into consideration. However, the project aims for a step further in the abstraction by executing interface method calls in terms of transitions and analyzing the responses in terms of state changes (this concept is covered in detail in the following two chapters).
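The runnable sketch below illustrates this black-box, state-machine-level view: the test only fires triggers (interface method calls) and checks the reported state, with no knowledge of the implementation. The FakeStemDetector class is a hypothetical stand-in for the server-side interface (normally reached via COM), not FEI code.

```python
# Hypothetical sketch of black-box testing at the state-machine level.
class FakeStemDetector:
    """Stand-in for the server-side interface; the test never sees inside it."""
    def __init__(self):
        self._state = "Retracted"

    def Insert(self):
        self._state = "Inserted"

    def Retract(self):
        self._state = "Retracted"

    def GetState(self):
        return self._state


def test_insert_transition(detector):
    # Bring the interface to a known start state.
    detector.Retract()
    assert detector.GetState() == "Retracted"
    # Fire the trigger and evaluate the response as a state change.
    detector.Insert()
    assert detector.GetState() == "Inserted"


test_insert_transition(FakeStemDetector())
```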

3.5 Problem statement

The assignment focuses on creating a model-based testing framework for a part of the FEI system (the Instrument Object Model (IOM) interfaces, which are covered in the next chapter). The way to achieve this goal is by generating test cases, documentation, and wrappers automatically from an interface model. The interface model has to be able to include the essential information for describing the expected behavior of the interface. The behavior is depicted as a state machine with a set of transitions, which are mainly composed of triggers (from the same or external interfaces), conditions (guards), and an expected state as the result of the transition.

The testing part of the assignment is focused on creating real test cases from the interface model that are ready to be compiled and later executed. These test cases have to map the existing implementation and automate as much as possible to keep user modifications to the minimum. Compilation, execution, and result analysis are used as methods to validate the artifacts generated from the model, but these activities are not included in the solution scope.
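As a simplified illustration of this idea (the actual generation algorithm of the framework is described in Chapter 8), the sketch below derives a test table from the state-machine model sketched in Section 3.3: for every modeled transition it builds a trigger sequence that reaches the source state from a given initial state and then fires the transition, with the target state as the expected result. Guards are ignored here for brevity, and all names are illustrative.

```python
# Simplified, hypothetical derivation of a test table from the state machine.
from collections import deque


def path_to(model, initial, goal):
    """Breadth-first search for a trigger sequence from `initial` to `goal`."""
    queue = deque([(initial, [])])
    visited = {initial}
    while queue:
        state, triggers = queue.popleft()
        if state == goal:
            return triggers
        for t in model.transitions:
            if t.source == state and t.target not in visited:
                visited.add(t.target)
                queue.append((t.target, triggers + [t.trigger]))
    return None


def generate_test_table(model, initial_state):
    table = []
    for t in model.transitions:
        prefix = path_to(model, initial_state, t.source)
        if prefix is not None:
            table.append({
                "steps": prefix + [t.trigger],    # triggers to execute in order
                "expected_state": t.target,       # state to assert afterwards
            })
    return table


# Using the `stem_detector` model from the earlier sketch:
# generate_test_table(stem_detector, "Retracted")
# -> [{'steps': ['Insert'], 'expected_state': 'Inserted'},
#     {'steps': ['Insert', 'Retract'], 'expected_state': 'Retracted'}]
```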

The documentation part of the assignment is centered on the automatic generation of documents for the interface. By definition, the interface model is one of these elements, but further documentation has to be generated to cover all the generated artifacts.

The wrapper part of the assignment is focused on creating wrapper code automatically to provide consistent and always up-to-date and matching wrappers for target languages (C# and Python).
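Purely to illustrate the principle of model-driven wrapper generation, the hypothetical sketch below emits a Python wrapper stub from the interface model of the earlier sketch. The real framework uses Xtend templates and targets boost.python and COM-ATL; none of the names below are FEI code.

```python
# Hypothetical template-based wrapper generation driven by the interface model.
WRAPPER_TEMPLATE = '''\
class {name}Wrapper:
    """Auto-generated wrapper for {name}; delegates to the COM object."""
    def __init__(self, com_object):
        self._com = com_object
{methods}
'''

METHOD_TEMPLATE = '''\
    def {py_name}(self{params}):
        return self._com.{name}({args})
'''


def generate_python_wrapper(model):
    methods = ""
    for m in model.methods:
        params = "".join(", " + p for p in m.parameters)
        args = ", ".join(m.parameters)
        methods += METHOD_TEMPLATE.format(py_name=m.name.lower(), name=m.name,
                                          params=params, args=args)
    return WRAPPER_TEMPLATE.format(name=model.name, methods=methods)


# print(generate_python_wrapper(stem_detector)) emits an IStemDetectorWrapper
# class with insert/retract/getstate methods delegating to the wrapped object.
```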

3.6 Design Opportunities

In this chapter we have introduced all the elements that surround our project from a global point of view. We envision that our solution will be based on a process that starts with modeling the interface, after which the generation of test cases, documentation, and wrappers concludes the process. We also stated that our system is a complex system in which interfaces are interconnected with each other. Based on these two facts, three potential design opportunities arise:

Elegance/Usability As a model-based solution, modeling can become complex and hard to follow/read if the modeling tool provides many options or complicated ways to define elements. One of the main aspects of this project is keeping the model easy to read and understand so that it can also serve as documentation. Strategies such as splitting models, or keeping the model dynamic by allowing the user to define only the elements he/she needs, have to be considered in the final solution. In the end, our tool has to make producing test cases, documentation, and wrappers simpler than writing them manually.

Methodical approach The test cases have to be generated following testing technologies for state machines and established testing frameworks. It is also important that our solution executes the transitions in the order defined in the model, taking care of events and expected outputs. In the case of interconnected interfaces, there has to be control over how the interfaces are called and evaluated for the test cases. Finally, the solution has to follow a methodical approach from modeling until test/document/code generation.

Genericity The construction of the solution is based on a predefined set of IOM interfaces. These interfaces are a representative sample for which test suites, wrappers, and documentation are generated in order to define the specific requirements for the project. However, the final solution has to provide a generic testing framework that works for the IOM interfaces with only minor changes required.

Based on the system vision, the next two design opportunities are not applicable:

Impact As a pilot project, this is the first approach for building a testing framework with a model-based perspective. The integration of the project into the real development/testing process is a future step that is not included in our current scope. However, a deployment plan is considered as part of the deliverables.

Complexity Even though the global topics of testing and model-based solutions are potentially complex, this project aims to demonstrate their applicability within the company. As a new project, keeping the concepts simple but open to future improvement has to steer the solution.

4. Domain Analysis

The previous chapter discusses the domain for this project, which is model-based testing as well as code and documentation generation for microscope interfaces. The objective of this chapter is to introduce the domain while identifying the most relevant parts that are involved in our problem. The following sections give a detailed view of the microscope system architecture and the software elements to consider for an optimal model description.

4.1 Microscope software system architecture

The Microscope Software System (MSS) is a set of software utilities that make it possible to employ all the functionalities of a TEM microscope. This system enables and connects the microscope hardware to software interfaces for controlling and performing applications embedded within the microscope. In the high-level abstract representation (Fig. 5) it is visualized as a microscope PC that is connected to several pieces of embedded hardware, which vary depending on the microscope model.

Figure 5 – High-level representation of microscope software architecture

The microscope PC is composed of a set of applications and a TEM server (Fig. 6). The former are elements that provide user interfaces to operate the microscope and the latter provides software interfaces for controlling the microscope hardware. Applications are defined in terms of specific use cases abstracting the functionality of the system to a convenient workflow for the end-user. The TEM server is designed to abstract hardware and software differences for the different microscope configurations while maintaining the relevant capabilities of the subsystem.

Figure 6 – Microscope PC representation

For the scope of this project, applications represent test cases and wrappers that connect to the exposed interfaces in the TEM server. This connection happens at the layer known as the object model, for which we need to formally introduce the concept of the Instrument Object Model (IOM).

4.2 Instrument Object Model

IOM is the interface to the TEM server software and the main interface for microscope control and acquisition. The IOM is located in the object model layer, which provides access to instrument functionality for automation applications and user interfaces (UIs). IOM interfaces have a well-defined structure (based on a tree perspective) and semantics with respect to the instrument server. These interfaces do not allow bypassing, as a measure for ensuring system safety, which means that the only way to access the interfaces is through an instrument. IOM interfaces are the exposed (public) part of the TEM server and they are employed as a starting point for creating clients that connect to and use the interfaces. All the interfaces are implemented in a different layer that is hidden from the external application that is using them (black box).

Figure 7 – Server exposing IOM interfaces of specific instruments

4.3 COM interfaces

The TEM server provides software interfaces to user interface applications for controlling the microscope hardware. These interfaces define a set of methods that an instrument can support, without dictating anything about the implementation. In order to achieve this goal, the Component Object Model (COM) is used as the RPC technology for inter-process communication. COM is a binary-interface standard for software components introduced by Microsoft that includes the ability to marshal interfaces across processes. Additionally, it provides the functionality to generate .NET runtime-callable wrappers from type libraries. COM maintains a strict separation between interface and implementation. A COM interface is defined using an IDL that is processed by an IDL compiler, which generates a header file.

The fact that these interfaces are implemented with COM technology influences activities such as accessing, using, and testing them.
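As an illustration of the interface style this section refers to, the following is a minimal C++ sketch of what a COM-style interface header could look like after IDL compilation. The interface name, methods, and state enumeration are hypothetical and only mimic the shape of the FEI interfaces; they are not taken from the actual IOM.

    // Hypothetical COM-style interface sketch (not the actual IOM definition).
    // A real header would be generated by the IDL compiler and derive from IUnknown.
    #include <cstdint>

    typedef long HRESULT;            // simplified stand-in for the COM HRESULT type

    enum class StemDetectorState : std::int32_t {
        Retracted,
        Inserting,
        Inserted,
        Retracting,
        Error
    };

    struct IStemDetector {           // pure abstract class: interface only, no implementation
        virtual HRESULT Insert() = 0;
        virtual HRESULT Retract() = 0;
        virtual HRESULT GetState(StemDetectorState* state) = 0;
        virtual ~IStemDetector() = default;
    };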

4.4 Synchronous versus asynchronous

Once COM interfaces are accessed, methods are executed to perform a specific functionality on the system (microscope). Depending on the instrument and implementation style, this execution is supported in one of the following two approaches:

• Synchronous: The server interface allows parallel access by multiple clients (for example, while inserting a STEM detector, another client can change optics settings). The client waits until completion, and this creates a sequential workflow.
• Asynchronous: The server creates a second thread to execute the method, allowing the client to employ other operations while the method is executed. The completion of the action requested by an asynchronous call is reported using events.

Figure 8 – Synchronous and asynchronous example

Due to the complexity of TEM microscope instruments, both kinds of methods are defined within the IOM interfaces. FEI is striving to use synchronous methods, with asynchronous methods being gradually removed. Meanwhile, common techniques for handling asynchronous methods are using events to inform the caller about system changes or employing wait-for-completion methods that wrap asynchronous calls.
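To make the wait-for-completion technique concrete, here is a minimal C++ sketch of a blocking helper built around a hypothetical asynchronous Insert() call and its completion event. The interface and event names are assumptions for illustration; the actual IOM event mechanism is COM-based.

    // Sketch of a wait-for-completion wrapper around an asynchronous call (illustrative only).
    #include <condition_variable>
    #include <mutex>

    class InsertCompletionWaiter {
    public:
        // Called by the event handler when the server reports completion.
        void OnInsertCompleted() {
            std::lock_guard<std::mutex> lock(mutex_);
            completed_ = true;
            cv_.notify_all();
        }

        // Blocks the caller until the completion event has been received.
        void Wait() {
            std::unique_lock<std::mutex> lock(mutex_);
            cv_.wait(lock, [this] { return completed_; });
        }

    private:
        std::mutex mutex_;
        std::condition_variable cv_;
        bool completed_ = false;
    };

    // Usage sketch (waiter registered as event listener beforehand, see Section 4.5):
    //   detector->Insert();   // returns immediately (asynchronous)
    //   waiter.Wait();        // synchronous facade for the caller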

4.5 Events

Inside the microscope software, interfaces and hardware are related (for example, turning on the high voltage requires a FEG source that is powered on). This means that a change in one module can affect one or more of its associated modules. The current MSS propagates such changes by implementing events. Events are notifications associated with state changes of a specific instrument. These state changes are value changes in interface properties triggered by a software or hardware module. The triggers can be part of the interface or come from a different interface. In both cases, they produce a notification that is received by all the registered clients of the interface. A client needs to register its event listener to handle these notifications.

Events are an essential part of interface behavior because they represent how interfaces have to notify state changes. Similarly to the interface definition, events are also defined in COM technology.
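The following C++ sketch illustrates the listener-registration idea described above. The listener interface and registration method are hypothetical; in the real system, event interfaces are defined in COM (see Figure 9).

    // Illustrative event listener registration (names are assumptions, not the IOM API).
    #include <vector>

    enum class StemDetectorState { Retracted, Inserting, Inserted, Retracting, Error };

    // Client-side listener interface for state-change notifications.
    struct IStemDetectorEvents {
        virtual void OnStateChanged(StemDetectorState newState) = 0;
        virtual ~IStemDetectorEvents() = default;
    };

    // Simplified server-side notification point: all registered clients are notified.
    class StemDetectorEventSource {
    public:
        void Register(IStemDetectorEvents* listener) { listeners_.push_back(listener); }

        void NotifyStateChanged(StemDetectorState newState) {
            for (IStemDetectorEvents* listener : listeners_)
                listener->OnStateChanged(newState);
        }

    private:
        std::vector<IStemDetectorEvents*> listeners_;
    };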

Figure 9 – Example of a COM Event interface

4.6 Interface behavior

IOM interfaces follow a state machine approach. The key concept of this approach is that every interface owns an internal state machine implementation, which is influenced by internal or external triggers. These triggers are generated by methods or events from IOM interfaces as well as by hardware changes. The triggers produced by methods of the same interface are known as external triggers, while triggers coming from hardware changes are defined as internal triggers.

Figure 10 COM interface definition – methods and state enumeration

As we mentioned before, COM interfaces use encapsulation to hide the implementation from the interface definition. As public elements, the interface exposes two main components that are essential for tracking the implemented behavior: the state enumeration and the methods. The state enumeration contains all the valid states for an interface and the methods are the operations of the interface that can externally stimulate state transitions.

As we can see in Figure 10, a typical COM interface defined through an interface definition language (IDL) does not include a state machine; therefore, an approach to describe and model the state machine is required for representing behavior. Describing this behavior enhances the interface definition and makes it possible to test the implemented behavior against this definition.
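As a sketch of how such behavior could be described next to the interface definition, the following C++ fragment lists the transitions of a hypothetical detector state machine as (current state, trigger, next state) tuples. The states and triggers are illustrative assumptions, not the actual IOM model.

    // Illustrative behavior description as a transition table (not the actual IOM grammar).
    #include <string>
    #include <vector>

    enum class State { Retracted, Inserting, Inserted, Retracting };

    struct Transition {
        State from;           // state before the trigger
        std::string trigger;  // method (external trigger) or hardware/event (internal trigger)
        State to;             // expected state after the trigger
    };

    // Each entry corresponds to one arrow in the interface state machine.
    const std::vector<Transition> kDetectorTransitions = {
        {State::Retracted,  "Insert",           State::Inserting},
        {State::Inserting,  "InsertCompleted",  State::Inserted},   // internal trigger (event)
        {State::Inserted,   "Retract",          State::Retracting},
        {State::Retracting, "RetractCompleted", State::Retracted},  // internal trigger (event)
    };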

4.7 Using interfaces

The last element to take into account is how to get access to the existing server interfaces. The only way to use interface functionality in a client is by following these steps:
1. Connecting to the server: Creating an instrument and enabling the connection to the server.
2. Traversing the interface tree: Accessing a specific interface requires navigating within the tree. A connected instrument has access to all the existing interfaces in the server, which are organized in a hierarchical tree.
3. Executing interface methods: Executing the methods from the selected interface. At this point, any method execution flows through the server and interacts with related hardware and/or interfaces.

Figure 11 – Common workflow for using an interface
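The sequence in Figure 11 can be summarized with the following C++ sketch of a client session. All class and method names below are hypothetical stubs that only mirror the three steps above; they are not the real IOM API.

    // Illustrative client workflow: connect, traverse the tree, execute a method.
    #include <iostream>
    #include <memory>
    #include <string>

    struct InterfaceNode {
        std::string name;
        std::shared_ptr<InterfaceNode> GetInterface(const std::string& child) {
            // 2. Traversing the interface tree: each call navigates one level deeper.
            return std::make_shared<InterfaceNode>(InterfaceNode{name + "/" + child});
        }
        void Insert() {
            // 3. Executing an interface method: in the real system this flows through
            //    the TEM server and interacts with the related hardware.
            std::cout << "Insert called on " << name << "\n";
        }
    };

    struct Instrument {
        static std::shared_ptr<InterfaceNode> Connect(const std::string& host) {
            // 1. Connecting to the server: creating an instrument and enabling the connection.
            return std::make_shared<InterfaceNode>(InterfaceNode{host + "/IOM"});
        }
    };

    int main() {
        auto root = Instrument::Connect("TemServer");
        auto detector = root->GetInterface("Detectors")->GetInterface("StemDetector");
        detector->Insert();
        return 0;
    }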

5.Feasibility Analysis

Having defined the problem and domain, this chapter presents a feasibility analysis describing issues and risks related to the project. The identified elements are introduced together with the mitigation/resolution strategy taken for the project. In some cases, the element was placed out of the scope of the project, but it is potentially an aspect to consider for future work.

5.1 Challenges and alternatives

This section explains the challenges that have been encountered during the development of a project with these characteristics.

5.1.1. Model abstraction level As we described in Section 3.3, modeling interfaces plays a key role in this project. This is based on the fact that the project’s goal is to improve the testability of IOM interfaces, and for achieving that we need a representation of the interfaces (models). Defining a proper level of abstraction influences different factors in the understanding and use of an interface in the microscope domain. A high-level interface description is easy to define and understand for usage purposes. However, depending on user expertise and use cases, different abstraction levels are needed.

The alternative chosen to approach this challenge is dynamic modeling. A dynamic model is a model that supports defining variable elements, which means that the model is not fixed to a default description. In this way, a model description can be defined with a high or low abstraction level according to user needs.

5.1.2. Standardizing testing and wrapping The expected artifacts (test cases and wrappers) defined in the problem statement and goal of the project (Section 1.4, Section 1.3, and Section 3.5) can be defined in multiple ways. Depending on technology, frameworks, and specific strategies, these artifacts vary in content and functionality.

In the case of testing, several technologies (Google Test, NUnit, and Boost) and strategies (unit, integration, or system tests) are used within FEI software, depending on factors such as interface type and development team. A similar situation holds for wrappers. Covering all this variety is not feasible in the timespan of the project; therefore, we made an early decision (GoogleTest for testing and Python, COM, and C++ for wrappers) and kept extensibility in mind.

5.1.3. Project deployment At the inception of the project, it was decided that final deployment of the solution would not be covered within the project scope. However, deployment has to be analyzed as a future plan. From that perspective, the challenge consists of how to insert this project in the developer’s process workflow.

Inserting a new element into a development process (even when this element replaces an existing one) has to be considered carefully. A person has to be able to weigh the benefits and advantages against the cost of learning and implementing that element. Those benefits and advantages are defined in terms of producing and using artifacts, matched with the time they save for the user.

5.1.4. Technology support for project requirements Currently, there are several model-based technologies (Section 1.4) approaching different aspects of software development across the software lifecycle. Some technologies (for example, Dezyne) partially cover the objectives (Section 1.3) of this project, but due to the specific needs of this project, none of them fulfills them completely. Specific tools can provide features for modeling, testing, and/or wrapping, but they also carry restrictions. These restrictions are associated with deployment requirements, modeling features, and open or proprietary licenses, among others.

Several technologies are available, but we need to pick one that supports our needs: a technology that provides extensibility and can be adapted to our system.

5.1.5. Structural difference in the interfaces As we stated in Section 4.1, the MSS is a complex structure with a large number of different interfaces and applications. The system contains a wide variety of interfaces implementing different strategies. Examples of these differences are the ways of handling events, errors, and/or state transitions. These differences influence how an interface must be modelled.

Together with the main stakeholders, it was decided to work with a small set of IOM interfaces and assume they are representative of FEI software interfaces. Although we are considering a small set, we strive to create a flexible and extensible modeling language.

5.1.6. Automation of repetitive work A key goal of this project is to automate repetitive activities that are tedious and mechanical and, as a consequence, error prone in nature. Those activities comprise the definition, testing, and wrapping of IOM interfaces. All these interfaces contain similar parts that follow certain high-level patterns, repeated and applied to different elements.

Having a pattern does not ensure that all the required parts for creating test cases and wrappers can be automated. There are parts that, depending on technology and/or the FEI software system, cannot be related to a predefined pattern. Based on this fact, the necessity of defining automated and non-automated parts emerges.

A model-based project aiming at automation of repetitive work has to take into account that not everything can be captured in a model. The parts that cannot are commonly related to interface-specific requirements. A solution is to automate the generic repetitive parts while keeping a door open for manual integration of the specific parts.

6.System Requirements

After defining the problem and domain, the system requirements of the Model-Based Interface Framework (MBIF) project are analyzed. The requirements are introduced from the view of the main scenarios/goals of the project and then decomposed into a CAF (Customer Objectives – Application – Functional) perspective. This chapter only presents the high-level requirements. The last section covers the non-functional (quality) requirements and defines a set of metrics for them.

6.1 Requirement gathering process

The requirements phase occurred in parallel with the software development activities. There were two phases in this process:
• High-level requirement definition: This phase started at the beginning of the project and lasted until the middle of the third month. This stage was focused on defining goals and general requirements without detailed specifications. Decisions about technologies and expected artifacts were the main outcome.
• Detailed requirement specification: This phase was centered on a clear and specific definition of the previously established high-level requirements. Requirements were defined in an agile style, i.e., by taking the results of the prototype and technology research and updating the requirement definition. This concurrent approach resulted in the generation of options and recommendations.

Both high-level and detailed requirements were discussed with the involved stakeholders. During both phases, requirements were defined and prioritized. Three priority categories are identified:
1. High – A requirement that the project must fulfill.
2. Medium – A desirable requirement, but only implemented if time permits.
3. Low – Optional requirement not achievable due to external constraints; a recommendation is proposed for future work.

6.2 Main requirements

The original idea behind this project provides the main requirements and use cases that drive this project:

FEI is aiming to improve the IOM interface quality by:
1) Guaranteeing its behavior through testing
2) Automatically generating wrappers to other target languages
3) Providing up-to-date and complete documentation

This statement sets the project scope to users of the IOM interface and is thus FEI-specific. The intended users of the software solution are the current software engineers who are developing interfaces for IOM. The interface developers are the main focus of this requirement analysis. Although the problem is centered on FEI IOM, the solution should be as generic as possible for further reuse in other situations.

The common key drivers (customer objectives) that are stated in the main requirements are automation and quality. Specific application drivers can be derived from them:
• Guaranteeing interface behavior through testing – Testability: Checking the correct behavior of an interface through testing, where behavior is defined in terms of transitions within a state machine. The main requirement is to generate tests with maximum functional coverage (covering all state machine transitions).
• Automatically generating wrappers – Adaptability: Creating wrapper code automatically to provide consistent, always up-to-date, and matching wrappers.
• Providing up-to-date and complete documentation – Documentation: Creating automatic documentation based on interface models and generated artifacts.

From these application drivers, we identified the following use cases (Figure 12).

Figure 12 – Main use cases

As shown in Figure 12, the three application drivers have become specific use cases that interact with the main actor. Additionally, a new use case is created in the form of the “Model interface” case. The reason for this use case is that all the other use cases, and in general the essence of the project, are based on modeling interfaces. This model is the bridge between abstracting the definition of an interface and generating artifacts that employ the elements (methods) of that interface. The use cases related to the interface developer are mostly focused on the creation of specific models (interface, test, and wrapper).

An interface model is an abstract representation of an existing interface that can include:
• Properties and attributes
• Methods and operations
• A behavior description in terms of a state machine, in which a behavior is defined as a state transition

A test model is a set of values that are required to configure the generation of test cases and it is closely related to the employed testing technology.

A wrapper model is a definition of parameters required for creating a wrapper for a specific interface.

6.2.1. Main use cases

Table 5 – Model interface use case specification
Name: Model interface
Description: The user wants to create a model for an existing interface.
Actor: Interface developer
Main success scenario:
1. User creates a model element
2. System displays model editor
3. User models interface properties
4. User models interface methods
Related scenarios:
Alternate 1a: Model test settings
Alternate 1b: Model wrapper settings
Alternate 4a: Model behavior

Table 6 – Model behavior use case specification
Name: Model behavior
Description: The interface developer wants to describe the behavior of an interface in terms of a state machine.
Actor: Interface developer
Main success scenario:
1. User models a state machine
2. User defines states
3. For each state in the state machine
3.1. User models transitions
3.2. For each transition in the state machine
3.2.1. User defines transition elements (triggers, guard, events and final state)
Related scenarios: -

Table 7 – Model test settings use case specification
Name: Model test settings
Description: The interface developer wants to create a model for capturing test case generation values.
Actor: Interface developer
Main success scenario:
1. User defines testing interface target
2. User defines common testing parameters
3. System displays testing options according to defined interface
4. User assigns values to testing options
Related scenarios: -

Table 8 – Model wrapper settings use case specification
Name: Model wrapper settings
Description: The interface developer wants to create a model for capturing wrapper generation values.
Actor: Interface developer
Main success scenario:
1. User defines wrapper interface target
2. User defines common wrapper parameters
3. System provides feedback about the model
Related scenarios: -

Table 9 – Generate test cases use case specification
Name: Generate test cases
Description: The interface developer wants to create a test suite from a modelled interface.
Actor: Interface developer
Main success scenario:
1. User models interface
2. User models test settings
3. User requests test case generation
4. System creates test case files according to user-defined models
Related scenarios:
1a. Model interface
2a. Model test settings

Table 10 – Generate wrappers use case specification
Name: Generate wrappers
Description: The interface developer wants to create a wrapper from a modelled interface.
Actor: Interface developer
Main success scenario:
1. User models interface
2. User models wrapper settings
3. User requests wrapper generation
4. System creates wrapper files according to user-defined models
Related scenarios:
1a. Model interface
2a. Model wrapper settings

Table 11 – Generate documentation use case specification
Name: Generate documentation
Description: The interface developer wants to create documentation automatically from a model.
Actor: Interface developer
Main success scenario:
1. User models interface
2. User requests documentation generation
3. System creates documentation files according to user-defined model
Related scenarios:
1a. Model interface

After analyzing the use cases and application drivers, we derived the following main features required for this project:

As we can see in Fig. 13, modeling is a key feature that is shared among all the application drivers of this project. This situates our project in a model-based approach in which the model is the entry point and main element that drives development.

In parallel, automation as a key driver requires all features to implement automation strategies for creating artifacts from the entry model.

Figure 13 Main features for the MBIF project

6.2.2. Feature 1: Modeling interface definition This feature provides a method for writing models related to interfaces. The models that this feature supports are interface, test, and wrapper models from a high-level description.

Current modeling approaches and notations have to be consolidated into a common modeling language for the end users (interface developers). The outputs of this feature are inputs for related features.

Table 12 – Functional requirements for the modeling interface definition feature
REQ-MID-01 (High): The system has to provide a notation for creating models that allows the user to describe interfaces consistently, considering signature and behavior, based on the following sub-requirements.
MID-01-01 (High): The notation has to include elements for method definition within an interface.
MID-01-02 (High): The notation has to include elements for attribute definition within an interface.
MID-01-03 (High): As a part of an interface model, the system must allow the definition of a state machine describing the interface behavior.
MID-01-04 (High): The system must model relationships between interfaces, including inheritance and events.
MID-01-05 (High): The notation must model external and internal triggers (modeling not only the interface, but also a way of forcing asynchronous events from hardware).
REQ-MID-02 (High): The system has to provide a notation for creating test models that allows the user to define the following testing-related settings: testing framework; special setups and scenarios.
MID-02-02 (Low): Indicating pre- and postconditions: special operations that happen before and after testing.
REQ-MID-03 (High): The system has to provide a notation for creating wrapper models that allows the user to define wrapper parameters for automatic generation, including: wrapper technology; threading model; exception handling.
REQ-MID-04 (Low): The system has to provide a method to define microscope configurations that allow specific hardware scenarios to be defined.

6.2.3. Feature 2: Test case generation This feature takes as input an interface model and then generates a test suite following indications modelled through a test model. The test suite should be formed by a set of test cases derived from the interface behavior description. These test cases have to be defined in terms of a formal testing framework and contain a consistent structure for further compilation and execution.

Test cases validate the proper functioning of an interface and they are a parameter to measure the quality of the interface. By standardizing and automatically producing test cases, the quality of an interface can be better guaranteed. Currently, test case creation is a manual process that tends to be mechanical and repetitive in terms of development, making it a tedious activity that can be error prone.

Furthermore, proper test case coverage is currently a task of the developer. This is hard to prove, and code coverage is not a good indicator. This project aims to build a system that automates the test case generation, providing full coverage of an interface based on its state machine behavior.
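To illustrate the coverage idea (one test per state machine transition), the following C++ sketch enumerates the transitions of a behavior model and derives one test description per transition. It is a minimal sketch under assumed data structures, not the project's actual generation algorithm.

    // Minimal sketch: derive one test case per transition of a modelled state machine.
    // Data structures are assumptions for illustration only.
    #include <iostream>
    #include <string>
    #include <vector>

    struct Transition {
        std::string from;     // source state
        std::string trigger;  // method or event that stimulates the transition
        std::string to;       // expected resulting state
    };

    struct TestCase {
        std::string name;
        std::string description;
    };

    // Full coverage here means: one generated test case for every modelled transition.
    std::vector<TestCase> DeriveTestCases(const std::vector<Transition>& transitions) {
        std::vector<TestCase> tests;
        for (const Transition& t : transitions) {
            tests.push_back({
                t.from + "_" + t.trigger + "_" + t.to,
                "Bring interface to state " + t.from + ", invoke " + t.trigger +
                ", and verify the interface reports state " + t.to + "."});
        }
        return tests;
    }

    int main() {
        std::vector<Transition> model = {
            {"Retracted", "Insert", "Inserting"},
            {"Inserting", "InsertCompleted", "Inserted"}};
        for (const TestCase& tc : DeriveTestCases(model))
            std::cout << tc.name << ": " << tc.description << "\n";
        return 0;
    }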

The specific definition of the functional requirements is presented in Table 13.

Table 13 – Test case generation functional requirements
REQ-TCG-01 (High): The system must generate test case files with code in a specific testing framework, ready for compilation and later execution.
TCG-01-01 (High): The generated test case files have to contain a structure consistent with the FEI software infrastructure.
REQ-TCG-02 (High): The generated test case files must have full coverage of the interface behavior, where: interface behavior is defined through the interface state machine; full coverage is defined as enough test cases to validate all transitions within the state machine.
REQ-TCG-03 (Low): The system should generate specific test cases according to user needs. These kinds of test cases fall outside the state-machine-based behavior testing.
REQ-TCG-04 (High): All generation tools must be automatically deployable as part of the FEI build and smoke infrastructure. This deployment should be a simple procedure.
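As an indication of what a generated test case might look like in the chosen framework (Google Test), here is a hedged C++ sketch for a single transition of a hypothetical detector interface. The fixture, interface, and helper names are assumptions; the real generated files follow FEI's own structure (TCG-01-01).

    // Illustrative generated test case for one state machine transition (not actual FEI output).
    #include <gtest/gtest.h>

    // Assumed minimal interface under test; in reality this would be the IOM interface wrapper.
    enum class State { Retracted, Inserting, Inserted, Retracting };

    class FakeStemDetector {
    public:
        State GetState() const { return state_; }
        void Insert() { state_ = State::Inserting; }   // trigger under test
    private:
        State state_ = State::Retracted;
    };

    // Transition under test: Retracted --Insert--> Inserting
    TEST(StemDetectorStateMachine, InsertFromRetractedEntersInserting) {
        FakeStemDetector detector;                        // setup: interface in source state
        ASSERT_EQ(State::Retracted, detector.GetState());

        detector.Insert();                                // stimulate the transition

        EXPECT_EQ(State::Inserting, detector.GetState()); // verify the expected target state
    }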

6.2.4. Feature 3: Interface wrapper generation Wrappers are software elements that make software functionality available to other software with incompatible characteristics. In the scope of our project this is restricted to interfaces written in a programming language being used by other interfaces written in a different programming language.

Generating wrappers is a manual activity that tends to be repetitive in terms of the process. Wrapping an interface is defined as linking interface properties, methods, and way of working to a specific target language. All of these operations follow defined steps, and these steps have a recognizable pattern that can be automated.

The wrapper feature involves creating a code file (in a language defined by the user) that wraps the functionality or usage of an IOM interface. These code files have to consistently expose the original functionality of the described interface. As a general requirement, these code files have to be generated targeting Python and COM technologies.

Table 14 – Interface wrapper generation requirements
REQ-IWG-01 (High): The system has to generate wrapper files containing structured and valid code based on wrapper technologies employed at FEI.
IWG-01-01 (High): Generated wrapper files must have consistent code for the following elements: target language specified by the user; mapping of the existing interface to the generated wrapper; specific structures for supporting existing interface usage within the wrapper.
REQ-IWG-02 (High): The system must generate an interface implementation header from the model.
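The following C++ sketch gives a feel for what a generated wrapper could look like: a thin class that maps the raw interface calls onto a simpler API for the target environment. The wrapped interface and error handling are illustrative assumptions, not FEI's generated code.

    // Illustrative wrapper sketch: exposes a raw COM-style interface through a simpler API.
    // Names and error handling are assumptions for the example only.
    #include <stdexcept>
    #include <string>

    typedef long HRESULT;

    struct IStemDetector {                       // raw interface (would come from the IDL header)
        virtual HRESULT Insert() = 0;
        virtual HRESULT Retract() = 0;
        virtual ~IStemDetector() = default;
    };

    class StemDetectorWrapper {
    public:
        explicit StemDetectorWrapper(IStemDetector* raw) : raw_(raw) {}

        // Mapped methods: convert HRESULT-style errors into exceptions for the client code.
        void Insert()  { Check(raw_->Insert(),  "Insert");  }
        void Retract() { Check(raw_->Retract(), "Retract"); }

    private:
        static void Check(HRESULT hr, const char* operation) {
            if (hr != 0)                          // 0 stands in for S_OK in this sketch
                throw std::runtime_error(std::string(operation) + " failed");
        }
        IStemDetector* raw_;
    };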

6.2.5. Feature 4: Providing interface documentation Documentation is an essential element of an understandable system. Keeping documentation up-to-date is an issue when creating it manually because of software changes. Changes in functionality imply updating existing documentation, which can be quite time consuming. Automatic documentation generation is the solution to this problem.

This feature is about automatic documentation generation from the interface model. This documentation has two main elements:
• Interface documentation: The interface documentation is represented by the model definition, making the model itself a piece of documentation that helps to understand the interface behavior and functionality. The expected output is a human-readable interface description.
• Artifact documentation: The previous features generate code files as their outcome. Even when the user has little interaction with these files, it is essential to keep them concise, structured, and clear. The first two aspects are part of the artifact generation, and the last one is approached by documenting: meaningful code comments have to be generated alongside the code files (test cases and wrappers).

The specific definition of these requirements is expressed in Table 15.

Table 15 – Providing up-to-date and complete documentation requirements
REQ-PID-01 (High): The system has to provide the interface model as part of the interface documentation. This model is expressed in the grammar language with a clear and understandable structure.
REQ-PID-02 (Low): The system should create additional documents that reflect the interface specification, including visual elements and diagrams.
REQ-PID-03 (Medium): The system must document any generated code artifact by means of code documentation, making the artifact understandable for users.
REQ-PID-04 (Low): Generating test documentation: writing a test execution report for the generated test cases, including passed and failed tests, and coverage.

6.3 Non-functional requirements

In this section, we list the non-functional requirements we want to satisfy with this project; they are introduced in priority order.

6.3.1. Extensibility First, as a pilot project, not all interfaces have been covered. As a result, the following situations can occur:
• New interfaces that do not match the proposed solution are found, requiring additional modules to handle them.
• Evolution of interfaces or wrapper target languages can result in modifying the existing way of dealing with them.
• New testing frameworks or wrapper target languages that have not been covered are added.
Therefore, the designed approach must be easily extensible in these directions.

Extending the system in these directions should be achieved by extending the modules (modeling module, wrapper generator, testing generator). The system must be open to the addition/modification of its main components.

Table 16 – Extensibility requirement description
Coupling – Can the user easily separate one or more parts of the solution process? (Separate parts of the process are: language [grammar, validations, and scopes] and code generation [wrapper, documentation, abstract test, and concrete test].) Metric – Distinguish:
• Easy: Parts are separated in packages and components that allow a visual and logical separation.
• Reasonable: Some sections are still coupled because of implementation reasons.
• Difficult: Many dependencies and cross-referenced elements.
Inserting / Replacing – How much effort is required to insert/replace/modify an existing component/part of the system? (For example, extending the modeling definition, or replacing generation to add a new testing framework.) Metric – Distinguish:
• Easy: Modifications involve a small number of changes in the full structure (mostly updating datatypes or calls).
• Reasonable: Modifying involves changing structures or classes in other components (classes depending on existing elements).
• Difficult: Changes in the entire structure are required to support modifications.

6.3.2. Ease of use The system to be developed needs to be incorporated in the software development cycle, and this can become an obstacle. Learning the process to model interfaces has to be as simple and natural as possible. A common language and structure has to be implemented that mimics other common tools or languages employed during development. The system has to have a straightforward process with a clear and concise language for modeling and specifying interfaces.

Table 17 – Ease of use requirement description
Time required for learning the tool – How long does it take to get used to the tool? Metric: Time. To be defined based on intuition and experience.
Time required for defining a state machine – How long does it take to learn the usage of behavior descriptors? Metric: Time. To be calculated in terms of simple and complex state machines (cross references to external interfaces).
Time required for creating a test suite – How long does it take to learn the process of creating a test suite? Metric: Time. To be defined based on intuition and experience.
Time required for creating a wrapper – How long does it take to learn the process of generating a wrapper? Metric: Time. To be defined based on intuition and experience.
Intuitiveness – How easy is it to use the tool and associate the tool concepts with the domain concepts? Metric – Distinguish:
• High: Easy to associate concepts and to use them directly.
• Medium: Easy to associate the concepts, but their usage is not completely clear.
• Low: Concepts are not clear enough to associate and use them.

6.3.3. Flexibility / Configurability IOM interfaces have a variety of elements, behaviors, and relations to represent. The solution has to provide a flexible representation of interfaces that allows modeling different interface characteristics.

Table 18 – Flexibility / Configurability requirement description
Modeling language options – Does the language include enough elements, labels, and sections to describe the IOM interfaces sufficiently? Metric – Distinguish:
• Reasonable: The modeling language includes enough reserved words to represent the main elements of an interface. Methods, state machine, test and wrapper settings are required.
• Impossible: The language does not support definition of the minimum interface elements.

6.3.4. Effectiveness The generated artifacts should be as close as possible to manually developed artifacts. In relation to automation, effectiveness is defined as the degree of interaction required with the generated files.

Table 19 – Effectiveness requirement description
Artifact modifiability – How many modifications are required in the generated files in order to compile (and then execute)? Metric – Distinguish:
• Easy: Modifications are restricted to initializing values.
• Medium: Modifications are required for additional implementation specifications (for example an IDL library).
• Hard: Modifications are required in the structure or the creation of new elements.

7.System Architecture

Designing a software architecture is the process of defining a structured solution that meets all functional requirements while optimizing quality attributes. The previous chapter presented the requirements for the MBIF project that guide us through the elements defining the software architecture. This chapter covers the relevant decisions, how they influenced our reasoning, and the resulting high-level architecture of the project solution.

7.1 High level elements As we stated in Sections 1.3 and 6.2, a key element of our project is modeling. A model is an abstract representation of an object, and for this project those objects are existing interfaces of the FEI software system. Figure 12 shows that all our main use cases are connected with a modeling activity, which leads our project to a model-based approach.

The project features (6.2.2., 6.2.3., 6.2.4., and 6.2.5.) raise the necessity of four main elements:
• Modeller: A model editor that supports interface definition, test, wrapper, and documentation models. Models are the core element in our project and the main input for the specific artifact generation. The modeller has to provide the user with the capability to model an interface at a high abstraction level and to describe the specifics for test, wrapper, and documentation generation.
• Test generator: A generator that uses the interface and test models to create test cases according to a testing strategy (8.4.1.) and framework.
• Wrapper generator: A generator responsible for the creation of wrappers; similar to the test generator, it requires an interface model and a wrapper model.
• Interface documentation generator: Originally planned as an additional component, it was merged with the previous generators based on the premise that documentation has to be generated for the generated files (test cases, wrappers, and interface model).

Figure 14 – Main elements derived from the requirements

7.2 Modeling approach decision To decide which modeling approach to use, we evaluated the following three options. These options were evaluated by performing a literature survey using a predetermined set of criteria (presented below):

• Model-based tools: Covering existing model-based software development tools. The modeling strategy is fixed and these tools offer common functionalities such as code generation (implementation code) and model validation. The models are defined in a general-purpose abstraction.
• Model-based testing tools: A specialized subset of model-based tools centered on testing. Test case generation and/or execution against an existing implementation are the main focus of these tools. Modeling features are oriented towards testing.
• Domain specific language (DSL) tools: Whereas the previous two approaches contain an implemented method (general purpose) for modeling elements, DSL tools give a framework to define customized modeling strategies (specific purpose).

The main difference between the first two options and the last one is the possibility for customization, with regard to both modeling options and generated output files. The first two offer fixed modeling and generation that have to be configured to the specific situation and extended if required; a DSL provides a common framework for defining the modeling options and the files to be generated. The following criteria are used for this decision:

• Customization of language/model definition: Does the modeling strategy provide enough elements for describing our models (interface, testing, and wrapping models); and if not, does the approach manage modeling extensions easily?
• Customization of artifact generation: Generated artifacts have to interact with the existing software system, so the file generation has to be customizable.
• Simplicity of model definition (learning curve): Expressing a model has to be simple and intuitive, diminishing errors due to the specification technique. A model has to be understandable for different users.
• Maturity: Maturity is defined by the usage of the technology on the market and its current support and development.
• Extensibility: The current project has an initial goal to achieve, but depending on the implementation success, it could be extended to include new scenarios and alternative paths; the solution has to be able to cope with such extensions. The solution should integrate with other technologies and should be extendable with new features.
• Licensing cost: This project aims at testing the potential of using new model-based technologies for automatic code generation, making it a feasibility and research project. Open-source tools have a higher priority than proprietary software (unless the proprietary software clearly exceeds the requirements).
• Cost of usage/integration: This covers the effort required to employ a modeling approach in our solution. Integrating an option requires effort in terms of development (extending or creating features) and/or configuration (adapting functionality).

The criteria are derived from the functional (6.2) and non-functional requirements (6.3) of our project. These criteria are applied first to the model-based approaches and then to the specific tools and technologies. Appendix A: Model-based approaches shows the specific comparison tables that support the decision of using a DSL approach.

Based on these criteria, it was decided to go with the DSL approach. This decision is supported by the fact that we have a specific domain and requirements that are not suitable for general model-based development environments. By using a language definition technology, we can fulfill all the objectives of our project and include specific settings and elements from the company codebase/guidelines. Defining our own language allows us to have a solution that captures exactly the language we need/want to model, and controlling the code generation will produce all the necessary artifacts (code, test, and documentation files).

We have decided to employ the Xtext/Xtend framework for building our solution, based on the balanced score of this technology against our criteria (Figure 61 – Comparison table between DSL tools). Using Xtext/Xtend allows us to define our language in a textual way, which concentrates the complexity in the language/grammar definition (Xtext). Additionally, the code generator is completely open and fully customizable through the Xtend language; hence, creating different target files (wrappers for different target languages, test files in different testing frameworks, and any documentation file) is possible. Finally, the tool is actively supported and several companies have adopted solutions based on it.

7.3 Component-based architecture Software is a complex structure and it must be built on a solid foundation based on the requirements and the specific situation. This foundation is known as the software architecture; it is composed of a set of significant design decisions about the organization of a software system: selecting the structural elements that define how our system will look from a high-level perspective and how it will evolve through time. These design decisions provide a conceptual basis for system development, support, and maintenance.

For the sake of this project, the following factors (ordered by importance) steer the decision and selection of the software architecture and the later design of the system:
1. High-level requirements
2. Non-functional requirements
3. Modeling approach decision
4. Functional requirements

Our high-level requirements are focused on generating artifacts for improving the testability (test cases), adaptability (wrappers), and documentation of the existing FEI IOM interfaces. When we merge them with the modeling decision (DSL approach), we identify our project as a greenfield project in terms of implementation. Even though existing FEI software interfaces have to be mapped to our tool, selecting a DSL approach gives us the freedom to implement a solution without the constraints imposed by prior work.

Another important requirement is to keep the separate output features (generated artifacts) also separate in the design (to enhance maintainability and extensibility). Test cases and wrappers share the same source (an existing interface, which is also a system feature), but creating these elements involves completely different processes. It is essential to keep a clear division of these activities in our system architecture.

From the perspective of non-functional requirements, a key driver for selecting an architectural pattern is the future evolution of our solution. Within the scope of the project, a set of representative interfaces has been covered, but the FEI software system is big and complex. Therefore, the design of the system should cope with changes, extensions, and optimization of the current features (modeling and artifact generation).

The next item to consider is which modeling approach to use. By choosing the DSL approach we can create our own modules and implement our own strategies instead of having to adapt to the ones provided by an existing tool. A DSL framework (such as Xtext/Xtend) enhances this aspect and abstracts communication between main components (7.1).

After considering the previous aspects, we have decided to implement a component-based architecture [11] for our system. It provides a higher level of abstraction and divides the problem into sub-problems, each associated with component partitions. The following points present the motivation for using a component-based architecture:
• With a new project, we can lay a solid foundation for future work.
• Different processes are supported and kept separate, allowing a clear definition and modularity. Modeling interfaces, creating test cases, and creating wrappers are easily separated into specific development modules that can handle different technologies. Splitting responsibilities into different submodules ensures system understandability for maintenance and future development.
• Components are modular, portable, replaceable, and reusable, which matches the long-term goal of the project (covering more FEI interfaces).
• Dividing the system into components creates a clear separation of concerns, and the defined interfaces make it possible to replace component implementations without affecting the system as a whole, while improving testability options.
• The chosen DSL technology, Xtext/Xtend, supports model-based development/architectures.

Figure 15 shows a high-level view of the component-based architecture implementing our high-level requirements (adopting the main elements we identified at the beginning of this chapter in Section 7.1). Additionally, it shows the system workflow from input (interface) to expected output (generated artifact).

The following sections dive into the main components and their specific responsibility and how they connect and interact with the system from a high-level perspective.

7.4 The modeling language The modeling language is the specific notation used to express a software interface. It is composed of a set of rules (grammar), options (scoping), and conditions/restrictions (validation) that define how an interface is modelled. The main components of the modeling language and their roles are:
• Grammar: The grammar is the cornerstone of the modeling language and represents our metamodel. It is defined in a domain-specific language designed for the description of textual languages, here used for describing software interfaces. The main idea is to describe the concrete syntax and how it is mapped to an in-memory representation – the semantic model.
• Validation: This package contains the static analysis modules for the defined grammar. It allows restricting the grammar usage by conditioning the syntax usage according to the logic of our domain. These rules define when a model element is correct or not, and they are evaluated after the element has been written.
• Scoping: Similar to the validation, the scoping component also limits the grammar usage, but from the perspective of restricting the options available to the user. The main difference with validation is that scoping is evaluated before the user defines a model value, i.e., scoping limits the modeling options that are visible to the user.
• Tree parser: Interface for communication between the modeling language and other components, sharing the structure of how a model is written.

Figure 15 – Component-based architecture for our project

Figure 16 – Modeling language component

7.5 The editor The editor is the application with which a user can write an interface model. The editor is the interface that interacts with the end user of our system. An editor can be any tool that allows writing text files. However, in our architecture, the editor represents a component that is connected to the modeling language. Through this connection, the editor is able to understand the metamodel we defined in our grammar. The editor supports the definition of specific user-friendly feedback to assist the user in writing correct models.

From a high-level view the editor is composed of two main elements:
• Textual editor: Work area in which the user can write a model.
• Supporting features: They interpret the modeling language definitions and provide the textual editor with the necessary formatting rules.

The editor’s final output is the syntactically verified textual model written by the user, which is received by the next component in the process workflow, the parser.

7.6 The parser The parser is responsible for analyzing a string of symbols in the interface modeling language conforming to the rules of our formal grammar. This is achieved through the semantic mapper interface that is created by grouping all the components of the modeling language. After this analysis, the parser turns the textual model into a semantic model, which is a tree structure recognizable within our system (converting the human-readable textual model into a machine-readable semantic model).

Figure 17 – Editor component

Figure 18 – Parser component

7.7 The generator All the previous components are responsible for transforming the user-defined interface model to a machine-ready model. The generator takes this model and produces specific artifacts according to the requirements of our system.

The generator contains two main elements:
• Generation strategy: Describes the process of how specific artifacts have to be generated from the base interface model. We can define two types of generation strategies (see the sketch after this list):
  o Straightforward generation: This generation is created with simple rules that match model elements with defined templates, without any kind of preprocessing activities.
  o Complex generation: Requires creating inferences from the model that are later applied to the templates. Creating these inferences is defined as an abstract generation process. For example, creating a test suite demands planning the single test cases before applying the test case templates.
• Artifact templates: Contain a mixture of text blocks and control logic that can generate a specific artifact. The control logic is usually represented by editable areas that are filled with input from the model and the generation strategy.
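To make the straightforward strategy tangible, here is a minimal C++ sketch that fills a fixed template from model values without any preprocessing. In the actual solution the templates are written in Xtend; this fragment, with its made-up placeholder syntax, only illustrates the idea.

    // Minimal sketch of straightforward generation: replace placeholders in a template
    // with values taken directly from the model (illustrative only; real templates use Xtend).
    #include <iostream>
    #include <string>

    // Replaces every occurrence of `placeholder` in `text` with `value`.
    std::string Fill(std::string text, const std::string& placeholder, const std::string& value) {
        for (std::size_t pos = text.find(placeholder); pos != std::string::npos;
             pos = text.find(placeholder, pos + value.size())) {
            text.replace(pos, placeholder.size(), value);
        }
        return text;
    }

    int main() {
        // Template with editable areas; <<...>> is a hypothetical placeholder syntax.
        const std::string wrapperTemplate =
            "class <<Interface>>Wrapper {\n"
            "public:\n"
            "    void <<Method>>();\n"
            "};\n";

        // Values taken directly from the interface model, no inference step.
        std::string generated = Fill(wrapperTemplate, "<<Interface>>", "StemDetector");
        generated = Fill(generated, "<<Method>>", "Insert");

        std::cout << generated;
        return 0;
    }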

7.7.1. The artifact generators The following specific generators have been created:
• Test generator: Responsible for creating a test suite with test cases derived from the interface model (more specifically from the interface state machine) and a test model. These test cases are code files following a testing framework structure.
• Wrapper generator: Responsible for creating a wrapper and all necessary supporting files for the wrapper. The wrapper functionality is derived from the interface model and the target technology (language) from the wrapper model.
• Documentation generator: Responsible for creating additional documentation supporting the interface model.

Generators are one of the main points in terms of extensibility and future work for our system. Adding new generators (for example, for generating implementation headers) and updating/replacing existing ones is likely, considering that we covered only a small set of interfaces. This is supported by having the generators separated according to functionality, while keeping a common base.

7.8 The transformation view Until this point, we have defined the architecture of our system from the component-based style, but there is a second view on the architecture based on the DSL approach. A DSL system has an inherent view based on the transformation steps, starting with the user defining the input until the specific artifact is generated. This view (Figure 20) shows the layered architecture of the system.

While the component-based approach gives a clear separation based on logical functionality, the layered architecture offers a grouping of the three main activities related to model-based approaches. These layers represent how a textual model (user input) is transformed into a semantic model, a class representation, and finally into specific generated artifacts. These views are referenced in the upcoming chapters (8 and 9).

Figure 19 – Generator component

Figure 20 – Layered view architecture

8.System Design

This chapter decomposes the high-level architecture introduced in Chapter 7 into a detailed component design. It elaborates on the specific elements according to the responsibilities and design decisions made for fulfilling the requirements of our project. Finally, it presents different views that describe the grammar and generation distribution, logic, and behavior.

8.1 Introduction The system design is illustrated using a mixed approach between DSL design views and common views from the “4+1” view model. For explanatory purposes, we have decided to describe the design of our solution from the perspective of the layered-architecture components (7.8) and the features described in Section 6.2.1.

Due to the technology choice (Xtext framework), two components are included that are handled in this design as black boxes. Their interactions with the remaining components will be presented, but we are not presenting their internal structure and behavior. Those components are:
• Editor (7.5): The Xtext framework includes automatic integration with the Eclipse IDE, supporting multiple features such as a text editor, project visualization, tree view, and file management. All these advantages are customizable and we have employed them for our solution’s UI. In this way, a developer can write models directly within the Eclipse IDE.
• Parser (7.6): For this part, Xtext provides the ANTLR parser. This parser is automatically created and updated according to the Xtext grammar definition. It is responsible for creating a semantic model composed of a class tree that is derived from the user model.

Figure 21 – Layered view architecture

Although we have managed these components as black boxes, they are not protected from modifications. It is possible to influence the structure and internal behavior of both of them and even replacing them is an option.

The further design and analysis focus on the modeling language and the generators, exposing their structure (object model and static organization) and logical behavior, as well as software-hardware mappings.

Finally, the last section shows an updated view of the solution considering all layers as a full process.

8.2 The metamodel layer The metamodel layer is also the presentation layer of our system. All its internal elements have a direct influence on what a user sees when interacting with the system. The two main components of this layer are:
• The editor: Explained in Section 8.1.
• The modeling language: This is the core of the user interface and it describes what the user is allowed to use for modeling. This language offers a textual model representation. As introduced in Section 7.4, this package consists of a grammar (Xtext implementation) as well as a scoping and a validation package.

8.2.1. Grammar The main definition of the language is done by defining a set of grammar files (written in the Xtext framework). The language consists of three elements:

• Lexer: Lexical analysis is the process of converting a sequence of characters into a sequence of tokens. This element defines how the language is divided up into the components of the grammar, such as keywords, operations, names, and so on.
• Grammar: This describes the actual structure of the language, which is defined as a set of rules. These rules are based on formal language theory, in which a grammar is a set of production rules for strings in a formal language. The rules declare how to form strings from the language’s alphabet that are valid according to the language’s syntax. A grammar does not describe the meaning of the strings or what can be done with them in whatever context, only their form.
• Mapping to model: It is not enough to be able to recognize a grammar; we also need to map it onto an ecore model, which is based on the Eclipse Modeling Framework (EMF). The ecore model can then be used to generate the language editor that is displayed in the Eclipse IDE. This mapping is automatically generated based on the set of rules defined in the grammar.

The following subsections will explain how our grammar has been structured in order to create the other two elements. All subsequent class diagrams represent how our textual language is represented in memory (logical views). The set of rules is included in Appendix C: The modeling language grammar.

8.2.1.1. Grammar structure and main models

The grammar structure is based on a tree description whose root is known as the model or origin rule (internally defined as FModel). The root element contains a set of children following a composition paradigm. This composition paradigm ensures that every child has at least one parent and belongs to the grammar tree. In this way we can describe the specific elements of our modeling features (for example, a return value is a child of a method, which is a child of an interface, which in turn is a child of an FModel).

Figure 22 Main elements in modeling grammar

The representation of the composition is displayed in Figure 22, which also provides a view of the main models that are included in our grammar:
• Interface model (FInterface): grammar subsection including all the required elements for modeling an interface from a high-level perspective.
• Testing model (FTesting): model that links an existing interface model with specific settings and configurations for test case generation.
• Wrapper model (FWrapper): model that maps existing interface models to specific settings for wrapper generation.
• Documentation model (FDocumentation): model that contains elements for generating documentation for specific interface models.
These four models are currently supported in a single grammar (the model grammar) and each model owns specific sections with rules based on the kind of model. This distribution gives us modularity and a decoupling of responsibilities: having specific sections supports the definition of grammar rules for specific requirements (testing, wrapping, or documenting). Finally, the interface section works as the glue that connects the specific artifact generation with the high-level model of an existing interface.

8.2.1.2. Interface grammar

The interface grammar offers a set of rules for modeling interfaces containing all the required elements within our domain (Chapter 4). This section describes a simplification of how an interface is defined in an Interface Definition Language (IDL), offering a flexible and understandable definition by using everyday development vocabulary (such as interface, method, etc.) and removing technology-oriented syntax.

As Figure 23 shows, our interface model includes the common elements that a real interface has (for an example, see Appendix D: Model examples). These elements have been derived from the analysis of real interfaces in the FEI software system and the specific needs we found during artifact generation (test cases, wrappers, and documentation).

Due to space and clarity constraints not all the relationships are shown, but the picture still represents the main idea of our grammar structure: all elements are connected to at least one parent element, and elements are reused as members of other elements. For example, the FState name is defined by an FEnumerator element that belongs to a specific enumeration – the state list.

Figure 23 Interface grammar main structure and enumerations:

From Figure 23 we can state the following assumptions:
1. An interface is composed of methods, event methods, attributes, type definitions (such as enumerations), and a state machine. Additionally, a base class can be defined for the interface. All these elements are modelled on demand (optionally) by the end user (compositional approach).
2. Any element associated with the interface tree is reusable by other elements in the tree (for example, the FEnumerator is used as input for the FState name, i.e., the states are defined as an interface type before they are used in the state machine).
3. There is a set of enumerations that capture default values associated with the programming language (C++) currently deployed for this solution. For example, FBasicTypeIdEnum holds the valid datatypes based on the available C++ datatypes.
4. A transition's properties include triggers (FMethods) and actions (FEventMethods) that are defined by previous definitions of these elements (linking existing objects). This strategy follows the principle of building complex structures from simple ones.
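To make this composition concrete, the following sketch shows one possible in-memory representation of a composed interface model, using simplified and hypothetical C++ classes whose names mirror the grammar rules; the actual classes are EMF objects generated by Xtext and differ in detail.

#include <string>
#include <vector>

// Hypothetical, simplified mirror of the grammar composition:
// every element is owned by exactly one parent (composite pattern).
struct FEnumerator { std::string name; };

struct FEnumeration {
    std::string name;
    std::vector<FEnumerator> enumerators;   // e.g. the state list
};

struct FMethod {
    std::string name;
    std::vector<std::string> inParameters;  // simplified: names only
    std::string returnType;
};

struct FTransition {
    const FEnumerator* from = nullptr;      // states reuse enumerators
    const FEnumerator* to = nullptr;
    const FMethod* trigger = nullptr;       // links to an existing method
};

struct FStateMachine {
    const FEnumerator* initialState = nullptr;
    std::vector<FTransition> transitions;
};

struct FInterface {                          // child of the FModel root
    std::string name;
    std::vector<FMethod> methods;            // all parts are optional,
    std::vector<FMethod> eventMethods;       // modelled on demand
    std::vector<FEnumeration> enumerations;
    FStateMachine stateMachine;
};

struct FModel {                              // the root (origin) rule
    std::vector<FInterface> interfaces;
};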

8.2.1.3. Testing grammar

The testing grammar is responsible for capturing all the required elements for generating test cases from an interface model. These required elements are based on the testing algorithm (Section 8.4.1), which focuses on behavioral testing. The behavior of the interface is modeled using a state machine, defined by its states and transitions. Additionally, we need specific testing settings based on the technology (C++) and testing framework (GoogleTest - {3}).

The testing model adopts the paradigm of a main test suite composed of several test cases. These test cases are complemented by setup (before) and teardown (after) configurations that describe operations performed on the system before and after the test cases. The model also allows the definition of state setups, which are specific ways to reach the defined states. Finally, all these elements are wrapped in the scenario concept, an additional way to define different configurations and create test cases for them (Appendix D: Model examples shows an example of a testing model).

Figure 24 depicts the composition of a testing model, which is linked to one (and only one) existing interface. The model consists of a set of FStateSetups, FTestTeardowns, and FTestSetups that are comprised in FTestScenario concepts. Through these concepts it is possible to define a test model in which several scenarios modify the configuration of the test (state setup, teardown, and setup). In addition, the test model includes properties for defining common elements such as the testing framework (from a predefined enumeration that currently supports GoogleTest) and the IOM library (internal FEI value). Similar to the interface model, a test model allows optional definition of the previously mentioned elements.

8.2.1.4. Wrapper grammar

The wrapper grammar (example in Appendix D) is based on the wrapping strategy (Section 8.4.2), which requires an existing interface and specific wrapper settings. These wrapper settings are based on the analysis of existing wrappers deployed at FEI and are handled on two levels:
• Common settings: related to, and including, the interface to wrap (FWrapperInterface). This is common information required for the wrapper generation, such as file location, threading strategy, etc.
• Specific settings: based on the target language of the wrapper (objects inheriting from FCommonWrapper). Because the specific settings are technology oriented (Python, C++, and COM), a polymorphic grammar strategy is implemented, with specific grammar sections per target technology.

Figure 24 Testing grammar main structure and enumerations

Figure 25 Wrapper grammar main structure and enumerations

8.2.1.5. Documentation grammar

The final component of our grammar is the documentation model. Similar to the previous ones, it contains specific documentation configurations and a link to an existing interface.

Figure 26 Documentation grammar main structure and enumerations

8.2.2. Process view

Up to this point we have analyzed the logical structure of our grammar. Now we take a look at the process of writing a model using the editor, which produces the output of the metamodel layer (the textual model).

Figure 27 shows the interactions for the creation of the textual model (the input for the parser) from the perspective of the end user, introducing the semantic mapper object. The generation of the mapper is hidden from the end user (who is modeling an interface), but it affects how the user writes a model by applying validation and scoping rules. Generating the metamodel grammar consists of changing the language grammar and generating a new semantic mapper and tree parser; this is considered a maintenance/updating activity. To understand the process, we first need to look into the logical structure of the modeling language package.

Figure 28 illustrates the basic structure of the modeling language package, which is composed of the following three elements:
• Language-related objects: defining the language rules (Grammar) and the naming convention (MBIFNameProvider) for grammar elements. Additionally, this includes the classes (MBIFStandaloneSetup, MBIFRuntimeModule, and GenerateMBIF) for setting up the configuration and the build of our modeling language.
• Scoping package: including the structure for defining our own scoping rules based on the domain of our system. It follows the Xtext internal structure for extending the scoping package with our own rules.
• Validation package: containing the validation rules for our modeling language, in a similar way to the scoping package. The Xtext internals allow us to create customized classes for defining our validation rules.

Figure 27 Sequence flow for textual model generation (sequence diagram: the interface developer launches the IDE, the editor reads the grammar and scoping rules, the developer writes the model, validations are applied, model conformance is notified, and the textual model is created)

From this structure we can see that GenerateMBIF is responsible for taking the textual set of rules of our grammar and creating the semantic mapper. This mapper contains all the information about how the metamodel grammar is constructed. The scoping and validation packages use this mapper to define supporting model logic: scoping restricts the user's options before specific grammar elements are used, while validation defines verification rules that check afterwards whether the user-defined model is correct.

Figure 29 introduces the sequence for creating the semantic mapper, which follows these steps:
• Loading the naming convention (MBIFNameProvider) and the grammar (Grammar).
• Running the configured setup in the MBIFStandaloneSetup class.
• Creating the semantic mapper, which contains a lexer, a parser, and a tree parser based on our grammar definition.

The runtime module is not part of this sequence because this is an element that plays a role when the modeling language is executed from the end user perspective. In this situation the runtime module links the scoping and validation packages with the semantic mapper.

Figure 28 Internal structure of the modeling language package (class diagram: Grammar, MBIFNameProvider, GenerateMBIF, MBIFStandaloneSetup, MBIFRuntimeModule, the scoping package with MBIFScopeProvider, and the validation package with MBIFValidator) – the validator class only shows two examples of validation rules

Figure 29 Creating the semantic mapper (sequence diagram: GenerateMBIF sets the name provider, loads the grammar, runs MBIFStandaloneSetup.doSetup(), and produces the semantic mapper)

8.2.3. Technology perspective

Figure 30 shows the different technologies used in the metamodel layer. The Eclipse IDE exposes the UI of the metamodel layer, and the IDE reads the semantic mapper in order to work as a real editor of our language. The output of the editor is the textual model written by the end user (to be parsed in the interpretation layer). For the modeling language, the figure shows the main elements (grammar, scoping, and validation) together with the technologies in which they are written.

Figure 30 Technology chosen for the metamodel layer (package diagram: the UI provided by the Eclipse editor, the modeling language package with grammar, scoping, and validation, the semantic mapper, and the resulting textual model)

8.3 The interpretation layer

As mentioned in Sections 7.6 and 7.8, the interpretation layer is implemented by the parser. The parser is a component that is automatically generated by the Xtext framework and references the elements of the metamodel layer. The parser receives the textual model written by the end user and turns it into a semantic model. A semantic model is a computable representation of the textual model in the form of the classes and attributes defined in the tree parser (originating from the Xtext grammar, Section 8.2.1).

8.4 The generation layer

The generation layer is the bottom layer, responsible for taking the semantic model and generating all the required artifacts. The following two sections explain the algorithms, strategies, and methods behind the generation of these artifacts.

8.4.1. The testing algorithm

As stated in our requirements (Section 6.2.3) and problem statement (Section 3.5), one of the main features of our system is generating complete test cases from an interface model. In essence, we want software testing in which test cases are derived, in whole or in part, from a model that describes functional and behavioral aspects of a system (an interface). This corresponds to a model-based testing (MBT) approach. An MBT approach contains a strategy/algorithm that generates test cases; this algorithm determines the set of test cases to perform on the modelled interface.

Based on the project goals and the FEI software code base, we decided to focus on behavioral testing. In this kind of testing, a behavior contract is defined for the interface and the test cases ensure that this behavior is fulfilled. For our specific situation, the behavior contract is represented by a state machine (included in the grammar, Section 8.2.1.2). In this context we use state machines with the following definitions (a minimal data sketch follows the list):
• State machine: implementation-independent specification of the dynamic behavior of the interface.
• State: description of the status of an interface that is waiting to execute a transition.
• Transition: a change from one state to another state caused by a trigger.
• Trigger: a particular interface method or event that produces a transition.
• Output/Action: the result, output, or operation that follows a transition (usually an event message).
• Guard: a logical expression stating a Boolean restriction for a transition to fire. Guards must be based on interface-model-specific properties such as variable values (also a form of state) and values obtained from related interfaces through calls.
• Dynamic behavior: transitions are atomic and the new state is reached before the transition ends. The modeller must model time-consuming transitions as in-between states (for example, the transition from inserted to retracted goes through the retracting state).
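The sketch below reads these definitions as plain data, using hypothetical C++ types: a guard is kept as an opaque Boolean callable, actions are expected event names, and the delay models the time allowed for the transition to complete.

#include <chrono>
#include <functional>
#include <string>
#include <vector>

// Hypothetical flattening of one state-machine transition of the behavior contract.
struct Transition {
    std::string currentState;            // state the interface starts in
    std::vector<std::string> triggers;   // method(s), internal or external, firing it
    std::function<bool()> guard;         // Boolean restriction that must hold to fire
    std::string targetState;             // state expected after the transition
    std::vector<std::string> actions;    // expected outputs, e.g. event messages
    std::chrono::milliseconds delay{0};  // time required to ensure completion
};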

Furthermore, we need to decide on an acceptable strategy for testing the state machine and which test cases must be created for the state machine, based on:
• All-states coverage: each state of the state machine is exercised at least once during testing, by some test case in the test suite.
• All-transitions coverage: each transition is exercised at least once, implying all triggers and actions.

The algorithm determines the test cases that will be generated according to the following rules:
• For each reachable state:
  o A test case is generated for each transition. A transition is defined by:
    - Current state: initial state for the transition.
    - Trigger(s): method(s) that trigger the transition. These methods can be internal (same interface) or external (related interfaces).
    - Guard: the condition, built from elements internal or external (other interfaces or hardware conditions) to the interface.
    - Target state: expected state in which the state machine is placed after the transition.
    - Action(s)/Output(s): action(s) in the form of interface state value changes.
    - Transition delay: the time required for ensuring transition completion.
  o For each trigger (or trigger combination, when triggers come from more than one interface) that is not included in the transitions of the state, a special test case is generated. This test case covers the expectation that the state machine remains in the current state.

All the test cases created by the algorithm are stored in a test table as part of the abstract test generation; they are turned into concrete test case implementations later on. The creation of the test table consists of the following steps:
1. The algorithm receives as input an interface model including a state machine behavior (Figure 31).
2. The algorithm creates data structures of states and triggers based on the state machine description (in the example, the Enable and Disable states and the on and off triggers).
3. For each state in the list:
   a. A row is generated in the test table covering each transition (rows 1 and 2 in Figure 33).
   b. A row is generated in the test table for each trigger that is not included in the state's transitions (rows 3 and 4; see the additional transitions in Figure 32).

The result of the algorithm is a table (Figure 33) containing four rows that will form four test cases during the concrete test generation (see the generator section, 8.4.3).
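A minimal sketch of this test-table construction follows, using hypothetical C++ types; it mirrors the two rules above (one row per modelled transition, plus one same-state row per trigger not used in that state) and omits guards, actions, and delays for brevity.

#include <algorithm>
#include <string>
#include <vector>

// Hypothetical, simplified inputs and output of the abstract test generation.
struct ModelTransition { std::string from, trigger, to; };
struct TestRow        { std::string startState, trigger, expectedState; };

std::vector<TestRow> buildTestTable(const std::vector<std::string>& states,
                                    const std::vector<std::string>& triggers,
                                    const std::vector<ModelTransition>& transitions) {
    std::vector<TestRow> table;
    for (const auto& state : states) {
        std::vector<std::string> used;
        // Rule 1: one row per modelled transition leaving this state.
        for (const auto& t : transitions) {
            if (t.from == state) {
                table.push_back({state, t.trigger, t.to});
                used.push_back(t.trigger);
            }
        }
        // Rule 2: every remaining trigger is expected to leave the state unchanged.
        for (const auto& trig : triggers) {
            if (std::find(used.begin(), used.end(), trig) == used.end())
                table.push_back({state, trig, state});
        }
    }
    return table;
}

With the two states and two triggers of the example above (Enable/Disable, on/off) this yields the four rows of Figure 33: two from the modelled transitions and two same-state rows.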

Figure 31 Interface model (defined using our grammar) including a state machine definition (visually represented on the right).

Figure 32 Additional transitions (red) that are not described in the interface model.

Figure 33 Resulting test table.

The states for which test cases are generated must be reachable; this means that the interface (or the system) has to be able to set the specific state before executing the test cases. The test case coverage is ensured by creating test cases for the states that the end user defines in the state machine of the interface model (test cases are generated for states containing transitions).

Finally, there must be an explicit mapping between the elements of the state machine (states, triggers, actions, transitions, guards) and the elements of the implementation (e.g., classes, objects, attributes, methods, events). For achieving this, the interface and testing models have to match the specific definitions (method names, interface names, state names) with the real implementation details. Concrete test generation is presented in Section 9.3.3.

8.4.2. The wrapping strategy

The wrapping strategy defines the usage of the interface methods in a target technology (Python, C++, and COM). We target the following interface aspects for wrapper generation:
• Methods: representing the access to the functionality of the implemented interface.
• Properties and specific type definitions: interfaces that contain attributes, specific enumerations, or structures.
• Threading strategy: whether the interface works with a single-threaded or multithreaded model.
• Exception handling: specific errors and exceptions produced by the original interface have to be reflected through the wrapper.
• Type conversions: for target languages that handle types differently, for example string conversions.

In this case, there is no general strategy for creating wrappers, because the wrapping strategy is defined by the chosen technology (target language). We only describe a straightforward process for writing interface and wrapper models; depending on the interface characteristics, we introduce handlers for specific purposes (threading, exceptions, etc.).

8.4.3. The generator

In the previous two sections we analyzed the process, strategy, and algorithm for defining the artifacts to generate; this section focuses on the logical, process, and development views of the automatic artifact generation implementation. To define these views, we first define the steps executed in the generator:
1. A generation request is received by the generator containing a semantic model (parsed model).
2. The generator identifies what kind of artifact is requested (testing, wrapper, or documentation).
3. A specific generator is created according to the request.
4. The specific generator validates that the interface model(s) included in the semantic model comply with the minimum requirements of the specific artifacts to generate. For example, a testing generator validates the reachability of the states in the state machine definition.
5. The specific generator executes preprocessing activities (for example, the abstract test case generation).
6. The specific generator creates the artifacts and related files.

Figure 34 Wrapper process activities (activity diagram: load the interface and wrapper models, create the wrapper, and, when the interface has exceptions or multithreading, create the exception and threading handlers)

The generator is designed to be modular:
• Distribution according to responsibilities: functionalities are distributed according to specific subjects. For example, the testing and wrapping generation are decoupled, and both are decoupled from the specific file creation.
• Dynamic object creation according to specific needs: our system does not know beforehand what kind of artifact it has to generate; therefore objects are created according to the models written by the user (composite pattern implemented through the grammar).
• Strategy pattern implemented around a common inheritance tree: an abstract generator interface contains two main methods, one to validate whether generation is possible and one to generate the artifact. Considering future extensibility, the class tree allows the insertion of new generation interfaces (for generating new artifacts) that can be integrated in the workflow by implementing the two methods.
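That extensibility point can be sketched as follows, assuming a strongly simplified C++ rendering of the common generator contract with its two methods; the actual implementation is written in Xtend on top of the Xtext generator interfaces.

#include <iostream>
#include <memory>

// Hypothetical, simplified version of the common generator contract.
class CodeGenerator {
public:
    virtual ~CodeGenerator() = default;
    virtual bool validate() const = 0;  // can this model produce the artifact?
    virtual bool generate() = 0;        // produce the artifact files
};

// A new artifact type plugs into the workflow by implementing both methods.
class GoogleTestGenerator : public CodeGenerator {
public:
    bool validate() const override {
        // e.g. check that the interface model contains a usable state machine
        return true;
    }
    bool generate() override {
        // abstract test generation (test table), then concrete file generation
        std::cout << "generating GoogleTest suite\n";
        return true;
    }
};

int main() {
    std::unique_ptr<CodeGenerator> generator = std::make_unique<GoogleTestGenerator>();
    if (generator->validate())
        generator->generate();
}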

Figure 35 Generator package structural view (class diagram: MBIFGenerator, the Factory interface with TestingFactory and WrapperFactory, the CodeGenerator interface with the abstract TestingGenerator and WrapperGenerator, the concrete GoogleTestGenerator, ATLCOMGenerator, BoostPythonGenerator, and CPlusPlusGenerator, the abstract FileTemplate with the GoogleTest, COM, and Boost.Python template families, and the abstract test generation package with StateTransitionAlg, StateTransitionTestCase, and InterfaceTriggers)

Figure 35 shows the global view of the generator, in which the main guidelines and design patterns are applied. First, the strategy pattern is used for the factories and code generators through interfaces; a similar concept is implemented in the file template abstract class through inheritance. Second, the factories are responsible for creating the specific generator according to the model defined by the end user, so a testing or wrapper generator is created dynamically. Third, file templates are implemented separately and hold the specific structures used to produce the output artifacts (concrete generation); these structures are technology oriented. Fourth, the specific generators are responsible for instantiating the specific file templates required for generation. Finally, complex generators are extended by adding additional modules to the abstract generator; for example, the testing generator is extended with a sub-package responsible for the abstract test generation.

Figure 36 Sequence diagram presenting the interaction in the generation layer for test case generation (the parser calls doGenerate on the MBIFGenerator, the TestingFactory validates the model and creates the CodeGenerator, which instantiates its file templates, validates, runs the state-transition algorithm, and generates the files)

The interactions between the main components of this structure are displayed in Figure 36. The sequence starts with the generation request from the parser to the MBIFGenerator, which executes the following steps in order to produce the expected output (artifacts):

1. The semantic model is read and validated by the TestingFactory (a model is valid if it contains all the required elements for testing/wrapper generation).
2. If the model is valid, the generator requests the creation of a specific code generator (a testing generator, currently the GoogleTest generator). The specific code generator instance creates all the required file templates (templates for the specific artifacts to create).
3. The validate method of the specific code generator is executed. This method validates the values within the model against the specific kind of artifact to generate (for example, a valid state machine for creating test cases).
4. The generate method of the specific code generator is executed. This action triggers all the generation steps; in the case of testing, it triggers the abstract test generation (creating the test table) and then the concrete test generation (generating the test files).

8.4.3.1. The factories

Factories are implemented to abstract the problem of creating objects without having to specify the exact class of the object to be created. In our situation, this applies in two specific cases:
• When creating a specific generator: the specific factories (testing and wrapper) take responsibility for reading the user model and validating whether the model contains all the necessary elements for generation. After that, a specific generator is created (GoogleTestGenerator, BoostPythonGenerator, etc.).
• When creating file templates: the specific generators require specific file templates for creating artifacts. These files are defined according to the kind of generator (test templates for a testing generator). In order to encapsulate this creation, the specific generator includes a factory method that takes responsibility for creating the file templates. In this way, file template creation is technology/strategy specific.
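The first case can be sketched as below, with hypothetical C++ types and a simple string tag standing in for the model kind; it only illustrates the idea that callers never name the concrete generator class themselves.

#include <memory>
#include <stdexcept>
#include <string>

// Hypothetical generator hierarchy (see the earlier sketch for validate/generate).
struct CodeGenerator        { virtual ~CodeGenerator() = default; };
struct GoogleTestGenerator  : CodeGenerator {};
struct BoostPythonGenerator : CodeGenerator {};

// Hypothetical factory: selects the concrete generator from the kind of model
// the user wrote, so the caller never depends on a concrete class.
std::unique_ptr<CodeGenerator> createGenerator(const std::string& modelKind) {
    if (modelKind == "testing") return std::make_unique<GoogleTestGenerator>();
    if (modelKind == "wrapper") return std::make_unique<BoostPythonGenerator>();
    throw std::invalid_argument("no generator registered for model kind: " + modelKind);
}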

8.4.3.2. File templates

File templates are text templates: a mixture of text blocks and control logic that can generate a text file. A template is a fill-in description in which the content is filled according to the models and generation strategies. Our system handles two kinds of file templates:
• Testing templates: including all the required templates for generating a test suite based on the state machine of an interface.
• Wrapper templates: specific files for writing wrappers that expose all the functionality of the original interface in a specific target language.
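A reduced C++ sketch of the fill-in idea follows, with hypothetical names; the real templates are Xtend template expressions, but the principle is the same: fixed text blocks plus holes that are filled from the model when the file is generated.

#include <sstream>
#include <string>
#include <vector>

// Hypothetical file template: fixed text plus holes filled from the model.
class TestSuiteTemplate {
public:
    explicit TestSuiteTemplate(std::string interfaceName)
        : interfaceName_(std::move(interfaceName)) {}

    std::string generateFile(const std::vector<std::string>& testCaseNames) const {
        std::ostringstream out;
        out << "#include <gtest/gtest.h>\n\n";                  // fixed block
        out << "// Test suite for " << interfaceName_ << "\n";  // filled from the model
        for (const auto& name : testCaseNames)                   // repeated block
            out << "TEST(" << interfaceName_ << "Suite, " << name << ") {}\n";
        return out.str();
    }

private:
    std::string interfaceName_;
};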

Figure 37 describes the three categories of information used to fill in the file templates.

Generated artifacts have to be real code files that can be compiled and executed in the FEI build environment/project. That is why the structure of the templates is defined by the specific deployed technology.

8.5 Layer interactions

In the previous sections we described the specific design details of the three main layers of our system and stated the decisions taken based on our requirements and technology choices. The communication between layers is defined by the transformations of the original textual (user-made) model into the specific artifacts that are generated (test cases, wrappers, or documentation).

Figure 37 File template sections

Figure 38 shows an example sequence of a test case generation involving the three layers. In this sequence we show what elements are created (on the fly) and how the layers communicate with each other: the metamodel layer (which creates the textual model), the interpretation layer (which generates the semantic model), and finally the generation layer (which creates the specific artifacts). The detailed sequence reads as follows:
1. The end user creates a model using the language grammar through the editor.
2. The editor connects with the MBIFValidator (through the semantic mapper plugin contained in the parser) and receives the conformance of the model with the metamodel rules.
3. The end user requests the generation of specific artifacts and the editor forwards this request to the parser.
4. The parser takes care of creating the semantic model and calling the MBIFGenerator.
5. The MBIFGenerator consults the factories (in this figure, only the testing factory is covered) and obtains the specific code generator (testing generator - GoogleTestGenerator).
6. The specific code generator validates the artifact generation and executes any internal sub-process (abstract test generation).
7. The code generator fills the created file templates and the final output artifacts (test cases) are created.

Figure 38 Full sequence for generating test cases including the three layers (sequence diagram: the interface developer writes and validates the model in the editor, requests artifacts, the parser builds the semantic model and calls the MBIFGenerator, the TestingFactory validates the model and creates the GoogleTestGenerator, which runs the state-transition algorithm and generates the test suite)

9. Implementation

The previous two chapters discussed the system architecture and design. This chapter elaborates on the realization of the design, considering the modeling language and the generators, including the chosen technologies and how the output of our solution is integrated within the FEI software system.

9.1 Introduction

As part of the project, a prototype has been delivered as the implementation. This prototype follows the architecture and design stated in the previous chapters (7 and 8). During the development of this prototype, design and implementation decisions have been made to implement the functional requirements (Chapter 6) and to adapt to the technology decisions (Xtext framework).

The prototype is developed independently from the existing FEI tooling, meaning that our solution stands functionally outside the FEI software infrastructure. This decision is based on the chosen technology, which is currently not supported in the Visual Studio (VS) environment in which the FEI code base is defined. As a consequence, the artifacts generated by our solution have to be moved manually to the FEI software infrastructure. From the implementation point of view we can consider the following three activities:
• Modeling: the main interface between the end user and our solution. The end user is able to model FEI interfaces and specific details for the code generation, both using a domain-specific language (DSL) based on the Xtext framework.
• Generating artifacts: implemented as a background process executed by the Xtext/Xtend framework. This process takes the models from the user and generates the specific artifacts (test cases and wrappers) specified in the models. The generated artifacts, source code files, can be used directly in the designated projects.
• Executing artifacts: the process in which the generated artifacts are compiled and executed under the FEI software system. This process is currently out of the scope of our solution; we decided to keep this task as an end-user task due to the associated complexity (compiling and executing applications within the FEI software system).

The following sections explain the details and decisions regarding these three processes.

9.2 Modeling

After the initial modeling approach decision (Section 7.2), the next decision was to implement a text-based interface modelling language with properties to support all the aspects we needed from the interfaces. Firstly, the decision to use a text-based language was driven by the clarity, readability, and understandability of a textual interface representation compared with a graphical representation; secondly, the interface aspects we wanted to model were based on high-level interface elements and artifact-oriented elements (see Table 20); and finally, the words selected for the language were based on the vocabulary that a software interface developer uses on a daily basis.

The representation of all these elements (table 20) was defined with words and expressions from our textual language. The definition of these words was based on three main aspects:

• The interface definition: the interfaces within the FEI software system are defined in files known as IDL (Interface Definition Language) [14] files. These files are written under the COM technology; they are complex to read without a COM background and they include basic information regarding interface methods and COM-oriented flags. Other sources are interface headers written in the programming language of the interface implementation. Both sources were used to define common words and common elements to model interfaces in our text-based language.
• Other modeling tools: among the modeling technology available on the market, there are frameworks that model interfaces and behavior (such as Dezyne). We analyzed these existing languages and decided to build our own language implementing all the features we needed. The basics of our text-based language are based on the Franca IDL (an existing DSL for defining software interfaces).
• Specific requirements from the chosen technology: the deployed technologies demand specific values for their usage. For example, the GoogleTest framework requires a Setup method and a Teardown method that are executed for each test case. In order to allow the end user to employ these features, specific language elements have been created (columns 2 and 3 of Table 20).

Table 20 – Elements to model through the DSL language
• Interface: enumerations; datatypes (structs and type definitions); properties (attributes); methods, including in parameters, return values, and exceptions; events (methods and interfaces); state machine (behavior representation); custom methods (helpers and customized functions).
• Specific for testing: state machine¹; state setup configuration; testing technology; test setups; test teardowns; test scenarios; interface specifics².
• Specific for wrapping: wrapping technology; threading model; method specifics (adding information to the method for exception handling, locking, and multithreading); interface specifics².

¹ The state machine is used in the testing algorithm for generating specific test cases, but the state machine is considered the high-level view of the interface (behavior), which is why it is modeled inside the interface.
² Libraries, headers, namespaces, IDL, etc.

9.2.1. Core modelling language aspects

There are two aspects to emphasize regarding our modeling language:
• The first one is the compositional approach (composite pattern): there is no fixed template for modeling interfaces. This practice produces dynamic models (interface representations) with different elements according to the end user's needs; the end user models what he needs from the interface instead of modeling a set of static elements for every interface.
• The second aspect is the separation of the language into subsections according to model type (interface, testing, wrapper, and documentation). This aspect was previously introduced in the grammar design (Section 8.2.1.1) and is implemented in the textual language by using specific semantic constructions related to the specific model section. Nevertheless, the core model (interface model) is accessible to the artifact models (testing, wrapping, and documentation).

9.2.2. State machine implementation

Our interface modeling language includes functionality for defining interface behavior using a state machine. This state machine is used for test case generation, but it has the potential to be extended for other artifact generation (for example, generating header and implementation files for the interface).

The state machine construction in the modelling language uses previously modelled interface elements (Section 8.4.1), such as methods, event methods, and state-list enumeration values; additionally, the state machine allows modelling relationships between different interfaces (for example, by modeling methods from other interfaces as external triggers for state transitions).

A final feature of the state machine language is using the custom method concept (Section 9.2.3. ) for defining hardware-related events (Requirement: MID-01-05) and error-related states.

Figure 39 shows a state machine for an FEI interface (IStemDetector). At the top we have the definitions of the states (state list enumeration) and the event interface, followed by the methods of the interface that are used as transition triggers. At the bottom, the state machine is represented by defining the two steady states (retracted and inserted) and by modeling the transitions that start from those states. In this way, we have a global view of how the interface elements (methods, event methods, and enumerations) are used for the definition of the interface behavior.

9.2.3. The custom method implementation

The custom methods feature allows abstracting complex definitions into a single method that is easily integrated within the artifact generation. In the generated output, custom methods are represented as supporting/helper functions that have to be written manually by the end user. Custom methods are useful in the following situations:
• Comprising a sequence of operations required for setting up an interface configuration. For example, getting an interface from the IOM server requires traversing the interface tree and using specific interface methods.
• Abstracting operations leading to specific hardware configurations (using simulators). Simulators depend on specific implementations for setting up the hardware's status, and these implementations are encapsulated by defining a custom method.
• Any complex set of operations that involves specific implementation procedures. Our interface models are made to be easily understandable, and there are situations in which defining a behavior becomes complex due to the number of external interfaces involved. In this situation, we recommend defining a custom method that summarizes/abstracts the behavior and letting the end user manually implement the specific operations. Other complex operations are error-related (stimulating the server to produce errors) and timing (asynchronous operations) situations.

Figure 40 shows the definition of custom methods created for the CMOS protector interface. The first pair (goToHTOK and goToHTNotOK) is intended for setting up the High-Tension module, while the second pair (goToOpticsSafe and goToOpticsUnsafe) is focused on setting specific values in the Optics interfaces. All these methods influence the behavior of the CMOS by triggering state transitions.

Figure 39 Example of the implementation of an interface state machine

9.2.4. Visualization of the modeling feature

As mentioned in the architecture (Section 7.5), our modeling feature has an editor for interacting with the end user. Since the modeling language is text based, any text editor can be used to define models, but our tool provides a language-specific editor which makes modelling easier and faster. This editor is implemented with Xtext using the Eclipse IDE framework. Using our editor, the user models interfaces and in parallel receives feedback from the editor. Feedback is presented through dialog boxes showing possible options and by highlighting mistakes as well as missing elements.

Figure 40 A fragment of the CMOS protector interface model in which custom methods are defined and then used in the state machine transitions

Figure 41 Example of Eclipse IDE editor showing feedback to the end user

The connection between our DSL and the editor is achieved through Eclipse plugins. The Xtext framework generates the grammar plugin (Section 7.4), which contains the grammar definition, validation rules, and scoping options. This plugin is read when Eclipse is executed and links our modeling files with our DSL and editor by looking at the file extension. We have defined “.mbif” (Model-Based Interface Framework) as the file extension for our project.

9.3 Artifact generation

The implementation of the artifact generation covers the concrete generation of code files and how they are filled according to the models (interface, testing, and wrapper models). This implementation uses file templates (Section 8.4.3.2) that are specifically prepared to generate code based on the specific artifact technologies. This artifact generation is written using the Xtend language.

9.3.1. Artifact technologies

The following list presents the technologies used for artifact generation:
• GoogleTest: unit testing library for the C++ programming language, currently deployed for interface testing at FEI. Test cases are generated in C++ using this framework.
• Boost.Python: library that enables seamless interoperability between C++ and the Python programming language. Python wrappers are generated using this library.
• COM-ATL: set of template-based C++ classes intended to simplify the programming of Component Object Model (COM) objects. COM wrapper files are generated using these classes for wrapping the C++ interface implementation.

The implementation of these technologies is achieved by creating a single generator per artifact and per technology. These generators inherit from the code generator interface (Figure 35) and implement the strategy design pattern for validating and generating specific artifacts. Additionally, the generators implement the factory design pattern for defining which file templates to create according to the specific technologies. Through this modular approach, extensibility of the generation is achieved: new artifacts can be generated by creating new generators that implement these two design patterns.

9.3.2. The generation gap pattern

We have decided to implement the generation gap pattern [12] for dealing with generated and manually written code. As stated in the file templates section (8.4.3.2), our solution automatically generates code files, but not all of these files are ready to be compiled because some of them require manually written code. A common generation workflow deletes and replaces old files with new versions produced by the generation engine; this is a problem when manually written code is present. In order to avoid deleting manually written code, we implemented the generation gap pattern, which separates the manually written code from the generated files.

The implementation of this pattern has been achieved by generating the manually written file templates only once (avoiding regeneration of these files). These files are expected to contain structural elements of the generated code file (for example method implementation) that are not expected to be modified. In the case of modification, the user is responsible for managing this. He can rename (or delete) the manually written file templates to allow file regeneration based on the model updates, and merge back manually written code into the new files.

Figure 42 Generation gap pattern explained by applying inheritance.
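A small C++ sketch of this pattern follows, with hypothetical class names: the generated base class is overwritten on every regeneration, while the handwritten derived class is produced only once and then owned by the user.

// --- Generated file: overwritten on every regeneration, never edited by hand ---
class GeneratedInterfaceHandlerBase {
public:
    virtual ~GeneratedInterfaceHandlerBase() = default;
    void runSetup() {
        connectToServer();      // generated plumbing
        setUpInterface();       // hook implemented in the handwritten part
    }
protected:
    virtual void setUpInterface() = 0;
private:
    void connectToServer() { /* generated plumbing */ }
};

// --- Handwritten file: generated once, afterwards maintained by the end user ---
class InterfaceHandler : public GeneratedInterfaceHandlerBase {
protected:
    void setUpInterface() override {
        // manual code: traverse the interface tree, reach the SUT, etc.
    }
};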

9.3.3. Testing implementation

The testing implementation is the process of creating the concrete instances (artifacts) containing the test cases that are constructed by the test-generation algorithm (Section 8.4.1).

Figure 43 below shows the main activities during test case generation. The generation consists of two main activities: the abstract test case generation and the concrete test generation. The abstract test generation produces a generic test table (Section 8.4.1), while the concrete test generation produces code specific to the testing technology. This approach allows adding new test generation strategies independently of the test framework specifics, and supporting new testing frameworks by creating new concrete implementations for them.

Figure 43 The process of creating test cases for a specific testing framework (abstract test generation: the state-transition algorithm turns the interface and test models into a test table; concrete GoogleTest generation: file templates are created and updated according to the models and the test table, producing the test case files)

The implementation for a specific testing framework relies on the creation of file templates for creating the test case instances. These file templates are categorized into three types:
• Generic/auxiliary files: files that are created with mostly the same content every time; their purpose is setting global configurations for compiling the test cases in the Visual Studio projects. These files are expected to be ready for compilation.
• Test suite files: files that contain the translation of the test table into specific test cases. They have a common structure that is dynamically filled according to the specific interface and testing model. As with the previous type, changes are not expected from the end user.
• Supporting/helper files: files that include special sections expected to be written manually by the end user. They are related to specific implementations such as reaching the interface and setting a specific state from the state machine. Customization of the test suite is achieved through these files.

In order to present the implementation of the test suite and supporting/helper files, it is necessary to introduce the testing feature we have implemented: value-parameterized testing.

9.3.3.1. Value-parameterized testing

We made the decision to implement a value-parameterized testing strategy for our generated test cases. Value-parameterized testing allows testing code with different parameters without writing multiple copies of the same test. For our specific situation, these parameters are the ones calculated in our test table (Section 8.4.1): for each row in the test table a parameterized test case is generated (data-driven testing).

Implementing value-parameterized testing requires the following elements (a minimal sketch follows the list):
• A test class holding the template test case instance and the specific definitions for creating every test case. The template test case is the general structure that drives the execution of a test case by reading each element of a test table row and executing the sequence of actions/operations/methods to perform the test case and validate the result.
• An interface class for handling the SUT description and mapping the usage of the modeled interfaces onto their real implementation. These classes also contain the helpers and supporting functions that have to be written by the user.
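The sketch below illustrates this strategy with GoogleTest; the test-table row type, its fields, the class names, and the sample values are hypothetical, while TestWithParam, TEST_P, and INSTANTIATE_TEST_CASE_P are the framework's standard value-parameterized facilities (newer GoogleTest releases rename the macro to INSTANTIATE_TEST_SUITE_P).

#include <gtest/gtest.h>
#include <string>

// Hypothetical test-table row: one tuple per generated test case.
struct TestRow {
    std::string startState;
    std::string trigger;
    std::string expectedState;
};

class StateTransitionTest : public ::testing::TestWithParam<TestRow> {
protected:
    // In the generated suite these would delegate to the interface handler.
    void SetInitialState(const std::string&) {}
    void ExecuteTrigger(const std::string&) {}
    std::string CurrentState() const { return GetParam().expectedState; }  // stub
};

// The template test case: the same body runs once per table row.
TEST_P(StateTransitionTest, FollowsBehaviorContract) {
    const TestRow& row = GetParam();
    SetInitialState(row.startState);
    ExecuteTrigger(row.trigger);
    EXPECT_EQ(row.expectedState, CurrentState());
}

// Instantiation: these literal rows play the role of the generated test table.
INSTANTIATE_TEST_CASE_P(StemDetector, StateTransitionTest,
    ::testing::Values(
        TestRow{"retracted", "insert",  "inserted"},
        TestRow{"inserted",  "retract", "retracted"}));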

Figure 44 The logic behind the parameterized testing strategy. In this case specific definitions are provided by our test table in the form of methods/triggers to execute.

9.3.3.2. GoogleTest suite structure

Figure 45 below shows an example of a generated artifact (a GoogleTest suite) in which we emphasize the following sections (dynamically filled according to the information contained in the models):
• Class definition and instance initialization (rows 15-24): this section defines the number and order of elements that every instance reads from the test table and initializes variables for holding the values. Additionally, trigger enumerations are created to link interface method executions.
• Trigger executions (rows 11 and 29): the interface methods and custom methods are executed in this section, which maps the trigger enumerations to specific interface methods. Having a specific section for these executions allows defining specific behaviors for the method execution (including timeouts, HResults, and exceptions).
• Guard executions (rows 12 and 30): the guards are enforced through this section. We recommend defining guards through custom methods to allow manually written code for defining the guard.
• Event handling (rows 13 and 32-37): this section is responsible for capturing and checking the events produced during transitions by comparing them with the ones defined in the interface models.
• Test template (rows 40-45): the basic structure that is executed for every test instance. This inner class takes care of calling the methods for setting up the initial state, meeting the guard, reaching the final state, and verifying the occurrence of all the expected events.
• Test case instantiation (rows 46-51): the translation of the test table into test case instance descriptions. Every row in this section becomes a test case during execution (this transformation is performed by the GoogleTest framework).

9.3.3.3. Interface handler

The interface handler is a class created to decouple the actions related to connecting the SUT with our test suite. Decoupling these actions enables the use of the generation gap pattern (Section 9.3.2). Figure 46 displays the implementation file of this handler containing all the methods that are expected to be filled in manually:
• Setup: defining how to reach the interface by traversing the interface tree and performing any specific interface initialization.
• Teardown: defining what to do with the interface after executing a test case; this can include setting the interface to a specific state or reassigning interface values.
• State setup: containing the specific sequence of operations for reaching a state machine state. Only steady states should be defined.
• Custom methods: including the special/abstract/helper/supporting methods and functions that describe complex interface behavior. This is an optional section that is present only if custom methods have been modelled.
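A hypothetical C++ skeleton of such a handler is sketched below; the class and method names are illustrative, and the empty bodies mark the places the end user is expected to fill in by hand.

#include <string>

// Hypothetical interface handler for a modelled interface.
// Generated once; the bodies are completed by hand (generation gap pattern).
class StemDetectorHandler {
public:
    void Setup() {
        // manual: traverse the interface tree via the IOM server and
        // perform any interface-specific initialization
    }
    void Teardown() {
        // manual: restore a known state or reassign interface values
    }
    void SetState(const std::string& steadyState) {
        // manual: sequence of operations that brings the SUT into the
        // requested steady state of the state machine
        (void)steadyState;
    }
    // Optional custom methods, present only if they were modelled:
    void goToHTOK() {
        // manual: abstracts the High-Tension setup sequence
    }
};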

Figure 45 Example of the skeleton of a test suite implementation.

Figure 46 Example of the skeleton of an interface handler.

9.3.3.4. Test case execution flow

We have established that our test cases follow the value-parameterized approach, which means that all of them follow a similar execution workflow based on the test template class. The workflow is described by the following steps (and represented graphically in Figure 47):

1. The GoogleTest engine creates a test case instance (test class) for each tuple defined in the test table.
2. Value initialization is performed with the specific values of the test table row and an interface handler is created.
3. Test setup is performed through the test class and the interface handler.
4. The initial state is set by executing the state setup method of the interface handler.
5. The initial state is validated through the testing framework.
6. The transition is executed through the execute-transition method:
   a. The guard is executed.
   b. The triggers are executed.
   c. The events are set.
   d. The transition delay is performed.
7. The final state is validated through the testing framework.
8. Event occurrence is checked.
9. Test teardown is performed through the test class and the interface handler.

Figure 47 Test case execution workflow – IStemDetector example .

9.3.4. Wrapper implementation

The wrapper implementation is centered on the concrete wrapper generation for each wrapping technology. Every technology has its own method to wrap functionality and behavior, which makes the wrapper generation a straightforward, technology-focused process. However, there are common aspects and features that are essential when wrapping an interface. For this project we focused on the wrapping implementation of the following concepts:
• Interface methods
• Datatype conversions
• Single inheritance
• Threading models
• Exception handling
• Event handling

Based on these concepts, the general file templates implemented per technology are categorized as:
• The wrapper file: containing the deployment of the wrapper technology that maps the interface implementation (displayed in Figure 48).
• Supporting files: including generic wrapping for specific implementation concepts (such as exception or event handling). These files are used as libraries by the wrapper file.

In contrast with the testing implementation, wrapper artifacts are expected to be completely generated, i.e., the user does not have to write anything manually. Wrappers are implemented according to the instructions of the specific technology, matching the specific elements of a modeled interface. This matching is based on FEI wrapper legacy code. For the specific instructions, consult the deployed technologies (Boost.Python {4} and COM-ATL {5}).
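As an illustration of such a fully generated wrapper file, a minimal Boost.Python sketch follows; the wrapped class and its methods are hypothetical, while BOOST_PYTHON_MODULE, class_<> and .def() are standard Boost.Python constructs, with roughly one .def() emitted per modelled interface method.

#include <boost/python.hpp>
#include <string>

// Hypothetical C++ interface implementation to be exposed to Python.
class StemDetector {
public:
    void insert() {}
    void retract() {}
    std::string state() const { return "retracted"; }
};

// Generated wrapper module: exposes the interface methods to Python.
BOOST_PYTHON_MODULE(stem_detector) {
    using namespace boost::python;
    class_<StemDetector>("StemDetector")
        .def("insert",  &StemDetector::insert)
        .def("retract", &StemDetector::retract)
        .def("state",   &StemDetector::state);
}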

9.4 Executing artifacts

Artifact execution is essential for validating the quality, functionality, and utility of the generated artifacts. This activity is not included in the scope of this project, but an execution process has been established to allow the end user to deploy the generated artifacts. Before going into the execution process, we state the following facts about the artifacts:
• Generated artifacts are code files containing C++ code with a structure based on file templates (Section 8.4.3.2).
• These code files access existing interfaces from the FEI software system (for testing or wrapping).
• Code files are mostly prepared for immediate compilation, excluding the following parts:
  o Helpers and custom methods and functions (Section 9.2.3): they need to be written manually in the generated code file.
  o Interface-oriented usage: any additional and specific interface implementation (such as usage and initialization) that is not included in the model.
• Compilation requires adding the files (currently moved manually, but automation is possible) to a project under the FEI software configuration in order to allow the compiler to access the software code base.
• Executing the code files requires a simulated environment (microscope) with the FEI server running.

Figure 48 Generated artifacts for the boost.python wrapper generation

The following steps present the process for executing generated artifacts (the final execution can vary depending on the type of artifact):
1. Creating a software project and linking the required software libraries according to the specific modelled interface. Within FEI, Microsoft Visual Studio is used.
2. Adding/copying the artifacts generated by our solution into the software project and linking them to the compilation process.
3. Manually filling in the supporting/helper methods and the specific implementation details in the generated artifacts.
4. Compiling the project and generating an executable file. This step is responsible for linking the content of our generated files with the existing FEI software system.
5. Running the executable file in a real or simulated environment. For the scope of the project, simulated environments (through virtual machines) were employed for validating and testing the artifacts.

Figure 49 The global view of the process for artifact execution

During execution of the generated tests, inconsistencies (for example, modeled interface names that differ from the implemented ones) may be discovered; these can point either to model inconsistencies or to implementation errors. Updating the model with the execution feedback requires going back to our solution, changing the models, regenerating the artifacts, and repeating the execution process.

10. Verification & Validation

This chapter presents the process of verifying and validating the results of the implementation of the MBIF solution. We compare the implemented features against the original requirements and then validate the produced artifacts (test cases and wrappers) within the FEI software system. Finally, the chapter concludes with an analysis of capabilities and limitations that have been found during the validation of the solution.

10.1 Introduction Software verification and validation (V&V) processes [15] determine whether the developed products of a given activity conform to the requirements of that activity and whether the software satisfies its intended use and user needs. Verification evaluates a system or component to determine whether the system satisfies the conditions imposed at the start of a specific phase. Validation evaluates a system or component during or at the end of the development process to determine whether it satisfies specified requirements (Section 3.5 and Chapter 6).

For the scope of the project, the solution has been tested in simulated environments by using a virtual microscope running in a virtual machine (VM). These tests have been performed on interfaces from the FEI software system, using the representative set that was used for the creation of the interface definition modelling language. Additional testing has been performed with new interfaces by prospective end users.

10.2 Testing with FEI interfaces
Testing the solution within FEI software (using implemented interfaces through the IOM server) has been performed during development to validate models and generated artifacts. Testing consists of the following steps:
1. Selecting a representative interface
2. Getting knowledge about the selected interface
3. Defining the aspects to model and the artifacts to generate from the interface
4. Modeling the interface by using our interface language
5. Creating a model for artifact generation (testing or wrapper model)
6. Generating artifacts from the models
7. Compiling and executing the artifacts in the simulated environment
8. Validating the execution results with the expected interface behavior
9. Validating the generated artifact coverage according to the model specifications and interface elements

The testing has been performed with an incremental approach. The testing process ranges from interfaces covering simple concepts (isolated interfaces) to complex interfaces with internal and/or external dependencies.

The following table shows the tested interfaces for the test case generation and the requirements (Chapter 6) that have been covered with them (Features 1, 2, and 4).

Table 21 – Testing the solution for generating test cases

Interface | Tested behavior | Related requirements
IStemDetector | Insert and retract. Triggers coming from the same interface and event handling. | MID-01-01 to MID-01-04, REQ-MID-02, REQ-TCG-01, REQ-TCG-02, REQ-PID-01, REQ-PID-03
IFieldEmissionSourceAlignments | FEG on, warm start, cold start, standby, ready, and operate. Triggers coming from two interfaces (HT interface), event handling, guards, and custom methods. | REQ-MID-01, REQ-MID-02, REQ-TCG-01, REQ-PID-01, REQ-PID-03
FluScreenCam | Insert and retract. Triggers coming from the same interface. | MID-01-01 to MID-01-03, REQ-MID-02, REQ-TCG-01, REQ-TCG-02, REQ-PID-01, REQ-PID-03
CmosProtector | Blank and unblank beam. Triggers coming from multiple interfaces (FluScreen, HT, Optics, Cmos, and Falcon camera), event handling, guards, and custom methods. | REQ-MID-01, REQ-MID-02, REQ-TCG-01, REQ-PID-01, REQ-PID-03

Aspects that were covered during testing are:
• State and transition coverage is currently validated manually on the modelled state machine. The coverage is defined by the state machine definition; depending on the state machine specification, test cases are generated (Section 8.4.1. ). Test case generation produced a minimum of one test case per user-modelled transition and complementary test cases for missing transitions (inferred transitions based on the test algorithm). 100% transition coverage is reached by design. Table 22 shows the coverage results for the modelled interfaces.
• Implementation code coverage is not currently measured. Test cases are generated according to the modelled state machine, which does not necessarily cover all the interface elements (such as methods). Therefore, code coverage results may produce inaccurate values.
• Integrating generated code with a valid FEI software project.
• Compiling the code with FEI libraries and executing the output in a simulated environment (using a VM).
• Physical separation of statically generated code and manually written code areas (different files were created).

Figure 50 shows a capture of the execution in the simulated environment.

Table 22 – State machine coverage results

Interface | Modelled states | Modelled transitions | No. generated test cases | Transitions covered | States covered | +Extra transitions
IStemDetector | 2 | 4 | 8 | 4 | 2 | 4
IStemDetector | 2 | 2 | 4 | 2 | 2 | 2
IStemDetector | 4 | 9 | 16 | 9 | 4 | 7
IFieldEmissionSource | 4 | 11 | *46 | 11 | 4 | 35
IFieldEmissionSource | 4 | 13 | *44 | 13 | 4 | 31
ICmosProtector | 2 | 5 | *199 | 5 | 2 | 194
ISource | 2 | 2 | 4 | 2 | 2 | 2
IStage | 4 | 7 | 17 | 7 | 4 | 13
IStage | 3 | 5 | 12 | 5 | 3 | 7

* Custom methods add an additional dimension to the test case table, producing a proportional growth in the number of generated test cases. This is based on the unpredictability of the manually written code contained in a custom method, which can be associated with errors, conditions, etc.
+ Extra transitions are the transitions that are not modelled but can be inferred based on the unused triggers of every steady state. These transitions start and end in the same state.
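The following sketch illustrates, with assumed data structures rather than the actual MBIF implementation, how such extra self-loop transitions can be inferred from the unused triggers of each steady state.

```cpp
// Illustrative sketch: for every steady state, each trigger not used by a
// modeled outgoing transition becomes a transition that starts and ends in
// that same state (a self-loop). Not the actual MBIF code.
#include <set>
#include <string>
#include <vector>

struct Transition { std::string from, trigger, to; };

std::vector<Transition> InferExtraTransitions(
        const std::vector<std::string>& states,
        const std::vector<std::string>& triggers,
        const std::vector<Transition>& modeled) {
    std::vector<Transition> extra;
    for (const auto& state : states) {
        std::set<std::string> used;
        for (const auto& t : modeled)
            if (t.from == state) used.insert(t.trigger);
        for (const auto& trig : triggers)
            if (!used.count(trig))
                extra.push_back({state, trig, state});  // inferred self-loop
    }
    return extra;
}
```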

Figure 50 Test suite execution for the IStemDetector interface

The following table shows the tested interfaces for the wrapper generation including requirements that have been covered with them (Features 1, 3, and 4).

Table 23 – Testing the solution for generating wrappers

Interface | Technology | Features wrapped | Related requirements
IDetector | Boost.Python | Methods and enumerations | MID-01-01, MID-01-02, REQ-MID-03, REQ-IWG-01, REQ-IWG-02, REQ-PID-01, REQ-PID-03
IEdsDetector, ISuperXDetector | Boost.Python | Methods, inheritance, and enumerations | MID-01-01, MID-01-02, REQ-MID-03, REQ-IWG-01, REQ-IWG-02, REQ-PID-01, REQ-PID-03
IInstrument | Boost.Python | Methods, inheritance, and enumerations | MID-01-01, MID-01-02, REQ-MID-03, REQ-IWG-01, REQ-IWG-02, REQ-PID-01, REQ-PID-03
ScanPosition Patterns | Boost.Python | Datatypes and structs | MID-01-02, REQ-MID-03, REQ-IWG-01, REQ-IWG-02, REQ-PID-01, REQ-PID-03
IEditable | COM-ATL | Methods, events, threading, and inheritance | MID-01-01, MID-01-02, MID-01-04, REQ-MID-01, REQ-MID-03, REQ-IWG-01, REQ-IWG-02, REQ-PID-01, REQ-PID-03
IEditablePolygon | COM-ATL | Methods, events, threading, and inheritance | MID-01-01, MID-01-02, MID-01-04, REQ-MID-03, REQ-IWG-01, REQ-IWG-02, REQ-PID-01, REQ-PID-03
IEditableText | COM-ATL | Methods, events, and inheritance | MID-01-01, MID-01-02, MID-01-04, REQ-MID-03, REQ-IWG-01, REQ-IWG-02, REQ-PID-01, REQ-PID-03
ITask | Boost.Python | Methods, events, callbacks, and inheritance | MID-01-01, MID-01-02, MID-01-04, REQ-MID-03, REQ-IWG-01, REQ-IWG-02, REQ-PID-01, REQ-PID-03
ITaskBase | Boost.Python | Methods, events, callbacks, exceptions, threading, and inheritance | MID-01-01, REQ-MID-03, REQ-IWG-01, REQ-IWG-02, REQ-PID-01, REQ-PID-03
ITaskException | Boost.Python | Methods, exceptions, threading, and inheritance | MID-01-01, MID-01-02, REQ-MID-01, REQ-MID-03, REQ-IWG-01, REQ-IWG-02, REQ-PID-01, REQ-PID-03

Through this testing process we have validated the conformance of our solution with the collected functional requirements (Chapter 6). Stakeholders (technical group – 2.1 ) have been involved as information sources to obtain interface behavior knowledge and to validate models and generated artifacts.

10.3 Trials with end users
The second phase of testing was centered on testing with prospective end users. During this process, interface developers used our solution to model FEI interfaces and to generate artifacts (test cases). The following procedure was conducted:
1. Creation and distribution of a usage manual covering installation, modeling editor usage, and artifact generation/compilation/execution.
2. Distribution of our solution as an executable Eclipse instance (Section 11.1 ).
3. Pair programming strategy for performing the testing (end user and MBIF developer). The testing followed an incremental approach, from modeling and generating simple elements to covering advanced topics (such as event handling and custom methods).
4. Evaluation and feedback session.

Table 24 – Testing the solution with end users

Interface | Test description | Features used
ISource | Modeling the ISource interface and writing a state machine to describe the on/off behavior and its relation with the Field Emission Gun. | Interface modeling including a state machine with single triggers and transition delays; test modeling; test case generation.
IStage | Modeling the IStage interface and writing a state machine to describe the stage state behavior (disable, ready, moving, wobbling, and not ready). | Interface modeling including a state machine with single triggers, transition delays, events, and custom methods; test modeling; test case generation.

These trials with real users have exposed our solution to a real usage environment in which we have been able to evaluate functional and non-functional requirements. On the functional side, end users succeeded in creating test cases automatically from the model of an existing FEI interface. The following table summarizes the results from the perspective of the used features:

Table 25 – Global results of the trials with end users

Feature | Description | Challenges | Possible resolutions
1 – Modeling | The initial modeling language is complete enough to model the descriptions of the tested interfaces, making it possible to write a meaningful model. | Interface knowledge: legacy interface behavior was not clear at the beginning, causing inconsistencies that were fixed after a second analysis of the interface. | Looking into the FEI documentation and interface definition, and consulting interface experts before modeling interfaces.
2 – Test case generation | The state machine and test case model provided a concise description for generating test cases. | Difficulty in reading and detecting when a test case fails. Test case execution does not provide enough feedback to track what part of the test case fails. | Defining a logging strategy that improves our solution's feedback and allows tracing test failures.
3 – Providing interface documentation | The created models include enough description of the interface to provide a general understanding of the interface functionality and behavior. | Standardizing and unifying words and concepts with the general ones used at FEI. | Exposing our modeling language to end users, collecting feedback, and updating the language accordingly.

10.3.1. Quality requirements
During the trials with end users we have performed an analysis based on the quality requirements of our project (Section 6.3 ). We have applied the quality requirements to the main activities performed during the trials and the results are presented in Table 26 (result scales are taken from the corresponding quality table in Section 6.3 ). For evaluating the quality requirements we have focused on the main activities related to the usage of our solution:
• Modeling the interface and creating the artifact model (testing or wrapper): covering the learning of the modeling language (grammar) and the process to get a valid model (no errors) that generates the expected artifacts.
• Compiling the artifacts (test cases and wrappers): taking the generated artifacts and placing them in a valid FEI software project for compilation. This includes all the required adaptations and manual code implementations to be added to the artifacts in order to make the compilation possible.
• Executing the artifacts: moving the executable created through the compilation process into a simulated environment (a VM with a microscope server running) and executing the artifact. During this process we considered execution errors that required going back to compiling or modeling activities.

Table 26 – Testing the solution: results for the quality requirements

Modeling the interface and creating the artifact model
• Extensibility – Coupling = Easy: separation between elements was clearly represented by different models and sections. The user was able to identify these sections at a fast pace.
• Extensibility – Inserting/Replacing = Reasonable: even when the sections were separated, inserting and replacing elements required a bigger effort because identifying valid elements depends on language knowledge.
• Ease of use – Usage time ≈ 2 hrs: this time includes learning the language, creating the first model version, and updating the model to use complex language options. Considering that the trial was the first time the users employed the tool, and the incremental approach, we can conclude we have a good learning time for the modeling language.
• Ease of use – Intuitiveness = Medium: language words and concepts were easy to identify and associate with the tool, but their usage and impact in the model raised some questions (standardizing language and domain concepts is required).
• Flexibility – Available options = Reasonable: during the trials, the modeling options were enough to represent and automate the expected interface behavior. However, we identified features that users would like to have available in next versions of the tool (such as defining value ranges for methods with parameters).
• Effectiveness – Modifiability = Medium: modifying the models was successfully achieved once the language concepts were clear for the user. This clarification required matching the user's concepts with the ones implemented within the tool.

Compiling the artifacts
• Extensibility – Coupling = Medium: the generated artifact structure made it easy to detect test sections, but the execution workflow was not so clear. The parameterized testing approach was relatively new for the end user, and understanding the test case workflow was therefore hard at the beginning.
• Extensibility – Inserting/Replacing = Reasonable: inserting and replacing generated code parts was easy, but ensuring the execution of these extensions got complicated due to the previous fact (coupling).
• Ease of use – Usage time ≈ 3 hrs: compilation is a process that should take a minimum amount of time, but during the trials we faced external factors that extended this activity, for example, updating the FEI code base and getting concrete knowledge of how to access the interface. Excluding these factors, compilation time was acceptable.
• Ease of use – Intuitiveness = Medium: once the end user understood the execution workflow, compiling the artifacts was a straightforward process.
• Flexibility – Available code sections = Reasonable: integrating changes into the generated artifact was possible with the code distribution.
• Effectiveness – Modifiability (minimum number of modifications) = Medium: once the interface model was complete, the generated artifacts required minimal modification (such as getting the interface). However, with a simple model we faced a situation in which a bigger number of changes was required.

Executing the artifacts
• Extensibility – Coupling = N/A: execution is a single process.
• Extensibility – Inserting/Replacing = N/A: execution is a single process.
• Ease of use – Usage time ≈ 2 hrs.
• Ease of use – Intuitiveness = Low: executing the artifacts is straightforward, but tracking test case errors (and the specific call/method failing) was not so clear. These errors were related to our understanding of the interface and not to the generated test cases.
• Flexibility – Available options (execution logging) = Insufficient: we were able to track the execution and the main steps (such as setting up the initial state, executing triggers, and validating the final state), but not the low-level steps (such as intermediate calls).
• Effectiveness – Modifiability = N/A: executable files do not require modifications.

These results are taken from the usage perspective; the maintenance perspective (updating the tool) has not been covered in the end user testing.

10.4 Capabilities and limitations This section presents reflections and observations of the capabilities and limitations of our solution.

10.4.1. State machine transition complexity
Interfaces in the microscope can be related to other interfaces. These relationships influence the interface behavior by triggering state changes through events and methods. State machine transitions triggered by states of a number of interfaces can become complex. These related interfaces create a matrix of possible trigger situations, possibly causing a state-space explosion. Furthermore, in certain situations describing external triggers as part of an interface behavior tends to be implementation specific instead of general behavior.

Our solution tackles this issue by adding methods that abstract complex operations (custom methods, Section 9.2.3. ). However, the decision of when to use a custom method relies on the end user's judgment, and this action modifies the interface model (either by modifying guards or transition triggers).

10.4.2. Asynchronous calls
As we stated in Section 4.4, IOM interfaces have synchronous and asynchronous methods. Asynchronous methods need to be handled in the generated test cases to prevent non-deterministic test results or testing the wrong things.

Transition delays and timeouts have been selected as the asynchronous control method. This approach waits a modeled time before checking for the expected state, but it cannot detect whether the action triggered by the call ends properly. If it does not, the test case will fail and raise an exception in the subsequent calls. A remark on this modeling choice is to consider the delay as a kind of interface requirement: an asynchronous method must finish within this time.
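A minimal sketch of this approach is shown below, assuming a hypothetical asynchronous field-emission interface; the names, delay values, and GoogleTest usage illustrate the idea rather than the actual generated code.

```cpp
// Sketch: an asynchronous trigger is handled by waiting the modeled transition
// delay and then checking the expected state. All names are hypothetical.
#include <atomic>
#include <chrono>
#include <thread>
#include <gtest/gtest.h>

enum class FegState { Off, Operate };

// Hypothetical stand-in for an interface with an asynchronous method.
struct IFieldEmissionSource {
    void FegOnAsync() {                       // returns immediately, completes later
        worker = std::thread([this] {
            std::this_thread::sleep_for(std::chrono::milliseconds(200));
            state = FegState::Operate;
        });
    }
    FegState GetState() const { return state.load(); }
    ~IFieldEmissionSource() { if (worker.joinable()) worker.join(); }

    std::thread worker;
    std::atomic<FegState> state{FegState::Off};
};

TEST(FieldEmissionSourceTest, FegOnReachesOperateWithinModeledDelay) {
    IFieldEmissionSource source;
    source.FegOnAsync();                                          // asynchronous trigger
    std::this_thread::sleep_for(std::chrono::milliseconds(500));  // modeled transition delay
    // The delay acts as an interface requirement: the asynchronous action must
    // have completed by now, otherwise the expected-state check fails.
    EXPECT_EQ(source.GetState(), FegState::Operate);
}
```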

10.4.3. Time performance of produced artifacts
Test cases (code) are generated by our modelling approach. The test case execution can happen either on a real microscope or in a simulated environment. This execution is fast (milliseconds) if the tested interface is relatively isolated, but that is not the case for interfaces with hardware or external interface dependencies; then test execution may take longer (even minutes). This has a direct effect on the execution time of the generated test suite.

The time performance issue is not caused by the generated test cases, but by the lack of a simulation environment that is not time consuming.

End users can still improve test time by adjusting the timing settings of the existing simulators, for example, through interfaces offering a specific testing version in which a specific delay can be defined.

10.4.4. Maturity of testing
We decided to create our own algorithm to generate test cases, and this algorithm was tested within the project scope (Section 8.4.1. ). This implies that the algorithm has been tested with a small number of interfaces, resulting in a relatively low maturity level. We generated 100% transition coverage based on the modeled state transition definition. This approach has some limitations for large state models, but large state models are an indication that refactoring of the design might be desirable.

Exposing the algorithm to new interfaces will provide valuable feedback for updating our solution and increasing the robustness and maturity of our algorithm to generate test-cases.

10.4.5. Interface implementation differences
One of the goals of the project is to automate repetitive tasks. Each interface model transformation (either as a wrapper or a test case) consists of a set of common activities, from connecting to the interface to executing calls. Being common activities, automation is an option, but only if the interfaces share a common structure in terms of definition and implementation. However, due to hardware or software dependencies, interface implementation and usage vary from interface to interface. Achieving full automation requires a full analysis of the entire existing FEI code base, which is not possible within the timespan of this project.

The adopted mitigation strategy for these differences is to avoid specific definitions for interfaces. This means that the user has the responsibility to implement these specific interface definitions manually in the generated artifacts.

A final statement about this limitation is that new differences can show up as the solution is employed for new interfaces. In that situation, the solution has to be updated to cover these differences. Our solution is expected to grow within the FEI software code base.

10.4.6. Microscope configurations
Testing the produced artifacts (test cases and wrappers) requires employing a real or simulated environment. The environment relies on certain configurations that depend completely on the specific setup of the environment (such as microscope type and calibrations). Having an explicit definition of these configurations as part of the solution is not possible due to the number of configurations and the innate complexity of the microscope.

Setting the microscope configuration (either manually or through a script) has to be performed as a first step. This is an activity that has to be performed by the end user.

10.4.7. Model limitations
One of the recurrent topics during this project was the importance of modeling an interface and the specification level of that model. On one side, simplicity, defined as the ease of reading and understanding a model, is crucial; on the other side, automating as much as possible tends to require more and more specific elements and values from the model (described in Section 9.2 ). Achieving a balance between simplicity and specification has defined a set of elements that has been tested on a small number of interfaces. Even though these interfaces are a representative sample, the possibility remains that some interfaces cannot be modeled with the current language.

For those interfaces for which the modelling language limitations do not allow a complete and valid description, human intervention will be required in the produced artifacts by employing the custom method feature (Section 9.2.3. ).

10.4.8. Logging and error detection
As we established in the section on executing artifacts (Section 9.4 ), our solution is independent of the FEI code base, and any execution error is not detected until the generated artifact is executed within the FEI software system. When this execution happens, the artifacts connect with the interface implementations and errors can be produced due to misuse of the interfaces within the generated artifact. Tracking these errors back to their origin has not been covered in the current version of the solution.

Currently this error identification is restricted to the initial and final states. Future development of the system requires including more detailed logging of the generated artifact execution.

11. Deployment

This chapter introduces the deployment view of our implemented solution (prototype), considering the specific implementation technologies. We elaborate on a deployment plan covering the main use cases for deploying our solution within the FEI software development process.

11.1 Deployment view Based on our architecture and technology decisions (Sections 7.2 and 7.3 ), our entire system is built following the Xtext framework. The framework encapsulates the compilation and creates all the required dependencies for creating a runnable solution.

Building the solution starts with packaging our entire solution (including grammar definition, validations, and generation strategies) into an Eclipse plugin. Then, the plugin is connected with the corresponding instances/applications in order to create the editor, the parser, and the generator. Finally, these applications are contained within the Eclipse IDE that runs on a Java Virtual Machine (JVM). In this way, our solution is executed on the end user's personal computer and the solution becomes portable (any PC running Eclipse with a JVM can use our solution). This deployment perspective is shown in Figure 51 and is centered on end users using our solution.

[Figure 51 diagram: the interface developer PC («device») runs a Java Virtual Machine («executionEnvironment») hosting the Eclipse IDE, which contains the MBIF Plugin («Eclipse plugin») composed of the Editor (Eclipse editor), the Parser (ANTLR application), and the Generator (Java application).]

Figure 51 Deployment view of our solution prototype from the usage perspective.

11.2 Deployment plan within FEI
An initial requirement (REQ-TCG-04 – Section 6.2.3. ) establishes that our solution has to be easily deployed and integrated within the FEI software workflow, build, and smoke test infrastructure. From this statement, we have identified the following deployment use cases:
• Deploying for usage: our solution has to be introduced to FEI software developers and incorporated into the regular software development workflow.
• Updating and providing maintenance: as we mentioned before (Section 10.4 ), our solution is a prototype that demonstrates the potential of model-based techniques in the automation field. However, this is not a final solution; it has to grow within the FEI software code base and this growth involves updating the solution in the future.
• Integrating within the FEI build server: FEI deploys a continuous integration approach with a build server infrastructure based on nightly and smoke builds. Providing a strategy to incorporate our solution within this infrastructure is required.

Due to the complexity and size of the FEI software code base, the MBIF project is a long-term project, and therefore deployment has to be realized gradually over time. The following sections present proposals and alternatives for achieving these use cases.

11.2.1. Deploying for usage
As we described in Section 11.1, our solution is deployed within an Eclipse IDE instance. This means that our solution can be distributed directly onto the developer's PC and its usage depends only on the installation of the JVM. Considering that our solution is the first Eclipse-based application within the FEI software, the most practical approach is distributing an Eclipse IDE instance containing our solution. With this approach we get the following advantages:
• Easy deployment by executing the Eclipse IDE
• No configuration required (besides the JVM installation)
• An interface-modelling-specific editor (within the Eclipse IDE)
• Simplified updates by distributing new versions of the Eclipse instance

This deployment alternative is feasible because our solution stands independently of the FEI software code base. However, this alternative produces two main challenges:
• Adding a new development environment for the end user. The Eclipse IDE is added as a new tool to be included in the current software development process. However, based on end user feedback and the trial results, this looks like a minor problem; the development tools already include additional tools (such as Dezyne) next to the main Visual Studio.
• The Eclipse IDE is different from the current FEI software IDE, Microsoft Visual Studio. Compiling generated artifacts requires placing those artifacts in a valid FEI software project in its Visual Studio environment (and this is not supported in the Eclipse IDE). A solution for this challenge is configuring the generated files to be placed automatically in a valid Visual Studio project. This action would speed up the transition between IDEs.

11.2.2. Updating and providing maintenance
Updating and maintaining the solution requires being familiar with the Xtext/Xtend languages and the Eclipse plugin system. However, as specialized DSL tools, these technologies simplify grammar definition and code generation, allowing a small learning curve. In terms of our solution, continuous MBIF development is required to reach a more mature and stable solution that covers a bigger part of the FEI software code base. For this goal, we recommend that FEI assign the MBIF development to a software team which takes care of extending the tool and improving the current features. Currently, Xtext/Xtend is becoming a standard DSL framework in academia and industry.

11.2.3. Integrating within FEI build server
MBIF can be integrated with the FEI build server through an executable (Java JAR file) containing the parser and the generator. This executable receives files (interface and artifact models) as input and then generates the corresponding artifacts (test cases and wrappers). Through this option, we avoid the usage of the editor (end user interaction), making it possible to integrate our solution into an automatic build server. This use case only applies when manual code is not needed.

Integrating the solution into the build server requires the definition of a compilation step in which the models are sent to the executable; the executable then generates the specific artifacts; the artifacts are then copied to their destination projects; and finally, the build process continues as normal. These steps can be configured within the FEI build server technology (which is based on Jenkins) as a prebuild step.

Additionally, this integration requires defining a configuration of where to place files. For this configuration we have to consider where models are located, where generated artifacts have to be placed, and where manually written definitions have to be read.

12. Conclusions

This chapter focuses on the conclusions over this project, elaborating on the achieved results and the added value to the stakeholders. It also presents future work and alternatives for improving the developed solution.

12.1 Results & lessons learned
Model-based code generation is a set of ideas that aims for faster, better, and cheaper artifact generation through the automation of repetitive code that is derivable from high-level models. Testing and integrating these ideas into the FEI software development process was the focus of this project (specifically for test case and wrapper generation). Proving the validity, utility, and added value of these ideas through a prototype was the specific objective.

We designed and implemented a modeling language (a Domain Specific Language) inspired by FEI's code base and existing modeling tools; this modeling language includes enough constructs for creating FEI interface models. Models contain common programming words and a high-level description of the interface that facilitates the understanding of the interface functionality and behavior (the models work as interface documentation). Our interface models include method definitions, events, attributes, and a state machine for specifying the expected behavior of the interface. Currently, the state machine only supports methods (from the same or another interface, and custom methods) as transition triggers and state change events as the actions of a transition. As we moved into testing, we identified interfaces that require complex triggers and guard definitions (using methods and conditions from other interfaces, such as the CmosProtector); we decided to capture this complexity in special functions (custom methods) and then use these functions in the state machine definition. Having this abstraction simplified and improved the readability of the interface models. The abstraction, together with the modelled interface elements, was then translated into specific artifacts (code files). The mapping between interface model elements and the FEI software implementation has been solved by using the same names from the FEI software in the models. The advantage of having a specific model for the interfaces is that the model is free of technology details and it can be used for creating different specific artifacts.

Modeling specific generation models (test cases and wrappers) is also supported in our modeling language. These models contain specific values for the generation of GoogleTest, Boost.Python, and COM-ATL code files (C++ files). The generation is defined independently according to the technology and it combines at least one interface model with one specific generation model. These generation strategies were separated by design and organized in such a way that adding new technologies/generators is a straightforward process (by implementing the factory and strategy design patterns).
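The sketch below illustrates this factory-plus-strategy organization. It is written in C++ purely for illustration (the real generators are implemented in Xtend), and all class names are hypothetical rather than taken from the MBIF implementation.

```cpp
// Illustrative factory + strategy organization for code generators: each
// target technology provides its own generation strategy, and a factory
// selects it by name, so adding a new technology only requires a new strategy.
#include <memory>
#include <stdexcept>
#include <string>

struct InterfaceModel { std::string name; /* ... modeled elements ... */ };

class GenerationStrategy {                       // strategy interface
public:
    virtual ~GenerationStrategy() = default;
    virtual std::string Generate(const InterfaceModel& model) const = 0;
};

class GoogleTestGenerator : public GenerationStrategy {
public:
    std::string Generate(const InterfaceModel& m) const override {
        return "// GoogleTest test suite for " + m.name + "\n";
    }
};

class BoostPythonGenerator : public GenerationStrategy {
public:
    std::string Generate(const InterfaceModel& m) const override {
        return "// Boost.Python wrapper for " + m.name + "\n";
    }
};

class GeneratorFactory {                         // factory selecting a strategy
public:
    static std::unique_ptr<GenerationStrategy> Create(const std::string& technology) {
        if (technology == "googletest")   return std::make_unique<GoogleTestGenerator>();
        if (technology == "boost.python") return std::make_unique<BoostPythonGenerator>();
        throw std::runtime_error("unknown generation technology: " + technology);
    }
};
```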

Defining a strategy for generating test cases was the main challenge because it involved creating abstract and concrete test cases. Having these two concepts, we were able to separate specific implementation (GoogleTest) from abstract test cases. We initially decided to create test cases based on behavior and to use state machines for the definition of this behavior. Choosing how to test a state machine is a difficult process because of concepts such as non-determinism and unsteady states. However, we decided to focus on deterministic situations and forced non-determinism into determinism (for example, handling asynchronous methods with timeouts). These decisions made it possible to propose a deterministic state-transition-based algorithm for the creation of the abstract test cases, and then to use the algorithm output for the concrete test case generation. Our design allows defining new concrete test case generators for supporting other testing frameworks currently used at FEI.

Modeling existing FEI interfaces, combined with our code generators, allowed generating test cases that implement the modelled state transitions in a sequential test case workflow. This workflow validates reaching the initial and final states as well as events/actions. Our algorithm also generated additional transitions (composed of triggers that were not defined for a specific state) and test cases for validating those transitions. However, modeling a full state machine should include modeling these transitions, and then the algorithm would not need to generate them. In both situations, the generated test cases achieved full coverage of the modelled state machine transitions (including derived transitions that are not modeled). Some test cases (the FEG and Cmos protector interfaces) ran very slowly because of fixed time delays (Windows registry values) in the simulated hardware. This situation currently has to be addressed by modifying the values manually. Additionally, these test cases were able to detect inconsistencies between the interface model and the interface implementation, such as completion times, exceptions for some specific values, and unexpected events. Most of these inconsistencies were not initially addressed or known, but we were able to identify them and fix the model or incorporate them into it. Therefore, our models contained a close representation of the implemented interface behavior.

Generating wrappers was a straightforward process focused on the specific technology (Boost.Python and COM-ATL). Each wrapping library/framework/technology has its own methods for wrapping specific features. Linking the interface elements to the specific technology and taking care of functional behavior were the focus of the wrapper generation. The wrapper generation is fully based on the existing wrapping strategies used at FEI. Considering this, validating the generated wrappers was performed by executing unit test cases that were originally designed for an existing wrapper. A future challenge based on our current results could be improving wrapper generation by also generating test cases for the wrappers (combining the two developed generators).

From the technology and implementation perspective, the Xtext/Xtend framework had a small learning curve and allowed developing a prototype in parallel with the FEI code base research. Defining a grammar within the framework is based on the Xtext language, which works as a sequential rule-definition grammar. The biggest identified challenge regarding grammar definition/update is related to the error feedback provided by the tool at compilation time. Duplications and inconsistencies are clearly detected, but rule prioritization and unreachable rules are only vaguely pointed out (rules are referenced by numbers and you need to visually relate these numbers to the grammar definition). An advantage of the framework is the parsing between a model written under the grammar definition and the code generator (by automatically creating a class tree in which the rule objects become Java class instances). Writing a code generator uses Xtend (simplified Java) and it allows deploying all the Java features and programming concepts (even Java code can be linked directly). However, some advanced concepts (such as name identifiers and configuration providers) require a more in-depth understanding of the Xtend libraries used by the framework.

With all these features implemented during the lifetime of the project, our MBIF architecture and design has shown properties such as extensibility (adding new code generators), modularity (separating language and code generators), and ease of use (end user trial results).

12.2 Future work
During the development of this project, we identified future possibilities and improvements as well as features that were not implemented due to various constraints such as time, technology, complexity, and added value. The following list captures these concerns:

• Modeling language usability: the Xtext/Xtend framework is a powerful tool that facilitates the usage of DSLs by providing feedback on the user models through restrictions and validations. This feedback is visible in the Eclipse editor and is fully customizable through scoping, validation, and formatting rules. Due to time constraints, we have only implemented a set of initial rules (for example, only one initial state is valid per state machine) that validate the feedback feature, but there is room for defining more rules. Even though this feature does not provide functional results, its usage can improve the intuitiveness and usability of our MBIF, resulting in a learning-time reduction.

• Customized test case generation: generation is based on a state machine model which covers state-transition behavior. However, interfaces may have functionality that does not fit this state machine approach (for example, validation rules for in-parameters). The testing strategy could be extended (or a new one added) by implementing a feature for creating user-defined test cases (REQ-TCG-03 in Section 6.2.3. ).

• Extending COM wrapper generation: due to time constraints, the COM wrapper generation feature has been implemented as an initial proof of concept. Through this implementation we have validated the extensibility of our solution for adding new generators for specific technologies. However, the feature is not mature enough to generate production-quality artifacts. These artifacts contain the general elements for defining a COM wrapper, but they require additional adaptations that have to be performed by the end user. Completing the generator requires a full analysis of the FEI COM wrappers as well as updating the modeling language and the specific code generator.

• Parameters and value ranges: methods with input parameters cannot be defined as transition triggers in this version of the MBIF. Due to time and complexity constraints, we provided the custom methods (Section 9.2.3. ) as an alternative for abstracting this kind of method. As future work, a proper strategy for handling these methods, including the possibility of defining value ranges, has to be designed and implemented. A proposal is the definition of a range element in the method grammar in which the end user models the set of values for the input parameters. This set can be used afterwards in the test algorithm, creating a dynamic method behavior (treating the method as a group of methods in which every method holds a different value from the user-defined range); a sketch of this idea is given after this list.

• Logging and messaging for artifact execution: as mentioned in Section 10.4.8. , our MBIF solution currently deploys basic logging and error messaging based on the specific implementation technology (either GoogleTest, Boost.Python, or COM-ATL). Detecting errors outside the technology specifics is a hard task because of the lack of feedback in the artifact implementations. As an automation technology it is essential to provide meaningful messages that determine what makes an artifact fail (especially to define whether the error is a model misinterpretation or an implementation problem). Implementing a logging strategy can be achieved by modifying the specific file templates and including logging messages for important operations.

• Code coverage: currently, code coverage has not been included within the generated artifacts because the end user models do not necessarily comply with the full implementation of FEI software interfaces. A model representing only a portion of the interface leads to an inaccurate coverage result when using code coverage tools. An alternative is to design a strategy in which customized code coverage analysis allows specifying the coverage behavior [18].

• Updating the manually written code: we introduced in Section 9.3.2. the concept of generated files that have to be filled by the end user (manually written files). This concept works well when the files are generated and filled from a complete interface model, but issues arise when changes occur in the model and these files are not regenerated. An alternative addressing this situation is to implement an improved generation gap pattern, the protected regions [19]. Protected regions are a special implementation of the gap pattern in which all files are regenerated, but records are kept of specific parts that are designed to be manually written by the end user. These parts are identified through labels that avoid overwriting/deleting them when the file is regenerated; a sketch of this idea is also given after this list.

• Asynchronicity: asynchronous calls (interface methods) are currently handled like synchronous calls with a timeout. This means that the asynchronous call is executed and then the system waits for a specific time (defined by the end user within the method) as a way to guarantee the method completion. However, this is not an optimal management of asynchronous calls because their completion depends on other factors such as hardware or software events. Moreover, errors may also occur during execution. Future work is to define a completion strategy that allows modeling completion conditions. These conditions should include elements such as calling internal or external methods and event occurrences.

• Integrating more FEI microscope software concepts: the MBIF generates artifacts for execution under a simulated microscope with FEI's software platform. Due to the microscope complexity, this project was developed with a specific software and microscope configuration, deferring the responsibility of configuration to the end user (setting microscope preconditions and initial values). An extension for the MBIF would be to increase the scope and allow the creation of artifacts for specific microscope configurations by extending the modeling and artifact generation. However, a feasibility analysis is necessary to define what parts of these configurations can be automated.

• Extending the modeling language: the modeling language was built using a representative set of FEI interfaces. Additional language extensions may be needed to model new FEI interfaces.
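Regarding the parameters and value ranges item above, a modeled value range could, for instance, be translated into parameterized test cases. The following GoogleTest sketch uses a hypothetical SetSpotSize method and range purely as an illustration of how one generated test case per range value could be obtained.

```cpp
// Sketch (hypothetical names) of turning a user-modeled value range for a
// method parameter into parameterized test cases: the method is treated as a
// group of calls where each call holds one value from the range.
#include <gtest/gtest.h>

// Hypothetical stand-in for an interface method with an input parameter.
bool SetSpotSize(int size) { return size >= 1 && size <= 11; }

class SpotSizeRangeTest : public ::testing::TestWithParam<int> {};

TEST_P(SpotSizeRangeTest, AcceptsValuesFromModeledRange) {
    EXPECT_TRUE(SetSpotSize(GetParam()));
}

// The modeled range [1, 11] becomes one generated test case per value.
INSTANTIATE_TEST_SUITE_P(ModeledRange, SpotSizeRangeTest,
                         ::testing::Range(1, 12));   // upper bound is exclusive
```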
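Regarding the protected regions item above, the following fragment sketches how labeled markers could preserve manually written code when a file is regenerated. The marker syntax and the function shown are hypothetical, not an existing MBIF feature.

```cpp
// Illustrative protected-region idea: the whole file is regenerated, but code
// between the labeled markers is preserved across regenerations.
#include <string>

std::string GetInterfaceEndpoint() {
    // PROTECTED REGION ID(GetInterfaceEndpoint.body) BEGIN
    // Manually written code kept by the generator when the file is regenerated:
    return "tcp://localhost:9001";
    // PROTECTED REGION END
}
```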

13. Project Management

This chapter discusses information related to the project management part of the project. We elaborate on the way of working and how the project was divided into concrete activities (work-breakdown structure). Finally, a summary of how the project has been executed is presented.

13.1 Way of working
The project is a research and development project, which means that we performed research on new technologies while we developed a prototype for demonstrating our results. We decided to keep both activities (research and implementation) in parallel in order to track results and define specific requirements. This was achieved by following an iterative-incremental development combined with an agile methodology. Within this approach, we held meetings every week (within the company) in which work was evaluated, requirements were refined, and future progress was set.

Figure 52 The incremental-iterative approach.

Iterations usually lasted one month, and the output of these iterations was an updated version of the prototype implementing the iteration requirements and an analysis of risky requirements. These requirements were evaluated and updated/redefined/discarded for later iterations.

13.2 Work-Breakdown Structure
The work-breakdown structure (WBS) is presented in Figure 53. The WBS is the way in which the project has been decomposed into small packages with deliverable-oriented components. The project is divided into four main activities as follows:
• FEI domain research: research and study of the current FEI code base including interface, testing, and wrapping implementations.

• Technology research: research into and implementation of the technologies used for our project.
• Design and implementation: usage of the results of the previous two activities in order to build a software solution for our project. There are two main components to describe:
  o Language grammar definition: writing the rules that comprise the implemented modeling language. This includes interface, testing, wrapper, and documentation grammar rules.
  o Generator implementation: creating the necessary implementation for generating the expected artifacts (test cases and wrappers). This includes using our previously defined grammar for automatic file generation.
• Documentation: writing the expected documentation deliverables.

Figure 53 Work-breakdown structure of the project.

13.3 Project Planning
This project was executed in nine months, from January 2016 until the end of September 2016. The nine-month (39-week) duration consists of 5 weeks spent on university events and vacation and 34 full working weeks at FEI Company. Figure 54 shows a general planning overview containing the chronological sequence and duration of the high-level activities carried out during this project. From this plan, we distinguish three main project phases that match the four main activities of the WBS:
• Research: initial phase covering the first two months of the project (January-February) in which domain (FEI code base) and technology (Xtext/Xtend) research were the main activities.
• Design and implementation: intermediate phase covering five and a half months (March until mid-August). The incremental-iterative development approach was applied and the prototype was developed.

• Closure: final phase covering one and a half months at the end of the project period (mid-August to September) in which the project was concluded and the final documentation was delivered.

Figure 54 Global view of the 9-month period planning and high-level activities.

As the planning figure shows, activities were split and overlapped during the intermediate phase; this reflects the iteration lifecycles in which more requirements were implemented into the prototype (iterative-incremental approach).

13.4 Risk Analysis
During the project, we identified, monitored, and updated risks that could compromise the project. For each risk a mitigation/contingency strategy was defined and applied to overcome the risk. The identified risks were evaluated and discussed in meetings with company and university supervisors. As a result, the risks and project requirements were updated. The following table presents risks encountered during this project and how they were mitigated and/or addressed:

Table 27 – Risk table
Impact values affect schedule and quality. 1: Limited. 2: Low. 3: Moderate. 4: High. 5: Extreme
Probability values. 1: Not likely. 2: Low. 3: Moderate. 4: High. 5: Expected

No. | Description | Type | Mitigation/Contingency | Impact (1-5) | Probability (1-5)
1 | Not all the requirements can be met given the time constraints. | Process | The requirements have been prioritized. Plan alternative scenarios defining new paths and milestones. Negotiate requirements with stakeholders. | 5 | 4
2 | As a research project, unknown complexity for defining a project scope. | Process | Negotiate with stakeholders a general scope and keep them aware of risks and progress in order to update the scope as soon as possible. | 4 | 3
3 | Complex subjects regarding model-based testing (for example, non-determinism, multiple parameter values, and test coverage). | Technical | Identify them as soon as possible and prepare proposals of how to approach the topics. Negotiate with stakeholders and define priorities and importance. | 4 | 3
4 | Complexity in the FEI software code base (legacy code). | Technical | Research and communicate with experts/developers inside the company when documentation is not enough. | 3 | 5
5 | Time performance for test case execution cannot be achieved given the current software code base. | Technical | Discuss the situation with the stakeholders and reason about why this is happening. Negotiate requirements if necessary. | 3 | 4
6 | Polluting the modeling language with too many technology-related words/semantics. | Technical | Encapsulate specific technology-related elements in grammar subsections. Evaluate if they fit as model elements or can be handled as manually written or static elements. | 3 | 3
7 | Experts of particular areas have their own interests/requirements for the solution. | Process | Negotiate new requirements through the project supervisor. | 3 | 3
8 | Incompatibility between existing software and generated artifacts (test cases and wrappers). | Technical | Look for alternative approaches and mention the risk to the stakeholders. Identify the origin of the incompatibility and redesign generated artifacts if possible. | 3 | 3
9 | Availability of experts inside the company. | Process | Reach the experts through email and plan meetings using the Outlook meeting manager. Inform the company mentor and supervisor. | 2 | 2
10 | Lack of support for the chosen implementation technologies (Xtext and Xtend). | Technical | Find experts within the university through the OOTI network. Identify forums and experts through the technology community. Focus on functionality requirements instead of polishing technologies. | 1 | 3

13.5 Project execution
During this project, requirements specification, technology research, and design and implementation have been executed simultaneously. This approach aims to validate technology research through implementation as soon as possible. This implementation produced a prototype that demonstrated our results and provided input for feature refinements. The prototype evolved over time, becoming the final MBIF solution.

In accordance with the OOTI project guidelines, Project Steering Group (PSG) meetings were held in which the project progress was presented by the PDEng candidate. The main topics of these meetings were the project status together with a demonstration (live demo or through slides) of the prototype. The goal of these meetings was to inform, discuss, and get feedback from the PSG stakeholders (both company and university). These feedback sessions worked as a method of validating and keeping track of the project direction.

The iterative-incremental approach and the prototype development allowed stakeholders to have a clear view of the project progress, technology features and potential, as well as requirement implementation. Additionally, it helped to detect risks and to make decisions to address them.

14. Project Retrospective

This chapter presents a reflection of the project experiences described from the candidate’s perspective. It also includes a section in which the design opportunities set in the beginning of the project are revisited and evaluated.

14.1 Reflection

My journey during the past nine months has been challenging but rewarding; it has enhanced my professional and personal skills through the development of a real industry project. As with every project, it presented challenges from technical and managerial perspectives that led me to apply good practices and to identify strengths and points of improvement.

Before the project started, I had two main concerns: the first one related to the employment of model-based techniques and the second one regarding the project domain (microscopy). Even though I had slight knowledge of the first one, the fact is that I was mostly a newcomer to both of them. Two project strengths helped me to reach a sufficient knowledge level for my first concern: the flexibility in planning and the study of alternative solutions. Planning had a dynamic character that was adapted according to results, people availability, and priority changes, but always aiming at the initial high-level goals. Studying alternative solutions (technologies) was performed at the beginning of the project and involved understanding how those solutions work and how they could be applied to our project; thus I was able to collect knowledge about the model-based domain by analyzing tools that implement that domain. Regarding the second concern, a good practice I had during the project was developing my own code files before coding the generators, and this development allowed me to understand the legacy code and the FEI software platform.

Studying alternative solutions led me to technology decision-making, in which I researched tools that perform similar activities and then analyzed the feasibility of implementing them as part of our project/solution. Due to the complexity and availability (proprietary nature) of the tools, the research was focused on literature, and a good practice was to visualize this research in comparison tables. The tables allowed a direct comparison of the tools regarding important aspects related to the project. Making an initial decision, starting the implementation based on it, and then revisiting the decision after a period of time (approximately one month) is another good practice of the project. A point of improvement to this process could be seeking information from tool experts; people could have been identified and involved in the decision-making process. Although this changed towards the end of the decision-making process, it could have been done earlier.

Researching within the FEI legacy code represented one of my weakest points throughout the project because the time spent on this activity could have been shortened by applying a different approach. All the knowledge was available in the form of documents, code files, and the FEI software engineers. I initially adopted a "do it yourself" approach in which I got knowledge from documents and code files and tried to understand everything from these two sources, when in reality approaching the software engineers proved to be the most efficient and fastest strategy. Every person at FEI was more than helpful and provided the needed information and feedback when I requested it.

Implementing the solution through an iterative-incremental approach and prototype development presented several advantages. First, it helped a lot in discovering technology challenges/limitations. Second, the prototype granted a visual demonstration of the research results and requirement implementation, which helped with the involvement of the main stakeholders (company and university). Third, stakeholders were aware of how the solution was evolving and of the identified challenges, which allowed taking action either by changing priorities or by redefining requirements and scope (managing expectations).

Managing the project was a minor activity considering the practical approach of the project. Weekly meetings were organized to inform the company mentor and supervisor of the project status; during these meetings requirements were updated/added and feedback regarding the project execution (from the company perspective) was discussed. In a similar way, monthly meetings (PSG) were organized to which university stakeholders were invited and provided feedback (from the university perspective).

Overall, I have had a great experience with this project. Exercising my software development skills (technical), cooperating with technical and managerial people, and managing their expectations (soft skills) have represented personal and professional growth for me. Additionally, employing new technologies and techniques has opened a new and promising outlook for the future.

14.2 Design opportunities revisited

The following points summarize the results of implementing the design opportunities we introduced in Section 3.6:

Elegance/Usability
The interface models created using the modeling language are easier to read and understand than obtaining the interface information from the original sources (IDLs and code implementation). The modeling language uses common programming concepts and words for the creation of the interface models, allowing end users to become familiar with the language in a short time (according to user trials). End users can model only the elements they require, which is supported by the dynamic modeling strategy (even though the language contains a finite number of options, an end user is not forced to use all of them). Finally, separation of concerns provides a clear division between generated code that is ready to be compiled and manually written parts that the user has to fill in.

Methodical approach
We designed and implemented a strategy for creating artifacts following valid state machine testing approaches (state- and transition-based testing). This strategy, in combination with our modeling language, enables the creation of specific test cases following the defined testing approach (Section 8.4.1). These test cases ensure the proper execution of the modelled state machine behavior (Section 9.3.3.4) while validating the expected results (also modeled in the state machine description). In other words, the modeling language is connected to the testing algorithm that generates abstract test cases, which are in turn connected to the concrete test case generation. This cooperation grants a methodical generation of artifacts based on the end user's models.
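To make the transition-based strategy concrete, the following C++ fragment is an illustrative sketch only (the actual generator is implemented in Xtend on top of the Xtext model): for every transition in the modelled state machine, one abstract test case is produced that brings the SUT to the source state, triggers the modelled call, and checks the target state within the modelled timeout. The struct and function names are placeholders introduced for this example.

    // Illustrative sketch of transition-based abstract test case generation.
    // The structs below are simplified placeholders for the modelled elements.
    #include <string>
    #include <vector>

    struct Transition {
        std::string sourceState;   // e.g. "enHighVoltageState_Off"
        std::string triggerCall;   // e.g. "SetHighVoltageOn"
        std::string targetState;   // e.g. "enHighVoltageState_On"
        int timeoutMs;             // modelled timeout
    };

    struct AbstractTestCase {
        std::string name;
        std::vector<std::string> steps;   // technology-independent test steps
    };

    // One abstract test case per modelled transition (transition coverage).
    std::vector<AbstractTestCase> generateAbstractTests(const std::vector<Transition>& transitions) {
        std::vector<AbstractTestCase> tests;
        for (const Transition& t : transitions) {
            AbstractTestCase tc;
            tc.name = t.triggerCall + "_from_" + t.sourceState;
            tc.steps = {
                "bring SUT to state " + t.sourceState,                // stateSetup block
                "call " + t.triggerCall,                              // trigger
                "wait at most " + std::to_string(t.timeoutMs) + " ms",
                "assert state == " + t.targetState                    // expected result
            };
            tests.push_back(tc);
        }
        return tests;
    }

The abstract test cases are technology independent; the concrete test case generation then maps each step onto the chosen testing framework.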

Genericity
The designs of the modeling language and code generators are generic enough that modeling different FEI interfaces is possible when these models are approached from a general perspective, avoiding low-level or technology-related details. Such details have been encapsulated in low-level elements, creating an abstraction that moves their implementation to the last part of the solution usage (adapting the generated artifacts). Creating artifacts (test cases and wrappers) relies on common elements that are shared among FEI interfaces, independent of specific low-level details.

Glossary

MBIF: Model-Based Interface Framework
PDEng: Professional Doctorate in Engineering
OOTI: Ontwerpers Opleiding Technische Informatica
TU/e: Eindhoven University of Technology
PSG: Progress steering group; project supervisors who evaluate the performance of the project month by month
ST: Software Technology
SEM: Scanning Electron Microscopy
FIB: Focused Ion Beam
CMOS: Complementary metal-oxide-semiconductor
MBSD: Model-based software development; the idea of achieving code reuse and performing maintenance and product development through the use of software modeling technology
BBT: Behavior-based testing; expecting specific interactions to occur between objects when certain methods are executed
MBT: Model-based testing; an application of model-based design for designing, and optionally also executing, artifacts to perform software testing or system testing
SUT: System under test
Code coverage: A measure used to describe the degree to which the source code of a program is executed when a particular test suite runs. This definition can vary according to the targeted source code (for example, code coverage of a state machine only considers the code that interacts directly with the state machine)
FEG: Field Emission Gun
HT: High Tension
COM: Component Object Model, a Microsoft software interface technology
IOM: Instrument Object Model
UML: Unified Modeling Language, a software modeling language
IDL: Interface Definition/Description Language, a specification language used to describe a software component's application programming interface (API)
Black-box testing: A method of software testing that examines the functionality of an application without peering into its internal structures or workings
MSS: Microscope software system
UI: User interface
Bypassing: Quality measure meaning that the only way to access the interfaces is through an instrument (IOM)
RPC: Remote procedure call
GoogleTest: A unit testing library for the C++ programming language, based on the xUnit architecture
Franca IDL: Franca Interface Definition Language, a formally defined, text-based interface description language
CAF: Customer Objectives – Application – Functional; a subsection of the CAFCR model

DSL: A Domain-Specific Language is a computer language specialized to a particular application domain
Xtext: An open-source framework for developing programming languages and domain-specific languages (DSLs)
Xtend: A general-purpose, high-level programming language for the Java Virtual Machine
Greenfield project: In software development, a project that develops a system for a totally new environment, without concern for integrating with other systems, especially legacy systems
Tree parser: An ordered, rooted tree that represents the syntactic structure of a string according to some context-free grammar
Semantic mapper: A tool or service that aids in the transformation of data elements from one namespace into another namespace
ANTLR: ANother Tool for Language Recognition, a parser generator that uses LL(*) for parsing
LL: In computer science, an LL parser is a top-down parser for a subset of context-free languages
Eclipse: An integrated development environment (IDE) used in computer programming, and the most widely used Java IDE
Lexical analysis: The process of converting a sequence of characters (such as in a computer program or web page) into a sequence of tokens (strings with an identified "meaning")
Lexer: A program that performs lexical analysis
EMF: Eclipse Modeling Framework
Boost.Python: A library that enables seamless interoperability between C++ and the Python programming language
COM-ATL: A set of template-based C++ classes intended to simplify the programming of COM objects
VS: Microsoft Visual Studio
V&V: Verification and validation
VM: Virtual machine
JVM: Java virtual machine
CI: Continuous integration

Bibliography

References

[1] FEI company. http://www.fei.com/

[2] L. Reimer, H. Kohl, “Transmission Electron Microscopy”. New York: Springer Science+Business Media, 2008. (0-387-40093-8)

[3] B. Beizer, “Black-Box Testing: Techniques for Functional Testing of Software and Systems”. New York: Wiley, 1995. (0-471-12094-0)

[4] B. Jubair, S. Moiz, M. Rizwanullah, “Model Based Software Development: Issues & Challenges”. Special Issue of International Journal of Computer Science & Informatics (IJCSI), ISSN: 2231-5292, Vol. II - http://arxiv.org/pdf/1203.1314.pdf

[5] J. Fields, “Behavior Based Testing”. 2008, Online resource http://blog.jayfields.com/2008/02/behavior-based-testing.html

[6] P. Hsia, “Behavior-based acceptance testing of software systems: a formal scenario approach”. Proceedings of the Eighteenth Annual International Computer Software and Applications Conference (COMPSAC 94), Taipei, 1994

[7] L. Apfelbaum, J. Doyle, “Model Based Testing”. Software Quality Week Conference, May 1997

[8] Y. Masood, “Model Based Testing: An Evaluation”. Blekinge Institute of Technology, Sweden, May 2010. Online resource: http://www.diva-portal.org/smash/get/diva2:831658/FULLTEXT01.pdf

[9] Q. Farooq, M. Riebisch, S. Lehnert, “Model-based Regression Testing – Process, Challenges and Approaches”. Ilmenau University of Technology, Germany, 2011. Online resource: https://www.inf.uni-hamburg.de/en/inst/ab/swk/research/publications/pdf/2011-chapter-mbrt-book-chapter-20120121.pdf

[10] H. Behrens, M. Clay, S. Efftinge, M. Eysholdt, P. Friese, J. Köhnlein, K. Wannheden, S. Zarnekow, “Xtext User Guide”, 2008. Online resource: https://eclipse.org/Xtext/documentation/1_0_1/xtext.pdf

[11] http://www.tutorialspoint.com/software_architecture_design/component_based_architecture.htm

[12] H. Behrens, “Generation gap pattern”, 2009. Online resource: http://heikobehrens.net/2009/04/23/generation-gap-pattern/

[13] M. Fowler, R. Parsons, “Domain Specific Languages”. Addison-Wesley Professional, 2010. (0321712943).

[14] B. Hludzinski, “Understanding Interface Definition Language: A Developer’s Survival Guide”, Microsoft Systems Journal, 1998. Online resource: https://www.microsoft.com/msj/0898/idl/idl.aspx

[15] E. Tran, “Verification/Validation/Certification”, 18-849b Dependable Embedded Systems, Carnegie Mellon University, Spring 1999. Online resource: https://users.ece.cmu.edu/~koopman/des_s99/verification/

[16] M. P. E. Heimdahl, “Model-based testing: challenges ahead”, 29th Annual International Computer Software and Applications Conference (COMPSAC'05) (Volume:1 ). IEEE.

[17] M. Schuts, J. Hooman, “Using Domain Specific Languages to Improve the Development of a Power Control Unit”, Proceedings of the Federated Conference on Computer Science and Information Systems, pp. 781-788, ACSIS, Vol. 5, 978-83-60810-66-8, 2015

[18] Using Code Coverage to Determine How Much Code is being Tested, Microsoft Online Library. Online resource: https://msdn.microsoft.com/en-us/library/dd537628.aspx

[19] D. Dietrich, “Xtext Protected Regions”. Online resource: https://github.com/danieldietrich/xtext-protected-regions

Additional Reading

Markus Voelter, DSL Engineering: Designing, Implementing and Using Domain-Specific Languages. 2013, Germany: CreateSpace Independent Publishing Platform. ISBN-10: 1481218581

Lorenzo Bettini, Implementing Domain-Specific Languages with Xtext and Xtend, 2013, United Kingdom: Packt Publishing. ISBN: 978-1-78216-0304

Elise Greveraars, Model Based Testing … >> Tester Needed? No Thanks, We use MBT!. The Netherlands: Atos Origin. Online resource: https://www.testnet.org/testnet/download/testnetpublicaties/atos_origin_vision_on_mbt_1.2.pdf

David Abrahams, Building Hybrid Systems with Boost.Python, 2003. Boost Consulting. Online resource: http://www.boost.org/doc/libs/1_61_0/libs/python/doc/html/article.html

Technology references

{1} Xtext: https://eclipse.org/Xtext/

{2} Xtend: https://eclipse.org/xtend/documentation/

{3} GoogleTest: https://github.com/google/googletest

{4} Boost.Python: http://www.boost.org/doc/libs/1_61_0/libs/python/doc

{5} COM-ATL: https://msdn.microsoft.com/en-us/library/3ax346b7.aspx

{6} Eclipse: https://eclipse.org/

{7} Franca IDL: https://github.com/franca/franca/wiki

{8} Dezyne: http://www.verum.com/

Appendix A: Model-based approaches

The next Figure shows an overview of the minimal elements (from a high-level point of view) that our solution requires. One of the objectives of the proposals is to show which main elements will be affected by choosing a specific alternative.

(Figure: diagram of the required components: modeller; test settings; test strategy / SUT coverage definition criteria; test generation; test compilation / execution; test result management; wrapper generator; interface documentation generator; real or simulated system (SUT).)

Figure 55 General view of the main components that our solution required.

These components were analyzed and matched with different model-based approaches. The following Figures show the implementation of our project using different model-based approaches, indicating which components would be provided by the approach and which ones would have to be implemented by the developer.

Model-Based tool

(Figure: diagram of the solution components (test settings; test strategy / SUT coverage definition criteria; test generation / documentation; test compilation / execution; test result management; model definition in terms of the tool; modeller; wrapper generator; interface documentation), annotated with the advantages and disadvantages below.)

Additional advantages:
• Model validation
• Code generation
• Simulation
• Documentation
• Contained modules are fully implemented

Disadvantages:
• No coverage for test and wrapper generation
• The model definition has to be parsed if the missing modules are developed externally
• The user has to adapt to the model definition language

Figure 56 Our solution deployed with a generic model-based tool. A generic model-based tool is the kind of tool that only generates implementation code from a common model.

DSL

(Figure: diagram of the solution components (test settings; test strategy / SUT coverage definition criteria; test generation / documentation; test compilation / execution; test result management; customized model definition; modeller; wrapper generator; interface documentation), annotated with the advantages and disadvantages below.)

Additional advantages:
• Full customization of all the modules
• The solution is adaptable to the system
• Model definition in an understandable format for all sub-components

Disadvantages:
• It only provides the elements to build the modules; all of them have to be developed

Figure 57 Our solution deployed with a DSL approach. In this case the DSL extends our solution to develop generators for specific artifact generation.

Model-based testing tool

(Figure: diagram of the solution components (test settings; test strategy / SUT coverage definition criteria; test generation / documentation; test compilation / execution; test result management; model definition in terms of the tool; modeller; wrapper generator; interface documentation), annotated with the advantages and disadvantages below.)

Additional advantages:
• Contained modules are fully implemented
• Generation strategy already tested and validated
• Coverage metric included

Disadvantages:
• It only provides the elements to build the remaining modules; all of them have to be developed
• The user has to learn the modeling language (a testing-oriented language)
• The SUT has to be adapted to the tool

Figure 58 Our solution deployed with a model-based testing tool. The tool takes responsibility for generating and executing test cases.

Proposal / Scenario 1

(Figure: the model-based testing tool covers the test settings, test strategy / SUT coverage definition criteria, test generation / documentation, test compilation / execution, and test result management; a model adapter connects it to the DSL part, which covers the modeller, the wrapper generator, and the interface documentation.)

Figure 59 Solution proposal no. 1: DSL + MBT tool approach. The DSL takes responsibility for interface modeling and artifact generation regarding documentation and wrappers. The MBT tool takes care of creating, generating, executing, and reporting test cases.

Proposal no. 1 – Pros, cons, and effort activities

The following advantages will be granted by using a model-based testing tool:
• Model notation ready to use.
• Tested and validated test case generation strategy. Minimal effort will be required to employ the strategy (most of it in configuration for the SUT).
• Execution components for the generated test cases are provided by the tool.
• Result interpretation for the test case execution.
• In some tools, tracing of failed tests is provided <>.

However, the following shortcomings are foreseen from using an existing tool:
• MBT notations are focused on existing standards (UML) or on tool-specific settings; using them (by the final users of our tool) will require a learning period, while replacing them (implementing a domain-specific notation) will require adapters.
• Test case generation will depend completely on the tool, creating a full dependency on the chosen tool. Limitations (state explosion), bugs, and internal restrictions of the tool will define the boundaries and features of our test case generation.
• Execution and test result interpretation are not originally the focus of our project, but they are included in all the MBT tools. The integration of these components will require a significant period of time <> and the development of specific modules/adapters/transformations.
• Regardless of whether all the components of the MBT tool are used or not, understanding the full tool workflow is required in order to integrate it into our solution. This integration entails a set of configuration activities that will consume time (depending on tool complexity and specific features).

Proposal no. 2 – Pros, cons, and effort activities

Developing our own test generation algorithm has the following advantages:
• The interface modeling is defined according to the specific requirements of our generation algorithm. Additionally, the modeling language is extensible to cover test generation, documentation, and wrapper generation, avoiding the creation of specific parsers.
• Full control over the test generation strategy/algorithm: defining the coverage strategy according to the specific conditions of our system and adapting test generation to our domain.
• Usage of testing frameworks, programming languages, and methods that are employed in the domain of our solution.
• Specific test case documentation can be defined and written according to the requirements of our project.
• Considering the existing compilation/execution environment, both processes can be achieved using the current development infrastructure (IDE plus libraries for compilation and executable creation). By doing this, the test generation algorithm is restricted to the creation of code files for later compilation/execution.

The disadvantages of this proposal are the following:
• The test generation algorithm has to be developed from the ground up. Ideas are available on the net, but the implementation has to start practically from zero. Additionally, the algorithm will be fixed to the SUT, making our solution a domain-specific solution with the risk of becoming an interface-specific solution.
• Interface behavior coverage will depend completely on the quality of the generation strategy. For measuring this quality, a set of criteria has to be defined.
• Due to time constraints, features of the algorithm have to be prioritized, resulting in a final solution with restricted model-based testing features.
• The maturity (in terms of testing and validation) of the algorithm will be restricted by the time available for the project; consequently, the maturity of the final solution will not match that of formal model-based testing tools.
• All the necessary components have to be developed, increasing the possibility of bugs and mistakes that will delay the overall development. Getting a final solution with a restricted/limited scope is a risk.

Proposal / Scenario 2

(Figure: the DSL covers the customized model definition, the modeller, the wrapper generator, the interface documentation, and the test generation / documentation; test settings and test strategy / SUT coverage definition criteria are inputs; compilation, execution, and management of the test results will be performed by the current FEI infrastructure/platform.)

Figure 60 Proposal no. 2: The DSL approach. All the modeling and code generation is covered by developing a DSL, while test compilation, execution, and result management is an independent process led by the end user. Accepted proposal.

Appendix B: The Technology decision

Once we had decided on the approach for the solution, the next step was to compare current DSL technologies according to our criteria.

Criteria

According to the previously defined requirements and the nature of the OOTI project, the following criteria have been chosen for the technology decision:

• Customization of language/model definition: In order to adapt the solution to the FEI interface guidelines (interface definition and behaviour through a state machine), it has to be possible to model the existing interfaces completely. This criterion can be achieved in two ways:
  • The technology allows the extension/modification/creation of its existing definition technique.
  • The technology's specification technique already covers all the necessary elements to model the interfaces.
• Customization of artifact generation: All the generated artifacts (test cases, wrappers, and documentation) have to be aligned with the FEI codebase. As a result, the technology must allow the modification or creation of a fully customized code generation.
• Simplicity/complexity of model definition (learning curve): Because the model is the main pillar of the solution, expressing this model has to be simple and intuitive, diminishing the errors due to the specification technique. There are several methods and strategies to define models that allow very specific model concepts, which can be confusing in some cases; restricting the model options to the necessary ones leads to a lower probability of introducing errors due to complex specifications.
• Maturity: In order to have a stable solution, the maturity of the tool has to be evaluated in two terms:
  • The number of people currently using the tool (community).
  • The years the tool has been available and the market (industrial, academic) that is using the tool.
• Extensibility: The current project has an initial goal to achieve, but depending on the implementation success, it could be extended to include new scenarios and alternative paths; the solution has to be able to cope with this, collaborating with other technologies or allowing extension by adding new features.
• Open source: This project is aiming for a final product, but it is still a research project and there is no plan to acquire a budget for the rights to use commercial tools.

DSL tools compared: Metaedit, Xtext, MPS, Spoofax, SugarJ, Rascal, Enso, Essential, Whole Platform, Más.

Customization of language/model definition: Metaedit: Yes; Xtext: Yes; MPS: Yes; Spoofax: Yes; SugarJ: Partial; Rascal: Yes; Enso: Yes; Essential: Yes; Whole Platform: Yes; Más: Yes.

Customization of artifact generation: Metaedit: Yes; Xtext: Yes; MPS: Yes; Spoofax: Restricted (Java); SugarJ: Restricted (Java); Rascal: Restricted (Java); Enso: Yes; Essential: Yes; Whole Platform: Restricted (Java); Más: No.

Simplicity / complexity of model definition: Metaedit: Medium (graphical definition); Xtext: Low / Medium (you generate the language); MPS: Low / Medium (you generate the language); Spoofax: Medium; SugarJ: Medium; Rascal: Medium (algebraic notation); Enso: Low / Medium (you generate the language); Essential: Low / Medium (you generate the language); Whole Platform: Low / Medium (you generate the language); Más: Medium.

Maturity: Metaedit: Large; Xtext: Large; MPS: Medium; Spoofax: Low; SugarJ: Low; Rascal: Low; Enso: Low; Essential: Low; Whole Platform: Low (still active?); Más: Low (still on construction).

Extensibility: Metaedit: Yes; Xtext: Yes; MPS: Yes; Spoofax: Yes; SugarJ: Restricted (Java); Rascal: Yes; Enso: Yes; Essential: Yes; Whole Platform: Yes; Más: ?.

Open source: Metaedit: No; Xtext: Yes; MPS: Yes; Spoofax: Yes; SugarJ: Yes; Rascal: Yes; Enso: Yes; Essential: Yes; Whole Platform: Yes; Más: ?.

Figure 61 Comparison table between DSL tools.

The most promising tool found in our research is Xtext/Xtend. It has the best balance on the criteria defined at the beginning of the document. Using Xtext/Xtend allows us to define our language in a textual way, with the complexity concentrated in the language/grammar definition (Xtext). Additionally, the code generator is completely open and fully customizable through the Xtend language; hence, creating different target files (wrappers for different target languages, test files in different testing frameworks, and any documentation file) is possible. Finally, the tool currently has active support and several companies are adopting solutions based on it.

Appendix C: The modeling language grammar

The grammar definition is presented as Xtext syntax graphs for the main grammar elements. The complete grammar consists of 850 lines of Xtext definitions distributed over 117 grammar rules and is not presented in full because of space constraints.

Figure 62 Main interface elements.

Figure 63 The state machine grammar.

Figure 64 The operations grammar.

Figure 65 The testing grammar.

Figure 66 The wrapper (python wrappers) grammar.

Appendix D: Model examples

The following are examples of interfaces modeled with our modeling language and artifact models (testing and wrappers).

interface ISource{
    eventInterface _ISourceEvents

    stateList enumeration enHighVoltageState{
        enHighVoltageState_Off,
        enHighVoltageState_On
    }

    method get_HighVoltage{ }

    stateObserver sync method get_HighVoltageState{
        out enHighVoltageState pVal
    }

    async method SetHighVoltageOff {
        timeout 20000msec
    }

    async method SetHighVoltageOn {
        HResult S_OK, E_FAIL
        timeout 20000msec
    }

    eventMethod HighVoltageStateChanged eventID dispid_ISourceEvents_HighVoltageStateChanged{
        out enHighVoltageState pValue
    }

    stateMachineBehavior{
        initial state enHighVoltageState_Off{
            on call SetHighVoltageOn goto enHighVoltageState_On
            timeout 5sec
        }

        state enHighVoltageState_On{
            on call SetHighVoltageOff goto enHighVoltageState_Off
            timeout 5sec
        }
    }
}

Figure 67 The ISource interface.

testSettings {
    interface ISource
    instrumentVersion 3
    iomLibrary IOMLib

    stateSetup{
        state enHighVoltageState_Off{ SetHighVoltageOff }
        state enHighVoltageState_On{ SetHighVoltageOn }
    }

    setup{ SetHighVoltageOff }
    teardown{ SetHighVoltageOff }
}

Figure 68 A test model for the ISource interface.
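For illustration only, the sketch below shows the kind of GoogleTest case that could be generated from the two ISource models above; names such as ISourcePtr, ConnectToIom, and WaitForState are hypothetical placeholders for the instrument access infrastructure and do not represent the exact generated FEI code.

    #include <gtest/gtest.h>

    // Sketch of a generated transition test; ISourcePtr, ConnectToIom, and
    // WaitForState are placeholder names, not the actual generated API.
    class ISourceStateMachineTest : public ::testing::Test {
    protected:
        void SetUp() override {
            source = ConnectToIom("IOMLib", 3).GetISource();  // instrumentVersion 3
            source->SetHighVoltageOff();                      // setup block of the test model
        }
        void TearDown() override {
            source->SetHighVoltageOff();                      // teardown block of the test model
        }
        ISourcePtr source;
    };

    // Transition: enHighVoltageState_Off --SetHighVoltageOn--> enHighVoltageState_On
    TEST_F(ISourceStateMachineTest, SetHighVoltageOn_FromOff_ReachesOn) {
        enHighVoltageState state;
        source->get_HighVoltageState(&state);
        ASSERT_EQ(enHighVoltageState_Off, state);             // precondition: source state

        ASSERT_EQ(S_OK, source->SetHighVoltageOn());          // trigger; expected HResult S_OK

        WaitForState(source, enHighVoltageState_On, 5000);    // modelled transition timeout (5 sec)
        source->get_HighVoltageState(&state);
        EXPECT_EQ(enHighVoltageState_On, state);              // postcondition: target state
    }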

wrapper{
    Python wrapper
    wrappingTechnology boost.python
    namingConvention camelCase

    interface ISource {
        header "ISource.h"
        nonCopyable
    }

    namespaces Fei, Tem, Omp, PythonWrapper
}

Figure 69 A python wrapper model for the ISource interface.
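As an indication of what the wrapper generator could produce from this model, a minimal Boost.Python module is sketched below; the namespace nesting (Fei::Tem::Omp) and the out-parameter adapter are assumptions introduced for this illustration and not the exact generated FEI code.

    #include <boost/python.hpp>
    #include "ISource.h"   // header named in the wrapper model

    namespace {
        // Adapter for the COM-style out parameter so Python receives a return value
        // (assumed helper, not part of the modelled interface itself).
        enHighVoltageState getHighVoltageState(Fei::Tem::Omp::ISource& src) {
            enHighVoltageState state;
            src.get_HighVoltageState(&state);
            return state;
        }
    }

    BOOST_PYTHON_MODULE(PythonWrapper)
    {
        using namespace boost::python;

        // Expose the state enumeration from the model to Python.
        enum_<enHighVoltageState>("enHighVoltageState")
            .value("Off", enHighVoltageState_Off)
            .value("On",  enHighVoltageState_On);

        // nonCopyable in the model maps to boost::noncopyable; the camelCase
        // naming convention is applied to the exposed method names.
        class_<Fei::Tem::Omp::ISource, boost::noncopyable>("ISource", no_init)
            .def("getHighVoltageState", &getHighVoltageState)
            .def("setHighVoltageOn",    &Fei::Tem::Omp::ISource::SetHighVoltageOn)
            .def("setHighVoltageOff",   &Fei::Tem::Omp::ISource::SetHighVoltageOff);
    }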

//Model-Based Interface Framework demo
<** @description : Interface providing common stem detector functionality. **>
interface IStemDetector{
    eventInterface _IStemDetectorEvents

    stateList enumeration DetectorInsertionState {
        DetectorInsertionState_Inserted,
        DetectorInsertionState_Retracted,
        DetectorInsertionState_Inserting,
        DetectorInsertionState_Retracting
    }

    stateObserver method GetInsertionState{
        out DetectorInsertionState pVal
    }

    method Retract{}
    method Insert{}

    eventMethod InsertionStateChanged eventID dispid_IDetector2Events_InsertionStateChanged{
        out DetectorInsertionState pValue
    }

    stateMachineBehavior {
        state DetectorInsertionState_Retracted{
            on call Insert goto DetectorInsertionState_Inserted
            event InsertionStateChanged=DetectorInsertionState_Inserting
            event InsertionStateChanged=DetectorInsertionState_Inserted
            timeout 500msec
        }

        initial state DetectorInsertionState_Inserted{
            on call Retract goto DetectorInsertionState_Retracted
            event InsertionStateChanged=DetectorInsertionState_Retracting
            event InsertionStateChanged=DetectorInsertionState_Retracted
            timeout 500msec
        }
    }
}

Figure 70 The IDetector interface.

testSettings {
    testingFramework GoogleTest
    interface IStemDetector
    instrumentVersion 3
    iomLibrary IomAcquisitionLib

    stateSetup{
        state DetectorInsertionState_Inserted{ Insert }
        state DetectorInsertionState_Retracted{ Retract }
    }

    scenario test1{
        stateSetup{
            state DetectorInsertionState_Inserted{ Insert }
            state DetectorInsertionState_Retracted{ Retract }
        }
        setup {Retract}
        teardown {Insert}
    }

    scenario test2{
        setup {Insert}
    }
}

Figure 71 A test model for the IDetector interface.

About the Author

Aldo Daniel Martinez Márquez received his Bachelor's diploma in Computer and Systems Engineering (2010) and his MSc degree in Software Engineering (2012) from the UPAEP Faculty of Engineering and Information Technologies, Puebla, Mexico. His Bachelor's thesis (An integrated vision and voice system for the service robot Nanisha) and Master's thesis (PSP and Evolutionary Model in a Face Recognition System for The Service Robot Donaxi) were about the design and implementation of intelligent software for domestic robots. He is a former member of the UPAEP Control and Robotics Laboratory, where he worked as a researcher from 2008 to 2013; during this time he developed software for robotics and competed in several national and international robotics contests, such as RoboCup (2009 - 2013, @Home League). From September 2014 until September 2016, he worked at the Eindhoven University of Technology as a PDEng trainee in Software Technology at the 3TU.Stan Ackermans Institute. During his graduation project, he worked at FEI on a project focused on testing and code automation. He has a natural passion for software engineering and his main interests are artificial intelligence, robotics, and motion applications within an industrial environment.