A messaging based IDE

Bachelor's thesis submitted for the degree of Bachelor of Science in Computer Science (Informatik)

submitted by Thies Möhlenhof

First reviewer: Prof. Dr. Ralf Lämmel, Institut für Informatik
Second reviewer: M.Sc. Marcel Heinz, Institut für Informatik

Koblenz, September 2016

Declaration

I hereby confirm that this thesis was written by me independently, that I did not use any aids other than those indicated, in particular no internet sources not named in the list of references, and that this thesis has not previously been submitted by me in any other examination procedure. The submitted written version corresponds to the version on the electronic storage medium (CD-ROM).

Yes No

I agree to this thesis being made available in the library.

I agree to the publication of this thesis on the internet.

...... (Place, Date) (Thies Möhlenhof)

Zusammenfassung

Modern software development environments (e.g., Eclipse, IntelliJ) consist of a wide variety of programs and tools which are combined into an integrated development environment. This is realized through a dedicated plug-in architecture built on a programming interface specific to each environment. As a consequence, plug-ins become tightly coupled to a particular development environment and cannot be reused across environments. The microservice architecture described in this thesis offers a way to break up this structure and to connect the resulting components through textual messages. This simplifies the integration of new tools, because the dependency on the specific programming interface of a given development environment is removed. The concept of splitting the various tools into several small programs is described in detail by the Monto project [44], which serves as the foundation of this thesis.

Abstract

Modern Integrated Development Environments (IDEs) are, in general, a combination of a textual editor, build tools and other services, which are combined by a plugin architecture. As a result, these services are tightly coupled and depend on an Application Programming Interface (API) tailored to a specific IDE approach (e.g., Eclipse, IntelliJ). In this thesis, an architecture is presented which breaks up this monolithic structure and provides an expandable microservice architecture offering the same features through the collaboration of services, inspired by the Monto project [44], without depending on a specific editor or API. As a result, the integration of new services is simplified and services become reusable across different IDE approaches.

Contents

1 Acknowledgement ...... 1

2 Introduction ...... 2

3 Related work ...... 5
   3.1 Language workbenches ...... 5
   3.2 Kite ...... 7
   3.3 Monto ...... 7
   3.4 NeoVim ...... 8
   3.5 Summary ...... 8

4 Background ...... 9
   4.1 Microservice Architecture ...... 9
   4.2 Docker ...... 11
   4.3 Service Discovery ...... 15
      4.3.1 Consul ...... 16
      4.3.2 etcd ...... 16
   4.4 FSML: Finite State Machine Language ...... 17

5 Requirements ...... 20
   5.1 Architecture Requirements Specification (ARS) ...... 21
   5.2 User Requirements Specification (URS) ...... 22

6 Design ...... 28
   6.1 Architecture Overview ...... 29
   6.2 Detailed Architecture ...... 30
      6.2.1 Client Application ...... 31
      6.2.2 Consul Server Node ...... 32
      6.2.3 Service Node ...... 33
   6.3 Highlighting Request Example ...... 35

7 Implementation ...... 38

8 Requirement Analysis ...... 41

9 Concluding remarks ...... 43
   9.1 Summary ...... 43
   9.2 Analysis ...... 43
   9.3 Future work ...... 45

List of Figures

3.1 Monto architecture ...... 7

4.1 Docker client-server architecture ...... 11
4.2 Docker example ...... 13
4.3 Docker Swarm cluster example ...... 14

5.1 Syntax highlighting example ...... 22
5.2 Code formatting example ...... 23
5.3 Autocompletion example ...... 24
5.4 Code outline example ...... 25
5.5 Input dialog example ...... 26
5.6 Output dialog example ...... 26
5.7 Error message example ...... 27

6.1 General architecture overview ...... 29
6.2 Architecture overview in detail ...... 30
6.3 Client application in detail ...... 31
6.4 Consul node in detail ...... 32
6.5 Service node in detail ...... 34
6.6 Highlight request example ...... 37

Chapter 1

Acknowledgement

I would like to thank my family, friends and Prof. Dr. Ralf Lämmel for supporting me in my studies and this thesis.

Chapter 2

Introduction

This thesis is concerned with the disintegration of features provided by an Integrated Development Environment. The fundamental idea is described by Sloane et al. [44].

Integrated Development Environments (IDEs) are an essential part of the daily life of a software developer. These environments are a great way to enhance the productivity of a developer, because they provide a large variety of tools and features which help to create correct and secure computer programs. Over the years many different approaches became popular (e.g., [1], [2]); these environments are established among a large community and generations of software developers. The features provided by a modern IDE range from syntactic and semantic services [24] to project management services (e.g., [3]). Syntactic services, for instance, provide source code highlighting or syntactic source code completion for textual editors. Furthermore, they help to abstract the software development process by enabling visual modeling and generating source code through model-to-text transformation. Moreover, IDEs include tools for collaborative work like version control [4] or even real-time messaging [5], making collaborative software development effortless.

However, these services and tools are tightly coupled, because they depend on a specific, well-defined API for the target IDE approach. Therefore, it can be difficult to add new functionality to these environments and combine it with the already existing one. These functionalities are mostly provided via plugins [6], which are not reusable between different IDEs. Consequently, a language designer who, for example, wants to provide tooling for a novel language, or for new technologies that tackle new sets of problems which differ from the common ones, like testing or monitoring service-oriented architectures, encounters problems. The designer needs to provide a specially designed plugin for every IDE approach to make the new tool or language widely usable.

The increasingly popular concepts of distributed, service-oriented architectures can be a solution to decouple the components of an IDE, so that the integration and disintegration of new components is simplified in contrast to the modern integrated approach. In recent work, the Monto project [44], this novel approach is introduced. The Monto project provides a way to decouple the services of an IDE by implementing them stand-alone and combining them via textual messaging, so that a flexible and simple integration or disintegration is achieved. The overall idea of this concept is to decouple the different components and provide an environment in which the different components are able to work together by sending and receiving messages over a messaging-based middleware. Thereby, an integration or disintegration of services or tools is achieved easily. This new environment is called a Disintegrated Development Environment (DDE), which provides the look and feel of a common IDE, but the different components are flexible, reusable, independent and simply interchangeable. This raises two questions. What are the advantages and disadvantages of decomposing the features provided by an IDE, so that a simpler and faster integration of new features and functionalities is possible in contrast to the common monolithic approach? Moreover, can the collaboration of the decomposed features be realized with an established internet protocol and an entrenched messaging format like JSON or XML, to be as independent as possible from the underlying technology? This concept can be a great leap, because new and old features can be added faster and every software developer is free to choose the editor he or she prefers. As a result, new languages or technologies can be spread faster with appropriate tooling, and plugins do not need to be implemented multiple times for every IDE approach.

In this thesis, the idea of a DDE is adopted and a different approach is presented. The presented approach places particular emphasis on the independence of each service with regard to its deployment into an infrastructure that is maintainable and expandable. Moreover, the implementation of each service is independent of a specific IDE plug-in architecture.

The structure of this thesis follows a common thread. Since programming environments are widely accepted, the starting point is the Related Work chapter, which surveys various solutions in the field. The following chapter gives some Background on concepts and technologies used in the provided example implementation. Readers familiar with concepts like Service-Oriented Architectures and the Finite State Machine Language may skip this chapter and proceed with the Design chapter. The Design chapter is the introduction to the experimental implementation of a messaging-based IDE and starts with a high-level overview of the architecture, followed by a more detailed description of each component and the fulfilled user requirements. The Implementation chapter describes the smallest components and refers to the plain source code. The following chapter, Requirement Analysis, discusses the requirements on the infrastructure. The Concluding Remarks chapter closes this thesis with a summary and an analysis of the implementation, concluding in a future work section.

Chapter 3

Related work

Modern IDEs are like nuclear power plants: everybody knows that there are some issues, but nobody has a clear and final solution for them. There are several different approaches to support the process of software development; in the following, examples of tools and environments are given which differ from the common integrated solutions.

3.1 Language workbenches

The term Language Workbench was introduced by Martin Fowler in 2005 [7]; the core innovative aspect is breaking up the traditional process of editing and compiling source code. On the surface a Language Workbench is like an ordinary IDE, but internally the source code representation is abstract (an Abstract Syntax Tree (AST)) and projected to the user as text [23]. This contrasts with the conventional approach, where the source code is edited and then processed by the compiler by parsing and generating executable files. Projecting the abstract representation to the user offers flexibility by storing the source code in an abstract manner. As a result, the projectional editor approach can show the user different representations of the program tailored to the current context. By maintaining the abstract representation in a machine-readable format (e.g., XML, JSON), developing plugins is more straightforward than constructing plugins against an API. Furthermore, these plugins can deliver features which are beyond the capabilities of the traditional IDE approach; for example, due to the abstract representation not only syntactic but also semantic errors can be returned. As a result, testing code against the compiler can be omitted.

Language designers have taken up the idea of language workbenches and developed integrated tools for language creation and definition. Tools like Spoofax [30], MPS [51] or [26] assist language designers, among other tasks, in creating Domain-Specific Languages (DSLs) [47] in the Language-Oriented Programming (LOP) [14] style. In the LOP style, for each problem domain a language is designed to solve this particular domain of problems [14]. Languages designed to solve a special domain of problems are named DSLs. The Structured Query Language (SQL), the Finite State Machine Language (FSML) and the HyperText Markup Language (HTML) are examples of DSLs, because they are built to solve a specific problem domain like managing databases (SQL), defining webpages (HTML) and representing finite state machines (FSML). These languages need to work together, which is where language workbenches come into play. The internal representation offers the opportunity to project only the information relevant to the user and to compose the different DSLs correctly. Language designers can define new and extend existing programming languages [41] in Language Workbenches, by defining them by means of a single (e.g., [45]) or multiple meta-languages (e.g., [30], [50]). Language Workbenches like the Language Designer's Workbench [50] provide meta-languages for syntax definition [33], name binding [35], transformation [12], dynamic semantics [48] and editor services [50]. As a result, programming language syntax, semantics and tooling can be specified in one tool. These definitions are used by the environment to generate full-featured Eclipse or IntelliJ plugins to support the newly defined language. All in all, Language Workbenches simplify the process of implementing new programming languages and the associated tools. Moreover, LOP gets simplified and tools for this and other multi-paradigm approaches can be produced (e.g., [52]). But this freedom can also be a drawback, because the developer needs to define the problem domain, design a language to solve that domain and still has to actually solve the problem. Hence, the developer needs to be a developer and a language designer at the same time [28]. Additionally, testing and debugging a DSL can be a major task as well, and the variation of languages can get out of hand [29].

3.2 Kite

Developing in textual editors like Sublime [46], [49] or [11] has one major drawback: they do not offer library documentation or code examples like an IDE. This leads to many open browser windows and switching back and forth between editor and browser. Kite is an application that solves this problem by running in the background and offering the desired information in a special window, by interpreting the user's input. As a result, Kite becomes a support feature for every editor and even the terminal, by offering plugins for each of these applications. However, Kite depends on an internet connection and does not offer tools for executing, highlighting or testing source code.

3.3 Monto

In Monto [44] the main idea of decoupling services and contributing a messaging layer for communication is described. The Monto architecture (Fig. 3.1) provides a Message Oriented Middleware (MOM) to realize the communication between the different, isolated components. These components are split into three groups: sources, sinks and servers. The broker is the middleware that implements the communication between the components. In the context of an IDE, sources and sinks are represented by textual or visual editors. The sources publish versions, which are processed by servers, and the resulting product is consumed by sinks. The main business logic, like formatting or syntax highlighting, is done by the servers, which represent the features provided by an IDE. Versions and products are encoded in the machine-readable JSON format and represent the source file with metadata or, in the case of a product, the information interpretable by a sink, like the position of a token to highlight in the textual representation.

Figure 3.1 Monto architecture [44]

3.4 NeoVim

NeoVim [36] is a project aiming to refactor the legacy code base to make the Vim editor more flexible, maintainable and open for developers, supporting plugin development detached from VimL and the old C89 legacy code. The NeoVim project's purpose is not to transform Vim into an IDE, but to pave the way for subsequent projects. Such projects can call back to the NeoVim implementation and use Transmission Control Protocol (TCP) addresses, named pipes and other transport mechanisms to create a variety of remote plugins that turn Vim into an IDE. In conclusion, NeoVim is the foundation for transforming Vim into an IDE.

3.5 Summary

As described, many different approaches and projects exist to improve IDEs. Language Workbenches (e.g., MPS, Spoofax [24]) are tools to support language-oriented programming rather than to improve development tooling for high-level languages. Other tools like Kite are an improvement for development in fairly plain text editors, but do not offer full IDE support with, for example, compiling or highlighting tools. The NeoVim project can be a cornerstone to transform Vim into an IDE by providing interfaces that make plugin development easier and more flexible, moving away from old legacy APIs and VimL and offering TCP connections, but plugins depend on NeoVim. Monto is the project closest to solving the problem of integration and disintegration of new features: it implements the features as services and provides a messaging layer to compose them into a full IDE. This thesis supplies a further attempt to decouple the services and build a DDE similar to the Monto project. The main difference between this attempt and Monto is the architectural structure and the way of providing services to the environment.

Chapter 4

Background

This chapter provides an overview of the different concepts and technologies used in this thesis. First, there is an introduction to service-oriented architectures, more precisely microservices. Secondly, an overview of the used technologies is given. Readers familiar with the Docker platform and microservice architectures may skip this chapter.

4.1 Microservice Architecture

A microservice architecture [37] is a Service Oriented Architecture (SOA) approach of breaking monolithic software systems down into small services. These services or applications preferably run on multiple machines, but give the impression of running on one server as a single application. This computing approach is called a distributed system, because the different parts are deployed on a variety of computers, which physically do not have to be at the same location. The key benefit of a microservice application is the independence of services among themselves. As a result, every service can be implemented to fit exactly the problem it should solve. To provide an example, a front-end service and the backend services can be written in different languages, such as Golang. Another key benefit of distributed systems is that the probability of failure is reduced, because the same service, like a database service, can be added to the system more than once. Hence, if one database service fails, another one can replace it directly. But this results in the problem of syncing different databases to hold the same data. Moreover, distributed systems depend on an underlying messaging protocol (e.g., HTTP), so the performance of the system is limited not only by the application itself, like in a monolithic system, but also by the performance of the underlying messaging protocol or technology.

In a microservice architecture the predominant concept is the generally accepted Unix philosophy of “Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features” [34]. Services are small, lightweight, independent and individually deployable software artifacts, which provide only one well-defined functionality. To fulfill a task, services collaborate via network protocols and fine-grained Application Programming Interfaces. As a result, larger tasks can be completed by a group of small services. Service-oriented and microservice architectures have strong links to one another; to distinguish microservices from other SOA patterns, the developer needs to follow specific key principles described by Sam Newman [37]. These six core principles are:

1. Model Around Business Concepts and not around technical concepts, thus ev- ery service has its own business domain.

2. Adopt a Culture of Automation, because in an infrastructure with many different independent parts, for example, deployment can get out of hand and deploying every service manually can be time-consuming; with automation this can be prevented.

3. Hide Internal Implementation Details, because services have to communicate with each other, and without precisely defined APIs a strong coupling or an unstable system can be the result.

4. Decentralize all the things means that the boundaries of every single service should be respected at all times. For example, two services sharing the same database can become inconsistent; a solution could be a service which manages the database access and communicates with the two services, so that the database is consistent at all times.

5. Services should be Independently Deployable, so that failure is isolated. Every service is independent of other services with regard to its runtime; hence, if one service fails or needs to be updated, not the whole system fails and needs to be redeployed.

6. Highly Observable, based on the assumption that services will fail and it must be easy to determine why and what failed.

By adopting these principles it is possible to provide an infrastructure which is observable, failure resistant and easy to maintain. Moreover, considering the goal of this thesis, to develop a system whose components are easy to exchange, a microservice architecture is the right choice.

4.2 Docker

A microservice architecture consists of a variety of applications and services. These services depend on different runtimes, system tools and libraries. Deploying or installing these services and their dependencies in an environment different from the development environment can be problematic, due to environmental dependencies [27]. These dependencies can be resolved by deploying the service in a container. Docker is an open source project which implements software containers by operating-system-level virtualization [27]. In the DDE project, Docker Containers provide the flexibility to implement language and editor services independently of a specific technology stack. Moreover, the Docker project offers tooling for deploying and managing containers in a cluster. Hereinafter, the different tools provided by Docker Inc. [16] and used in the example DDE implementation are explained.

Figure 4.1 Docker client-server architecture

Docker offers a client-server architecture (Fig. 4.1) to build, deploy and manage containers. On the server side is the Docker Daemon, a long-running process managing Docker Containers. On the client side is the Docker Client, a command-line user interface running on the same or a remote system. Docker Daemon and Docker Client, together forming the Docker Engine, communicate over a RESTful API or web sockets offered by the Docker Daemon. The Docker Daemon accepts commands from the Docker Client and is capable of building, running and managing Docker Containers on the host according to these commands. Docker Containers are made up as a union filesystem [21], which combines images as horizontal layers into one filesystem. At the bottom, for instance, is an Ubuntu image [40]; on top of this image other images (e.g., [38]) can be added. These images are read-only, so a read-write layer is added when the container is instantiated to capture changes made in the container [39]. These images, which are the foundation of every container [15], are distributed on Docker Hub [20]. The Docker Hub is a public registry for Docker Images, but other public and private registries are available as well (e.g., [10], [18]). As soon as the Docker Daemon receives instructions from the Docker Client to build a container, the Docker Daemon pulls the specified images from the Docker Hub, adds, for example, the source code of an application and runs it in the container. These instructions can be specified in two main ways: on the one hand by a direct instruction to the Docker Client command-line interface, on the other hand via a Dockerfile [19], which is similar to a Makefile. In both approaches the Docker Client transmits the instructions to the Docker Daemon, which builds, runs and manages the container on the host machine. At this point we are able to define, build and run Docker Containers on a host machine, but most applications, especially microservice applications, are composed of multiple services running on multiple hosts. In the DDE implementation every service is deployed in its own container, to build an infrastructure which is modifiable and expandable without the need to deploy the whole application again when changes to a single service are made.

Figure 4.2 Docker example

An example architecture is shown in Figure 4.2. This kind of architecture is good for small applications running on a single host machine, but problems will arise when services are not on one server but on multiple servers. To combine multiple hosts into one virtual host, Docker Swarm was introduced. Docker Swarm combines multiple Docker hosts into one virtual host and offers a single entry point for managing the application deployed on a variety of servers.

Figure 4.3 Docker Swarm cluster example

Figure 4.3 shows an exemplary Swarm-managed cluster. The three servers, server A, server B and server C, are internally managed by Docker Daemons running in Swarm mode with Docker containers. Without Docker Swarm, each of these servers needs to be managed individually by connecting a Docker Client to it. This approach is visualized by the transparent Docker Clients. These Docker Clients have become obsolete, because the Docker Daemons are now managed by a Docker Swarm Manager. The administrator of the network can now connect to the Swarm Manager, deployed on server M, with the Docker Client to administrate the whole application through one entry point. On server M a Consul agent runs in server mode: because the Swarm Manager needs to be aware of the location of servers A-C, Consul functions as a service discovery backend for the Swarm Manager. Each server A-C needs to register with the Consul server with its IP address and port; as a result, the Swarm Manager is aware of the state and health status of each server and can notify if a server is not responding or behaves incorrectly. Moreover, Docker Swarm manages how containers are deployed to the cluster, by three different methods:

1. random: Deploying each container randomly to a server.

2. spread: Deploying each container on the server with the fewest containers.

3. binpack: Deploying each container on the server with the most containers, to reduce the number of servers needed.

Docker also offers DNS lookups for Docker Containers running on the same overlay container network. These container networks can be defined manually or are created automatically by using Docker Compose [17]. The administrator is able to deploy more than one container to a cluster with one instruction by defining a docker-compose file. A docker-compose file's structure is similar to that of a Dockerfile, but instead of a single container multiple containers can be defined. A docker-compose file accumulates the instructions of a user deploying Docker Containers with Dockerfiles in one file, so a whole application can be deployed with one command. In addition, Docker Compose automatically adds an inter-container network to the defined containers; as a result, DNS lookups and communication between containers are possible. All in all, using Docker offers a great benefit to the DDE implementation, because, thanks to DNS lookups, discovering services is easily done. Moreover, deploying services in Docker Containers resolves the dependencies on underlying technologies. For example, a developer can freely pick the programming language that targets the problem he or she wants to solve.
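The following Go sketch illustrates how one containerized service can reach another simply by its container name once both are attached to the same Compose-created network. It is a minimal, hypothetical example: the simplified request body is an assumption for illustration; the actual services exchange JSON-RPC messages, and the endpoint /extract/vim/fsml/token is only introduced later in the thesis.

package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	// Inside the Compose-created network the container name doubles as a
	// DNS name, so no IP address or external registry lookup is needed here.
	body := bytes.NewBufferString(`{"source": "state initial locked { }"}`)
	resp, err := http.Post("http://vimextractfsml:8080/extract/vim/fsml/token",
		"application/json", body)
	if err != nil {
		log.Fatalf("extract service not reachable: %v", err)
	}
	defer resp.Body.Close()

	tokens, _ := ioutil.ReadAll(resp.Body)
	fmt.Printf("token stream: %s\n", tokens)
}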

4.3 Service Discovery

In a service-oriented infrastructure with an accumulation of various endpoints, it is important to be aware of the current status and location of the services or nodes in the cluster. There are numerous applications trying to solve this kind of problem (e.g., Consul, Zookeeper, Eureka, etcd). In the following section a selection of these services is described.

4.3.1 Consul

Distributed service discovery, health checking and a distributed key/value store are the main features provided by Consul [13] to manage a cluster of services. Services are rarely independent from other services. A user registration service, for example, needs a database service to store the user information. As a result, the registration service needs the location of the database service to connect to it. Consul offers a RESTful interface to register and deregister services and health checks. The user registration service in the running example, which wants to connect to the database service, has to query the Consul server and receives the desired information about the database service, its IP address and port. If the database service is unhealthy, the Consul server will not return it. Healthy in this context means that the service is available and functional, ensured by health checks defined by the user. A health check can be a script which requests the service and checks the response for correctness. Furthermore, Consul offers a distributed key/value store for system-wide information and configuration. To run Consul in a cluster, an instance of a Consul agent has to be deployed on every single machine. Consul agents can run in server or client mode. A Consul agent in server mode can become the leader of the cluster, elected via the Raft protocol among all instances, and is the central part of the application that keeps all agents in sync. In the context of this thesis it is particularly important that Consul can be easily integrated as a service discovery backend into a Docker Swarm cluster out of the box with little configuration. In a Swarm cluster Consul is required so that the Swarm clients can join the cluster managed by the Swarm Manager.
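As a minimal sketch of how a service could announce itself, the following Go program registers a hypothetical vimextractfsml instance with the local Consul agent over its HTTP API and attaches an HTTP health check. The address, port and /health endpoint are illustrative assumptions; only the registration endpoint and the default agent port 8500 come from Consul itself.

package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
)

// registration mirrors the JSON body accepted by Consul's agent HTTP API.
type registration struct {
	ID      string `json:"ID"`
	Name    string `json:"Name"`
	Address string `json:"Address"`
	Port    int    `json:"Port"`
	Check   struct {
		HTTP     string `json:"HTTP"`
		Interval string `json:"Interval"`
	} `json:"Check"`
}

func main() {
	reg := registration{ID: "vimextractfsml-1", Name: "vimextractfsml",
		Address: "10.0.0.12", Port: 8080}
	// Health check: Consul polls this URL; a non-2xx answer marks the
	// instance as unhealthy, and it is then no longer returned to clients.
	reg.Check.HTTP = "http://10.0.0.12:8080/health"
	reg.Check.Interval = "10s"

	payload, _ := json.Marshal(reg)
	req, _ := http.NewRequest(http.MethodPut,
		"http://localhost:8500/v1/agent/service/register",
		bytes.NewReader(payload))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatalf("could not register with the local Consul agent: %v", err)
	}
	resp.Body.Close()
}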

4.3.2 etcd

Etcd [25] offers nearly the same advantages as Consul, but lacks the capability to check services for their health status. Instead, it is an in-memory, distributed key/value store for shared configuration. The distributed key/value store is capable of storing information about services, like IP address and port, or additional information about service endpoints. Etcd is the discovery solution for Kubernetes [31] by Google Inc. Etcd offers a JSON-over-HTTP interface to interact with the distributed key/value store, which provides the core capabilities. In addition, a service can subscribe to a specific key, and if a change to that key occurs an event can be triggered. Etcd is lightweight and, by storing the information in memory, fast in contrast to a database solution. Moreover, it is highly available, because multiple instances can be deployed. A Raft protocol solution is provided to synchronize the cluster of etcd instances and to elect its leader. In conclusion, service discovery is an important part of a microservice architecture. If the service discovery fails, the whole application is very likely to fail, because the services are not able to locate the services they need to work correctly.
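The following Go sketch, assuming the etcd v2 JSON-over-HTTP API on the default port 2379, stores the address of a hypothetical service under a key and then blocks on a watch until that key changes. The key name and the stored value are illustrative assumptions, not part of the DDE implementation.

package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
	"net/url"
	"strings"
)

const etcd = "http://localhost:2379/v2/keys"

func main() {
	// Store the location of a service under a well-known key.
	form := url.Values{"value": {"10.0.0.12:8080"}}
	req, _ := http.NewRequest(http.MethodPut, etcd+"/services/vimextractfsml",
		strings.NewReader(form.Encode()))
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	putResp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	putResp.Body.Close()

	// A watch: this GET blocks until the key changes, so a consumer can
	// react to the service moving to another host.
	resp, err := http.Get(etcd + "/services/vimextractfsml?wait=true")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	event, _ := ioutil.ReadAll(resp.Body)
	fmt.Printf("key changed: %s\n", event)
}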

4.4 FSML: Finite State Machine Language

The FSM language is a DSL introduced by Ralf Lämmel in Software Languages: An Introduction [32]. The main purpose of developing this simple language is to demonstrate different concepts and principles for DSLs. Furthermore, this example language is used to demonstrate techniques in the language implementation context; examples for interpretation, code generation and more are given. FSML itself represents a state machine in a textual way (Lst. 4.1), but cases for a visual representation are given as well. A state machine defined in FSML consists of distinct states (2) and one initial state (1). These states are described by a unique state id (e.g., locked, unlocked) and a series of transitions. Transitions (3) describe the direct way from one state to another or back to the same state. These transitions are defined by events (e.g., ticket, pass), actions (e.g., collect, alarm) and the resulting state (e.g., unlocked). To execute a state machine, the input is a sequence of events, which trigger transitions; if a transition is triggered, an associated action can be executed as well. During the execution many actions can be triggered; their combination represents the desired output. The combination of a triggered event and an optional action results in a target state, which also needs to be defined in the context. If the target state is not defined, the machine remains in the current state and finishes execution (4). The representation of a state machine by FSML is bound to a set of constraints in order to be consistent. The FSML definition contains four constraints [32]:

1. Every state-machine only has one initial state.

2. The target state of a transition must be explicitly declared.

3. All state and event identifiers need to be distinct.

4. All states need to be reachable from the initial state.

Listing 4.1 FSML example (the numbers in the source code correspond to the numbers in the text)

package main;

% (5)
% Example FSML source file

% (6)
import exception "github.com/dde/fsml/fsmlexample"

% (1)
state initial locked {

    % (3)
    ticket / collect -> unlocked;
    pass / alarm -> exception;
}

% (2)
state unlocked {

    % (4)
    pass;
}
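To make the execution semantics described above concrete, the following Go program is a minimal sketch of an interpreter for the machine in Listing 4.1. It ignores the side effects of actions, comments and imports, and the transition table is hard-coded rather than parsed; it only shows how a sequence of events yields the (event, state) tuples that form the output of a run.

package main

import "fmt"

// transition models one line of an FSML state body:
// event / action -> target, where action and target may be empty.
type transition struct {
	event, action, target string
}

// machine maps a state id to its transitions; an empty target means the
// machine remains in the current state, as described in the text above.
type machine map[string][]transition

func main() {
	// The turnstile machine of Listing 4.1 (the import is ignored here).
	fsm := machine{
		"locked":    {{"ticket", "collect", "unlocked"}, {"pass", "alarm", "exception"}},
		"unlocked":  {{"pass", "", ""}},
		"exception": {},
	}

	state := "locked" // the single initial state (constraint 1)
	for _, event := range []string{"ticket", "pass"} {
		for _, t := range fsm[state] {
			if t.event != event {
				continue
			}
			if t.target != "" {
				state = t.target
			}
			// The output of a run is the list of (event, state) tuples.
			fmt.Printf("(%s, %s)\n", event, state)
			break
		}
	}
}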

In this thesis the Finite State Machine Language is extended with comments (5) and imports (6) for demonstration purposes. Comments start with a "%" and end with a newline. Imports are single lines starting with an "import" keyword, followed by the state name and the path to the desired *.fsml file after the "from" keyword. By importing a *.fsml source file it is possible to use states that are defined in a different *.fsml source file.

Chapter 5

Requirements

An IDE supports different kinds of services and tools. In this messaging-based experiment the most common IDE features are implemented. This requirements chapter gives an overview of these core features. Furthermore, requirements for the infrastructure are described.


5.1 Architecture Requirements Specification (ARS)

Architecture requirements define how the architecture in general should behave.

ARS-1 Independency of services

Services must be independent, in regard to their integration into the architecture.

ARS-2 Expandability of application

The application must be expandable with minimal effort, so that new services and server instances can be added.

ARS-3 Communication protocol

The communication between services must be based on a well-known transport protocol (e.g., HTTP).

ARS-4 Message format

Communication between services must use a well-known data-exchange format (e.g., JSON, XML).

ARS-5 Observability

The infrastructure must be observable by logging of errors, events and metrics.

ARS-6 Support

The application must support the Vim text editor for FSML development, but support for other editors should be possible.

5.2 User Requirements Specification (URS)

User requirements are concerned with the end user experience.

URS-1 Syntax highlighting

The DDE must highlight finite state machine language source code.

Figure 5.1 Syntax highlighting example

Source code highlighting, as on the right in the given example (Fig. 5.1), emphasizes the structure of the code, making it more meaningful and intelligible in contrast to the non-highlighted code on the left. Requirements for syntax highlighting are:

1. Initial state ids must be highlighted explicitly.

2. All state and event identifiers must be highlighted.

3. All language-specific keywords, operators and brackets must be highlighted. These are: "package", "import", "from", "state", "initial", "{", "}", "->", "/"

By syntax highlighting, the time to complete a programming task can be significantly reduced [43].

URS-2 Formating

The DDE must format FSML source code.

Figure 5.2 Code formatting example

Formatting source code (Fig. 5.2) makes the textual representation more readable and reduces the risk of syntax or semantic errors, for instance in indentation-sensitive languages like Python [8].

URS-3 Autocompletion

The DDE must provide a way of autocompleting (Fig. 5.3) FSML source code.

Figure 5.3 Autocompletion example

Autocompletion predicts the words the user types next, to make writing source code more comfortable and faster.

URS-4 Outline

The DDE must provide outlining (Fig. 5.4) for FSML source code, with a combination of state names and the associated events.

Figure 5.4 Code outline example

By outlining source code, the structure of the whole program becomes more apparent.

URS-5 Execution

The input format is a comma-separated list of events; the resulting output will be an array of (event, state) tuples or an error message.

Figure 5.5 Input dialog example

Figure 5.6 Output dialog example

The DDE must provide an input/output dialog, so that input (Fig. 5.5) for the FSML execution can be accepted and the resulting output (Fig. 5.6) calculated.

URS-6 Semantical Validation

The DDE must provide a feature to validate the semantic correctness of FSML source code (Section 4.4). The DDE must show an appropriate error message (Fig. 5.7).

Figure 5.7 Error message example

Chapter 6

Design

The main goal of this thesis is to provide a development environment in which integration and disintegration of services and features is easily achieved, in contrast to ordinary IDE approaches. In this chapter an architectural overview of the whole system and each component is given. First, a general overview of the infrastructure is outlined and the interaction between the three main parts, the client application, the service node and the consul node, is described. Subsequently, the design of each component is described. Finally, an example request is illustrated to point out the interaction of the different services and components.


6.1 Architecture Overview

Figure 6.1 General architecture overview

With respect to Section 4.1, a service-oriented architecture is an architectural pattern to build an application composed of multiple applications. These applications or services collaborate to fulfill a certain task. A general overview of the infrastructure of the DDE implementation is shown in Figure 6.1. The client application (1) consists of an editor (e.g., Vim, Atom) plugin and an application handling the communication between the client's machine (e.g., a Unix computer) and the web server. The service node (2), with a running Docker Daemon in Swarm mode, holds the services deployed in Docker Containers. These Docker Containers make up the business logic and offer a RESTful web API. The Docker Daemon is running in Swarm mode to be able to join the Swarm cluster, by registering with a Consul instance on the consul node (3). The Swarm Manager, also deployed on the consul node (3), is thereby able to manage the Docker Containers on the service node (2), because the Consul instance is working as a backend for the Swarm Manager.

6.2 Detailed Architecture

The architecture is shown in Figure 6.2; next, a more detailed description of each component is given.

Figure 6.2 Architecture overview in detail

6.2.1 Client Application

Figure 6.3 Client application in detail

The client application (Fig. 6.3) consists of two parts. The vim-dde plugin executes the go-daemon and specifies the requests made by the go-daemon to the server. The vim-dde plugin provides information about the source code, for example the language and the file path. Furthermore, the plugin is capable of getting input from the user for execution and of displaying output. The information contributed by the vim-dde plugin is passed to the go-daemon via specified flags. By dividing the client application into a Vim editor plugin and an application which is capable of executing requests to a web API, building a new plugin for a different editor approach (e.g., Sublime, Atom) is simpler than building a fully functional plugin for each one. As a result, support for different editors is possible and the user can utilize different editors at the same time, or the one he or she prefers. The go-daemon is on the one hand an interface to the web API of the DDE, but on the other hand it is capable of manipulating the client's file system. The go-daemon needs to rewrite files for formatting, so-called pretty printing, and to read files so that their content can be passed to the server. All in all, the client application, made up of the go-daemon and the editor plugin, is the interface from the user machine to the DDE environment.
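A minimal sketch of what such a go-daemon invocation could look like is given below in Go. The flag names, the request body layout and the easyproxy address are assumptions for illustration only; the actual flags and payloads are defined by the vim-dde plugin and the service repositories.

package main

import (
	"bytes"
	"encoding/json"
	"flag"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	// Hypothetical flags set by the editor plugin.
	cmd := flag.String("cmd", "highlight", "requested feature")
	lang := flag.String("lang", "fsml", "source language")
	file := flag.String("file", "", "path of the file in the current buffer")
	proxy := flag.String("proxy", "http://localhost:8080", "easyproxy address")
	flag.Parse()

	src, err := ioutil.ReadFile(*file)
	if err != nil {
		log.Fatalf("cannot read source file: %v", err)
	}

	// JSON-RPC style envelope (method, payload, id) sent to the easyproxy.
	request, _ := json.Marshal(map[string]interface{}{
		"method":  *cmd,
		"payload": map[string]string{"language": *lang, "source": string(src)},
		"id":      1,
	})

	route := fmt.Sprintf("%s/%s/vim/%s", *proxy, *cmd, *lang)
	resp, err := http.Post(route, "application/json", bytes.NewReader(request))
	if err != nil {
		log.Fatalf("easyproxy not reachable: %v", err)
	}
	defer resp.Body.Close()

	// The raw response goes to stdout, where the editor plugin picks it up.
	out, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(out))
}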

6.2.2 Consul Server Node

Service discovery and health checking are the backbone of the infrastructure, because they make the system more observable and secure with regard to failure and error detection. In production it is recommended to deploy Consul on at least three nodes, because if one node fails the other nodes can replace the failed one immediately; for non-production use, two nodes or even one node would be sufficient. The consul server node (Fig. 6.4) holds the leading Consul agent in server mode and stores the information about the location and health status of each node. The consul node, with a Docker Daemon running in Swarm mode, contains the Swarm Manager, which uses the Consul server as a discovery service backend providing DNS lookups and a distributed key/value store. The Swarm Manager needs the Consul agent so that Docker Daemons running in Swarm mode can register themselves with the Swarm Manager. Another approach could be to define the different endpoints in the cluster directly, in which case Consul would not be needed. However, by using Consul this can be omitted and the integration of new nodes is effortless.

Figure 6.4 Consul node in detail

6.2.3 Service Node

Running each service in its own container is called Container as a Service (CaaS) [42]; running services in separate containers is advantageous in contrast to running all services in a single container. By providing each service in its own container, errors stay contained within that container, which makes the whole architecture more flexible, observable and failure resistant. Furthermore, a service can be deployed multiple times if duplicates of the service are needed to handle high amounts of requests. Besides adding more services, a service which fails or shows unexpected behavior can be removed quickly by stopping its container. After stopping the container, a new container with the revised service can be deployed. The advantage is that not the whole application needs to be redeployed, as would be the case for an application in a single container, but just the erroneous part of the application. Moreover, new services can be added in the same way, so the requirement of easy integration and disintegration of new services is accomplished.

Figure 6.5 Service node in detail

The service node (Fig. 6.5) is an Ubuntu server running a Docker Daemon in Swarm mode containing various containers. The Docker Daemon is running in Swarm mode to join the previously defined Swarm cluster managed by the Swarm Manager running on the consul node. The business logic is spread over multiple containers with an underlying container network. This container network is not directly accessible from the outside, only via the easyproxy service. The easyproxy container offers a RESTful API and accepts HTTP requests by exposing port 8080 to the host. As a result, the container is reachable from the outside (i.e., the internet), forwards each request to the required services on the container network and replies with the resulting product to the requesting client. The services making up the business logic deliver products for parsing, extracting, formatting, executing, syntax checking, outlining and syntax highlighting of FSML code. Services can collaborate by requesting other services via DNS lookups, because every container has its own IP address in the container network. Every feature offered by the DDE is defined as a service. A service in this context is an application, in a Docker container, offering at least one endpoint. These endpoints and services follow a specific naming convention. Each service is named after the supported editor, its main duty and the supported language. Following this convention, a service offering highlighting for Golang source code edited in Vim would be vimhighlightinggolang. If a service offered highlighting information for more than one language or editor, this naming convention would fail, but keeping each service as small as possible and respecting its boundaries prevents this situation. Routes are defined similarly, by supported task name, editor and language; in the Golang highlighting example this would be /highlight/vim/golang. At this point, every service is defined and can be deployed. This task can be expedited by using Docker Compose. Listing 6.1 shows a part of the compose file for the containers running on the service node.

Listing 6.1 docker-compose.yml (excerpt)

version: '2'

services:
  vimdependencyfsml:
    build: ./vimdependencyfsml
    container_name: vimdependencyfsml
    depends_on:
      - vimextractfsml

  vimextractfsml:
    build: ./vimextractfsml
    container_name: vimextractfsml

  vimhighlightfsml:
    build: ./vimhighlightfsml
    container_name: vimhighlightfsml
    depends_on:
      - vimextractfsml

  easyproxy:
    build: ./easyproxy
    container_name: easyproxy
    ports: ["8080:8080"]

  ...
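To illustrate the route convention and the collaboration over the container network, the following Go sketch shows a hypothetical, stripped-down highlighting service. The hard-coded answer stands in for the real logic, which would forward the request to vimextractfsml and classify the returned tokens; the response field names are assumptions, not the actual wire format.

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	// The route follows the <task>/<editor>/<language> convention.
	http.HandleFunc("/highlight/vim/fsml", func(w http.ResponseWriter, r *http.Request) {
		// A real implementation would call the vimextractfsml container
		// (reachable by its name on the container network), classify the
		// token stream and return positions plus Vim highlight groups.
		json.NewEncoder(w).Encode(map[string]interface{}{
			"result": []map[string]interface{}{
				{"token": "state", "line": 1, "start": 0, "stop": 4, "group": "Keyword"},
			},
			"id": 1,
		})
	})

	// Every service listens on port 8080, the port the easyproxy forwards to.
	log.Fatal(http.ListenAndServe(":8080", nil))
}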

To explain the collaboration between the client, easyproxy and services in more detail, an example of a request for syntax highlighting is given in the next section.

6.3 Highlighting Request Example

Syntax highlighting is useful in the context of an IDE, because it makes the source code more readable; furthermore, syntactic errors are exposed directly. Figure 5.1 shows the difference between a plain and a highlighted FSML source code snippet. The structure of the highlighted source code is more perceptible than in the non-highlighted one. Even a reader who is not familiar with FSML can notice the significant parts of the code, because keywords (e.g., "state", "initial") or operators (e.g., "->", "/") are emphasized by a different color. The following paragraphs describe how syntax highlighting is achieved; Figure 6.6 is the corresponding visualization. Syntax highlighting is triggered by executing the ":Highlight" command in the Vim editor. The Vim editor plugin executes the go-daemon, which runs the highlighting request defined by flags set by the Vim plugin. The go-daemon reads the source file specified by the plugin and builds a JSON-RPC representation of the command to request the easyproxy. The easyproxy is aware of the services in the infrastructure and forwards the JSON-RPC request to the vimhighlightfsml service. If the service is not available, the easyproxy responds with an associated error message. The vimhighlightfsml service is aware of the FSML keywords, but is not capable of reading or parsing any plain source code. To extract information about the source code, like the precise position of a keyword defined by line number, start and stop index in the source code, the vimhighlightfsml service requests the vimextractfsml service. This service has multiple endpoints for different purposes, described in the next chapter. The vimhighlightfsml service requests the endpoint /extract/vim/fsml/token to retrieve the token stream of the FSML source file. The response contains information about each token, like the identifier string and the position defined by line number, start index and stop index in the source code. After reading the response, the vimhighlightfsml service determines the positions of the tokens which need to be highlighted. After discovering the key tokens and the associated positions, the vimhighlightfsml service responds with a JSON message to the go-daemon over the easyproxy. Finally, the go-daemon responds to the Vim plugin, which unmarshals the response and highlights each token specified in it.

Figure 6.6 Highlight request example

Chapter 7

Implementation

The implementation chapter extends the design chapter and gives a deeper insight into each service implementation and the requirement it fulfills. In the following, a short introduction to each service and a reference to the corresponding GitHub repository are given; the repository contains a detailed description of the functionality and the structure of input and output data. All requests are encoded as JSON-RPC and transported over HTTP; the detailed structure is shown in the assigned GitHub repositories. Furthermore, it is explained which requirement is fulfilled by each service.
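As an illustration of the envelope, the following Go sketch models the JSON-RPC style structure (method, payload, id) used between the go-daemon and the services. The concrete field contents and the exact payload shapes per service are assumptions; they differ between services and are documented in the repositories.

package main

import (
	"encoding/json"
	"fmt"
)

// Request and Response sketch the JSON-RPC style envelope exchanged over HTTP.
type Request struct {
	Method  string          `json:"method"`
	Payload json.RawMessage `json:"payload"`
	ID      int             `json:"id"`
}

type Response struct {
	Result json.RawMessage `json:"result,omitempty"`
	Error  string          `json:"error,omitempty"`
	ID     int             `json:"id"`
}

func main() {
	req := Request{
		Method:  "highlight",
		Payload: json.RawMessage(`{"source": "state initial locked { }"}`),
		ID:      1,
	}
	b, _ := json.Marshal(req)
	fmt.Println(string(b)) // prints the marshaled envelope
}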

Extraction Service (vimextractfsml)

A central task of processing source code is lexical analysis, parsing and extracting specific information out of textual source files. The vimextractfsml service is capable of scanning FSML source files and responding with the associated token stream, comprising the textual representation of each token and its position in the source file, defined by line number and line index. The service also provides a JSON-encoded AST representation of the FSML source file.
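A minimal Go sketch of one element of such a token stream is given below; the field names are assumptions derived from the description above (textual representation plus position), not the actual wire format of the repository.

package main

import (
	"encoding/json"
	"fmt"
)

// Token sketches one element of the token stream returned by the
// vimextractfsml service.
type Token struct {
	Text  string `json:"text"`  // textual representation, e.g. "state"
	Line  int    `json:"line"`  // line number in the source file
	Start int    `json:"start"` // first column of the token
	Stop  int    `json:"stop"`  // last column of the token
}

func main() {
	raw := `[{"text":"state","line":1,"start":0,"stop":4}]`
	var stream []Token
	if err := json.Unmarshal([]byte(raw), &stream); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", stream[0]) // prints {Text:state Line:1 Start:0 Stop:4}
}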

Highlighting Service (vimhighlightfsml)

This service fulfills requirement URS-1 by requesting the vimextractfsml service at /extract/vim/fsml/token to receive the token stream. The token stream response includes the string representation and the position in the source file of each token. The vimhighlightfsml service is aware of the keywords and special characters of the FSML specification and of the highlighting groups of Vim. This results in a classification of each token, which is finally returned in the response.

Formatting Service (vimformatfsml)

Formatting fulfills requirement URS-2 by requesting the vimextractfsml service at /extract/fsml. The response is the AST representation of the source file, which is then formatted through a template transformation and returned to the client.

Autocompletion Service (vimautocompletefsml)

The most common tokens (e.g., state names, events) are extracted by requesting the vimextractfsml service. The response is transformed into a representation processable by the vim-dde plugin. The tokens are ordered by their distance from the cursor position in the current Vim buffer. The vimautocompletefsml service fulfills requirement URS-3.

Outline Service (vimoutlinefsml)

Outlining should give an overview of the current source file in the buffer and the existing import files. By extracting the desired information with the corresponding vimextractfsml service endpoint, an overview can be generated and requirement URS-4 fulfilled. The information is shown in a temporarily created Vim buffer.

Syntax Validation Service (vimsyntaxfsml)

To identify whether or not an FSML file is syntactically correct, the file needs to be parsed. Parsing is done by the vimextractfsml service. A syntactically correct file generates no errors, in contrast to an incorrect file. The error message contains information about the first error appearing in the file, because if parsing proceeds, the following errors are directly dependent on the first one. The defective part in the source file is highlighted in the Vim buffer via the information given by the vimsyntaxfsml service, and requirement URS-6 is fulfilled.

Execution Service (vimrunfsml)

Running or executing an FSML file mainly depends on two conditions. On the one hand, the file needs to be correct with regard to the FSML specification. On the other hand, all imported files need to be correct and present. Executing the run command in the Vim editor requests the vimrunfsml service. The vimrunfsml service extracts the import information and responds to the client, in this case the go-daemon, with a list of the desired source files. The go-daemon gathers these files, if they exist, and requests the vimrunfsml service again with the additional files. The additional files are again checked for imports. This process continues until all files are available and the vimrunfsml service is able to execute the whole program. As a result, a list of (event, state) tuples or, in case of incorrectness, an error message is produced as a response. The vimrunfsml service thereby fulfills requirement URS-5.
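The import resolution described above can be sketched as a small client-side loop. The following Go program is an illustrative assumption only: the route /run/vim/fsml, the field names missing, trace and error, and the file main.fsml are hypothetical; the actual protocol is defined by the vimrunfsml repository.

package main

import (
	"bytes"
	"encoding/json"
	"io/ioutil"
	"log"
	"net/http"
)

// runRequest carries the already gathered files, keyed by path, plus the
// event sequence the user entered for execution.
type runRequest struct {
	Files map[string]string `json:"files"`
	Input []string          `json:"input"`
}

// runResponse either lists missing import paths or, once everything is
// present, the (event, state) tuples of the run or an error message.
type runResponse struct {
	Missing []string    `json:"missing,omitempty"`
	Trace   [][2]string `json:"trace,omitempty"`
	Error   string      `json:"error,omitempty"`
}

func main() {
	req := runRequest{Files: map[string]string{}, Input: []string{"ticket", "pass"}}
	src, _ := ioutil.ReadFile("main.fsml")
	req.Files["main.fsml"] = string(src)

	for {
		body, _ := json.Marshal(req)
		resp, err := http.Post("http://localhost:8080/run/vim/fsml",
			"application/json", bytes.NewReader(body))
		if err != nil {
			log.Fatal(err)
		}
		var out runResponse
		if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
			log.Fatal(err)
		}
		resp.Body.Close()

		if len(out.Missing) == 0 {
			log.Printf("trace: %v error: %s", out.Trace, out.Error)
			return
		}
		// Gather the imported files the service asked for and try again.
		for _, path := range out.Missing {
			content, err := ioutil.ReadFile(path)
			if err != nil {
				log.Fatalf("import %s not found on the client: %v", path, err)
			}
			req.Files[path] = string(content)
		}
	}
}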

Client application

The combination of the go-daemon and the vim-dde plugin makes up the client application. The main purpose of both is described in Section 6.2.1 to create a straightforward introduction to the design chapter. The communication between the Vim plugin and the go-daemon is implemented over the defined flags and standard in-/output.

Chapter 8

Requirement Analysis

In this chapter, the fulfillment of the requirements (Chapter 5) is discussed. Moreover, advantages and disadvantages of the described system are given.

ARS-1 Independency of services

Every service can be added to the infrastructure without the need to change any other service directly. By deploying services with Docker Swarm, services are able to find other services by DNS lookups. The downside of this service discovery solution is that services need to listen on a specific port (e.g., 8080), because DNS lookups do not contain any port information. The mechanism only works since every container gets its own IP address. A solution to this kind of problem is to register each service directly with a service discovery solution, including the needed port and IP information. In the described implementation, such a discovery service is already part of the infrastructure, but there is no need for specific port information, because the whole communication is carried out via HTTP on port 8080, which is the alternative default HTTP port [9].

ARS-2 Expandability of the application

Docker Swarm provides the tooling to add more services to the infrastructure. A server with a Docker Daemon running in Swarm mode can join the Swarm cluster by registering with the Consul server. The drawback is that services need to be deployed in Docker containers and service names need to be distinct.


ARS-3 Communication protocol

Services in the provided infrastructure rely on the HTTP protocol.

ARS-4 Message format

Messages between services are JSON-encoded remote procedure calls (JSON-RPC). JSON-RPCs are structured into a method, a payload and an id. This structure offers the possibility to serve more than one method at one service endpoint (e.g., the vimrunfsml service).

ARS-5 Observability

Services write their logging information to standard output, which is collected by Docker Swarm. The client application writes logging information to a .godaemon.log file in the current working directory.

ARS-6 Support

By splitting the client application into a plugin and the go-daemon, plug-ins can be implemented for other editors simply by calling the go-daemon with the defined flags and communicating over standard in-/output. In the implementation an example Vim plugin is created.

User Requirements Specification

The URS requirements are discussed in the Implementation chapter (Chapter 7).

Chapter 9

Concluding remarks

The experimental Disintegrated Development Environment for FSML shows advantages and disadvantages in contrast to a common IDE. In this concluding chapter the experiment is summarized, and limitations, contributions and some advice on where this research should be taken next are pointed out.

9.1 Summary

In summary, the messaging-based development environment is split into a cloud or internet based part and a user application. A Vim plugin and a script form the client application, connecting the user to the second, distributed part of the infrastructure. The second part of the infrastructure is a Docker Swarm managed, containerized, service-oriented application accessible through a reverse proxy. Services are containerized, observable and communicate over the widely accepted HTTP protocol. Integration and disintegration of new services is handled by Docker Swarm. The experimental implementation delivers the main parts of a development environment for FSML.

9.2 Analysis

This work provides a way in which new services and features can be added to a cloud-based IDE. Containerized services are independent in terms of their deployment and failure. New services can be added without redeploying the whole application. A service behaving incorrectly can be removed and a corrected version can be added easily. Furthermore, a whole new set of services or even nodes can be added by joining the Docker Swarm cluster. The basis for communication between services is the request-response HTTP protocol, which keeps the development of new features as simple as possible. HTTP is widely accepted and nearly all high-level programming languages support this protocol. As a result, new features can be implemented in a variety of languages and do not depend on an underlying plugin API or messaging middleware. These characteristics of independence are supported by Docker. Docker containers remove the subordination to a specific platform or operating system. The result is a flexible, scalable and reliable infrastructure, and all requirements on the infrastructure (Section 5.1) are fulfilled.

Decentralizing all features of an IDE and replacing the plugin architecture with a messaging-based approach causes some difficulties. Tasks like syntax highlighting or formatting are fairly simple, in contrast to running whole programs that are distributed over a variety of files in a stateless environment. One possible solution is to move the whole development environment to the cloud. Storing the source files online offers advantages in terms of performance, but an internet connection would be constantly required. This kind of approach is pursued by the Eclipse Che project [22]. Furthermore, the go-daemon, as part of the client application, raises major security issues, because it is capable of changing and reading files on the client's machine. An attacker could change an import statement in a source file and read other, unrelated files on the system. To prevent this kind of attack, the user has to set the DDEPATH environment variable, which restricts the working directory; a sketch of such a check is given at the end of this section. Another security concern is the unencrypted HTTP connection between the client and the server, which can be solved by implementing an HTTPS connection between the client and the Web server.

In the end, one may raise the question of why the implementation is based on Docker and microservices. Docker and microservices can be seen as more of a management decision than a technical one. The plugins which make up a modern IDE are developed by many different developers, who all favor a different programming language and work in teams on one plugin. By supplying plugins as Docker containers, developers are free to choose a technology stack. Docker is concerned with running applications with less configuration than deploying a whole application at once; this paves the way for continuous integration, through which a container gets deployed automatically after, for example, the tests succeed. Keeping an eye on the bigger picture, this means that the teams who provide plugins can work independently of each other and the DDE can grow over time.
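A check of this kind could, for instance, look like the following Go sketch; the helper withinWorkdir is hypothetical and not taken from the go-daemon sources.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // withinWorkdir reports whether path lies inside the directory given by
    // the DDEPATH environment variable (hypothetical helper).
    func withinWorkdir(path string) (bool, error) {
        root := os.Getenv("DDEPATH")
        if root == "" {
            return false, fmt.Errorf("DDEPATH is not set")
        }
        absRoot, err := filepath.Abs(root)
        if err != nil {
            return false, err
        }
        absPath, err := filepath.Abs(path)
        if err != nil {
            return false, err
        }
        return strings.HasPrefix(absPath, absRoot+string(os.PathSeparator)), nil
    }

    func main() {
        ok, err := withinWorkdir("project/main.fsml")
        fmt.Println(ok, err)
    }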

9.3 Future work

The experimental implementation only offers support for FSML; thus, services need to be added to support more complex languages such as Perl or Go. Moreover, in this kind of infrastructure it may even be possible to combine different approaches to development environments: for example, an IDE can be merged with a language workbench, or an application like Kite can refer to services of the DDE. Furthermore, services unrelated to writing source code, such as management systems or a chat application, can be provided, but then an HTTPS connection is mandatory.

Bibliography

[1] Sept. 2016. URL: https://www.jetbrains.com/idea/.

[2] Sept. 2016. URL: https://www.eclipse.org/ide/.

[3] Sept. 2016. URL: https://www.jetbrains.com/youtrack/.

[4] Sept. 2016. URL: https://www.jetbrains.com/help/idea/2016.2/using-git-integration.html.

[5] Sept. 2016. URL: https://plugins.jetbrains.com/plugin/7?pr=idea.

[6] Sept. 2016. URL: https://eclipse.org/articles/Article-Plug-in-architecture/plugin_architecture.html.

[7] Sept. 2016. URL: http://martinfowler.com/articles/languageWorkbench.html.

[8] Sept. 2016. URL: https://www.python.org/dev/peps/pep-0008/#id17.

[9] Sept. 2016. URL: https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.txt.

[10] Amazon Trusted Container Registry. Sept. 2016. URL: https://aws.amazon.com/ecr/.

[11] Atom text editor. Sept. 2016. URL: https://atom.io/.


[12] Martin Bravenboer et al. “Program Transformation with Scoped Dynamic Rewrite Rules”. In: Fundam. Inf. 69.1-2 (July 2005), pp. 123–178. ISSN: 0169-2968. URL: http://dl.acm.org/citation.cfm?id=1227247.1227253.

[13] Consul Project Site. Sept. 2016. URL: https://www.consul.io/.

[14] Sergey Dmitriev. “Language oriented programming: The next programming paradigm”. In: JetBrains onBoard 1.2 (2004), pp. 1–13.

[15] Docker documentation. Sept. 2016. URL: https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/.

[16] Docker Inc. Sept. 2016. URL: https://www.docker.com/.

[17] Docker Inc. Sept. 2016. URL: https://docs.docker.com/compose/.

[18] Docker Trusted Container Registry. Sept. 2016. URL: https://docs.docker.com/docker-trusted-registry/.

[19] Dockerfile reference guide. Sept. 2016. URL: https://docs.docker.com/engine/reference/builder/.

[20] DockerHub. Sept. 2016. URL: https://hub.docker.com/.

[21] Rajdeep Dua, A Reddy Raja, and Dharmesh Kakadia. “Virtualization vs containerization to support paas”. In: Cloud Engineering (IC2E), 2014 IEEE International Conference on. IEEE. 2014, pp. 610–614.

[22] Eclipse-Che Project. Sept. 2016. URL: http://www.eclipse.org/che/.

[23] Elements of a Language Workbench. Sept. 2016. URL: http://martinfowler.com/articles/languageWorkbench.html#ElementsOfALanguageWorkbench.

[24] Sebastian Erdweg et al. “The State of the Art in Language Workbenches”. In: Software Language Engineering: 6th International Conference, SLE 2013, Indianapolis, IN, USA, October 26-28, 2013. Proceedings. Ed. by Martin Erwig, Richard F. Paige, and Eric Van Wyk. Cham: Springer International Publishing, 2013, p. 201. ISBN: 978-3-319-02654-1. DOI: 10.1007/978-3-319-02654-1_11. URL: http://dx.doi.org/10.1007/978-3-319-02654-1_11.

[25] etcd. Sept. 2016. URL: https://github.com/coreos/etcd.

[26] Moritz Eysholdt and Heiko Behrens. “Xtext: implement your language faster than the quick and dirty way”. In: Proceedings of the ACM international conference companion on Object oriented programming systems languages and applications companion. ACM. 2010, pp. 307–309.

[27] John Fink. “Docker: a software as a service, operating system-level virtualization framework”. In: Code4Lib Journal 25 (2014).

[28] Felienne Hermans, Martin Pinzger, and Arie van Deursen. “Domain-Specific Languages in Practice: A User Study on the Success Factors”. In: Model Driven Engineering Languages and Systems: 12th International Conference, MODELS 2009, Denver, CO, USA, October 4-9, 2009. Proceedings. Ed. by Andy Schürr and Bran Selic. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009, pp. 423–437. ISBN: 978-3-642-04425-0. DOI: 10.1007/978-3-642-04425-0_33. URL: http://dx.doi.org/10.1007/978-3-642-04425-0_33.

[29] Lennart C.L. Kats, Rob Vermaas, and Eelco Visser. “Testing Domain-specific Languages”. In: Proceedings of the ACM International Conference Companion on Object Oriented Programming Systems Languages and Applications Companion. OOPSLA ’11. Portland, Oregon, USA: ACM, 2011, pp. 25–26. ISBN: 978-1-4503-0942-4. DOI: 10.1145/2048147.2048160. URL: http://doi.acm.org/10.1145/2048147.2048160.

[30] Lennart C.L. Kats and Eelco Visser. “The Spoofax Language Workbench”. In: Proceedings of the ACM International Conference Companion on Object Oriented Programming Systems Languages and Applications Companion. OOPSLA ’10. Reno/Tahoe, Nevada, USA: ACM, 2010, pp. 237–238. ISBN: 978-1-4503-0240-1. DOI: 10.1145/1869542.1869592. URL: http://doi.acm.org/10.1145/1869542.1869592.

[31] Kubernetes container cluster manager. Sept. 2016. URL: http://kubernetes.io/.

[32] Ralf Lämmel. Software Languages—Foundations and Engineering. In preparation. Springer-Verlag, 2017.

[33] Paul Klint, Mark van den Brand, and Jurgen Vinju. SDF3 reference manual. Sept. 2016. URL: http://www.metaborg.org/en/latest/source/langdev/meta/lang/sdf3.html.

[34] M. D. McIlroy, E. N. Pinson, and B. A. Tague. Unix Time-Sharing System Foreword. Tech. rep. Bell Laboratories, Mar. 1978, pp. 1902–1903. URL: http://www.alcatel-lucent.com/bstj/vol57-1978/articles/bstj57-6-1899.pdf.

[35] NaBL reference manual. Sept. 2016. URL: http://www.metaborg.org/en/latest/source/langdev/meta/lang/nabl.html.

[36] NeoVim implementation. Sept. 2016. URL: https://neovim.io/.

[37] Sam Newman. Building Microservices. 1st. O’Reilly Media, Inc., 2015. ISBN: 978-1-4919-5030-2.

[38] Official Redis Dockerhub repository. Sept. 2016. URL: https://hub.docker.com/_/redis/.

[39] Docker documentation. Sept. 2016. URL: https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/#/container-and-layers.

[40] Official Ubuntu Dockerhub repository. Sept. 2016. URL: https://hub.docker.com/_/ubuntu/.

[41] Vaclav Pech, Alex Shatalin, and Markus Voelter. “JetBrains MPS As a Tool for Extending Java”. In: Proceedings of the 2013 International Conference on Principles and Practices of Programming on the Java Platform: Virtual Machines, Languages, and Tools. PPPJ ’13. Stuttgart, Germany: ACM, 2013, pp. 165–168. ISBN: 978-1-4503-2111-2. DOI: 10.1145/2500828.2500846. URL: http://doi.acm.org/10.1145/2500828.2500846.

[42] Sareh Fotuhi Piraghaj et al. “Efficient Virtual Machine Sizing for Hosting Containers as a Service (SERVICES 2015)”. In: 2015 IEEE World Congress on Services, SERVICES 2015, New York City, NY, USA, June 27 - July 2, 2015. 2015, pp. 31–38. DOI: 10.1109/SERVICES.2015.14. URL: http://dx.doi.org/10.1109/SERVICES.2015.14.

[43] Advait Sarkar. “The impact of syntax colouring on program comprehension”. In: Proceedings of the 26th Annual Conference of the Psychology of Programming Interest Group (PPIG 2015). 2015, pp. 49–58.

[44] Anthony M. Sloane et al. “Monto: A Disintegrated Development Environment”. In: Software Language Engineering: 7th International Conference, SLE 2014, Västerås, Sweden, September 15-16, 2014. Proceedings. Ed. by Benoît Combemale et al. Cham: Springer International Publishing, 2014, pp. 211–220. ISBN: 978-3-319-11245-9. DOI: 10.1007/978-3-319-11245-9_12. URL: http://dx.doi.org/10.1007/978-3-319-11245-9_12.

[45] Tijs van der Storm. The Rascal language workbench. 2011.

[46] Sublime text editor. Sept. 2016. URL: https://www.sublimetext.com/.

[47] Walid Taha. “Domain-specific languages”. In: Proc. Intl Conf. Computer Engineering and Systems (ICCES). Springer. 2008.

[48] Vlad A. Vergu. DynSem is a domain specific language for the concise specification of dynamic semantics of programming languages. Sept. 2016. URL: http://www.metaborg.org/en/latest/source/langdev/meta/lang/dynsem/index.html.

[49] Vim text editor. Sept. 2016. URL: http://www.vim.org/.

[50] Eelco Visser et al. “A Language Designer’s Workbench: A One-Stop-Shop for Implementation and Verification of Language Designs”. In: Onward! 2014, Proc. SPLASH 2014. ACM, 2014, pp. 95–111.

[51] Markus Voelter and Vaclav Pech. “Language Modularity with the MPS Language Workbench”. In: Proceedings of the 34th International Conference on Software Engineering. ICSE ’12. Zurich, Switzerland: IEEE Press, 2012, pp. 1449–1450. ISBN: 978-1-4673-1067-3. URL: http://dl.acm.org/citation.cfm?id=2337223.2337447.

[52] M. P. Ward. “Language Oriented Programming”. In: Software - Concepts and Tools 15 (1995), pp. 147–161.