Masaryk University Faculty of Informatics

Analysis of Decision Model and Notation tooling in the Visual Studio Code ecosystem

Master’s Thesis

Bc. Marcel Mráz

Brno, Spring 2021


This is where a copy of the official signed thesis assignment and a copy of the Statement of an Author is located in the printed version of the document.

Declaration

Hereby I declare that this paper is my original authorial work, which I have worked out on my own. All sources, references, and literature used or excerpted during elaboration of this work are properly cited and listed in complete reference to the due source.

Bc. Marcel Mráz

Advisor: Bruno Rossi, PhD

Acknowledgements

I would like to express my gratitude to my thesis advisor Bruno Rossi, PhD, for valuable guidance during consultations. Also, I would like to thank Adacta for the opportunity to combine study and work life. Additionally, I would like to thank my family and close ones for their support.

Abstract

Decision Model and Notation (DMN) is a standard specifying a visual notation and a lower-level Friendly Enough Expression Language (FEEL) for the definition of interchangeable and executable decision models. The definition and maintenance of such models is continuously being improved with the emerging tooling ecosystem built around DMN graphical editors. One of the requested features inside such editors is the ability to provide language features, such as DMN model validation and FEEL completion, on each content change and request, respectively. Simultaneously, the Language Server Protocol (LSP) provides a convenient way to provide such language features for text-based languages and specifications across a number of different development tools and text-based editors, such as Visual Studio Code. In addition to text-based editors, Visual Studio Code offers a way to create custom graphical editors, allowing a DMN graphical editor to be embedded inside its user interface. This thesis reviews the concepts of DMN and its validations, LSP and the related Visual Studio Code API for graphical editors, and researches the related questions for providing DMN language features through the use of LSP inside the embedded graphical editors. As a result, an architecture using LSP for providing language features inside embedded DMN graphical editors is proposed. The proposed solution architecture is specifically designed to address related domain problems in the context of commercial, enterprise-level insurance software.

Keywords

DMN, FEEL, LSP, VS Code, Extension API, Custom Editor API, Web- view API, Node.js, Software Architecture, Static Code Analysis, Adacta, AdInsure Studio

Contents

1 Introduction
  1.1 Problem domain
  1.2 Thesis statement
  1.3 Research questions
  1.4 Thesis structure

2 Decision Model and Notation
  2.1 Overview
  2.2 Use cases
  2.3 Outline of specification
    2.3.1 FEEL
    2.3.2 DRD and its elements
    2.3.3 Boxed expression types
    2.3.4 Conformance level
  2.4 Related standards
    2.4.1 CMMN
    2.4.2 BPMN
    2.4.3 PMML

3 DMN model validation
  3.1 Market overview
    3.1.1 Vendors and involved parties
  3.2 Validation breakdown
    3.2.1 Validation against the schema
    3.2.2 Validation of DRD and its elements
    3.2.3 Validation during compilation process
    3.2.4 Validation of FEEL expressions
    3.2.5 Validation of a decision table
    3.2.6 Dynamic validation

4 Language Server Protocol
  4.1 Overview
  4.2 Architecture
    4.2.1 Client, server and their capabilities
    4.2.2 JSON-RPC
    4.2.3 Communication flow
    4.2.4 Limitations
    4.2.5 Conclusion

5 Visual Studio Code
  5.1 Overview
  5.2 Electron.js
  5.3 Extension API
    5.3.1 TextDocument
    5.3.2 …
    5.3.3 Webview API
    5.3.4 Custom Editor API
    5.3.5 Language extensions
    5.3.6 Conclusion

6 Proposed solution
  6.1 Requirements
    6.1.1 Functional requirements
    6.1.2 Quality attributes
    6.1.3 Technical constraints
    6.1.4 Business constraints
  6.2 Architecture
    6.2.1 Overview
    6.2.2 LSP language server
    6.2.3 DMN analysis
    6.2.4 DMN custom text editor
    6.2.5 Validation process example
    6.2.6 Possible integrations

7 Conclusion
  7.1 Research questions evaluation
  7.2 Future work

A List of Abbreviations

Bibliography

List of Figures

1.1 Hierarchy of explained AdInsure concepts
1.2 Visualisation of references between the research questions
1.3 Visualisation of the thesis outline
2.1 DRD and its elements
2.2 DMN conformance levels hierarchy
2.3 Linking business automation with machine learning
4.1 Interactions between a user, LSP client and LSP server
5.1 Electron.js application architecture
5.2 Class diagram showing conceptual relations between the mentioned concepts
5.3 SDK for development of VS Code’s LSP-based server
6.1 Dependencies between the separate packages
6.2 Main conceptual relations inside and outside of the language server
6.3 Main conceptual infrastructure of DMN analysis package and its dependencies
6.4 DMN analysis package infrastructure and its dependencies
6.5 Integration architecture of utilizing multiple language servers for multiple custom text editors

1 Introduction

This chapter focuses on explaining the problem domain and related context. Based on the context, research questions are defined, and the overall structure of the thesis is explained and visualised.

1.1 Problem domain

The problem is defined in the context of two systems - AdInsure [1] and AdInsure Studio [2]. AdInsure and AdInsure Studio are software products developed by Adacta, a Slovenian-based company with offices spread across major European cities, starting with headquarters in Ljubljana and continuing with Maribor, Belgrade, Zagreb, Moscow and Brno. Adacta is, with more than 30 years of experience, a software provider for the insurance industry.

AdInsure is an insurance platform, and its newest version is designed, apart from other things, with configurability in mind, allowing configuration of all of its infrastructural and business elements. AdInsure Studio is what makes the configuration of AdInsure and all of its elements quick, convenient and business-user friendly. AdInsure Studio supports the entire configuration lifecycle of insurance products and business processes mainly through its Visual Studio Code extension client¹, supporting multiple authoring modes and focusing on a broad audience reaching from developers and testers to business analysts and actuaries.

There are two AdInsure Studio extension authoring modes relevant to the problem²:

• Basic mode (default): A mode focused on business users and configuration of products and processes using custom-made graphical user interface-centered (GUI-centered) editors and business-user-friendly explorers.

• Expert mode: A mode focused on AdInsure domain experts, such as configurators and developers, allowing the configuration of products and processes using plain built-in text editors.

Many configuration concepts³ defined by the AdInsure platform are supported by both of these modes, with one such concept being a business rule configuration. One way to configure a business rule in AdInsure, using the AdInsure Studio extension, is by using the Decision Model and Notation (DMN), which is a higher-level visual language focused on the definition of business decisions. In terms of AdInsure Studio modes, it means either to edit the DMN file using a GUI-centered editor (a custom DMN editor or, in Adacta’s Ubiquitous Language⁴, a so-called Rule editor) or a built-in text editor (Monaco editor⁵).

¹ The other AdInsure Studio client is a Command Line Interface (CLI) for automated purposes, such as the use during continuous integration and continuous delivery (CI/CD).
² The third mode is called "Accelerated mode" and is used for rapid configuration of products and processes using a scaffolding technique.
³ All of such concepts are defined as textual files, structured mostly using the following formats - .js, ., xml, csv.

Figure 1.1: Hierarchy of explained AdInsure concepts

⁴ Ubiquitous Language is a term for specifying a common language between developers and other stakeholders, used by Evans in Domain-Driven Design [3].
⁵ Monaco editor is a standalone text editor that powers VS Code [4].


The problem is that neither the custom Rule editor nor AdInsure Studio provides any DMN model validations. From the configuration perspective, it implies that it is possible to create an invalid DMN model (thus an invalid business rule), which might cause problems during its evaluation in the AdInsure run-time environment. Issues detected in the run-time environment can be very costly and prolong the development time, testing time, and overall time-to-market for the given product, which are the exact attributes that DMN and AdInsure Studio are trying to improve. As it is possible to create an invalid DMN model and pass it to the run-time environment, it is necessary to detect the potential problems as soon as possible. In terms of AdInsure Studio, it means detecting such problems within the VS Code extension during the DMN model authoring phase in both related modes (Basic and Expert).⁶

Moreover, AdInsure Studio includes the concept of validations for other configuration concepts specified by the AdInsure platform. Problem detection for the DMN model should thus be performed in a similar validation manner and be integrable with the already existing validations. However, as these validations are performed within the same process that the AdInsure Studio extension runs in, they start to bring performance issues.

Simultaneously, many VS Code extensions have adopted a standard focused on providing language features for different types of text-based languages. This standard is called the Language Server Protocol (LSP), and it is a standard protocol for unifying and providing language features (such as diagnostics, completion and others) across different text editors and Integrated Development Environments (IDEs). Such language features help with problem detection on multiple levels and provide a much larger set of features than the existing AdInsure Studio validations. Moreover, many editors, such as VS Code, automatically include support for LSP in their built-in text editors (Monaco editor) and other UI parts (such as the Problems panel [5]).

The adoption of LSP within AdInsure Studio would mean providing the validations as LSP diagnostics with automatic support in built-in text editors (Expert mode). Also, it would open possibilities of implementing other language features (such as completion) for DMN-based business rules and various other AdInsure configuration concepts. Such features could improve problem detection even further (lowering the number of problems actually created) and provide a better user experience in the authoring phase over time. Moreover, an LSP-compliant solution by design performs all of the work in a separate LSP server process. This would automatically address the need to improve the overall performance by offloading all AdInsure Studio validations into a separate process. On the other hand, the adoption of LSP raises multiple questions related to its use within the Basic mode, the CLI use case, DMN model validation, the overall AdInsure Studio validations architecture and others. All of these questions are defined in the section below.

⁶ As AdInsure Studio also supports a CLI client (i.e. for CI/CD purposes), such problems should be detectable on this level too.
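The LSP diagnostics mentioned above are plain data structures exchanged over JSON-RPC. A minimal sketch of what a DMN validator feeding such diagnostics could look like is shown below; the interfaces mirror the protocol's Diagnostic shape, while the validation rule, file URI and validator name are purely illustrative, not AdInsure Studio's actual implementation.

```typescript
// Minimal shapes mirroring the LSP "Diagnostic" structure sent over JSON-RPC.
interface Position { line: number; character: number; }
interface Range { start: Position; end: Position; }
interface Diagnostic {
  range: Range;
  severity: 1 | 2 | 3 | 4; // 1 = Error, 2 = Warning, 3 = Information, 4 = Hint
  source: string;          // name of the validator producing the diagnostic
  message: string;
}

// Placeholder validation rule: flag a DMN document that declares no decision.
// A real validator would parse the XML; here we only scan the raw text.
function validateDmnText(text: string): Diagnostic[] {
  const diagnostics: Diagnostic[] = [];
  if (!text.includes("<decision")) {
    diagnostics.push({
      range: { start: { line: 0, character: 0 }, end: { line: 0, character: 1 } },
      severity: 1,
      source: "dmn-validator",
      message: "DMN model does not define any decision.",
    });
  }
  return diagnostics;
}

// On each content change, an LSP server would push the result to the client
// as a "textDocument/publishDiagnostics" notification.
const payload = {
  method: "textDocument/publishDiagnostics",
  params: { uri: "file:///rules/premium.dmn", diagnostics: validateDmnText("<definitions/>") },
};
console.log(payload.params.diagnostics.length); // prints 1
```

The client (e.g. VS Code) then renders the received diagnostics in the editor and the Problems panel without any extension-specific code.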

1.2 Thesis statement

The main goal of the thesis is to analyse DMN language features, such as validation and completion, in the context of the Visual Studio Code Extension API, its custom Webview-based editors and the Language Server Protocol. The gathered knowledge from the analysis phase should be used to create a proposed solution architecture for providing such features inside VS Code’s graphical editors. The proposed solution is specifically designed to address related domain problems in the context of commercial, enterprise-level insurance software named AdInsure Studio.

1.3 Research questions

As described in the problem domain section, LSP solves some major problems automatically. On the other hand, it raises additional questions in the context of AdInsure Studio and the DMN specification. The purpose of this thesis is to answer all of the research questions specified below. It is done so by carefully and independently analysing and reviewing the concepts of DMN and its validations, LSP and VS Code. Based on the gathered knowledge, a solution addressing all of the issues is proposed, combining the concepts covered in the previous sections.


Figure 1.2: Visualisation of references between the research questions

Questions

Q1 Can LSP be used to exchange information with GUI-based editors in VS Code, such as with the Rule (DMN) editor?

Q2 Is it possible to use LSP to retrieve DMN model validation results?

Q3 What types of DMN model validations can actually be provided?

Q4 Are there some third-party DMN model validators, and is it possible to integrate them?

Q5 While using LSP for communication with the editor, is it still possible to provide validations using the CLI?

1.4 Thesis structure

Figure 1.3: Visualisation of the thesis outline

Chapter 1 provides a detailed examination of the problem domain, specifies the research questions, and outlines the thesis structure.

Chapter 2 reviews the DMN model specification. It focuses on the possible use cases, the key elements specified by the standard and related specifications.

Chapter 3 focuses on DMN model validations. Based on market research of DMN vendors, tools and involved parties, it provides an overview of DMN model validation types.

Chapter 4 reviews the LSP, its architecture, related concepts, supported features and discovered limitations.

Chapter 5 reviews the VS Code concepts related to the LSP. The first part focuses on the overall architecture and the underlying framework. The second part reviews the essential parts of the VS Code Extension API used in the proposed solution.

Chapter 6 provides functional and non-functional requirements and other constraints for the end-system. Based on the requirements and constraints, a solution that addresses the research questions is proposed.


Chapter 7 summarises the provided results and answers the research questions. This chapter also suggests potential improvements as follow-up activities.

2 Decision Model and Notation

This chapter provides an overview of Decision Model and Notation, its use cases, specification and related standards.

2.1 Overview

"DMN is a modelling language and notation for the precise specification of business decisions and business rules. DMN is easily readable by the different types of people involved in decision management. These include business people who specify the rules and monitor their application; business analysts." [6]

Decision Model and Notation (DMN) is a standard, released in September 2015 and maintained by the Object Management Group (OMG), which stands behind many popular and ratified ISO standards, such as Business Process Model and Notation (BPMN), Case Management Model and Notation (CMMN), Unified Modeling Language (UML), Common Object Request Broker Architecture (CORBA) and more. DMN is a standard for defining repeatable, interchangeable and executable decision models, focusing on providing a common and easily understandable notation for a wide range of end-users. The primary goal of DMN is to provide a standardized bridge between those who create and maintain business decisions and those who implement and automate those decisions. This removes the gap between technical users and business users and eliminates the risk of misinterpretation or other communication misunderstandings, resulting in more effective collaboration and faster production changes. The secondary goal is to provide standardized decision models, which can be interchanged and reused across different tools and organizations, thanks to the unified XML specification.

2.2 Use cases

In traditional approaches and processes, any maintenance or change of business decisions kept in the code could be a slow process, requiring many people involved along the way. Making any change in the business decisions could go through business analysts, domain experts, developers, testers and possibly other parties. Such a chain of people involved in the process prolongs the whole design, development and deployment procedure and could lead to a slower time-to-market response with possible misunderstandings of business needs on the way. DMN tries to limit this chain and separate the responsibilities so that involved parties can focus just on things within their domain of expertise. It could mean a design of business decisions within the DMN visual model for business analysts and domain experts, and on the other hand, it could mean operational support and an integration of the DMN model with related services for technical users.

From a business perspective, DMN could be used in any area requiring rather complex and automated decision making based on some business rules. Numerous areas satisfy these conditions, reaching from healthcare applications to blockchain smart contracts [7] [8]. Other examples are financial systems used by institutions such as banks or insurance companies. In these institutions, countless automated operational decisions are required per day, such as calculating the maximum amount of a loan or mortgage that could be granted to the customer or determining an insurance premium or premium reserves based on the provided data. Even though DMN was designed to be understandable for a wide range of business users, it enables many possibilities to cover most of the business needs without noticeable limitations.
For example, DMN enables integration with almost any external service through the business knowledge model element, or compatibility with the mature PMML (Predictive Model Markup Language) standard, enabling the integration of predictive machine learning models within DMN itself. From a more technical perspective, DMN could be used on the following levels [9]:

1. Definition of a manual decision-making model.
2. Definition of requirements for an automated decision-making model.


3. Definition and implementation of an executable decision-making model.

The third level is what really differentiates DMN from requirements languages. DMN is not just another requirements language; compared with classical business rules tools, DMN prescribes much more than just a definition of decision requirements and decision logic. With DMN, there is no need to create a technical specification of the decision model and then translate it manually to a specific programming language. DMN is not designed to produce just documentation for technical users; it is designed with executability in mind, meaning that with the help of an appropriate tool, the DMN model can be taken and deployed, integrated or executed straight after being authored. In other words, DMN strictly specifies execution semantics with a higher-level visual language, which can then be source-to-source compiled to a specific programming language, such as Java or JavaScript, producing self-contained executable code.

"DMN is a business-oriented, tool-independent, executable de- cision language." [10]

There are a couple of ways to execute a decision model. One possible way would be to invoke a decision model as a service and have it integrated internally as part of the system. This approach might be beneficial for applications inclining more to a monolithic architecture. Such integration comes with the typical benefits of monolithic applications - tightly coupled components usually running in one process, resulting in overall fast performance and a small number of deployments. Another possible approach is to use decision models as independently deployed decision services, inclining more to a service-oriented architecture with loosely coupled components, a large number of deployments and benefits such as horizontal scalability, self-maintainability, isolation, resiliency and more. Each decision service can then be hosted in the cloud, exposed over an automatically generated API, deployed directly from a Business Rules Management System (BRMS) and even invoked by a Business Process Management System (BPMS).
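As a sketch of the second approach, an independently deployed decision service could be invoked over a generated HTTP API roughly as follows; the endpoint URL, payload shape, model name and input names are all hypothetical, since the concrete contract is defined by the hosting BRMS.

```typescript
// Illustrative request shape for invoking a remotely deployed decision service.
interface DecisionRequest {
  model: string;                   // name of the deployed DMN model (assumed)
  inputs: Record<string, unknown>; // DRD input data, keyed by input name
}

function buildRequest(model: string, inputs: Record<string, unknown>): DecisionRequest {
  return { model, inputs };
}

const request = buildRequest("loan-approval", {
  "Applicant Age": 35,
  "Loan Amount": 250000,
});

// The actual invocation (endpoint is an assumption, shown only as a comment):
// await fetch("https://decisions.example.com/v1/evaluate", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(request),
// });
```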

2.3 Outline of specification

"DMN’s value proposition - Help all stakeholders understand a complex domain of decision-making using easily readable diagrams. Provide a natural basis for discussion and agreement on the scope and nature of business decision-making. Reduce the effort and risk of decision automation projects through graphical decomposition of requirements. Allow business rules to be defined simply and reliably in unambiguous decision tables. Simplify development of decisioning systems using specifications that may be automatically validated and executed. Provide a structured context for the development and management of predictive analytic models. Enable the development of a library of reusable decision-making components." [11]

DMN is a visual language and notation designed for business users, with the logic defined in hierarchical diagrams consisting of inputs, decisions, decision services, business knowledge models and knowledge sources. All together, it creates a logical structure view on top of the whole DMN, called the Decision Requirements Diagram (DRD). Inputs are oval nodes in the DRD diagram and are simply the data coming into the decisions. Individual decisions are rectangular nodes in the diagram. Each decision node takes some inputs and, based on some decision logic, returns corresponding outputs. A decision service, the overlay rectangle containing other decisions, is a top-level decision that can be invoked as a standalone service from an external application or a business process inside BPMN. Business knowledge models, rectangular nodes with clipped corners, introduce the concept of reusability and integration with external services. Last but not least, knowledge sources, note-like nodes, refer to external documents such as documentation, policies, regulations or other real-world factors.

As a part of the DMN standard, a particular expression language called the Friendly Enough Expression Language (FEEL) was introduced to provide a familiar and business-friendly way of writing simple expressions and decision logic. Like DMN, FEEL is also designed for business users. It is not a full-blown programming language, but it is a potent language for the definition of business rules, and it can also help in certain situations when arithmetic calculations or other formulas are


needed. A good analogy to FEEL is the expression language used by Excel, which aims at a similar, business-oriented audience.

2.3.1 FEEL

As its name reveals, the Friendly Enough Expression Language is an expression language specifically designed for business users. It is a lightweight yet powerful language that can be used inside DMN decisions and offers many features that can be found in other expression or programming languages. Some of these features include:

• conditional and loop statements, filtering
• data types supporting booleans, numbers, strings, dates, lists, ranges, contexts and functions
• support for three-valued logic - true, false and null
• built-in functions for operating with the basic data types
• custom function definition and invocation
• four different types of scopes - built-in, global, local and special
• side-effect free evaluation
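The three-valued logic listed above treats null as an unknown value (Kleene logic). A small TypeScript sketch shows how null propagates through conjunction and disjunction; the function names are illustrative, not taken from the specification.

```typescript
// Sketch of FEEL-style three-valued (Kleene) logic, where null means "unknown".
type Tri = boolean | null;

function feelAnd(a: Tri, b: Tri): Tri {
  if (a === false || b === false) return false; // a false operand decides "and"
  if (a === null || b === null) return null;    // otherwise unknown propagates
  return true;
}

function feelOr(a: Tri, b: Tri): Tri {
  if (a === true || b === true) return true;    // a true operand decides "or"
  if (a === null || b === null) return null;
  return false;
}
```

For example, feelAnd(false, null) is false, because a false operand settles a conjunction regardless of the unknown one, while feelAnd(true, null) remains null.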

Compared to traditional programming languages, in expressions and the corresponding expression languages it is impossible to explicitly declare a variable, meaning it is only possible to reference a variable, not to create one¹. Nevertheless, the following is a description of expressions by Michael Kay, the author of a book about XPath, another expression language: "Every expression takes one or more values as its inputs, and produces a value as its output. ... One of the things an expression language tries to achieve is that wherever you can use a value, you can replace it with an expression that is evaluated to produce that value. ... This property is called composability: expressions can be used anywhere that values are permitted." [12] Another significant distinction from traditional programming languages is the support for a space in variable or function names, making

¹ Unless being defined in the global or local scope. In other words, "variables" can be defined in the context of the whole DMN as an input (decision or data) or as a context entry. Both are, however, constant variables.


FEEL grammar context-sensitive and more challenging to parse [13]. However, multiple vendors were already able to implement a parser for the FEEL grammar and share it with the open-source community. The available open-source parsers include:

• ANTLR4 (ANother Tool for Language Recognition) parser by RedHat (Java) [14].
• Peg.js (Parsing Expression Grammar) parser by EdgeVerve (JavaScript) [15].

2.3.2 DRD and its elements

Figure 2.1: DRD and its elements

A decision requirements diagram, also called a decision requirements graph, is a visual representation of a decision model. It shows, from a higher perspective and regardless of the actual decision logic, the connections


between individual DMN elements². From a DRD, it is clearly visible which decisions depend on which input data or which decisions depend on other decisions. It is also possible to see external elements influencing particular decisions, such as business knowledge models or knowledge sources. Following is an overview of the available DRD elements³:

• Input data: Input data is a piece of information that is provided to the DMN model at run-time, but its structure is defined at design time. Its data type should be, by default, supported by FEEL but could be either a custom one or an imported one [9]. To be able to process input data, it is necessary to connect them to a decision. The arrow connecting input data and a decision (or a knowledge model) is called an information requirement and is visible in Figure 2.1.

• Decision: Each decision in DMN is graphically represented by a table called a boxed expression, which defines the decision logic and has its own type and structure. Each boxed expression takes input data, either from the input data element itself or another decision element and, based on the specified logic and referenced knowledge models, determines the outputs. The boxed expression types are FEEL expression, Decision table, Context expression, Function, Invocation, Relation and List. More information about boxed expressions and their types is available in the next subsection.

• Decision service: A decision service is a top-level decision that defines a reusable element in the decision model. It can then be published as a standalone service, integrated into an external application, reused in other DMN models or executed in a business process inside BPMN.

• Business knowledge model: A knowledge model, also called the business knowledge model, is another reusable piece of decision logic. Compared with the decision service, it does not define a reusable element but is an endpoint for connecting one. It can depend on some sub-model, sub-decision or sub-inputs and can be used inside the decision logic. Typically it takes some inputs, and after being internally evaluated, it passes the outputs back to the decision. In addition to plain FEEL expressions and a declarative sub-model, it also supports an external function, which can call Java code or a PMML model. The arrow connecting the knowledge model and a decision is called a knowledge requirement and is visible in Figure 2.1.

• Knowledge source: A knowledge source refers to external documents such as documentation, policies, law regulations or other real-world factors. They can reference a wide range of sources, such as documents, web pages or even video or audio content. The arrow connecting a knowledge source and a decision is called an authority requirement and is visible in Figure 2.1.

² The figure showing the DRD elements is inspired by a figure available on the RedHat documentation portal [16].
³ This outline of the specification is focused on DMN version 1.3, released in late 2019.
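The DRD elements and requirement arrows described above can be sketched as a small data model; the type and field names below are illustrative and deliberately simplified, and do not follow the DMN XML schema.

```typescript
// Illustrative model of DRD elements, keyed by kind.
type DrdElement =
  | { kind: "inputData"; name: string }
  | { kind: "decision"; name: string }
  | { kind: "businessKnowledgeModel"; name: string }
  | { kind: "knowledgeSource"; name: string };

// The three requirement (arrow) kinds named in the element overview.
type RequirementKind = "information" | "knowledge" | "authority";

interface Requirement {
  kind: RequirementKind;
  from: string; // name of the source element
  to: string;   // name of the dependent decision (or knowledge model)
}

// Roughly the DRD of Figure 2.1: input -> Decision 1 -> Decision 2,
// with a knowledge model and a knowledge source attached.
const elements: DrdElement[] = [
  { kind: "inputData", name: "Input data" },
  { kind: "decision", name: "Decision 1" },
  { kind: "decision", name: "Decision 2" },
  { kind: "businessKnowledgeModel", name: "Knowledge model" },
  { kind: "knowledgeSource", name: "Knowledge source" },
];

const requirements: Requirement[] = [
  { kind: "information", from: "Input data", to: "Decision 1" },
  { kind: "knowledge", from: "Knowledge model", to: "Decision 1" },
  { kind: "information", from: "Decision 1", to: "Decision 2" },
  { kind: "authority", from: "Knowledge source", to: "Decision 2" },
];

// Which elements does a given decision directly depend on?
function dependenciesOf(name: string): string[] {
  return requirements.filter(r => r.to === name).map(r => r.from);
}
```

A validator walking such a graph can, for instance, report decisions with no incoming information requirement, which is one of the DRD-level checks discussed in Chapter 3.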

2.3.3 Boxed expression types

In DMN, all decision logic inside decisions is represented by so-called boxed expressions. A boxed expression is defined recursively, meaning each boxed expression can contain another boxed expression, and the lowest-level expressions inside each boxed expression are FEEL expressions. In order to create a boxed expression, it is necessary to connect it to an appropriate decision. The only possible way of connecting a decision with a boxed expression is by name, meaning the name of a decision must correspond to the top-level boxed expression name. How this interaction is designed and visualized in DMN tools is, however, left to the corresponding tool. A boxed expression can be one of the following types:

• Decision table: A decision table is a graphically defined tabular boxed expression. The basic form of a decision table contains rows and columns, where the first row specifies the individual input(s) and output(s) for each column, and the rest of the rows specify corresponding rules for each input (defining the decision logic in prioritised order) and the expected output. A rule’s inputs


are so-called Unary tests⁴ and a rule’s outputs are defined as FEEL expressions. Each row thus graphically represents a simple if [Unary test] then [FEEL expression]. An additional element of a decision table, named Hit policy, can be used to influence the result and return, i.e., just the first matched row or the sum of all matched outputs.

• Boxed context: A boxed context is a map of so-called context entries (key, value pairs), where each key is an output clause name (a constant variable) and the value is another boxed expression, such as a boxed FEEL expression, a decision table or even another boxed context. A boxed context can optionally have a result clause, which can be used for additional calculation on top of the context entries.

• Boxed literal expression: A boxed literal expression, also called a boxed FEEL expression, is simply any FEEL expression used either as a standalone boxed expression or inside other boxed expressions, such as inside a decision table’s cells. It is defined by the FEEL grammar as FEEL(e, s), where e stands for a FEEL expression and s stands for the given scope. The scope can include input data elements from the DRD, other input decisions, built-in FEEL functions, custom FEEL functions and variables defined as context entries in a higher scope.

• Boxed function: A boxed function is an element for the definition of a function, which can be either a FEEL function, Java code or a PMML model.

• Boxed invocation: A boxed invocation is an element used for the invocation of a function, such as one defined by a boxed function.

• Boxed list: A boxed list is just a list of n elements.

• Relation: A relation is a list of horizontal contexts with no result clause.

⁴Unary tests have an additional layer of grammar defined, which, when simplified, can consist of multiple FEEL expressions separated by commas, which can be wrapped in a not() function or can equal a dash (which always evaluates to true).
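Since the decision-table semantics above (unary tests per input column, a hit policy aggregating matched rules) maps naturally to code, the following TypeScript sketch illustrates it. It is a hypothetical simplification, not a DMN runtime: unary tests are modelled as plain predicates, outputs as numbers, and only two hit policies are shown.

```typescript
// A unary test is modelled as a predicate over the input value; "-" matches anything.
type UnaryTest = (input: number) => boolean;

interface Rule {
  when: UnaryTest[]; // one unary test per input column
  then: number;      // simplified output (result of a FEEL expression)
}

type HitPolicy = "FIRST" | "COLLECT_SUM";

function evaluate(rules: Rule[], inputs: number[], policy: HitPolicy): number | undefined {
  // A rule matches when every unary test accepts its corresponding input.
  const matched = rules.filter(r => r.when.every((test, i) => test(inputs[i])));
  if (matched.length === 0) return undefined;
  if (policy === "FIRST") return matched[0].then; // first matched row, in priority order
  return matched.reduce((sum, r) => sum + r.then, 0); // COLLECT with sum aggregation
}

// "age >= 65", "age >= 18" and the dash (always true) expressed as unary tests:
const adultDiscount: Rule[] = [
  { when: [a => a >= 65], then: 20 },
  { when: [a => a >= 18], then: 10 },
  { when: [_ => true], then: 0 }, // the "-" unary test
];
```

With a FIRST hit policy an input of 70 returns only the top matched row's output, while COLLECT_SUM adds up every matched row.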

2.3.4 Conformance level

Figure 2.2: DMN conformance levels hierarchy

There are three conformance levels specified in the DMN specification, dividing DMN tools into clearly separated groups based on their support of the standard. Conformance levels work as a certification degree defined by OMG: each level corresponds to a strictly defined set of functionalities which the end tool should support. The highest possible level is conformance level 3, which automatically includes support for conformance levels 2 and 1. In other words, conformance level 3 is the highest possible level recognized by the OMG and means full support of the standard. A summary of the three conformance levels can be found below.

• Conformance level 1: First-level conformance specifies that the implementation supports the full DRD and its corresponding elements. However, the model is not executable, which means the decision logic can be defined informally in any language of choice. This means that any language, even an unstructured natural one, is a valid language.

• Conformance level 2: Second-level conformance specifies that the implementation supports everything that conformance level 1 does. It also supports decision tables, literal expressions and a subset of FEEL expressions, called Simple Friendly Enough Expression Language (S-FEEL), which are, in short, simple comparisons and arithmetic expressions. This makes the decision logic structured and the model fully executable.

• Conformance level 3: Third-level conformance specifies that the implementation supports everything that conformance level 2 does. In addition to that, it supports the complete set of FEEL expressions and also provides support for other boxed expressions, such as function invocation. Last but not least, as with the second level, a decision model at this level is also fully executable.

2.4 Related standards

"DMN is designed to work alongside BPMN and/or CMMN, providing a mechanism to model the decision-making associated with processes and cases. While BPMN, CMMN and DMN can be used independently, they were carefully designed to be complementary. Indeed, many organizations require a combination of process models for their prescriptive workflows, case models for their reactive activities, and decision models for their more complex, multi-criteria business rules. Those organizations will benefit from using the three standards in combination, selecting which one is most appropriate to each type of activity modelling. This is why BPMN, CMMN and DMN really constitute the "triple crown" of process improvement standards." [11]

Figure 2.3: Linking business automation with machine learning

The DMN standard can be effectively adopted and used standalone, but it is specifically designed to complement two related, business-user oriented graphical standards that are likewise maintained by the OMG group: BPMN and CMMN. Together, the three are called the "triple crown" of process improvement standards. BPMN's first version was introduced in 2004, and it has since gone through many iterations, with the currently propagated version being 2.0.2. BPMN's primary focus is on allowing business users to define and automate business processes. On the other hand, CMMN is a relatively new standard, with the first version introduced in 2014. It is designed explicitly for graphically representing case management, with a secondary goal of case model interchangeability among different tools. On top of that, DMN itself also allows easy integration with the mature PMML standard, allowing cross-compatibility across various machine learning tools and predictive models. Altogether these standards link business automation with machine learning, as shown in Figure 2.3 [17].

2.4.1 CMMN

The Case Management Model and Notation is a standard way of expressing a case. A case is a concept derived from case management, with a focus on unpredictable processes where it is impossible to prescribe a process with fixed activities. It is, as well as DMN and BPMN, a standard maintained by OMG. The initial version was released in 2014, with the currently latest supported version being 1.1. Similarly to DMN and BPMN, the CMMN specification also focuses on a broad audience and provides a format for interchanging case models between various tools. CMMN directly expands the boundaries of what can be provided by a BPMN model. In comparison to BPMN, CMMN is not focused on structured processes with a defined set of activities but on unstructured processes with an event-centred approach and a case file concept. For example, into this category of dynamic, ad-hoc processes fall tasks such as incident management or consulting. CMMN thus covers much more than pure BPMN and complements it directly.

2.4.2 BPMN

The Business Process Model and Notation is a mature standard for modelling end-to-end business processes. It is, as well as DMN, maintained by OMG and is also a ratified ISO standard. The initial version was released in 2004 but has since gone through many iterations, with the latest supported version being 2.0.2. Similarly to DMN, the BPMN specification is also focused on a wide audience, reaching from business users to technical people. BPMN also prescribes a mapping between the graphical notation and the underlying execution logic, meaning the whole process can be automated thanks to the mapping to a unified language called Business Process Execution Language (BPEL). The main elements of BPMN are messages, which flow between different participants and activities. The activity element can then be of multiple types. One of them is a business rule, and each business rule in a BPMN diagram can be represented as a standalone DMN model composed of multiple decisions. Many tools already support this interconnection with seamless interactions, such that clicking on a business rule activity in a BPMN diagram opens the particular DMN model inside a DMN-specific tool.

2.4.3 PMML

"PMML, administered by the Data Mining Group, is not a machine learning algorithm but a common XML format encompassing a wide variety of machine learning algorithms, including decision trees, logistic regression, support vector machine, neural networks, and others." [18]

The Predictive Model Markup Language is a mature XML-based format for the interchange of predictive models, and it is de facto recognized as the standard for representing machine learning models. The first version of PMML was released in 1998, and it has been constantly improved since then, with many tools and companies supporting it [19]. PMML is, besides FEEL expressions, supported by DMN out of the box, meaning DMN decisions can execute any predictive model defined in the PMML format via an imported business knowledge model element. Compared with DMN decisions, machine learning models often work as a black box, and their inability to explain why they return certain outputs for specific inputs is increasingly seen as a problem [20]. DMN decisions, such as decision tables, are, on the other hand, extremely transparent, enabling some degree of control over the predictive model. That is one of the reasons why PMML and DMN are often seen as complementary.

3 DMN model validation

This chapter provides a brief market overview of vendors, other parties and tools involved in DMN model validation and related areas. It also provides a summary of the available DMN validations provided by the listed tools and vendors.

3.1 Market overview

Since the first released version of the DMN specification, several vendors have tried to adopt the DMN standard and provide appropriate tooling for it. According to Bruce Silver, author of the book DMN Method and Style, many of the vendors in the beginning did not fulfil the common executable decision model's promise. Instead, they used DMN more or less as a marketing badge, providing no more than simple DRD editors with CRUD operations on top of their elements.

"Unfortunately, many proprietary decision modeling tools have appropriated the DMN name as a marketing decal without conforming to the spirit, much less the letter, of this promise." [21]

OMG promptly realized that they needed to somehow push the vendors in order to bring all the benefits of DMN to the end users. The ability to compile and execute the model is no doubt essential, distinguishing DMN from any other requirements language. That is the reason they introduced three levels of conformance into the standard. By doing so, vendors and the tooling they provide are not badged with simply supporting DMN, but with a label strictly defining the particular conformance level they achieved. From a different point of view, conformance levels work as a form of certification degree and divide the available tooling into clear and separated groups.

In order to encourage the vendors to implement full support for DMN, as stated by conformance level 3, a TCK (Technology Compatibility Kit) group [22] was established to provide a set of black-box tests ensuring conformance to the specification. The TCK group is not, however, maintained by a centralized authority but by the community of vendors themselves. Therefore its list does not include all the possible tools on the market and does not necessarily need to be objective enough. On the other hand, its goals and initiatives are clear, and its involved parties belong to the most active ones in the whole DMN community. That is also confirmed by the fact that many TCK group members can be found on the list of parties contributing to the development of the DMN specification itself [6].

The list of vendors, tools and involved parties below is not a complete market overview of all available DMN vendors and their tools and is specifically structured to achieve this thesis's goal. Nevertheless, the list was carefully filtered and chosen by the following criteria:

1. Tools or their parts are preferred to be open-source projects, for the following reasons:

   • They can be explored and objectively analyzed from the ground up, even without relying too much on available sources or marketing materials.

   • The project's overall goal is as transparent as possible, so there are clear plans for development and maintenance in the future.

   • It is possible to contribute to the project, allowing one to be a part of the community and shape the project towards common objectives.

2. Tools or their parts can be used in the commercial sector, meaning their license agreement allows free commercial usage and further integrations.

3. Tools or their parts support or prescribe some level of validation on top of the DMN model itself, as this is one of the main focuses of this thesis.

4. Vendors or involved parties actively participated in the development of the DMN standard or noticeably contributed to the community.

5. Vendors or involved parties are part of the TCK group, and their tool passes as many tests as possible, meaning they operate on the highest possible conformance level and with the latest possible DMN specification.

The first part of this section focuses on providing an overview of the vendors and involved parties satisfying most of the conditions set above. The second part focuses on providing an overview of the chosen vendors' tools and describing their key features. The last part summarises the DMN model validations provided by the listed tools and vendors.


3.1.1 Vendors and involved parties

RedHat

RedHat is the only vendor satisfying all the conditions above and plays one of the main roles not just in the open-source DMN territory but in the whole business rules,

"A business rule is a compact, atomic, well-formed, declarative statement about an aspect of a business that can be expressed in terms that can be directly related to the business and its collaborators, using simple unambiguous language that is accessible to all interested parties: business owner, business analyst, technical architect, customer, and so on. This simple language may include domain-specific jargon." [23]

and related Business Rules Management Systems (BRMS) world.

"A BRMS or business rule management system is a software system used to define, deploy, execute, monitor and maintain the variety and complexity of decision logic that is used by operational systems within an organization or enterprise." [24]

RedHat has multiple projects related to business rules, decisions and the DMN standard, starting from the ground with the mature Drools BRMS and continuing with the RedHat Decision Manager enterprise platform, the jBPM toolkit and the Kogito project. All of the projects are mostly based on the Java ecosystem. They are also contributors to the DMN specification itself.

Drools is a mature open-source platform for decision management and business rules, licensed under the Apache License, with its history reaching back to 2001. Nevertheless, version 1.0 was never released due to the rule engine's performance constraints, and thus version 2.0 was actually the first released version of Drools. It contains many components, such as a business rules engine, an optimization engine or a DMN engine. On top of that, it enables end users to work with higher-level declarative metaphors, which are closer to human language than imperative code. Examples of such metaphors could be a DRD diagram, a decision table or the lower-level, but still declarative, Drools Rule Language (DRL) rule.

RedHat Decision Manager is a platform based on Drools, enabling the development and maintenance of containerized microservices that automate business decisions. It focuses on the enterprise sector and comes with the usual RedHat subscription model, offering SLA-based support, regular updates and more.

The jBPM toolkit takes a different, more traditional, code-first imperative approach and offers Java libraries for the development of business decisions. It is an open-source project, licensed under the Apache License and based on Drools.

Kogito is an open-source project, licensed under the Apache License, focused on bringing Drools, together with business automation tools such as jBPM, to the cloud environment. It also stands behind providing tooling for business automation, such as standalone BPMN and DMN editors that can run either in a browser or as embedded extensions inside editors such as VS Code.

Camunda

Camunda is a company offering an open-source process and design automation platform, licensed under the Apache License and called Camunda BPM. It provides a BPMN workflow engine and a DMN decision engine, both implemented in Java. They also develop and maintain open-source, web-based editors supporting DMN and BPMN under the bpmn.io project. They are also contributors to the DMN specification itself.

Trisotech

Trisotech is a company offering enterprise software for end-to-end business automation and digital transformation. One of their main products is the so-called Digital Modeling Suite, which includes, among other things, advanced web-based applications for creating Case models, BPMN processes and DMN decisions. Decision Modeler is their application for managing DMN decisions, offering many features, such as advanced static analysis on top of DMN models, the definition of test cases, data model creation and more. They are closely collaborating with RedHat and use the Drools engine behind the curtain. They are also contributors to the DMN specification itself.

EdgeVerve

EdgeVerve is the company behind the open-source project called oeCloud, a digital transformation platform and a framework to quickly build and deploy cloud-native SaaS applications. They are also behind the open-source js-feel package, licensed under the MIT license. This package is a JavaScript-based rule engine enabling the execution of DMN decision tables together with full support of FEEL. They are also contributors to the DMN specification itself.

Method and style

Method and Style is a trademark created by Bruce Silver, a major contributor to the DMN community. He is recognized as a leading provider of BPMN and DMN training and certification. He is also the author of popular books, including BPMN Method and Style, DMN Method and Style, BPMN Quick and Easy and the DMN Cookbook, co-authored with Edson Tirelli of RedHat. Moreover, he is a public speaker, an active contributor to the DMN standard and a principal consultant at Trisotech. His website, methodandstyle.com, provides materials with many insights into business process management and decision modelling.

3.2 Validation breakdown

Because DMN offers a fully executable model and, simultaneously, the DMN specification does not enforce the model to be complete or consistent, it is crucial to ensure its completeness and correctness by external tools. For example, overlaps or gaps between rules in a decision table can make the model incomplete. On the other hand, wrongly specified FEEL expressions can make the model incorrect.

The DMN specification specifically allows incomplete models so they can be interchanged between different tools or people. In addition, the specification also prescribes a standardized way for such interchange, called DMN DI, allowing, among other things, the storing of DRD element positions. This means the position of the elements is stored directly in the XML document, and each tool can read it and visualize it exactly the same way it was designed in another tool.

It is even more important to check for model incompleteness because of the targeted audience interacting with the model: business users and domain experts. These people often do not have a deep IT background and are not familiar with the basic concepts of software verification and assurance. That is why the burden of ensuring the completeness and correctness of the model is shifted even more to the corresponding tools. One of the main focuses is detecting all possible errors and bugs before run-time in a production environment.

At the same time, the DMN specification is evolving relatively quickly, bringing new concepts, such as business knowledge models, decision services or data types imported from a different model, making sufficient validation support for newer versions of the specification even harder to achieve. Nevertheless, all the vendors, tools and involved parties listed above provide or prescribe some degree of validation features.

The following is a summarised list of DMN model validation approaches across the tools listed in the previous section. This directly answers research question number three.

Question 3

What types of DMN model validations can actually be provided?


3.2.1 Validation against the schema

The first, and possibly the most straightforward, validation that can be performed on the DMN model is a validation of its XML, ensuring compliance and syntactical correctness of the XML file against the standard XML Schema Definition (XSD).

• compliance of XML file against the XSD schema

3.2.2 Validation of DRD and its elements

Another validation that vendors often perform is the semantic validation of the DRD and its elements. This type of validation typically checks for:

• correctly set references between elements, detecting cycles, wrongly connected elements or a missing top-level decision
• wrong or missing inputs and outputs between decisions
• wrong or missing input and output types between elements
• duplicate names of elements
• missing decision logic
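The reference checks above can be illustrated with a small sketch. The following hypothetical TypeScript snippet models a DRD as a directed graph of requirement edges (an assumed structure, not the DMN XML itself) and detects cycles with a depth-first search:

```typescript
// Element name -> names of the elements it requires (information requirements).
type Drd = Map<string, string[]>;

function hasCycle(drd: Drd): boolean {
  const visiting = new Set<string>(); // nodes on the current DFS path
  const done = new Set<string>();     // nodes fully explored

  const visit = (node: string): boolean => {
    if (visiting.has(node)) return true; // back edge found: requirement cycle
    if (done.has(node)) return false;
    visiting.add(node);
    for (const dep of drd.get(node) ?? []) {
      if (visit(dep)) return true;
    }
    visiting.delete(node);
    done.add(node);
    return false;
  };

  return Array.from(drd.keys()).some(visit);
}
```

The same traversal skeleton extends naturally to the other listed checks, for example collecting elements with no incoming references to spot a missing top-level decision.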

3.2.3 Validation during compilation process

To enable the executable DMN model, DMN is source-to-source compiled by vendors, or so-called transpiled, into another source language, such as Java or JavaScript. Thus, one type of validation is output by a FEEL parser, which transforms FEEL into an Abstract Syntax Tree (AST) and reports errors from lexical and syntactic analysis based on the defined grammar. Other validations at this level are reported while transforming DMN elements into a Java- or JavaScript-based codebase. The validations performed at this level are hard to generalize, as they are strictly dependent on the particular transpiler and the target language.

• errors from the lexical and syntactical analysis
• other transpiler-specific errors
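As an illustration of the lexical stage, the following hypothetical sketch scans a FEEL-like expression and reports unrecognised characters with their positions, much like the errors a transpiler front end would emit. The token set here is invented for the example and is far smaller than the real FEEL grammar.

```typescript
interface LexError { position: number; message: string }

function lex(source: string): { tokens: string[]; errors: LexError[] } {
  const tokens: string[] = [];
  const errors: LexError[] = [];
  // Sticky regex: whitespace, numbers, identifiers and a few operators/punctuation.
  const re = /\s+|\d+(\.\d+)?|[A-Za-z_][A-Za-z0-9_]*|[-+*/()<>=.,]/y;
  let i = 0;
  while (i < source.length) {
    re.lastIndex = i;
    const m = re.exec(source);
    if (!m) {
      // Nothing in the grammar matches here: report a lexical error and skip.
      errors.push({ position: i, message: `unexpected character '${source[i]}'` });
      i += 1;
      continue;
    }
    if (!/^\s+$/.test(m[0])) tokens.push(m[0]); // drop whitespace
    i = re.lastIndex;
  }
  return { tokens, errors };
}
```

A real FEEL parser would continue from these tokens into syntactic analysis, attaching grammar-rule context to each reported error.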


3.2.4 Validation of FEEL expressions

In FEEL and its corresponding grammar, not all syntactically correct expressions are valid, meaning additional semantic and logical analysis needs to be performed. Validation of FEEL expressions is similar to the static analysis performed on any other programming or expression language: after FEEL is parsed and the AST created, its model can be further processed and analyzed. One of the problems is that FEEL is a dynamic language, and the DMN specification does not enforce the definition of input and output data types. It is then up to the tool to enforce explicit data types for the input and output data of decisions. The definition of data types then influences not just the static analyzer and the available validations, but also other language features, such as context-aware IntelliSense or go-to definition.

• semantic and logical FEEL problems
• undefined variables or input data in FEEL expressions
• wrongly specified types of built-in FEEL expressions, FEEL functions, input and output data, variables and their usage inside FEEL expressions
• code maintainability enhancements based on specifically defined rules
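One of the listed checks, the detection of undefined variables, can be sketched as a walk over a toy AST. The node shapes below are assumptions made for illustration, not the actual FEEL AST of any engine:

```typescript
// A toy FEEL-like expression AST: numbers, variable references and binary operations.
type Expr =
  | { kind: "number"; value: number }
  | { kind: "variable"; name: string }
  | { kind: "binary"; op: string; left: Expr; right: Expr };

// Walks the tree and collects every variable reference not present in the scope,
// where the scope would hold DRD input data, decision names and context entries.
function undefinedVariables(expr: Expr, scope: Set<string>): string[] {
  switch (expr.kind) {
    case "number":
      return [];
    case "variable":
      return scope.has(expr.name) ? [] : [expr.name];
    case "binary":
      return [
        ...undefinedVariables(expr.left, scope),
        ...undefinedVariables(expr.right, scope),
      ];
  }
}
```

Type checking follows the same pattern, except that each node additionally synthesises a type that is compared against the operator's expectations.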

3.2.5 Validation of a decision table

Validations of decision tables are described in depth in the book by Bruce Silver named DMN Method and Style [25], in which he introduced countless possible validations on top of decision tables, ensuring their completeness and better maintainability. The following is a summary of such validations with accompanying rules.

• gaps between rules
• overlaps between rules
• conflicts between rules
• unused rules
• missing columns
• missing ranges
• cycles in the rules


• subsumption between rules, meaning two rules could be combined and the table contracted
• other best practices, such as guessing the "best" hit policy based on the defined rows
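Gap and overlap detection can be sketched for the simplest case of a single numeric input column, with each rule's unary test reduced to a half-open interval. This is a hypothetical simplification of the analyses described above; real tools must handle multiple columns and arbitrary unary tests.

```typescript
interface Interval { from: number; to: number } // [from, to)

// Sweeps the sorted rule intervals over the input domain, recording uncovered
// stretches as gaps and doubly covered stretches as overlaps.
function analyse(rules: Interval[], domain: Interval) {
  const sorted = [...rules].sort((a, b) => a.from - b.from);
  const gaps: Interval[] = [];
  const overlaps: Interval[] = [];
  let covered = domain.from; // everything below this point is covered
  for (const r of sorted) {
    if (r.from > covered) {
      gaps.push({ from: covered, to: r.from });
    } else if (r.from < covered) {
      overlaps.push({ from: r.from, to: Math.min(covered, r.to) });
    }
    covered = Math.max(covered, r.to);
  }
  if (covered < domain.to) gaps.push({ from: covered, to: domain.to });
  return { gaps, overlaps };
}
```

For example, rules covering [0, 18) and [18, 65) over a domain of [0, 100) leave a gap at [65, 100), while rules covering [0, 20) and [18, 65) overlap on [18, 20).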

3.2.6 Dynamic validation

Dynamic validation and analysis mainly comprise tests performed either on top of individual decisions and knowledge models (unit tests) or the whole DRD diagram (integration tests). Another beneficial type of test, comparing the results of the same model changed over time, is regression testing, showing the differences in decision logic between two different versions of the same model. Another type of test, proposed by Bruce Silver and supported by Drools [20], is auto-generated Modified Condition/Decision Coverage (MC/DC) tests, which allow checking for as many problems in decision tables with as few autogenerated tests as possible. Many tools provided by the vendors above have test components directly integrated into their editor user interfaces, enabling a business-user friendly, declarative definition of test cases.

• individual decision or knowledge model tests
• whole DRD tests, testing the interaction between individual elements
• regression tests between two versions of the same model
• MC/DC auto-generated tests
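A decision "unit test" of the kind described above can be sketched as follows; the decision logic is reduced to a plain function, and each test case pins the expected output for given inputs. All names and values here are illustrative, not taken from any real model.

```typescript
// A decision reduced to a function: age and loyalty years determine a discount.
function discount(age: number, loyaltyYears: number): number {
  if (age >= 65) return 20;
  return loyaltyYears >= 5 ? 15 : 0;
}

interface TestCase { inputs: [number, number]; expected: number }

// Declarative test cases, analogous to the test tables some editors offer.
const cases: TestCase[] = [
  { inputs: [70, 0], expected: 20 },
  { inputs: [30, 6], expected: 15 },
  { inputs: [30, 1], expected: 0 },
];

const failures = cases.filter(c => discount(...c.inputs) !== c.expected);
```

A regression suite would run the same cases against two versions of the decision and diff the outputs instead of comparing against fixed expectations.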

4 Language Server Protocol

This chapter reviews the concept of language servers with a focus on the Language Server Protocol (LSP) specified by Microsoft.

4.1 Overview

"Supporting rich editing features like auto-completions or Go to Definition for a programming language in an editor or IDE is traditionally very challenging and time consuming. Usually it requires writing a domain model (a scanner, a parser, a type checker, a builder and more) in the programming language of the editor or IDE. For example, the CDT plugin, which provides support for C/C++ in the Eclipse IDE is written in Java since the Eclipse IDE itself is written in Java. Following this approach, it would mean implementing a C/C++ domain model in TypeScript for Visual Studio Code, and a separate domain model in C# for Visual Studio." [26]

One of the latest approaches to supporting language features in editors is to use LSP-based language servers. LSP is a protocol used for communication between an IDE and a language server. It is designed to provide language features, such as validation, IntelliSense, jump-to-definition, auto-formatting and others, for any kind of text-based language, such as programming languages, domain-specific languages and other specifications. The protocol results from Microsoft's work on Visual Studio Code and its initiative to provide language features for their languages, such as C# and TypeScript. Based on their experience integrating language servers for these languages and other linters such as ESLint, they started to explore commonalities and began to work on a prototype of a general protocol, later named the LSP. LSP is based on JSON-RPC, which is a lightweight Remote Procedure Call (RPC) protocol for exchanging messages between an IDE and a language server.

A language server is a specialised library containing the domain-specific logic for providing language features for a given language. It is conceptually separated from the editors, running in its own process and using, for example, inter-process communication to talk to the editors. This approach enables separation from an editor's technology stack, meaning the library can be independently implemented in any programming language. Thanks to the common LSP, the language server library can also be reused across different editors. This removes the common problem faced by language tooling providers and code editor vendors in the past, which was having to implement new language support for each editor independently. With the use of LSP, it is possible to implement the language server logic just once and reuse it across different editors. Moreover, there are already supporting libraries and SDKs for many popular editors, such as VS Code, which provide all the necessary infrastructure for connecting an LSP-based language server. This makes it considerably easier to provide language features for a new language using the LSP rather than starting from scratch without it.

Another benefit of LSP is its performance, mainly because a language server operates in its own process. Such performance benefits can be seen from multiple perspectives:

• Scalable concurrent computing: All of the work is offloaded to a separate process, which enables concurrent computing1, with all the heavy language analysis computations running separately. Moreover, multiple language servers operating on the same file or workspace spawn multiple processes, resulting in concurrent and independent operations.

• Non-blocking processing: As the work is offloaded to a separate process, the main process remains unburdened and not blocked by the active processing. This might be especially useful in single-process focused editors or applications running on top of these editors, such as some specific extensions2.

• Close-to-immediate response: The language server is called a server for a purpose, as it is a long-running service which does not need to be started on each request. This is especially useful when providing linting of a file on change while the user edits its content. Having a long-running service is, in this case, a must-have, as spawning a new process is a relatively heavy CPU-bound task, and having to do it on each input change would not exactly bring the expected results.

1It is theoretically possible to achieve parallel or even distributed computing, but it depends on the editor's design, the chosen communication channel, the available machines and more.

2An example of such an extension is AdInsure Studio, which performs all of its logic in a single process.


Finally, LSP does not bring just the benefits mentioned above; in general, it tries to reduce the gap between specialised IDEs and other editors. Even though this currently has the most significant impact on bringing more language features to VS Code, it could theoretically also have an entirely different consequence. Thanks to the LSP, it is possible to seamlessly integrate all the available LSP-based language servers into a different editor and thus provide a similar experience to VS Code, at least on the level of available language features. This means that even the creation of a brand new editor now requires much less work. Therefore, it will be interesting to observe in the upcoming years whether the majority position of VS Code on the market will change in any direction or whether the LSP itself will contribute to a more distributed market, with more serious competitors to VS Code itself3.

3Major position of VS Code on the market according to Stack Overflow's yearly survey [27].

4.2 Architecture

The following sections provide an architectural overview of LSP.

4.2.1 Client, server and their capabilities

LSP follows a client-server architecture. The client operates as a thin layer on top of the editor and serves as an entry point for communication with the LSP server. The LSP client specifies the target LSP server, together with the chosen communication channel and other options. These options are, however, more relevant to specific LSP implementations rather than prescribed by the specification. It is the LSP client's responsibility to spawn the LSP server and maintain the established connection. The LSP server is then a standalone process instantiated by the client, containing a set of registered capabilities. As there is no need to support all of the capabilities of LSP for each client-server pair, it is necessary to announce the supported capabilities of the LSP server during its initialisation. Examples of such capabilities are completion, diagnostics or the text document synchronisation type. These capabilities are available either on the client or the server and are arranged into related groups, which are, in the current version of the specification, the following:

• Text document capabilities: Features related to text editors, such as diagnostics, completion or hover.

• Workspace capabilities: Features related to the editor workspace and operations within the workspace. This includes CRUD operations on top of the watched files.

• Experimental capabilities: Other experimental capabilities presently under development.
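How a server announces its capabilities can be sketched with a fragment of the initialize response. The field names below follow the LSP specification, while the selection of values is a minimal, illustrative one.

```typescript
// A fragment of an LSP initialize response announcing server capabilities.
// TextDocumentSyncKind per the specification: 0 = None, 1 = Full, 2 = Incremental.
const initializeResult = {
  capabilities: {
    textDocumentSync: 2, // incremental synchronisation
    completionProvider: { triggerCharacters: ["."] },
    hoverProvider: true,
  },
};
```

A client that does not find a capability announced here simply never sends the corresponding requests to that server.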

Moreover, the following are the most relevant capabilities provided by the LSP specification:

• Diagnostics: Diagnostics are validation objects returned by the static analysis validation of a given resource. They usually contain errors or warnings resulting from processing on the server. Once the diagnostics are computed, they can be sent as a notification to the client. Each diagnostic should contain the URI of the related file and an array of messages containing the computed message, with other information such as ranges identifying the problematic elements.

• Completion: Completion is the result of an IntelliSense request within the editor. The request contains the corresponding file and a cursor position, which can be used to find the AST element position and return the available context. After the context is returned, a response with an array of completion items can be sent directly to the client.

• Text document synchronisation: Text document synchronisation can be either none, incremental or full. It allows control over the amount of data sent between the client and the server. Type none does not sync document content with the server, type full always sends the whole content of a document, and type incremental sends just partial updates of the document to the server (apart from the first message, which always contains the full content).
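The diagnostics capability can be made concrete with the shape of a textDocument/publishDiagnostics payload. The interfaces below are simplified from the LSP specification (severity 1 = Error, 2 = Warning); the URI, message text and source name are invented for the example.

```typescript
// Simplified shapes from the LSP specification.
interface Position { line: number; character: number }
interface Range { start: Position; end: Position }
interface Diagnostic { range: Range; severity?: number; message: string; source?: string }
interface PublishDiagnosticsParams { uri: string; diagnostics: Diagnostic[] }

// One error diagnostic for a hypothetical DMN file, pointing at a FEEL expression.
const params: PublishDiagnosticsParams = {
  uri: "file:///model.dmn",
  diagnostics: [
    {
      range: { start: { line: 12, character: 4 }, end: { line: 12, character: 9 } },
      severity: 1, // Error
      message: "Undefined variable 'tax' in FEEL expression",
      source: "dmn-language-server",
    },
  ],
};
```

Sending an empty diagnostics array for the same URI is how a server clears previously reported problems.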

4.2.2 JSON-RPC

LSP is based on JSON-RPC version 2.0, a lightweight Remote Procedure Call (RPC) protocol for exchanging messages between an IDE and a language server. JSON-RPC prescribes JSON-serialised messages as the exchange format. Based on its specification, only the following types of messages can be exchanged:

• Request message: Requests are messages intending to call a particular method with a given set of parameters while expecting a response. Their structure contains three members - method (string name of the method to be invoked), params (object or array of parameters for the method) and id (a string or a non-fractional number, used to match the request with a response).

• Response message: Contrary to request messages, response messages contain either a result or an error (both JSON-serialised objects) and an id (the id of the matching request).

39 4. Language Server Protocol

• Notification message: Notification messages were added in JSON-RPC version 2.0. They are messages that do not require a response. Contrary to request messages, they contain just a method and params, as an id is unnecessary since no response is expected.
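The three message kinds can be sketched as plain objects, together with the Content-Length framing that the LSP base protocol wraps around each serialised JSON-RPC message. The method names are real LSP methods; the payload values are illustrative:

```typescript
// Sketch: the three JSON-RPC 2.0 message kinds exchanged between an
// LSP client and server, plus the Content-Length framing that LSP's
// base protocol prepends to every serialised message.

const request = {
  jsonrpc: "2.0",
  id: 1, // matched by the response below
  method: "textDocument/completion",
  params: {
    textDocument: { uri: "file:///a.dmn" },
    position: { line: 2, character: 7 },
  },
};

const response = {
  jsonrpc: "2.0",
  id: 1, // same id as the request
  result: [{ label: "sum" }, { label: "substring" }],
};

const notification = {
  jsonrpc: "2.0", // no id: no response expected
  method: "textDocument/didClose",
  params: { textDocument: { uri: "file:///a.dmn" } },
};

// LSP base protocol: each message is prefixed with an HTTP-like header.
function frame(message: object): string {
  const body = JSON.stringify(message);
  return `Content-Length: ${Buffer.byteLength(body, "utf8")}\r\n\r\n${body}`;
}

console.log(frame(notification).startsWith("Content-Length:")); // true
```

The `id` member is what lets the client pair an asynchronous response with its originating request, while notifications deliberately omit it.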

4.2.3 Communication flow

The communication flow between the IDE and a language server can get complex with many documents, but in general it can be simplified to the following steps for a single document:

• Open document: The user opens a document within the editor. The LSP client notices this event and notifies the server that the corresponding file was opened. It does so with the didOpen notification, sending the in-memory content of the corresponding file.

• Edit document: The user edits a document within the editor. The LSP client notices this event and notifies the server that the corresponding file was edited. It does so with the didChange notification, sending the in-memory content of the corresponding file. It is then up to the server what to do with this information. One possible way would be to return results from the file's static analysis and send them back to the client as diagnostics containing errors and warnings, using the publishDiagnostics notification.

• Request specific method: The user requests a specific method, such as completion. The client notices this event and sends a request to the server with the corresponding file and related information, such as the cursor position. The server then processes this request and responds with the corresponding result - in this case, completion items.

• Close document: The user closes a document within the editor. The LSP client notices this event and notifies the server that the corresponding file was closed. It does so with the didClose notification.


An example of the communication flow between the LSP client and the LSP server is visible in the following Figure 4.14.

Figure 4.1: Interactions between a user, LSP client and LSP server

4.2.4 Limitations

Even though the Language Server Protocol solves many issues and is currently one of the best options for providing language features for various text-based languages inside corresponding editors, it is still under development and has some missing features and general limitations resulting from its design.

4The diagram is based on the Visual Studio Code and LSP communication flow available on the Microsoft documentation portal [28].


• Tailored just for text documents: One of the significant limitations arises from LSP being specifically designed for text-based languages, with characters as their atomic units. It is not designed to support other formats, such as binary formats or graphically-based languages with nodes and edges as atomic units5. There are, however, existing initiatives to support specific graphical language features (i.e. editing capabilities). One such initiative is the Graphical Language Server Platform (GLSP) maintained by Eclipse [32].

• Unspecified communication channels: The protocol does not prescribe specific communication channels to be used between the client and a server. This is not necessarily a limitation but rather a freedom; however, it can differ from implementation to implementation, can potentially result in incompatibilities and has to be kept in mind. Even though the default option in VS Code's case is to use an IPC connection, other communication channels are possible, such as standard I/O or WebSockets.

• One server cannot have multiple clients: Another limitation is that the language server currently assumes that it serves only one client. Thus, even though the communication could be performed over the network, it does not mean that one server can serve multiple clients. Also, as each server is a standalone unit, no communication between servers is allowed, so it is impossible to perform a joint task between servers. The VS Code ecosystem's current approach also leads to multiple specific language servers, one for each extension. As each extension is usually mapped 1:1 with a language server, this results in multiple independent processes (at least two for each pair), higher resource usage, and possible aggregation on top of extension packs6. On the other hand, this approach is not forced by the specification nor by VS Code. Therefore, a different strategy could be used in specific cases, such as a 1:many mapping between an extension and language servers or a 1:1 mapping between an

5This limitation is further addressed in the following papers [29] [30] [31].
6Such a pack is named Extension Pack Roundup [33].


extension and a language server manager, aggregating multiple language servers within one process.

• No direct way to enable project-wide diagnostics: There is currently no capability to easily enable project-wide diagnostics, meaning diagnostics performed on top of the whole workspace and not just the opened files. It is theoretically possible to trigger validation of the whole workspace during server initialisation as an asynchronous job and leave the rest of the handling to the server's registered capabilities. However, there are some hidden quirks: one of the most complex language servers, and an inspiration for LSP (even though itself not yet fully LSP compliant), still provides these features only as experimental. Moreover, discussions on how to provide them have been open for almost five years7.

• No syntax highlighting support: Even though LSP supports multiple language features, syntax highlighting is not one of them. However, there are ongoing discussions and efforts to include syntax highlighting in LSP, meaning it could eventually end up in the specification. In the meantime, one of the most popular options for including syntax highlighting in editors is using TextMate Language Grammars [35], which are widely supported across a vast number of editors.

4.2.5 Conclusion

Based on the analysis in this chapter and the DMN chapter, it is possible to answer research question number two.

Question 2

Is it possible to use LSP to retrieve DMN model validation results?

7Feature request for enabling project-wide diagnostics [34].


The research question needs to be seen from two different perspectives. From a functional perspective, LSP provides a set of language features, including resource validation, which makes it suitable for retrieving validation results. Moreover, DMN is a visual language, but its FEEL expressions are still text-based inputs, just operating with the context of particular DMN decisions. The use of LSP thus, in this perspective, makes the same sense as for any other programming or domain-specific language. From the technical perspective, LSP is designed for almost any text-based input, such as programming languages, domain-specific languages [36] [37] and other specifications. Thus, LSP's architecture has no fundamental limitations in retrieving validation results for DMN's XML-based resource in the sense of FEEL validations. Other features tied strictly to graphical diagrams, such as diagram editing, are addressed by GLSP, which in the DMN and Visual Studio Code context could be seen as a complementary protocol rather than a strict alternative to LSP.

5 Visual Studio Code

The following chapter reviews basic Visual Studio Code concepts related to the LSP topic, which are essential for the proposed solution design. These concepts include the Electron.js platform and relevant categories of VS Code's Extension API.

5.1 Overview

Visual Studio Code is an open-source editor developed by Microsoft, primarily focused on providing source-code maintenance features across multiple operating systems. Built-in features include source control management, debugging, and language features (i.e. through the LSP) for various programming languages. Most of these features are treated as extensions to the VS Code core. Moreover, VS Code provides a way for the community to develop custom extensions through its public Extension API. These custom extensions can be either public and maintained through the integrated marketplace, or private and distributed using special .vsix files.1 Community-built extensions solve a wide area of problems, can be utilised in different development lifecycles and thus serve many types of users. Furthermore, other related APIs, such as the Webview API and the Custom Editor API, focus on embedding a web application directly into the VS Code editor. These applications can work as standalone webpages (in the case of the plain Webview API) or serve as graphical and more business-user-friendly views on top of the opened documents (in the case of the Custom Editor API combined with the Webview API). Such concepts extend VS Code's capabilities even further, broadening the target audience from developers, testers or DevOps engineers to technical business users.

1Which are, under the hood, plain .zip files containing the extension code and the structure specified by the Extension API.

5.2 Electron.js

Figure 5.1: Electron.js application architecture

Visual Studio Code is based on Electron.js2, a JavaScript framework maintained by GitHub, providing a runtime for building cross-platform desktop applications using web technologies such as HTML, CSS and JavaScript. Electron.js's wrapper enables having one codebase for an application and packaging it separately for each supported platform - Windows, Linux and macOS. The fundamental architectural components of the Electron.js framework are:

• Chromium: Web engine for rendering web page UIs, also used as the core rendering engine in the Chrome, Brave and Microsoft Edge browsers.

• Node.js: JavaScript runtime enabling the execution of JavaScript outside the browser, with access to native operating system APIs, such as the filesystem or networking.

2Other popular applications built with Electron.js include Atom, Slack, Microsoft Teams, Discord, Twitch, Left or Figma.


• V8: A performant JavaScript engine, written in C++, which is part of the Chromium project and a key element of the Chromium web engine as well as the Node.js runtime.3

The core of an Electron.js application is the main process, which interacts with the GUI of the operating system and is responsible for maintaining renderer sub-processes and the overall bootstrapping of the whole application. Renderer sub-processes are standalone processes responsible for rendering a web page using the Chromium engine and thus displaying the UI of the application. The use of the Chromium engine also means support for tracing activities, such as debugging or profiling. Communication between the main process and the renderer sub-processes, such as performing GUI operations and other native API calls, is based on inter-process communication (IPC) [38]. A built-in module in the Electron.js framework for IPC provides publish-subscribe pattern-based communication over user-specified channels. Other built-in features include native GUI elements, such as OS-specific dialogues or menus, and additional benefits, such as out-of-the-box installers, push-like auto-updates or crash reporting.
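The channel-based publish-subscribe IPC described above can be sketched with Node's EventEmitter standing in for the Electron bridge; in a real application the renderer publishes with `ipcRenderer.send(channel, data)` and the main process subscribes with `ipcMain.on(channel, handler)`. The channel name and payload below are illustrative:

```typescript
import { EventEmitter } from "events";

// Conceptual sketch of Electron's channel-based publish-subscribe IPC,
// modelled with Node's EventEmitter so it runs outside Electron.

const ipc = new EventEmitter(); // stands in for the Electron IPC bridge
const received: string[] = [];

// "Main process" side: subscribe on a user-specified channel
// (in Electron: ipcMain.on("dialog:open", handler)).
ipc.on("dialog:open", (path: string) => received.push(path));

// "Renderer" side: publish a message on the same channel
// (in Electron: ipcRenderer.send("dialog:open", path)).
ipc.emit("dialog:open", "/models/pricing.dmn");

console.log(received[0]); // "/models/pricing.dmn"
```

The key design point is that subscribers and publishers are decoupled: both sides only agree on a channel name and a payload shape.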

3Other popular software utilizing the V8 engine includes the Couchbase database server.

5.3 Extension API

"Visual Studio Code is built with extensibility in mind. From the UI to the editing experience, almost every part of VS Code can be customized and enhanced through the Extension API. In fact, many core features of VS Code are built as extensions and use the same Extension API." [39]

VS Code's Extension API offers a robust interface for building custom extensions4 of the editor. Such extensions can enhance the editor in multiple areas, such as the following:

• Custom themes: Customize the VS Code’s look and feel with a different set of icons and colour themes.

• Custom UI elements and functionality: Enhance VS Code with custom elements in its user interface. Embed custom webpage views, or so-called webviews, directly into the editor. Enhance the experience with custom editors on top of the custom webviews.

• New language support: Add support for a new language, including language features using the predefined VS Code LSP SDKs or debugging features utilizing the Debug Adapter Protocol (DAP).

The following section reviews the VS Code extensibility APIs and related concepts used as fundamental building blocks in the proposed solution. The relationships between the listed concepts are visualised in a class diagram in Figure 5.2. The diagram focuses on conceptual explanation rather than precisely reflecting implementation details. Moreover, concepts that are not discussed, together with their properties, relations and methods, are hidden to keep the diagram lighter and more understandable on the conceptual level.

4Among the most popular public extensions in the VS Code marketplace are GitLens, Python, Docker, ESLint, XML Tools and many more.


Figure 5.2: Class diagram showing conceptual relations between the mentioned concepts

5.3.1 TextDocument

TextDocument is an interface representing an in-memory instance of a text-based document. An instance of a class implementing this interface holds the document content, URI, version and other related properties. Furthermore, the TextDocument interface is the base model for the text editor or a custom editor UI. The same model can be utilised by multiple UI views, displaying multiple views of the same model next to each other. This approach makes changes in one editor's UI immediately visible in another editor's UI instance, as both listen to changes on the same TextDocument model. In addition, such changes to the model can be listened to by other entities, such as the LSP client, which can send the TextDocument model for further processing to the LSP server.
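The role of TextDocument as a shared observable model can be sketched with a simplified stand-in class; this is a conceptual illustration, not the actual VS Code implementation:

```typescript
// Conceptual sketch of why several views and an LSP client can observe
// one TextDocument: the model holds the content and notifies every
// registered listener on change.

type Listener = (doc: InMemoryDocument) => void;

class InMemoryDocument {
  version = 0;
  private listeners: Listener[] = [];
  constructor(public readonly uri: string, private content: string) {}

  getText(): string { return this.content; }

  onDidChange(listener: Listener): void { this.listeners.push(listener); }

  edit(newContent: string): void {
    this.content = newContent;
    this.version++;                       // VS Code also versions documents
    this.listeners.forEach(l => l(this)); // text editor UI, custom editor
  }                                       // and LSP client all react here
}

const doc = new InMemoryDocument("file:///a.dmn", "<definitions/>");
const log: string[] = [];
doc.onDidChange(d => log.push(`editor sees v${d.version}`));
doc.onDidChange(d => log.push(`LSP client sends v${d.version} to server`));
doc.edit("<definitions><decision/></definitions>");
console.log(log.length); // 2
```

One edit fans out to every listener, which is exactly the mechanism that keeps multiple views of the same file in sync and lets the LSP client forward each change to the server.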

5.3.2 Text editor

The text editor is a view-layer instance of a standalone Monaco editor, which works on top of the TextDocument data model. The Monaco editor


displays the opened file content and enables multiple features, such as IntelliSense visualization, syntax colourization, auto-formatting, line counts or multi-line edits. Each instance is mapped 1:1 to an opened file, with potentially countless instances opened at the same moment within VS Code.

5.3.3 Webview API

The Webview API allows the insertion of web applications next to the embedded text editors. Such web applications are not further restricted and can either serve as more business-user-friendly views next to the text editors or as standalone applications, such as graphical editors. Webviews are conceptually similar to HTML iframe elements and Electron.js's renderer sub-processes. They can render almost any web application within their content, are spawned as independent processes by the main extension process, and communicate with the main process through IPC, which is based on a simple postMessage API (with the possibility to build an RPC layer on top of it)5.

5.3.4 Custom Editor API

The Custom Editor API6 utilizes the Webview API's potential and combines it with the TextDocument model. Such interconnection enables the creation of alternative editors to the built-in text editor and provides out-of-the-box support for backups (undo/redo), autosave, dirty files and more. In order to create a custom text editor, it is necessary to implement CustomTextEditorProvider with its resolveCustomTextEditor method, which is responsible for resolving the custom text editor and synchronizing changes between the TextDocument and a Webview. Moreover, as the editor works on the TextDocument model, changes to the model are automatically listened to by other entities, such as the LSP client, making the connection from the custom editor to the LSP language server work out of the box. However, the opposite connection needs to be explicitly established, with explicitly set listeners to LSP client-server notifications (such as changed diagnostics) or

5Microsoft already provides a helpful library for such needs [40].
6This part is described in the context of text-based editors, but VS Code also offers binary custom editors, which work on top of binary files.


responses from the LSP client-server requests (such as completion response).
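The TextDocument-to-webview direction described above can be sketched with local stand-in types; the real signatures live in the `vscode` module (vscode.CustomTextEditorProvider, vscode.TextDocument, vscode.WebviewPanel), and the `{ type: "update", ... }` message shape is an assumption, not a prescribed format:

```typescript
// Sketch of the resolveCustomTextEditor responsibility, using local
// stand-in interfaces instead of the real `vscode` types.

interface Doc {
  uri: string;
  getText(): string;
  onDidChange(cb: () => void): void;
}
interface Webview { postMessage(msg: unknown): void; }

class DmnEditorProvider {
  // Called whenever a .dmn file is opened with the custom editor.
  resolveCustomTextEditor(document: Doc, webview: Webview): void {
    const sync = () =>
      webview.postMessage({ type: "update", text: document.getText() });
    document.onDidChange(sync); // TextDocument -> webview direction
    sync();                     // initial render
  }
}

// Simulation with in-memory fakes:
const sent: unknown[] = [];
let text = "<definitions/>";
const listeners: Array<() => void> = [];
const doc: Doc = {
  uri: "file:///a.dmn",
  getText: () => text,
  onDidChange: cb => { listeners.push(cb); },
};
new DmnEditorProvider().resolveCustomTextEditor(doc, {
  postMessage: m => sent.push(m),
});
text = "<definitions><decision/></definitions>";
listeners.forEach(cb => cb()); // the document changed
console.log(sent.length); // 2: initial render + one change
```

Note that the provider never writes to the document here; the opposite direction (webview edits flowing back into the TextDocument) must be wired explicitly, as the section above explains.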

5.3.5 Language extensions

VS Code's Extension API also offers various ways of providing language extensions. It divides [41] such extensions into two separate categories:

• Declarative language features: As the name suggests, these are declaratively defined in VS Code's configuration files. Such features include syntax highlighting, bracket autoclosing, completion and many more7.

• Programmatic language features: On the other hand, these are dynamic features often handled by a language server and the LSP. Such features cover validations, completions, refactoring and more. For more convenient development of such features, VS Code provides an SDK for LSP development (Figure 5.3) with many useful utilities for creating an LSP client-server pair.
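A declarative contribution is a plain `package.json` fragment. The following illustrative sketch registers a hypothetical FEEL language together with a TextMate grammar for syntax highlighting; the `feel` identifier and the file paths are assumptions, not actual AdInsure Studio configuration:

```json
{
  "contributes": {
    "languages": [
      {
        "id": "feel",
        "extensions": [".feel"],
        "configuration": "./feel.language-configuration.json"
      }
    ],
    "grammars": [
      {
        "language": "feel",
        "scopeName": "source.feel",
        "path": "./syntaxes/feel.tmLanguage.json"
      }
    ]
  }
}
```

The `contributes.languages` and `contributes.grammars` contribution points are evaluated by VS Code without running any extension code, which is precisely what makes these features "declarative".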

Figure 5.3: SDK for development of VS Code’s LSP-based server

7Declarative language features [42]


5.3.6 Conclusion

Based on the analysis in this and the previous chapter, we can answer research question number one.

Question 1

Can LSP be used to exchange information with GUI-based editors in VS Code, such as with the Rule (DMN) editor?

It is possible to utilise the potential of VS Code's Custom Editor API to maintain a connection with the LSP client-server pair. In terms of validations, the connection from the custom editor to the LSP client is automatically solved because both entities operate on the TextDocument model. The opposite connection has to be explicitly established by listening to changes coming from the LSP server. Such a connection can be created in multiple ways, either by following an event-driven architecture pattern8 or simply by keeping a reference to the LSP client instance in each custom editor. In both approaches, the custom editor is then responsible for synchronising the data to the corresponding webview. Other features, such as completion, need to be explicitly set up for both directions.

8This approach is used in the mentioned Eclipse GLSP and its integration into VS Code [43].

6 Proposed solution

This chapter reviews the requirements for the end-system, utilises the concepts reviewed in previous chapters and proposes a solution architecture based on them.

6.1 Requirements

The following sections review functional requirements, quality attributes, technical constraints and business constraints for the end-system. The end-system's proposed architecture is a result of the analysis in the previous chapters. Based on this analysis, a solution architecture for providing language features inside DMN graphical editors, such as on-change validation and completion, is introduced. The proposed solution architecture is specifically designed to address related domain problems in the context of commercial, enterprise-level insurance software called AdInsure Studio.

6.1.1 Functional requirements

The following list is not a complete list of all the functional specifica- tions that the system must offer but rather serves as an overview of the specifications already prioritised with the MoSCoW (Must have, Should have, Could have, Won’t have) method.

Must have

FR1.1 Ability to provide FEEL validations raised during the compilation process, namely errors from the lexical and syntactical analysis based on the defined grammar.

FR1.2 Ability to provide additional validations of FEEL expressions, namely undefined variables or input data, with the possibility of extending it with further validations.

FR1.3 Ability to display the provided validations inside the Webview-based DMN editor and VS Code's Problems panel, on each change and thus on dirty/unsaved file content.


Should have

FR2.1 Ability to execute and display validations using the CLI.

FR2.2 Context-aware IntelliSense for FEEL inside the boxed expressions, providing available data in the given scope, together with FEEL functions and with the possibility of extending it with other information, such as user-defined functions or AdInsure's specific input data types.

FR2.3 Ability to display FEEL IntelliSense inside the DMN editor's boxed expressions on each request.

Could have

FR3.1 Ability to provide validations against the XSD schema.

FR3.2 Ability to provide validations of the DRD and its elements.

FR3.3 Ability to provide validations of a decision table.

Won’t have

FR4.1 Dynamic validation and testing, as it is not in the scope of the LSP capabilities.

FR4.2 Ability to validate, in addition to FEEL expressions, also pure JavaScript expressions.

6.1.2 Quality attributes

Similarly to the functional requirements, the non-functional requirements, or so-called quality attributes, are specified in the following list. In contrast to the functional requirements, the list contains only attributes of the system that shall be considered during the design phase.


Performance

QA1.1 The system should be able to process one DMN file in a few hundred milliseconds, without any disruptive delay.

QA1.2 The system should promptly respond to validation requests on each change in the Rule editor.

QA1.3 The system should promptly respond to IntelliSense requests in the Rule editor.

Usability

QA2.1 The system should not block the Rule editor's UI thread, allowing interaction with the editor while the requests are being offloaded to a separate thread, process or equivalent.

QA2.2 The system should process a DMN file with both dirty and saved content.

Maintainability

QA3.1 The system should be understandable, and its key components should be well documented, preferably using diagrams.

Testability

QA4.1 The system or its major components need to be prepared for automated tests, at least in the scope of individual units or components.


Extensibility

QA5.1 Possibility to extend the DMN validation functionality without drastically modifying the existing source code.

Integrability

QA6.1 Possibility of integration with already existing validations performed by AdInsure Studio backend services.

QA6.2 Possible integration capabilities with third-party validation software or mutual interoperability with such software.

6.1.3 Technical constraints

Node.js technology stack

A technical constraint comes with the technology stack used in Adacta, specifically in AdInsure and AdInsure Studio development. AdInsure Studio is a VS Code extension, indicating that it relies on the Node.js-based Extension API with NPM-based packages. Furthermore, all of the AdInsure Studio backend services are built with Node.js as well and should be, as specified in the quality attributes, interoperable with the proposed solution. Moreover, DMN is already being transpiled to JavaScript, which is, in AdInsure, the run-time execution language for the DMN-specified model. The JavaScript-based DMN transpiler already parses the DMN model into an AST, which is the structure that should ideally be reused for consistent results between run-time and static-analysis validation outputs. Finally, adopting a different technology stack might be inefficient for the company. All of that makes the dependency on the Node.js technology stack more or less necessary.

Platform cross-compatibility

Another technical constraint is connected with platform cross-compatibility. The final solution should primarily support Windows-based systems, as the default systems for configuration via the AdInsure Studio extension, with secondary support for Linux-based systems operating on the CI.

6.1.4 Business constraints

Time and resources

As specified in the DMN validation breakdown section, DMN validation consists of many steps and levels that the listed vendors already provide. However, the open-source community reached these validation levels after more than six years of active development with multiple iterations. Also, the validations are constantly being improved, and the majority of vendors are nowhere close to supporting all of them. Therefore, it would be inefficient in terms of time and resources to start everything again and waste time reimplementing something that has already been successfully created. Hence, if possible, everything that satisfies the constraints and could be reused should be reused, allowing the focus to stay on must-haves and AdInsure specifics, as mentioned in the functional requirements.

6.2 Architecture

The following section describes the architecture proposed to meet the requirements specified above.

6.2.1 Overview

The system’s architecture contains three main components:

• DMN custom text editor

• LSP language server

• DMN analysis

Both the LSP language server and DMN analysis are separate NPM packages. However, the DMN custom text editor is part of the AdInsure Studio extension package. Dependencies between these packages are visible in Figure 6.1. Each component is thus a standalone unit and could technically be tested and developed independently by different teams in parallel.


Figure 6.1: Dependencies between the separate packages1

This architecture also enables seamless scalability in the horizontal direction, meaning there could be multiple systems with these three components complementing each other in one extension. This could be utilised for multiple reasons. One of them is distributing the processing logic and thus increasing the processing performance. Another could focus on bringing in more features through integration with other specialised third-party systems, such as Red Hat's Drools.

1Only packages relevant to the domain model are shown to keep the diagram inlined in the text and easier to follow.

6.2.2 LSP language server

Figure 6.2: Main conceptual relations inside and outside of the lan- guage server2

The LSP language server connects backend validation services, in this case the DMN analysis package, with the VS Code extension and thus, indirectly, with the DMN custom text editor. It is a standalone NPM package, instantiated by the VS Code LSP client in the extension as a completely separate process. It operates as a facade-like layer, redirecting requests from the extension to explicit DMN analysis services through the registered providers. It is crucial that the language server remains thin and does not grow into a god-component antipattern, as its

2Only classes, methods, fields, interfaces and types relevant to the domain model are shown to keep the diagram inlined in the text and easier to follow.

nature does not allow easy automated testing, nor effortless or beneficial integration with the CLI, which is, in this case, a must-have requirement. In addition, CLI and extension calls should be as similar as possible, meaning the logic should be sunk lower, into the DMN analysis package. Therefore, when calling the backend services from the extension, the requests are offloaded through the language server. On the other hand, when calling the backend services through the CLI, they can be called directly, as LSP does not provide any additional value in that case. Nevertheless, LSP is tailored explicitly for communication between text editors and a language server, providing many features, such as IntelliSense, which are not needed when using the CLI. This means that technically, IntelliSense-related code could be located directly in the language server. However, as its core is very similar to the validation logic, it is kept within the DMN analysis package for better maintainability, following the single-source-of-truth design.
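The facade idea can be sketched as follows. All names below (validateModel, DmnIssue, onDidChangeContent, cliValidate) are illustrative, not the actual AdInsure Studio API; the point is that the LSP handler and the CLI entry point both delegate to the same dmn-analysis function:

```typescript
interface DmnIssue { message: string; line: number; }

// dmn-analysis package (illustrative): the single validation entry point.
function validateModel(xml: string): DmnIssue[] {
  // A real implementation would parse the XML, build FEEL ASTs and run
  // rules; this stub only checks that at least one decision exists.
  return xml.includes("<decision")
    ? []
    : [{ message: "No decision defined", line: 0 }];
}

// LSP server side: a thin handler that only translates the request,
// keeping all analysis logic out of the server itself.
function onDidChangeContent(uri: string, text: string) {
  return {
    method: "textDocument/publishDiagnostics",
    params: { uri, diagnostics: validateModel(text) },
  };
}

// CLI side: the same entry point, called directly without LSP.
function cliValidate(text: string): number {
  return validateModel(text).length; // e.g. exit code = number of issues
}

console.log(cliValidate("<definitions/>")); // 1
```

Because the server handler contains no analysis logic of its own, the CLI and the editor are guaranteed to report identical results for identical input.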

6.2.3 DMN analysis

DMN analysis is a standalone NPM package containing all the backend services required for the validation and IntelliSense support of the DMN model. It is designed with the fifth research question in mind.

Question 5

While using LSP for communication with the editor, is it still possible to provide validations using the CLI?

Being a standalone package containing all validation logic, it enables easy integration within the Node.js stack, either with the LSP language server for the DMN editor or with the AdInsure Studio command-line utility for the CI use case. Additionally, it unlocks the possibility of integration with already existing AdInsure Studio validations. The DMN analysis package structure is inspired by ESLint, a popular static code analyser for JavaScript, written in JavaScript. One of the main concepts of ESLint, and thus of DMN analysis, is the idea of rules,

which are small individual units of static code analysis that can be easily executed, maintained, tested and extended.

Figure 6.3: Main conceptual infrastructure of DMN analysis package and its dependencies3

The overall package infrastructure is built around a custom DMN model reader, FEEL parser, FEEL AST traverser, DMN model traverser, Rules executor and Build context visitor:

• DMN model reader: A custom package by Adacta for reading DMN files, which is already being used for transpilation purposes. Given the XML file content, the reader returns a JavaScript

3Only classes, methods, fields, interfaces and types relevant to the domain model are shown to keep the diagram inlined in the text and easier to follow.


DMN model object, which can be further analysed and processed. Alternatively, the open-source DMN moddle could be used, which serves a similar purpose and is maintained by Camunda's bpmn.io project.

• FEEL parser: Based on the open-source js-feel NPM package provided by EdgeVerve and customised for Adacta's needs. The package provides a FEEL grammar and a parser creating the FEEL AST, which is used for further static analysis. The base package is licensed under the MIT license, allowing free-of-charge commercial usage under specific conditions, such as providing the license within all copies of the software.

• FEEL AST traverser: Inspired by the ESLint traverser, it is focused on traversing the FEEL AST top-down, executing a visitor function on each visited AST node. It is designed to be executed on each node entrance but can also be adjusted for a special node-exit visitor. If needed, the same logic could be reused to traverse the AST differently, such as in reversed order - bottom-up.

• DMN model traverser: Inspired by the FEEL AST traverser, it gets a DMN decision and its decision context and executes a visitor function on each boxed expression. The Rules executor then provides the visitor functions (rules) in order to validate the boxed expressions.

• Rules executor: The class responsible for loading and executing a group of related validation rules (visitor functions), either for FEEL or for boxed expressions.

• Build context visitor: A collection of visitor functions operating on the FEEL AST or DMN model, creating a context of variables and functions available in the given scope for validation and completion purposes.

Validation rules

Rules are, in general, visitor functions operating on the AST of the source code. Their execution is based on a traverser which, for each defined rule, traverses the whole AST and executes the rule as a visitor function on each individual AST node. It is then up to the rule to set constraints on the node type, parent type and other context, determining whether it should be executed or not.

Rules are highly maintainable. Each rule can be independently tested against given scenarios, and adding a new rule or removing an old one does not require any significant changes to the existing source code. In addition, rules are expressive on their own: just by looking at a rule's name, it is obvious what the rule does, at least from a high-level perspective.

The time complexity of the whole validation increases linearly with each rule, which for tens or hundreds of rules might cause issues. In case of performance problems, multiple techniques could be applied to improve the execution time, such as:

• Caching and execution of some rules just on the changed parts of the AST (or DMN model).

• Debounce and throttle techniques, limiting the number of times the rule executor can be called.

• Limiting the number of traversals over the AST to a minimum, e.g. by executing multiple rules or groups of rules on each node visit4.
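As an illustration of the debounce technique above, a small wrapper can limit how often validation runs while the user is still typing. The delay value and the wrapped function are illustrative, not part of the actual implementation:

```javascript
// Sketch of a debounce wrapper limiting how often the rules executor
// runs while content changes keep arriving (delay value is illustrative).
function debounce(fn, delayMs) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

let executions = 0;
const validate = debounce(() => { executions += 1; }, 50);

// Three rapid content changes collapse into a single validation run.
validate();
validate();
validate();
setTimeout(() => console.log(executions), 150); // → 1
```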

In the context of DMN model validation, and based on the requirements specified above, two logical types of rules are recognised:

• Rules that operate on the XML tree of DRD and its elements.

• Rules that operate on the ASTs of individual boxed expressions inside the DRD decisions.

The first group of rules operates on the XML tree of the DRD and its elements, meaning it can cover all the validations of the DRD and its elements as described in the DMN model validation breakdown.

4 This technique could be especially beneficial in the case of simple rules and big AST structures, where the traversal of the AST itself is the actual bottleneck.


However, these types of validations fall into the "could have" requirements group and are also provided by the open-source community through the possible integrations; therefore, they remain addressed at the conceptual level. Nevertheless, the logic of processing the DRD and its elements is still essential even without individual rules: it provides the contextual information for a particular decision and the corresponding FEEL expression rules, such as input data, defined context entries and other scope-related information. This kind of information is needed not just during validation, as input for individual rules, but also for context-aware IntelliSense.

The second group of rules covers the most important requirements, as it focuses on the static analysis of the individual FEEL boxed expressions. Rules for FEEL expressions operate on the FEEL AST, allowing checks both for errors raised during the compilation process and creation of the AST and, afterwards, for the validity of individual FEEL expressions, such as the usage of undefined data.
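A condensed, runnable sketch of these concepts — a top-down AST traverser, a rules executor batching all rules in a single traversal, and a FEEL-level rule using DRD-provided scope to flag undefined names — might look as follows. All node, rule and model shapes are illustrative assumptions, not the actual js-feel AST or Adacta's implementation:

```javascript
// Top-down AST traverser executing an enter visitor on each node.
function traverse(node, { enter }) {
  enter(node);
  for (const child of node.children ?? []) traverse(child, { enter });
}

// Rules executor: each rule constrains itself to node types and pushes
// diagnostics; all rules are executed within a single traversal here.
function runRules(ast, rules, context) {
  const diagnostics = [];
  traverse(ast, {
    enter(node) {
      for (const rule of rules) {
        if (rule.nodeTypes.includes(node.type)) rule.visit(node, context, diagnostics);
      }
    },
  });
  return diagnostics;
}

// FEEL-level rule: flag names missing from the decision's scope.
const noUndefinedNames = {
  name: 'no-undefined-names',
  nodeTypes: ['Name'],
  visit(node, context, diagnostics) {
    if (!context.scope.has(node.value)) {
      diagnostics.push(`Undefined input data: ${node.value}`);
    }
  },
};

// DRD level: the decision's required inputs provide the scope.
const decision = { name: 'Eligibility', requiredInputs: ['Applicant Age', 'Annual Income'] };
const context = { scope: new Set(decision.requiredInputs) };

// FEEL AST for an expression such as: Applicant Age >= Minimum Age
const feelAst = {
  type: 'Comparison',
  children: [
    { type: 'Name', value: 'Applicant Age', children: [] },
    { type: 'Name', value: 'Minimum Age', children: [] },
  ],
};

console.log(runRules(feelAst, [noUndefinedNames], context));
// → ['Undefined input data: Minimum Age']
```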

6.2.4 DMN custom text editor

The DMN custom text editor is the last piece of the puzzle. Unlike the previous pieces, it is not a standalone package but a part of the AdInsure Studio extension package. The editor's UI part already exists: a DMN editor implemented using the Webviews API. This editor, however, operates only on the saved file content, reading the XML document data and writing the data back to the file when a save button is clicked. This approach does not allow operations on dirty files and on change, which are crucial concepts for convenient and user-friendly validations and IntelliSense.

Nevertheless, this is where the Custom text editor API comes to the rescue: as defined previously, it is an API that enables operating on an in-memory TextDocument model. The Custom text editor API almost automatically solves the operations on dirty content and additionally brings other out-of-the-box benefits, such as support for autosave and backups. Moreover, as VS Code's LSP client listens only to changes performed on the TextDocument model, this part is automatically solved with this approach as well.


Figure 6.4: DMN analysis package infrastructure and its dependencies5

6.2.5 Validation process example

The validation process example describes the whole communication between the individual components. The sequence is very similar to the diagram available in Figure 4.1; however, here it is explained with the concepts from the proposed solution and with a focus on the communication between the actual processes.

5 Only classes, methods, fields, interfaces and types relevant to the domain model are shown to keep the diagram inlined in the text and easier to follow.

The whole flow starts in the DMN editor's view, with the primary goal of providing diagnostics inside the editor's UI after a content change in the editor.

1. A user makes a content change in the DmnWebviewPanel. • webview process

2. DmnWebviewPanel requests the DmnCustomTextEditorProvider to update its TextDocument model. • webview process -> main extension process

3. A change in the TextDocument model raises an event detected by the LanguageClient. • main extension process

4. LanguageClient decides to forward the request further to the LanguageServer. • main extension process -> language server process

5. LanguageServer evaluates the request based on the dmn-analysis logic and its rules, which results in an array of diagnostics. • language server process

6. After getting diagnostics from the dmn-analysis, the diagnostics are sent as a notification back to the LanguageClient. • language server process -> main extension process

7. DmnCustomTextEditorProvider detects the changed diagnostics on the LanguageClient and sends them to the DmnWebviewPanel. • main extension process -> webview process

8. DmnWebviewPanel receives the diagnostics and is able to provide them in the editor’s UI. • webview process

6.2.6 Possible integrations

As specified in the QA6.1 and QA6.2 quality attributes, the system should be integrable with other software services, such as AdInsure Studio and other third-party validation services. The following section reviews the possible integration approaches.

AdInsure Studio validations

AdInsure Studio includes dozens of other validations for the configurable AdInsure elements, and it might be beneficial to offload these validations to a separate language server process as well. This would not just improve the current user experience but could also offer IntelliSense and other language features for AdInsure's other specific configurable items.

From the DMN analysis package perspective, there are no design restrictions discouraging integration of the DMN analysis package with other Node.js packages such as the AdInsure Studio validations. However, there are certain limitations on the AdInsure Studio side, which was not designed to perform validations in different processes. Such a limitation thus needs to be solved at the design level first in order to completely separate all existing validations into a different process.

Other third-party validations

This sub-section reviews the integration possibilities of the proposed solution with existing third-party solutions and thus directly answers research question number four.

Question 4

Are there some third-party DMN model validators, and is it possible to integrate them?

As third-party open-source Java-based solutions like Drools (and other tools reviewed in the Market overview) provide advanced validations of the DMN model, the current design also counts with the possibility of integrating them into the proposed solution, even though they are based on a different technology stack. There are several ways to connect a Java application from Node.js directly, such as:

• java-caller - calls a Java .jar file from the Node.js process

• node-java - creates a bridge to existing Java APIs over the Java Native Interface

• JSweet - transpiles Java to JavaScript

However, each of them has pitfalls of its own, connected either with performance (such as spawning a Java process each time it is needed), compatibility (no support for whole libraries) or maintainability that is not future-proof. Therefore, one of the best and most appropriate options in this case is to connect a Node.js application and a Java application as two independent, long-running services.

One possible solution is to follow the LSP client-server communication model and leave the communication to IPC-based JSON-RPC. Another adequate solution would be to spawn a long-running Java service using java-caller and leave the communication to gRPC, which is based on fast, binary HTTP/2 and comes with explicitly defined contracts using Protocol Buffers. However, this also comes with the price of higher complexity, as the whole communication infrastructure would need to be created and maintained.

A better approach in terms of complexity and the LSP context is leaving the communication infrastructure to VS Code, its Extension API and the supporting LSP SDKs. It would mean having a separate LSP language client and server for the Node.js stack and a separate LSP client and server for the Java stack. Both clients could then be registered in one extension, triggering two independent language servers, which would operate on the same files and process them simultaneously in two separate processes. This approach is visible in the following diagram.

Figure 6.5: Integration architecture of utilizing multiple language servers for multiple custom text editors

Both approaches would solve the need for extra third-party validations inside the extension but would require additional work to enable the CLI use-case on the CI. One way would be to replicate the same process for the Java stack as well and simply create a CLI interface for it, which would be called from Node.js via java-caller or similar. On the other hand, it could be enough to have a separate CI job, preceding the actual Node.js validations and triggering just the Java-based code, such as these validations. The latter approach would, in this context, bring most of the business benefits of the rich DMN model validations with the least possible effort. However, either of the solutions would still depend on a JRE or JDK, based on the Java version and the chosen Java vendor.

7 Conclusion

This chapter summarises the achieved results based on the specified research questions. It also lists the possible improvements to the proposed solution and future areas of research.

7.1 Research questions evaluation

The main goal of this thesis was to analyse Decision Model and Notation and its available validation-focused tooling, the Language Server Protocol and the related Visual Studio Code concepts. Based on the findings, a solution addressing all the requirements and research questions was designed.

The proposed solution was specifically designed to meet the specified requirements. Besides, the solution also indirectly addresses all the research questions specified at the beginning of the thesis. The following is the list of research questions with explicitly formulated answers.

Question 1

Can LSP be used to exchange information with GUI-based editors in VS Code, such as with the Rule (DMN) editor?

Although there is no out-of-the-box support for connecting a graphical webview-based editor with an LSP-based server, such a connection can be established manually, as described in the proposed architecture, utilising the Custom editor API. With this API, it is possible to create a custom text editor which operates on a TextDocument model used as a single data model instance for different views, such as a built-in text editor or a custom webview editor. In order to register a custom text editor and synchronise the model and the data retrieved by the LSP server with the corresponding webview, a CustomTextEditorProvider interface needs to be implemented. Each implemented CustomTextEditorProvider explicitly maintains the reference to the webview and its TextDocument model. Any change to the TextDocument model is automatically listened to by the LSP client, which triggers onChange validation on the LSP server.

Any other LSP request, response, or notification has to be explicitly established, either by keeping an instance of the LSP client in each CustomTextEditorProvider or by an event-driven communication approach between the provider and the LSP client, as approached in Eclipse's GLSP to Visual Studio Code integration initiative [43].


Question 2

Is it possible to use LSP to retrieve DMN model validation results?

Even though DMN is a graphical notation, its executable model nature and, more importantly, FEEL allow and almost invite the use of LSP for validation purposes inside VS Code. DMN is a visual language, but its FEEL expressions are still text-based inputs, just operating within the context of particular DMN decisions. The use of LSP thus, in this perspective, makes the same sense as for any other programming or domain-specific language. From the perspective of the LSP architecture, there are no fundamental limitations on communicating with DMN's XML-based resource in the sense of FEEL validations: LSP is designed not only for programming languages but also for other text-based inputs, such as domain-specific languages and other specifications. There are certain limitations in the sense that LSP is not designed for graphical languages, which GLSP addresses; moreover, in the DMN and Visual Studio Code context, both protocols could be seen as complementary rather than strict alternatives. Furthermore, LSP's built-in support for diagnostics and VS Code's automatic integration with such diagnostics (in the Text editor and the Problems panel) provides an easy and convenient way to provide DMN model validations inside the editor.

Question 3

What types of DMN model validations can actually be provided?

There are various tools providing DMN model validations, even though most of the tools are based on the Java technology stack. Six different types of DMN model validations are provided by the available tools: XSD schema validation, DRD and its elements validation, Compilation level validation, FEEL expressions validation, Decision table validation and Dynamic validation. More information about the validations is available in the DMN model validation chapter 3.2.

Question 4

Are there some third-party DMN model validators, and is it possible to integrate them?

There are various tools providing DMN model validations, even though the most complex ones are based on the Java technology stack. Because of the different technology stack, the final integration becomes more complex but not impossible to achieve. In terms of VS Code, there are at least two possible ways to enable such interoperability. Both concepts are based on the fact that the different technology stack has its own language server. Such a language server can then either be part of a standalone extension (which is part of a bigger extension pack) or be integrated as another language client next to the other clients in the main extension. The latter approach is discussed in more detail in the proposed solution chapter 6.2.6.

Question 5

While using LSP for communication with the editor, is it still possible to provide validations using the CLI?

Yes, it is possible to provide validations using the CLI, as the proposed solution is designed with this requirement in mind. Such functionality is achieved by abstracting the validation logic out of the language server into a standalone Node.js-based package. More information is available in the proposed solution chapter 6.2.2.
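A minimal sketch of this CLI use-case follows. The validateDmn function is a hypothetical stand-in for the dmn-analysis package's entry point, and the check it performs is deliberately trivial:

```javascript
// Stand-in for the dmn-analysis entry point returning diagnostics.
function validateDmn(xmlContent) {
  return xmlContent.includes('<decision') ? [] : [{ message: 'No decision found' }];
}

// CLI wrapper: validate each file's content and derive an exit code.
function runCli(fileContents) {
  const diagnostics = fileContents.flatMap(validateDmn);
  diagnostics.forEach((d) => console.error(d.message));
  return diagnostics.length === 0 ? 0 : 1; // non-zero exit code fails the CI job
}

// On a CI server this would read files listed in process.argv; here the
// content is inlined for the sake of the example.
const exitCode = runCli(['<definitions><decision id="d1"/></definitions>']);
console.log('exit code:', exitCode); // → exit code: 0
```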

7.2 Future work

The following is a list of possible improvements to the proposed solution as well as other potential areas of research:

• Dynamic validation and formal verification: As the proposed solution is focused on static validation based on static analysis, it could also be beneficial to provide the available dynamic validation features and to research other potential verification methods. Among the available features belong unit-like tests of individual decisions, integration-like tests between individual decisions or whole models, regression tests between different versions of models, and autogenerated MC/DC tests. Dynamic validation could also be seen from the perspective of debugging features for the DMN model, potentially using the Debug Adapter Protocol or similar [44]. A related research area could also be formal verification and the application of its methods, such as symbolic model checking, to DMN model verification. Such an area could also be explored in the sense of LSP capabilities, which would clarify its limits and concerns [45] [46] in this direction.

• DMN and FEEL parser improvements: The JavaScript FEEL parser created by EdgeVerve has quite limited functionality: it does not support full conformance and lacks support for some TCK tests, some built-in FEEL functions and multi-line FEEL inputs. Also, there is no package or set of packages in the JavaScript ecosystem that would treat DMN and FEEL as a whole and provide functionality for both (at least on the level of parsing). Therefore, there is room for improvement in various directions. One way would be to improve the parser to support full DMN conformance and pass all TCK tests. Another direction would be to support multi-line FEEL inputs or tolerant parsing, which would be able to recover and produce a valid and complete tree even in the case of invalid input. A third direction could try to unite everything related to DMN parsing under one package or set of packages, supporting parsing of DMN and FEEL as a whole, not as two independent concepts. The same package could then be reused during DMN transpilation as well as DMN static analysis, eliminating any inconsistencies between the two.


• Rule executor performance improvements: As mentioned in the previous section, in some scenarios a number of performance optimisations could be utilised to improve the rules executor performance. Such optimisations could include tree-diffing algorithms (to run rules on just the changed parts of the AST), debounce or throttle techniques (to limit the number of execution function calls) or limiting the number of traversals (by executing a batch of rules in one traversal). Thus, it would be beneficial to explore these and other related techniques further in order to assess the possible performance improvements on the rules executor side.

• Declarative validation rules with a rules engine: As the current rules are imperatively written visitor functions, it might be beneficial, in some scenarios, to transform them into declaratively written rules, possibly utilising the power of business rules engines. This approach could directly enable better maintainability of such rules and could also indirectly result in improved rules performance, as rules engines are based on advanced pattern-matching algorithms, such as the Rete algorithm (which is continuously being improved [47]) and its successors.

• Support for other language features: The current solution is mostly focused on providing validations for DMN and FEEL as a whole, with the possibility of extending the functionality by providing completion items. There are, however, other language features which could be beneficial for the end-users. These include go to definition, refactoring, auto-formatting, hover, folding and syntax highlighting.

• Multi-file edit from one webview: The current solution supports editing one file with one webview, which is mapped to one custom text editor. Thus, it is not automatically possible to edit multiple files from one editor, such as editing an imported DMN model or PMML file content within the same webview instance. A potential way to support this feature would be to create a manager of multiple custom text editor providers communicating with the corresponding webview. However, further research and a proof-of-concept would be needed to find out whether it is technically possible to implement.

• Multi-folder workspace: The proposed solution does not account for the possibility of running the extension in a multi-folder workspace environment, as the AdInsure Studio extension does not directly support it. However, there are no restrictions in the current design should such a feature need to be enabled.

A List of Abbreviations


ANTLR ANother Tool for Language Recognition
API Application Programming Interface
AST Abstract Syntax Tree
BPEL Business Process Execution Language
BPMN Business Process Model and Notation
BPMS Business Process Management System
BRMS Business Rule Management System
CI Continuous Integration
CI/CD Continuous Integration / Continuous Delivery
CLI Command Line Interface
CMMN Case Management Model and Notation
CORBA Common Object Request Broker Architecture
CSS Cascading Style Sheets
CPU Central Processing Unit
CRUD Create, Read, Update, Delete
DAP Debug Adapter Protocol
DMN Decision Model and Notation
DMN DI Decision Model and Notation Diagram Interchange
DRD Decision Requirements Diagram
DRL Drools Rule Language
ESLint EcmaScript Linter
FEEL Friendly Enough Expression Language
GLSP Graphical Language Server Platform
gRPC gRPC Remote Procedure Call
GUI Graphical User Interface
HTML Hypertext Markup Language
HTTP Hypertext Transfer Protocol
IDE Integrated Development Environment
IPC Interprocess Communication
ISO International Organization for Standardization
jBPM Java Business Process Model
JDK Java Development Kit
JRE Java Runtime Environment
JSON JavaScript Object Notation
JSON-RPC JavaScript Object Notation Remote Procedure Call
LSP Language Server Protocol
MC/DC Modified Condition/Decision Coverage
MIT Massachusetts Institute of Technology


MoSCoW Must have, Should have, Could have, Won't have
NPM Node Package Manager
OMG Object Management Group
OS Operating System
PEG Parsing Expression Grammar
PMML Predictive Model Markup Language
RPC Remote Procedure Call
S-FEEL Simple Friendly Enough Expression Language
SDK Software Development Kit
SaaS Software as a Service
TCK Technology Compatibility Kit
UI User Interface
UML Unified Modeling Language
URI Uniform Resource Identifier
VS Code Visual Studio Code
XML Extensible Markup Language
XSD XML Schema Definition

Bibliography

1. ADACTA. About AdInsure. Adacta. Available also from: https://www.adacta-fintech.com/platform.
2. ADACTA. AdInsure Tools - AdInsure Studio. Adacta. Available also from: https://www.adacta-fintech.com/platform/adinsure-studio.
3. EVANS, Eric. Domain-driven design: tackling complexity in the heart of software. Boston: Addison-Wesley, 2004. isbn 978-0-321-12521-7.
4. MICROSOFT. Monaco Editor. Microsoft. Available also from: https://microsoft.github.io/monaco-editor/.
5. MICROSOFT. VS Code User Interface. Microsoft. Available also from: https://code.visualstudio.com/docs/getstarted/userinterface.
6. OMG®. PRECISE SPECIFICATION OF BUSINESS DECISIONS AND BUSINESS RULES. Object Management Group®, 2021. Available also from: https://www.omg.org/dmn/.
7. FIELDMAN, Jacob. Combining DMN and Blockchain. Decision Management Community, 2018. Available also from: https://dmcommunity.org/2018/08/20/combining-dmn-and-blockchain.
8. HAARMANN, Stephan; BATOULIS, Kimon; NIKAJ, Adriatik; WESKE, Mathias. DMN Decision Execution on the Ethereum Blockchain. In: KROGSTIE, John; REIJERS, Hajo A. (eds.). Advanced Information Systems Engineering [online]. Cham: Springer International Publishing, 2018, vol. 10816, pp. 327–341 [visited on 2021-05-17]. isbn 978-3-319-91562-3 978-3-319-91563-0. Available from doi: 10.1007/978-3-319-91563-0_20. Series Title: Lecture Notes in Computer Science.


9. OMG®. Decision Model and Notation, Version 1.3. Object Management Group®, 2019. Available also from: https://www.omg.org/spec/DMN/1.3/PDF.
10. SILVER, Bruce. DMN: Something's Different Now... Method and Style, 2017. Available also from: https://methodandstyle.com/dmn-somethings-different-now/.
11. OMG®. PRECISE SPECIFICATION OF BUSINESS DECISIONS AND BUSINESS RULES. Object Management Group®, 2021. Available also from: https://www.omg.org/intro/DMN.pdf.
12. KAY, Michael. XSLT 2.0 and XPath 2.0: programmer's reference. 4th ed. Indianapolis, IN: Wiley Pub, 2008. Wrox programmer's references. isbn 978-0-470-19274-0. OCLC: ocn182779795.
13. REDHAT. DEMYSTIFYING THE DECISION MODEL AND NOTATION SPECIFICATION. DMN Community, 2017. Available also from: https://dmcommunity.files.wordpress.com/2017/07/dc2017-edsontirelli-demystifying-dmn.pdf.
14. REDHAT. ANTLR4 parser for FEEL. RedHat, 2020. Available also from: https://github.com/kiegroup/drools/blob/master/kie-dmn/kie-dmn-feel/src/main/antlr4/org/kie/dmn/feel/parser/feel11/FEEL_1_1.g4.
15. EDGEVERVE. Pegjs parser for FEEL. EdgeVerve, 2018. Available also from: https://github.com/EdgeVerve/feel/blob/master/grammar/feel.pegjs.
16. REDHAT. DECISION MODEL AND NOTATION (DMN). RedHat, 2021. Available also from: https://access.redhat.com/documentation/en-us/red_hat_process_automation_manager/7.1//designing_a_decision_service_using_dmn_models/dmn-con_dmn-model.
17. TIRELLI, Edson. Machine Learning + Decision Management. 2019. Available also from: https://decisioncamp2019.files.wordpress.com/2019/09/dc2019.edsontirelli-1.pdf. DecisionCamp.
18. SILVER, Bruce. DMN, Meet Machine Learning. Trisotech, 2021. Available also from: https://www.trisotech.com/dmn-meet-machine-learning/.


19. GROUP, Data Mining. PMML Products. Data Mining Group, 2021. Available also from: http://dmg.org/pmml/products.html.
20. GROUP, KIE. [KieLive12] Business Rules and Decision Engines: past, present and future - with Edson Tirell. Youtube, 2020. Available also from: https://www.youtube.com/watch?v=BGcVXVMrBTQ.
21. SILVER, Bruce. Aiding and Recognizing Full DMN Conformance. Method and Style, 2021. Available also from: https://methodandstyle.com/aiding-recognizing-full-dmn-conformance/.
22. TCK, DMN. DMN TCK submitters. DMN TCK. Available also from: https://dmn-tck.github.io/tck/.
23. GRAHAM, Ian. Business rules management and service oriented architecture: a pattern language. Chichester, England; Hoboken, NJ: John Wiley, 2006. isbn 978-0-470-02721-9. OCLC: ocm70158474.
24. WIKIPEDIA. Business rule management system. Wikimedia Foundation, Inc., 2021. Available also from: https://en.wikipedia.org/wiki/Business_rule_management_system.
25. SILVER, Bruce. DMN method and style: the practitioner's guide to decision modeling with business rules. 2nd edition. [N.p.]. isbn 978-0-9823681-7-6.
26. MICROSOFT. Language Server Protocol. Microsoft, 2017. Available also from: https://docs.microsoft.com/cs-cz/visualstudio/extensibility/language-server-protocol?view=vs-2019.
27. INC, . Survey 2019: Most popular development environments. Stack Overflow, 2019. Available also from: https://insights.stackoverflow.com/survey/2019#technology-_-most-popular-development-environments.
28. MICROSOFT. Language Server Protocol. Microsoft, 2019. Available also from: https://docs.microsoft.com/cs-cz/visualstudio/extensibility/language-server-protocol.


29. RODRIGUEZ-ECHEVERRIA, Roberto; IZQUIERDO, Javier Luis Cánovas; WIMMER, Manuel; CABOT, Jordi. Towards a Language Server Protocol Infrastructure for Graphical Modeling. In: Proceedings of the 21th ACM/IEEE International Conference on Model Driven Engineering Languages and Systems [online]. Copenhagen, Denmark: ACM, 2018, pp. 370–380 [visited on 2021-05-17]. isbn 978-1-4503-4949-9. Available from doi: 10.1145/3239372.3239383.
30. MESZAROS, Monika; CSEREP, Mate; FEKETE, Anett. Delivering comprehension features into source code editors through LSP. In: 2019 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO) [online]. Opatija, Croatia: IEEE, 2019, pp. 1581–1586 [visited on 2021-05-17]. isbn 978-953-233-098-4. Available from doi: 10.23919/MIPRO.2019.8756695.
31. ATTILA GYEN, Norbert Pataki. Format-independent Graph Vizualization with Language Server Protocol. 2020. Available also from: https://icai.uni-eszterhazy.hu/2020/abstracts/ICAI_2020_abstract_119.pdf.
32. ECLIPSE. The Eclipse Graphical Language Server Platform. Eclipse. Available also from: https://www.eclipse.org/glsp.
33. MICROSOFT. Extension pack roundup. Microsoft, 2017. Available also from: https://code.visualstudio.com/blogs/2017/03/07/extension-pack-roundup.
34. REECE, Kevin J. Current implementations. Github, 2016. Available also from: https://github.com/microsoft/vscode/issues/13953.
35. TEXTMATE. Language Grammars. TextMate, 2021. Available also from: https://macromates.com/manual/en/language_grammars.
36. BÜNDER, Hendrik; KUCHEN, Herbert. Towards Multi-editor Support for Domain-Specific Languages Utilizing the Language Server Protocol. In: HAMMOUDI, Slimane; PIRES, Luís Ferreira; SELIĆ, Bran (eds.). Model-Driven Engineering and Software Development [online]. Cham: Springer International Publishing, 2020, vol. 1161, pp. 225–245 [visited on 2021-05-17]. isbn 978-3-030-37872-1 978-3-030-37873-8. Available from doi: 10.1007/978-3-030-37873-8_10. Series Title: Communications in Computer and Information Science.
37. SCHÄUFELE, Johannes. SKilL language server [online]. 2018 [visited on 2021-05-17]. Available from doi: 10.18419/OPUS-10282. Publisher: Universität Stuttgart.
38. JASIM, Muhammed. Building cross-platform desktop applications with Electron: create impressive cross-platform desktop applications with Electron and Node [online]. 2017 [visited on 2021-05-17]. isbn 978-1-78646-653-2. Available from: http://www.myilibrary.com?id=1009166. OCLC: 989062845.
39. MICROSOFT. Extension API. Microsoft, 2021. Available also from: https://code.visualstudio.com/api.
40. MICROSOFT. Postmessage RPC. Github. Available also from: https://github.com/microsoft/postmessage-rpc.
41. MICROSOFT. Language Extensions Overview. Microsoft, 2021. Available also from: https://code.visualstudio.com/api/language-extensions/overview.
42. MICROSOFT. Declarative language features. Microsoft, 2021. Available also from: https://code.visualstudio.com/api/language-extensions/overview#declarative-language-features.
43. ECLIPSE. Eclipse GLSP VSCode Integration. Github. Available also from: https://github.com/eclipse-glsp/glsp-vscode-integration.
44. EDER, Hansjörg. Towards debugging facilities for graphical modeling languages in web-based modeling tools [online]. 2021, 119 pages [visited on 2021-05-17]. Available from doi: 10.34726/HSS.2021.66704. Publisher: TU Wien.
45. RASK, Jonas; MADSEN, Frederik; BATTLE, Nick; MACEDO, Hugo; LARSEN, Peter. Visual Studio Code VDM Support. In: Proceedings of the 18th Overture Workshop. 2020.


46. RASK, Jonas Kjær; MADSEN, Frederik Palludan. Decoupling of Core Analysis Support for Specification Languages from User Interfaces in Integrated Development Environments [online]. 2021 [visited on 2021-05-17]. Available from doi: 10.13140/RG.2.2.21889.99686. Publisher: Unpublished.
47. DI LIU; TAO GU; JIANG-PING XUE. Rule Engine based on improvement Rete algorithm. In: The 2010 International Conference on Apperceiving Computing and Intelligence Analysis Proceeding [online]. Chengdu, China: IEEE, 2010, pp. 346–349 [visited on 2021-05-17]. isbn 978-1-4244-8025-8. Available from doi: 10.1109/ICACIA.2010.5709916.
