NATIONAL AND KAPODISTRIAN UNIVERSITY OF ATHENS

SCHOOL OF SCIENCE DEPARTMENT OF INFORMATICS AND TELECOMMUNICATIONS

MASTER OF SCIENCE

"Computing Systems: and Hardware"

M.Sc. THESIS

A Mixed Reality Dashboard based on a distributed data streaming architecture

Deligiannakis N. Nektarios

Supervisor: Stathes P. Hadjiefthymiades, Professor

ATHENS

FEBRUARY 2020

ΕΘΝΙΚΟ ΚΑΙ ΚΑΠΟΔΙΣΤΡΙΑΚΟ ΠΑΝΕΠΙΣΤΗΜΙΟ ΑΘΗΝΩΝ

ΣΧΟΛΗ ΘΕΤΙΚΩΝ ΕΠΙΣΤΗΜΩΝ ΤΜΗΜΑ ΠΛΗΡΟΦΟΡΙΚΗΣ ΚΑΙ ΤΗΛΕΠΙΚΟΙΝΩΝΙΩΝ

ΠΡΟΓΡΑΜΜΑ ΜΕΤΑΠΤΥΧΙΑΚΩΝ ΣΠΟΥΔΩΝ

"Υπολογιστικά Συστήματα: Λογισμικό και Υλικό"

ΔΙΠΛΩΜΑΤΙΚΗ ΕΡΓΑΣΙΑ

Γραφικό Περιβάλλον Μεικτής Πραγματικότητας βασισμένο σε μια κατανεμημένη αρχιτεκτονική μεταφοράς ροών δεδομένων

Δεληγιαννάκης Ν. Νεκτάριος

Επιβλέπων: Ευστάθιος Χατζηευθυμιάδης, Καθηγητής

ΑΘΗΝΑ

ΦΕΒΡΟΥΑΡΙΟΣ 2020

M.Sc. Thesis

A Mixed Reality Dashboard based on a distributed data streaming architecture

Deligiannakis N. Nektarios S.N.: Μ1604

Supervisor: Stathes P. Hadjiefthymiades, Professor

February 2020

ΔΙΠΛΩΜΑΤΙΚΗ ΕΡΓΑΣΙΑ

Γραφικό Περιβάλλον Μεικτής Πραγματικότητας βασισμένο σε μια κατανεμημένη αρχιτεκτονική μεταφοράς ροών δεδομένων

ΔΕΛΗΓΙΑΝΝΑΚΗΣ Ν. ΝΕΚΤΑΡΙΟΣ Α.Μ.: M1604

ΕΠΙΒΛΕΠΩΝ: Ευστάθιος Π. Χατζηευθυμιάδης, Καθηγητής

ΦΕΒΡΟΥΑΡΙΟΣ 2020

ABSTRACT

As computer science grows and evolves, new technologies come to the surface. One of these is Augmented Reality (AR). New devices and gadgets from many companies are starting to hit the market, such as Microsoft's HoloLens. Data visualization in mixed reality is very important because it makes large data sets easier to navigate and understand, and it is going to be a requirement for many different use cases in many scientific fields and applications. However, creating applications for these devices is still a difficult and peculiar job, due to the high cost of the devices and the small developer community that insists on working with Augmented Reality or Mixed Reality applications. We present a Mixed Reality Dashboard that demonstrates functionalities for monitoring and visualizing data from sensors, UxVs or any other device. The application scenario is the visualization of sensor data and video feeds from UxVs, but we also show the extensibility of the current application. The application consists of a User Interface, interactable with air taps and gestures, that visualizes data received from a network. Data reception is handled by Apache Kafka and Zookeeper, a distributed streaming platform. Three main functionalities are presented in the UI: dynamic data graphs, video feeds, and the reception of messages together with status monitoring for the user and the device. The UI receives data from the internet through the Kafka REST Proxy, a RESTful proxy built into the Kafka platform.

SUBJECT AREA: Mixed Reality Applications with the HoloLens device

KEYWORDS: augmented reality, mixed reality, data visualization, HoloLens, data graph, REST, Apache Kafka, REST Proxy

ΠΕΡΙΛΗΨΗ

Καθώς οι επιστήμες των ηλεκτρονικών υπολογιστών και της πληροφορίας αναπτύσσονται και εξελίσσονται, νέες τεχνολογίες έρχονται στην επιφάνεια. Μία από αυτές είναι η τεχνολογία της Επαυξημένης Πραγματικότητας (Augmented Reality, AR). Οι νέες συσκευές αρχίζουν να προσφέρονται εμπορικά από πολλές εταιρείες όπως η Microsoft με τη συσκευή Μεικτής Πραγματικότητας (Mixed Reality, MR) HoloLens. Η απεικόνιση των δεδομένων είναι πολύ σημαντική, χρησιμοποιώντας τη μεικτή πραγματικότητα, για να διευκολύνουμε την πλοήγηση και την κατανόηση μεγάλων συνόλων δεδομένων και πρόκειται να αποτελέσει προϋπόθεση για πολλές τεχνολογικές προτάσεις στο μέλλον. Ωστόσο, η δημιουργία εφαρμογών για αυτές τις συσκευές είναι μια πολύ δύσκολη και ιδιόμορφη δουλειά λόγω του υψηλού κόστους των συσκευών και της μικρής κοινότητας προγραμματιστών που επιμένει στην εργασία με τις εφαρμογές Augmented Reality ή Mixed Reality. Στην παρούσα εργασία, αναπτύχθηκε και παρουσιάζεται ένα γραφικό περιβάλλον οπτικοποίησης ροών δεδομένων πολλαπλών τύπων. Η εργασία παρουσιάζει λειτουργίες που μπορούν να χρησιμοποιηθούν για την παρακολούθηση και την απεικόνιση δεδομένων από αισθητήρες, αισθητήρες μη επανδρωμένων οχημάτων (Unmanned Vehicles, UxVs) ή οποιαδήποτε άλλη συσκευή. Το σενάριο για εφαρμογές είναι η απεικόνιση δεδομένων αισθητήρων και ροής βίντεο από UxVs. Παράλληλα δείχνουμε την επεκτασιμότητα της τρέχουσας εφαρμογής. Η οπτικοποίηση των δεδομένων είναι πολύ σημαντική, χρησιμοποιώντας τη μεικτή πραγματικότητα, για να διευκολύνετε την πλοήγηση και την κατανόηση των μεγάλων συνόλων δεδομένων και αρχίζει να γίνεται επιτακτική ανάγκη σε πληθώρα επιστημονικών πεδίων και εφαρμογών. Η εφαρμογή αποτελείται από μια γραφική διεπαφή χρήστη, η οποία μπορεί να αλληλεπιδράσει με συγκεκριμένες χειρονομίες (gestures), και απεικονίζει δεδομένα προερχόμενα από ένα κατανεμημένο δίκτυο αισθητήρων. Χρησιμοποιήθηκε η πλατφόρμα Apache Kafka και Zookeeper για τη λήψη και επεξεργασία των ροών δεδομένων, μια κατανεμημένη πλατφόρμα ροής δεδομένων. Τρεις βασικές λειτουργίες παρουσιάζονται στο περιβάλλον: (i) τα δυναμικά γραφήματα δεδομένων, (ii) οι ροές βίντεο, (iii) η λήψη μηνυμάτων και η παρακολούθηση της κατάστασης για τον χρήστη και τη συσκευή. Η γραφική διεπαφή χρήστη, λαμβάνει δεδομένα που λαμβάνονται από το διαδίκτυο με τη χρήση του Kafka Rest Proxy το οποίο αποτελεί τμήμα της πλατφόρμας που υποστηρίζει RESTful κλήσεις δεδομένων.

ΘΕΜΑΤΙΚΗ ΠΕΡΙΟΧΗ: Εφαρμογές Μεικτής Πραγματικότητας με τη συσκευή HoloLens

ΛΕΞΕΙΣ ΚΛΕΙΔΙΑ: επαυξημένη πραγματικότητα, μεικτή πραγματικότητα, οπτικοποίηση δεδομένων, συσκευή HoloLens, middleware, γράφημα δεδομένων, REST, Apache Kafka REST Proxy

Αφιερώνεται στον άνθρωπό μου, που χρειάστηκε να με υποστεί στην περάτωση και ολοκλήρωση αυτής της μεταπτυχιακής εργασίας.

ΕΥΧΑΡΙΣΤΙΕΣ

Ευχαριστίες πολλές απευθύνουμε στον επιβλέποντα και επιστημονικό υπεύθυνο της συγκεκριμένης εργασίας αλλά και επιστημονικού υπευθύνου της ερευνητικής ομάδας Διάχυτου Υπολογισμού του ΕΚΠΑ (Pervasive Computing Research Group), κ. Ευστάθιο Χατζηευθυμιάδη. Θερμές ευχαριστίες σε όλα τα μέλη της ερευνητικής ομάδας Διάχυτου Υπολογισμού αλλά ιδιαιτέρως στα μέλη Παπαταξιάρχη Βασίλη, Κωστή Γεράκο και Ζαμπούρα Δημήτρη για τη βοήθειά τους και τις συμβουλές τους σε τεχνικά και μη θέματα που αφορούσαν την παρούσα εργασία.

CONTENTS

ABSTRACT ...... 5

ΠΕΡΙΛΗΨΗ ...... 6

CONTENTS ...... 9

LIST OF IMAGES ...... 11

PROLOGUE ...... 12

1 INTRODUCTION ...... 13

2 STATE OF THE ART ...... 14
2.1 Introduction ...... 14
2.2 Virtual Reality ...... 14
2.3 Augmented Reality ...... 15
2.4 Mixed Reality ...... 16
2.5 Differences between Virtual Reality, Augmented Reality and Mixed Reality ...... 16

3 TOOLS FOR THIS APPLICATION ...... 18
3.1 Unity Editor 2018 LTS and 2019 ...... 18
3.2 Mixed Reality Toolkit ...... 18
3.3 Visual Studio ...... 19
3.4 .NET Framework 4.5 - 4.6 in C# ...... 20
3.5 Python for Kafka consumers ...... 20

4 MIXED REALITY FEATURES ...... 21
4.1 Holograms ...... 21
4.2 Spatial Mapping and Anchors ...... 22
4.3 Gaze ...... 22
4.4 Gestures ...... 23
4.5 Cursor ...... 23
4.6 Resize and Follow Toggle ...... 24

5 GENERAL OVERVIEW ...... 26
5.1 Introduction ...... 26
5.2 Data Graphs ...... 27
5.2.1 Simple Graph ...... 27
5.2.2 Dynamic Graph ...... 29
5.3 Video Graphs ...... 30
5.3.1 Air Feed Graph ...... 30
5.3.2 Ground Feed Graph ...... 32
5.4 Options Section ...... 32

6 SCRIPTS ...... 34
6.1 Script structure & Execution order ...... 34
6.2 Coroutines ...... 35

7 DATA ...... 36
7.1 Graphs ...... 36
7.2 Video Feed ...... 36
7.3 Messages ...... 36

8 APACHE KAFKA PLATFORM ...... 38
8.1 Configuring Apache Kafka ...... 39
8.2 Messages in JSON Format ...... 41

9 SETTING UP UNITY EDITOR ...... 43

10 PERFORMANCE ...... 44

11 APPLICATION PORTING TO THE NEW HOLOLENS 2 ...... 45

12 FUTURE WORK ...... 46
12.1 Deploying the applications to the new HoloLens 2 device ...... 46
12.2 Sending commands and messages ...... 46
12.3 Stream Processing ...... 46

CONCLUSIONS ...... 47

ABBREVIATIONS - ACRONYMS ...... 48

APPENDIX I ...... 49

REFERENCES ...... 50

LIST OF IMAGES

Figure 1: The Virtual Reality Experience [23] ...... 15
Figure 2: Augmented Reality [19] ...... 16
Figure 3: A hologram example [22] ...... 21
Figure 4: HoloLens device field of view in the real world [5] ...... 22
Figure 5: An approximation of the frame HoloLens tracks for gestures [21] ...... 23
Figure 6: Air tapping gestures supported by our applications [9] ...... 24
Figure 7: By air-tapping on one of the components, the component comes to the center of the user's view and gets larger ...... 25
Figure 8: General Overview of the application ...... 26
Figure 9: Application view inside the Unity Editor ...... 27
Figure 10: Application view from the HoloLens device ...... 28
Figure 11: Zoom in and Sliding Window, and the Options available ...... 29
Figure 12: Dynamic Data Graph ...... 29
Figure 13: Options inside the Dynamic Data Graph ...... 30
Figure 14: View from the Unity Editor without received data streams ...... 31
Figure 15: View of the application with use of the received data streams. Both video feeds are visible at the left side of the dashboard. ...... 32
Figure 16: The options section at the bottom right side ...... 33
Figure 17: Default Unity script template ...... 34
Figure 18: The message section ...... 37
Figure 19: Kafka Ecosystem: Kafka REST Proxy [20] ...... 38
Figure 20: Kafka configuration properties in JSON data format ...... 39
Figure 21: Creating the Kafka REST consumer ...... 40
Figure 22: Subscribing the Kafka REST Proxy consumer to our topics ...... 41
Figure 23: Lifetime of a frame [10] ...... 44

PROLOGUE

Currently, from any corner of the world, we are able to access any kind of information from many and various network sources. The need to obtain information and to filter it is great. However, the same amount of information may not mean the same to two different individuals. Therefore, applications that can easily and simply bring information in front of our eyes are very much needed, as is the ability of each user to determine what kind of data he currently needs. We present a specific application that addresses the above needs: a use case that visualizes data in front of our eyes and lets us interact with the information we see and filter it in some manner.


1. INTRODUCTION

In this thesis we have created a Mixed Reality Dashboard deployed on the HoloLens 1 device. This dashboard presents various basic functionalities that are able to handle live data in various formats from a network. The data types currently supported are data in JSON format and video feeds from the internet. The data streams may be measurements from sensors or UxV camera devices. In order for the device to get the information from a network, we used Apache Kafka, a distributed streaming platform. To demonstrate the whole architecture, we designed a User Interface that projects the information in a mixed reality context. We also used an already set-up server of our research lab, running Apache Kafka and Zookeeper, in order to demonstrate the data exchange and communication with the application on the HoloLens 1. Various difficulties and problems came up during the development of this application, and much trial and error was needed to develop a stable and reliable version of it. We also have to consider that application development for HoloLens 1 is no longer fully supported, due to the fact that the new HoloLens 2 device is now available.


2. STATE OF THE ART

2.1 Introduction

The technologies we are about to discuss are the state of the art, considering how immature the tools and the overall support for this field still are. Terms such as "Virtual Reality", "Augmented Reality" and "Mixed Reality" are creating a new view of things. However, the true understanding of these terms is not always very clear, and the similarities and differences are sometimes not as clear as they seem to be. We will try to describe some simple aspects of the above-mentioned technologies. In Virtual Reality, henceforth VR, the user does not see the real world; developers and designers create and "place" the user in a virtual location or space to begin the experience. With Augmented Reality, or simply AR, the user can see the real world in or through the device display or headset. The device and the software platform have to recognize something in the real world and then build the experience around that. Specifically, in AR the experience needs to know where to place the content. We explain these recognition points in a following chapter. Mixed Reality, or MR, is something in between the two above terms, although the experience is very similar to Augmented Reality. Again, objects and 3D models are placed in the real surroundings of the user, augmenting the real world with non-real objects. At this point the community differentiates MR and AR by separating the devices that are used for the applications: AR refers to the experience offered by a mobile device application, while MR is a term used to refer to applications that are deployed on wearable devices, mostly with a head-mounted display. We use the HoloLens device, created and supported by Microsoft, as a reference and as the targeted device for our application. Microsoft HoloLens was introduced in 2016 as a cutting-edge technology device supporting Mixed Reality applications. HoloLens maps the 3D space around the user and projects holograms in front of his eyes, allowing him to interact with them in a natural way.

2.2 Virtual Reality

Virtual reality (VR) is a term that refers to content that can be visualized with the use of digital devices, such as HMDs (head-mounted displays) or smartphones (mobile VR), that are usually held or worn in front of the user's eyes. The digital content is in most cases a fully created environment with 3D objects and surroundings, giving the impression of a complete space or world. This is used to simulate the real world in another context, like an airplane cockpit, or a fictional world in a game. A key aspect of VR is the user's interaction with the digital world projected to him. Instead of viewing a screen, the user is immersed in and able to interact with the 3D world. By simulating as many senses as possible, such as vision, hearing, touch, even smell, the user enters a world or a place that does not exist in reality. It is even up to the creator's imagination whether physical rules, like gravity, will apply to this world or not.


Figure 1: The Virtual Reality Experience [22]

2.3 Augmented Reality

With "Augmented Reality" (AR), virtual content can be used in the real world. This is made possible with the help of digital aids, which can be used to provide information: such as the fastest route to a specific product in the supermarket, how-to content for assembling a new set of shelves, or an operating guide for the satellite navigation in your car. In the entertainment industry, AR is commonly used in gaming: dinosaurs on your coffee table suddenly come to life, and virtual toy cars drive through your living room, which you have just furnished with virtual furniture selected from a catalogue. To a large extent, the possible applications of AR depend on the device. The first AR applications ran as native apps on smartphones and merely enhanced the camera image with an information layer. In 2016, the game "Pokémon Go" by Niantic became hugely popular with players of all ages from all over the world – "Augmented Reality" par excellence [5]. Google SkyMap is another well-known AR app. It overlays information about constellations, planets and more as you point the camera of your smartphone or tablet toward the heavens. Wikitude is an app that looks up information about a landmark or object when you simply point at it using your smartphone's camera. We can also find many use case scenarios in military applications. The U.S. Army, for example, uses AR tools to create digitally enhanced training missions for soldiers. It has become such a prevalent concept that the army has given one program an official name: Synthetic Training Environment, or STE. Wearable AR glasses and headsets may well help futuristic armies process data overload at incredible speeds, helping commanders make better battlefield decisions on the fly. There are fascinating business benefits, too. The Gatwick passenger app, for example, helps travelers navigate a packed airport [4].


Figure 2: Augmented Reality [18]

Another application we recently worked on was the Naval Fire Fighting Training and Education System (NAFTES) [16] project, which implemented training exercises regarding firefighting safety for the Hellenic Navy. This application aimed to introduce AR technology to the Navy for training sailors at sea. To conclude, the possibilities of AR technology are limitless. The only uncertainty is how smoothly, and how quickly, developers will integrate these capabilities into devices that we will use on a daily basis, which still presents many difficulties.

2.4 Mixed Reality

The term "Mixed Reality" (MR) is used to describe the merger between the real and the virtual worlds. The idea behind this technology is not very new, it was publicly introduced in a 1994 paper by Paul Milgram and Fumio Kishino [17], although its implementation is. At that time, the computational power and the technologies that had to fit into a comfortable and relatively small HMD device were not yet ready, so no such device could be built and publicly accepted. Microsoft (the company whose device we used for our application) was one of the first to introduce to the public an HMD device, the HoloLens 1, which came with some built-in applications and holograms. Although the device was initially very expensive, it started many discussions and research topics. At this point in time the community of developers seems to grow, and new devices are announced or produced, like the HoloLens 2 or the Apple AR Glasses.

2.5 Differences between Virtual Reality, Augmented Reality and Mixed Reality

Virtual Reality and Augmented Reality are two sides of the same coin. You could think of Augmented Reality as VR with one foot in the real world: Augmented Reality simulates artificial objects in the real environment; Virtual Reality creates an artificial environment to inhabit. Virtual Reality and Augmented Reality have many differences. One of them is the fact that VR totally isolates the user from the real world, by making him experience and see (through the VR device) a fully digital environment. In Augmented Reality the user is able to see digital content placed inside the real world, respecting the real surroundings; for instance, a small dog placed on the floor or a small car placed on a table. AR applications use computer vision algorithms and sensors to detect some properties of the real world, e.g. surfaces or walls. The rendering of 3D objects is done in relation to these properties. Microsoft HoloLens goes beyond augmented reality and virtual reality by enabling you to interact with three-dimensional holograms blended with your real world. Microsoft HoloLens is more than a simple heads-up display, and its transparency means you never lose sight of the world around you. High-definition holograms integrated with your real world will unlock all-new ways to create, communicate, work, and play [7].

3. TOOLS FOR THIS APPLICATION

We developed and deployed our application with specific tools and libraries. We used the Unity Engine with the Unity Editor, and Visual Studio for the .NET Framework. We also used the Mixed Reality Toolkit from Microsoft, which is currently at version 2.3.0. The Unity Editor is used for the creation of games and of Virtual Reality and Augmented Reality applications; in our case we have a mixed reality application, something in between virtual reality and augmented reality. The Mixed Reality Toolkit is an open source project from Microsoft that is built for virtual reality headsets and HoloLens devices. In order to deploy the application to the HoloLens device, it is necessary to install and use Visual Studio 2019, which allows the deployment of applications for Microsoft devices and supports adding the .NET framework for C#. We also used Python to write our Kafka producers, which send messages to the topics exposed through the Kafka REST Proxy.

3.1 Unity Editor 2018 LTS and 2019

The Unity Editor [2] is a tool for creating apps for various platforms. In order to develop our application for the HoloLens device we had to use this tool, because it was the only one that supported the libraries we needed for the HoloLens device; for instance, the MRTK from Microsoft was created to be loaded in Unity. Therefore, opting for this tool was necessary to deploy our application to the HoloLens. This limitation was not something that halted us from creating our application. The Unity Engine is constantly updated, and this was useful many times in order to fix bugs and problems that repeatedly halted the development of the application. However, it was also possible that these updates could cause serious issues and force us to make changes to our project with no guaranteed results. Therefore, the correct configuration is very important in order to make the application work on various versions of Unity. Initially, we have to install a version of Unity. We suggest opting for an LTS version, as it avoids many of the difficulties caused by unstable versions. We used Unity 2018.4.15f1 and finally 2019.2.19. However, at the time of writing, newer versions are already available, although some configuration may be required in order to successfully load the project and deploy it. The selected Unity version is also related to the MRTK version; each MRTK release suggests the optimal Unity version that should be used in order to avoid issues. We have to mention that the MRTK is developed only for the Unity platform. In the next sections we give a detailed report on how to set up the Unity Editor and other components in order to edit and deploy our app to any device.

3.2 Mixed Reality Toolkit

The Mixed Reality Toolkit [3] is an open source project from Microsoft that offers many functionalities for various devices and platforms. It also supports HoloLens 1 and is a very useful tool that offers many functions for handling input actions from the users. It is a package with all the components needed to bring the mixed reality experience to our application. Before the MRTK was created, another toolkit, called the HoloToolkit, was available only for the HoloLens 1 device. When the HoloLens 2 was announced, the open source MRTK was published, based on the HoloToolkit but with cross-platform aspects and many features for the HoloLens 2 device. The main features offered are:

1. An input system for the various devices.
2. Hand tracking features, which we do not currently use.
3. Profiles for the various devices. These profiles are sets of functions that handle specific behaviors (in the sense of functionalities) for each different device. In our case we used the Default profile, which holds very basic functionalities for the mixed reality experience on our device.
4. UI Controls.
5. Solvers, which are scripts that implement automations, mostly regarding the UI elements of an application. In our case, we used the Follow Me Toggle with its solvers in order to make the UI follow the user's position and view.
6. Spatial Awareness and many other feature areas that we will mention in the following pages.
7. Diagnostics tools.

As we can see, there are many features implemented inside the MRTK that help in creating many useful applications. From all of the above we have to point out the Diagnostics system, as it is very important in order to understand what the application is doing and what overhead or cost on resources we have at any given time. Its metrics can show the memory status and usage, the CPU and GPU utilization, and frame statistics. Another important aspect of this toolkit is that it allows cross-platform application development for various platforms and devices. Due to the fact that the toolkit is relatively new, it is not easy to deploy apps that are stable. Therefore, it is important to understand that the developer community is small and that the creators of the toolkit rely on the feedback of the developers. This can often be disappointing, but it also indicates that a new era for mixed reality applications is coming. The devices the toolkit supports are [3]:

- Microsoft HoloLens
- Microsoft Immersive headsets (IHMD)
- VR (HTC Vive / Rift)
- OpenXR platforms

As we mentioned above, we created and deployed our app for the HoloLens 1 device. Although the toolkit was not primarily intended for this device, as the HoloLens 2 has come out and this new device is the major reason the project exists, the first series is currently supported.

3.3 Visual Studio

For the development we used Microsoft Visual Studio. By default, the Unity Editor installation package comes with a version of Visual Studio. Compilers and tools for developing applications in Unity are automatically downloaded and set up. The Unity Editor offers many platform configurations, and it is easy to switch between platforms. The editor then automatically configures the Visual Studio project so that application development can start. Of course, the developer is able to add even more to the project using the Visual Studio editor. Package handling, tests and debug operations are enabled and ready to be used in the process of deploying the app to the device.


Visual Studio also uses the NuGet package manager to download and set up needed packages and libraries. Unity has its own mechanisms for acquiring the correct packages that are needed, as well as its own asset store. However, the NuGet manager is informed of all the changes that happen inside Unity, because the Unity Editor prepares the project during build time to be ready to be loaded into Visual Studio. The build procedure from Unity prepares the solution that is needed by Visual Studio. Afterwards, Visual Studio compiles the solution with the IL2CPP compiler and builds it to create all the necessary DLL files. When this procedure is completed, the application is ready to be deployed to the device. The tools offer debugging facilities and other performance tools and profilers to monitor the application while it is running on the device.

3.4 .NET Framework 4.5 - 4.6 in C#

Unity and Visual Studio automatically download and set up the .NET framework, which is needed to develop applications for HoloLens. It is not always easy to determine the correct version of the tools, due to the fact that app builds are very fragile to system updates. These updates are needed, but they may disturb development for older devices such as the HoloLens 1. One very important feature of the .NET framework is that applications may be able to run on different systems and platforms. Also, applications that are intended to be published to the Windows Store have to be implemented with specific versions of the framework.

3.5 Python for Kafka consumers

In order to demonstrate our application, we created some data producers in Python. We chose the Python programming language because of its simplicity. We generate random data, create the corresponding JSON objects and publish them to the Kafka server. From there, each topic stores the received messages, and our application then starts to consume the data. Using Python makes the creation of Kafka producers and consumers really easy: importing all the necessary modules can be done quickly, and from that point on we started producing and publishing data with very little effort.

4. MIXED REALITY FEATURES

4.1 Holograms

Before we discuss and present the UI, we have to explain some features of holograms. Holograms are essentially the 3D objects that the device is able to project to its user. The user may also be able to interact with a hologram by air tapping on it or by applying other gestures. In our application, we do project holograms to the user: we use 2D objects for our UI, and it is possible to add any hologram to our application, because the application is already set up to support 3D functionalities and holograms. On HoloLens, because it is a Mixed Reality device, holograms blend with your physical environment to look and sound like they are part of our world. Even when holograms are all around you, you can still see your surroundings, move freely, and interact with other people and objects. We call this experience "mixed reality" [7].

Figure 3: A hologram example [21]

Gestures are a progressively new approach to interacting with reality-altering technologies. Besides leaving the user hands-free, they have proved to be useful and effective. With a wide set of discrete and continuous motions, HoloLens can be controlled and maintained. However, due to the fact that mixed reality is not a mature technology yet, HoloLens has certain limitations. For instance, hand position or object placement is not always accurately recognized. This should be taken into account when developing for the Microsoft headset, to ensure that differences of a few inches will not have a significant impact on the functioning of the app.


Figure 4: HoloLens device field of view in the real world [7]

It is very important to mention that the 3D or 2D objects have to be at a specific distance from the user. In the Unity Editor this can be set in the Near and Far fields of the Clipping Planes section of the camera settings. The same values can also be applied from code, as sketched below.
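As a minimal illustration (the exact values here are assumptions, not the project's settings; Microsoft's guidance is to render holograms no closer than roughly 0.85 m), the clipping planes can be set at startup:

using UnityEngine;

// Hypothetical helper: applies the Near/Far clipping distances from code
// instead of the editor's Clipping Planes fields.
public class ClippingPlanesSetup : MonoBehaviour
{
    void Start()
    {
        Camera cam = Camera.main;
        cam.nearClipPlane = 0.85f; // keep holograms from rendering too close
        cam.farClipPlane = 20f;    // assumption: far enough for the dashboard
    }
}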

4.2 Spatial Mapping and Anchors

Anchors are components that connect an object's position and orientation in the virtual world to the real world. In simple words, the integration between the two worlds is made possible with the use of anchors. This can be achieved by detecting common characteristics of the real world. These common characteristics are the position and orientation of surfaces and the real-world lighting conditions. In order to detect all of the above, AR devices need cameras and computer vision algorithms. The HoloLens device has its own set of functions for gaining knowledge of the surrounding area. This feature is called Spatial Mapping, the representation of real-world surfaces in 3D. HoloLens 1 is equipped with four environment understanding cameras, an RGB camera and a depth camera, thus giving it the ability to understand surfaces in the real world and reconstruct them in 3D. Spatial Mapping is a process that allows HoloLens to understand its environment and surroundings. It is achieved using the integrated depth camera, a technology inherited from the Kinect sensor. Depth cameras measure the distance between the device and its surroundings. This way, the device can detect objects, such as volumes or planes, and significantly enhance the user experience [6]. However, the question remains how to relate the spatial perception of the device to the anchors that are used by Unity. Here comes another feature of the MRTK, which correlates the anchor-configured game objects from Unity to the device.

4.3 Gaze

The HoloLens devices also have another very important feature called Gaze. This feature is responsible for tracking the user's head and the direction he is "gazing" at any given time.


By default, inside the HoloLens, a small translucent circle, which is actually a 3D cursor, indicates where the user is gazing; however, it may take the form of any object, real or virtual. Gaze is the first form of input and reveals the user's intent and awareness. Microsoft's tutorial MR Input 210 (aka Project Explorer) is a deep dive into gaze-related concepts for Windows Mixed Reality, adding contextual awareness to the cursor and holograms and taking full advantage of what an app knows about the user's gaze [11].

Unity helps developers manipulate this HoloLens feature in many ways. For instance, the well-known "billboarding" technique makes a hologram always face the user when it enters his or her field of view. It is especially advantageous for UI elements such as text or buttons. Developers can also use 3D cursors, such as gaze indicators, that catch the user's attention and guide it to the necessary hologram or object.

4.4 Gestures

Getting around HoloLens is a lot like using your smartphone. You can use your hands to manipulate holographic windows, menus, and buttons. Instead of pointing, clicking, or tapping, you use your gaze, your voice, and gestures to select apps and holograms and to get around HoloLens. In our application we focused on hand gestures and used the default ones that are preset and enabled inside the MRTK. HoloLens has sensors that can see a few feet to either side of you. When you use your hands, you need to keep them inside that frame, or HoloLens won't see them. As you move around, the frame moves with you [7].

Figure 5: An approximation of the frame HoloLens tracks for gestures [20]

In the image above we can see the gesture frame, the area in which the device can detect the movements of the hand. In order to enable any hand gesture capabilities in our applications, we had to enable them through the Mixed Reality Toolkit. This toolkit offers the option to adopt the default hand gestures, but also to create new ones. It is preferable to alter the existing profiles, as we are not yet certain that integrating new hand gestures works optimally. Therefore, keeping up with the updates of the toolkit is essential. Microsoft, however, offers detailed documentation and tutorials for input sources and gesture inputs.

4.5 Cursor

Along with the above-mentioned gaze and gesture features comes the cursor object. This seems to be something simple, but it is crucial for the user input and experience. The cursor is strongly related to the gaze feature, as it enables gesture input only on objects that are gazed upon and are interactable.


The cursor serves both the application, to determine whether the input or gesture event listeners have to be enabled, and the user. From the user's side, the cursor may change size or color, indicating that an input option is available.

Figure 6: Air tapping gestures supported by our applications [11]

The above two gestures are enabled in our application and are the main interaction hand movements that our application understands and tracks very efficiently. Another functionality the UI implements is the resize functionality. This feature is strongly connected to the gestures.

4.6 Resize and Follow Toggle

We used a Canvas object from Unity and attached the grid script to it. With that script we managed to create a grid of dynamically allocated smaller canvases that separate each component from the others. After all these steps, we attached a script to each smaller canvas that resizes it and scales it to be shown at the center of the screen. The user has to air tap on the canvas he desires to zoom, and the canvas gets scaled. The other canvases are moved to the background so as not to visually disturb the user's view and the projection of the data visualization. Each component has its own resize script, as the operations that have to be performed on the Unity object's transform are position-dependent for each component: the component has a specific position in world space and has to be altered in relation to the anchors it has (a minimal sketch of such a resize script is shown below). We also needed to add some functionality to make the UI follow the user's view. If we had left the UI without such behavior, the user would lose sight of the UI whenever he moved, because the anchors of the UI would not change when the user moves. We wanted to implement a behavior like the HoloLens menu, where the menu follows the user. This is important in order to keep the UI visible even if the user has moved his head around: if, for example, the user lifts his head up, after some time the UI will follow his view. As we mentioned above, we created the modules that we attached to the components that have to consume data from the Kafka web services. These components are added to the objects inside the canvases and are executed when the application starts. The components display messages informing the user whether each module is currently consuming any data. If a component receives data, the data is immediately projected to the UI.
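The following is a minimal sketch of the per-component resize behavior, written against the MRTK v2 pointer-handler interface; the field names (zoomScale, centerAnchor) are illustrative, not taken from the project:

using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Sketch: toggles a panel between its grid position and a zoomed,
// centered view when the user air-taps it. Requires a collider and a
// configured MRTK input system on the scene.
public class ResizeOnAirTap : MonoBehaviour, IMixedRealityPointerHandler
{
    public float zoomScale = 2.5f;  // how much the panel grows (assumption)
    public Transform centerAnchor;  // a point in front of the user (assumption)

    private Vector3 originalScale;
    private Vector3 originalPosition;
    private bool zoomed;

    void Start()
    {
        originalScale = transform.localScale;
        originalPosition = transform.position;
    }

    // The air tap arrives as a pointer click event.
    public void OnPointerClicked(MixedRealityPointerEventData eventData)
    {
        zoomed = !zoomed;
        transform.localScale = zoomed ? originalScale * zoomScale : originalScale;
        transform.position = zoomed ? centerAnchor.position : originalPosition;
    }

    public void OnPointerDown(MixedRealityPointerEventData eventData) { }
    public void OnPointerDragged(MixedRealityPointerEventData eventData) { }
    public void OnPointerUp(MixedRealityPointerEventData eventData) { }
}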

Figure 7: By air-tapping on one of the components, the component comes to the center of the user's view and gets larger

5. GENERAL OVERVIEW

Figure 8: General Overview of the application

In the above image we give a general overview of the application's architecture. All the various components are coordinated by the application deployed on the HoloLens device.

5.1 Introduction

In this section we present and explain the various visible and invisible functionalities of the User Interface. The UI has some features enabled that offer a better visualization experience for the user. For instance, if a graph is "air tapped" (see the gestures section), it moves to the center of the device's view, and the other graphs and components recede into the background without being visible. Tooltips over the data are also provided, giving the user an even better understanding of the data. The UI also has specific scripts attached to it in order to follow the user's view through the device. This is necessary because the device's spatial awareness is enabled, and it helps the user keep track of the UI and its position in space. The User Interface is a Panel that consists of nine parts (or panels) equally drawn on the main Panel. Each panel is independent, with specific functionality. The main Panel consists of the following sub-panels:

Data Graphs
- Graph
- Dynamic Graph
- Info

Video Graphs
- Air Feed
- Ground Feed

Options
- Options

5.2 Data Graphs

We will start with the first panel, which is the Graph panel.

5.2.1 Simple Graph

In this panel, a numeric graph is created and designed to visualize numeric values for a specific phenomenon. The graph in this example holds altitude values, simulating the altitude of a drone. The data projected on the graph can easily be configured or altered in the Unity Editor, in order to visualize any phenomenon the user needs. In order to receive data from the network, a C# coroutine is executed indefinitely, requesting data from the Kafka server via HTTP calls. Kafka uses topics to create data queues, so each panel has its own Kafka topic to request data from. This is done in order to test the HoloLens capabilities with multiple data stream connections up and running. The coroutine receives data, if available on the topic, and translates it into a form usable by the graph components. Upon successful reception of messages, the graph gets updated and redrawn (if needed), visualizing the new incoming live data. A minimal sketch of such a coroutine follows.
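The sketch below polls a Kafka REST Proxy consumer instance (created as described in section 8.1) with UnityWebRequest; the host, consumer group and instance names are placeholders, and the parsing/graph-update step is left abstract:

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch of one panel's polling coroutine against the Kafka REST Proxy.
public class AltitudeGraphFeed : MonoBehaviour
{
    private const string RecordsUrl =
        "http://kafka-proxy.example:8082/consumers/holo_group/instances/holo_consumer/records";

    void Start()
    {
        StartCoroutine(ConsumeLoop());
    }

    IEnumerator ConsumeLoop()
    {
        while (true) // runs for the application's whole lifetime
        {
            using (UnityWebRequest req = UnityWebRequest.Get(RecordsUrl))
            {
                req.SetRequestHeader("Accept", "application/vnd.kafka.json.v2+json");
                yield return req.SendWebRequest(); // control returns to Unity here

                if (req.isNetworkError || req.isHttpError)
                    Debug.LogWarning("Kafka poll failed: " + req.error);
                else
                    OnRecords(req.downloadHandler.text); // raw JSON array of records
            }
            yield return new WaitForSeconds(1f); // poll interval (assumption)
        }
    }

    void OnRecords(string json) { /* deserialize and update the graph */ }
}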

Figure 9: Application view inside the Unity Editor


Figure 10: Application view from the HoloLens device

The received data is in JSON format, and after being deserialized and parsed, the values are loaded into the graph. The graph component also offers some options for changing the way the graph visualizes the data. The user can choose between a bar chart and a line graph. Also, the user may wish to monitor only a subset of the data in the graph. This subset can be determined in the editor and deployed (for a more permanent selection), or it can be temporarily altered inside the application. As we will also discuss in a later section, this component receives data from the altitude Kafka topic. When a new value is received, a list of the values is updated and reloaded into the graph component. We also have what we can call a sliding window, which allows us to monitor only the last x values received. The value x can be set from the editor and temporarily changed from the application itself. Every time the coroutines receive incoming traffic from the network, some local data structures are updated, and the graph then adds the new value to the dataset it visualizes. The graph is obviously updated at the same rate the values are received, whenever a new value is available. In case the values are constantly changing, the graph gets rebuilt. We took care to measure this kind of performance, as publishing large chunks of data for our application to consume can strain the device and put a great workload on the GPU. With many tests we found that updating the canvas batches puts a great workload on the device's hardware, which is something to keep in mind in general. This sliding window is very important and could also be used for other purposes, like applying a data algorithm on the selected window; for instance, we could apply sliding-window algorithms to the already created window of values. A minimal sketch of the window follows.
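A minimal sketch of the sliding window described above, assuming the simplest possible representation (a queue that keeps only the last x values handed to the graph):

using System.Collections.Generic;

// Keeps only the last WindowSize values; the graph redraws from Snapshot().
public class SlidingWindow
{
    private readonly Queue<float> values = new Queue<float>();
    public int WindowSize = 20; // the "x" value, adjustable at runtime

    public void Add(float value)
    {
        values.Enqueue(value);
        while (values.Count > WindowSize)
            values.Dequeue(); // drop the oldest value
    }

    // Snapshot of the current window, oldest first, for the graph component.
    public float[] Snapshot() => values.ToArray();
}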


Figure 11: Zoom in and Sliding Window, and the Options available

In many cases, it is possible to have values with a specific unit of measurement, like meters or feet for distance. This option is also available from the editor and from the graph menu. However, any change made inside the application has only a temporary effect on the visualization of the data, meaning that the graph reverts to the preset options.

Figure 12: Dynamic Data Graph

5.2.2 Dynamic Graph

This component is similar to the above Graph component. However, it has features that make it more dynamic with respect to the received data. First of all, this dynamic graph is a bar chart. Each bar represents the last value of a set of values for different phenomena, e.g. we visualize temperatures for different subjects, like the temperature of the air, water, ground etc.


The coroutine that handles the reception of data from the network does not know in advance how many different sets of values it has to deserialize and parse from the received Kafka message. Each time a message is successfully received, the coroutine updates each set with the new incoming value. If a set does not exist, it is created; if no new value is received for a set, that set is left untouched in the graph. The above feature illustrates a more dynamic way to visualize data in a more generic context. We have made the assumption that the various values measure similar phenomena, which lets us carry over to this graph some of the user options that exist in the previous graph. A sketch of the dynamic update follows.
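The following sketch shows the key-driven update: every key in the incoming JSON becomes, or updates, one bar. It assumes a JSON library such as Json.NET is available in the project (the thesis does not name the library actually used):

using System.Collections.Generic;
using Newtonsoft.Json.Linq; // assumption: Json.NET is available

// The message's keys are not known in advance; each key maps to one bar.
public static class DynamicGraphParser
{
    // Example input: {"air": 21.5, "water": 18.2, "ground": 24.1}
    public static void Apply(string json, Dictionary<string, float> bars)
    {
        JObject message = JObject.Parse(json);
        foreach (KeyValuePair<string, JToken> field in message)
        {
            // New keys create a new bar; existing keys just update their value.
            bars[field.Key] = field.Value.Value<float>();
        }
        // Keys absent from this message keep their last value.
    }
}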

Figure 13: Options inside the Dynamic Data Graph

5.3 Video Graphs

Video graphs are created to offer live video feeds to the user. Currently, we consume video feeds from data servers, in order to measure any performance issues. It is also possible to consume video frames from a Kafka topic and reconstruct the video on the device, but this aspect is in test mode at this time. Each video component supports the zoom-in and zoom-out function with the use of an air tap on the video.

5.3.1 Air Feed Graph

This component is intended to grab a live video feed from a topic and project it to the user. We named it Air Feed to emphasize the use case scenario of a drone flying above an area and feeding the Kafka topic with its onboard camera feed. In developing this functionality, we stumbled upon many and various difficulties and limitations of the tools we used. For security reasons, it is not allowed to consume arbitrary data types from the internet, and for the same reason not every type of web operation on data streams is allowed. We managed to stream video from a server and optimize it in order to avoid conflicts with the other internet transactions (from the graphs etc.). We stored a video in a cloud service, took the share link that the platform provided and used it as the source for our video component. From there, the video player script attached to the component takes the URL and starts downloading frames. If the frames already streamed suffice for smooth video playback, the video starts to play while the component continues to download frames. It is necessary to point out that, with just the above three graphical components, we already strain the device considerably, with a great workload on the CPU, on the GPU that has to render all changes in the frames, and on the device's memory. The decoding functions and other code execute behind the scenes, and therefore it is not possible to completely control the workload of some processes. With many tests we found that the consumption of frames needs some effort, as was expected. With careful coding techniques and the correct setup, we have managed to offer a smooth and impressive experience to the user, without any visible lag. A sketch of the video player setup follows.
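A minimal sketch of the setup described above, using Unity's VideoPlayer component; the URL is a placeholder for the cloud share link, not the actual source used:

using UnityEngine;
using UnityEngine.Video;

// The player buffers frames from a URL and playback starts only once
// preparation (initial buffering) has finished.
[RequireComponent(typeof(VideoPlayer))]
public class AirFeedPlayer : MonoBehaviour
{
    void Start()
    {
        VideoPlayer player = GetComponent<VideoPlayer>();
        player.source = VideoSource.Url;
        player.url = "https://storage.example/air-feed.mp4"; // placeholder
        player.playOnAwake = false;

        // Prepare() downloads enough frames for smooth playback first.
        player.prepareCompleted += p => p.Play();
        player.Prepare();
    }
}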

Figure 14: View from the Unity Editor without received data streams


Figure 15: View of the application with use of the received data streams. Both video feeds are visible at the left side of the dashboard.

5.3.2 Ground Feed Graph

Similar to the Air Feed component, the Ground Feed video component is also intended to consume a live video feed from the network. The user is again able to enlarge the video by air tapping on it. The ground feed is intended to demonstrate the feed of an unmanned ground vehicle. Again, we stream a video from a cloud source, and we handle the video with the same functionality as mentioned above. With these two video components we show that it is indeed possible to have multiple video components consuming video streams from a network source, while keeping a very good and smooth user experience in our application and on the device.

5.4 Options Section

This component holds more options regarding the entire layout and the components. Currently, the user is able to enable or disable a component. This is done in order to help the user monitor the data he desires or currently needs. We also have to consider that the user's view of his real surrounding space is limited. These options intend to help the user free up his view through the device and move safely in his real surroundings.


Figure 16: The options section at the bottom right side

6. SCRIPTS

In order to create our application and implement its logic, we had to write some code. For the scripts we created, we used the C# programming language and some scripting APIs. We used the Unity scripting API, which offers documentation and basic examples for the use of the functions and classes Unity offers. This scripting API is the core of our application, and there were no alternatives to it. We also used the .NET framework from Microsoft to build and deploy our application to the HoloLens device.

6.1 Script structure & Execution order

Unity creates scripts from a default template. According to the template, the script has the name of the public class that resides in it. Two default methods are also generated and are important for understanding a script's lifecycle. It is not mandatory to use either of these methods, but they indicate some key features of a Unity application. The two methods are the Start method and the Update method. The first one is called automatically and is used mainly for initialization purposes. The second one, the Update method, is very important if used: Unity's main loop calls it on every frame. We mention these methods because the code a programmer calls in their bodies has a great impact on the overall performance of the application. We investigate this fact in the Performance section. The default template of a Unity script is shown below:

Figure 17: Default Unity script template
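For reference, the template in Figure 17 corresponds to the following code, which Unity generates (the class name is Unity's default placeholder):

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class NewBehaviourScript : MonoBehaviour
{
    // Start is called before the first frame update
    void Start()
    {
    }

    // Update is called once per frame
    void Update()
    {
    }
}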

6.2 Coroutines

Coroutines are a type of function with some additional, very important characteristics. In general, functions are called and executed inside the lifecycle of a frame. This means that a function has to run to completion, and this has to happen within a single frame. However, it is often necessary to fulfill a task that is going to take multiple frames to complete. We could spread this work across multiple frames using the Update function, but this solution would certainly backfire on us with performance issues. There are many tasks that we know will take some time, and some procedures should not start or execute before the completion of a specific task. For example, in our application we make some REST calls for which we want to wait and get the response. Unity offers a solution to this problem: coroutines. A coroutine is like a function, but it has the ability to pause its execution and return control to the main thread of the application. This is very useful when waiting for input from a source, in our case the network; control then returns to the coroutine, which resumes where it left off and runs to completion. In order to return control from a coroutine to the main thread, a specific instruction is used: the yield instruction, followed by a condition that, when met, returns the application's execution flow to the coroutine. Coroutines may also be used for very long operations, even infinite loops that do useful work, that we want to run for the application's whole lifecycle. If we implemented this with a common function or the Update function, the application would exhibit unexpected behaviors and results. There is also the option to stop or even pause a coroutine and start it again from the point it left off, continuing whatever operation it was meant to complete. To permanently stop a coroutine we can destroy it. A minimal example follows.
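A minimal, self-contained coroutine sketch (names are illustrative): an infinite loop that does a small piece of work, yields so the frame can finish, and can be stopped explicitly:

using System.Collections;
using UnityEngine;

public class PollingExample : MonoBehaviour
{
    private Coroutine pollRoutine;

    void Start()
    {
        pollRoutine = StartCoroutine(Poll());
    }

    IEnumerator Poll()
    {
        while (true)
        {
            DoSomeWork();
            // Control returns to the main thread; the coroutine resumes
            // here after two seconds, in a later frame.
            yield return new WaitForSeconds(2f);
        }
    }

    void DoSomeWork() { /* e.g. check a buffer, update the UI */ }

    void OnDisable()
    {
        // A coroutine can also be stopped (destroyed) explicitly.
        if (pollRoutine != null) StopCoroutine(pollRoutine);
    }
}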

7. DATA

7.1 Graphs

In order to visualize data in the Mixed Reality Dashboard application, we have to generate data in a specific format and then create the application's routines for consuming the data from the Kafka REST Proxy (see the next section). First of all, the data format has to be JSON. This is currently the only type our routines are able to deserialize and consume. For demo purposes we have created our own producers, written in Python, which create JSON objects and send them to a specific Kafka topic. The application's routines consume the data from each topic (one topic per UI component), strip it of any metadata that Kafka adds to the message (for configuration purposes) and parse it in order to extract the specific values we need for our graph. The above procedure is simple, but it has to be done as dynamically as possible, in order to make the UI support data types that are as generic as possible. Our application achieves this by parsing the data in a general manner, without hardcoded information in our graph components. For instance, the presentation of the values in the graph uses the keys inside the received JSON objects for the tooltips and properties of the graphs. Additionally, the dynamic graph has no preconfigured values or properties and is able to dynamically visualize data in relation to the received data parsed from the JSON object; it automatically adjusts properties of the graph, like the names of the visualized datasets. A message of this kind is sketched below.
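As an illustration only (the field names and values are hypothetical, not taken from the project's actual producers), a message for the altitude graph could look like this:

{
  "sensor": "drone-1",
  "timestamp": "2020-01-20T10:15:30Z",
  "altitude": 118.4,
  "unit": "m"
}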

7.2 Video Feed

The video feed components require a video stream to project to the user. These components have mechanisms for preparing the video to be played, by gathering some frames before playing, and also for handling possible audio sources and other quality properties. Each component has its own video player and scripts attached to it. This is necessary because some processes are imposed by the editor tools, such as specific decoding mechanisms. When a video stream is received from a URL source specified in the editor, some automatic processing happens while the video frames are received; the costliest and most resource-demanding is the decoding of the frames. The editor limits the types of video stream that can be received and projected. Therefore, we initially created some non-live video feeds to prove that having this functionality is possible in general and does not cause performance issues. Our next goal is to consume a video feed from a Kafka topic, which is currently in a testing phase and presents development difficulties. We have shown that the approach is possible and offers a smooth and reliable experience to the user. The overhead required for the application to run smoothly on the HoloLens device was not something we could ignore: we stumbled upon great difficulties with the workload put on the device, which many times forced the application either to crash or to delay other operations, like downloading the data streams we had to visualize. We successfully overcame these difficulties.

7.3 Messages

The application has a panel dedicated to status information and communication purposes. It offers the reception of short messages, connectivity status and options, and some system information regarding the battery status and level of the device. This is a mixed component, due to the fact that it receives information from the network but also projects information and settings gathered from the device itself. In this component we demonstrate the ability to show, in a mixed reality way, various types of important information and messages received from various sources: the device, the network, etc.

Figure 18: The message section

We also have to mention that this section of the UI has the potential to make use of the reception of any kind of event. The user may be in a position to contact someone and receive messages, or publish messages to the Kafka services that can be consumed by another application. The messages shown come from the Kafka topic that we created for sending messages, together with the connectivity status, for demonstration.

8. APACHE KAFKA PLATFORM

A very challenging aspect of the application we demonstrate is the communication with a network using the Apache Kafka distributed streaming platform. Kafka implements queues that are able to handle events very efficiently and fast. In our application we used the Confluent Platform, which implements Apache Kafka at its core and is marketed as the only enterprise event streaming platform as of now. The Confluent Platform is thoroughly documented and offers many different connectors that provide application-specific automation for handling events.

An application has to subscribe to a topic in order to publish messages to, or consume messages from, any other application in real time. Kafka guarantees some very important properties, the most important being that messages within a partition will be consumed in the order they were published to the topic. Our application makes use of this guarantee and visualizes incoming data from any real-time source; as a result, our graphs present actual measurements at the time they happened.

Topics are uniquely named buses used for data transmission between nodes. Nodes can publish or subscribe to a topic, but data production and consumption are decoupled, meaning that there is only an indirect connection between the nodes. Every topic should have a unique name in order to avoid problems and confusion between identically named topics. Finally, each topic is associated with a message type, meaning that it only accepts messages of that specific type; in our case we have set this type to JSON.

We have to point out that this kind of event tracking is not a feature the device was designed for; our application is therefore a prototype for event tracking via the Kafka Confluent Platform, and we stumbled upon many implementation difficulties and failed tests. After long research we concluded that the most efficient way to communicate with a web server running Kafka was the built-in Kafka REST Proxy connector. This proxy allowed us to consume messages from Kafka topics with code that is supported by the frameworks and the Unity Editor we used. In order to produce messages in the JSON data format, the producers in Python automatically set the correct configuration properties for Kafka; on the consumer's side, however, some configuration is needed.

Figure 19: Kafka Ecosystem: Kafka REST Proxy [19]


The picture above shows the basic architecture of the Kafka ecosystem, highlighting the REST Proxy components we made use of.

8.1 Configuring Apache Kafka

In order to produce or consume data from a server that runs the Apache Kafka streaming platform, some configuration has to happen first. The Kafka producers, which publish messages on the topics, are written in Python; many values are therefore set automatically, such as the format of the messages we intend to publish and consume. On our application's side, however, some additional configuration is required in order to connect to web servers and web services.

Our application is limited by the .NET framework, the IL2CPP compiler and the Unity Editor in how it can connect to and get data from a network. First of all, we have to use the UnityWebRequest object provided by the Unity Scripting API. This object implements many types of connections to web servers and is the only one that worked on our HoloLens device; every other system call for internet communication was either not supported by the device or not allowed for security reasons. The use of the UnityWebRequest object forced us to use the Apache Kafka REST Proxy connector, as we could not use the Confluent Kafka client libraries offered for many other programming languages. In simple words, our application does not load any Kafka libraries or DLL files, but uses UnityWebRequest objects to connect to a specific URI.

Creating this URI requires some steps, as it has to refer to specific topic(s) and consume the data in a way that lets us handle them as simple JSON. Firstly, a consumer for JSON data has to be created, starting at a topic's log, and subscribed to the topic. This is done by sending a POST request to the Kafka REST Proxy. For this request, we create a JSON object with all the configuration properties we need; if the request succeeds, we obtain a consumer implementing all our desired properties. The following JSON object demonstrates our consumer attributes:

Figure 20: Kafka configuration properties in JSON data format
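As a representative example, the configuration object takes the following form, following the Confluent REST Proxy documentation [14]; the consumer name is illustrative:

{
  "name": "hololens_consumer",
  "format": "json",
  "auto.offset.reset": "earliest"
}

The "format" attribute tells the proxy to deliver messages as JSON, while "auto.offset.reset" set to "earliest" starts the consumer at the beginning of the topic's log.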

The above object is posted to the Proxy at the following URI:

target URI = HOST_SERVER + "/consumers/" + consumer_name + "/"

where HOST_SERVER is http://server_name:8082. We have to note that the consumer does not yet exist and will be created upon success of the request; the topics, however, were already created in our setup. The next step is to subscribe our consumer to the topic(s) we want. Subscribing the same consumer to multiple topics results in consuming data from all of them whenever data is available. We opted to subscribe a single consumer to multiple topics in order to gain performance on the application's side: instead of creating multiple consumer coroutines we create just one, which is by far more efficient. At last, we are ready to request data from our consumer with GET requests, receiving messages from whichever topic has messages not yet consumed. The newly created consumer stands as a proxy between our application, which only supports simple GET, POST, etc. requests, and our Apache Kafka web service.

We create a consumer for JSON data, starting at the beginning of the topic's log, and subscribe it to a topic. Below we present the raw request that we used to set things up and start consuming messages [14]:

Figure 21: Creating the Kafka REST consumer
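The application itself issues this request through a UnityWebRequest; as an illustrative sketch under the same assumptions, the equivalent call in Python with the requests library would be (server and consumer names are placeholders):

import requests

HOST_SERVER = "http://server_name:8082"
CONSUMER = "hololens_consumer"

# POST the configuration object to the Proxy to create the consumer instance.
response = requests.post(
    HOST_SERVER + "/consumers/" + CONSUMER,
    headers={"Content-Type": "application/vnd.kafka.v2+json"},
    json={"name": CONSUMER, "format": "json", "auto.offset.reset": "earliest"},
)
response.raise_for_status()

# The response carries the base URI of the new consumer instance;
# all subsequent subscribe and consume requests are sent there.
base_uri = response.json()["base_uri"]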

We then consume data using the base URI returned in the first response. We check the response code and then the response content to ensure that the consumer has been created at the REST Proxy, the integrated part of the Kafka web services. If the request failed for any reason, the routine repeats it with a new instance and the same attributes as above; this loop lasts until the consumer is successfully created. If the consumer already exists, the procedure proceeds directly to the subscription phase.


Figure 22: Subscribing the Kafka REST Proxy consumer to our topics
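In the same illustrative Python form, the subscription request looks as follows; the exact topic names (those described in section 8.2) are assumed here:

import requests

# The base URI returned by the consumer-creation request of the previous sketch.
base_uri = "http://server_name:8082/consumers/hololens_consumer/instances/hololens_consumer"

# Subscribe the single consumer instance to all of our topics at once.
requests.post(
    base_uri + "/subscription",
    headers={"Content-Type": "application/vnd.kafka.v2+json"},
    json={"topics": ["DroneTopic", "VectorTopic", "InfoTopic"]},
).raise_for_status()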

In order to continue consuming data in real time, we implemented a coroutine that issues a new GET request as soon as the previous one has completed; a new request is sent only if the previous one succeeded by yielding a non-erroneous response code. Some argue that the REST Proxy adds considerable overhead to the services running on the server. In practice, however, it is often an efficient and easier way to add Kafka connectivity to an application that cannot load the native Kafka client libraries to create pure Kafka consumers.
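A Python rendering of that polling loop is sketched below; in the application itself this is a Unity coroutine built around UnityWebRequest, and the URI is a placeholder:

import time
import requests

base_uri = "http://server_name:8082/consumers/hololens_consumer/instances/hololens_consumer"
headers = {"Accept": "application/vnd.kafka.json.v2+json"}

# Poll the consumer instance; each successful response returns the records
# published to the subscribed topics since the previous request.
while True:
    response = requests.get(base_uri + "/records", headers=headers)
    if response.status_code == 200:
        for record in response.json():
            print(record["topic"], record["value"])
    time.sleep(1)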

8.2 Messages in JSON Format

Initially, we chose the JSON data format in order to produce and consume data easily. We create simple JSON objects in Python and publish them to the Kafka topics. From there, the consumers in our application (written in C#), running inside dedicated coroutines, consume the data wrapped with some metadata from Kafka in a more complex JSON format. The metadata holds the important topic information, in order to keep track of what is happening and which messages have been consumed or are stored to be consumed. Each message received from Kafka has our required values encapsulated within it.

Message produced:
{"altitude": 104}

Message received:
[{"topic":"DroneTopic","key": null,"value":{"altitude":104},"partition":0,"offset":80752}]

As we can see, the initial message is encapsulated in a larger JSON message consumed from the Kafka topic "DroneTopic". The received message holds some very basic values, like the partition and the message offset. These values can be used to recover from a network failure or any other cause that forced the application to delay or halt data consumption. In our application we opted for the current and latest messages (or values), in order to track a phenomenon in real time while ignoring any previously stored data; the procedure for this is described in the previous section.

We would also like to mention the other two topics we are using, the Vector topic and the InfoTopic. The Vector topic is used to demonstrate the dynamic loading of content in the graphs: the producer of the message may alter the dataset he wants to visualize to the receiver, and when this happens the application adapts to the changes and the graph takes the form of the newly received content. The InfoTopic is used to demonstrate the reception of messages by the application user. The dashboard has a section that receives messages from this Kafka topic and shows them to the user. Our next step in this direction could be to give the user the option to send messages back to the topic for consumption by another application.
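As a minimal sketch, unwrapping such an envelope looks like this in Python (the application performs the equivalent extraction in C#):

import json

# A consumed batch, as returned by the REST Proxy for the DroneTopic topic.
raw = '[{"topic":"DroneTopic","key":null,"value":{"altitude":104},"partition":0,"offset":80752}]'

# Each batch is a JSON array of records; the original producer message
# sits under the "value" key of every record.
for record in json.loads(raw):
    print(record["topic"], record["offset"], record["value"]["altitude"])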

9 SETTING UP UNITY EDITOR

In order to deploy the application, we have to configure all our tools with the correct settings. Firstly, we start with Unity; installing the correct editor version is necessary. When starting Unity, some work is done to generate the files needed to start working on the project. The next step is to load all the necessary libraries and projects we need to create our application: the Mixed Reality Toolkit, an asset for handling the JSON data type, and some other assets loaded from Unity. The toolkit also sets some configuration options that have to be in place in order to launch the application and deploy it to the device.

The MRTK has many options for configuring the app inside Unity and offers various profiles that can be used. We opted for the Default Configuration Profile; there was a profile for the HoloLens 1, but it executed with errors in the Unity Editor. We do not know whether our application can be upgraded easily with upcoming versions of the toolkit.

After the above steps we have a project that contains no content from our side but is ready and configured to be deployed to the device. However, we have to keep in mind that any changes or code we start to write may disrupt our configuration and result in the application crashing when loaded onto the device; this has been the case many times. The main reason is that the Unity Editor, when installed on a 64-bit computer, may create DLL files compiled for that architecture, whereas the HoloLens has a 32-bit architecture. Because the code and DLL files are valid and compile successfully, the tools do not raise any warnings and the application is deployed; when executed, it crashes.

We had to create the graphs from scratch, because we did not find any open source asset for graphs on the Unity Asset Store. The graphs are therefore relatively simple and their functionality is written in C#. For the video components we used Unity's Video component and attached to it our scripts for handling the playback of our videos. Some basic components are created and enabled as GameObjects in Unity, with basic scripts added to them. For instance, there is an object called Data controller, to which is attached a script that instantiates the Kafka REST consumer as described in the section above regarding Kafka connectivity.

10 PERFORMANCE

The main issue for the application was not only to make it work, but also to create a smooth user experience, meaning that using the application would not bring discomfort to the user. Moreover, because we deal with live data streams, it is of no use to the user to wait more than a few hundred milliseconds for an action to complete. The performance of our device and our application is controlled by three main threads, or basic functionalities: the Application thread, which executes entirely on the CPU; the Render thread, which is initiated on the CPU but is handed over at some point to the GPU; and the GPU thread, which controls the graphics pipelining of any graphical object or component that has to be rendered.

Some bottlenecks one may encounter come from the Application thread running on the CPU of the device. This thread does many things behind the scenes apart from starting or executing our scripts and code. The Render thread is very fragile to changes that might affect its performance; it sends all the requests to the GPU for rendering 3D models or anything else we want the application to project to the user. The simplest way to disrupt the execution order of these requests is by adding code to the Update() function and accidentally altering the execution order or delaying requests to the GPU. The user experiences this immediately, because the application starts to freeze, miss frames, or delay projecting graphical objects. Finally, the GPU thread comes into play, executing all the incoming requests from the CPU to render the graphics; in simple words, the GPU transforms all the models, UI components, etc. into pixels. Generally, HoloLens applications will be GPU bound, but not always.

There are many tools for profiling an application's behavior and execution stack. In the next image we show the lifetime of a frame on our device; it is obvious that in a single frame many procedures are started and completed. With our application we disrupted the above frame procedures, or their order, many times; the most visible and immediate consequence was a frame rate below the nominally supported 60 fps.

Figure 23: Lifetime of a frame

11 APPLICATION PORTING TO THE NEW HOLOLENS 2

As described in the previous sections, we used many components that are available in the Mixed Reality Toolkit. Because these libraries are created mainly for the HoloLens 2 device, we are confident that we will be able to port our application to the new device; the section on future work discusses this topic further. Since the MRTK was created mainly for the HoloLens 2, the only thing required to port our application to the newer device is to change the active configuration profile. We have already tested this change and successfully built the application; we could not deploy it yet due to the lack of a device. Microsoft also offers support and suggestions, along with instructions, on how to make the changes needed to deploy applications made for the HoloLens 1 devices. We also anticipate some changes to the UI, because the HoloLens 2 device offers a wider field of view to the user.

12 FUTURE WORK

12.1 Deploying the application to the new HoloLens 2 device

As mentioned above, we aim to migrate our application to the new HoloLens 2. In order to do that, we have to ensure that our app and libraries can be integrated with the newest version of the MRTK. We are not certain how easy this is going to be: although our application is already set up to run on the HoloLens 2, we cannot guarantee the results until we test it on the device.

12.2 Sending commands and messages

The UI has a specific section for receiving messages from a Kafka topic; the publisher of the messages could be another application. We intend to create a UI component that gives the user the option to respond to the messages he may receive, and also to generate his own messages or warnings. It is possible to integrate a specific configuration that gives the user the ability to send different types of messages, such as warnings, distress messages and others.
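One possible mechanism is to publish the user's message through the same REST Proxy; a sketch under that assumption follows, with the outgoing topic name and message content purely hypothetical:

import requests

HOST_SERVER = "http://server_name:8082"

# Publish a user-generated message to a hypothetical outgoing topic
# through the REST Proxy's produce endpoint.
requests.post(
    HOST_SERVER + "/topics/UserMessagesTopic",
    headers={"Content-Type": "application/vnd.kafka.json.v2+json"},
    json={"records": [{"value": {"type": "warning", "text": "Low battery"}}]},
).raise_for_status()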

12.3 Stream Processing

In our application we receive multiple data streams. We intend to create mechanisms that will allow us to apply stream processing features to the received data streams. In many cases, an incoming data stream needs to be assessed for the real-time detection of a disaster or of a more general phenomenon. This application already receives the data; the next step is to test whether it is possible to add functionalities and features for this kind of processing. We also have to measure any performance issues that might arise when applying this kind of processing on the device, because it would be counterproductive to create bottlenecks inside the application that reduce the quality of the user experience as well as the response time of any detection. Alternatively, it is possible to implement mechanisms inside the app that request another component in the network to apply the stream processing and then visualize the warnings or results in the user's view.

CONCLUSIONS

To conclude, we presented an application, not available in any app store or platform, that introduces the use of mixed reality to a more specific scenario: visualizing data received from a Kafka web service that was already used for other applications and proposals. We believe that this application is going to be the start of a new direction in handling and visualizing sensor data live, as the measurements happen. It is also possible to apply algorithms to the received data, such as simple risk models that give the user a future projection of a phenomenon he is currently monitoring.

ABBREVIATIONS - ACRONYMS

MRTK Mixed Reality Toolkit

UxV Unmanned Vehicle, x can stand for aerial, ground or sea

UGV Unmanned Ground Vehicle

UI User Interface

JSON JavaScript Object Notation

VR Virtual Reality

AR Augmented Reality

MR Mixed Reality

CPU Central Processing Unit

GPU Graphics Processing Unit

HMD Head-Mounted Display

APPENDIX I

The source code of this thesis can be found by following the link: https://gitlab.com/delnektarios/final-diplomatiki

Unity Editor can be found here: https://unity.com/
Mixed Reality Toolkit: https://github.com/microsoft/MixedRealityToolkit-Unity
Tutorials for HoloLens 1 with the MRTK: https://docs.microsoft.com/en-us/windows/mixed-reality/mrtk-getting-started

REFERENCES

[1] What is Virtual Reality? [Definition and Examples], https://www.marxentlabs.com/what-is-virtual-reality/
[2] Unity Editor, https://unity.com/products/core-platform
[3] Mixed Reality Toolkit, https://github.com/microsoft/MixedRealityToolkit-Unity
[4] How Augmented Reality Works, https://computer.howstuffworks.com/augmented-reality.htm
[5] VR, AR, MR, and What Does Immersion Actually Mean?, https://www.thinkwithgoogle.com/intl/en-cee/insights-trends/industry-perspectives/vr-ar-mr-and-what-does-immersion-actually-mean/
[6] HoloLens Spatial Mapping – A developer's guide, https://lightbuzz.com/hololens-spatial-mapping/
[7] How to develop for Microsoft HoloLens, https://www.hypergridbusiness.com/2017/01/how-to-develop-for-microsoft-hololens/
[8] HoloLens for Research, https://www.microsoft.com/en-us/research/academic-program/hololens-for-research/
[9] Tutorials and sample apps, https://docs.microsoft.com/en-us/windows/mixed-reality/tutorials
[10] MR Basics 100: Getting started with Unity, https://docs.microsoft.com/en-us/windows/mixed-reality/holograms-100
[11] Gaze and commit, https://docs.microsoft.com/en-us/windows/mixed-reality/gaze-and-commit
[12] Understanding performance for mixed reality, https://docs.microsoft.com/en-us/windows/mixed-reality/understanding-performance-for-mixed-reality
[13] UnityWebRequest, https://docs.unity3d.com/ScriptReference/Networking.UnityWebRequest.html
[14] What is Confluent Platform?, https://docs.confluent.io/current/platform.html
[15] Understanding Anchors in Augmented Reality Experiences, https://learningsolutionsmag.com/articles/understanding-anchors-in-augmented-reality-experiences
[16] NAFTES, https://www.naftes.eu/
[17] A Taxonomy of Mixed Reality Visual Displays, https://www.researchgate.net/publication/231514051_A_Taxonomy_of_Mixed_Reality_Visual_Displays
[18] Virtual Reality · Augmented Reality · 360 · 3D, https://www.poppr.be/wp-content/uploads/IMG_3898-2-1140x640.jpg
[19] The Kafka Ecosystem - Kafka Core, Kafka Streams, Kafka Connect, Kafka REST Proxy, and the Schema Registry, http://cloudurable.com/blog/kafka-ecosystem/index.html
[20] Gestures, https://www.syncfusion.com/ebooks/hololens_succinctly/gestures
[21] KPMG Data and Analytics, http://www.positive-chaos.com/kpmg
[22] The Future VR, AR and MR in Gaming, and Their Impact on Socialization, https://medium.com/swidom/the-future-vr-ar-and-mr-in-gaming-and-their-impact-on-socialization-695e886d341
