Standard Operating Procedure for Operations Using HoloLens

Prepared for: Ohio UAS Center

Prepared by: University of Cincinnati

Date: March 24, 2020

Version Number: 2.0

Table of Contents

List of Figures
1.0 Introduction
2.0 What is Microsoft HoloLens?
3.0 Input Methods on HoloLens
3.1 Gaze
3.2 Gesture
3.3 Voice
4.0 Prerequisites
5.0 How to change the 3D model in the scene?
6.0 Common Operating Platform
7.0 How does HoloLens work with the Common Operating Platform?
8.0 How to deploy your application from Unity to HoloLens?
9.0 How to create a Shared Session between two or more HoloLens?
References


List of Figures

Figure 1 – A three-step process for 3D model generation
Figure 2 – Exploded view of HoloLens [1]
Figure 3 – Example showing cursor hugging a 3D model
Figure 4 – Air tap gesture [2]
Figure 5 – Bloom gesture [3]
Figure 6 – Bloom gesture opening the system menu [4]
Figure 7 – Adding 3D model to Unity scene
Figure 8 – Newly added 3D model preview in the Unity scene
Figure 9 – Model without mesh colliders
Figure 10 – 3D model with mesh colliders
Figure 11 – Adding the name of the model in the scripts for markers to work and for the HoloLens connection to the server
Figure 12 – Placeholder to add a model in the script for UI buttons and slider to work
Figure 13 – Common Operating Platform
Figure 14 – HoloLens connectivity with Common Operating Platform
Figure 15 – 3D model request from HoloLens to Common Operating Platform
Figure 16 – 3D model request and response between HoloLens and Common Operating Platform
Figure 17 – Representation of shared session connectivity between HoloLens
Figure 18 – User interface for interacting with the 3D model
Figure 19 – User’s avatar in the virtual space of a shared session
Figure 20 – Setting IP address of server
Figure 21 – Steps for HoloLens application deployment
Figure 22 – Build application in Unity
Figure 23 – Application build status in Unity
Figure 24 – Deployment settings in Visual Studio
Figure 25 – Wi-Fi connectivity check on HoloLens
Figure 26 – Sharing service execution – new session initiation
Figure 27 – Sharing service – connected users preview


1.0 Introduction

Since the beginning of content creation, the visualization platform has evolved: artists created 3D data, but it was viewed first on 2D paper and then on a 2D screen. In the last two decades, augmented and virtual reality (AR/VR) has gained huge attention from major IT giants like Microsoft, Facebook, and Apple, and has become the dominant tool for exploring and visualizing 3D content. After a thorough literature review of existing AR/VR technologies, HoloLens was chosen for this project because it is far ahead of the competition: unlike HoloLens, other headsets are not completely wireless, do not run autonomously on an onboard operating system, need a continuous HDMI connection and remote controllers for input, and have poor battery life. We primarily use HoloLens as a visualization tool for the 3D model, and these models are created in a three-step process.

Figure 1 – A three-step process for 3D model generation

Step 1: A UAV flies a planned mission over a site and captures overlapping images from multiple angles. These images are uploaded to the UCII common operating platform (web server).

Step 2: The images are stitched together to generate the 3D model using professional photogrammetry software.

Step 3: The 3D model is then available on the web server. A 3D model is downloaded from the web server and deployed on HoloLens at compile time.

Users can create a shared experience in which they may or may not be in the same physical location, but are always in the same virtual space and able to interact with the model. During a shared session, users can talk to each other over VoIP voice chat, look at each other's avatars while manipulating the 3D model's scale and position, and then synchronize these changes among everyone in the 3D conference with a single button. Users can interact by dropping notes and markers on the models, which are visible to other users at runtime. Users can also mute their microphone and toggle marker dropping on and off.


2.0 What is Microsoft HoloLens?

HoloLens is the flagship augmented reality headset from Microsoft. It was launched in early 2016 and has a heads-up display. The headset is far ahead of its competitors in this field: unlike some other headsets, which need their sensors mounted in the corners of the room, HoloLens has all of its sensors mounted on the device, as seen in Figure 2 below.

Figure 2 – Exploded view of HoloLens[1]

• Four microphones are used for giving voice commands and for dictation. They can also be used to create custom voice commands in applications for any type of action.

• The ambient light sensor helps auto-adjust the brightness of the display. When the room is brightly lit, the brightness of the holograms is high so the user can see them clearly; when the room is dim (poorly lit), the brightness of the holograms is lowered to reduce eye strain.

• The IMU (Inertial Measurement Unit) helps track the user's head movement. This supports rendering the user's avatar in a shared session, which rotates and repositions with the user's movement in the virtual space.

• The environment understanding cameras and the depth camera work together to understand the user's surroundings and generate a 3D map of them, so the device can simultaneously locate and map the user's position.

• The front camera is a simple 2-megapixel camera capable of capturing photos and video, so it sees what the user is seeing. We can take pictures, record videos, and even live stream what the user is looking at.

• HoloLens runs on the Windows 10 operating system and does not need an HDMI connection to a computer system to function. This means that HoloLens can run almost all Universal Windows Platform (UWP) apps in the Store as 2D apps. UWP is a developer


platform introduced with Windows 10 that runs applications on all Windows 10 devices, such as desktop, mobile, Xbox, and HoloLens, as well as future Windows devices.

Moreover, HoloLens has a big community, so support from forums is always available.

3.0 Input Methods on HoloLens

On a computer, the user gives input to and gets output from the system. Inputs come from events like mouse movement, clicks, and keyboard key presses. On HoloLens, there are three ways you can give input:

3.1 Gaze

(For more information: https://docs.microsoft.com/en-us/windows/mixed-reality/gaze)

Gaze is to HoloLens what a cursor is to your computer. It shows the user where it is pointed, so the user knows what they are air-tapping on. It:

• is used for targeting
• tells where the user is looking
• uses the position and orientation of the user's head (not eyes)
• moves when you move your head
• acts like a laser pointer projected straight ahead from between the user's eyes
• can be used to select or interact with holograms
• targets the intended hologram so the user can trigger an action using voice input or an air tap
• tells you which hologram it is pointed at

In the figure below, the cursor is a purple ring, highlighted by a yellow box.

Figure 3 – Example showing cursor hugging a 3D model
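The gaze behavior described above maps naturally onto a head-pose raycast. The following is a minimal Unity C# sketch of such a gaze cursor, not the project's exact script: it assumes the cursor is a small GameObject (e.g., a ring mesh) and that targeted holograms have colliders.

using UnityEngine;

public class GazeCursor : MonoBehaviour
{
    public GameObject cursor;           // e.g., a small ring mesh
    public float maxGazeDistance = 10f;

    void Update()
    {
        // Gaze uses the position and orientation of the head, not the eyes.
        Transform head = Camera.main.transform;
        RaycastHit hit;
        if (Physics.Raycast(head.position, head.forward, out hit, maxGazeDistance))
        {
            // Hug the surface of the targeted hologram (requires a collider on the model).
            cursor.transform.position = hit.point;
            cursor.transform.rotation = Quaternion.LookRotation(hit.normal);
        }
        else
        {
            // Nothing targeted: float the cursor straight ahead, like a laser pointer.
            cursor.transform.position = head.position + head.forward * maxGazeDistance;
            cursor.transform.rotation = head.rotation;
        }
    }
}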


3.2 Gesture

(For detailed information: https://docs.microsoft.com/en-us/windows/mixed-reality/gestures#bloom)

There are two types of gestures on HoloLens.

Air tap

• After targeting a hologram using gaze, this gesture lets you interact with it.
• You can use your hand or a clicker (both trigger a "tap" event). The easy way to do it is to hold your index finger pointing skywards and then tap it on your thumb.
• There is a certain area in which this gesture is recognized by HoloLens, known as the "gesture frame", so your hand must stay within it. You will get used to it with practice.
• The two gesture states are shown in Figure 4 below: 1. ready position, 2. pressed state.

Figure 4 – Air tap gesture [2]

• When using a clicker, just click it to initiate a tap event. No need for hands.
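For reference, here is a minimal sketch of how tap events can be captured in application code, assuming Unity's built-in GestureRecognizer (UnityEngine.XR.WSA.Input); the project's toolkit scripts wrap a similar mechanism. Both the hand gesture and the clicker raise the same Tapped event.

using UnityEngine;
using UnityEngine.XR.WSA.Input;

public class TapHandler : MonoBehaviour
{
    private GestureRecognizer recognizer;

    void Start()
    {
        recognizer = new GestureRecognizer();
        recognizer.SetRecognizableGestures(GestureSettings.Tap);
        recognizer.Tapped += args =>
        {
            // Fires for both the hand gesture and the clicker.
            Debug.Log("Air tap detected");
        };
        recognizer.StartCapturingGestures();
    }

    void OnDestroy()
    {
        recognizer.StopCapturingGestures();
        recognizer.Dispose();
    }
}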

Bloom

Figure 5 – Bloom gesture [3]

This gesture resembles the blooming of a flower. When triggered, it opens the system menu, as shown in Figure 6 below. Think of it like the Windows key on the keyboard.


Figure 6 – Bloom gesture opening the system menu [4]

3.3 Voice

(For more information: https://docs.microsoft.com/en-us/windows/mixed-reality/voice-input)

Voice is one of the three input methods. Without using gestures, you just speak a command while looking at the hologram. This eliminates the need for complex menus. You can give many commands, such as:

• "What can I say?"
• "Go home" or "Go to Start" – instead of using the bloom gesture to get to the Start menu
• "Launch <app name>"
• "Take a picture"
• "Start recording"
• "Stop recording"
• "Increase the brightness"
• "Decrease the brightness"
• "Increase the volume"
• "Decrease the volume"
• "How much battery do I have left?"
• "Call <contact>" (requires Skype for HoloLens)

As noted in Section 2, applications can also register custom voice commands, as the sketch below shows.
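A minimal sketch using Unity's KeywordRecognizer (UnityEngine.Windows.Speech); the keywords and handlers here are illustrative, not the project's actual commands.

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Windows.Speech;

public class VoiceCommands : MonoBehaviour
{
    private KeywordRecognizer recognizer;
    private readonly Dictionary<string, System.Action> commands =
        new Dictionary<string, System.Action>();

    void Start()
    {
        // Illustrative keywords; an application registers whatever phrases it needs.
        commands.Add("sync model", () => Debug.Log("Sync requested"));
        commands.Add("drop marker", () => Debug.Log("Marker dropped"));

        recognizer = new KeywordRecognizer(new List<string>(commands.Keys).ToArray());
        recognizer.OnPhraseRecognized += args =>
        {
            System.Action action;
            if (commands.TryGetValue(args.text, out action))
            {
                action();
            }
        };
        recognizer.Start();
    }

    void OnDestroy()
    {
        if (recognizer != null)
        {
            recognizer.Dispose();
        }
    }
}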

4.0 Prerequisites

The following tools should be installed prior to software development:

• Unity 3D game engine
• Visual Studio
• HoloLens emulator
• Windows SDK

Installing the latest stable and compatible versions is encouraged. Take a look at the "Install the tools" page from Microsoft for environment setup.


A simple crash course can go a long way toward getting started in Unity and Visual Studio. Refer to the following links to learn Unity.

• https://unity3d.com/learn
• https://www.udemy.com/learnunity3d/
• https://www.coursera.org/courses?query=unity%203d
• https://www.coursera.org/specializations/unity-xr

Great sources for learning augmented reality are:

• Microsoft Academy for HoloLens (https://docs.microsoft.com/en-us/windows/mixed-reality/academy)
• Microsoft mixed reality forum (https://forums.hololens.com/)
• GitHub repository for the Mixed Reality Toolkit (https://github.com/Microsoft/MixedRealityToolkit-Unity)

These links will get you started.


5.0 How to change the 3D model in the scene?

If you want to replace an existing 3D model in the scene, you can either delete it or disable it in the Inspector after selecting it. Before deleting, it is a good idea to create a prefab in case you need to go back to that model later. Once space is cleared for the new model, add it with the following steps:

• Open Unity and select your scene.
• Locate your 3D model file in the project explorer.
• Drag and drop it into the Hierarchy where you want it. In this scene, we want it added under "InnerCollection", because the entire script structure handles OuterCollection and InnerCollection.
• Make sure you add the script "Hologram Placement" to the newly added 3D model.

Figure 7 – Adding 3D model to Unity scene

The 3D model will show up as highlighted by the red boxes; if needed, relocate and scale the model so it is clearly visible to the camera.


Figure 8 – Newly added 3D model preview in the Unity scene

Here, a previously saved GameObject (3D model) prefab was used, so no components or scripts needed to be added to it.

Mesh Collider

Sometimes a 3D model will not have a mesh collider. It is important to add one so your gaze can collide with the 3D object and give you more information about it. We will use the example of a free 3D model (King of Forest) downloaded from the web. It has already been added to the scene using the steps above; to add the mesh collider manually, follow these steps:

• Select the 3D model in the Hierarchy
• Locate the Inspector
• Click "Add Component"
• Search for and select "Mesh Collider"
• Locate the original 3D object in the project explorer and expand it
• Select the first meshpart object and drag and drop it onto the "Mesh" parameter of the Mesh Collider on the 3D object


Figure 9 – Model without mesh colliders

Mesh files usually have names ending with "_meshpart". We must add all meshpart objects so that the entire object is covered by the mesh, which is shown by green lines in Unity.
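Repeating the drag-and-drop steps for every "_meshpart" is tedious on large models. The following editor utility is a convenience sketch (not one of the project's scripts) that attaches a Mesh Collider to every mesh part of the selected model.

using UnityEditor;
using UnityEngine;

// Place this file in an Editor folder; it adds a menu item under Tools.
public static class AddMeshColliders
{
    [MenuItem("Tools/Add Mesh Colliders To Selection")]
    private static void AddToSelection()
    {
        foreach (GameObject root in Selection.gameObjects)
        {
            foreach (MeshFilter filter in root.GetComponentsInChildren<MeshFilter>())
            {
                // Add a Mesh Collider to each mesh part that lacks one.
                MeshCollider collider = filter.gameObject.GetComponent<MeshCollider>();
                if (collider == null)
                {
                    collider = filter.gameObject.AddComponent<MeshCollider>();
                }
                // Use the same mesh the renderer uses, as in the manual steps.
                collider.sharedMesh = filter.sharedMesh;
            }
        }
    }
}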

Figure 10 – 3D model with mesh colliders

Finally, you will need to add the name of the model to a script called "Tapped Handler", which is loaded on the "InnerCollection" object. This is how the code knows which object to consider when plotting markers. The other script, "Hologram Placement", helps sync models with other users in a shared session.


Figure 11 – Adding the name of the model in the scripts for markers to work and for the HoloLens connection to the server

In Figure 11 above, the red arrow shows the scripts under "InnerCollection" and the green arrows show where you need to enter the details for the object. In Hologram Placement, you need to drag and drop those models from the Hierarchy. Figure 12 shows adding a model to a script called "Shared Model Script" so that UI actions like scaling the model up and down and moving it up and down are performed smoothly.

Figure 12 – Placeholder to add a model in the script for UI buttons and slider to work

Just drag and drop the model from the Hierarchy to the placeholder in the script, as sketched below.
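For orientation, here is a hypothetical reconstruction of what such a script might look like; the class, field, and method names are assumptions, not the actual "Shared Model Script" source. The public model field is the placeholder that receives the drag-and-drop, and the UI buttons and slider call the methods.

using UnityEngine;

public class SharedModelSketch : MonoBehaviour
{
    public GameObject model;       // the placeholder: drag the model here from the Hierarchy
    public float scaleStep = 1.2f;

    // Wired to the "Bigger" and "Smaller" UI buttons.
    public void ScaleUp()   { model.transform.localScale *= scaleStep; }
    public void ScaleDown() { model.transform.localScale /= scaleStep; }

    // Wired to the slider's OnValueChanged to move the model up and down.
    public void SetHeight(float y)
    {
        Vector3 p = model.transform.position;
        model.transform.position = new Vector3(p.x, y, p.z);
    }
}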


6.0 Common Operating Platform

Note – Due to the limited computation power of HoloLens, it is impossible to visualize heavy models on HoloLens by downloading and rendering them at runtime while maintaining a high frame rate. Chapters 6 and 7 therefore serve as a proof of concept for integrating HoloLens with the web server.

The common operating platform is a centralized command center for the collection and processing of data from different unmanned aerial systems. The platform allows users to handle and share data amongst themselves and to process uploaded images into 3D models, which can be viewed and processed on the website. Data processing is handled on the server side, and the user is notified once processing is complete. The common operating platform also provides a REST application programming interface (API) over HTTPS. The platform's HTTP-based REST APIs have been used in tandem with the Microsoft HoloLens for visualizing 3D models (processed on the platform).

The common operating platform consists of two main components:

1. A front-end interface that enables users to interact with the different capabilities provided by the system.
2. A back-end processing system responsible for processing uploaded data.

Figure 13 – Common Operating Platform

For 3D model processing, we use Pix4Dmapper [5] and OpenDroneMap (ODM) [6] in the back-end. The user can choose which application to use for model processing via the graphical user interface. Once the back-end has finished building the model, we use three open-source libraries, Entwine [7], Greyhound [8], and Potree [9], to enable the user to view the newly built 3D model on the website.


7.0 How does HoloLens work with the Common Operating Platform?

The communication between HoloLens and the web server, or common operating platform [10], is used to get a list of the stored 3D models. It works over the REST APIs hosted on the server; see Figure 14.

Figure 14 – HoloLens connectivity with Common Operating Platform

HoloLens connects to the central system only via the REST API. When the scene opens, HoloLens pings the server and gets the list of all models for the current user, as seen on the left side of Figure 15. The sketch below illustrates how such a request can be issued from Unity.
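This sketch uses Unity's UnityWebRequest; the endpoint path and response format are assumptions, and the platform's actual REST API may differ.

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class ModelListClient : MonoBehaviour
{
    // Hypothetical base URL; replace with the common operating platform's address.
    public string serverUrl = "https://cop.example.org/api";

    IEnumerator Start()
    {
        // Ask the server for the current user's model list.
        UnityWebRequest request = UnityWebRequest.Get(serverUrl + "/models");
        yield return request.SendWebRequest();

        if (request.isNetworkError || request.isHttpError)
        {
            Debug.LogError("Model list request failed: " + request.error);
        }
        else
        {
            // The response would be parsed here to populate the drop-down menu.
            Debug.Log("Models: " + request.downloadHandler.text);
        }
    }
}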


Figure 15 – 3D model request from HoloLens to Common Operating Platform

This list of 3D models is used by the user interface to populate the drop-down menu. When the user selects a 3D model from the drop-down, it is downloaded using the REST API, as seen on the right side of Figure 15 above.

Figure 16 – 3D model request and response between HoloLens and Common Operating Platform.


The 3D model is downloaded as an object (.obj) file, whose contents are seen in Figure 16 above: a collection of thousands or millions of vertices, triangles, and faces. The object file contains the address of the material file (.mtl), which in turn contains the address of the texture file (usually .jpg). The code then downloads these files. All vertices are stitched together to form a mesh and then, using the material file, the corresponding texture is applied at runtime, and the model is visualized in front of the user on the heads-up display of HoloLens. This can also be done in a shared session: when two users with HoloLens are connected to a server over a socket connection and one user selects a model, the other user will see it in front of them as well.
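To make the runtime stitching concrete, here is a minimal .obj-to-Mesh sketch. It is deliberately simplified relative to the project's loader: it reads only vertex ("v") lines and triangular face ("f") lines, and ignores normals, UVs, and the .mtl material file.

using System.Collections.Generic;
using System.Globalization;
using UnityEngine;

public static class SimpleObjLoader
{
    public static Mesh Parse(string objText)
    {
        var vertices = new List<Vector3>();
        var triangles = new List<int>();

        foreach (string line in objText.Split('\n'))
        {
            string[] parts = line.Trim().Split(' ');
            if (parts[0] == "v" && parts.Length >= 4)
            {
                vertices.Add(new Vector3(
                    float.Parse(parts[1], CultureInfo.InvariantCulture),
                    float.Parse(parts[2], CultureInfo.InvariantCulture),
                    float.Parse(parts[3], CultureInfo.InvariantCulture)));
            }
            else if (parts[0] == "f" && parts.Length >= 4)
            {
                // "f 1/1/1 2/2/2 3/3/3" -> vertex index is before the first slash, 1-based.
                for (int i = 1; i <= 3; i++)
                {
                    triangles.Add(int.Parse(parts[i].Split('/')[0]) - 1);
                }
            }
        }

        // 32-bit indices allow meshes with millions of vertices.
        var mesh = new Mesh { indexFormat = UnityEngine.Rendering.IndexFormat.UInt32 };
        mesh.SetVertices(vertices);
        mesh.SetTriangles(triangles, 0);
        mesh.RecalculateNormals();
        return mesh;
    }
}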

Figure 17 – Representation of shared session connectivity between HoloLens

Users can also look at each other's avatars in the shared virtual space and voice chat. They can drop notes/markers on the 3D model and type onto those notes in real time using the keyboard, and they can scale the model bigger and smaller for an interactive session. The best part is synchronicity: after you make all your changes to the model, such as moving or scaling it, just air tap the "Sync Model" button on the interface, and it will sync all the UI components like sliders and toggle buttons and synchronize the models among everyone.


Figure 18 – User interface for interacting with the 3D model

In Figure 18, the "Bigger" button scales the model up and the "Smaller" button scales it down. The slider moves the model up and down. The Mute toggle mutes your microphone, and "Use Markers" enables or disables the marker functionality.

Figure 19 – User’s avatar in the virtual space of a shared session

Figure 19 shows the user's avatar, represented by a white cube. The avatar moves in three dimensions in the virtual space as the user wearing the HoloLens moves in the real world.
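A sketch of how such an avatar can be driven: the local device reads its head pose from the main camera (via the IMU-tracked head movement described in Section 2) and broadcasts it, while the remote device applies received poses to the white cube. The networking call is a placeholder for the project's sharing-service message, not a real API.

using UnityEngine;

public class AvatarSync : MonoBehaviour
{
    public Transform remoteAvatar;   // the white cube representing the other user

    void Update()
    {
        // Local side: read the head pose (position + rotation) from the main camera
        // and broadcast it to the other session participants.
        Transform head = Camera.main.transform;
        BroadcastHeadPose(head.position, head.rotation);
    }

    // Remote side: apply a pose received from another user to their avatar.
    public void OnHeadPoseReceived(Vector3 position, Quaternion rotation)
    {
        remoteAvatar.position = position;
        remoteAvatar.rotation = rotation;
    }

    private void BroadcastHeadPose(Vector3 position, Quaternion rotation)
    {
        // Placeholder: send over the shared-session socket connection here.
    }
}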


8.0 How to deploy your application from Unity to HoloLens?

Before deploying an application, it is important to set the IP address of the server. This is done as shown below:

Figure 20 – Setting IP address of server

Figure 20 above shows the IP address being set in the Inspector of "OuterCollection", under the Sharing Stage script. This IP address should belong to the computer on which you are running the server script.
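For orientation, server settings like this typically appear as serialized fields on a script, which is why they can be edited in the Inspector. A generic sketch, not the actual Sharing Stage source; the port value is an assumption.

using UnityEngine;

public class SharingConfig : MonoBehaviour
{
    [Tooltip("IP address of the machine running the sharing service")]
    public string serverAddress = "192.168.1.161";  // static IP used in Section 9

    [Tooltip("Port of the sharing service (assumed default)")]
    public int serverPort = 20602;
}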

Application deployment to HoloLens is a two-step process. It is easy (provided there are no errors during code compilation), but it is a long process, so you will have to be patient.

[Figure 21 diagram: Unity game engine (for modifying the 3D scene) → exported Visual Studio solution (for modifying scripts) → HoloLens (the built solution is deployed to the device).]

Figure 21 – Steps for HoloLens application deployment

As seen in Figure 21 above, deployment involves two steps: first moving the application from Unity to Visual Studio, and then from Visual Studio to the destination, HoloLens.

Step 1 – Compile your application in Unity and generate a Visual Studio solution.
Step 2 – Export the Visual Studio solution to HoloLens.

Instructions for Step 1


When your application is ready and functions as expected in Unity, it is time to deploy it onto HoloLens for the final test. To deploy, follow these instructions:

• Press Ctrl + Shift + B to open the Build window.

Figure 22 – Build application in Unity

• Select the appropriate scenes and click Build. You will then need to select a folder where the Visual Studio solution will be generated. Figure 23 shows the application being compiled and built in Unity.
• After the build succeeds without any errors in the console, open the selected folder and open the ".sln" file. It should open directly in Visual Studio.


Figure 23 – Application build status in Unity

Instructions for Step 2

• First, change the values of the three dropdowns in Visual Studio, shown in Figure 24.


Figure 24 – Deployment settings in Visual Studio

• In Figure 24, dropdown 1 should be set to "Release", dropdown 2 should always be set to "x86", and you can choose among multiple options for dropdown 3. For dropdown 3, you can use "Device" and deploy; this requires connecting HoloLens over USB, and I faced deployment errors many times this way. Mostly, I used "Remote Device", where you enter the IP address of HoloLens and deploy wirelessly; this never gave me any errors. Then, for the final deployment to HoloLens, press Ctrl + F5, which is "Start without Debugging". If you are using breakpoints for debugging, use F5 instead. After this, your application will be deployed to HoloLens and auto-started.


9.0 How to create a Shared Session between two or more HoloLens?

To establish the socket connection, follow the steps below.

Components required for this system:
1. A laptop (as a server)
2. A router (with Ethernet cable and power adapter); alternatively, you can install the sharing service directly on a server. In that case, make sure the HoloLens devices are connected to the same Wi-Fi network as the server, so they can reach the service.
3. Two (or more) HoloLens devices

Step 1 – Connect the power adapter of the router and turn it on.
Step 2 – Connect the Ethernet cable to the laptop and to port 4 on the router.
Step 3 – Turn the laptop on. Turn the router on using the power switch at the back.
Step 4 – Turn on both HoloLens devices; they should connect automatically to the router's hotspot, named ASUS. Upon performing the bloom gesture, you should see the "Wi-Fi network connected" icon on the top-left side of the home screen, as highlighted by the red square below.

Figure 25 – Wi-Fi connectivity check on HoloLens

Step 4.1 – If it does not connect to ASUS, use the bloom gesture, go to Settings -> Network and Internet -> Wi-Fi, and connect to ASUS.

Step 5 – Run the executable file "SharedService" on the desktop of the laptop. You should see the screen shown in Figure 26. Make sure the service is running on the static IP address 192.168.1.161. A quick way to verify that the service is reachable is sketched below.
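A small .NET console sketch that tests connectivity from another machine on the network; the port number is an assumption, so adjust it to match the service's console output.

using System;
using System.Net.Sockets;

class SharingServiceCheck
{
    static void Main()
    {
        try
        {
            using (var client = new TcpClient())
            {
                // IP address from Step 5; the port is an assumption.
                client.Connect("192.168.1.161", 20602);
                Console.WriteLine("Sharing service reachable.");
            }
        }
        catch (SocketException e)
        {
            Console.WriteLine("Cannot reach sharing service: " + e.Message);
        }
    }
}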


Figure 26 – Sharing service execution – new session initiation

Step 6 – (Only after making sure that Step 4 and Step 5 are successful.) Use the bloom gesture, open the menu, and select the app "GUIAttempt1" on HoloLens (you may need to tap the "+" tile for more apps, as seen in Figure 25). In the app, from the main menu, choose the option as needed. A user joins the session only when they enter a scene, and both users MUST select the same scene. You can see an entry in the console for each HoloLens joining in; see Figure 27 below.

Figure 27 – Sharing service – connected users preview

And thus, you are in the session, with the voice chat feature available.

On the laptop, there is an application called Microsoft HoloLens. You can access the live stream of mixed reality from this application.


References

[1] "New HoloLens Video Shows Glimpses of Detailed Internals and Early Prototypes – Road to VR." [Online]. Available: https://www.roadtovr.com/new-hololens-video-shows-glimpses-detailed-internals-early-prototypes/. [Accessed: 21-Feb-2019].

[2] "Gestures - Mixed Reality | Microsoft Docs." [Online]. Available: https://docs.microsoft.com/en-us/windows/mixed-reality/gestures#bloom. [Accessed: 21-Feb-2019].

[3] "HoloLens: use gestures." Microsoft Support. [Online]. Available: https://support.microsoft.com/en-us/help/12644/hololens-use-gestures. [Accessed: 21-Feb-2019].

[4] "App model - Mixed Reality | Microsoft Docs." [Online]. Available: https://docs.microsoft.com/en-us/windows/mixed-reality/app-model. [Accessed: 21-Feb-2019].

[5] "Pix4Dmapper - Pix4D."

[6] "OpenDroneMap: a tool to postprocess drone, balloon, kite, and street view data into geographic data including orthophotos, point clouds, and textured meshes."

[7] C. Manning, "Entwine."

[8] "Greyhound."

[9] C. Manning, "Potree."

[10] N. Krishnan et al., "A Common Operating Platform for Employing UAVs in Infrastructure Monitoring," 2018.
