Masaryk University Faculty of Informatics

Examining Motion Sickness in Virtual Reality

Master’s Thesis

Roman Lukš

Brno, Fall 2017


This is where a copy of the official signed thesis assignment and a copy of the Statement of an Author is located in the printed version of the document.

Declaration

Hereby I declare that this paper is my original authorial work, which I have worked out on my own. All sources, references, and literature used or excerpted during elaboration of this work are properly cited and listed in complete reference to the due source.

Roman Lukš

Advisor: doc. Fotis Liarokapis, PhD.


Acknowledgement

I want to thank the following people who helped me:
∙ Fotis Liarokapis - for guidance
∙ Milan Doležal - for providing discount vouchers
∙ Roman Gluszny & Michal Sedlák - for advice
∙ Adam Qureshi - for advice
∙ Jakub Stejskal - for sharing
∙ people from Škool - for sharing
∙ various VR developers - for sharing their insights with me
∙ all participants - for participation in the experiment
∙ my parents - for supporting me during my studies

Abstract

This thesis evaluates two visual methods and whether they help to alleviate motion sickness. The first method is the presence of a frame of reference (in the form of a cockpit and a reticle); the second method is a visible path (in the form of waypoints in the virtual environment). Four testing groups of 15 subjects each were formed: one for each individual method, one combining both methods, and one control group. The experiment was a passive seated experience using the Oculus Rift CV1. The results are inconclusive due to several factors, such as high variance between groups; however, there is a pattern in the data favoring the visible path as the better method against motion sickness compared to the frame of reference. The recommendation is to employ this method in virtual reality experiences where possible.

Keywords

virtual reality, VR, motion sickness, simulation sickness, frame of reference, visible path, head-mounted display, virtual environments


Contents

1 Introduction
  1.1 Aims and objectives
  1.2 Structure of the thesis

2 Background
  2.1 Virtual Reality
  2.2 Motion Sickness
  2.3 Methods against motion sickness
    2.3.1 Frame of reference
    2.3.2 Visible path
    2.3.3 Field of view
    2.3.4 Speed
    2.3.5 Locomotion
    2.3.6 Pharmaceutics
    2.3.7 Habituation/Adaptation
    2.3.8 Galvanic vestibular stimulation
    2.3.9 Cognitive load
    2.3.10 Posture
    2.3.11 Breaks
    2.3.12 User in control

3 Design & Implementation
  3.1 Scene design
  3.2 Implementation
  3.3 Setting up VR in Unity engine
  3.4 Scene hierarchy
  3.5 Cockpit and the clipping plane
  3.6 Cockpit controller
  3.7 UI in VR
  3.8 Keyboard shortcuts
  3.9 View recentering
  3.10 Size of the scene
  3.11 Terrain and textures
  3.12 Rail System
    3.12.1 Waypoints
  3.13 Terrain and vehicle
    3.13.1 Rotation and leaning
    3.13.2 Speed
    3.13.3 Movement stuttering issue
  3.14 Implementation of the methods against motion sickness
    3.14.1 Frame of reference
    3.14.2 Visible path
    3.14.3 Mechanism
  3.15 Scaling 3D models
  3.16 Draw distance and billboard issue in VR
  3.17 OVRService
  3.18 Performance optimizations
    3.18.1 Regular runs with Profiler
    3.18.2 Hardware specifications
    3.18.3 Occlusion culling and level design
    3.18.4 Other optimizations
  3.19 Note on development process
    3.19.1 Health and Safety Warnings

4 Methodology
  4.1 The experiment
  4.2 Experiment process
    4.2.1 Pilot
    4.2.2 Preparation
    4.2.3 Before the VR scene
    4.2.4 VR scene
    4.2.5 Body posture differences
    4.2.6 After the VR scene
  4.3 Gathering participants
    4.3.1 Promotion
    4.3.2 Registration for experiments
    4.3.3 Participant assignment to a testing group

5 Results
  5.1 Quantitative
    5.1.1 SSQ results
    5.1.2 Further analysis of SSQ results
    5.1.3 Personal Information Questionnaire
    5.1.4 Presence questionnaire
    5.1.5 Task Load Index (TLX)
  5.2 Qualitative
    5.2.1 Blurry text
    5.2.2 Experience - curiosity
    5.2.3 Experience - first time
    5.2.4 Expectations
    5.2.5 Immersion and virtual body
    5.2.6 Experiment process
    5.2.7 Technical issues
    5.2.8 Headset adjustment
    5.2.9 Laboratory
    5.2.10 Movement
    5.2.11 Involuntary body movement
    5.2.12 Oculus device
    5.2.13 Questionnaires
    5.2.14 Graphics
    5.2.15 Sound
    5.2.16 FoR, VP group specific feedback
    5.2.17 FoR group specific feedback
    5.2.18 VP group specific feedback
    5.2.19 Control group specific feedback

6 Conclusion
  6.1 Discussion
    6.1.1 Summary
  6.2 Future work

Bibliography

Index

A Groups

B Consent form

C Feedback form

D Presence Questionnaire

E Simulator Sickness Questionnaire (SSQ)

F Personal Information Questionnaire

G Presence Questionnaire - results

H SSQ descriptives

I Task Load Index (TLX)

J Task Load Index - CSV file

K Promotion

L Used assets

List of Tables

4.1 4 groups of participants and methods (Frame of reference, Visible path)
5.1 Mean, median and std. for each method (SSQ)
5.2 Mean values and examples of extremely high and low values for each method (SSQ)
5.3 Lower and upper bounds (SSQ scores)
5.4 Mean and trimmed mean values for each method (SSQ scores)


List of Figures

2.1 Mean SSQ scores of two conditions (Rest frame and Non-rest frame) in the effective group. N = Nausea; O = Oculomotor; D = Disorientation; *p < .05. [10]
2.2 Navigation speeds (m/s r.m.s. in fore-and-aft axis) [14]
2.3 Geometry Conditions: Stairs and Ramp Modes [15]
2.4 User is reoriented into the center of the CAVE by walking through a portal. [20]
2.5 Mean total reported sickness score, with standard error bars, as a function of number of successive flights in a single helicopter flight simulator. [18]
2.6 Comparison of mean group response with participant who reported increased symptoms. [33]
2.7 Mean and standard error of post drive SSQ scores (GVS). [19]
2.8 Nauseogenicity increases towards the top. [22]
2.9 Mean total reported sickness score, with standard error bars, as a function of flight duration in a variety of helicopter flight simulators. [18]
2.10 Comparison between active and passive participants [25]
3.1 Early draft of the scene
3.2 Enabling VR in Unity
3.3 Early version of the crosshair (reticle) clipping inside geometry of the cockpit
3.4 Screenshot of the Scene View in Unity engine illustrating size of the terrain
3.5 Scene View in Unity showing waypoints
3.6 Using Gizmos lines to visualize track in the editor
3.7 None (screenshot of the scene)
3.8 VP (screenshot of the scene)
3.9 FoR (screenshot of the scene)
3.10 FoR, VP (screenshot of the scene)
3.11 Screenshot of the Scene View in Unity with occlusion culling in action

4.1 Various body postures
4.2 Participant fills in the registration (example)
4.3 Participant session marked green (example)
4.4 Cell protection feature in Google Sheets
5.1 SSQ scores
5.2 Histogram for the 1st group: FoR,VP
5.3 Histogram for the 2nd group: FoR
5.4 Histogram for the 3rd group: VP
5.5 Histogram for the 4th group: none
5.6 Independent samples test (FoR method)
5.7 Independent samples test (VP method)
5.8 Independent samples test (both methods)
5.9 Independent samples test comparing FoR and VP
5.10 SSQ percentiles
5.11 Scatter plot for each group (y-axis: Total SSQ score, x-axis: Nausea score)
5.12 Scatter plot for all groups (y-axis: Total SSQ score, x-axis: Nausea score)
5.13 Age groups summary
5.14 SSQ scores for 3 distinct age groups
5.15 Scatter plot for 3 distinct age groups
5.16 T-test for 2 age groups
5.17 Gender and the SSQ score (box chart)
5.18 Gender and the SSQ score (scatter plot)
5.19 Gender (T-test)
5.20 Histogram (Gender)
5.21
5.22 Weekdays (summary)
5.23 Weekdays (box chart)
5.24 Weekdays (scatter plot)
5.25 Weekdays (T-test)
5.26 Histogram for the morning sessions, shows frequency of the SSQ scores
5.27 Histogram for the afternoon sessions, shows frequency of the SSQ scores
5.28 Morning and afternoon sessions (box chart)
5.29 Morning and afternoon sessions (scatter plot)
5.30 Morning and afternoon sessions (T-test)
5.31 Computer usage (summary)
5.32 Computer usage (box chart)
5.33 Computer usage (scatter plot)
5.34 Videogames (summary)
5.35 Videogames (box chart)
5.36 Videogames (scatter plot)
5.37 Videogames (T-test)
5.38 VR (summary)
5.39 VR (box chart)
5.40 VR (scatter plot)
5.41 Week of testing (summary)
5.42 Week of testing (box chart)
5.43 Week of testing (scatter plot)
5.44 Week of testing (T-test)
5.45 It was a passive experience for most of the participants
5.46 Mean values of "involvement" for different groups
5.47 Box charts comparing values of "involvement"
5.48 Question 5 (table)
5.49 Question 5 (box chart)
5.50 Question 7 (table)
5.51 Frame of reference (cockpit) did not obstruct the view
5.52 Table compares how compelling was the sense of movement for participants in each group
5.53 How compelling was your sense of moving around inside the virtual environment (box chart)
5.54 Quickness of adjustment to the experience by the groups (table)
5.55 Comparing speed of adjustment to the experience by the groups (box chart)
5.56 Overall TLX score (histogram)
5.57 Overall TLX scores for individual groups (box chart)
5.58 TLX histograms for individual groups
5.59 Relationship between TLX and SSQ (scatter plot with regression line)

E.1 SSQ - part 1
E.2 SSQ - part 2
E.3 SSQ - part 3
F.1 Personal Information questionnaire - part 1
F.2 Personal Information questionnaire - part 2
F.3 Personal Information questionnaire - part 3 (students only)
F.4 Descriptives: Age
F.5 Descriptives: Morning/Afternoon
F.6 Descriptives: Gender
F.7 Descriptives: Weekdays
F.8 Histogram: Weekdays
F.9 Histogram: Weekend
I.1 HTML based TLX questionnaire - screenshot
K.1 Notification is shown
K.2 Participant (right) receiving voucher from Roman Lukš (left)
K.3 Example of a funny VR picture used in the registration calendar

1 Introduction

Virtual reality has been gaining ground in recent years. Many manufacturers, including Sony [1], HTC [2] and Oculus VR [3], have been releasing their virtual reality systems and head-mounted displays (HMDs). Companies in various industries (including entertainment, education, design and the military) are interested in applying these affordable devices in their daily operations. However, it has been known for quite some time that some users suffer from motion sickness [4] (also referred to as simulation sickness [5] or visually induced motion sickness (VIMS) [6]). Symptoms of motion sickness include nausea, stomach awareness and many more [7]. There are three competing theories about why motion sickness occurs: the most discussed is the conflict cue theory [8], followed by postural instability [8][9] and the poison theory [8]. The first states that the problem is caused by conflicting information coming from different senses (usually a conflict between the visual and vestibular systems). Several techniques to combat the issue have been proposed and investigated, among them: a visual frame of reference (such as a grid [10][11] or a virtual nose [12]), effects of different fields of view (FoV) [13], modifying movement either by applying different walking speeds [14][15] or changing stairs to act like a ramp [15][16], using pharmaceutics such as antihistamines [17], adaptation/habituation [18], stimulating the vestibular system [19], using portals to teleport users [20], mental distraction [21], lying down [22][23], taking breaks [24], and putting the user in control of the camera/movement in the virtual environment [25][5][26][27].

1.1 Aims and objectives

This thesis will try to confirm that the presence of a frame of reference (the frame of reference method) helps to alleviate motion sickness. The thesis also proposes the hypothesis that a visible path method will alleviate symptoms of motion sickness. To measure motion sickness in the virtual reality experiment, the Simulator Sickness Questionnaire (SSQ) [28] will be administered after exposure to movement in the virtual environment to evaluate the severity of motion sickness. An experimental prototype of a virtual

scene will be implemented in the Unity engine [29] using the C# programming language, and an Oculus Rift CV1 will be used as the head-mounted display. The project (thesis) consists of the following tasks:
∙ Research available sources on motion sickness and methods to alleviate it
∙ Design the testing scenario
∙ Implement and test the experiment for the Oculus Rift
∙ Find a sufficient number of suitable participants
∙ Perform the experiments with participants
∙ Analyze the data gathered during the experiments
∙ Evaluate the hypothesis
∙ Compose the thesis text and discuss the results
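Since the SSQ is central to the evaluation, it may help to sketch how its scores are computed. The following is an illustrative Python sketch, not part of the thesis implementation; the item-to-subscale mapping and the weights follow the commonly cited Kennedy et al. formulation [28] and should be checked against the original before reuse.

```python
# Illustrative SSQ scoring sketch. 16 symptoms, each rated 0 (none) to 3
# (severe). Three overlapping subscales are summed and weighted; the
# weights (9.54, 7.58, 13.92, 3.74) are the commonly cited constants.
ITEMS = [
    "general discomfort", "fatigue", "headache", "eyestrain",
    "difficulty focusing", "increased salivation", "sweating", "nausea",
    "difficulty concentrating", "fullness of head", "blurred vision",
    "dizzy (eyes open)", "dizzy (eyes closed)", "vertigo",
    "stomach awareness", "burping",
]

# Indices of items contributing to each subscale (some items count twice).
NAUSEA = [0, 5, 6, 7, 8, 14, 15]
OCULOMOTOR = [0, 1, 2, 3, 4, 8, 10]
DISORIENTATION = [4, 7, 9, 10, 11, 12, 13]

def ssq_scores(ratings):
    """Return (nausea, oculomotor, disorientation, total) SSQ scores."""
    n = sum(ratings[i] for i in NAUSEA)
    o = sum(ratings[i] for i in OCULOMOTOR)
    d = sum(ratings[i] for i in DISORIENTATION)
    return n * 9.54, o * 7.58, d * 13.92, (n + o + d) * 3.74

# A hypothetical participant reporting mild sweating and mild nausea:
ratings = [0] * 16
ratings[6] = 1  # sweating
ratings[7] = 1  # nausea (counts towards both Nausea and Disorientation)
print(ssq_scores(ratings))
```

The overlap between subscales is deliberate: a single symptom such as nausea contributes to more than one weighted subscore, so the total is not a simple sum of independent parts.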

1.2 Structure of the thesis

The following chapter, Background, briefly describes virtual reality, motion sickness and methods against motion sickness. In the third chapter the scene design is discussed, followed by implementation details for various parts of the project (e.g. scene hierarchy, cockpit, terrain, rail system, frame of reference and visible path methods), including solutions to encountered issues, a description of applied performance optimizations, and thoughts on the development process of the prototype scene. The fourth chapter, Methodology, describes the experiment itself: the design of the experiment, the experiment procedure, data collection, and how the experiment was promoted to attract potential participants. The chapter Results is divided into two sections. The quantitative section analyses data collected from the questionnaires. The qualitative section analyses the general feedback given by participants at the end of the experiment and provides insight into individual aspects which could possibly have affected the results. The Conclusion is the last core chapter of the thesis. Discussion and potential future work are part of this chapter.


The Bibliography is followed by the Appendices: firstly, a list of all social groups used to promote the experiment to participants1; secondly, the questionnaires used in the experiment; additionally, promotion information and detailed results; and lastly, a list of assets used in the development.

1. This list could be helpful for all researchers who are looking for participants.


2 Background

In the following sections we will briefly introduce virtual reality, motion sickness and methods against motion sickness.

2.1 Virtual Reality

This thesis would not be complete without a definition of the term virtual reality. Due to rapid technological advancements, it might be better to define the term without mentioning specific devices or technologies. In the freely available book Virtual Reality by Steven M. LaValle [30], the term virtual reality is defined as "Inducing targeted behavior in an organism by using artificial sensory stimulation, while the organism has little or no awareness of the interference." Targeted behavior in this case means that the organism is having an experience that was designed by the creator. The unawareness leads to a sense of presence in an altered or another world; the organism is immersed in the experience. In the book Understanding Virtual Reality [31], the term is defined by its key elements - virtual world, immersion, sensory feedback and interactivity - as a "medium composed of interactive computer simulations that sense the participant’s position and actions and replace or augment the feedback to one or more senses, giving the feeling of being mentally immersed or present in the simulation (a virtual world)." Participants put on a head-mounted display (HMD) and observe the virtual scene with their eyes. The HMD creates an illusion of a 3D image. Even though this is a passive seated experience, they can still look around, because the device has head-tracking: when they move their head, the virtual image responds. And hopefully, they will feel immersed.

2.2 Motion Sickness

There are several terms used to describe the feeling of discomfort and various symptoms associated with exposure to a virtual reality simulation.


The most frequently used terms are motion sickness [4], simulator sickness [31], simulation sickness [5], visually induced motion sickness (VIMS) [6], cybersickness [8] and VR sickness [30]. These terms are often used interchangeably; however, some of them have different connotations, and some are not limited to use in conjunction with virtual reality. There are many symptoms associated with motion sickness: nausea, disorientation, tiredness, headaches, sweating, eye strain [7], difficulty focusing, increased salivation, difficulty concentrating, blurred vision, dizziness, vertigo, stomach awareness and burping [28].

2.3 Methods against motion sickness

This section describes various methods which can be employed to alleviate motion sickness.

2.3.1 Frame of reference

Frame of reference, a very popular method employed to help with motion sickness, means adding a visual frame of reference (a rest frame) to the image. It helps to reduce the sensory conflict between visual and vestibular sensations. In 2011 a team of Chinese scientists researched the effects of different types of rest frames [11]. They used both the SSQ and psychophysiological signals (e.g. EEG) to measure motion sickness with and without rest frame conditions. Both SSQ and EEG scores were reduced significantly with a rest frame present. Two years later, in 2013, a study used a roller coaster simulation [10] both with and without a rest frame to demonstrate whether rest frames reduce motion sickness. A total of 22 participants were exposed to the simulation and their EEG was measured. A grid of 2 horizontal and 2 vertical white lines was used as the rest frame. Immediately after the exposure, subjects filled in the SSQ. The rest frame proved to reduce motion sickness. A virtual nose [12] was introduced in 2015 by researchers at Purdue University as a way to reduce motion sickness: a model of a human nose was placed in the center of the view in a simulation for the Oculus Rift. They evaluated 41 subjects and concluded that this method is an inexpensive means of reducing motion sickness.


Figure 2.1: Mean SSQ scores of two conditions (Rest frame and Non-rest frame) in the effective group. N = Nausea; O = Oculomotor; D = Disorientation; *p < .05. [10]

2.3.2 Visible path

The visible path method is understood as simply placing waypoints along a predefined path in the virtual environment. Thanks to the visible waypoints, users should be able to anticipate movement in virtual reality. This is different from the situation when the user is in control of the movement. Users in control are able to anticipate the movement, and this could be the reason why being in control weakens motion sickness. The visible path method separates control from anticipation: users are placed in a passive seated experience and can only anticipate the movement by observing the waypoints marking the path, but they are not in control of the movement.

2.3.3 Field of view

Different fields of view (FOV) also seem to have an effect [13]. A group of researchers in the Netherlands exposed participants to a simulation for 50 minutes and tested two kinds of FOV - internal and external, the external being the screen size and the distance from it, and the internal being the angle of the in-game camera. They concluded that more motion sickness occurred when those different FOVs were congruent, i.e. when internal and external FOVs were in agreement.


2.3.4 Speed

The problem of movement in virtual environments is tightly coupled with the theory of sensory conflict: the user sees movement in the simulation, but their real body does not move. There have been studies to help understand the best way to move in a virtual environment while causing as little motion sickness as possible. In 2001, 96 Chinese men participated in an experiment to find out which speeds are suitable for simulated tours in virtual environments [14]. Eight different speeds were evaluated, each on 12 participants. Nausea ratings were recorded using the SSQ. The results were that the longer the exposure (ranging from 5 to 30 minutes), the higher the nausea ratings, and that nausea increased with speeds from 3 m/s to 10 m/s, with 10 m/s producing the highest nausea, stabilizing thereafter.

Figure 2.2: Navigation speeds (m/s r.m.s. in fore-and-aft axis) [14]

2.3.5 Locomotion

Two studies were performed by the group of Jose L. Dorado. The first study, in 2014 [15], examined a simple condition: whether the mapping between joystick control and movement in the virtual environment influences motion sickness. Three different navigational mappings were tested: constant speed, direct mapping to speed and smoothed constant speed. Twenty participants were recruited for this experiment and the SSQ was


used to measure motion sickness. The results were inconclusive for the mappings in general; however, there was a tendency in the data favoring the speed mapping over the other two mappings. Experiments using a virtual ramp instead of stairs were performed both in the 2014 study and in 2015 [16], first with 22 students and later with 34 students. The virtual ramp was shown to reduce motion sickness in comparison with the stairs.

Figure 2.3: Geometry Conditions: Stairs and Ramp Modes [15]

Another approach is to use portals to teleport users [20] instead of actual movement. This approach can also be used to reorient users or to work within the limitations of the physical space. In the study, they argue that real walking has the advantage of increasing presence and reducing mental load. A few approaches can be used to allow free walking in virtual environments while taking real-world limitations into account at the same time, for example simulating a 360-degree turn while the user performs only a 180-degree turn. The novel approach is to show users that they are nearing the edge of the workspace boundary and allow them to open a portal they can walk through to reorient and reposition themselves. This approach does not cause additional motion sickness.


Figure 2.4: User is reoriented into the center of the CAVE by walking through a portal. [20]

2.3.6 Pharmaceutics

In 2015 a comprehensive summary of knowledge about motion sickness [17] was published. One of the ways mentioned in this work to alleviate symptoms of motion sickness is pharmaceutics. There are at least nine different kinds of drugs used against motion sickness, for example antihistamines.

2.3.7 Habituation/Adaptation

At least three sources conclude that adaptation to the VR experience is a proven way to reduce the severity of motion sickness. Many developers of VR experiences give talks about best practices in VR development. One of the points mentioned is that VR developers are the worst testers. The reason is that they have become habituated to motion sickness: they spend a lot of time in VR experiments which are only prototypes and early versions. These projects lack the required polish, and developers experiment with different approaches before settling on the one which will make it into the final product. By being constantly exposed to these imperfect conditions, VR developers adapt and experience less motion sickness. Two of the studies are from the year 2000. The first was performed by scientists at Loughborough University in Leicestershire [32]. They exposed 19 subjects to the game Wipeout using an HMD for five days. A simple 4-point malaise scale was used to rate the severity of


nausea. The proportion of users who did not report symptoms of nausea increased steadily with each day of exposure, and the researchers concluded that habituation had occurred. The second study [18] showed that longer exposure to a simulation produces more motion sickness. It also evaluated the effects of repeated exposures, concluding that adaptation occurs almost every time. They used the SSQ to measure motion sickness. For the exposure duration, four categories were used: 0 to 1 hour, 1 to 2 hours, 2 to 3 hours, and 3 or more hours. For the repeated exposure, 7 sessions were recorded. Both duration of exposure and repeated exposure were shown to be linearly related to motion sickness outcomes.

Figure 2.5: Mean total reported sickness score, with standard error bars, as a function of number of successive flights in a single helicopter flight simulator. [18]

Lastly, a study from 2008 immersed 70 people on 10 occasions [33]. Several objective measures were used before and after the exposure, as well as a subjective measure using the Pensacola Simulator Sickness questionnaire. Over the period of ten sessions, the overall mean symptom score decreased for the group of participants, with the exception of one participant who reported increased symptoms.


Figure 2.6: Comparison of mean group response with participant who reported increased symptoms. [33]

Guidelines for the alleviation of motion sickness [24] also state that adaptation is one of the strongest and most potent fixes for motion sickness. The previously mentioned study on fields of view [13] also observed an effect of habituation.

2.3.8 Galvanic vestibular stimulation

The conflict cue theory arises from differences between visual stimuli and inner-ear sensations. Researchers at the University of Iowa investigated whether stimulating the vestibular system [19] can help with motion sickness. They argued that adding vestibular stimulation might lead to more realistic driver behavior in driving simulators. Galvanic vestibular stimulation (GVS) was used on 19 participants. Nausea rates were evaluated using the SSQ. Among other things, GVS was found to be successful in reducing motion sickness.


Figure 2.7: Mean and standard error of post drive SSQ scores (GVS). [19]

2.3.9 Cognitive load

One of the studies looked into whether having the mind occupied with an audio letter-memorising task [21] can reduce the severity of motion sickness. They evaluated sixteen subjects and concluded that mental distraction can reduce motion sickness by 19%. To measure motion sickness they used the 11-point misery scale (MISC). They also used the Motion Sickness Susceptibility Questionnaire (MSSQ) to rate subjects’ susceptibility to motion sickness before the experiment; only susceptible subjects who had experienced motion sickness before were included in the study.

2.3.10 Posture

Two studies, one from 1995 and the second from 2006, both by researchers (notably John F. Golding) at the University of Westminster, deal with posture. The earlier study [22] compared a seated position with a supine (lying down) position. Subjects were exposed to horizontal and vertical motion; the former proved to be twice as nauseogenic as the latter. Another interesting finding was the time needed to achieve moderate nausea: it differed across scenarios, ranging from 9 to 27 minutes. They used the MSSQ to evaluate susceptibility to motion sickness. The result is that the least nauseogenic is the supine position together

with horizontal movement, then the seated position with vertical motion, followed by the supine position with vertical motion. The most nauseogenic is the seated position together with horizontal motion.

Figure 2.8: Nauseogenicity increases towards the top. [22]

The more recent study from 2006 [23] argues that protective postures such as the supine position might be incompatible with task performance. This study investigates motion sickness susceptibility and references many relevant studies. It mentions several aspects of motion sickness, including women being more susceptible to it than men and the term mal de debarquement (“sea legs”, also mentioned in [5]) for the sensation of unsteadiness when a sailor returns to land, and it also states that age has an effect. According to one of the referenced papers, individuals with high levels of aerobic fitness appear


to be more susceptible to motion sickness. It also mentions several sources on habituation being an effective measure against motion sickness.

2.3.11 Breaks

It is common sense that taking breaks can also help to reduce motion sickness, and this is backed by experience according to [24]. With the knowledge from [18] we can understand that taking a break separates the exposure into shorter and more comfortable sessions. Even so, motion sickness symptoms might eventually add up, so a longer pause might be required. Shorter and more frequent sessions might also help with adaptation.

Figure 2.9: Mean total reported sickness score, with standard error bars, as a function of flight duration in a variety of helicopter flight simulators. [18]

2.3.12 User in control

Many studies support having the user in control of the camera/movement in the virtual environment. For example, a study from 2008 [25] states that higher levels of symptoms were reported during passive viewing compared to active control over movement in the virtual environment. This finding might be relevant especially for VR experience designers


(not necessarily VR game designers, where having players in control is common due to the nature of games - the experience is driven by the player).

Figure 2.10: Comparison between active and passive participants [25]

The 1995 paper [5] discusses the difference between a passenger and an operator. The theory is that the latter can anticipate movements, leading to less motion sickness. Two studies focused especially on the user in control. The first study [26] aimed to find out whether the real-world phenomenon of passengers being more prone to motion sickness holds in virtual reality too. They exposed pairs of participants to a simulation; one of them was in control of the movement, while the second played only a passive role. Passengers showed greater motion sickness. The second study [27] used the SSQ to measure the difference between the two conditions. It concluded that complete control reduced the severity of motion sickness.

3 Design & Implementation

In the following sections, we describe the scene design, important implementation details, the process of implementation, solutions to some of the problems, implemented features, the organization of objects in the scene, and profiling and optimization for performance. There were a few issues which were solved at some point during development and are not mentioned in the following sections. These were either small problems (not worth mentioning as separate subsections) or were solved by merely upgrading to a new version of the game engine. To give an example, buttons in the menu did not respond to input until the headset was active. Another problem was that the tree prefabs shared the same settings for everything, causing a change in one instance to propagate to the others. The solution was to duplicate the template and instantiate from the duplicate. Trello [34] was used to organize and plan the work. It served the project well.
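The tree prefab problem mentioned above is an instance of shared mutable state: all instances referenced one settings object, so mutating it through any instance affected the rest. The following is a minimal language-agnostic sketch of the bug and the fix (in Python rather than Unity C#, purely for illustration; the dictionary stands in for the prefab's settings):

```python
import copy

# Buggy: every tree instance references the same settings object,
# so a change made through one instance shows up in all of them.
template = {"height": 10, "leaf_color": "green"}
buggy = [template for _ in range(3)]
buggy[0]["height"] = 15
assert buggy[1]["height"] == 15  # the change propagated

# Fixed (analogous to duplicating the template before instantiating):
# each instance gets its own independent copy of the settings.
template = {"height": 10, "leaf_color": "green"}
fixed = [copy.deepcopy(template) for _ in range(3)]
fixed[0]["height"] = 15
assert fixed[1]["height"] == 10  # other instances are unaffected
```

The same reasoning applies to duplicating the prefab asset in Unity: instances created from the duplicate no longer share settings with instances created from the original.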

3.1 Scene design

From early on, the scene was designed as an experience, borrowing its flow from storytelling. The experience starts with a prologue, which is there to grab attention, to make the user "buy into" the experience. In the case of this experiment it is the initial area with a small farmhouse, cows, horses, a little pond, bushes and a butterfly. The initial waiting time is part of the scene, giving the viewer a chance to get used to the VR experience, to look around and enjoy the first scene. It is the only scene in this experience in which the viewer’s position is fixed and the only scene without the viewer’s movement. Nevertheless, head-tracking is active and users can look around and move and lean their heads. For some participants it was their first VR experience, and it was important to make them comfortable so they could enjoy the scene. This scene also sets the tone for the rest of the experience: there is no stylization, and the scene has a natural look. When the countdown runs out, the vehicle starts to move. The viewer is transported in a straight line through a valley. Right at the beginning, a horse runs alongside the vehicle for a few seconds. There is no acceleration.

17 3. Design & Implementation

It is a very calm ride without any distractions, with leafy trees around. This part of the experience lets participants get used to moving in the virtual environment without any surprises. Even though the viewer moves, it is less exciting than the initial scene. This part is timed to 1 minute.

What follows is a 30-second climb up the hill. It is still fairly comfortable, without rapid changes in direction or speed, and it builds anticipation. There are bumps on the terrain and the environment changes a bit: the terrain gets rocky and there are conifers instead of deciduous trees. 90 seconds into the experience, the viewer is at the top of the hill. Here, the viewer can observe the whole rest of the scene and the upcoming slope. Slowly, the vehicle points downward and a rapid descent begins.

This is the first part of the experience which might cause significant discomfort. Some participants might be afraid of heights, causing them to experience vertigo at the top of the hill, and when the descent starts, some might experience a level of motion sickness. It is the first point in the experience with fast acceleration. It is worth mentioning that even though there is acceleration, this part of the path is a fairly straight line without any turns or tilting involved. This part is also important because it sets the tone for the rest of the scene: a roller-coaster-like movement through the environment. It takes only 10 seconds, but the impression is very strong.

After descending from the top of the hill there is a first turn. The track tilts to the side as well, hugging the slope of the hill. There are no rapid changes in the speed of the vehicle; again, only one component of the motion is introduced, as there is only tilting in this part, without acceleration. It takes another 30 seconds to curve right and then start climbing again. This is the part where both tilting and acceleration are present; until now, they were always separate.
There is a steep climb up the hill, followed by a rapid descent into the rocky valley. This is one of the parts which caused me a level of discomfort during development, and it could be responsible for causing some discomfort to participants too. Again, to make it feel like an experience, a calmer part follows, without any acceleration. 185 seconds after the start, the vehicle slows down and levels out its tilt. What follows are two very steep hills with a slow climb up and a very fast plunge down. It takes 20 seconds to climb each hill, with a short straight path between the hills for the viewer to catch a breath.

During development, an interesting aspect of these steep hills emerged. There were no clouds in the sky, so the blue above has the same color everywhere. As the vehicle points straight up, there is nothing else to see but that single color. The viewer loses any reference points about their motion and it feels like nothing is happening unless they look down. This contrasts with the rapid descent downhill and makes it feel like there is more acceleration than there really is. In this part there is no tilting and the path is straight.

250 seconds into the experiment, there is a long right turn along the wall of the castle, followed by a part with a lot of tilting to both sides. The first tilting happens in the span of 7 seconds. What follows is a right turn making the track point towards the castle. 280 seconds after the beginning, there is a second tilting part lasting 20 seconds. Both tilting parts involve only tilting, without significant acceleration or changes in the direction of the track. The experiment ends after 305 seconds with the vehicle riding into a gate of the castle and the camera fading out to black.


Figure 3.1: Early draft of the scene

3.2 Implementation

The Unity 3D game engine was used for the implementation; scripts were written in the C# programming language. Development started on Unity version 5.4 and the engine was kept up to date during development. This made it possible to take advantage of the most recent features, such as the new version of the light-baking system, at the cost of having to rewrite obsolete code. Script execution order was defined for a few scripts to make them initialize in the proper order.

3.3 Setting up VR in Unity engine

Enabling VR development in the Unity engine is fairly simple. There is no need to install plugins or SDKs; a single checkbox in the Rendering settings enables development for the VR platforms.


Figure 3.2: Enabling VR in Unity

3.4 Scene hierarchy

All objects in the scene were organized under the following objects:

∙ Methods against Motion Sickness
∙ VehicleLeaning
∙ Level
∙ RailSystem
∙ Tools
∙ VRManager
∙ System
∙ KeyboardShortcuts

The object Methods against Motion Sickness contains classes dealing with settings for the methods, and a component to enable methods using keyboard shortcuts for testing and development purposes. The object VehicleLeaning contains the hierarchy for the cockpit and the user's virtual camera. The object Level contains the environment of the scene: the terrain with details, trees, buildings, animals and lights. The object RailSystem holds the entire path and the classes related to transporting the user through the virtual environment; its direct child objects are considered waypoints, objects marking the path. The object Tools contains a few handy classes, mostly for development and testing purposes. The object VRManager contains a script used to set up the VR device. The object System contains the EventSystem used by the UI. As the name KeyboardShortcuts suggests, this object contains a class implementing a few keyboard shortcuts used for development, testing and occasionally during the experiment while discussing it with the participant during general feedback.

3.5 Cockpit and the clipping plane

The virtual camera is placed inside a cockpit with a transparent hood; the user sits inside and watches the scene. At the beginning, there was an issue with the clipping plane. The user can lean forward and backward and the movement of their head is tracked, meaning they could lean towards the transparent hood as well. If the user leaned too far, part of the hood got clipped away, creating an "opening" in the hood. Adjusting the setting for the near clipping plane solved the problem (setting the value to 0.01 instead of 0.3).
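A minimal sketch of this fix, assuming the cockpit camera is the main camera (the component name is illustrative, not from the project):

```csharp
using UnityEngine;

// Pulls the near clipping plane close enough that leaning into the
// transparent hood no longer clips a hole into its geometry.
public class NearClipFix : MonoBehaviour
{
    void Start()
    {
        // Unity's default near plane is 0.3; 0.01 keeps the hood visible
        // even when the tracked head gets very close to it.
        Camera.main.nearClipPlane = 0.01f;
    }
}
```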

3.6 Cockpit controller

The rotation of the controller inside the cockpit was mapped to input from the game controller, reacting to the movement of an analogue stick. In the end, this feature was not used during the experiments. Future experiments, such as testing methods where users are in control, could potentially take advantage of this feature.

3.7 UI in VR

All user interface elements in VR need to be in world space; they cannot be fixed to the screen as in standard video games. This leads to issues such as the crosshair clipping with the geometry of the environment. The following screenshot from early development illustrates the issue.


Figure 3.3: Early version of the crosshair (reticle) clipping inside geometry of the cockpit

Later, the crosshair was replaced by the standard reticle from the VR Samples package (see App. L).

3.8 Keyboard shortcuts

I prepared the scene with a fixed 30-second waiting time at the beginning for participants to accommodate to the experience before the transition through the virtual environment begins. This works well when everything goes properly: the participant puts on the HMD, I help them adjust the straps, tell them what to do in the scene and the experiment begins. However, when there is an issue, such as the HMD taking longer to adjust into the correct position, or the participant having additional questions before the experiment begins, I need to exit the scene before the timer runs out, return to the main menu and launch it again. I did not implement keyboard shortcuts to skip or freeze the timer. If I were to test with more people, I would add this feature, because it would make the experiment process more streamlined, without having to relaunch the scene manually. I had implemented 3 shortcuts to skip to various parts of the track, which proved practical; I should have done the same with the timer.
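Such a timer shortcut was not implemented in the project; a hypothetical sketch of what it could look like (the field name and key bindings are illustrative):

```csharp
using UnityEngine;

// Hypothetical addition: shortcuts to skip or freeze the initial countdown.
// Not part of the actual project; timer field and keys are made up.
public class CountdownShortcuts : MonoBehaviour
{
    public float remainingTime = 30f;
    bool frozen;

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.S)) remainingTime = 0f; // skip the countdown
        if (Input.GetKeyDown(KeyCode.F)) frozen = !frozen;   // freeze/unfreeze it

        if (!frozen)
            remainingTime = Mathf.Max(0f, remainingTime - Time.deltaTime);
    }
}
```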

3.9 View recentering

A simple feature was implemented to recenter the view inside VR. At first there was an issue with it: after recentering, the view was shifted backwards from the correct position. Recentering was very practical for development, as well as later for testing, and needed to work correctly. The solution was fairly simple: forcing a recenter at the start and adding a corrective offset to the Camera object solved the issue. The values used for the offset were acquired by investigating the local position of the head node from the VR tracking input: InputTracking.GetLocalPosition(VRNode.Head).
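A sketch of this approach, assuming a parent transform above the Camera object carries the offset and the R key triggers a recenter (both assumptions, not confirmed by the text):

```csharp
using UnityEngine;
using UnityEngine.VR; // Unity 5.x namespace; later versions moved this to UnityEngine.XR

public class ViewRecenter : MonoBehaviour
{
    public Transform cameraOffset; // parent of the Camera object (illustrative)

    void Start()
    {
        // Force a recenter once at startup so later recenters behave consistently.
        InputTracking.Recenter();
        // Compensate for the head node's local position so the view is not
        // shifted backwards from the correct position after recentering.
        cameraOffset.localPosition = -InputTracking.GetLocalPosition(VRNode.Head);
    }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.R))
            InputTracking.Recenter();
    }
}
```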

3.10 Size of the scene

At the beginning, I created a default-size terrain in Unity, added a few waypoints (with default speed) around the map and measured the time to complete one round. I wanted a 15-minute experiment, so I created a new terrain with a corresponding map size. It took me longer than expected to build a track on half of the larger map, and I spent a lot of time on performance optimizations and changes in the map design to allow for additional tweaks. So in the end I ended up with a 5-minute experiment which was optimized and complete. I was worried it would not be enough, because some academic papers I read recommended VR sessions of about 10-20 minutes for motion sickness to show. But my thesis advisor wanted me to start testing early with the scene I already had (because he knew it would be hard to find participants). In the end, 5 minutes proved long enough for my test purposes, and it was also easier to test because each session with a participant took less time.


Figure 3.4: Screenshot of the Scene View in Unity engine illustrating size of the terrain

3.11 Terrain and textures

A few different textures were used to give the terrain a different look in various parts of the scene. Only seamless textures without borders were painted over the surface, avoiding a repeating "checkerboard" pattern. All textures had a natural look to approximate the realism of the scene. Textures were applied using the editor feature to paint a texture onto the terrain with a brush. Different styles of brushes can be used to achieve the desired look; for example, a natural look can be achieved by applying two textures using unevenly distributed brushes, so that the result looks natural thanks to the apparently random distribution of the two textures. Texturing was also used to fabricate shadows in the forest: spots of darker texture were painted under each tree to make it seem like the tree is connected with the ground. Another use for textures was to replace detail meshes on the terrain, such as rocks and bushes. In some parts of the scene, patches of details were causing drops in frame rate. As these details served only a cosmetic function, a small gain in performance was achieved by replacing them with textures. This also solved an issue with details popping in, by going entirely without details in some areas. It is worth mentioning that textures can be scaled similarly to 3D models; the scale was set to the default value or estimated experimentally.

3.12 Rail System

When I began designing and figuring out how to implement the track (the path which users take when being transported through the virtual environment), I looked at assets available online, but those that looked good were not free, and since I wanted a really basic track system, I thought I would create one myself. It was a good decision in the sense that I knew the system very well and was able to tweak it the way I wanted. However, for some reason I implemented a straight-line waypoint-to-waypoint system; I completely forgot about curves. I designed the scene and used this system for the whole track. I added interpolation to rotate the vehicle on the track towards the next point smoothly. It felt a bit too snappy, so I inspected my code carefully and found out that I was passing the new rotation value to the lerp function in place of the original value. After fixing that, it felt alright. However, after a couple of sessions with participants, I got feedback that the movement felt artificial and too discrete, not very smooth: "I got really irritated by the lack of smooth animations and movement. The abrupt changes of direction and speed felt artificial." As a solo developer and tester of my own project, I had become accustomed to the artificiality of the movement, and also to the way I hard-coded the speeds of the vehicle into the waypoints. I did implement interpolation, but sometimes it would have been better to simulate the speed (for example, according to weight) instead of setting the speed for waypoints manually.


Figure 3.5: Scene View in Unity showing waypoints

The Rail System was designed to transport any object (such as the vehicle) through the environment using waypoint objects marking the path. Because it can move any object, it can also be used for simple animations; for example, it was used for the butterfly flying around and for the running horse at the very beginning of the scene. It was also very helpful for debugging and testing to be able to assign any object and check how it is moved and rotated around.

To help with building the track, a method to visualize the whole track was implemented. At first this was done using a component which creates a textured line between each waypoint; however, this approach works only when the application is launched. Similarly, the solution of a trail renderer attached behind the vehicle was discarded. The purpose of this visualization was to assist when working in the editor. In the end, it was done using Gizmos, which render directly in the editor without the need to run the scene. By wrapping them in an #if UNITY_EDITOR directive, they execute only in the editor. This makes it possible to create a clean build of the scene for testing the experiment in the laboratory without having to worry about debugging tools being part of it.
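A sketch of how such a Gizmos-based visualization might look, assuming the waypoints are direct children of the RailSystem object as described (class name illustrative):

```csharp
using UnityEngine;

// Editor-only track visualization: draws a line between consecutive
// waypoints (direct children of the RailSystem object) without running
// the scene. The #if UNITY_EDITOR guard keeps it out of builds.
public class RailSystemGizmos : MonoBehaviour
{
#if UNITY_EDITOR
    void OnDrawGizmos()
    {
        Gizmos.color = Color.cyan;
        for (int i = 1; i < transform.childCount; i++)
        {
            Gizmos.DrawLine(transform.GetChild(i - 1).position,
                            transform.GetChild(i).position);
        }
    }
#endif
}
```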


Figure 3.6: Using Gizmos lines to visualize track in the editor

3.12.1 Waypoints

Waypoint objects can contain a Waypoint component which can define the next Waypoint. If it does not define any, the Rail System checks the object hierarchy and takes the next sibling object. Every direct child of a Rail System object is considered a waypoint. This behavior allows for branching paths and even changing the path on the fly; it was also very practical during testing and debugging to loop certain parts of the track. Thanks to this design, when no branching is needed, one only has to sort the waypoints properly in the hierarchy and the Rail System takes care of the rest; there is no need to link individual waypoints. Linking the last waypoint in the hierarchy with the first one is enough to create a circuit track.

The waypoint design was inspired by the training rings used in games, where the player has to complete an objective by flying through a predefined path. By having waypoints as separate rings or gates, the user immediately understands that the nearest one is the next intermediate destination. A faint color tint is applied if a waypoint applies a higher speed to the vehicle. Inside the ring of a waypoint there is an effect of a shrinking torus; the reasoning behind this is to grab attention and make users focus on the waypoint. It might have been worth investigating whether the presence of this effect influences the results of the experiment.
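The next-waypoint lookup described above can be sketched as follows (field and method names are illustrative; the project's actual signatures may differ):

```csharp
using UnityEngine;

// Sketch: an explicit link wins; otherwise the next sibling under the
// RailSystem object is used, so unlinked waypoints just follow hierarchy order.
public class Waypoint : MonoBehaviour
{
    public Waypoint next; // optional explicit link (enables branching and circuit tracks)

    public Waypoint Next()
    {
        if (next != null)
            return next;

        // Fall back to the next sibling in the RailSystem hierarchy.
        int index = transform.GetSiblingIndex() + 1;
        if (index < transform.parent.childCount)
            return transform.parent.GetChild(index).GetComponent<Waypoint>();

        return null; // end of the track
    }
}
```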


One of the last waypoints on the track triggers an effect of the camera fading out to black. After the fadeout finishes, the user returns to the menu screen.

3.13 Terrain and vehicle

One thing was hard to keep track of: clipping of the vehicle's 3D model with the geometry of the terrain. There were bumps on the terrain below the track, and the vehicle was transported over these bumps by the Rail System. Sometimes a bump was too high or too sharp and it would clip inside the cockpit, or the next waypoint would be below the current terrain height, meaning the vehicle would pass through part of the terrain to get to it. A simple approach was used to tackle this issue: a manual check for these clippings and manual adjustment of the terrain or the track (adding, moving or adjusting waypoints). It was very time consuming; sometimes the changes made were insufficient and several checks had to be done before the problem was fixed. Keyboard shortcuts to jump to specific waypoints on the track were implemented to make this process easier. In the future, I would simplify this process, perhaps by implementing a Rail System feature which would automatically check the path of the vehicle, creating a corridor or a tube and highlighting spots on the terrain where it would intersect. Another approach could be to make the vehicle always stay at a fixed height above all points of the terrain below, adjusting itself automatically, or to have a special path system which would redefine the terrain under the track and make it comply with constraints. Anything but a manual check would be better.

3.13.1 Rotation and leaning

There are two rotations the vehicle needs to perform at every step along the way. The first is the rotation towards the next waypoint, meaning the tip of the vehicle rotates to face the next waypoint. The second is the vehicle leaning from side to side according to the tilt of the track; this is done by looking at the tilt of the next waypoint. Of course, both of these rotations need to happen smoothly, with an interpolation between the previous and the desired value. This is done using the lerp function. This function takes two values, the original and the new value, and a parameter normalized between 0 and 1. When the parameter is 0, the original value is returned; any value between 0 and 1 returns a value interpolated between the original and the new value; and for parameter value 1, the new value is returned. The normalized distance between waypoints is used as this parameter, which ensures that the rotation happens in synchronization with the track. The parameter for rotating towards the next waypoint is multiplied by 1.1, so that the vehicle faces the target waypoint before reaching it (at about 90% of the way). Previously, the change was done in a fixed time with a predefined speed, meaning that sometimes the vehicle would face the next waypoint and tilt according to it too long before reaching it, and sometimes, when waypoints were close to each other, the vehicle would not have enough time to finish the rotations. Using the normalized distance solved these problems.
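The interpolation step above can be sketched roughly as follows, assuming the facing rotation is driven by the normalized distance along the current segment (field names and the OnWaypointReached hook are illustrative, not the project's actual code):

```csharp
using UnityEngine;

// Sketch: rotate the vehicle towards the next waypoint in sync with its
// travel along the segment, finishing slightly early (factor 1.1).
public class VehicleRotation : MonoBehaviour
{
    public Transform previous, next;   // current track segment endpoints
    Quaternion startRotation;          // rotation held at the previous waypoint

    void OnWaypointReached()
    {
        startRotation = transform.rotation;
    }

    void Update()
    {
        float travelled = Vector3.Distance(previous.position, transform.position);
        float total = Vector3.Distance(previous.position, next.position);
        // Multiplying by 1.1 makes t reach 1 at roughly 90% of the segment.
        float t = Mathf.Clamp01(travelled / total * 1.1f);

        Quaternion target = Quaternion.LookRotation(next.position - previous.position);
        transform.rotation = Quaternion.Lerp(startRotation, target, t);
    }
}
```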

3.13.2 Speed

The vehicle has a default speed of movement. This value was set experimentally, and the size of the terrain was tweaked with this value in mind. However, Waypoint objects can contain a component WaypointSpeed which overrides the default speed. The RailSystem checks for the presence of this component and, if found, interpolates between the current speed and the speed defined in this component.
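A minimal sketch of this override, assuming a per-waypoint component and a lookup in the RailSystem (the default value and the usage snippet are illustrative):

```csharp
using UnityEngine;

// Optional per-waypoint speed override, as described above.
public class WaypointSpeed : MonoBehaviour
{
    public float speed = 10f; // overrides the vehicle's default speed (value illustrative)
}

// Inside the RailSystem, on reaching a waypoint, something like:
//
//   WaypointSpeed over = waypoint.GetComponent<WaypointSpeed>();
//   float targetSpeed = (over != null) ? over.speed : defaultSpeed;
//   currentSpeed = Mathf.Lerp(currentSpeed, targetSpeed, t);
```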

3.13.3 Movement stuttering issue

With the first prototype of the Rail System there was an issue: the object being transported by the Rail System (back then a simple cube) was stuttering. The movement of the object was not smooth; it felt like frames being dropped. Forcing vertical synchronization through the Quality Settings did not help, since it is already used internally, and the same applied to using smoothDeltaTime instead of the regular deltaTime. The issue was first discovered during a screen recording session, but it persisted even without screen recording. A background process for adjusting the color temperature of the computer screen was turned off, but the issue persisted. An attempt to move the calculations to the FixedUpdate loop also failed to solve it. Finally, it was discovered that the culprit was a distributed computing application, even though it was set not to take up more than 25% of the CPU. This shows how resource-intensive a virtual reality system is, and that developers should always take the condition of their workstations into account when investigating a problem. Similarly, during one of the profiler investigations, significant frame drops were observed. A process of elimination was used to find the culprit, removing objects from the scene one by one until only basic 3D primitives remained. The performance issue was still present. It was then discovered that the cause was a backup program which had a task scheduled to run in the background. In following profiler sessions, it was always checked whether any background application was running immediately before testing the scene.

3.14 Implementation of the methods against motion sickness

Both of the methods are controlled by toggling their rendering on and off, meaning an element is either shown on the screen or not. As the participants only experience a single variant of the methods against motion sickness, there was no need to implement any transitions or effects during the activation or deactivation of a method.

3.14.1 Frame of reference

The method which shows a frame of reference to the user consists of two parts: the first is the cockpit and the second is the radial reticle. In both cases, it suffices to enable or disable rendering for these two objects; in the Unity engine, one only has to enable or disable the Renderer component. The 3D model of the cockpit remains loaded in memory, but that is not an issue because it has simple geometry and thus does not require significant system resources.
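A sketch of this toggle (class and field names are illustrative):

```csharp
using UnityEngine;

// Toggling the frame of reference: the cockpit and the radial reticle stay
// loaded in memory; only their Renderer components are switched on or off.
public class FrameOfReferenceToggle : MonoBehaviour
{
    public Renderer cockpit;  // Renderer of the cockpit 3D model
    public Renderer reticle;  // Renderer of the radial reticle

    public void Toggle(bool active)
    {
        cockpit.enabled = active;
        reticle.enabled = active;
    }
}
```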

3.14.2 Visible path

Like the previous method, this is a visual method. When it is activated, waypoints are shown in the scene, marking the predefined path which will be followed by the virtual camera. The implementation is fairly straightforward: enabling or disabling rendering for all waypoints. The screenshots in Figures 3.7 to 3.10 illustrate how the initial scene looks for each testing group (FoR, VP, FoR and VP, none). Note that the position of the first waypoint was adjusted to make it visible in the screenshots; it was placed further back in the actual experiment.

Figure 3.7: None (screenshot of the scene)

Figure 3.8: VP (screenshot of the scene)


Figure 3.9: FoR (screenshot of the scene)

Figure 3.10: FoR, VP (screenshot of the scene)

3.14.3 Mechanism

Every method is a child of the class AntiSSMethod. This parent class contains 3 members: one is a flag indicating whether the method is active, and the other two are abstract methods which prescribe what needs to be implemented in a child method. Each method needs to register with the MethodsSwitcher class using a unique string and must implement a Toggle method for turning it on and off. MethodsSwitcher contains a dictionary where the key is a unique string for each method and the value is a reference to the instance of the specific method. This approach is used to decouple method-specific behavior and simplify toggling a method on or off.

Which methods are enabled or disabled depends solely on the configuration. MethodsSwitcher implements an event listener SwitchMethods() which is executed when the configuration is updated. This again helps to decouple the setting of the values from the actual functionality. This robust system was implemented before it became clear that only two methods and their combination would be used for testing.
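A sketch of this mechanism; the exact member names and signatures in the project may differ, and the configuration parameter of SwitchMethods() is an assumption:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Parent class for all methods against motion sickness (sketch).
public abstract class AntiSSMethod : MonoBehaviour
{
    public bool IsActive { get; protected set; }

    public abstract string Id();           // unique string used for registration
    public abstract void Toggle(bool on);  // turn the method on or off
}

public class MethodsSwitcher : MonoBehaviour
{
    readonly Dictionary<string, AntiSSMethod> methods =
        new Dictionary<string, AntiSSMethod>();

    public void Register(AntiSSMethod method)
    {
        methods[method.Id()] = method;
    }

    // Event listener executed when the configuration is updated;
    // configuration maps method ids to their desired on/off state.
    public void SwitchMethods(Dictionary<string, bool> configuration)
    {
        foreach (KeyValuePair<string, bool> entry in configuration)
        {
            AntiSSMethod method;
            if (methods.TryGetValue(entry.Key, out method))
                method.Toggle(entry.Value);
        }
    }
}
```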

3.15 Scaling 3D models

Proper scaling of 3D models is important for VR experiences. Without it, it would feel strange for users to be present in an incorrectly scaled environment or to observe objects with improper scale. A basic human figure made with one of the freely available 3D modeling tools was imported into the scene. After that, with the HMD on my head, I adjusted the scale of the 3D model to approximate my own size. This was done both for a standing human figure and for a sitting model. The rest of the 3D models were always compared to the standing human figure, which functioned as a reference scale; in the engine, 1 unit of length is considered 1 meter. This approach was used to properly adjust the sizes of objects. Photo references were used to scale the 3D models of cows, horses and trees in comparison to the human figure. Photo references were also used while looking for textures which would make the environment look the way it was designed to look. There was a small issue with the texture of a horse's eye: for some reason, it was not imported with the 3D model. Opening the 3D editor, exporting the texture, assigning it to a sphere primitive and placing it at the correct place in the hierarchy solved the problem.

3.16 Draw distance and billboard issue in VR

A simple optimization called billboarding is available in the Unity engine when using a Terrain component. At a set distance, 3D models (e.g. trees) placed on the terrain are automatically replaced by a 2D texture. This improves performance and is hard to notice anyway. Unfortunately, at the time of development there was a bug in Unity causing these billboards to rotate according to the input from head-tracking instead of the fixed camera position. When the user moved their head, even without moving through the virtual environment, the trees would start to swing. There is an option in the Quality Settings called "Billboards Face Camera" which supposedly solves this issue; however, it seemed to have no effect and the problem persisted. To solve it, I had to place the tree models as individual objects instead of terrain details and tweak their LOD to optimize performance in a different way.

Another issue with the terrain details was details such as rocks popping in as soon as they were close enough to render according to their settings. Clever placement of other objects to block the view and changing the elevation of the terrain (such as creating hills) to obscure the view were used to mask this problem. There are still a few parts of the experiment where the problem is visible; hiding it entirely would require a redesign of the whole scene.

3.17 OVRService

Occasionally there was a problem with the Oculus VR Runtime Service: the service stopped for an unknown reason and had to be launched manually. This happened only a few times and might have been present only with a specific version of the driver. The symptom of this issue was an error message stating "Can't Reach Oculus Runtime Service", and it was not possible to run the VR scene.

3.18 Performance optimizations

Performance, performance optimizations and experience with the process of applying optimizations are discussed in the following subsections.

3.18.1 Regular runs with Profiler

Regular testing of the scene with the profiler enabled was done to ensure steady performance. This was very important, because it helped steer both the design and the development towards smoother running. Keeping the average frame rate up was not a problem, but there were occasional frame drops. Dropping frames in VR is very uncomfortable and might influence the results of the SSQ, which is undesirable. These runs had a significant impact on the development. Various methods for performance optimization were used, and both in-editor profiling and profiling of a build were employed. In-editor profiling is useful for rough and fast checks during development. Using the profiler on a build is recommended, because the editor does not influence the results; even though it takes more time to start profiling, the results are more precise. By combining these two approaches, developers can optimize performance as they see fit. Unity provides integration with Visual Studio and an option to automatically connect the profiler to the build. This makes the profiler a great feature and easy to work with; it is even possible to place breakpoints in the code and step through the program with ease.

3.18.2 Hardware specifications

The Unity scene was developed on a desktop computer with an Intel i5 processor and an Nvidia GeForce 1060 GTX graphics card with sufficient system memory. The computer in the lab, where the experiment took place, had a GeForce 1070 GTX installed. There was also a difference in the model of the HMD: development was done on an Oculus Rift DK2, while the experiment was conducted with an Oculus Rift CV1. When using the DK2, the frame rate was locked to 75 frames per second; similarly, the CV1's frame rate is locked to 90 frames per second. It was necessary to test whether the experiment would run smoothly in the lab. Firstly, a build of the scene was profiled at the lab and checked for frame drops. To do that, a simple script which checked frames per second was used; basically, it counted how many frames fell below the target value (90 fps in the lab). Secondly, an adjustment of the Render Scale value was made on the development machine (with the DK2) to simulate the higher pixel count of the computer in the lab (with the CV1). Details about the method can be found at svbtle.com; basically, to adjust for more pixels, this value is increased from 1.0 to 1.3.
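The frame-counting script could look roughly like this (class and field names are illustrative):

```csharp
using UnityEngine;

// Sketch of the profiling helper: counts frames that fall below the
// target frame rate (90 fps on the CV1 in the lab).
public class FrameDropCounter : MonoBehaviour
{
    public float targetFps = 90f;
    public int framesBelowTarget;

    void Update()
    {
        // A frame longer than 1/target seconds means the target rate was missed.
        if (Time.unscaledDeltaTime > 1f / targetFps)
            framesBelowTarget++;
    }
}
```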

3.18.3 Occlusion culling and level design

I had an idea of what the scene should look like: a natural-looking landscape with a few interesting places, e.g. a relaxing initial scene which I called the Chill Zone, a giant hill with the first downhill section, a little village with a water well, and something interesting at the end, where I decided to place a castle. I wanted to provide an experience with a prologue and a build-up to a climax. This decision of having a wide open space, long draw distances and a substantial amount of detail did not go well with VR. I wanted to make it look better, to have a lot of detail, to be able to use higher anti-aliasing and better textures, but I could not due to the performance. I wanted a nice-looking natural scene, but I could not achieve that in the short time I had for the development; the details were too performance-demanding at this scope, mostly because of the draw distance. I tried to add fog, but it looked awful.

I had to change the landscape a bit to at least take advantage of a few basic performance optimizations. Especially occlusion culling [35] brought a huge performance gain, but I had to introduce a lot of terrain changes; even then, I could not use it effectively without changing a substantial amount of the track. I also used LOD, baked lighting and a few more optimizations. If I were to do it again, I would not choose an outdoor environment, or I would design the whole track and terrain with occlusion culling in mind right from the beginning. I used the basic terrain in Unity with default settings, and it was fine for basic terrain design. It is also pretty easy to do: just like in image editors, you can select a brush, its size and shape, and "draw" the terrain height and textures, smooth them out, and put in details like rocks and trees. However, it looks ugly, because the geometry of the terrain is limited and there are sharp edges everywhere. Maybe I should have looked at some tutorials and 3rd-party terrain tools. Default settings were used for the size of the camera volumes. This optimization (after changing the map layout) brought the largest performance gain.
However, it took a significant amount of time to adjust the map to take advantage of the technique. It would be far better to start building the scene with it in mind instead of adjusting for it in the middle of the work.

3.18.4 Other optimizations

Here are some other optimizations which were applied or at least tried out; some of them might be project-specific. Some of the advice on optimization came from articles and videos on how to optimize for smartphones, as both VR and smartphones share the need to optimize due to performance constraints. Tuning of the fixed timestep interval was tested; however, due to the lack of physics calculations, it did not provide a significant performance gain. All unused collider components were removed from the scene too. Lighting was baked and all light sources were static. The scene was tested with fog turned on to decrease the draw distance, however the edge of the fog was clearly visible, which would be distracting for users. The mesh of a waypoint was optimized and its polygon count reduced. Other optimizations which were tested or applied are single-pass rendering, shared materials and static batching. Lastly, the fixed initial 15-second countdown worked as a way for the frame rate to stabilize before the experiment began.

Figure 3.11: Screenshot of the Scene View in Unity with occlusion culling in action

3.19 Note on development process

Due to the nature of this experiment I had to create a virtual scene which might cause participants a level of discomfort, perhaps even induce a level of motion sickness. This was slightly discouraging at times, because I was purposely creating something which could eventually make other people sick. However, findings from this experiment may be applied in future VR projects to make those experiences more enjoyable.


Developing this scene made me feel a certain level of motion sickness at times, forcing me to take breaks from development or to switch between different aspects of the work to rest from the discomfort. It was difficult to work on the project while experiencing nausea. Switching between programming or editing the scene in the editor and observing the results in VR (wearing the HMD) can be cumbersome at times. The developer has to constantly put the HMD on and take it off while fine-tuning variables or debugging, which is a greater overhead compared to development for standard videogames. Keeping the HMD on the forehead like regular glasses is possible, but the HMD is heavy, and after hours of development in this fashion one might experience some neck pain.

3.19.1 Health and Safety Warnings

The health and safety warnings included in the Oculus Home (Oculus App) software were a nuisance during development. The developer has to put on the headset and look directly at an element in VR for a short period of time to agree with the warning. This was required every time the scene was launched, making the iterative process of tweaking the code/variables and jumping into VR less streamlined. However, this is no longer the case: in the current version of Oculus Home (since v1.15) the developer can tick an option in the program to disable these warnings. It is only required to watch a few-minute-long instruction video once. This small change was a good improvement and made development slightly easier.


4 Methodology

The following chapter describes the design of the experiment, the reasoning behind it, the testing procedure, data collection, and the promotion used to find participants.

4.1 The experiment

The primary goal of the experiment was to find out whether the selected methods work against motion sickness. To measure the severity of participants’ motion sickness, the standardized Simulator Sickness Questionnaire (SSQ) was used. The questionnaire is widely accepted and has been cited over 800 times [36]. It provides a list of symptoms, each rated on a severity scale from none to severe. Participants evaluate themselves, meaning the obtained scores are a subjective measurement. One future improvement could be to use an objective way to determine the severity of motion sickness, perhaps an EEG measurement. The SSQ total score is obtained by following the formula in [28]: first the three symptom-cluster totals are calculated, then entered into the formula TS = ([1] + [2] + [3]) × 3.74. Participants were randomly assigned to 4 groups, each consisting of 15 participants; the number was recommended by the advisor. The following table illustrates which methods were enabled for each group of participants:

Group    Frame of reference    Visible path
FoR,VP   Yes                   Yes
FoR      Yes                   No
VP       No                    Yes
None     No                    No

Table 4.1: 4 groups of participants and methods (Frame of reference, Visible path)

The purpose of these 4 groups was to test the selected methods both independently and in combination. The fourth group, ’None’, served as a control group.
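The SSQ scoring described in section 4.1 can be sketched in a few lines. The subscale weights (9.54 for nausea, 7.58 for oculomotor, 13.92 for disorientation) and the total-score multiplier 3.74 are the published values from Kennedy et al. [28]; the function name, dict layout and the assumption that the inputs are the unit-weighted symptom sums of the three SSQ columns are mine.

```python
# Sketch of SSQ scoring (weights from Kennedy et al. [28]).
# raw_n, raw_o, raw_d are the unit-weighted symptom sums of the
# Nausea, Oculomotor and Disorientation columns (each symptom 0-3).

def ssq_scores(raw_n, raw_o, raw_d):
    return {
        "nausea": raw_n * 9.54,
        "oculomotor": raw_o * 7.58,
        "disorientation": raw_d * 13.92,
        # total score: TS = ([1] + [2] + [3]) * 3.74
        "total": (raw_n + raw_o + raw_d) * 3.74,
    }

# e.g. raw sums 3, 4 and 2 give a total of (3 + 4 + 2) * 3.74 = 33.66
scores = ssq_scores(3, 4, 2)
```

The three subscale components generated later during the analysis (section 4.1) correspond to the first three entries of this dictionary.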

41 4. Methodology

It was the right choice to have as many questionnaires in digital form as possible: the researcher does not have to print as many materials, it saves the time and effort of transcribing, and there is less room for error. My advisor recommended that I use questionnaires such as TLX and PQ to collect interesting data from participants. A website form of the Task Load Index (TLX) questionnaire [37][38] was used. The original javascript was modified to download a .csv file with the answers when the form is submitted, which made the data collection process slightly faster. These files were individually imported into the appropriate data table. In the future I would automate this process even further and save the answers directly to the data table.

Some of the questionnaires were implemented using Google Forms [39]. Survey Monkey [40] was also tested for this purpose, but Google Forms was sufficient. A great feature of Google Forms is that the answers can be stored directly in a Google Sheet: every time a form is submitted, a new row is added to the table. The personal information and SSQ questionnaires used Google Forms. The presence questionnaire (PQ) [41] (Appendix D), consent form (App. B) and general feedback (App. C) were on paper; participants signed the consent form before the experiment. In retrospect it would have been better to have these questionnaires in digital format as well, because transcribing the data was a mundane manual task. In the future all the questionnaires would be digital.

After the data was collected, it was anonymized by removing names and emails, keeping only the participant number, which was then used to merge the data together. Pilot data was removed from the dataset. Gradually, all quantitative data were merged into one data file. Some additional variables (such as the components of the SSQ score) were generated using scripting syntax. Fixing typos was needed in some cases. Lastly, records were ordered by the method against motion sickness and analyzed.

4.2 Experiment process

The following section describes the entire experiment process, from the early stages to the end.


4.2.1 Pilot

At first, pilot testing was conducted with a family member and fellow students to check the process. Pilot testing is important because it helps to find flaws in the design of the experiment process and helps the researcher to establish a testing routine. The experiment can be improved by iterating (running a pilot test, finding flaws and improving the experiment). Data from pilot testing were not included in the data set for further analysis.

4.2.2 Preparation

A paper sign "Don’t interrupt, Testing in progress" was placed on the lab door so that the experiment could be conducted without participants being disturbed. The VR scene was launched and prepared by checking the head-tracking response, and the HMD lenses were cleaned with a suitable cloth. Two browser windows were launched (desktop shortcuts were created to make this more streamlined): the first contained the personal questionnaire to be filled in before the VR scene, the second the SSQ and TLX questionnaires to be filled in after it. Two copies of the consent form were printed to be signed, and the presence questionnaire and the general feedback form were printed out. Having a functioning printer right in the lab was very convenient. Lastly, a sheet with the participant’s number (used for filling out questionnaires) was also printed.

During working hours the doors of the university are open and participants came directly to the laboratory. On weekends I picked them up at the front entrance and took the elevator. This slight inconsistency could have influenced the results (participants climbing stairs compared to those taking an elevator; see the relevant study in 2.3.10).

4.2.3 Before the VR scene

After showing the participant to the lab, letting them look around and take a seat, the consent form was handed over and participants were asked to read it carefully, fill in the form and sign both copies with a supplied pen. They were told to ask if they had any questions. The participant handed the consent form back and it was checked whether the entered information was correct and complete (e.g. the participant number). Often, participants needed to check the computer for the current date (required in the consent form).

Then they were asked to fill in the prepared personal information form in an internet browser. When finished, the actual experiment could begin. After an incident when one participant’s phone was vibrating (ringing in silent mode), participants were told to turn their phones off. In the future it might be better to hand over a paper with instructions (containing the individual steps of the experiment) instead of telling each participant individually; this would also make the testing process more uniform.

4.2.4 VR scene

The participant was instructed to put on the HMD. The application was set to show a menu screen: menu options were visible on the LCD screen, but inside VR there was only a small area with a relaxing green-leaves texture and the menu options were not visible. This allowed the experimenter to switch the settings of the methods against motion sickness without the participant knowing which setting was used. After that the participant was asked whether they could see the scene properly; if needed, the experimenter helped the participant to adjust the straps and the HMD to fit comfortably. Then the testing scene was launched, with a fixed 15 s interval at the beginning. The user was asked again whether they could observe the scene properly, with the question: "Can you see the house on the left clearly?" If needed, the experimenter assisted the participant in adjusting the HMD. When the adjustments were finished, the scene was reset. After that participants were instructed to just watch the scene and look around until the experiment was over.

There was a slight issue with this approach. After the experiment, some participants expressed that they were not told what to do (even though each participant was told to watch the scene). When asked why they thought they had to do something, they answered that there were questions in the subsequent questionnaires suggesting a task was supposed to be performed. As an improvement for future testing, I would write instructions on paper clearly explaining the task, perhaps "Your task is to watch the scene. This is a passive viewing experience only.", or repeat the instructions a few times before beginning the experiment so that participants remember what they are supposed to do. As things stood, the experimenter had to explain that the questionnaires are standardized and some questions might not fit the experiment. Omitting irrelevant questions would help as well. Participants were also told not to speak during the experiment, and that at the end the screen would fade to black and they would return to the initial scene, by which they would know that the experiment was over.

4.2.5 Body posture differences

I observed participants while they were watching the scene and took a photograph if they had agreed to it in the consent form, wrote down feedback from the previous participant, checked the registration calendar, or promoted the experiment on a social network. However, I always glanced at the participant from time to time to make sure everything was in order.

This is where an issue with the testing procedure was discovered, late into the experiments: participants were sitting in many different ways, differing in body posture and in arm and leg positions. This might have influenced the results, as it was mentioned before (in subsection 2.3.10) that posture has an effect on SSQ scores. The photographs in figure 4.1 show only a few examples of different postures.

4.2.6 After the VR scene

When the VR scene finished and faded to black, the experimenter assisted participants in removing the HMD and provided napkins to wipe their forehead/face if needed. Sometimes participants sat and waited, or asked whether the scene was finished, even though they had been told at the beginning that the scene would fade out to black. In the future it would be practical to add a text informing them clearly that the scene is finished and that they can remove the headset. Additionally, providing a napkin could have reminded participants of sweating and thereby influenced the results.

(a) Relaxed posture (b) Crossed arms (c) Leaning on a table

(d) Holding a leg (e) Holding a chair

Figure 4.1: Various body postures

Then participants filled in the SSQ and TLX in the browser. After they finished, the presence questionnaire was handed over to them in paper form to fill in as well. The last part was the general feedback form: first, they were asked to write down their thoughts and opinions about the experiment, then we discussed the points they made to clarify and complete the feedback. Some of the participants had to be prompted to write down thoughts they had mentioned, or needed assistance with translation.

There were constant problems with the questionnaires being in English. Participants did not know some of the words used and either asked or used a dictionary. It is impossible to find out whether they understood all the questions properly, or even whether they answered to the best of their knowledge or just put in some answer. It is worth mentioning that even participants who were sufficiently proficient in English still had problems with a few terms used in the questionnaires. Perhaps it would have been better to translate the questionnaires and sacrifice the validation of the English version for the sake of clarity and understandability for the participants. But then again, it would be difficult to report the results in an English thesis. Nevertheless, problems with translation could have influenced the results.

After the discussion about their general feedback, participants were given an envelope with a discount voucher for a VR experience and thanked for their participation in the experiment. Then the laboratory was prepared for the next participant.

4.3 Gathering participants

A total of 60 participants were required to take part in the experiment. This section examines the process of looking for participants and registering them for testing sessions.

4.3.1 Promotion

Finding suitable participants is a crucial part of every experiment, and one has to look for them in the right places. As a student of the SSME program [42] I am supposed to have a wide variety of expertise spanning several disciplines, from deeper knowledge of information technologies to soft skills, and to apply it while working with diverse groups of people. For example, I have been learning about marketing and applied this knowledge while looking for participants, and used active listening [43] during sessions with participants in the laboratory.

The primary place used to search for participants was the social networking site Facebook [44], where many students and people of productive age are members. This is a good fit for experiments, because participants from Facebook will likely fall into similar age groups (18-40 years old). Other sources were student forums and friends (friends of friends and coworkers). Creating a Facebook page was the first step. It is important for the page to have followers - people who subscribe to receive updates from the page. I messaged many of my contacts, politely informed them about the purpose of the page and invited them to follow it. Facebook events were also created.

To actively seek people who might be interested in helping with the testing, many Facebook groups were used. Posting in groups for students proved effective: for example, student dormitory groups, groups for the years of study at various faculties, groups for free-time activities, groups for advertising, and groups for job seekers (including volunteering). A list of the groups which were used is in Appendix A. Posting in groups was scheduled and limited to a few groups a day to avoid spamming and to distribute the posts over time. A couple of template messages were also created and only slightly adjusted for each group to save time. Here is one of the template posts (translated from Czech):

“Hi students :) Who wants to try virtual reality? I am doing research and looking for volunteers. As a reward for taking part you will get a good feeling (knowing you helped a fellow student with his Master’s thesis :-D) and also a 20% discount voucher for @vrena (a virtual reality arcade).”

It was written in a light, playful way, with an emphasis on the value participants receive by taking part in the experiment.

Once, a situation occurred where a member of one of the groups had probably marked the post as spam. For several hours it was not possible to post in any group at all. However, it was still possible to comment on existing posts, and this kind of activity promotes a post to the top position in the group feed. Luckily, such a ban is only temporary. Similarly, it is not possible to post the same comment repeatedly. For this reason the template texts became valuable, only needing to be slightly adjusted to fit the audience within the specific group.

The template message above mentions that a participant who takes part in the experiment will get a 20% discount for a virtual reality arcade. I know the owner, and we agreed that he would provide discount vouchers for the participants. I believe that this added value motivated people to participate in the experiment. Another motivation was to help a student with their research and/or Master’s thesis. However, most of the participants simply wanted to try virtual reality for the first time. It is always important to think about what added value can be offered to participants. At the end of the session each participant received an envelope with the voucher (App. K.2).

As mentioned earlier, a Facebook page was created. After some thought it was named Virtual Reality Research Brno, with the intention to use it not only for this research, but for all related research in the future. As of now, this research and the research of a colleague have obtained their participants through this page. The page started small, with only a couple of friends being invited manually through a message. After consistent effort over a period of a few weeks, a majority of my contacts started following the page. It is worth mentioning that manual messaging is the way to go to get the first followers. There is an option to "Invite Friends", however these invitations are unintrusive and partly hidden in the user’s UI. Most users do not even know they received an invitation, and when they do, they have no idea what the page is about. In a message it is simple to explain and have a conversation about it. Interesting articles, videos, images and other types of posts, all related to the topic of virtual reality, were shared through this page. Thanks to this activity, other people started to join and the number of page followers grew organically. As of the moment of writing, there are 174 followers: a humble following of people interested in VR news and VR testing opportunities.


People in the groups or in the comments often have the same questions, so it proved beneficial to have a FAQ (Frequently Asked Questions) section available. Answers to frequently asked questions, such as location, contact information and the time needed, were made available to people interested in the experiment right below the registration calendar, in a box with a thick border to emphasize their importance. However, this was not enough in some cases; people tend to forget, or do not have access to the internet everywhere. The important information was therefore repeated in the email for registered participants and again in person on the premises (in the lab). This takes patience in some cases, because not all people are as tech savvy as students of the Faculty of Informatics. In retrospect, an age requirement (18-30 or so) should have been set and communicated; this would have prevented testing a few subjects who fell outside this range. However, thanks to the places of acquisition (selected Facebook groups), this was only a small issue.

Another takeaway from promoting the testing sessions was that conversion rate matters: the more obstacles put between the user and the registration calendar, the worse it gets. For example, when a Facebook event was created with a link to the registration calendar in the event description, only 2.39% of the users who viewed the event came to the testing. Even though the link to the registration calendar was at the beginning of the description, it was still slightly hidden. Linking directly from a post to the registration calendar proved to be a much better and smoother experience for the participants. Facebook events were only used to accompany the testing for each week, and links to the calendar were posted there as updates. This is a visible action to the people interested in the event, because a notification (Fig. K.1) is shown, and it serves as a reminder or grabs their interest.
In the future I would experiment with physical posters and leaflets put on university boards if I were to test again. This time I abandoned the idea, because testing was done during the summer holiday, when there are no students roaming the university halls. Note that at first I documented my development process through Twitter and Twitch. Later, about a month before the experiment, I ceased to do so and also removed past posts and references to the nature of the experiment. This was done to obscure the true purpose of the experiment, so that participants would not be able to use a search engine to find out about it and be influenced by it.


4.3.2 Registration for experiments

A calendar created in Google Sheets [45] was used for the registration of participants. During the first week, registration was done in Doodle [46]. However, there were issues with it. Firstly, the user interface was lacking when multiple session times were created, producing a long, horizontally scrollable bar of session times. Secondly, the description field did not allow hyperlinks and was restricted in length. Lastly, it did not provide a way to exchange contact information in case the participant or the experimenter needed to make changes or cancel a session. In contrast, a custom calendar created in Google Sheets allowed for a registration solution designed specifically for the needs of the experiment, the participants and the experimenter.

The entire Google Sheets calendar is in English, because the entire experiment was conducted in English. The idea behind this was that participants who were able to follow the instructions in English would be able to handle the questions in the questionnaires. However, there were a few less commonly known English words in the questionnaires, and participants occasionally had to use a dictionary. This might have influenced the results, because the questionnaires were calibrated for native speakers.

There was a small issue with using a Google Sheet as a registration calendar: people using their mobile phones have to install an application to edit the document. This is an obstacle for participants and might have put off a few of them. In the future it would perhaps be better to use a responsive website for participant registration. The process of registration was as follows:

1. The participant chooses a date and time for their session (Fig. 4.2).

Figure 4.2: Participant fills in the registration (example)

2. I checked the calendar every day and sent a confirmation email (full text in App. K).


Important parts of the email were highlighted / in bold to be easily readable.

3. The participant’s name is marked green in the calendar (Fig. 4.3). At the same time, the laboratory is booked using a private calendar and the participant is assigned to one of the 4 testing groups.

Figure 4.3: Participant session marked green (example)

4. A reminder email is sent to participants the day before their session.

Hello,

Just a reminder, you have a VR testing session tomorrow :)

Cheers RL

When sending an email to multiple participants, blind carbon copy (Bcc) was used. Sending a reminder email is a good way to remind participants about their upcoming session; however, even then some of the participants did not show up. One must not be discouraged when this happens. After waiting 15 minutes for a participant, a friendly email was sent asking whether they were coming that day. Some of the participants did not answer; luckily, some did, and a substitute session was planned. Personally, I was really angry when participants did not show up: it was an hour of wasted time, especially when there were a couple of participants spread over a whole day and it took 2 hours or more to wait for the next one. However, I understand that people are busy and might have had something more important to do. And I know that a friendly email is far better than an angry one; only the former is worth writing. In the future I would add a step where participants reply to the first email and confirm their participation. I believe this might alleviate the problem of people skipping their sessions. About 20% of participants did not come to their session (for example: 3 out of 14).

Having a waiting list in the calendar was useful. Sometimes people are willing to participate, but the available sessions do not work for them, or the sessions are already full. There were not many such people, however it was good to have a few people ready to contact for the upcoming week instead of beginning every week from scratch. Another way to increase the number of participants is to ask participants at the end of their session whether some of their friends or colleagues would like to try virtual reality too; this brought me a few extra participants. Lastly, it is good to ask participants to recommend places to look for more participants. Several places and groups were recommended to me by participants.

There is an option in Google Sheets to protect some of the cells within the document (as shown in Figure 4.4). This feature was used to prevent participants from accidentally deleting or overwriting important information. It is worth mentioning that the protection is fixed to a concrete area of cells, so checking the protection after changing the structure of the document is recommended. One week there was an issue with the registration, because I forgot to update the cell protection and participants could not register for a few hours! Luckily, one of the participants sent an email notifying me about this problem.
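The Bcc handling mentioned above can be illustrated with Python's standard email module. This only shows how the message could be constructed with participants hidden in Bcc; the addresses and the helper name are placeholders, and the actual delivery mechanism used in the experiment is not specified in the text.

```python
# Illustration of one reminder sent to several participants with
# their addresses in Bcc, so recipients cannot see each other.
# Delivery (e.g. smtplib.SMTP(...).send_message(msg)) is omitted.
from email.message import EmailMessage

def reminder_email(sender, bcc_addresses):
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = sender                      # experimenter as the visible recipient
    msg["Bcc"] = ", ".join(bcc_addresses)   # participants stay hidden from each other
    msg["Subject"] = "VR testing session reminder"
    msg.set_content("Hello,\n\nJust a reminder, you have a VR testing "
                    "session tomorrow :)\n\nCheers RL")
    return msg
```

Mail servers strip the Bcc header before delivery, which is what keeps the participant list private.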

Figure 4.4: Cell protection feature in Google Sheets

Keeping track of the participants is not an easy task. Some people cancel their session, some postpone or reschedule. Having a table with names, emails, session times and participant numbers is a must. Color coding was used to distinguish participants who participated from those who did not. It was a manual task of checking the names against the calendar, the list of participants and the waiting list. In the future it would be better to use an email newsletter service to keep track of the waiting list and to manage prospective participants.

I benefited from knowing fellow students who had already done testing. They shared valuable insights with me, which allowed me to do the testing properly, with fewer problems and faster. I recommend anyone conducting research such as this to ask around for advice.1

4.3.3 Participant assignment to a testing group

While a participant is being registered for a specific testing session, the laboratory is reserved for that time, the session is marked in the calendar, and the participant is assigned to one of 4 testing groups:

1. "FoR,VP"
2. "FoR"
3. "VP"
4. "None"

These 4 groups correspond to the 2 methods against motion sickness whose effects are being evaluated. The first group experiences the VR scene with both methods active, the second and third groups have only one method active, and the last group has both methods disabled. This configuration is used to check how the individual methods perform and whether their combination gives better results compared to only one active method. Each group consists of 15 participants, meaning there are 60 participants in total.

Participants were assigned to their respective groups using random.org [47] (a website for random number generation). The website generates a random number between 1 and 100 by default. This interval was divided into 4 equal intervals; for example, if the random number was 100, the participant was assigned to the 4th group. Later, when there were empty slots after a few participants canceled their sessions, assignment was a matter of filling these empty slots first. Near the end, when two groups were already full and all participants in these groups had already been tested, the interval of 100 was split into two intervals to assign participants randomly. This approach is in contrast with the previously planned structure, in which each week of testing corresponded to a specific testing group. This improvement was recommended to me by Adam Qureshi, a researcher at Edge Hill University.

An important note: neither age, gender nor any other demographic characteristic played a role in the assignment to groups. However, participants were acquired through specific groups on a popular social network, meaning they would probably fall into the category of 18-40 years old, and they would all have basic skills in using a personal computer and the English language. This indifference toward distinct demographics might influence the results of this experiment. For example, there may be different male:female ratios in each group, the average age may differ between groups, current status may differ, and more.

There was an older participant from abroad. The results of this participant's testing were not included in the research, to avoid skewing the final results. However, there might still remain some participants who are skewing the results in some way for similar reasons.

1. If you wish to use the Facebook page Virtual Reality Research Brno to find participants or to seek advice, I will be glad to help. Feel free to reach out to me.
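The basic assignment rule described above can be written down as a short sketch. In the experiment the number came from random.org; random.randint stands in for it here, and the group order follows the numbered list in section 4.3.3 (so 100 falls into the 4th group, "None").

```python
# Sketch of the group-assignment rule: a random number in 1..100
# is mapped onto 4 equal intervals, one per testing group.
import random

GROUPS = ["FoR,VP", "FoR", "VP", "None"]

def assign_group(n):
    if not 1 <= n <= 100:
        raise ValueError("expected a number between 1 and 100")
    return GROUPS[(n - 1) // 25]  # 1-25, 26-50, 51-75, 76-100

group = assign_group(random.randint(1, 100))
```

The later adjustments (filling canceled slots first, splitting the interval in two once some groups were full) are not modeled here.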


5 Results

This chapter is divided into two parts. The first part looks into the quantitative data collected with questionnaires. The second part analyses the qualitative data gathered from the general feedback at the end of the experiment.

5.1 Quantitative

It was hypothesized that using either method against motion sickness would lead to lower scores on the SSQ (simulator sickness questionnaire). There were two methods: Frame of Reference (FoR) and Visible Path (VP). Four groups of participants were formed: two groups dedicated to the individual methods, one group for the combination of both methods, and one group with both methods disabled.

In short, the FoR method is the presence of a reference point in the viewer's field of view; in this case a cockpit and a radial crosshair. This method should lessen the discrepancy between the inputs from the visual and vestibular systems (often considered the cause of motion sickness) by providing a static frame of reference in the scene. The VP method meant that there were visible waypoints (circles) along the way, marking the path and helping the viewer to anticipate movement in the virtual environment. This again should have the effect of reducing the level of motion sickness.

The data was analyzed using IBM SPSS Statistics [48], a widely used software package for statistical analysis with a free educational license for students. A few charts were generated by SOFA Statistics [49], an open-source statistical package.

5.1.1 SSQ results

Figure 5.1 shows the SSQ scores for each group. At first glance, all groups with methods involved ('FoR', 'FoR,VP' and 'VP') show a lower median SSQ score than the 'none' group.

57 5. Results

Figure 5.1: SSQ scores

For those unfamiliar with this kind of chart, figure 5.1 shows a box plot, where the narrow lines show the lowest and highest values, the yellow box marks the lower and upper quartile, and the dark line in between shows the median value. Interestingly, the scores for the "FoR" and "FoR,VP" groups look very similar, only in the case of the latter there are fewer extreme values. This seems to correspond with the narrow range of values for the "VP" group. Even though it has the highest minimum SSQ score, it also has the lowest maximum SSQ score. However, its median value lies only halfway between "none" and "FoR", making the visible path the less effective method of the two. The frame of reference, on the other hand, shows a lower median SSQ score. What is unsettling is the variability of the SSQ scores measured for this method: it contains the most extreme values of the SSQ score.


More insight can be gained by looking at the histograms of the SSQ score for each method. Mean and standard deviation values are also present. The sample size was 15 for each group.

Figure 5.2: Histogram for the 1st group: FoR,VP

There are clearly only a few responses with higher SSQ values in the first group (5.2). Most participants in this group had SSQ scores of around 10 to 20. The mean value is 23.23, the standard deviation is 20.894.

Figure 5.3: Histogram for the 2nd group: FoR


The second group, on the other hand, has many more answers with higher SSQ scores (note the wider range of the x-axis, mostly between 40 and 60). The standard deviation is also a bit higher at 27.455. The mean value is 33.72.

Figure 5.4: Histogram for the 3rd group: VP

The third group is the only group whose histogram resembles a Gaussian function. However, this might be only because of the limited sample size (only 15 participants per group). In this case most participants scored around 10 to 20 and 30 to 40. The standard deviation of 12.881 is less than half of that of the FoR group. The mean value is 25.51.


Figure 5.5: Histogram for the 4th group: none

The mean value for the last group is 31.65. Interestingly, there are a lot of low SSQ scores in the range 0 to 40. The following Table 5.1 summarizes the important values for each group (mean and median values of the SSQ score and the standard deviation). Values are floored for readability. For more details see appendix H.

Method   Mean    Median  STD
FoR      33.72   18.22   27.455
FoR,VP   23.23   16.22   20.894
none     31.65   28.7    24.68
VP       25.51   22.22   12.881

Table 5.1: Mean, median and std. for each method (SSQ)

Both of the methods can be tested using an independent samples test [50] (considering a 95% confidence level). The 'none' group is used as a control group.
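The t statistic behind such a test can be illustrated with a short sketch. SPSS computes this together with the p-value; Welch's form, which does not assume equal variances, is shown here, and the sample data below are made up for illustration (they are not the thesis measurements).

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """t statistic for two independent samples, not assuming equal variances."""
    va, vb = variance(a), variance(b)  # sample variances (n-1 denominator)
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Hypothetical SSQ scores for a method group and a control group:
treated = [10.5, 18.7, 22.4, 9.3, 30.1]
control = [25.6, 33.0, 41.2, 19.8, 28.4]
t = welch_t(treated, control)  # negative t: the treated group scores lower on average
```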


Figure 5.6: Independent samples test (FoR method)

The significance level (Sig. (2-tailed)) is 0.829 (see Figure 5.6), meaning we cannot conclude that the FoR method helps to alleviate motion sickness.


Figure 5.7: Independent samples test (VP method)

The situation is similar for the VP method (shown in Figure 5.7). The significance level is 0.400 (above 0.05). Again it is inconclusive whether the VP method is effective against motion sickness.


Figure 5.8: Independent samples test (both methods)

When considering "FoR, VP" as a single method, the level of significance is again greater than 0.05. Results for this method are inconclusive too (Figure 5.8).


Figure 5.9: Independent samples test comparing FoR and VP

When comparing the methods directly with each other (Figure 5.9), there is also no significant difference (p-value of .306). And this time they even fail the test for equality of variances (Sig. is below 0.05). After testing the SSQ data of both methods for significant differences, we have to conclude there are none. There is too much variance within each group. However, looking at the mean values (Tab. 5.1), it seems that the frame of reference has little to no effect, while the visible path does seem to lower the SSQ score. 1

1. These results are a bit discouraging after taking the time to design and create the scene, experiment with participants and gather all the data. However, it is also interesting, because at a first glance at the charts one might hurry to conclude their hypothesis is true when (after carefully examining the data) the opposite is true. What is the reason behind the large values of the standard deviation?


5.1.2 Further analysis of SSQ results

Let's examine the reasons for the inconclusive results and look further into the data. There are two outliers visible in Figure 5.1: one in the group 'FoR,VP' and one in the control group 'none'. A table with extreme values is available in appendix H. There are many outliers (extreme values) in the samples. This becomes clear when comparing the extreme values in appendix H with the mean and median values (see Table 5.1). Extremely high values are present in all groups, and there are some extremely low values as well. See Table 5.2 (values are floored for readability, original values are in the appendix). For example, there are two results in the 'FoR' group with a total SSQ score of 82. These are very extreme values compared to the mean or median values. This is most likely the reason for such large values of the standard deviation.

Method   Mean    Extremely high       Extremely low
FoR      33.72   82 x 2, 70, 47 x 2   3 x 2
FoR,VP   23.23   68, 61, 42, 38       2 x 2, 3
none     31.65   93, 66, 57           2, 8, 10, 11
VP       25.51   52, 43, 37, 36       8, 11, 12, 13

Table 5.2: Mean values and examples of extremely high and low values for each method (SSQ)

Figure 5.10 shows percentiles for each method. By using the Outlier Labeling Technique [51] with a g-value of 2.2 [52] it is possible to determine lower and upper bounds and find out whether there are any outliers falling outside of this range. The formulas are Q1 − (2.2 * (Q3 − Q1)) for the lower bound and Q3 + (2.2 * (Q3 − Q1)) for the upper bound.
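Applied to the reported quartiles, these formulas reproduce the bounds in Table 5.3. A small sketch, using the quartiles of the 'FoR' group as input:

```python
def labeling_bounds(q1, q3, g=2.2):
    """Lower and upper outlier bounds per the Outlier Labeling Rule with multiplier g."""
    spread = g * (q3 - q1)
    return q1 - spread, q3 + spread

# Quartiles for the 'FoR' group (from Figure 5.10 / Table 5.3):
low, high = labeling_bounds(13.48, 47.92)
# low ≈ -62.29, high ≈ 123.69, matching the FoR row of Table 5.3
```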


Figure 5.10: SSQ percentiles

The resulting lower and upper bound values are shown in Table 5.3. It is immediately obvious that no outliers can have a lower score than any of the lower bounds (the SSQ score is always above 0). Similarly, there are no extreme values above 100, leaving only the upper bound of 88.49 of the VP method for investigation. According to Table 5.2, the highest extreme value in this group is 52, meaning that this approach to determining outliers has found none.

Method   Q1      Q3      Lower    Upper
FoR      13.48   47.92   -62.29   123.69
FoR,VP   6.74    38.18   -62.43   107.35
none     11.48   39.18   -49.46   100.12
VP       13.48   36.92   -38.09   88.49

Table 5.3: Lower and upper bounds (SSQ scores)

Method   Mean    Trimmed mean
FoR      33.72   32.74
FoR,VP   23.23   21.86
none     31.65   29.83
VP       25.51   24.94

Table 5.4: Mean and trimmed mean values for each method (SSQ scores)


Table 5.4 shows the mean and the 5% trimmed mean values for all the methods. This can be used to indicate how much of a problem the outlying cases are. The 5% trimmed mean is acquired by removing the top and bottom 5% of the cases and calculating a new mean. By comparing the original mean and the trimmed mean, it is possible to see whether outliers are having an influence. It seems that the mean and trimmed mean are fairly comparable for all the methods. The conclusion is that there are very few or no outliers [53]. Another way to look at the data is to study scatter plots. Figure 5.11 shows scatter plots for the individual groups. It is immediately visible that individual results are scattered from the lowest to the highest scores in all groups (except the 'VP' group, which does seem to cluster).
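The idea behind the trimmed mean can be sketched as follows. Note that this sketch uses simple integer truncation, whereas SPSS applies its own weighting when 5% of the sample size is not a whole number, so exact values can differ slightly; the scores below are hypothetical.

```python
from statistics import mean

def trimmed_mean(values, proportion=0.05):
    """Mean after dropping the top and bottom `proportion` of cases.

    Integer truncation; SPSS interpolates, so its results can differ slightly.
    """
    cut = int(len(values) * proportion)
    trimmed = sorted(values)[cut:len(values) - cut]
    return mean(trimmed)

# Hypothetical scores; with 20 values, one case is dropped from each end,
# so the extreme value 93 no longer inflates the mean:
scores = [2, 8, 10, 12, 14, 15, 16, 18, 19, 20,
          21, 22, 24, 26, 27, 29, 31, 35, 40, 93]
print(mean(scores), trimmed_mean(scores))
```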

Figure 5.11: Scatter plot for each group (y-axis: Total SSQ score, x-axis: Nausea score)


Figure 5.12: Scatter plot for all groups (y-axis: Total SSQ score, x-axis: Nausea score)

The second scatter plot (Figure 5.12) shows all scores in one chart. With the color coding, it is easy to spot clusters. There are some areas dominated by one or two colors (such as 3 'VP' measurements on the x-axis between 30 and 40, or several measurements of 'FoR' and 'FoR,VP' between 10 and 40 below that). Overall, however, results from all groups seem to be mixed together. Some of the previously mentioned studies (for example [10]) have evaluated separate components of the SSQ score (N, O and D). N stands for Nausea, O for Oculomotor and D for Disorientation [28]. For the sake of brevity, these results have been omitted from the text of the thesis. None of the separate components showed significant differences between the groups.
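For reference, the component and total scores follow the standard SSQ weighting scheme of Kennedy et al. [28]. A small sketch using the commonly cited weights; the raw symptom-cluster sums below are made-up inputs, not thesis data.

```python
def ssq_scores(n_raw, o_raw, d_raw):
    """SSQ component and total scores from raw symptom-cluster sums.

    Weights are the commonly cited ones from Kennedy et al.
    """
    return {
        "Nausea": n_raw * 9.54,
        "Oculomotor": o_raw * 7.58,
        "Disorientation": d_raw * 13.92,
        "Total": (n_raw + o_raw + d_raw) * 3.74,
    }

# Hypothetical raw cluster sums for one participant:
scores = ssq_scores(n_raw=3, o_raw=4, d_raw=2)
# Total = (3 + 4 + 2) * 3.74 = 33.66
```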

5.1.3 Personal Information Questionnaire

This subsection analyses the effect of various demographic and other parameters on SSQ scores. For the complete list of questions in this questionnaire see Appendix F.

Age

There were only 3 distinct age groups, as shown in Figure 5.13. For more details see Figure F.4 in Appendix F. Most of the participants

were students and young adults between 18 and 25 years. The second group, 26-33 years, had 22 participants. There were only 3 older participants.

Figure 5.13: Age groups summary

Let's examine the effect of age on the results of the SSQ scores. Note that the age of participants was not considered during their assignment to a particular method group (FoR, VP, FoR and VP, or none). This means a larger number of participants of a certain age could have been tested for one method, thereby skewing the results. See Appendix F for the age distribution between groups. However, analysis of the SSQ scores showed no significant difference between the method groups.


Figure 5.14: SSQ scores for 3 distinct age groups

There is little difference between the mean values of the SSQ Total Score for different age groups (as shown in Figure 5.14). This becomes even more obvious when examining the scatter plot (see Figure 5.15). The first and second age groups show a very similar distribution of values for the Nausea and Oculomotor components of the SSQ scores. This is confirmed by performing an independent samples test (see Figure 5.16). We can safely conclude that there is no significant difference between these two age groups. However, this does not mean that age has no effect on the SSQ scores. It is possible that for more diverse age groups the SSQ scores would be different (for example, a comparison between young and senior age groups could be an interesting addition in the future).


Figure 5.15: Scatter plot for 3 distinct age groups

Figure 5.16: T-test for 2 age groups


Gender

Gender did not play any role in the random assignment of participants to testing groups (FoR, VP, FoR and VP, none). This means there was no guarantee that equal numbers of male and female participants would be assigned to each group (see App. F for the gender distribution between groups). However, there was no significant difference measured between the testing groups in relation to the SSQ scores. There were 40 male and 20 female participants. Let's evaluate whether gender has an effect on the SSQ score (setting aside the fact that the gender distribution was not uniform between the groups). For more detailed information see Fig. F.6.

Figure 5.17: Gender and the SSQ score (box chart)


Figure 5.18: Gender and the SSQ score (scatter plot)

Figures 5.17 and 5.18 suggest there might be a difference between male and female participants in the severity of motion sickness. Interestingly, there is greater variability among the scores measured for female participants. The scatter plot suggests there could be a significant difference between the genders in their SSQ scores.


Figure 5.19: Gender (T-test)

The T-test in this case (Fig. 5.19) shows that equal variances cannot be assumed and we have to consider the p-value in the right column. Nevertheless, the p-value is within the limit and we would conclude a significant difference. Looking at the histograms for each gender (Fig. 5.20a and Fig. 5.20b) it is clear that the shapes of their distributions differ. This is a condition for using the Mann-Whitney U test. Results of this test can be seen in Figure 5.21. However, for this test the p-value (Asymp. Sig.) is greater than 0.05, meaning there is in fact no significant difference.2

2. There are two tests with conflicting results. Personally, I would not draw any conclusions here. There were too few female subjects and too many factors influencing the results.
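For readers unfamiliar with the Mann-Whitney U test, the U statistic itself can be computed from rank sums, as in this minimal sketch. SPSS additionally derives the p-value; the data below are hypothetical and ties receive averaged ranks.

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic via rank sums (tied values get averaged ranks)."""
    pooled = sorted(a + b)

    def rank(v):
        first = pooled.index(v) + 1  # rank of the first occurrence of v
        return first + (pooled.count(v) - 1) / 2

    rank_sum_a = sum(rank(v) for v in a)
    u_a = rank_sum_a - len(a) * (len(a) + 1) / 2
    u_b = len(a) * len(b) - u_a
    return min(u_a, u_b)             # the conventionally reported U

# Hypothetical SSQ scores for two groups (not the thesis data):
u = mann_whitney_u([12, 30, 45, 8], [22, 25, 40, 50, 33])
```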


(a) Female (b) Male

Figure 5.20: Histogram (Gender)

Figure 5.21: Mann-Whitney U test (Gender)


Weekdays and weekends

This subsection presents a comparison between testing sessions conducted on weekdays and at the weekend. The main purpose here is to analyze whether testing at the weekend yields different SSQ scores compared to testing during the week. Figure 5.22 shows how many participants were tested in each group. It is worth mentioning that even though these groups had almost identical numbers of participants, the assignment to the method groups (FoR, VP, FoR and VP, none) was randomized and there is no guarantee each group had the same number of participants assigned to weekdays and the weekend. Similarly, gender and other factors are in play.

Figure 5.22: Weekdays (summary)


Figure 5.23: Weekdays (box chart)

The SSQ scores look very similar for the two groups (Fig. 5.23). The scatter plot (Fig. 5.24) shows a very similar distribution for the Nausea and Oculomotor scores.

Figure 5.24: Weekdays (scatter plot)


This is confirmed by the results of the T-test (Independent Samples Test) as shown in Figure 5.25. For more details see Appendix F (Figures F.7, F.8 and F.9).

Figure 5.25: Weekdays (t-test)

Time of the day

A comparison was made between morning and afternoon testing sessions to evaluate whether the time of the day influences the SSQ score. This information could be useful for researchers who plan to evaluate SSQ scores. Figure 5.26 and Figure 5.27 show histograms for each group. There is little difference between the two (Fig. 5.28 and Fig. 5.29), and the T-test confirms it (Fig. 5.30). Even though the results show no difference, it is worth mentioning that the testing was done with students during the summer holidays. Students usually have free time during holidays and might experience little difference between morning and afternoon. Results could have been vastly different during the school year or for the working population. For more details see Figure F.5 in Appendix F.


Figure 5.26: Histogram for the morning sessions, shows frequency of the SSQ scores

Figure 5.27: Histogram for the afternoon sessions, shows frequency of the SSQ scores


Figure 5.28: Morning and afternoon sessions (box chart)

Figure 5.29: Morning and afternoon sessions (scatter plot)


Figure 5.30: Morning and afternoon sessions (T-test)

Computer usage

Participants answered the question "To what extent do you use a computer in your daily activities?". The answer was a number on a scale between 1 and 5, where 1 = Not at all and 5 = Very much (only the minimum and maximum values were labeled). Figure 5.31 shows the case summary. Higher computer usage seems to correlate with lower SSQ scores. However, there are clearly too few participants with low computer usage to draw any conclusions, and the distribution of participants with different amounts of computer usage was different for each group (see App. F).


Figure 5.31: Computer usage (summary)

Figure 5.32: Computer usage (box chart)


Figure 5.33: Computer usage (scatter plot)

Videogames usage

Let's examine whether playing videogames has an effect on the SSQ scores. Participants answered the question "To what extent do you play videogames? (eg. on PC, consoles, smartphone)". The answer was a number on a scale between 1 and 5, where 1 = Not at all and 5 = Very much (only the minimum and maximum values were labeled). Figure 5.34 shows the case summary. Note that this was a self-evaluation; there was no question about the number of hours spent weekly playing videogames. Some participants might have considered themselves non-players even though they played more than other participants who played less but evaluated themselves as playing a lot. There was a small number of participants who chose either the minimum or the maximum. Most participants considered themselves as playing a lot (chose 4 on the scale), however there was still a high variance between the groups in videogames usage (see App. F).


Figure 5.34: Videogames (summary)

Figure 5.35 and Figure 5.36 show the SSQ total score and the Nausea/Oculomotor scores for the self-evaluated groups. Unfortunately, there were only a few answers at the minimum and maximum, and there is probably little difference between the middle group (3) and the other two (2 and 4). To compare the effect of videogames on SSQ scores, "group 2" and "group 4" were selected. Both groups contain enough participants and are separated by the middle group (3).

Figure 5.35: Videogames (box chart)


Figure 5.36: Videogames (scatter plot)


Figure 5.37: Videogames (T-test)

However, there is no significant difference between these two groups according to the T-test (Fig. 5.37). To correctly evaluate this criterion, the participant onboarding process would have to be changed in the future to pre-select non-gamers (as a control group) and more frequent players. There could also be a difference between different "types" of players. Some of them might play a lot on smartphones, others on a desktop machine. These are different ways to play, involving different body postures and movements (standing vs sitting, fingers vs mouse) and more. A more thorough investigation could be a possible addition in the future.

Virtual reality usage

Participants answered the question "To what extent do you use virtual reality devices? (such as Oculus Rift, HTC Vive, Samsung Gear,

etc.)". The answer was a number on a scale between 1 and 5, where 1 = Not at all and 5 = Very much (only the minimum and maximum values were labeled). Figure 5.38 shows the case summary. Mostly there were only participants who did not have any previous experience with virtual reality. Only 9 participants considered themselves as having used VR a medium amount (value 3 on the scale). Therefore no conclusion will be drawn on whether having experience with VR helps to lower SSQ scores.

Figure 5.38: VR (summary)

Figure 5.39: VR (box chart)


Figure 5.40: VR (scatter plot)

Dominant hand

Even though it could have been an interesting study, all but two participants were right-handed, meaning no conclusions will be drawn about the role of the dominant hand.

Week of testing

This analysis serves only to verify whether the testing process changed between the weeks. If it had, it would mean there was a hidden problem in the process. First, let's examine how many participants there were each week (Fig. 5.41). Numbers fluctuated between the weeks, ranging from only 3 participants (in the second week) to 25 participants (in the 3rd). The second most numerous week was the 5th. Let's compare and test these two. Figure 5.44 shows there is no significant difference between scores acquired in the 3rd and 5th week. This does not mean there was no difference between scores acquired in other weeks. However, it does give some level of confidence that the testing process was uniform throughout the data collection phase.


Figure 5.41: Week of testing (summary)

Figure 5.42: Week of testing (box chart)


Figure 5.43: Week of testing (scatter plot)


Figure 5.44: Week of testing (T-test)

5.1.4 Presence questionnaire

In its original form the presence questionnaire [41] includes 24 questions. For the purpose of this research, 5 questions were excluded from the very beginning. These were the questions involving sound (which was not a part of the virtual scene) and haptics (it was a passive seated experience without any controllers). After the testing was finished and the general feedback analyzed (see Section 5.2) it became clear that not all included questions were suitable. These were the questions regarding actions, task performance, interactions and control, and there was one about objects moving through space even though there were none. Data collected from these questions was not omitted from the data set, however it will not be analyzed here. The complete set of frequency histograms is in Appendix G. The following questions are used for the analysis: ∙ 1. How much were you able to control events?


∙ 4. How much did the visual aspects of the environment involve you?
∙ 5. How natural was the mechanism which controlled movement through the environment?
∙ 7. How much did your experiences in the virtual environment seem consistent with your real world experiences?
∙ 9. How completely were you able to actively survey or search the environment using vision?
∙ 10. How compelling was your sense of moving around inside the virtual environment?
∙ 12. How well could you examine objects from multiple viewpoints?
∙ 13. How involved were you in the virtual environment experience?
∙ 15. How quickly did you adjust to the virtual environment experience?

Question 1: How much were you able to control events?

The data clearly shows that for most of the participants this was a very passive experience (Fig. 5.45). Interestingly, there was one participant who felt like they were in complete control. It is possible that they understood the question differently, or maybe they truly believed they were controlling the events (which does not necessarily mean they felt like they were controlling the movement).


Figure 5.45: It was a passive experience for most of the participants

Questions 4 and 13

The level of involvement was evaluated by these two questions. Overall, answers were very similar (Fig. 5.46 and 5.47) across all tested groups. In question 4 the 'none' group scored lowest and in question 13 it was a close second. This is perhaps because participants had nothing in the virtual reality scene which would represent them; they were only "eyes" without any body, shadow or other representation. Most involved were participants in the 'FoR' group for both questions. Strangely, the group involving both the frame of reference and the visible path came last in question 13. A view of the virtual environment obstructed by both the cockpit and the waypoints could be the reason for this.

Question 5: How natural was the mechanism which controlled movement through the environment?

An interesting conclusion can be drawn from the answers to the 5th question. Looking at the data in Figures 5.48 and 5.49 reveals a relationship between the frame of reference method and how natural participants considered movement in the virtual reality environment. Movement was extremely artificial for the group 'none'. On average movement was


(a) Question 4

(b) Question 13

Figure 5.46: Mean values of "involvement" for different groups


(a) Question 4

(b) Question 13

Figure 5.47: Box charts comparing values of "involvement"


"moderately compelling" (4 on the scale) for participants in the groups involving the frame of reference ('FoR' and 'FoR,VP'). Results of the group 'VP' lie between the control ('none') and the frame of reference groups. However, there are 3 extreme values in the 'none' and 'VP' groups towards natural movement. On average, movement felt more artificial than natural for all participants (a potential improvement for the future).

Figure 5.48: Question 5 (table)

Figure 5.49: Question 5 (box chart)


Question 7: How much did your experiences in the virtual environment seem consistent with your real world experiences?

The consistency of the virtual experience with the real world was examined here. As the table shows (Fig. 5.50), participants in the group 'none' felt the experience was least consistent with the real world. The highest consistency was reported by the group 'VP'. The score is slightly lower (4.20) when both methods are employed. This might have been caused by the simple geometry and texture of the cockpit. In the future the cockpit could be fully modeled with buttons and dials.

Figure 5.50: Question 7 (table)

Questions 9 and 12

In the general feedback (Sec. 5.2) it was reported that the cockpit (part of the frame of reference) obstructed the view. However, participants in the groups with the cockpit ('FoR' and 'FoR,VP') reported (Fig. 5.51) they could survey the environment more completely (means 5.87 and 6.00) compared to the participants without the cockpit (means 5.33 and 5.73). Examining objects from multiple viewpoints was easiest for the 'FoR' group (cockpit only) with a mean value of 4.53. The group with the visible path reported a mean of 3.87. This is interesting data, because it contradicts the general feedback: the data shows that the waypoints might have been blocking the view. Looking at the mean values for 'none' and 'VP' shows a difference in favor of the 'none' group.


Figure 5.51: Frame of reference (cockpit) did not obstruct the view

Question 10: How compelling was your sense of moving around inside the virtual environment?

Figures 5.52 and 5.53 show that there was little difference for the participants in each group. Movement felt most compelling for the 'none' group, with a mean value of 5.47, and least compelling for the 'FoR' group. The scale was from 0 to 7, which leaves room for improvement of the movement in the future.


Figure 5.52: Table comparing how compelling the sense of movement was for participants in each group

Figure 5.53: How compelling was your sense of moving around inside the virtual environment (box chart)

Question 15: How quickly did you adjust to the virtual environment experience?

There was little difference between the groups (Fig. 5.54, 5.55) in how quickly they adjusted to the experience. Note that this question has a specific wording for labeling its scale. The lowest score is labeled


"Not at all", the medium value "slowly" and the highest value is labeled "less than one minute". This is very specific information compared to the more abstract "slowly". Most of the participants answered 5 or 6 on the scale. Slowest to adjust were participants in the group 'FoR' and fastest in 'none'. This could be because of the adjustment involved in getting used to the visuals of the waypoints and the design of the cockpit.

Figure 5.54: Quickness of adjustment to the experience by the groups (table)

Figure 5.55: Comparing speed of adjustment to the experience by the groups (box chart)


5.1.5 Task Load Index (TLX)

The overall TLX score (Figure 5.56) was low on average (31.6), which corresponds to the answers to the first question of the presence questionnaire (Fig. 5.45). A passive experience does not require a lot of concentration and there was no difficult task to accomplish.

Figure 5.56: Overall TLX score (histogram)


Figure 5.57: Overall TLX scores for individual groups (box chart)

The overall TLX score was slightly different between the tested groups (Fig. 5.57). Group 'none' had the highest score with 35.22, closely followed by group 'FoR' with 35.16. This could be interpreted as the frame of reference having no effect on TLX. The visible path has a slightly lower score of 32.5. Interestingly, when both methods are employed (group 'FoR,VP') the overall TLX score is only 23.55. It seems that on their own the methods have little to no effect, however when combined they reduce the task load index. Histograms for the individual groups (Fig. 5.58) look similar, with most answers in the lower scores, the exception being group 'FoR' with a more uniform spread from the lowest to higher TLX scores. Lastly, the data analysis reveals a relationship between the TLX and SSQ scores (Fig. 5.59). Higher TLX values correspond to higher SSQ total scores. This is possibly because a higher level of discomfort and a stressful (demanding and frustrating) experience go together. More research could be done in the future to pinpoint the exact cause.
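The regression line in such a scatter plot is an ordinary least-squares fit, which can be sketched as follows; the (TLX, SSQ) pairs below are made up for illustration and are not the thesis data.

```python
from statistics import mean

def least_squares(xs, ys):
    """Slope and intercept of the ordinary least-squares line y = a*x + b."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    slope = num / den
    return slope, my - slope * mx

# Hypothetical (TLX, SSQ) pairs:
tlx = [10, 20, 30, 40, 50]
ssq = [12, 18, 27, 35, 43]
a, b = least_squares(tlx, ssq)  # positive slope: higher TLX goes with higher SSQ
```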


(a) FoR (b) FoR,VP

(c) none (d) VP

Figure 5.58: TLX histograms for individual groups


Figure 5.59: Relationship between TLX and SSQ (scatter plot with regression line)

5.2 Qualitative

This section analyses the qualitative feedback collected after the experiment. This was done by giving participants a single sheet of A4 paper and having them write down their thoughts, ideas, suggestions, tips on how to improve, and more. After a participant had finished writing, the experimenter read through the feedback form, asked for clarifications or put additional questions related to the feedback given. The discussion after the experiment was very beneficial, because often participants would write down only a few things even though they had a rich experience and many things to contribute to the feedback, but for some reason did not share it at first. During the discussion participants were prompted to write down their oral feedback. This was the most interesting part of the experiment process. It was surprising that even the 60th participant had unique and original feedback to give. It is certainly true that a lot of the feedback was overlapping and many issues were repeated again and again. However, each participant contributed their own particular thought. Having this many

participants was time consuming, but beneficial for gathering a lot of relevant feedback and many interesting design ideas. Furthermore, this section is especially important to study when the quantitative results are inconclusive. By analyzing the feedback, we can get an insight into flaws in the design of the experiment and find out what the sources of inconsistencies in the data were (consequently leading to large standard deviation values). Lastly, during this part it was crucial to give participants space to share their thoughts. Even when a participant said something obvious or mundane, they were not interrupted and it was communicated to them that their feedback was valuable. In this part of the experiment, I tried my best to practice active listening. An introduction to this form of communication was given at the university during the course PV206 Communication and Soft Skills.

5.2.1 Blurry text

The menu consisted of a regular screen with buttons to launch, exit and configure the application and a simple VR scene with a green environment surrounding the user. There was also an instruction text informing the user to look directly ahead in order to recenter the head-tracking when launching the scene. Similarly, at the beginning of the scene there was a text showing a countdown before the movement through the virtual environment begins. Two participants gave feedback that the text at the start was blurry. One participant also shared that the countdown at the beginning of the experiment was set too close to the viewer and it was difficult for them to read the number clearly. One participant confused the instruction with the (verbally communicated) task in the scene and after their session ended they said that it was bad that they could not look around. Unfortunately, this was not reported in their general feedback. In the future it would be better to state clearly that the purpose of looking directly ahead is only calibration. This could have influenced the results, because some participants might have misunderstood. Interestingly, the issue with blurry text exists only in the CV1 version; in the DK2 version the text is sharp. This was a small oversight on my part, but in my defense none of the pilot testers reported the blurry text during the tests. A little complication arose from this issue, because I needed to launch the


scene in order for participant to adjust the HMD properly. It was not possible to do this in the menu scene, because of the blurry text. And there was a fixed countdown in the scene, thus when adjusting HMD took too long, the scene had to be restarted before the countdown ran out.

5.2.2 Experience - curiosity

At the end of the experiment scene there is a large castle, and the scene fades to black right before the camera enters the castle. Two participants were curious about the castle and what it contains. One even said: "I felt disappointed that I never entered the castle". One of the participants said "I will be curious about the results when ready." even though the true purpose of the experiment was not shared with them.

5.2.3 Experience - first time

There were many participants (7) who said they tried virtual reality for the first time: "I have never experienced VR before. Right now I am amazed. Display quality is poor and I knew this before, but nevertheless the experience felt real enough." Multiple participants (6) said that it was fun. This is surprising, because the experiment was supposed to induce motion sickness: "Thanks, was fun to try." "Overall, it was a positive experience and I can see a future when VR becomes mainstream." The head-tracking provided by VR was a big takeaway for some participants: "...when you turn your head and look back - you basically see what is behind you. This is new experience which cannot be offered by any current technology which is publicly available." "Gyroscope system in VR was used quiet good, I like that I could look around and even behind my chair." "When I looked into virtual reality through the device for the very first time I was amazed how it looks like and how smoothly and naturally it reacts to movements of my head. I was in completely different world."


Some feedback showed that VR is still technology for early adopters and has its flaws. For example, it is difficult for people who wear glasses to enjoy VR: "It was in most ways completely as I expected. VR is for now far away from perfection. For me it was even harder because I have glasses (4 diopters) and I wasn’t able to have them. But in time it will be fixed."

5.2.4 Expectations

Most often, participants (11) expected the experience to be more interactive, and in the feedback they suggested what kind of interaction they would like: "I was expecting more interaction in the experiment, not just watching the scene. For example to be able to control speed of movement, or be able to move freely" "I hoped I could move around on my own." "I would like to move where I want." "If I lean left or right it could be like when I’m on bike." Expectations and broadly applicable questionnaires (with interaction-specific questions) hinted there could have been some interaction: "Questionnaires included sometimes questions about environment interaction. There could be some possibly... using gaze to activate simple events. for example: bird flying off the tree, something with the house (person peeking)" Some participants even suggested making the experience more like a game: "...make challenges for viewer ("look on that bird", "find red cow")... " "...challenges on the way [or several stops during the way, each with a little task to complete before continuing]" "...a little more interaction (avoidable obstacles..." "As player moves along the level, they should always be able to discover something interesting (crack in the cave illuminated on look, sounds for particular places)" In the future, more methods against motion sickness could be evaluated, perhaps a mental task making participants count the animals, remember a color or similar. Often, better graphics were expected: "Overall, I was satisfied with the experiment, even though my expectations were little different (eg. I did expect some kind of interaction with the environment, better graphics..."


Short stops to have a look around were also suggested: "...few stops at journey (at the top of mountain)" "I was expecting more interactions or at least short stops near objects of interest like cows and building (I would like to look inside well)" "I preferred the possibility to explore the surroundings at the beginning [when had time and standing still]. It was harder when I was moving." One of the participants just wanted to help the research they were curious about: "My expectations was help with research and I was curious what kind of research its gonna be." The length of the experiment scene was mentioned: "It was much shorter than I expected." Interestingly, one participant came up with an idea to work with the expectations of participants: "Find a way to surprise the viewer with stimuli to one of their senses that they believe is not being stimulated in coordination with visuals (eg. a fragrant essential oil corresponding to flowers seen in the visuals being passed in front of their face." One of the most thought-provoking pieces of feedback was the following: "Familiarity is more important than realism. A habit of suspending doubts (ei. when watching ’The Simpsons’ makes it easier to do the same in VR with the same visuals)." It shows that virtual reality experiences can take advantage of existing intellectual properties, which are often stylized, to sell the experience. This is beneficial for performance as well, since hardware requirements for photorealistic visuals are often higher than for cartoon or otherwise stylized experiences. Additionally, by having stylized human characters one can avoid the so-called uncanny valley [54]. Simply put, the more closely a character depicts a real person, the more sensitive people are towards minor deviations from a real person. This is why robots closely resembling people can feel "creepy". Simple 3D characters do not suffer from this effect, because we understand that they are just an abstraction or that they are stylized.

5.2.5 Immersion and virtual body

Two participants shared that they would welcome virtual body parts to be part of the experience:


"It would be nice to see some parts of my virtual body like hands, legs,..." "I was missing hand/body." Having no visual representation of themselves in the virtual environment could have affected the SSQ results considerably, because these virtual body parts could have worked as a natural frame of reference. Similarly, without a virtual body the sense of immersion and presence was probably affected to a degree. Participants reported forgetting about the real world: "I had to remind myself from time to time that I am actually sitting on the office chair in the lab not flying above the surface somewhere in the countryside." The design of the head-mounted display can play a role in immersion too: "In the beginning I was not able to be fully concentrated on the virtual reality because I could see a piece of a real world from the bottom of the glasses, but a bit later I was completely concentrated on the VR."

5.2.6 Experiment process

A total of 5 participants shared in their feedback that the experiment process was well prepared: "The experiment was conducted very professionally." "The personal acted professionally and the experiment itself was very well prepared." That being said, there were a couple of suggestions to improve it. One was to include a link to the registration calendar right in the confirmation email. This feedback was incorporated immediately, and the confirmation email sent to all subsequent participants included a link to the calendar. "Link calendar in confirmation mail, so I can see time and date when to come." Two participants felt that a warning about potential discomfort during the experiment could have been part of the instructions before the experiment: "And a warning about potential dizziness could be useful :)" Even though I agree with the point being made, I chose not to convey this information. The reason is to not influence participants by telling them they might feel dizzy. A simple statement that they might, together with the common misconception that people always feel sick after VR sessions, could be the very reason participants start to feel dizzy. One participant asked me whether "someone threw up, yet?". One of the goals of this thesis is to evaluate motion sickness; if it were focusing on another aspect of VR, a warning about potential discomfort would be included. To make sure participants felt safe and did not feel forced into anything (e.g. to endure severe nausea) during the experiment, the consent form was formulated in such a manner that both sides could interrupt the experiment at any moment without any penalization.

One of the issues with the experiment was too informal communication of the task to the participant. The task was to observe the scene, nothing more. This information was conveyed to the participant during the 15 second countdown, while they were already wearing the HMD. And since this was the first time many of them wore an HMD, their attention to instructions was likely diminished by a sense of awe: being overwhelmed by the experience and the brand-new feeling of immersion and seamless head-tracking, they were not paying full attention to what the experimenter was saying: "It would have helped me if there were more instructions in the session or if experiment was better and/or clearer explained by moderator." "I would like to have previous knowledge of what I will be doing in experiment." "Introduction of the virtual environment can be improved so user have time and space to get used to it." One of the participants wrote the following, and it clearly shows they did not pay attention: "I could use more information about the experiment. For example: What is going to happen after I put on VR glasses. How long does it take [this information was given during the countdown!] How I might feel (sick) As well in case of dizziness I could use a glass of water. Or some fresh air."

This feedback shows how important it is for the participants to feel comfortable during testing and how important it is to be warm towards them: "For next it would be nice if you talk with your participants little bit more, eg. show the room before experiment." One participant gave valid feedback about eating before the experiment. I do not know how I missed this. The rectification is quite simple - put a single sentence in the information-for-participants box next to the registration calendar. "Warn about not eating before the experiment. I have a feeling that I had a greater nausea because I was eating just before the experiment."


The timing of filling in the questionnaires was mentioned too: "Tell to participants there is questionnaire which should be completed right after the VR experience. I had quite long break which could distort answers in first questionnaire [subj. had toilet break after VR before SSQ]" Lastly, there was one participant who took the initial text in the menu screen as an instruction for the whole experiment: "While having oculus I had feeling like turning around and watch scenery - instruction to look just in front of you is lowering experience of the game." Again, they did not pay attention to the verbally conveyed instructions during the countdown. In the future, I would improve the experiment by placing clearly written instructions as text in the VR scene, and participants would be prompted to read and understand them before the experiment scene begins. Alternatively, they could be given a paper with instructions to read before putting on the HMD. This would improve the experiment process and make it more understandable for participants.

5.2.7 Technical issues

Testing with participants did not go without minor technical issues. One issue was stuttering of the head-tracking. This was caused by some other process running on the workstation and was solved easily by restarting the machine. The problem was encountered during the first week of testing, and the smoothness of the head-tracking was verified before each experiment from then on. Still, 4 participants encountered a sudden lag in the head-tracking. In 3 cases this was caused by participants rotating their body and obscuring the sensor. Some participants were very curious and looked all around themselves; there was no way to prevent this without somehow restraining them. Tracking technology will improve in the future; meanwhile, this can be reduced by calibration and proper sensor placement. "In the first part before first hill, two unnatural transitions occurred (like lags in games)" "When I looked back and turned on the chair (180 degrees), the view switched and I was looking forward instead of backwards [lost head-tracking]"


5.2.8 Headset adjustment

Two participants shared that what they saw was blurry: "The virtual environment was slightly blurred so the experience was not that satisfying." This was an issue with every participant. There is a sweet spot when wearing the HMD; it needs to be adjusted properly, otherwise vision is blurry. Sometimes, even though the device was initially adjusted properly, it can slip a bit, or the user might exhale through their nose and fog the lenses. The poor resolution of the display contributes to the user’s uncertainty about whether they are wearing the device properly. The process during the experiment was to show the participant the device, point to the adjustable straps and explain how they can be adjusted, then let the participant put on the device by themselves. After that, they were asked whether they could see certain objects in the scene properly and were helped to adjust the straps. The lenses were cleaned after every participant with an appropriate cloth.

5.2.9 Laboratory

In their feedback forms participants said that they liked the laboratory. This particular feedback again shows the common misconception about VR: "This lab is full of nice devices so I hope I will not vomit here."

5.2.10 Movement

Two insightful pieces of feedback were about movement. The first shows that users want to be able to explore their environment at their own pace: "Quick movement, was not able to look for objects" One participant touched on the topic of conflict between the visual and vestibular systems: "I think that was interesting experience. Especially when my body was sitting on chair, but brain was receiving information about moving."

5.2.11 Involuntary body movement

One of the unexpected effects of the experiment was involuntary body movement of the participants. It occurred especially during the part when the track was tilting from side to side. Participants were sitting on their chair and tilting their heads, sometimes with their torso too. Interestingly, some of them were tilting their head to counter the tilting of the virtual camera in order to keep their horizon level, and some of them tilted their head according to the tilt of the track. This was a completely involuntary movement, as one of the participants said: "Twist caused my head to tilt." This reminded me of children playing a racing videogame, leaning their body, head and controller to the side while steering in the game. I am not sure why this was happening in the experiment. Below is feedback from several participants. Various body parts were mentioned; sometimes participants were moving voluntarily to balance themselves: "I had a tendency of leaning back while going uphill and leaning forward when riding down." "When the vehicle was tilting I realized that I was tilting with it." "Movement of the vehicle forced me to react (move side to side)" "I was even reacting to the virtual world in a real reactions in a real world (like going to the right or left, sitting differently on the chair or even going closer or further away)." "I found myself turning my head around a lot just in order to compensate for the ’weird’ angle I was faced with." "When movement twisted from left to right I tried to balance it with moving my torso." "I felt need to move my shoulders slightly to balance vision rotations." "My head was shaking and I was trying to stabilize with my legs to not to fall."

5.2.12 Oculus device

Several participants reported the Oculus device being comfortable to wear: "Wearing equipment on head was comfortable, it was not heavy." "The headset was pleasant on head and haven’t made me any discomfort." However, some of the participants observed that the cable was distracting them from the experience. This will improve in the future with wireless connections and stand-alone VR headsets. "One thing that disturbed me at one moment was that my hand unintentionally touched a cable from headset." "Movements were smooth except while turning head too much, I noticed that device has cables." "During the experiment I held the cable to prevent damage to it and it limited my head movement."


One of the participants reported fogging: "The device could be more comfortable. It does not fit on my face perfectly. At half of experiment the oculus started fogging." Even small things can influence the whole experiment, for example: "I think my sense of reality was disturbed by the headset borders (black thing around my face)." "Itchy nose." "I could see the raster of display." The novel experience of the head-tracking was mentioned too: "I liked low latency head tracking."

5.2.13 Questionnaires

The questionnaires were a bit problematic. The first issue was that they were written in English, and many participants, even though reasonably advanced speakers, had problems with the terms used. Frequently, they used a dictionary to translate specific English words to Czech. Some of them asked what the terms mean or whether they could use a dictionary; some pulled out their smartphones and proceeded to translate without asking. This was very likely to influence the results. Even though standard and verified questionnaires were used, participants were translating questions using different means to do so, and each participant could have understood a question in their individual way. These are some of the words which participants did not understand without a dictionary: compelling, proficient, fatigue, nausea, dizziness, and the phrase "fullness of the head". "If there was a better explanation of symptoms in SSQ questionnaire, it would have helped me to consider if I experienced them or not." "SSQ - contains very similar questions." Frequently, participants were baffled by one of the questions in the SSQ, and two of them gave similar feedback; here is one of them: "What the heck is Fullness of the head?" The second major issue was caused by using general questionnaires. Some of them include questions which do not apply in this case. For example, there were a few questions about interaction with the virtual environment; however, participants could only look around during the experiment. This left participants uncertain whether they missed something they should have been doing in the experiment. "If there was a possibility to interact with environment within the session I haven’t found them. For this reason it was difficult to answer because I didn’t know if I missed something or not." "I was a bit confused whether I might missed some instructions in the VR, as I did not really feel I was given any tasks and/or options to actively act in the VR. Therefore, all my answers relate to the task watch in front and don’t speak" "I didn’t have a feeling that I should do something, but from the questionnaire I had a feeling that I should have done something while inside VR." This feedback demonstrates the importance of clear experiment instructions. In the future I would make sure to have instructions in text form instead of only telling them in person, once. "Contradictions between VR part and questionnaires - they quite often asked about task, goals... and I didn’t know what was task or goal." This ambiguity left space for interpretation: "I understand interactions as the movements of the head." "I had to think how it was meant because I did not control the vehicle." This could have influenced the results to some degree. Overall, using the questionnaires as they were, with questions about interactions, was confusing for participants. "Some of the questions in the last questionnaire [presence q] seem that don’t fit the exact experiment (no controllers for example)." "Some questions about interactions seemed out of place because the ride didn’t require any interaction." "I had difficulties with filling up questionnaires, because there were questions which seemed like for different test eg. How your actions led to change in environment, Mental Demand or Effort in TLX questionnaire." Only two participants understood that these questionnaires are for general use: "Some of the questions [TLX - performance, effort] were unnecessary - but I guess there were just standard questions used in virtual reality experiments."
"Presence questionnaire included questions regarding the controls. But that’s just a detail and completely understood that the questionnaire has broader use." A great point made by one of the participants was to ask what exactly in the experiment caused discomfort, and when: "Also I would expect to have more questions about my experience and feelings during the ride (eg. How much nausea I felt, When, How much it influenced me)." In the future I would include this as a separate custom questionnaire, perhaps in the form of a map of the virtual scene, asking about specific parts of the ride and how they felt. The demographics questionnaire had a flaw in missing an option for having no academic title yet. "In questionnaire missed my option that I have no title, still student" Lastly, a small UX tweak was requested by one of the participants, which would make filling in the questionnaire more pleasant: "NASA TLX questionnaire - add hover effect to choosing an option."

5.2.14 Graphics

A lot of the feedback was related to the visuals of the virtual scene. Videogames nowadays have reached photorealistic graphics, and from the feedback it is obvious that participants expected better graphics overall. "The quality of graphics is hopefully better in real VR games." "The graphic part of VR seems to be basic, like really basic - it reminds me of 80s or 90s movies about VR, like Johny Mnemonic or so..." "The whole thing looks like a game (kind of old game) it does not seem real to me at all." "Graphics quality really surprised me I was expecting much better graphics through the VR headset." Participants would welcome more details in the environment. Some parts of the landscape were a bit bare, mostly because of performance concerns. Apart from grass moving in the wind and a few animal animations, there was not much happening in the virtual scene. Participants shared that they would like more "life" in the scene: "More trees, rocks, plants." "Needs more detail environment - brushes, clouds, antic ruble." "More cows, more life - people living in the houses." "...maybe some movement (birds, tree branches moving in the wind)." "It was little bit boring. Other events during movement would be great (for example group of dinosaurs crossing the road in front of me)." "I would add some life into the environment, like a farm with animals." "...a deer jumping unexpectedly in front..." This sense of the virtual environment being empty could have been amplified by the initial scene being full of interesting objects, while in the rest of the environment they were only few and far between. As pointed out by one of the participants: "Nothing to watch during the experiment. At the beginning there was cottage, some grass and animal, after first hill it was quite empty. Maybe some more animals like birds or squirrel."


"I was expecting more objects because they were at the start, but not afterwards" Some suggested improvements to make the scene more interesting and less boring: "Your VR environment was actually pretty cool. But I hoped that it will be more fun there. It was just hills and rocks. But downhills were fun. More interesting stuff like Eiffel tower or other landmarks - Niagara falls, etc." At the beginning there was a horse running alongside the viewer for a short period of time. Plenty of participants missed it, because it was slightly behind the vehicle. Reactions of those who saw it were positive: "Great horse!" Interestingly, one of the participants shared the following: "At the beginning I liked the horse and its running effect. I though that the horse moved because experimenter clicked mouse, nevertheless he didn’t start it and the sound wasn’t a part of the testing." This shows how important it is to think about every detail of the experiment process, and how much even a small thing, such as what the experimenter does during the session, can influence the participant. Participants liked the initial relaxing scene in general: "I liked the butterfly at the beginning." A proper lighting setting was overlooked for one of the horse models; this resulted in 3 participants complaining it looked 2D: "Running animal at the beginning looked to be 2D only." The look of the virtual terrain was frequently mentioned in the feedback. In the future I would either tweak the resolution of the terrain in the editor or use 3rd party tools to achieve better visuals: "Terrain edges are too sharp, looks like 90s game, better textures - ground only." "I would add better textures to the terrain." "The terrain had "rough edges" from time to time. Like an old school 3D game." I have learned that level design is an important tool to hide technical limitations. There is room for improvement in the future in obscuring the ends of the virtual world from the viewer.
This is because there are high peaks from which it is possible to see where the terrain ends: "Sometimes it was obvious that the world has its limits. That there is the point beyond with there is nothing (no texture or objects)." "Please add sun and backgrounds, and make the end of the map not obvious." There was a lot of specific feedback related to individual objects in the virtual environment: "...it could be a lot better (for example trees)"


"Improve the graphics of the virtual environment, I guess it might get a lot cooler! (the whole thing needs more butterflies, better grass, trees seem a bit flat." "The blue sky without any texture or clouds is very uncomfortable to look at." "Orange pixels on rock texture in canyon." "Colors were not real (grass was too green)." "The environment looked quite nice but I was hoping to have a closer look on the castle." "Light and shadows - specular on cockpit glass, shadows in environment, reflections on glass" "Night environment and lights on the circles to enlighten its surroundings, street lamps" In the future the virtual environment could be greatly improved, for example with a color gradient for the sky, clouds, and improved lighting and textures. Some feedback was in direct contradiction: "Rocks looks realistic, but grass not." "Rocks are way too far from reality." An optimization for trees was turned on to save performance; 4 participants shared that they noticed it: "The VR environment seemed really artificial with trees just appearing out of nowhere, it distracted me little." One of the participants thought that a tree formed a wrong angle with the ground (not perpendicular): "Trees in the environment were sometimes in a bad orientation." There is a part in the scene where the vehicle leans on the side of a hill. While traveling along the slope there are trees nearby, and from the point of view of the viewer these trees form an unnatural angle with the ground. What I believe is happening is that the information from the vestibular system is telling the viewer they are leveled with the horizon, while in the virtual scene they are not. A very original piece of feedback was related to the nonsensical movement through the virtual environment. In the real world, paths curve along the hills and choose the optimal route through the environment. This could be one of the major changes to the level design in the future: "Who would have chosen a path like this?! All these hills! Why? Also impossible in the real world and also confusing." Similarly, another participant was missing roads in the virtual environment: "For more real experience I would add more roads (or at least to the castle). When I was flying along the castle I missed something to lead straight into the castle (like a bridge) so I don’t get mislead when I just pass along the castle gate." Sometimes it might be a bit harder to pinpoint whether it is the visuals or the actual hardware resolution, but plenty (5) of participants would welcome a better overall resolution: "Maybe higher resolution would be more beneficial." "The scenery was comfortable except the resolution which was very low bad." A good suggestion was given about distant objects. Firstly, a low setting of fog or a similar effect should have been used: "Object far away should not be seen sharp." Secondly, to design the scene in a fashion which demonstrates the sense of depth and enables the viewer to observe all 3 dimensions of objects: "Not very good 3D impression of distant objects - closer are better, maybe bring some objects closer to the spectator." The pleasant visuals of the scene were appreciated (4 participants): "Environment wasn’t distracting me from the ride itself - it was nice to look around when I was on the top of some hill, but otherwise I was mostly focusing on the ride itself other that looking around too much." On the technical side: "Not any technical bugs or lags during animations, all ok." An excellent suggestion for the level design was to make the hill feel like a massive mountain by having the peak covered in snow and hiding the valley under clouds: "When on hills - above clouds (or fog), so you can’t see landscape under you -> mystery, high mountain - snow." This feedback sums it up well and shows where there is room for improvement: "As an improvement for better immersion I would suggest -> improved graphics, level of details and level design and sound!"
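The suggestion that objects far away should not be seen sharp corresponds to classic distance fog. As a minimal illustration of the idea (the function name and the parameter values are mine, not taken from the experiment's actual code), a linear fog blend factor can be computed from the distance to the camera:

```python
def linear_fog_factor(distance, fog_start, fog_end):
    """Blend factor for distance fog: 0.0 = fully sharp, 1.0 = fully
    fogged, linear in between (classic fixed-function linear fog)."""
    if fog_end <= fog_start:
        raise ValueError("fog_end must be greater than fog_start")
    t = (distance - fog_start) / (fog_end - fog_start)
    # Clamp to [0, 1] so near objects stay sharp and far ones saturate.
    return min(1.0, max(0.0, t))
```

A renderer would then mix the object's color with the fog color by this factor, so distant scenery gently loses sharpness instead of standing out against the horizon.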

5.2.15 Sound

A total of 10 participants were missing sound, music or other audio effects in the experiment. Using sound would be beneficial not only to involve another sense, but also to mask the sounds from the laboratory: "There could be some ambient music or sounds, noise from lab was distracting at the start." "The absence of sound was a big distraction from the experience." The use of sound could improve immersion and presence: "Some sounds or interaction with the landscape would make the experience much more realistic. Also lack of sound caused more detachment from the environment."


Here are various sources of sound or audio effects mentioned by participants: sound of trees, sound of wind when going fast, ambient sounds, wind running through leaves, birds, sound of the vehicle and the horse running, additional sounds of the movement (rustling), sounds of nature, sound of rails, water, animal sounds in the distance, scroop of the cart, sound around gates, sound of wind on top of a mountain, surround sound (3D audio corresponding to the images) and insect noises. One participant shared they were missing music in the experiment: "Music was missing, like a movie soundtrack or instrumental." A very creative suggestion was to simulate the Doppler effect [55] in the scene: "Stationary bee with the Doppler effect." The vehicle could move towards and away from stationary sources of sound; it would only require a slight change in sound pitch. There is great potential for realistic sounds in digital games and in virtual reality experiences, for example physically-based sound [56] or realistic sound propagation [57] through the environment. Incorporating this into the virtual scene could be part of the future work. Adding sounds would be a great improvement, and a large number of participants would welcome it.
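The pitch change needed for the "stationary bee" suggestion follows directly from the classical Doppler formula for a stationary source and a moving listener. A minimal sketch (the function name and parameters are illustrative, not part of the experiment's implementation):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def doppler_pitch(base_freq_hz, approach_speed_ms):
    """Perceived frequency for a listener moving toward (positive speed)
    or away from (negative speed) a stationary sound source."""
    return base_freq_hz * (SPEED_OF_SOUND + approach_speed_ms) / SPEED_OF_SOUND
```

So a bee buzzing at 220 Hz would sound about 10% higher while the vehicle approaches it at 34.3 m/s, then drop below 220 Hz once the vehicle passes and recedes - a small per-frame pitch adjustment on the audio source.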

5.2.16 FoR, VP group specific feedback

This section presents feedback specific to the first group of participants. These participants experienced the scene with both the frame of reference and visible path methods turned on. Four participants mentioned the tip of the vehicle obstructing the view: "The tip of the vehicle hinders when I want to see what’s in front of it - especially closely before ’falling’ from the top of the hill." "Vehicle is too high - had to lean up to see in front." The angle formed between the hill and the vehicle was reported to be too inclined: "When the vehicle is on the top of the hill (the one with rocks in the end of the session) a view to the side (left or right) doesn’t seem real (I think it’s too inclined)." This is probably a false assumption; however, I believe there is room for improvement in the future. This issue was probably caused by the virtual camera being fixed to the base of the vehicle. When the vehicle rotates (in the ’pitch’ axis), the base of the virtual camera rotates too. When this happens in real life, we compensate by leaning our body or neck, meaning our horizon stays leveled. To make the experience more realistic, a script rotating the base of the virtual camera would be needed.

When asked how they would describe their movement through the virtual environment, participants frequently (3) responded that it felt like a roller coaster: "The movement simulates a moving cart on roller coaster." For some (1) it felt like flying: "Vehicle is flying." Two participants would appreciate a physical cockpit which would map the movements of the virtual one: "And an actual real life cockpit would be great to enhance the experience :-D" One of the participants reported they did not understand the purpose of the red crosshair: "Red dot didn’t feel necessary." Probably because there was no gaze interaction or control mapped to this radial. Nevertheless, one of the participants reported the following: "By concentrating on the way I mean to aim the red point to circles to choose the way." It is interesting that the participant believed to be the one in control of the movement even though it was entirely scripted. As expected, the visible path (waypoints in the form of gates/rings) helped some (2) participants to anticipate the movements of the vehicle: "Thanks to black rings I already knew the track and can prepare for it." However, an animation on the waypoints, with the inner circle scaling down repeatedly, had an issue. Sometimes the animation would finish just as a participant was passing through the gate, meaning the small circle would pass through their head. Two participants reported this in their feedback: "Animation inside rings sometimes flash right before the eyes." One of the participants correctly assumed the meaning of the colors of the waypoints: "There were obvious reasons for different colors of rings - and that’s the speed of vehicle passing through - red ones faster, blue ones were slower."
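The counter-rotation script suggested here could ease the camera base toward the inverse of the vehicle's pitch each frame. The following is a rough sketch of that idea in plain Python, not the thesis implementation; the function name, the smoothing factor, and the per-frame loop are all assumptions for the example:

```python
def level_camera_pitch(vehicle_pitch_deg, current_offset_deg, smoothing=0.1):
    """Move the camera-base pitch offset toward -vehicle_pitch so the
    viewer's horizon stays level; easing avoids an instant snap."""
    target_offset_deg = -vehicle_pitch_deg
    return current_offset_deg + smoothing * (target_offset_deg - current_offset_deg)

# Called once per frame while the vehicle holds a 30-degree climb,
# the offset gradually converges toward -30 degrees:
offset = 0.0
for _ in range(100):
    offset = level_camera_pitch(30.0, offset)
```

In an engine such as Unity, the same logic would run in a per-frame update and apply the offset to the camera rig's local rotation.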
A large number of participants in all groups gave feedback about the movement. There were 4 participants in this group who mentioned it in the feedback: "It felt a bit weird that the movement on ride wasn’t completely smooth - each time the vehicle went through a circle, it jerked slightly (I guess it was aligning itself)"


The most nauseating segment of the ride was (according to the feedback) the segment where the vehicle tilts from left to right; two participants mentioned it in the feedback: "The last part of the ride when the vehicle rocked from side to side was quite nauseating for me - but it didn’t cause me any long term discomfort after the session was over." "When the vehicle started rotating - that was a nightmare!" Surprisingly, one of the participants reported giving themselves little tasks along the way. This could have caused some amount of cognitive load for this particular participant, influencing the results to some degree: "Had to deal during the test to decide between ’being a sheep’ and concentrate on the way in circles or to try to remember things around (like the circles want all the same, etc)." And added: "I think can be useful not only for IT but also for psychological tests." Six participants shared in the feedback that they experienced some level of discomfort: "Nausea through some parts - I saw that I was moving but I didn’t move at all." "I definitely feel very strong nausea + my stomach was swinging." "Definitely very uncomfortable but interesting experience." "The experience in VR was so real I felt sick during the ride. I don’t like roller coasters in real life either, but this VR roller coaster was so real. Additionaly - my brain realized that eyes can see moving land, but my feet were standing - this was very strange feeling. I would say that it was too real for my brain. If I were standing and moving on my own, it could be better." Two valuable things learned from this feedback are that discomfort caused by motion sickness can grow, and that some participants used their body movements to some degree to feel better during the experiment. This could have influenced the results, and addressing it would be an improvement for the future: for example, instructing all participants to sit the same way, or studying different body postures and their effect on motion sickness.
"It gets worse by time and I need to hold my chair to be able to finish the task." One additional piece of feedback highlights the more nauseating parts of the scene: "Otherwise I didn’t experience any heavy discomfort except a bit stronger nausea at the end (and during faster rides)." One participant reported it was a "Pleasant ride". Another reported: "10min after watching started to have migraine". This might or might not have been related to the experiment; it could be worth pursuing as an additional study for students of medicine or similar fields.

5.2.17 FoR group specific feedback

The following section discusses feedback given specifically by the second group. These participants had only the frame of reference method enabled. The second group used various words to describe their movement through the virtual environment, for example hovering, carousel, sailing, moving car, flying or "wondering around in a little spaceship". However, the most common was roller coaster, which was mentioned by 4 participants: "It feels like ride on roller coaster." "The movement of the vehicle simulated more a roller-coaster ride than a trip through nature (roller coaster movement triggers the instinct of raising hands and exhaling a shout..)" Participants in this group talked about speed; for one it was too fast: "Vehicle moved too much quickly and I could not focus and enjoy the environment so much." Another participant thought it could be more realistic: "Make the speed more realistic - crawling up the hill - too fast, dangling on top of the hill - too slow" And one piece of feedback was a bit more creative, discussing the possibility of controlling the speed with the viewer’s center of gravity: "Able to speed up by moving your center of gravity - leaning forward, backward." I thought this a great idea worth trying in future research. Just as with the first group, the cockpit was obstructing the view for 2 participants: "High front tip of the vehicle often blocked the view." Two participants mentioned that the vehicle made very sharp turns, and one of them had a great explanation of why this is a problem: "Vehicle should not make such a sharp turns, because it has momentum and it is impossible to change speed and direction so suddenly in the real world." One of the participants had experience with 360-degree video: "I have already tried VR, yet it was totally different... I was in Spain and I could visit several places. These places were recorded and the resolution was high. I could truly feel that I was there.
However, this scene was too much artificial and I was realizing that I am not apart of the environment." This feedback shows it would be better to improve the graphics; people expect the quality they have seen elsewhere. Two participants had an idea how to improve the experience. They would use the red crosshair to steer the vehicle: "Steering in the direction I’m looking (in the direction of the red dot)" "I was slightly expecting that I would be able to move the cart with looking at objects. When looking at some point for too long the cart would turn direction. This idea came to me because of the red dot I was seeing." One of the participants even believed they were the one steering the vehicle: "Controlling of the direction was difficult. It took a while, when my eyes were following the red dot." Four of the participants in this group were a bit surprised that the vehicle moved smoothly even over terrain bumps and dips: "I would appreciate it if the vehicle respected bumps and dips. It was somehow floating over the ground sometimes which made it hard for my brain to absorb" "Climbing the boulders felt somewhat unnatural, it seemed like there should be more shaking, etc. It was too smooth." Another piece of feedback related to the movement was the following: "When it started to climb the slope it started to be unrealistic - went over big stones and other uneven terrain and terrain was too steep to climb so smoothly." Several (9) participants shared how they felt in the feedback: "It made me kind of dizzy." "Feeling stomach when moving up and down, on curves, tilting." "I had a headache, stomachache." "I would appreciate if there would be less falling down. Because it made me little bit sick and uncomfortable and I could not really control it." "You get dizzy a lot when the vehicle goes straight downhill and when it wiggles a lot from side to side." "From first downhill I experienced quite big dizziness almost all the time (not just during downhills). At some moments I had to catch the table because I had feeling like I would black out soon!"
Two participants even thought about interrupting the test: "I was thinking to stop the test but it has ended earlier then I said it." It seems that for this group it was the downhills and tilting which caused the most discomfort. One participant had an idea that by being able to predict the movement (e.g. by seeing a road), they would feel less sick, which is exactly the purpose of the visible path method: "If I could predict the way - for example if there was a road - it would not make me feel so sick."

5.2.18 VP group specific feedback

The third group of participants experienced the virtual scene with only the visible path method enabled. A total of 10 participants from this group shared in their feedback that they experienced some level of discomfort: "I felt little dizzy during the first drop. But on the second drop I was prepared. I still felt dizziness but I was prepared for it and handled it better." "Deeper into the experiment I felt a little dizzy, but at the end it was very bearable." A revealing piece of feedback came from one of the participants: "I personally don’t like heights -> was a bit stressful." I did not realize that heights might have an effect on people who do not like or are afraid of them. The part where the vehicle tilts from side to side was mentioned 4 times: "When vehicle tilted to sides right->left it was causing high level of nausea." Surprisingly, one participant felt it made them more integrated into the experience: "Moving on sides and tilting is making user more integrated into the game." One participant reported having a headache: "A little headache." And one felt like removing the headset: "Felt like removing the goggles from my head right before achieving the peak of the first hill because I knew what is going to come (the fall)." Two participants shared their view on the waypoints (the circles): "I must admit, that if there was no circles, which they indicate changing a way of moving and speed, that will be very difficult to me. But they were, so I can imagine which way and how fast I will go." "The idea and execution of ’passing circles’ is great - it helped empowering the feeling of movement and its speed in the environment." Six participants from this group gave feedback related to the movement of the vehicle. First, there was feedback that the physics of the vehicle was unrealistic: "Changing of pace (where to go slower and where faster) to feel more according to the real-life physics.
For example - there were some small slopes where I went significantly slower than to high hills with steeper slope)." The rest was about the changes in direction and speed being too sudden or sharp; participants would like a smoother ride: "I got really irritated by the lack of smooth animations and movement. The abrupt changes of direction and speed felt artificial and didn’t allow me to enjoy the scenery." "It would be a much better experience if there were gradual speed changes and the trajectory was curved (not a polygon) using splines." Two participants from this group reported that the angle formed between the hill and the vehicle was too inclined: "When i was ’climbing’ to the top of the hill, I was in very strange angle to the ground." "When went down the hill I would feel more comfortable looking to the different angle than right underneath myself ie. Would lift my head up to look onto the opposite hill. Angle of looking should not be aligned with vehicle but with the horizon, like in real life." Having some sort of control over the movement was requested by two participants: "I could use some steering mechanism. The whole time I was dependent on the movement of the vehicle." The second expected to feel less dizzy if they were in control of the movement: "Maybe the experiment would be better if I could control my movement - in terms of dizziness." One of the participants was discouraged by the vehicle floating over the terrain and ignoring the bumps: "felt more like floating over the terrain since the movement wasn’t following it completely (bumps etc)." Interestingly, one of the participants reported anxiety from being surrounded by the waypoints (circles): "In the descending movement try to use less circles -> would maybe feel less anxiety (from being surrouned by them)." The basic idea behind the waypoints is that they show the way; this way the viewer can predict the movement of the vehicle and, in effect, feel less discomfort. However, this feedback shows that there might be another aspect to the waypoints.
The viewer might focus on them instead of the environment: "For more vibrant environment there could be added more houses or animals - but maybe it was my fault focusing so much on the rings and not environment." Apart from hovering or flying, the most common (mentioned 4 times) description of the movement was roller coaster: "I really did feel like I was on a roller coaster (the feeling in stomach, turning my head)." Participants reported having ’real feelings’ from the experience: "Surprisingly real feelings from VR." "The feeling of my surrounding, being really there, was very strong." "I was pleasantly surprised how realistic the movement (changing speeds and directions) felt, as I could literally feel the acceleration." The head-tracking in VR was appreciated by one of the participants: "First time the experiment started I immediately noticed how much well responsive the head motion was." Personally, I was really happy to receive the following feedback from one of the participants of this group: "It could definitely be more longer. Nonetheless I enjoyed it and I thank you for the opportunity to take part in the experiment."

5.2.19 Control group specific feedback

The last group of participants experienced the virtual scene without any of the methods enabled (group ’none’). There were 14 mentions of some level of discomfort in the feedback, varying from general ’discomfort’ to ’feeling sick’: "I was feeling discomfort while I was being moved quickly down the hill in the environment or was swinged from left to right." A very interesting piece of feedback came from a participant who experienced more discomfort when slowing down than when accelerating or at normal speed: "Slowing down caused me discomfort more than acceleration/normal speed." "Right before the end I wanted to stop the test because I felt sick (feeling like throwing up)." This feedback again shows the involuntary body movement: "The part of the simulation with inclining towards left and right was uncomfortable for me. And it made me tilt my head accordingly." As already mentioned earlier, some participants are afraid of heights: "Very disturbing and VERY! unpleasant. I hate heights and I’m scared of heights and this was full of high view points. It was also way too fast. Sometimes I felt like I was falling -> made me scared." This feedback illustrates how the track was laid out, starting with slow linear motion without any acceleration and slowly progressing toward more dynamic movement: "The movement was pleasant at first, first fall was exciting as well but as it became more drastic like camera rotation to the sides I really felt like this is making me sick." One of the participants mentioned a little trick they used to reduce the dizziness (this could have influenced the results): "I felt a little dizzy when I was moving up and down but I was able to control it a little by looking on the other side". A good point was made to improve the experiment process by using a chair with armrests: "I had feeling that I fall of the chair so I think it will be good to give to participant something to hold (like arm-rests)." Predicting the movement could, according to one of the participants, help them feel less nauseous: "Because I was somehow unable to predict the movement of camera it made me nauseous and I would not search this kind of experience." Similarly, the unclear nature of the movement could have had an effect: "Whole time I didn’t know If I am walking/running/flying or what, that was the most annoying and caused me most of nausea, because I couldn’t precompute what’s coming next." Just as in the previous groups, bumps on the terrain were mentioned 2 times in relation to the movement: "The movement sometimes copied the terrain (such as running) and sometimes just ignored many bumps (such as flying), as it was not totally dependent on the terrain." And similarly, the changes of speed and the need to smooth out the movement were reported by 4 of the participants: "I would ’smooth’ out the movement to make it more predictable." Immersion would be better, according to one of the participants, if the movement were more natural: "From what I concluded a bit more natural movement through environment would end up as more immersive." When asked what the movement felt like, participants used various words to describe it: flying, riding an invisible rollercoaster, floating in water, like on a scooter from a videogame, being on the ride. Interestingly, two participants mention falling down: "Climbing mad me feel like I was just about to fall on my back." "If I wouldn’t be sitting down I would fall down."
Two participants reported trouble with the horizon: "I actually felt like I should always be aligned with ground." "And it was a little bit confusing, because I was trying to locate the horizon all the time. There was a moment when I looked at the trees and according to the direction of their placement had to adjust my own sense of ’up and down’." The trees looked like they were at a wrong angle to the ground; however, the participant understood they were correct.


One of the participants would like to improve the sense of presence by having a body inside the virtual scene: "Maybe it would be advisable to create a roller coaster with a body in a cabin to have more ’present sense’, rather than flying around with no body." Another asked to "Please add vehicle." And it is clear that the vehicle was missing in the scene, because one of the participants experienced the following: "When flying low close above terrain I though I would touch/hit it with my legs. This caused me pins and needles tingling." Surprisingly, only one participant mentioned anything about wanting to have an input on the direction of where they would go in the virtual scene. I wonder why that might be; of course, it could be mere coincidence. "Sometimes I was being transported in direction different from my personal intentions."

6 Conclusion

The conclusion consists of two sections. The discussion section reviews the thesis and its objectives, implementation, methodology and results. Potential future work is summarized in the second section.

6.1 Discussion

An experiment in virtual reality was conducted; it was designed as a passive seated experience. The purpose of the experiment was to evaluate whether two selected methods help to alleviate motion sickness.1 The selected methods were frame of reference (FoR) and visible path (VP). The frame of reference method (Subsec. 2.3.1) means adding a visual frame of reference (a rest frame) to the view. Visible path (Subsec. 2.3.2) should help users predict the movement by placing visible waypoints along the track. Several other methods were introduced, such as manipulating field of view or speed, using different types of locomotion, using pharmaceutics, adapting, and others. The terms virtual reality and motion sickness were described (Sec. 2.1). The testing scene for the experiment was designed to gradually introduce more diverse movement and acceleration. The first 15 seconds are without any movement, followed by movement in a straight line without any acceleration. In the following parts there is acceleration with linear movement, movement changing directions, then both acceleration and changing movement, and finally rotation. The scene resembles countryside with farms, forests and rocky hills. Performance optimizations were an important aspect of development and influenced the design of the scene. The size of the scene was tuned to accommodate a 5-minute experiment. 3D models had to be properly scaled for VR. A system for moving objects along waypoints was implemented. It was employed for the virtual camera as well as for the butterfly and horse animations. An issue with draw distance and billboarding was circumvented by optimizing tree assets and manually placing objects in the scene.2 Several questionnaires were used to collect data: the Simulator Sickness Questionnaire (SSQ) to measure the level of discomfort, a personal information questionnaire for demographic data, the Presence Questionnaire (PQ) to evaluate several other aspects, the Task Load Index (TLX) to measure cognitive load, and general feedback for qualitative data. Participants were acquired through groups on social networks. A public calendar was created with available testing session times and important information for participants. Participants were randomly assigned to a testing group using a random number generator. Four testing groups were formed, each consisting of 15 participants. There was one group for each method (groups ’FoR’ and ’VP’), one combining both methods (group ’FoR,VP’) and one control group (group ’none’). The experiment process involved participants signing a consent form and filling in the personal questionnaire, followed by the 5-minute experiment. After the experiment, participants filled in the SSQ, PQ, TLX and general feedback. Participants were rewarded with a discount voucher to a virtual reality arcade. After the data collection phase, all questionnaires and answers were digitized. Qualitative data was transcribed into a spreadsheet.

1. One of the things which was difficult for me while working on this thesis was intentionally making something which might make people feel a level of discomfort (such as nausea or dizziness). For many participants it was their first time trying out virtual reality, and in many cases it was not a pleasant experience. I would really like people to be excited about VR and to enjoy it. However, this experiment might have discouraged some from using VR, and that is a shame. Virtual reality has great potential to be used not only for entertainment but for business too.
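The random assignment to groups could be sketched as follows. This is an illustrative reconstruction, not the actual procedure used in the thesis; the group labels match the text, but capping each group at 15 while still choosing randomly is an assumption:

```python
import random

GROUPS = ["FoR,VP", "FoR", "VP", "none"]
GROUP_SIZE = 15

def assign_participant(counts, rng=random):
    """Randomly pick a group that still has free slots and record it."""
    open_groups = [g for g in GROUPS if counts[g] < GROUP_SIZE]
    group = rng.choice(open_groups)
    counts[group] += 1
    return group

counts = {g: 0 for g in GROUPS}
assignments = [assign_participant(counts) for _ in range(60)]
```

Note that purely random assignment does not balance demographics; stratifying by gender before assignment would address the uneven female:male ratios discussed later in this chapter.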

2. Some of the things to think about when developing a virtual reality experience are to optimize early (using a profiler or other tools) and to design levels with optimization techniques in mind. It is hard to change whole scenes at the end, after it becomes obvious they need to be optimized. There are many resources online on optimization (optimizations for mobiles are also a good fit for VR). Testing with other people is important; developers themselves have tunnel vision and do not see flaws in their design. Building a community around the project, posting relevant content and giving back makes development more enjoyable when the developer knows there are other people involved. Spending time to check the available tools and assets and how they fit the project before committing to a specific one helps with development in the long term. Implementing cheat codes, debug helper functions and keyboard shortcuts makes development and testing much easier and more enjoyable. For example, a simple script to visualize the track path between waypoints as colored lines was implemented. Making screenshots and sharing progress with others is good for motivation, and it helps with writing because it is easy to look back.


The data analysis showed large values of standard deviation. For example, for the SSQ score it was 27.46 for the group ’FoR’. There were several factors which could have influenced the results. There were language issues with the questionnaires, and many of the participants were translating terms in the SSQ (and other questionnaires) from English to Czech (using their own dictionaries or a computer). One of the questions in the SSQ was especially problematic - "fullness of the head". Other participants lost the meaning of individual terms in translation (one of the participants reported the SSQ containing similar questions). The questionnaires were normalized for an English-speaking audience, but this did not matter when participants translated them. Maybe it would have been better to translate the questionnaires or to test only with participants with a high proficiency in English. Another issue was using standard (and general) questionnaires with questions which were not suitable for the passive seated experience. For example, there were questions about task performance, interactions, actions and similar. In retrospect, these questions should have been omitted. Some participants came straight to the laboratory and some were picked up at the front entrance and escorted in an elevator, so some had a small workout (such as climbing the stairs) before the experiment. There were differences between the body postures of individual participants (Subsec. 4.2.5). Also, some participants were leaning their bodies and tilting their heads, especially during the last segment of the virtual scene. Asking participants after the VR session whether they required a napkin to wipe their forehead may also have biased their responses in the SSQ. Assignment to each of the testing groups was based on a random generator, meaning each group might have had a different demographic composition (for example, a different female:male ratio or average age).
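For context, SSQ scores such as those analyzed here are derived from 16 symptom ratings using the standard weights from Kennedy et al. (1993). The sketch below assumes that standard scoring; the raw subscale sums are taken as inputs, and the mapping of individual symptoms to subscales is omitted:

```python
def ssq_scores(nausea_raw, oculomotor_raw, disorientation_raw):
    """Standard SSQ subscale and total scores (Kennedy et al., 1993).

    Each argument is the unweighted sum of the 0-3 symptom ratings
    belonging to that subscale.
    """
    return {
        "N": nausea_raw * 9.54,            # nausea subscale
        "O": oculomotor_raw * 7.58,        # oculomotor subscale
        "D": disorientation_raw * 13.92,   # disorientation subscale
        "TS": (nausea_raw + oculomotor_raw + disorientation_raw) * 3.74,
    }
```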
The participants’ task in the experiment was communicated verbally right after their head-mounted displays (HMD) were adjusted. For most participants it was their first time in virtual reality, and they were probably overwhelmed and paying little attention to the instructions. Some of the participants reported in their feedback that they received no instructions about what they should have been doing. This was magnified by questions in the questionnaires asking about their performance or actions. Their task "to sit and watch the scene" should have been stated before they put on their headset. Another problem was the initial menu scene, which included the calibration. There was a text stating they should look directly ahead. However, it was not said that this was for the initial calibration only, and one participant misunderstood and proceeded to look directly ahead during the experiment. Some of the questions in the questionnaires had room for interpretation; for example, some participants understood interactions in the questions as part of the head-tracking. One of the participants reported they thought the experimenter clicking the mouse on their laptop was somehow controlling the experiment. Paying attention to all the actions the experimenter performs during testing is important. Data analysis showed there is a difference between the SSQ scores of the testing groups. The group ’none’ had the highest median SSQ score, followed by the groups ’VP’ and ’FoR’. The combined group ’FoR,VP’ had the lowest score. However, due to the large values of standard deviation there is no significant difference between any of the groups. Frequency histograms of SSQ scores for each group showed that the group ’none’ had a lot of answers with low SSQ scores. This was probably caused by the uneven distribution of male and female participants within the groups (see App. F for the gender distribution between groups). Further analysis (Subsec. 5.1.2) shows that the large values of the standard deviation could be influenced by outliers who scored extremely high and low SSQ scores. The Outlier Labeling Technique and a 5% trimmed mean were employed to eliminate outliers. Scatter plots revealed decent clustering for the group ’VP’, a group consisting only of male participants. One thing worth pointing out is that the SSQ specifically states "circle how much each symptom below is affecting you right now".
Since there were problems with participants not paying attention to instructions and with understanding English, it is possible that some of them were not reporting their current state, but the state they had been experiencing while inside VR. Analysis of the personal information showed no significant difference between age groups. Analysis of gender was not as straightforward. The distribution of gender between groups was not uniform; however, overall male participants scored lower SSQ values (Fig. 5.17).
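The outlier screening mentioned earlier (the Outlier Labeling Technique plus a 5% trimmed mean) can be sketched as follows. The 2.2 multiplier is the value suggested by Hoaglin and Iglewicz for the labeling rule; the exact variant used in the thesis analysis is an assumption:

```python
from statistics import mean, quantiles

def label_outliers(scores, g=2.2):
    """Keep only values inside the fences Q1 - g*IQR .. Q3 + g*IQR."""
    q1, _, q3 = quantiles(scores, n=4)
    iqr = q3 - q1
    lo, hi = q1 - g * iqr, q3 + g * iqr
    return [x for x in scores if lo <= x <= hi]

def trimmed_mean(scores, proportion=0.05):
    """Mean after dropping `proportion` of the values from each tail."""
    s = sorted(scores)
    k = int(len(s) * proportion)
    return mean(s[k:len(s) - k] if k else s)
```

Both techniques reduce the influence of the extreme SSQ scores on group statistics without discarding the bulk of the data.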


A t-test showed a significant difference between genders (note that equal variances were not assumed). However, a Mann-Whitney U test gave a contradictory result. A more uniform distribution of male and female participants between groups should be employed to draw conclusions. The effect of the day of the week (weekday vs weekend) and the time of the day (morning vs afternoon) was analyzed, and it showed no significant difference. Nevertheless, this could have been caused by having mostly students as participants and conducting the research during the summer holidays. The effect of the week of the testing was analyzed. The analysis showed no significant difference between the two selected weeks, meaning some level of quality of the experiment process was kept.3 There were too few participants who used a computer only a little to draw a conclusion about whether computer usage influences SSQ scores. Similarly, comparing gamers and non-gamers was problematic; there were too few non-gamers. Most of the participants scored medium values. Two groups of less and more playing participants were compared, with no significant difference (less playing participants scored slightly higher SSQ scores). Similarly to computer usage, VR usage was evaluated. In this case most of the participants had no previous experience with VR, and no conclusion was made. This was also the case for the dominant hand; there were only 2 participants who were left-handed. The Presence Questionnaire analysis consisted of choosing suitable questions to analyze and then examining relationships in the data. Nine questions were selected (List 5.1.4) for the analysis; the rest included terms such as actions and control, which were confusing for the participants of the passive seated experience. One of the questions asked how natural the movement felt, and the analysis showed that the control group ’none’ considered the movement extremely artificial compared to the groups with the frame of reference, who considered it less artificial.
On the other hand, consistency with the real world was lower for the ’FoR’ group compared to the ’VP’

3. This was my personal take on verifying the experiment process and should not by any means be considered evidential.

group. It is possible that the design of the virtual cockpit had a negative effect. In the general feedback, some participants reported that the cockpit was preventing them from seeing outside. Answers to questions 9 and 12 were used to analyze this issue; the results contradict the general feedback. However, it was found that the waypoints might have been blocking the view.

When asked how compelling the movement was, group ’FoR’ scored the lowest. As for the quickness of adjustment to the experience, it was fastest for the group ’none’, while group ’FoR’ took the longest to adjust. However, the differences were small.

Results of the TLX met expectations: a passive seated experience does not require a lot of concentration. Group ’FoR,VP’ had the lowest score; the combination of both methods seems to reduce the task load. A relationship between SSQ and TLX was discovered: higher TLX scores correspond to higher SSQ scores.

A lot of insight was gained from the general feedback. Participants shared a lot of constructive criticism as well as their points of view regarding the experiment. Most importantly, the qualitative feedback reveals certain flaws in the experiment (including recommendations to add sounds and to improve graphics).

The task "to sit and watch the scene" should have been communicated clearly and before participants put on the HMD. They were not paying attention and were probably overwhelmed by wearing a virtual reality device for the first time. Instructions for calibration should have been separated from the experiment in order not to confuse participants. An option to pause and skip the countdown at the beginning should have been implemented for easier testing.

Overall, many participants were excited about virtual reality and really liked the head-tracking and the sense of presence. Participants suggested adding challenges or more life to the scene, as they felt the scene was a bit boring and bare.
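The SSQ-TLX relationship mentioned above (higher TLX corresponding to higher SSQ) is the kind of claim summarized by a correlation coefficient. A standard-library sketch of Pearson's r, for illustration only (the actual analysis used dedicated statistics software; any data fed to the function in an example would be invented, not the thesis data):

```python
import math
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples,
    e.g. per-participant TLX scores vs. SSQ total scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value near +1 would mean that participants reporting a higher task load also tended to report more sickness symptoms.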
The sense of presence could have been increased by adding a virtual body to the scene. Participants were not warned about potential discomfort in order not to influence the results, and some shared that they would have liked to be informed about potential dizziness. The consent form included a statement allowing both parties to interrupt the experiment without stating a reason (to make


sure nobody continues the experiment unless they feel like it). However, informing participants not to eat before the experiment should have been part of the information sent to them in the calendar and confirmation email.

Head-tracking sensors can lose tracking when there is an obstacle in the view, for example, when a participant turns away from the sensors. This can cause a lag. Better placement of the sensors or using a non-rotating chair should prevent this issue.

The experimenter observed, and participants reported, that they were moving involuntarily while in VR. This was most pronounced during the part of the experience when the virtual view was tilting from side to side. Participants were tilting their heads, leaning their bodies and holding the chair. These different body movements and postures could have influenced the results of the experiment. Some participants reported that the cable connected to the HMD was a nuisance. Participants shared that some questions in the questionnaires were confusing, because they asked about task, performance, interactions and other things that were not part of the experiment. These questions should have been omitted.

Feedback from participants in the 4 tested groups revealed a few interesting points. The group with both methods employed shared the following insights. The cockpit and its angle with the ground obstructed the view in some parts of the experiment and felt too inclined; the pivot point of the virtual camera should have been adjusted. The movement felt to them like a roller coaster. Waypoints helped to anticipate the movement. Four participants would have liked smoother movement. The most nauseating segment of the ride was the tilting from left to right. Six participants wrote about their discomfort in the general feedback.

In the group with the frame of reference, 4 participants stated that the movement felt like a roller coaster; the cockpit was obstructing the view, and smoother turning would have been appreciated.
Four participants from this group were surprised that the vehicle went over terrain bumps as if it was floating. Nine participants shared that they felt discomfort, and 2 considered interrupting the experiment.

In the group with the visible path, 10 participants shared that they felt a level of discomfort. The segment of the track with tilting was mentioned 4 times and caused nausea for some; one participant wanted to interrupt the test. Waypoints helped to predict the movement. Six participants

gave feedback related to the movement of the vehicle: the speed felt unrealistic, and turning should have been smoother. Two participants shared that the angle with the ground was too inclined in some parts. The vehicle felt like it was floating over the terrain bumps. An interesting finding was that the waypoints might be drawing attention away from the environment. Four participants reported that the movement felt like a roller coaster.

In the control group (group ’none’), there were 14 mentions of a level of discomfort in the general feedback. When there was no cockpit to block the view, it was possible to see down from the top of the hill, and some participants reported that they were afraid of heights. One of the participants reported looking to the side to reduce dizziness, possibly influencing the results. Participants in this group reported that they were not able to predict the movement. The movement was reported to ignore the terrain bumps, and the changes of speed and the smoothness of the movement were reported as inadequate. An interesting finding was that two participants felt like they were falling, and two had trouble finding the horizon.

6.1.1 Summary

Overall, several issues were revealed during the testing and through participant feedback. The most important were: inconsistencies between body postures of individual participants (participants tilting their heads and grabbing the chair); completely random distribution of participants into testing groups without keeping gender or age group ratios (meaning high variances between the groups); irrelevant questions in the questionnaires; participants having to translate terms in the questionnaires; not setting a clear goal for participants before they put on the HMD; missing sound and poor graphics; and unnatural movement through the environment.

These and other factors contributed to the large standard deviation of the SSQ scores, preventing a conclusive analysis of the data. Nevertheless, these are valuable lessons and show how important it is to think about every aspect of the experiment (from experiment design to selection of participants).

Even though there is no significant difference, there is at least a pattern in the data. Mean values of the SSQ score for the control group and

group with the frame of reference were very similar (31.65 and 33.72), meaning the frame of reference seems to have no effect on motion sickness. The visible path method lowered the mean SSQ score (25.51). The group with both methods applied had the lowest motion sickness score (23.23), as expected. The recommendation for developers and creators of virtual reality experiences is to show waypoints marking the path of movement in their projects, and, if possible and if it does not ruin the experience, to include visual elements acting as a frame of reference too.
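The comparison of group means above is the setting for a one-way ANOVA: the F statistic divides between-group variance by within-group variance, so the high within-group variance reported earlier shrinks F and masks mean differences of around ten SSQ points. A standard-library sketch of the statistic (scores used in any example run would be invented, not the thesis data):

```python
from statistics import mean

def one_way_f(*groups):
    """F statistic of a one-way ANOVA over k independent groups:
    mean square between groups divided by mean square within groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

When the within-group spread is large relative to the gaps between group means, F stays small and the test cannot reject the null hypothesis, which matches the inconclusive result reported here.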

6.2 Future work

This section lists suggestions for future work to improve the virtual scene, the experiment, the testing, or other aspects of this research.

∙ Automatically check the terrain below the vehicle to get rid of clipping; this would make the design and implementation phase of the terrain much faster. Use a better terrain editor, or tweak the configuration, to prevent sharp edges.
∙ Design a more realistic path through the environment, without moving to places where it does not make sense.
∙ Simulate the physics of the vehicle movement to make the speed feel more natural.
∙ Tweak the pivot point of the virtual camera to make the angle with the ground more realistic.
∙ Implement a spline for smoothly curved movement, and an effect of terrain bumps on the movement.
∙ Increase the size of the virtual world and employ different design techniques to hide the edges of the scene.
∙ Improve graphics (textures, lighting, 3D models and animations) and add sounds (perhaps even realistically propagating or physically-based audio).
∙ Change the shrinking-torus effect on the waypoints to prevent the small circle from passing through participants’ heads, perhaps by disabling the effect in close proximity.
∙ Fully model and texture the cockpit 3D model.
∙ Make it easier for participants to register by implementing the registration calendar as a web page.
∙ Clearly explain the task and the calibration procedure to participants before they put on the HMD, and show a text prompt instructing them to remove it once the scene ends.
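One suggestion above, smoothly curved movement, is commonly implemented with a Catmull-Rom spline, which passes through every control point and therefore fits a waypoint-based path directly. The scene was built in Unity, but the interpolation itself is engine-agnostic; a small illustrative Python sketch:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Point on a uniform Catmull-Rom segment between p1 and p2.
    t runs from 0 (at p1) to 1 (at p2); p0 and p3 shape the tangents,
    so chaining segments over a waypoint list yields a smooth path."""
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * (2 * b + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t2
               + (-a + 3 * b - 3 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3))
```

Sampling by arc length rather than by a fixed step in t would additionally even out the speed, addressing the complaints about abrupt speed changes.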


Participants had different postures; this could be prevented by instructing them to sit in a desired position and perhaps by using a less flexible (non-rotating) chair with armrests. A different approach could be to measure how much participants tilt their heads or lean on their own. This would also help with the occasional lag when head-tracking was lost because participants disrupted the line of sight between the HMD and the sensors.

Implementing or printing out a map of the virtual scene, with names for each segment, would make giving general feedback easier for participants. More predefined spawn points in the scene would help as well. Using only digital versions of the questionnaires would make data collection faster.

The general feedback was a great place for participants to give any feedback they wanted. However, some specific questions emerged during the testing. After participants gave their general feedback, I proceeded to ask these questions and included the answers in the general feedback. These were questions such as: "Were the questions in the questionnaires understandable?", "What would you improve?", "Have you noticed any bugs or technical issues?", "What were your expectations?" and more. An additional questionnaire with specific questions such as these should be included in the experiment.

Some of the input from the questionnaires was created manually and was inconsistent. For example, the field of studies could have been represented differently: some students wrote IT, some Information Technology, and some wrote it in Czech. Another example was the timeslots of the testing sessions; the time had different formats, because it was created ad hoc while manually assigning participants in the table. This caused problems during the data analysis, because the data had to be cleaned and fixed. It would be an improvement to make sure that inputs have the same format, by creating a pre-defined list for data such as the field of study and by automating manual tasks.
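The free-text inconsistencies described above ("IT", "Information Technology", Czech variants) can be normalized against a pre-defined list before analysis. A small sketch; the alias table is hypothetical, built only from the examples mentioned in the text:

```python
# Hypothetical alias table; in practice it would cover every variant
# observed in the collected answers.
FIELD_ALIASES = {
    "it": "Information Technology",
    "information technology": "Information Technology",
    "informatika": "Information Technology",  # Czech variant
}

def normalize_field(raw):
    """Map a free-text 'field of study' answer to a canonical label;
    unknown answers are passed through stripped, for manual review."""
    return FIELD_ALIASES.get(raw.strip().lower(), raw.strip())
```

Collecting such answers through a pre-defined drop-down list, as suggested above, removes the need for this cleanup entirely.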
Measuring participants’ biometric readings (for example, EEG) to collect objective data could provide more insight, as could comparing different demographic groups of participants (for example, young and senior age groups, or gamers and non-gamers). More methods could be evaluated, for example, giving participants control over the movement or giving them challenging tasks; they could control the speed of the vehicle by using a controller or by leaning forwards and backwards. Using galvanic vestibular stimulation, or testing with participants with an inner ear disability, could be an interesting direction.


7 Bibliography


1. SONY INTERACTIVE ENTERTAINMENT. PlayStation VR. 2016. Virtual reality device.
2. HTC CORPORATION. HTC Vive. 2016. Virtual reality device.
3. OCULUS VR. Oculus Rift. 2016. Virtual reality device.
4. SHUPAK, A.; GORDON, C. R. Motion sickness: Advances in pathogenesis, prediction, prevention, and treatment. Aviation, Space, and Environmental Medicine. 2006, vol. 77, pp. 1213–1223.
5. KOLASINSKI, Eugenia M. Simulator Sickness in Virtual Environments: Technical Report 1027. Alexandria, Virginia: U.S. Army Research Institute, 1995.
6. KESHAVARZ, Behrang; STELZMANN, Daniela; PAILLARD, Aurore; HECHT, Heiko. Visually induced motion sickness can be alleviated by pleasant odors. Experimental Brain Research. 2015, vol. 233, no. 5, pp. 1353–1364. ISSN 0014-4819. Available from DOI: 10.1007/s00221-015-4209-9.
7. DAVIS, S.; NESBITT, K.; NALIVAIKO, E. Comparing the onset of cybersickness using the Oculus Rift and two virtual roller coasters. In: PISAN, Y.; NESBITT, K.; BLACKMORE, K. (eds.). 11th Australasian Conference on Interactive Entertainment (IE 2015). Sydney, Australia: Australian Computer Society Inc., 2015, pp. 3–14.
8. MOUSAVI, Maryam; YAP, Hwa Jen; MUSA, Nurmaya. A Review on Cybersickness and Usability in Virtual Environments. Advanced Engineering Forum. 2013, vol. 10, pp. 34–39. ISSN 2234-991X. Available from DOI: 10.4028/www.scientific.net/AEF.10.34.
9. BOS, Jelte E. Nuancing the relationship between motion sickness and postural stability. Displays. 2011, vol. 32, no. 4, pp. 189–193. ISSN 0141-9382. Available from DOI: 10.1016/j.displa.2010.09.005.


10. CHANG, EunHee; HWANG, InJae; JEON, Hyeonjin; CHUN, Yeseul; KIM, Hyun Taek; PARK, Changhoon. Effects of rest frames on cybersickness and oscillatory brain activity. In: 2013 International Winter Workshop on Brain-Computer Interface (BCI). IEEE, 2013, pp. 62–64. ISBN 9781467359740. Available from DOI: 10.1109/IWW-BCI.2013.6506631.
11. HAN, KyungHun; PARK, ChangHoon; KIM, EungSuk; KIM, DaeGuen; WOO, SungHo; JEONG, JiWoon; HWANG, InJae; KIM, HyunTaek. Effects of Different Types of 3D Rest Frames on Reducing Cybersickness in a Virtual Environment. i-Perception. 2011, vol. 2, no. 8, pp. 861–861. ISSN 2041-6695. Available from DOI: 10.1068/ic861.
12. WHITTINGHILL, David Matthew; ZIEGLER, Bradley; MOORE, James; CASE, Tristan. Nasum Virtualis: A Simple Technique for Reducing Simulator Sickness in Head Mounted VR. In: Game Developers Conference. San Francisco, 2015.
13. EMMERIK, Martijn; DE VRIES, Sjoerd; BOS, Jelte. Internal and external fields of view affect cybersickness. Displays. 2011, vol. 32, pp. 169–174. Available from DOI: 10.1016/j.displa.2010.11.003.
14. SO, Richard H. Y.; LO, W. T.; HO, Andy T. K. Effects of Navigation Speed on Motion Sickness Caused by an Immersive Virtual Environment. Human Factors: The Journal of the Human Factors and Ergonomics Society. 2001, vol. 43, no. 3, pp. 452–461. ISSN 0018-7208. Available from DOI: 10.1518/001872001775898223.
15. DORADO, Jose L.; FIGUEROA, Pablo A. Ramps are better than stairs to reduce cybersickness in applications based on a HMD and a Gamepad. In: 2014 IEEE Symposium on 3D User Interfaces (3DUI). 2014, pp. 47–50. ISBN 9781479936243. Available from DOI: 10.1109/3DUI.2014.6798841.
16. DORADO, Jose L.; FIGUEROA, Pablo A. Methods to reduce cybersickness and enhance presence for in-place navigation techniques. In: 2015 IEEE Symposium on 3D User Interfaces (3DUI). IEEE, 2015, pp. 145–146. ISBN 9781467368865. Available from DOI: 10.1109/3DUI.2015.7131742.


17. ZHANG, Li-Li; WANG, Jun-Qin; QI, Rui-Rui; PAN, Lei-Lei; LI, Min; CAI, Yi-Ling. Motion Sickness: Current Knowledge and Recent Advance. CNS Neuroscience & Therapeutics. 2016, vol. 22, no. 1, pp. 15–24. ISSN 1755-5930. Available from DOI: 10.1111/cns.12468.
18. KENNEDY, Robert S.; STANNEY, Kay M.; DUNLAP, William P. Duration and Exposure to Virtual Environments: Sickness Curves During and Across Sessions. Presence: Teleoperators and Virtual Environments. 2000, vol. 9, no. 5, pp. 463–472. ISSN 1054-7460. Available from DOI: 10.1162/105474600566952.
19. REED-JONES, Rebecca J.; REED-JONES, James G.; TRICK, Lana M.; VALLIS, Lori A. Can galvanic vestibular stimulation reduce simulator adaptation syndrome? In: Proceedings of the 4th International Driving Symposium on Human Factors in Driver Assessment, Training, and Vehicle Design. 1st ed. Iowa City, Iowa: University of Iowa, Public Policy Center, 2007. ISBN 9780874141580.
20. FREITAG, Sebastian; RAUSCH, Dominik; KUHLEN, Torsten. Reorientation in virtual environments using interactive portals. In: 2014 IEEE Symposium on 3D User Interfaces (3DUI). IEEE, 2014, pp. 119–122. ISBN 9781479936243. Available from DOI: 10.1109/3DUI.2014.6798852.
21. BOS, Jelte E. Less sickness with more motion and/or mental distraction. Journal of Vestibular Research. 2015, vol. 25, no. 1, pp. 23–33. Available from DOI: 10.3233/VES-150541.
22. GOLDING, John; MARKEY, H. M.; STOTT, J. R. R. The effects of motion direction, body axis, and posture on motion sickness induced by low frequency linear oscillation. 1995, vol. 66, pp. 1046–1051.
23. GOLDING, John F. Motion sickness susceptibility. Autonomic Neuroscience. 2006, vol. 129, no. 1–2, pp. 67–76. ISSN 1566-0702. Available from DOI: 10.1016/j.autneu.2006.07.019.
24. KENNEDY; BERBAUM; LILIENTHAL; DUNLAP; MULLIGAN; FUNARO. Guidelines for Alleviation of Simulator Sickness Symptomatology. Orlando, Florida: Naval Training Systems Center, 1987.


25. SHARPLES, Sarah; COBB, Sue; MOODY, Amanda; WILSON, John R. Virtual reality induced symptoms and effects (VRISE): Comparison of head mounted display (HMD), desktop and projection display systems. Displays. 2008, vol. 29, no. 2, pp. 58–69. ISSN 0141-9382. Available from DOI: 10.1016/j.displa.2007.09.005. Health and Safety Aspects of Visual Displays.
26. DONG, Xiao; YOSHIDA, Ken; STOFFREGEN, Thomas A. Control of a virtual vehicle influences postural activity and motion sickness. Journal of Experimental Psychology: Applied. 2011, vol. 17, no. 2, p. 128.
27. STANNEY, Kay M.; HASH, Phillip. Locus of User-Initiated Control in Virtual Environments: Influences on Cybersickness. Presence: Teleoperators and Virtual Environments. 1998, vol. 7, no. 5, pp. 447–459. Available from DOI: 10.1162/105474698565848.
28. KENNEDY, Robert S.; LANE, Norman E.; BERBAUM, Kevin S.; LILIENTHAL, Michael G. Simulator Sickness Questionnaire: An Enhanced Method for Quantifying Simulator Sickness. The International Journal of Aviation Psychology. 1993, vol. 3, no. 3, pp. 203–220. ISSN 1050-8414. Available from DOI: 10.1207/s15327108ijap0303_3.
29. UNITY TECHNOLOGIES. Unity [online]. 2005 [visited on 2017-11-26]. Available from: https://unity3d.com/.
30. LAVALLE, Steven M. Virtual Reality. 1st ed. University of Illinois: Cambridge University Press, 2016.
31. SHERMAN, William R.; CRAIG, Alan B. Understanding virtual reality: interface, application, and design. 1st ed. Boston: Morgan Kaufmann Publishers, 2003. ISBN 978-1558603530.
32. HILL, K. J.; HOWARTH, P. A. Habituation to the side effects of immersion in a virtual environment. Displays. 2000, vol. 21, no. 1, pp. 25–30. ISSN 0141-9382. Available from DOI: 10.1016/S0141-9382(00)00029-9.
33. HOWARTH, P. A.; HODDER, S. G. Characteristics of habituation to motion in a virtual environment. Displays. 2008, vol. 29, no. 2, pp. 117–123. ISSN 0141-9382. Available from DOI: 10.1016/j.displa.2007.09.009.


34. ATLASSIAN. Trello [online]. 2011 [visited on 2017-11-26]. Available from: https://trello.com/.
35. UNITY TECHNOLOGIES. Manual: Occlusion Culling [online]. 2017 [visited on 2017-11-26]. Available from: https://docs.unity3d.com/Manual/OcclusionCulling.html.
36. BALK, Stacy A.; BERTOLA, Anne; INMAN, Vaughan W. Simulator Sickness Questionnaire: Twenty Years Later. In: 7th International Driving Symposium on Human Factors in Driver Assessment, Training, and Vehicle Design. 2013, pp. 257–263. Available also from: https://trid.trb.org/view.aspx?id=1263840.
37. HART, Sandra G. NASA Task Load Index (TLX). Volume 1.0; Paper and Pencil Package. 1986. Available also from: https://ntrs.nasa.gov/search.jsp?R=20000021488.
38. HART, Sandra G. NASA-Task Load Index (NASA-TLX); 20 Years Later. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 2006, vol. 50, no. 9, pp. 904–908. ISSN 1541-9312. Available from DOI: 10.1177/154193120605000909.
39. GOOGLE LLC. Google Forms [online]. 2007 [visited on 2017-11-28]. Available from: https://www.google.com/forms/about/.
40. SURVEYMONKEY. SurveyMonkey [online]. 1999 [visited on 2017-11-28]. Available from: https://www.surveymonkey.com/.
41. WITMER, Bob G.; SINGER, Michael J. Measuring Presence in Virtual Environments: A Presence Questionnaire. Presence: Teleoperators and Virtual Environments. 1998, vol. 7, no. 3, pp. 225–240. ISSN 1054-7460. Available from DOI: 10.1162/105474698565686.
42. FACULTY OF INFORMATICS, MASARYK UNIVERSITY. SSME: Service Science, Management and Engineering [online]. 2007 [visited on 2017-12-03]. Available from: http://ssme.fi.muni.cz/.
43. ROGERS, Carl R.; FARSON, Richard Evans. Active Listening. Martino Publishing, 2015. ISBN 1614278725.


44. FACEBOOK, INC. Facebook [online]. 2004 [visited on 2017-11-26]. Available from: https://facebook.com/.
45. GOOGLE LLC. Google Sheets [online]. 2007 [visited on 2017-11-28]. Available from: https://www.google.com/sheets/about/.
46. DOODLE. Doodle [online]. 2007 [visited on 2017-11-26]. Available from: https://doodle.com/.
47. RANDOMNESS AND INTEGRITY SERVICES LTD. RANDOM.ORG [online]. 1998 [visited on 2017-11-28]. Available from: https://www.random.org.
48. IBM. SPSS Statistics [online]. 1968 [visited on 2017-12-03]. Available from: https://www.ibm.com/products/spss-statistics.
49. PATON-SIMPSON AND ASSOCIATES LTD. SOFA Statistics [online]. 2009 [visited on 2017-12-05]. Available from: http://www.sofastatistics.com.
50. ZABELL, S. L. On Student’s 1908 Article “The Probable Error of a Mean”. Journal of the American Statistical Association. 2008, vol. 103, no. 481, pp. 1–7. Available from DOI: 10.1198/016214508000000030.
51. HOAGLIN, David C.; IGLEWICZ, Boris; TUKEY, John W. Performance of Some Resistant Rules for Outlier Labeling. Journal of the American Statistical Association. 1986, vol. 81, no. 396, pp. 991–999. ISSN 0162-1459. Available also from: http://www.jstor.org/stable/2289073.
52. HOAGLIN, David C.; IGLEWICZ, Boris. Fine-Tuning Some Resistant Rules for Outlier Labeling. Journal of the American Statistical Association. 1987, vol. 82, no. 400, pp. 1147–1149. ISSN 0162-1459. Available from DOI: 10.1080/01621459.1987.10478551.
53. UCLA INSTITUTE FOR DIGITAL RESEARCH AND EDUCATION. Descriptive statistics | SPSS Annotated Output - IDRE Stats [online]. 2017 [visited on 2017-12-03]. Available from: https://stats.idre.ucla.edu/spss/output/descriptive-statistics/.
54. MORI, M.; MACDORMAN, K. F.; KAGEKI, N. The Uncanny Valley [From the Field]. IEEE Robotics & Automation Magazine. 2012, vol. 19, no. 2, pp. 98–100. ISSN 1070-9932. Available from DOI: 10.1109/MRA.2012.2192811.


55. KAUNITZ, Jonathan D. The Doppler Effect: A Century from Red Shift to Red Spot. Digestive Diseases and Sciences. 2016, vol. 61, no. 2, pp. 340–341. ISSN 1573-2568. Available from DOI: 10.1007/s10620-015-3998-9.
56. REN, Zhimin; YEH, Hengchin; LIN, Ming C. Example-guided physically based modal sound synthesis. ACM Transactions on Graphics. 2013, vol. 32, no. 1, pp. 1–16. ISSN 0730-0301. Available from DOI: 10.1145/2421636.2421637.
57. RAGHUVANSHI, Nikunj; SNYDER, John; MEHRA, Ravish; LIN, Ming; GOVINDARAJU, Naga. Precomputed wave simulation for real-time sound propagation of dynamic sources in complex scenes. In: ACM SIGGRAPH 2010 papers on - SIGGRAPH ’10. New York, New York, USA: ACM Press, 2010, pp. 1–11. ISBN 9781450302104. Available from DOI: 10.1145/1833349.1778805.


8 Index


A
Active listening, 106
Adaptation, 10

C
Cybersickness, 5

D
Doppler effect, 121

F
Field of view, 7
Frame of reference, 6, 31

G
Galvanic vestibular stimulation, 12

H
Habituation, 10
Head-mounted display (HMD), 5

L
Level design, 36
Locomotion, 8

M
Mal de debarquement (sea legs), 14
Motion Sickness, 5

O
Occlusion culling, 36
Outlier Labeling Technique, 66

P
Presence Questionnaire, 42

S
Simulation sickness, 5
Simulator sickness, 5
Simulator Sickness Questionnaire (SSQ), 41

T
Task Load Index (TLX), 42

U
Uncanny valley, 109

V
Virtual Reality, 5
Visible path, 7, 31
Visually induced motion sickness (VIMS), 5
VR sickness, 5


A Facebook groups

Below are the Facebook groups which were used while looking for participants.

∙ 1. ročník FI MUNI 2015/2016
∙ MUNI FI Mgr. program Aplikovaná informatika 2014 - 2016
∙ Prváci MUNI 2013/2014
∙ 1. ročník FI MUNI 2013/2014
∙ MUNI FI Mgr-AP 2014 - 2016
∙ FI MUNI, Aplikovaná informatika, prváci 2016/2017
∙ 2. ročník FI MUNI 2016/2017
∙ FI MUNI
∙ Study group FI MUNI
∙ FF MU PSYCHOLOGIE (studenti z oboru)
∙ Prváci MU 2017 (studentská skupina)
∙ 1. ročník LF MUNI 2016/2017
∙ Mediální studia a žurnalistika MUNI (FSS) 2015
∙ Prváci MU 2017 (official)
∙ 1. ročník ESF MUNI 2017/2018
∙ 1. ročník FSpS MUNI 2015/2016
∙ Pedagogická fakulta MU Brno
∙ Environmentální studia FSS MUNI
∙ MUNI - Fakulta Informatiky - 1. ročník - 2014/2015
∙ Pedagogické asistentství AJ MUNI 2015
∙ Teorie interaktivních médií 1.ročník (2016-2019)
∙ Právo a právní věda MU 2016-2021
∙ Job Hub FI MUNI
∙ SSME
∙ Filozofická fakulta MU Brno
∙ VUT FIT BIT 2016-2019 Teambuilding
∙ Prváci VUT Fakulta podnikatelská 2017/2018
∙ VUT FIT BIT 2015-2018 Teambuilding
∙ VUT FSI - prodám/koupím/nabízím/sháním
∙ VUT FIT 2013-2015
∙ VUT FIT MIT 2015-2017 Teambuilding


∙ VUT FIT BIT 2014-2017 Teambuilding
∙ FAST VUT 2012-2016
∙ VUT FSI
∙ VUT Fakulta podnikatelská
∙ Fakulta stavební VUT v Brně (FAST)
∙ VFU STUDENTI
∙ Prváci PEF MENDELU 16/17
∙ Univerzita obrany Brno
∙ Prváci PEF MENDELU 14/15
∙ JAMU Brno
∙ Koleje Komárov - Sladkého
∙ PPV - Koleje pod Palackého vrchem (VUT Brno)
∙ Koleje Komárov
∙ Kaunicovy koleje
∙ Koleje Kounicova
∙ Klácelky - VŠ koleje Klácelova 2
∙ Mánesovy koleje Brno
∙ Koleje Vinařská
∙ Tauferovy koleje Brno
∙ koleje Tvrdého
∙ Purkyňovy koleje (VUT Brno)
∙ Práce , brigády - BRNO
∙ AKCE BRNO! Všichni přidávejte koho chcete aneb SPAMjaksviňa
∙ Nenechte si ujít! kulturní tipy Brno
∙ Brněnské deskovky
∙ ŽIVÉ MĚSTO BRNO - kulturní akce
∙ Práce/ Brigády/ Příležitosti - Brno a okolí
∙ Games Development Community CZ/SK
∙ Inzerce Brno
∙ Bazar Vseho -Brno- Bez Schvalovani
∙ BRIGÁDY PRÁCE BRNO
∙ Pokémon GO Brno a okolí
∙ K vašim službám!
∙ BAZAR BRNO - bez schvalování
∙ BRNO a okolí - práce, brigády, business


∙ LARPy - Pozvánky na akce
∙ Vzděláváme se, rozvíjíme se - akce Brno
∙ MU Game Studies
∙ LARP CZ
∙ Technologie ve vzdělávání (KISK)
∙ English in Brno - Angličtina v Brně
∙ Living in Brno
∙ Free Things Brno
∙ Brno Forum
∙ Multilingual Jobs in Brno


B Consent form

Section A: Consent form Participant’s Copy Participant No______

Consent Form

Project Title: Oculus Rift Virtual Reality Game

Please read completely

Research in the current study is considered of “minimal risk”. During this experiment, you will be asked to watch a scenario in a virtual reality environment. The monitor for this study will be Bc. Roman Lukš, who has also been the designer of this experiment. You are expected to participate for approximately 1 hour. On completion of the task, you will fill in questionnaires. We will use your data anonymously, along with the data of several other participants. By signing this form, you agree to hold and maintain the confidential information (the nature of this experiment) from third parties. Remember that your participation is entirely voluntary. You can choose not to participate in part or all of the project, and you can withdraw at any stage of the project without being penalized or disadvantaged in any way. Finally, we reserve the right to stop using you as a subject for any reason. Please circle your answer to the following issues and sign below:

Do you understand this consent form? YES NO

Do you give your consent to be a subject in this study? YES NO

Do you certify that you are at least 18 years of age? YES NO

Can we use your photograph in a publication? YES NO

Have you ever suffered from severe vertigo or epilepsy? YES NO

Do you give your consent for your data to be used in further research projects which have research governance approval, as long as your name and contact information is removed before it is passed on? YES NO

Do you agree to be invited to a follow up session? YES NO

Full name______Date______

Signature ______


Section A: Consent form Experimenter’s Copy Participant No______

Consent Form

Project Title: Oculus Rift Virtual Reality Game

Please read completely

Research in the current study is considered of “minimal risk”. During this experiment, you will be asked to watch a scenario in a virtual reality environment. The monitor for this study will be Bc. Roman Lukš, who has also been the designer of this experiment. You are expected to participate for approximately 1 hour. On completion of the task, you will fill in questionnaires. We will use your data anonymously, along with the data of several other participants. By signing this form, you agree to hold and maintain the confidential information (the nature of this experiment) from third parties. Remember that your participation is entirely voluntary. You can choose not to participate in part or all of the project, and you can withdraw at any stage of the project without being penalized or disadvantaged in any way. Finally, we reserve the right to stop using you as a subject for any reason. Please circle your answer to the following issues and sign below:

Do you understand this consent form? YES NO

Do you give your consent to be a subject in this study? YES NO

Do you certify that you are at least 18 years of age? YES NO

Can we use your photograph in a publication? YES NO

Have you ever suffered from severe vertigo or epilepsy? YES NO

Do you give your consent for your data to be used in further research projects which have research governance approval, as long as your name and contact information is removed before it is passed on? YES NO

Do you agree to be invited to a follow up session? YES NO

Full name______Date______

Signature ______


C Feedback form

Feedback Participant No______

Finally, you may use this space to express any additional comments about the experiment. For example: give us general feedback, tell us how we could improve, write what you liked or did not like, judge our lab equipment and the process of our experiment, note whether you noticed any technical imperfections or bugs, what your expectations were, what you think about the virtual environment, how you perceived the movement, what you think about the questionnaires, etc.

______

______

______

______

______

______

______

______

______

______

______

______

______

______

______

______

______

______

______

______

______

D Presence Questionnaire

Questionnaire Participant No______

Characterize your experience with the virtual environment by marking an "X" in the appropriate box of the 7-point scale, in accordance with the question content and descriptive labels. Please consider the entire scale when making your responses, as the intermediate levels may apply. Answer the questions independently in the order in which they appear. Do not skip questions or return to a previous question to change your answer.

WITH REGARD TO THE EXPERIENCE

1. How much were you able to control events?

NOT AT ALL - SOMEWHAT - COMPLETELY

2. How responsive was the environment to actions that you initiated (or performed)?

NOT RESPONSIVE - MODERATELY RESPONSIVE - COMPLETELY RESPONSIVE

3. How natural did your interactions with the environment seem?

EXTREMELY ARTIFICIAL - BORDERLINE - COMPLETELY NATURAL

4. How much did the visual aspects of the environment involve you?

NOT AT ALL - SOMEWHAT - COMPLETELY

5. How natural was the mechanism which controlled movement through the environment?

EXTREMELY ARTIFICIAL - BORDERLINE - COMPLETELY NATURAL

6. How compelling was your sense of objects moving through space?

NOT AT ALL - MODERATELY COMPELLING - VERY COMPELLING

7. How much did your experiences in the virtual environment seem consistent with your real world experiences?

NOT CONSISTENT - MODERATELY CONSISTENT - VERY CONSISTENT

8. Were you able to anticipate what would happen next in response to the actions that you performed?

NOT AT ALL - SOMEWHAT - COMPLETELY

9. How completely were you able to actively survey or search the environment using vision?

NOT AT ALL - SOMEWHAT - COMPLETELY

10. How compelling was your sense of moving around inside the virtual environment?

NOT COMPELLING - MODERATELY COMPELLING - VERY COMPELLING

11. How closely were you able to examine objects?

NOT AT ALL - PRETTY CLOSELY - VERY CLOSELY

12. How well could you examine objects from multiple viewpoints?

NOT AT ALL - SOMEWHAT - EXTENSIVELY

13. How involved were you in the virtual environment experience?

NOT INVOLVED - MILDLY INVOLVED - COMPLETELY ENGROSSED

14. How much delay did you experience between your actions and expected outcomes?

NO DELAYS - MODERATE DELAYS - LONG DELAYS

15. How quickly did you adjust to the virtual environment experience?

NOT AT ALL - SLOWLY - LESS THAN ONE MINUTE

16. How proficient in moving and interacting with the virtual environment did you feel at the end of the experience?

NOT PROFICIENT - REASONABLY PROFICIENT - VERY PROFICIENT

17. How much did the visual display quality interfere or distract you from performing assigned tasks or required activities?

NOT AT ALL - INTERFERED SOMEWHAT - PREVENTED TASK PERFORMANCE

18. How much did the control devices interfere with the performance of assigned tasks or with other activities?

NOT AT ALL - INTERFERED SOMEWHAT - INTERFERED GREATLY

19. How well could you concentrate on the assigned tasks or required activities rather than on the mechanisms used to perform those tasks or activities?

NOT AT ALL - SOMEWHAT - COMPLETELY

E Simulator Sickness Questionnaire (SSQ)

Google Forms were used to create this questionnaire. The screenshots below (Figure E.1 to Figure E.3) illustrate the structure and visual representation of the questionnaire.

Figure E.1: SSQ - part 1

Figure E.2: SSQ - part 2

Figure E.3: SSQ - part 3

F Personal Information Questionnaire

Google Forms were used to create this questionnaire. Screenshots below (Figure F.1 to Figure F.3) illustrate the structure and visual representation of the questionnaire.

Figure F.1: Personal Information questionnaire - part 1

Figure F.2: Personal Information questionnaire - part 2

Figure F.3: Personal Information questionnaire - part 3 (students only)

Figure F.4: Descriptives: Age

Figure F.5: Descriptives: Morning/Afternoon

Figure F.6: Descriptives: Gender

Figure F.7: Descriptives: Weekdays

Figure F.8: Histogram: Weekdays

Figure F.9: Histogram: Weekend

Age Group (FoR)
          Frequency   Percent   Valid Percent   Cumulative Percent
  18-25           9      60,0            60,0                 60,0
  26-33           5      33,3            33,3                 93,3
  34-41           1       6,7             6,7                100,0
  Total          15     100,0           100,0

Age Group (FoR,VP)
          Frequency   Percent   Valid Percent   Cumulative Percent
  18-25           7      46,7            46,7                 46,7
  26-33           7      46,7            46,7                 93,3
  34-41           1       6,7             6,7                100,0
  Total          15     100,0           100,0

Age Group (VP)
          Frequency   Percent   Valid Percent   Cumulative Percent
  18-25          11      73,3            73,3                 73,3
  26-33           3      20,0            20,0                 93,3
  34-41           1       6,7             6,7                100,0
  Total          15     100,0           100,0

Age Group (none)
          Frequency   Percent   Valid Percent   Cumulative Percent
  18-25           8      53,3            53,3                 53,3
  26-33           7      46,7            46,7                100,0
  Total          15     100,0           100,0

Gender (FoR)
          Frequency   Percent   Valid Percent   Cumulative Percent
  Female          9      60,0            60,0                 60,0
  Male            6      40,0            40,0                100,0
  Total          15     100,0           100,0

Gender (FoR,VP)
          Frequency   Percent   Valid Percent   Cumulative Percent
  Female          7      46,7            46,7                 46,7
  Male            8      53,3            53,3                100,0
  Total          15     100,0           100,0

Gender (VP)
          Frequency   Percent   Valid Percent   Cumulative Percent
  Male           15     100,0           100,0                100,0

Gender (none)
          Frequency   Percent   Valid Percent   Cumulative Percent
  Female          4      26,7            26,7                 26,7
  Male           11      73,3            73,3                100,0
  Total          15     100,0           100,0

To what extent do you use a computer in your daily activities? (FoR)
          Frequency   Percent   Valid Percent   Cumulative Percent
  3               1       6,7             6,7                  6,7
  4               5      33,3            33,3                 40,0
  5               9      60,0            60,0                100,0
  Total          15     100,0           100,0

To what extent do you use a computer in your daily activities? (FoR,VP)
          Frequency   Percent   Valid Percent   Cumulative Percent
  3               2      13,3            13,3                 13,3
  4               1       6,7             6,7                 20,0
  5              12      80,0            80,0                100,0
  Total          15     100,0           100,0

To what extent do you use a computer in your daily activities? (VP)
          Frequency   Percent   Valid Percent   Cumulative Percent
  2               1       6,7             6,7                  6,7
  4               1       6,7             6,7                 13,3
  5              13      86,7            86,7                100,0
  Total          15     100,0           100,0

To what extent do you use a computer in your daily activities? (none)
          Frequency   Percent   Valid Percent   Cumulative Percent
  4               3      20,0            20,0                 20,0
  5              12      80,0            80,0                100,0
  Total          15     100,0           100,0

To what extent do you play videogames? (eg. on PC, consoles, smartphone) (FoR)
          Frequency   Percent   Valid Percent   Cumulative Percent
  1               3      20,0            20,0                 20,0
  2               5      33,3            33,3                 53,3
  3               6      40,0            40,0                 93,3
  4               1       6,7             6,7                100,0
  Total          15     100,0           100,0

To what extent do you play videogames? (eg. on PC, consoles, smartphone) (FoR,VP)
          Frequency   Percent   Valid Percent   Cumulative Percent
  2               5      33,3            33,3                 33,3
  3               4      26,7            26,7                 60,0
  4               2      13,3            13,3                 73,3
  5               4      26,7            26,7                100,0
  Total          15     100,0           100,0

To what extent do you play videogames? (eg. on PC, consoles, smartphone) (VP)
          Frequency   Percent   Valid Percent   Cumulative Percent
  1               1       6,7             6,7                  6,7
  2               2      13,3            13,3                 20,0
  3               1       6,7             6,7                 26,7
  4              10      66,7            66,7                 93,3
  5               1       6,7             6,7                100,0
  Total          15     100,0           100,0

To what extent do you play videogames? (eg. on PC, consoles, smartphone) (none)
          Frequency   Percent   Valid Percent   Cumulative Percent
  1               1       6,7             6,7                  6,7
  2               3      20,0            20,0                 26,7
  3               3      20,0            20,0                 46,7
  4               6      40,0            40,0                 86,7
  5               2      13,3            13,3                100,0
  Total          15     100,0           100,0
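As an aside on how the columns in these frequency tables relate: the Percent and Cumulative Percent values follow directly from the raw counts and the group size of 15. The sketch below is illustrative JavaScript (matching the language of the download script in Appendix J); `frequencyTable` is a hypothetical helper, not part of the thesis tooling.

```javascript
// Derive the Percent and Cumulative Percent columns from raw counts,
// as in the SPSS frequency tables above (here group size N = 15).
function frequencyTable(counts) {
  const total = counts.reduce((a, b) => a + b, 0);
  let cumulative = 0;
  return counts.map(count => {
    const percent = (count / total) * 100;
    cumulative += percent; // accumulate before rounding
    return {
      count,
      percent: Math.round(percent * 10) / 10,
      cumulativePercent: Math.round(cumulative * 10) / 10
    };
  });
}

// Age Group (FoR): 18-25 -> 9, 26-33 -> 5, 34-41 -> 1
const rows = frequencyTable([9, 5, 1]);
console.log(rows);
```

For the Age Group (FoR) counts [9, 5, 1] this reproduces the 60,0 / 33,3 / 6,7 percentages and the 60,0 / 93,3 / 100,0 cumulative column; summing before rounding is what makes the last row land exactly on 100.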

G Presence Questionnaire - results

Response histograms (7-point scale, N = 60 for each question); means and standard deviations:

1. How much were you able to control events? (Mean = 1,72; Std. Dev. = 1,263)

2. How responsive was the environment to actions that you initiated (or performed)? (Mean = 3,43; Std. Dev. = 2,142)

3. How natural did your interactions with the environment seem? (Mean = 3,98; Std. Dev. = 1,692)

4. How much did the visual aspects of the environment involve you? (Mean = 3,72; Std. Dev. = 1,497)

5. How natural was the mechanism which controlled movement through the environment? (Mean = 3,32; Std. Dev. = 1,712)

6. How compelling was your sense of objects moving through space? (Mean = 4,68; Std. Dev. = 1,455)

7. How much did your experiences in the virtual environment seem consistent with your real world experiences? (Mean = 3,87; Std. Dev. = 1,408)

8. Were you able to anticipate what would happen next in response to the actions that you performed? (Mean = 4,2; Std. Dev. = 1,929)

9. How completely were you able to actively survey or search the environment using vision? (Mean = 5,73; Std. Dev. = 1,26)

10. How compelling was your sense of moving around inside the virtual environment? (Mean = 5,17; Std. Dev. = 1,428)

11. How closely were you able to examine objects? (Mean = 3,58; Std. Dev. = 1,51)

12. How well could you examine objects from multiple viewpoints? (Mean = 4,18; Std. Dev. = 1,308)

13. How involved were you in the virtual environment experience? (Mean = 4,53; Std. Dev. = 1,567)

14. How much delay did you experience between your actions and expected outcomes? (Mean = 1,98; Std. Dev. = 1,42)

15. How quickly did you adjust to the virtual environment experience? (Mean = 6,2; Std. Dev. = 1,205)

16. How proficient in moving and interacting with the virtual environment did you feel at the end of the experience? (Mean = 4,13; Std. Dev. = 1,641)

17. How much did the visual display quality interfere or distract you from performing assigned tasks or required activities? (Mean = 3,1; Std. Dev. = 1,548)

18. How much did the control devices interfere with the performance of assigned tasks or with other activities? (Mean = 2,22; Std. Dev. = 1,508)

19. How well could you concentrate on the assigned tasks or required activities rather than on the mechanisms used to perform those tasks or activities? (Mean = 5,18; Std. Dev. = 1,751)

H SSQ descriptives

Descriptives

Total Score

FoR
  Mean: 33,7240 (Std. Error: 7,08885)
  95% Confidence Interval for Mean: 18,5199 to 48,9281
  5% Trimmed Mean: 32,7433
  Median: 18,2200
  Variance: 753,777
  Std. Deviation: 27,45500
  Minimum: 3,00   Maximum: 82,10   Range: 79,10
  Interquartile Range: 34,44
  Skewness: ,748 (Std. Error: ,580)
  Kurtosis: -,770 (Std. Error: 1,121)

FoR,VP
  Mean: 23,2267 (Std. Error: 5,39480)
  95% Confidence Interval for Mean: 11,6560 to 34,7974
  5% Trimmed Mean: 21,8696
  Median: 16,2200
  Variance: 436,559
  Std. Deviation: 20,89399
  Minimum: 2,00   Maximum: 68,88   Range: 66,88
  Interquartile Range: 31,44
  Skewness: 1,126 (Std. Error: ,580)
  Kurtosis: ,415 (Std. Error: 1,121)

none
  Mean: 31,6453 (Std. Error: 6,37227)
  95% Confidence Interval for Mean: 17,9782 to 45,3125
  5% Trimmed Mean: 29,8370
  Median: 28,7000
  Variance: 609,088
  Std. Deviation: 24,67970
  Minimum: 2,00   Maximum: 93,84   Range: 91,84
  Interquartile Range: 27,70
  Skewness: 1,314 (Std. Error: ,580)
  Kurtosis: 1,700 (Std. Error: 1,121)

VP
  Mean: 25,5053 (Std. Error: 3,32588)
  95% Confidence Interval for Mean: 18,3720 to 32,6386
  5% Trimmed Mean: 24,9426
  Median: 22,2200
  Variance: 165,922
  Std. Deviation: 12,88108
  Minimum: 8,74   Maximum: 52,40   Range: 43,66
  Interquartile Range: 23,44
  Skewness: ,618 (Std. Error: ,580)
  Kurtosis: -,437 (Std. Error: 1,121)

Extreme Values (Total Score; rank: case number, participant no, value)

FoR, five highest:
  1: case 4, participant 22, 82,10
  2: case 12, participant 30, 82,10
  3: case 6, participant 24, 70,88
  4: case 10, participant 28, 47,92
  5: case 7, participant 25, 47,66
FoR, five lowest:
  1: case 14, participant 32, 3,00
  2: case 13, participant 31, 3,00
  3: case 15, participant 33, 10,48
  4: case 11, participant 29, 13,48
  5: case 9, participant 27, 16,22
FoR,VP, five highest:
  1: case 21, participant 9, 68,88
  2: case 29, participant 17, 61,14
  3: case 28, participant 16, 42,18
  4: case 16, participant 4, 38,18
  5: case 27, participant 15, 27,70
FoR,VP, five lowest:
  1: case 22, participant 10, 2,00
  2: case 18, participant 6, 2,00
  3: case 30, participant 18, 3,74
  4: case 19, participant 7, 6,74
  5: case 26, participant 14, 12,48
none, highest:
  1: case 35, participant 53, 93,84
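As a sanity check on the descriptives above: the 95% confidence interval bounds reported by SPSS can be reproduced from the mean and standard error alone. The sketch below is illustrative JavaScript (not part of the thesis tooling); the Student-t critical value for df = 14 (n = 15 per group) is hard-coded as an assumption rather than computed.

```javascript
// Reconstruct a 95% confidence interval for the mean: mean +/- t * SE.
// tCrit is approximately the two-sided 95% Student-t quantile for df = 14.
function confidenceInterval(mean, stdError, tCrit) {
  tCrit = tCrit || 2.1448;
  return {
    lower: mean - tCrit * stdError,
    upper: mean + tCrit * stdError
  };
}

// FoR group: Mean = 33,7240, Std. Error = 7,08885 (decimal commas in
// the table correspond to decimal points here).
var ci = confidenceInterval(33.7240, 7.08885);
console.log(ci.lower, ci.upper); // close to the reported 18,5199 and 48,9281
```

The same check works for the other three groups, which is a quick way to confirm the table was transcribed correctly.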

I Task Load Index (TLX)

An HTML-based questionnaire was used; a screenshot is shown below. The source code is attached.

Figure I.1: HTML based TLX questionnaire - screenshot


J Task Load Index - CSV file

The structure used in the CSV files (automatically downloaded by the TLX questionnaire), which allows for easy import into spreadsheets:

Participant No,Mental Demand,Physical Demand,Temporal Demand,Performance,Effort,Frustration,Overall
61,35,10,10,15,10,20,16.666

Javascript code used for downloading the file:

// Create a temporary <a> element pointing at a data URI and simulate
// a click on it to trigger a client-side download of the given text.
function download(filename, text) {
    var element = document.createElement('a');
    element.setAttribute('href',
        'data:text/plain;charset=utf-8,' + encodeURIComponent(text));
    element.setAttribute('download', filename);
    element.style.display = 'none';
    document.body.appendChild(element);
    element.click();
    document.body.removeChild(element);
}
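The Overall column appears to be the unweighted mean of the six sub-scales (raw TLX): for the sample row, (35 + 10 + 10 + 15 + 10 + 20) / 6 = 16,666... The sketch below is illustrative JavaScript; `parseTlxRow` is a hypothetical helper, not part of the attached source code.

```javascript
// Parse one data row of the TLX CSV and recompute the Overall column
// as the unweighted mean of the six sub-scale ratings.
function parseTlxRow(line) {
  var fields = line.split(',').map(Number);
  return {
    participant: fields[0],
    scales: fields.slice(1, 7), // Mental Demand ... Frustration
    overall: fields[7]
  };
}

var row = parseTlxRow('61,35,10,10,15,10,20,16.666');
var mean = row.scales.reduce(function (a, b) { return a + b; }, 0) /
           row.scales.length;
console.log(mean); // agrees with the stored Overall up to truncation
```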


K Promotion

Figure K.1: Notification is shown

Figure K.2: Participant (right) receiving voucher from Roman Lukš (left)


Figure K.3: Example of a funny VR picture used in the registration calendar

Hi there! This is a confirmation of your participation in “Testing virtual reality environments”.

Check the day & time of your session in the calendar.

Information for you
The experiment takes about 30-50 minutes and it is fairly easy. First, you watch a VR scene, then answer a few text questionnaires (in English).

You do not need to bring anything with you, just come to our lab :) It’s room A421 "Lab. HCI" - 4th floor of "A" building of Faculty of Informatics Masaryk University, Botanická 68a, 602 00 Brno.

Google maps: link
Building map: link
Room map: link

On Saturdays/holidays I will pick you up at the front entrance, as the building will be closed.


You need to be at least 18 years old to participate.

Our lab is super cozy, and we have air conditioning :-)

Participants will receive a gift :) a 20% discount voucher to a VR arena: VRena.cz

Please be on time.

If you wear contact lenses or glasses for reading, please come with contact lenses (it's better that way). Thank you.

In case you want to re-schedule/cancel the session send me an email. Thanks :)

My phone number

Best regards
Roman Lukš


L Used assets

The following assets were used during development.

∙ VR Samples
∙ Standard Assets
∙ Space ship cockpit
∙ Rigged Horse
∙ Prototype Materials Pack
∙ Green Forest
∙ Yughues Free Pillars & Columns
∙ Animated Horse
∙ Town Creator Kit LITE
∙ Rocky Meadow Assets
∙ Butterfly with Animations
∙ Traditional water well
∙ Gravel 01 Game-Ready
∙ Fantasy Defensive Structures
