<<

MASARYK UNIVERSITY

FACULTY OF INFORMATICS

Examining Optical Illusions in VR

MASTER’S THESIS

Roman Gluszny

Brno, 2018

Declaration I hereby declare that this thesis is a presentation of my original research work and that no other sources were used other than what is cited. I furthermore declare that wherever contributions of others are involved, this contribution is indicated, clearly acknowledged and due reference is given to the author and source.

Brno 9.12. 2018

Supervisor: Fotis Liarokapis, Ph.D.


Acknowledgment I want to express my big thanks to my supervisor Fotis Liarokapis, Ph.D. for his professional guidance and support on this work. Another great thanks goes to RNDr. Filip Škola for teaching me the basics of electroencephalography recording and evaluation. Finally, I thank my family for their enduring support and all my friends who helped by participating in the experiment.


Abstract: The aim of this thesis was to examine optical illusions in three-dimensional space and the way they are perceived by the human brain. First, a Unity application with several scenes was created, after which a user study with 30 healthy participants took place, using an EEG device to collect data from brain waves. The recorded brain activity was subsequently analyzed to find possible correlations in perception between different media, namely a computer screen and virtual reality, and to determine the scope and nature of changes in brain wave frequencies between different intervals while perceiving the illusions.

Keywords: optical illusions, virtual reality, electroencephalography, oscillatory patterns, brain-computer interfaces


Table of Contents

List of Figures ...... viii

1 Introduction ...... 1

2 Optical Illusions ...... 3

2.1 Introduction to visual illusions ...... 3

2.1.1 Physical illusions ...... 3

2.1.2 Physiological illusions ...... 3

2.1.3 Cognitive illusions ...... 3

2.2 The Eye ...... 4

2.2.1 Structure ...... 5

2.2.2 Vision ...... 5

2.3 3D optical illusions ...... 6

2.3.1. Ambiguous cylinders ...... 6

2.3.2 Ames Room ...... 7

2.3.3 Magnetic slopes ...... 8

2.3.4 Following eyes ...... 9

2.3.5 Rubin’s vase ambiguous figure ...... 9

2.3.6 Penrose Triangle ...... 10

2.3.7 Ambiguous sculptures ...... 11

3 Brain-Computer Interfaces ...... 12

3.1 Significance of BCI ...... 12

3.2 BCI Classification...... 13

3.3 EEG ...... 13

3.3.1 Noise and Artifacts ...... 15

3.4 Brain Waves ...... 16

3.4.1 Alpha waves ...... 16

3.4.2 Beta waves ...... 16


3.4.3 Gamma waves ...... 16

3.4.4 Delta waves ...... 16

3.4.5 Theta waves ...... 17

3.6 Enobio ...... 17

4 Virtual Reality ...... 19

4.1 Introduction to VR ...... 19

4.2 Brief history of VR ...... 20

4.3 Visualization in VR ...... 22

4.4 HTC Vive ...... 23

4.5 Unity ...... 23

5 Implementation ...... 25

5.1 Analysis and Design ...... 25

5.2 Scene Implementation ...... 26

5.2.1 Scene 1: Ambiguous Cylinders ...... 26

5.2.2 Scene 2: Ames Room ...... 26

5.2.3 Scene 3: Following Eyes ...... 27

5.2.4 Scene 4: Magnetic Slopes ...... 27

5.2.5 Scene 5: Ambiguous Statues ...... 27

5.2.6 Scene 6: Impossible Objects ...... 28

5.2.7 Scene 7: Ambiguous Shapes ...... 28

5.3 Used Assets...... 28

5.4 Technical Solution ...... 28

5.4.1 VR build vs FPS build ...... 30

6 User Study ...... 31

6.1 Testing Stages ...... 31

6.1.1 Setup ...... 31

6.1.2 EEG recording ...... 32


6.1.3 Receiving feedback ...... 32

7 Data Analysis Results ...... 34

7.1 Demographics ...... 34

7.2 Qualitative Feedback ...... 34

7.3 NASA TLX...... 35

7.4 Scene rating ...... 36

7.5 EEG data ...... 37

7.5.1 Expected results ...... 37

7.5.2 Preprocessing ...... 37

7.5.3 Ambiguous cylinders ...... 38

7.5.4 Ames Room ...... 39

7.5.5 Following Eyes ...... 40

7.5.6 Magnetic slopes ...... 40

7.5.7 Ambiguous statues ...... 41

7.5.8 Impossible objects ...... 42

7.5.9. Ambiguous Shapes ...... 43

7.5.10 Correlations ...... 44

8 Conclusion ...... 46

9 References ...... 48

A Demographics Questionnaire Results ...... 57

B NASA TLX Results ...... 58

C Attachments ...... 59


List of Figures

Figure 2.1: Kanizsa's triangle [51] ...... 4
Figure 2.2: The eye [52] ...... 5
Figure 2.3: Ambiguous Cylinders [61] ...... 7
Figure 2.4: Ames Room [55] ...... 8
Figure 2.5: Magnetic Slopes [62] ...... 8
Figure 2.6: Following Eyes [63] ...... 9
Figure 2.7: Rubin’s Vase [64] ...... 10
Figure 2.8: Penrose Triangle [56] ...... 10
Figure 2.9: Yes or No [65] ...... 11
Figure 3.1: 10-20 system [53] ...... 15
Figure 3.2: Brain waves [59] ...... 17
Figure 3.3: Enobio [58] ...... 18
Figure 4.1: 3I of virtual reality [67] ...... 19
Figure 4.2: Sensorama [68] ...... 20
Figure 4.3: GROPE [69] ...... 21
Figure 4.4: HTC Vive [54] ...... 23
Figure 5.1: Scene 3 – Following Eyes ...... 27
Figure 5.2: Gallery reference image [57] ...... 29
Figure 6.1: Participant during EEG recording ...... 32
Figure 7.1: Status and qualification ...... 34
Figure 7.2: Cognitive workload, VR ...... 36
Figure 7.3: Cognitive workload, LCD ...... 36
Figure 7.4: Scene rating, VR ...... 37
Figure 7.5: Scene rating, LCD ...... 37
Figure 7.6: Ames Room – Comparing increase of alpha waves ...... 40
Figure 7.7: Comparison of alpha channels, VR ...... 42
Figure 7.8: ANOVA – Comparing values between channels on frequencies 8-12Hz, VR ...... 43
Figure 7.9: Correlations between channels and rating on alpha frequencies ...... 45


1 Introduction

The human eye is an organ far from being flawless [73]. However, rather than our eyes being imperfect, it is our brain we should blame, because it can be confused easily, or it takes incomplete data and presents it to us in an altered form based on our experience. In other words, the world as we observe it depends on our interpretation [74]. Some may say that optical illusions are suitable just for amusement; however, this is not true. For example, illusions can be used to decide sport events or to understand some clinical conditions such as psychoses [72].

Virtual reality is an artificially created image presenting a three-dimensional and seemingly real environment [75]. Nevertheless, being immersed in virtual worlds is not just entertaining; it can be used for research as well. Not only can the same scientific fields be targeted as with optical illusions, but VR is also an ideal tool for such exploration [76]. Studying optical illusions in virtual reality using a proper neuroimaging method has not been explored much so far, although it could yield interesting results. In order to fully utilize virtual reality, only three-dimensional illusions were the subject of this study.

The aim of this thesis was to investigate three-dimensional optical illusions in two media: the HTC Vive virtual reality headset and an ordinary LCD monitor. For this reason, an application with seven scenes presenting optical illusions such as the Ames Room or the Penrose triangle was designed and implemented in the Unity game engine. A group of 30 healthy volunteers participating in the experiment was divided into two groups of 15. There was one group for each medium, and each volunteer participated with only one. The study was conducted using the Enobio EEG device to collect brain activity from the users during the testing session, while additional feedback was collected via questionnaires for cognitive workload measurement and scene rating. In the end, the collected data was preprocessed in MATLAB and evaluated to examine brain activity while observing visual illusions and the differences between the media used in the experiment.

This thesis consists of two parts. While the first three chapters describe the background of the covered topics, the second half of this work deals with the practical side of the experiment, from the application design to the results of the study. To be precise,

chapter two is dedicated to optical illusions, the science behind them and human vision. The third chapter covers brain-computer interfaces (BCIs) and electroencephalography, the specific non-invasive brain imaging method used in the experiment. Chapter four describes virtual reality and its brief history. In the fifth chapter, the design and implementation of the application created for testing purposes are described. Chapter six is dedicated to the methodology of the experiment, while chapter seven presents the data analysis and results of the experiment. The final chapter concludes the work and opens a discussion about possible future work.


2 Optical Illusions

2.1 Introduction to visual illusions

Illusions can occur with all human senses, for example, the McGurk effect as an auditory illusion [60] [47]; however, optical illusions are the most common, as sight is the sense we rely on the most. The Psychology Dictionary defines an optical illusion, also known as a visual illusion, as a “misinterpretation of exterior visual stimulants which takes place as an outcome of either a pathological condition or a misperception of the stimulants” [1]. In other words, they are deceptions usually caused by a specific arrangement of color, light, size or shape. The stimuli obtained by our eyes are processed and misinterpreted in the brain. As a result, we see altered objects [2], or we can see something that is absent from the original image. Visual illusions are fascinating in their deception, attracting the interest of physicists, psychologists, mathematicians and artists. The British psychologist Richard L. Gregory classifies optical illusions as follows [6]:

2.1.1 Physical illusions Physical or literal optical illusions are the simplest kind of illusions. They create images that differ from the objects being perceived [4]. They are called “physical” because they occur on account of physical properties before the light hits the eye [15]. When the brain evaluates the perceived information, it creates details on its own to fill seeming gaps, even though such details do not exist. As the mind focuses on different parts of the image, it can suddenly perceive multiple images in one [5].

2.1.2 Physiological illusions A physiological illusion is an effect of excessive stimulation by a visual stimulus such as color, movement or brightness [10]. A common result of such stimulation is, for example, an afterimage – a sense of an image without the original stimulus being present, which is caused by fatigue of the visual channels. It is believed that this misbehavior of the visual system is connected to dedicated neural paths in the cortex, whose repetitive stimulation leads to the illusion [15].

2.1.3 Cognitive illusions Cognitive illusions arise from the perceiver’s knowledge or assumptions, resulting in “unconscious inference”, a term introduced by the German physicist Hermann von Helmholtz [4].


It is the idea that our mind makes assumptions about “what could be” while being unaware of this process [7]. The most significant difference is that cognitive illusions, unlike physical or physiological illusions, are under some degree of cognitive control despite being a product of the perceptual system. Cognitive optical illusions can be further divided into:

● Geometrical-optical illusions – These illusions are distortions in size, shape or color. Distortion illusions are typically three-dimensional images represented in two-dimensional space [6].
● Ambiguous illusions – Images or objects that trigger a perceptual switch between possible alternative representations. Rubin’s vase is an example of an ambiguous illusion.
● Paradox illusions – Paradoxes are images or objects that would be impossible to construct in the real world [6]. An example of a paradox illusion is the Penrose Triangle.
● Fiction illusions – Fiction illusions are perceptions of figures despite no stimulus being present [4]. An example of a fiction illusion is Kanizsa's triangle [6].

Figure 2.1: Kanizsa's triangle [51]

2.2 The Eye

The human eye represents the connection between what we perceive and actual reality, thus being an important part of optical illusion theory. Knowing how our visual system works allows us to explain why we experience visual illusions the way we do. It has been claimed that the eye is a defective optical instrument, yet it is the best eye we can have [2].


2.2.1 Structure The eye is an organ of the visual system reacting to light and pressure [45]. While the eyes of some other species are rather minimalistic, human eyes allow us to perceive objects, colors and depth. The eye is of approximately spherical shape and consists of two parts: the anterior segment and the posterior segment. The former contains the cornea, iris and lens and is filled with aqueous humor. The posterior segment is filled with vitreous humor and consists of the sclera on the outside and the retina – an extension of the central nervous system [2]. The retina contains rods and cones, the primary photoreceptor cells receiving light signals [17]. At the back of the eye, the retina connects to the brain via the optic nerve, which transmits the image. The place where the optic nerve is connected is called the blind spot, and objects whose reflection hits it are basically invisible [2]. The blind spot plays a vital role in some optical illusions. As the brain fills gaps when it has no information, the resulting image corresponds to what our brain believes we should be seeing.

Figure 2.2: The eye [52]

2.2.2 Vision The eye functions almost like a camera. Light reflects off an object and enters the eye. Controlling the amount of light entering is the function of the iris. By contracting the muscles of the eye, it controls the size of the pupil and the focal length. The light passes through the cornea and then into the lens [18]. In the end, the light reaches a point on the retina, where it hits the photoreceptors. In these nerve endings, the light is converted into electrochemical signals and transported to the brain. Both optic nerves partially cross at a point called the optic chiasm [19].


The signal itself enters the brain via the thalamus, which separates the information into two parts. One contains information about movement, while the second carries information about color and detail [18]. The message then moves to the visual cortex, where it is reconstructed and combined, resulting in binocular vision. Binocular vision (the use of two eyes) improves contrast and visual acuity [27].

The angle in which we can see is called the field of view, and it is an important factor in both virtual reality and optical illusions. Human vision allows a 150° horizontal and 120° vertical field of view for each eye, resulting in approximately 180° horizontal and 120° vertical for both eyes combined [20]. The overlapping area of about 120° registers the same image and enables stereopsis – the perception of depth.

2.3 3D optical illusions

This subchapter describes the most important optical illusions in the application used for this work’s experiment and explains how these illusions work. All listed illusions can be reproduced in three-dimensional space, as examining two-dimensional geometrical-optical illusions in virtual reality should not differ from examining them in the real world.

2.3.1 Ambiguous cylinders The creator of this illusion is Dr. Kokichi Sugihara, a Japanese professor at Meiji University [9]. His figure called the ambiguous cylinder is a 3D model demonstrating a phenomenon called “anomalous mirror symmetry” [8]. It would be physically impossible to achieve such a phenomenon in the real world; however, by utilizing the geometrical properties of these anomalous models, we can at least imitate said symmetry. As a result, an object appearing to be a cylinder seems to have a rectangular shape in the mirror, but in fact, it is neither a cylinder nor a cube.


Figure 2.3: Ambiguous Cylinders [61]

2.3.2 Ames room In this illusion, the observer can see two persons or objects in the opposite corners of a room. Despite both being the same height, they are perceived as being of different sizes. The trick lies in the trapezoidal shape of the room, which seems to be square-shaped from the spectator’s point of view [3]; this impression is further supported by the grid on the floor. This means that the seemingly smaller object is situated much further away than the other one, although we are led to believe they are at the same depth of field. This effect was utilized, for example, in the movie The Lord of the Rings to make human characters look taller than the little hobbits [3].


Figure 2.4: Ames Room [55]

2.3.3 Magnetic slopes Antigravity slopes are a series of platforms created by Dr. Kokichi Sugihara performing so-called “impossible motion”. A spectator can see balls moving uphill against the laws of gravity [11]. In fact, all slopes are tilted downwards, but they have different lengths and they are supported by columns which seem to be parallel only from a certain point of view. As a trick of perspective, the slopes look straight only when viewed from this specific point.

Figure 2.5: Magnetic Slopes [62]


2.3.4 Following eyes In this illusion, the spectator is presented with a simple model of an animal with eyes looking towards him or her. The interesting thing is that no matter how the viewer moves, the eyes seem to keep following them. The trick lies in our brain trying to interpret the animal’s face as convex; however, in this model the face is concave. The sides of the head are curved from the center towards the spectator, making them believe they are being followed when cues from the brain get mixed up [12].

Figure 2.6: Following Eyes [63]

2.3.5 Rubin’s vase ambiguous figure Rubin’s vase illusion was described by the Danish psychologist Edgar John Rubin. It is believed that the retinal image while observing this illusion remains constant, even though the observer can see one or the other image at any given time [13]. Rubin’s vase is one of the few illusions that work in both 2D and 3D.


Figure 2.7: Rubin’s Vase [64]

2.3.6 Penrose triangle The Penrose Triangle, also known as the ‘tribar’, is a typical example of a cognitive optical illusion [6] and was created by the Swedish artist Oscar Reutersvärd [14]. It is considered an impossible object as it breaks the rules of Euclidean geometry. Starting from one of the vertices, one side seems to point towards the viewer with the other one going in the opposite direction, yet they again meet at the same point. The 3D interpretation of the Penrose triangle consists of three bars bent at right angles. When viewed from the proper position, the arms seemingly connect, forming a triangle. Each part of the model seems to represent a rectangular three-dimensional object; however, as the lines of the figure are followed, a change in interpretation is required [40].

Figure 2.8: Penrose Triangle [56]


2.3.7 Ambiguous sculptures The last illusion on the list is a figure displaying the word “Yes”. However, as we rotate the model, the word slowly turns into its opposite, “No”. This model is a work of the Swiss artist Markus Raetz [16]. The key to understanding such pieces is perspective: from a specific side, only one interpretation of the message is visible. This kind of illusion is very popular among artists, as not only different words but also shapes can be displayed.

Figure 2.9: Yes or No illusion [65]


3 Brain-Computer Interfaces

3.1 Significance of BCI

Alongside technological development, users have yearned to push communication with computers to its limits. For example, an experienced user can interact with their device much faster using the command line than via a graphical representation of the virtual environment [44]. But what if there was an even faster way to communicate: direct communication between mind and machine?

As a subfield of human-computer interaction, the brain-computer interface is a technology for direct communication with electronic devices via signals obtained by reading brain activity. This form of interface is a great choice for physically disabled users, as their brain functions are in most cases intact, thus allowing them to use electronic devices or prostheses despite limited mobility [30]. The brain contains nerve cells called neurons. These units of the nervous system communicate with other neurons either by sending electrical signals or by chemicals known as neurotransmitters [29] [31]. Neuroimaging techniques such as EEG can measure these signals and thus monitor brain activity. Nevertheless, designing a good BCI is a considerably difficult task, as it requires knowledge of different fields such as computer science, neuroscience or signal processing [30].

The first successful attempt to read brain signals from the human scalp was performed by Hans Berger in the 1920s [28]. However, it took a few more decades before the opportunities of this approach could be utilized. In 1964, Dr. William Grey Walter connected electrodes to the motor area of a patient’s brain during surgery and asked the patient to press a button in order to control a slide projector. After the system was connected to the projector, the controls were based on the patient’s brain activity and his intention to press the button before any muscle activity was induced [28]. Nine years later, the term ‘brain-computer interface’ was coined by Jacques J. Vidal, a researcher from the University of California, Los Angeles, who described it as “utilizing the brain signals in man-computer dialog” [32]. Finally, the first high-quality brain signal was obtained in 1998 when Philip Kennedy implanted the first brain-computer interface beneath the human scalp [33].


3.2 BCI Classification

We consider two main brain imaging approaches, differing in the placement of the sensors relative to the scalp: invasive BCI and non-invasive BCI [29]. Invasive techniques use sensors implanted directly onto the surface of the cortex. For this reason, the invasive approach requires a surgical procedure to open the subject’s skull, which naturally entails health risks and ethical issues. Other possible problems are the body not adapting to the implants and the relatively high cost [28]. As a result, this approach is not very convenient for healthy users, whose quality of life does not depend on a BCI. On the other hand, invasive BCIs provide very good signal quality and high spatial resolution, and they are always available [33]. An example of an invasive BCI is the electrocorticogram (ECoG) [29].

Unlike the invasive BCI approach, where the electrodes are placed directly on the brain tissue, non-invasive BCIs measure brain activity with sensors placed on top of the head. The most common non-invasive method is electroencephalography (EEG), measuring electrical potentials from the brain via electrodes placed on top of the scalp [29]; however, there are other types of sensors suitable for non-invasive BCI. For example, magnetoencephalography is used for measuring magnetic fields associated with brain activity [28], while functional near-infrared spectroscopy (fNIRS) utilizes near-infrared light projected into the brain and reflected back, revealing information about brain oxygenation [29]. Functional magnetic resonance imaging (fMRI) works on a similar basis [29].

Due to their different approaches, the technologies also have different properties. For example, EEG is lightweight, provides good temporal resolution and is both relatively cheap and easy to use. The disadvantages of EEG are poor spatial resolution and a limited frequency range [28]. In comparison, fMRI has high spatial resolution but low temporal resolution [33].

3.3 EEG

As explained in the previous subchapter, EEG is a non-invasive diagnostic method used to measure brain activity in the form of weak electrical signals also known as brain waves. These signals originate in neurons oriented radially to the scalp [31] and have an electrical voltage between 5 and 100 µV [29]. The difference in electrical potential is acquired from the scalp surface via metal electrodes mounted in a cap, typically made of neoprene [35]. A special abrasive electrode gel is applied to the contact area of the electrodes to increase the quality and strength of the signal, hence the term ‘wet electrodes’. After the end of the session, each electrode must be

properly cleaned to prevent corrosion and extend its usability. Even though dry electrodes, which do not require the application of any conductive paste, exist, they are not widely used yet [28].

The brain is protected by the skull, which is covered by the scalp, with several other layers in between. For this reason, the application of multiple electrodes is necessary for reliable results. Moreover, the signal has to be amplified in order to be displayed on a satisfactory scale [36]. Another reason to use more sensors is the fact that specific brain patterns originate in different brain areas. In order to get consistent results from EEG recordings, the International 10-20 System for electrode placement was introduced [28]. As the term suggests, the distances between individual electrodes are 10 percent or 20 percent intervals of the nasion-inion distance (see Figure 3.1), a subdivision of the scalp based on anatomical landmarks of the skull [31]. Each position on the cap is labeled with one of the following letters reflecting the corresponding brain region [35]:

- Fp – frontopolar
- F – frontal
- T – temporal
- O – occipital
- C – central
- P – parietal

The second part of the label is either a number (odd for the left hemisphere and even for the right hemisphere) or the character ‘z’, indicating that the electrode is intended for midline placement [35]. During the referential mode of recording, one or two additional mastoid reference electrodes are used, typically connected to a clip placed on an earlobe or the nose [31]. Recording without reference electrodes is also possible; in this case, a common average reference can be used [36].


Figure 3.1: 10-20 system [53]

3.3.1 Noise and Artifacts During the recording, it is important to take into consideration the possibility of interference, which can originate both from outside factors and from the tested subject. Noise in the recorded data is typically caused by other electronic devices operating at 50 or 60 Hz in the vicinity of the EEG system [37]. As a result, it is advised to conduct EEG recordings in shielded rooms and to use amplifiers with a notch filter to mitigate the mains noise [36].

The most notable artifacts originating from the subject are generated by eye blinking, as the movement of the eyelid results in charge separation creating a potential of higher amplitude. This can be displayed as a positive peak and is observable especially in the frontal region; however, it is noticeable in other channels as well [35]. The artifacts generated by eye movements are created on a similar basis, although they involve both the cornea and the retina [37]. Muscle contractions then generate muscle artifacts (EMG signals), most evident in the frontopolar

and temporal derivations [35]. Another source of artifacts might be the EEG system itself, or more precisely, touching or moving its cables [46]. In order to minimize these artifacts, it is advised to explain this issue to the subject prior to the recording.

3.4 Brain Waves

The oscillatory brain activity observable during EEG recording is called brain waves. These waves are generated by neurons communicating with other cells in the form of electrical pulses [38] and can be characterized by different frequencies, phases or amplitudes. Based on specific frequency ranges, we classify the following brain waves:

3.4.1 Alpha waves Alpha waves, with a frequency within the range of 8-12 Hz, are located at occipital and posterior regions [36] and are often associated with a relaxed state, meditation or keeping the eyes closed [33]. A subclass of alpha waves originating in sensorimotor areas is called mu activity or mu rhythms. These rhythms grow weaker as the subject performs some movement or imagines it [28].

3.4.2 Beta waves Beta rhythms are brain oscillations ranging from 13 to 30 Hz, observable at the parietal and frontal lobes [31]. Increased beta activity indicates alertness and concentration [31] and is also associated with focus [36]. Beta waves can be additionally divided into Low Beta, Beta and Higher Beta waves [38].

3.4.3 Gamma waves Gamma wave frequencies range from 30 to 100 Hz and above, divided into low-frequency gamma (up to 70 Hz) and high-frequency gamma (over 75 Hz) [28]. They are the fastest brain waves with the smallest amplitude [36]. Observable in multiple parts of the brain during both sleep and wakefulness, they are believed to be correlated with higher brain functions such as awareness, memory and perception [39].

3.4.4 Delta waves Brain waves with a frequency of 0.5-4 Hz are called delta waves and are common in children up to the age of 10 [35]. They also indicate drowsiness and are typical of deep sleep [36].


3.4.5 Theta waves Theta waves lie within the range of 4-8 Hz. They are associated with states like deep meditation, drowsiness and light sleep [36]. The observation of theta waves can indicate that the REM (rapid eye movement) sleep phase is present [34].

Figure 3.2: Brain waves [59]

3.6 Enobio

Enobio 8 is a wireless sensor system for EEG recording developed by the Neuroelectrics company [41]. The core of the Enobio system is the Neuroelectrics Control Box (Necbox) [42]. This device is attached to the neoprene head cap by Velcro, while communication takes place via Bluetooth or Wi-Fi. Three different models of the device available on the market provide support for up to 8, 20 or 32 channels and an earlobe reference electrode [43]. The main application of Enobio is EEG, but it can also be used for EOG, ECG and EMG recording [42].


Figure 3.3: Enobio [58]


4 Virtual Reality

4.1 Introduction to VR

VR refers to technology creating virtual, that is artificial, environments. This computer-generated environment can be either intended to resemble the real world or to introduce fictional worlds designed from scratch. It is a real-time simulation in which multiple senses can be stimulated [20]. As the sense of presence [22] in three-dimensional space is an important element of VR, the technology often uses head-mounted displays or displays in specially constructed rooms to achieve the effect of binocular vision. According to Burdea and Coiffet, the basics of virtual reality are the 3 I’s: immersion, interaction and imagination [20].

Figure 4.1: 3I of virtual reality [67]

Immersion equals the feeling of presence. The user is immersed only if they feel as if they really were in a different place. The achieved immersion is constrained by technical limitations. Apart from convincing visuals and a satisfactory resolution, the stimulation of other senses (for example by using haptic technology) increases the sense of presence [21]. Presence is another very important term linked to virtual reality. It is described as the experience of the virtual environment; the sense of being inside [23]. This is further extended by the term telepresence. In VR, the virtual environment is rarely the same environment in which we are physically present. Telepresence is described as presence mediated through a specific medium [20] [23].

By interaction, we mean the ability to act upon the environment, objects and inhabitants, to change the flow of events or to have an overall impact on the virtual world,

while getting a real-time response [20]. In virtual reality, the user should not be just a passive viewer.

Last but not least, imagination refers to the user’s willingness to buy into the experience. While immersion is linked with an emotional or mental state, imagination is based on experience [22].

4.2 Brief history of VR

The first step towards virtual reality as we know it today was made in the 1950s by Morton Heilig. His Sensorama was the first device to provide a stereoscopic 3D image as well as to stimulate other senses such as smell or touch [20] [22]. With the recent expansion of 4D cinemas, some may say Heilig was a pioneer of the movie industry. However, the idea of the first virtual reality environment involving the essential element of interaction using haptic technology was introduced in 1965 by Ivan Sutherland [24]. Three years later, he constructed his ultimate display called the Sword of Damocles, consisting of cathode ray tubes displaying interactive computer-generated graphics [20] [26], which are an integral part of modern VR technologies. The headset had considerable weight, so it was supported by a mechanical arm, which was also designed to track the user’s direction and position. His work was continued by Frederick Brooks at the University of North Carolina, whose GROPE project presented the first force-feedback system [22] [24].

Figure 4.2: Sensorama [68]


Figure 4.3: GROPE [69]

In 1979, Eric Howlett constructed the Large Expanse Extra Perspective (LEEP) system, an HMD with a rather wide field of view. The very same system was later used by NASA in the development of their own HMD known as the VIVED project, which was followed by VIEW (Virtual Interface Environment Workstation) [22]. This was the beginning of the commercial development of VR technology, as until this moment, the technology had been used primarily as a scientific tool or in the military for flight simulations. The VPL company released new devices such as the DataGlove or the EyePhone, which received considerable popularity [24], although the technology was still very expensive at that time and only the largest and wealthiest facilities could afford it.

Finally, the Cave Automatic Virtual Environment (CAVE) was presented in 1992 at a computer graphics conference in Chicago as an alternative to VR headsets [22]. The CAVE was designed as a tool for scientific visualization [25]. It can be described as a room consisting of several walls covered by rear-projection screens. The most significant advantage of the CAVE system is its high resolution and large angle of view. On the other hand, the price is significantly higher when compared to HMDs. Furthermore, serious issues regarding interaction are still present in CAVE systems [70].

Thanks to technological advances, VR technology became attractive again in the 21st century, as computers are getting smaller and more powerful at a lower price. The Gartner hype cycle suggests that, as is typical for new technologies, the high expectations placed on a product are not always met. VR technology, however, survived the phase of disillusionment and is slowly entering a state where it is accessible and convenient to use for a wide range of users. This growth culminated in 2016, also regarded as “the year of virtual

reality” [26]. These days, many companies such as HTC, Oculus VR or Samsung develop their headsets at moderately affordable prices, trying to capture the market. Even smartphones, accessible to most of the public, can be turned into a simple VR device in combination with a simple casing, thus making the technology suitable not only for training and research but also for entertainment.

4.3 Visualization in VR

The sense of presence makes VR technology superior to standard computer graphics, however, at a higher cost in resources. Because sight is the sense contributing most to our perception, visual information is the most important aspect of the virtual world and is thus expected to match the capabilities of our visual system [24]. However, present technology is still not capable of such performance. The use of VR also carries several problems such as cybersickness or motion sickness. Of course, all advantages and disadvantages are linked with individual VR technologies. Despite HMDs being the most used approach, we distinguish the following VR displays [22]:

- Stationary displays
  o Fishtank VR
  o Projection VR
- Head-based displays
  o Occlusive HMDs
  o Non-occlusive HMDs
- Hand-based displays

In the case of a stationary display, the user is fixed in one place, while a head-based display allows at least boundary-limited freedom of movement. This movement might be further physically limited by the length of the cables and the size of the real-world environment. HMDs are usually cheaper than stationary displays, but on the other hand, they have additional limitations such as field of view or resolution. For the field of view in virtual reality headsets, the same rules apply as described in chapter two. Fish tank or monitor-based display is a virtual reality technology utilizing a standard monitor and possibly 3D stereoscopic glasses [24], making it the cheapest and simplest VR display at the cost of immersion [22]. Just like fish tanks, projection-based displays are also a stationary technology.


4.4 HTC Vive

HTC Vive is a virtual reality system developed by HTC and Valve Corporation and released in 2016. The headset has two OLED panels with a resolution of 1080x1200 pixels per eye, resulting in a combined resolution of 2160x1200 pixels, a 110-degree field of view and a 90 Hz refresh rate. For tracking, two base stations are used, promising 360-degree tracking coverage, and the headset is connected to the computer via HDMI, USB 2.0 or 3.0 and a 3.5mm audio jack. The chaperone system allows the player to keep track of the real world’s boundaries, while the front-facing camera can display the surroundings directly in the glasses. As of October 2018, the price of the set is about five hundred dollars, including two wireless controllers with multiple input methods such as a grip button, trigger buttons or a touchpad. Since 2018, a new model called the Vive Pro is available, featuring a higher resolution and an additional camera. The system can be calibrated via the SteamVR application for a space of up to 15x15 feet or for stationary play.

Figure 4.4: HTC Vive [54]

4.5 Unity

Unity is a game engine created by Unity Technologies designed for the development of 2D and 3D games. It comes in three versions: Personal, Plus and Pro, thus being affordable for small studios or independent creators. It targets a wide range of platforms including Windows,


Android, PlayStation and many others. With the integration of suitable libraries such as OpenVR, it is equipped for the development of virtual reality applications. Another useful tool for the implementation of VR applications in Unity is the open-source library Virtual Reality Toolkit (VRTK), available in the Asset Store for free.


5 Implementation

The first step in the practical part of this work was to create a simple but immersive application presenting several optical illusions in three-dimensional space to the user in two media: a standard monitor display and a virtual reality headset. The following list describes the individual steps of the work in chronological order from the start to the beginning of user testing.

- Topic analysis
  o Study illusions for 3D space
  o Choose suitable illusions
  o Find or create corresponding models
  o Group similar illusions
- Application design
  o Design overall application workflow
  o Design scenes with individual illusions
  o Choose correct tools for controls implementation
- Implementation
  o Create animations
  o Place origin point for the player controller
  o Implement controls
  o Adjust details
  o Set markers for EEG measuring
- Pilot testing
  o Fix possible issues

5.1 Analysis and Design

Research was naturally the first step before the implementation itself. There are plenty of illusions displayed in two-dimensional space; however, we were interested only in illusions that challenge the user’s perception in terms of 3D space. Moreover, diversity was another important factor when choosing illusions for the experiment. For this reason, several scenarios were skipped. It was also revealed that, unlike common optical illusions, the majority of 3D optical illusions use perspective to confuse the viewer. One of the most critical issues proved to be the selection of models for the application. Many illusions required an advanced understanding of geometry and high modeling skills, so it was decided to use

ready-made models with an available license rather than creating them from scratch. Nevertheless, there were a few cases which required the creation of new models. For this purpose, the Cinema 4D modeling software was used. In the end, there were 16 models, which were grouped into seven scenes.

In most scenarios, each room contains one illusion at a time in the form of an exhibit, typically situated in the middle of the room. As mentioned above, perspective plays a vital role in the perception of these illusions. Each model is displayed for a fixed amount of time and, after it elapses, the model is rotated using Unity’s animation tools, thus revealing the trick, after which it becomes static again. After another delay, the model returns to its original position. When the animation ends, the object is replaced by the following model in the scene. The following section describes the design of the individual scenes. For the explanation of these optical illusions, see chapter two.
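The display cycle described above can be summarized by a minimal coroutine sketch. The component, field names and the 'Reveal' animation trigger are illustrative assumptions made for this example; they are not the actual scripts used in the project:

```csharp
using System.Collections;
using UnityEngine;

// Minimal sketch of the exhibit cycle: show a model, reveal the trick by
// playing its rotation animation, then swap in the next model.
public class ExhibitCycle : MonoBehaviour
{
    public GameObject[] models;     // models shown in this scene, in order
    public float displayTime = 10f; // time the model stays static
    public float revealTime = 5f;   // time the revealing animation is shown

    private IEnumerator Start()
    {
        foreach (GameObject model in models)
        {
            model.SetActive(true);

            // keep the model static so the illusion can be observed
            yield return new WaitForSeconds(displayTime);

            // play the rotation animation that reveals the trick
            Animator animator = model.GetComponent<Animator>();
            if (animator != null)
                animator.SetTrigger("Reveal");

            yield return new WaitForSeconds(revealTime);

            // hide the current model and continue with the next one
            model.SetActive(false);
        }
    }
}
```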

5.2 Scene Implementation

5.2.1 Scene 1: Ambiguous Cylinders A pair of ambiguous objects is situated in the middle of the room in front of a wall with a mirror reflecting the models. There are four different models displayed at ten-second intervals. Each model of the pair is rotated 180 degrees, so it casts the opposite reflection.

The most serious technical issue proved to be the mirror, which was solved with the Vive Stereo Rendering Toolkit library. The pilot testing session revealed that the models were located out of the user’s field of view in the VR version of the build, so a sign asking the user to look down was added.

5.2.2 Scene 2: Ames Room The Ames room is the only scene not situated in the prefabricated gallery room. For the objects to take one another’s place, a model of a vase was created, as it has similar proportions to the human figure commonly used for the demonstration of this illusion and was easy to animate. The model of the vases, based on old Greek amphorae, was created in Cinema 4D, while the room was created by a user with the nickname Apos and downloaded under a Creative Commons attribution license.


5.2.3 Scene 3: Following Eyes The model for the third scene was created in Cinema 4D using the same procedure as for a papercut model. A texture was applied to a single square polygon, then the quad was cut to match the template, unnecessary faces were deleted and the rest were bent around marked edges. For this scene, environment lighting was used, because shadows cast by the upper part of the head were ruining the immersion of the illusion.

Figure 5.1: Scene 3 – Following Eyes

5.2.4 Scene 4: Magnetic Slopes The illusion is animated as described at the beginning of this chapter; however, two more seconds were added at the beginning of the scene before enabling gravity for the balls. The scene itself contains two models. The second model was created in Cinema 4D based on a papercut scheme. In the scene, the shadows were disabled, as they implied that the columns were tilted. The pilot testing in this scene also revealed an issue with an improper camera angle, so the same sign as in the first scene was added. This seemed like the best solution to this issue for models that were partially out of the field of view, as rotating the camera would feel unnatural for a user looking forward.
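The delayed start of the balls can be sketched as follows; this is a minimal illustration assuming the balls are ordinary Rigidbody objects, with field names chosen for this example rather than taken from the project:

```csharp
using System.Collections;
using UnityEngine;

// Minimal sketch: keep the balls still for a short delay, then enable
// gravity so they start rolling and the "uphill" illusion can be observed.
public class DelayedGravity : MonoBehaviour
{
    public Rigidbody[] balls;       // the balls rolling along the slopes
    public float startDelay = 2f;   // extra seconds before gravity kicks in

    private IEnumerator Start()
    {
        // hold the balls in place while the viewer studies the slopes
        foreach (Rigidbody ball in balls)
            ball.useGravity = false;

        yield return new WaitForSeconds(startDelay);

        // release the balls
        foreach (Rigidbody ball in balls)
            ball.useGravity = true;
    }
}
```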

5.2.5 Scene 5: Ambiguous Statues The scene contains three models in total: a statue with text displaying either ‘Yes’ or ‘No’, one displaying ‘White’ or ‘Black’, and one displaying the insignias of the Rebel Alliance and the Galactic Empire from Star Wars.


5.2.6 Scene 6: Impossible Objects The scene contains three models in total: the Penrose triangle, Escher’s staircase and a model which looks like a cube when observed from the correct position. The room was created differently from the rest by adding a glass floor revealing another room situated below, where the staircase was placed.

5.2.7 Scene 7: Ambiguous Shapes This scene contains two models: a Rubin’s vase and a model shaped as both a rabbit and a duck. While the vase remains static, the rabbit figure is rotated after a brief delay to reveal the shape of a duck. For this scene, the lighting was disabled and only the model remained illuminated to increase the effect of the illusion.

5.3 Used Assets

The following list contains models, textures and other resources downloaded for the application from external sources:

-

5.4 Technical Solution

As the requirements for the experiment became clearer, the character of the application changed correspondingly. The application originally resembled more of a serious game, as it offered more freedom in terms of interaction. To achieve higher levels of immersion, the setting was stylized as an art gallery. There was even an additional scene modeled as the gallery’s main room, which served as a menu to enter the individual scenes. The user was allowed to move within the room’s boundaries with the help of the HTC Vive controllers or using a mouse and keyboard. However, as the EEG experiment requires the participant to remain still and requires precise coordination of time and events, the freedom of movement was limited, and the gallery main room was eventually removed.


Figure 5.2: Gallery reference image [57]

For the application, the default first-person controller from Unity’s standard assets was used. Nevertheless, as it was discovered that movement was not necessary, only the camera remained. The VR build of the application uses a prefab object from the SteamVR plugin, accessible from the Asset Store. As body movement was undesirable for the EEG measurement, position tracking could be disabled for the controller, and the starting position of the player was hardcoded to match the same properties as the first-person controller.

Synchronization with the EEG device was provided using the OVR plugin. The plugin uses predefined codes, but it allows defining custom codes as an unsigned long data type. To keep the evaluation as simple as possible, a code in the following format is sent to the data stream each time a new model is displayed:

[scene number][model index]

The end of the scene is signalized by code:

[scene number][9]
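A minimal sketch of how such codes could be composed is shown below; the class and method names are illustrative, and the call that actually writes the value to the data stream is left to the synchronization plugin:

```csharp
// Sketch of the stimulation-code format described above, assuming a
// single-digit model index (9 is reserved for the end-of-scene marker).
public static class SceneMarkers
{
    private const uint EndOfSceneIndex = 9;

    // e.g. scene 5, model 2 -> code 52, sent when the model is displayed
    public static ulong ModelCode(uint sceneNumber, uint modelIndex)
    {
        return sceneNumber * 10UL + modelIndex;
    }

    // e.g. scene 5 -> code 59, sent when the scene ends
    public static ulong EndOfSceneCode(uint sceneNumber)
    {
        return sceneNumber * 10UL + EndOfSceneIndex;
    }
}
```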


5.4.1 VR build vs FPS build In order to present the application both in virtual reality and on a regular screen, two different builds had to be created. However, to make sure that the only difference concerned the display, the application was developed as a single project.

The coordinates of the player are stored within an empty object called ‘origin’. This object has two children, one for each medium. The ‘FPSController’ contains a simple camera with the Y coordinate set to 1.55. Unlike in VR, in scenes where the model is situated on the ground, the camera was rotated to capture the illusion in the center of the screen. Originally, the FPS controller from the standard assets was used, but the concept was later simplified for the purposes of EEG recording, so no character or camera movement is possible.

The object for the VR controller contains assets from the SteamVR plugin. Originally, the character was controlled using the HTC Vive controllers for manipulating models and teleporting, but the controls were later removed for the same reasons as described above for the standard display build. As body movement would interfere with the EEG recording, position tracking could be disabled via the ‘SettingsScript’. Nevertheless, the direction control remained turned on, which (as revealed during the pilot testing session) resulted in issues with scenes where the models were located on the ground. To fix this inconvenience, a label instructing the player to look down was added, as it would feel unnatural to rotate the camera while the participant continued looking forward in the real world.

Both versions of the controller are present in both versions of the build, and for development purposes it remained possible to switch between the displays by pressing the space bar. As a result, the only difference between the builds is the starting boolean value indicating which camera should be active.
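A minimal sketch of this switching logic is given below, under the assumption that it simply toggles which child of the ‘origin’ object is active; the class and field names are illustrative and do not reproduce the project’s actual ‘SettingsScript’:

```csharp
using UnityEngine;

// Sketch: activate either the FPS camera or the VR rig, with a development
// shortcut to toggle between them using the space bar.
public class DisplaySwitcher : MonoBehaviour
{
    public GameObject fpsCamera;   // child of 'origin' used for the LCD build
    public GameObject vrRig;       // child of 'origin' used for the VR build
    public bool useVr;             // starting value differs between the builds

    private void Start()
    {
        Apply();
    }

    private void Update()
    {
        // development shortcut: toggle between the two displays with space
        if (Input.GetKeyDown(KeyCode.Space))
        {
            useVr = !useVr;
            Apply();
        }
    }

    private void Apply()
    {
        fpsCamera.SetActive(!useVr);
        vrRig.SetActive(useVr);
    }
}
```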


6 User Study

The experiment was conducted with 30 healthy volunteers, 20 males and 10 females, at the Faculty of Informatics at Masaryk University. The medium for each session was selected randomly; however, the number of users participating with VR was the same as with the LCD. In order to reduce the amount of noise caused by electronic devices at 50-60 Hz, the testing was held in a room containing only the equipment necessary for the experiment, such as the desktop computer, the VR headset and the Bluetooth receiver. Each session took about 30-45 minutes.

6.1 Testing Stages

6.1.1 Setup In the beginning, the topic of the experiment and its requirements were explained to each participant. They were then asked to read and fill in Section A and Section B of the questionnaire. These two sections also contain basic information about the experiment, describe the participant’s right to withdraw from the test at any point of the testing session and ask for some basic information about the participant such as age, gender or highest qualification achieved. By signing the consent form, the user agreed to participate in the study and to the usage of the collected data. Each participant was provided with a copy of Section A with their signature.

After signing the consent form, the participants were asked to put on the Enobio cap. For this experiment, 8 electrodes were used in the following order: P3, O1, O2, P4, F7, F3, F4 and F8. A special electroconductive gel was then applied under each electrode, onto the participant’s skin. The reference electrode was placed on an earlobe; however, in some cases the gel caused the clip to slide and fall, so it had to be placed on the upper part of the auricle. In some cases, it was observed that this solution also reduced potential noise generated by the VR headset. The next step was for the user to put on the VR headset, assuming they were selected to participate using virtual reality. After that, it was necessary to make sure all electrodes had a good signal and that the noise was minimal using the NIC software by Neuroelectrics. This application, however, provides only an indication of the quality of the signal, shown as a red, orange or green mark. The real-time image of the recording could then be visualized in the OpenVibe software, allowing the quality of the signal, including noise, to be checked manually. Bad quality of

the signal was resolved by reattaching the electrode or adding more gel, while a high amount of noise could typically be fixed by reattaching the reference electrode.

The last step before starting the application was to briefly explain how EEG recording works. The participants were instructed to remain calm and not to move unless necessary. This was especially important to emphasize for participants with the VR headset. They were also asked to reduce their blinking while observing the illusions, as blinking creates significant artifacts. On the other hand, it was explained that blinking between scenes and after models were switched would not cause any issues.

6.1.2 EEG recording Each recording was about 5 minutes long. As the participants were only passive spectators in this experiment, the length of this stage was fixed. While the recording was in progress, the participants were carefully observed, although no communication with them was possible.

Figure 6.1: Participant during EEG recording

6.1.3 Receiving feedback Despite the recorded brain signals being the main source of data for evaluation, the participants were asked to provide additional feedback in form of questionnaires.


The first of them was the NASA Task Load Index questionnaire. This questionnaire measures the individual workload index for the selected task with 6 questions rated on a 21-point scale. The questions are as follows:

- Mental Demand: How mentally demanding was the task?
- Physical Demand: How physically demanding was the task?
- Temporal Demand: How hurried or rushed was the pace of the task?
- Performance: How successful were you in accomplishing what you were asked to do?
- Effort: How hard did you have to work to accomplish your level of performance?
- Frustration: How insecure, discouraged, irritated, stressed, and annoyed were you?

Another question asked the participants to sort the scenes based on how believable or strong the impression of the illusion was. To keep the questionnaire simple, the preview included only the first model of each scene, despite the fact that some scenes had several illusions. In the end, the participants had the opportunity to express any additional comments regarding the technical aspects, the process of the experiment or the illusions.

After the testing session was over and there was no risk of influencing the results, the participants could ask their own questions. As some scenes did not reveal the "trick", an explanation of how these illusions work was provided. This concerned mostly the Ambiguous Cylinders, Ames Room and Following Eyes illusions. Some participants who showed interest in the experiment and provided their email address were promised the results as soon as the study was complete.


7 Data Analysis Results

7.1 Demographics

The objective was to get 30 recordings of good quality; however, during the analysis we discovered that in one case a channel had stopped recording mid-session. Another file proved to be corrupted for an unknown reason, as it was not possible to open it in MATLAB. As a result, 32 participants in total took part in the experiment, and the participants whose data were considered defective were excluded from the analysis.

There were 10 females and 20 males participating in the experiment. The average age of the participants was 24. When asked to what extent they use a computer in their daily activities, the participants most often answered ‘5’. Seven participants answered ‘4’, while three subjects replied with ‘3’. Only one participant answered with the mark ‘2’ out of ‘5’. For the current status and highest qualification achieved by the participants, see Figure 7.1.

Current Status      Qualification   Count
Student (BSc/BA)    high school     13
Student (MSc/MA)    high school     4
Student (MSc/MA)    BSc/BA          2
Student (PhD)       MSc/MA          1
Academic            MSc/MA          1
Academic            PhD             1
Technical           BSc/BA          1
Employed            high school     5
Employed            BSc/BA          2
Figure 7.1: Status and qualification

7.2 Qualitative Feedback

After the sessions, all participants were asked to provide additional feedback in the form of comments on an empty page. Most of the comments collected regarded participants’ feelings about the EEG or the technical aspects of the application.

The most common response was criticism of the first scene. As there was no explanation of the illusion, the participants felt confused. Some participants did not recognize that the more

distant cylinders were a mirror reflection. This issue was more frequent when observing the illusion on the LCD monitor.

Some participants claimed that the rendering of particular scenes was not perfect, which might have resulted in a weaker impression of the illusion, although participants who, when asked directly, said the illusions were convincing enough often provided no written feedback. The first model in scene 4 was reported as not being very successful in convincing the user that the slopes were heading upwards. Two participants also felt that the sides of the cube in scene 6 did not join exactly in one vertex. On the other hand, some participants responded that most of the scenes, especially the Penrose triangle, looked convincing and the effect of the illusion was strong. The negative comments were more frequent from the users participating in virtual reality. The participants watching the illusions on the monitor seemed to be more convinced, although they provided fewer comments on the paper sheet overall. An illusion some participants highlighted was scene 3: Following Eyes, where the effect of the illusion was reportedly strong. Another scene which got attention was the Ames Room, where the effect was reported to be convincing and the vases seemed to change sizes. One participant stated that he realized that the columns on the second model in scene 4 were not parallel before the model rotated. On the other hand, another subject reported that the illusion of balls moving uphill was very strong and satisfying.

Other comments were related to the VR headset. Some participants claimed that the resolution of the display was too low, that individual pixels were apparent and that the HMD was heavy.

Many participants noted, either in writing or orally, that the experiment was interesting and fun. One subject even appreciated that the approach felt professional and that it was nice to see the explanation of the illusions after the recording. Unfortunately, a few participants took the instruction to reduce the amount of blinking unnecessarily seriously, as their comments stated it was physically demanding to resist the urge to blink. According to their comments, a notification that the scene or the model was about to change would be welcome to allow the participant to relax a bit.

7.3 NASA TLX

On average, the lowest rating was given to the question “How insecure, discouraged, irritated, stressed and annoyed were you?”, which received -8.13. On the other hand, the highest mark was

35 provided regarding subjects’ performance with an average rating of -3.31. Many participants were initially confused by this question, as they felt there was no real task to follow. Such participants were more tentative to rate their performance as successful, while subjects who mentioned their effort not to blink possibly felt less successful. For some reason, we observed it was a little bit more demanding to participate using the LCD monitor. In general, it was revealed that subjects in VR were more prone to rate their performance as a failure. For more details about cognitive workload see Figure 7.2 and 7.3.

          Mental  Physical  Temporal  Perform.  Effort  Frustration
Average   -5.13   -6.6      -4.93     -0.93     -5.4    -8.27
Median    -7      -9        -6        -0.5      -7      -10
Mode      -8      -10       -10       -7        -10     -10
Min       -10     -10       -10       -10       -10     -10
Max       2       1         3         8         6       -3

Figure 7.2: Cognitive workload, VR

          Mental  Physical  Temporal  Perform.  Effort  Frustration
Average   -3.8    -5.87     -6.2      -5.53     -4.73   -8
Median    -5      -9        -7        -6        -7      -10
Mode      -7      -10       -8        -8        -10     -10
Min       -10     -10       -10       -10       -10     -10
Max       4       7         1         5         6       -2

Figure 7.3: Cognitive workload, LCD

7.4 Scene rating

After the session, each participant was asked to sort the viewed scenes based on the impression each of them made. The worst-rated scene was scene 1: Ambiguous Cylinders, with an average order of 5.6 for VR and 6.7 for LCD, due to its poor design, as the participants complained in subchapter 7.2. Another poorly evaluated scene in virtual reality was scene 4: Magnetic Slopes (5.33). Although some participants claimed the feel of the illusion was strong, nobody rated this scene as the best. The average mark for this scene on LCD was better (3.33); however, scene 3: Following Eyes was considered less impressive there. On the other hand, the best average score was given to scene 5. One participant specifically stated that even though it was very simple, it felt the most satisfying to watch. Other better-known illusions such as the Penrose Triangle or Rubin’s Vase were also considered impressive. For more details see Figure 7.4 and Figure 7.5.

          Scene 1  Scene 2  Scene 3  Scene 4  Scene 5  Scene 6  Scene 7
Average   5.6      4.4      3.93     5.33     2.13     3.06     3.53
Median    6        5        4        5        2        3        3
Mode      7        5        4        5        1        2        3
Min       1        1        1        3        1        1        1
Max       7        7        7        7        5        6        7

Figure 7.4: Scene rating, VR

          Scene 1  Scene 2  Scene 3  Scene 4  Scene 5  Scene 6  Scene 7
Average   6.07     4.47     5.27     3.33     3.33     2.4      3.33
Median    7        5        5        3        3        2        4
Mode      7        6        6        1        3        2        4
Min       3        1        2        1        1        1        1
Max       7        7        7        7        7        7        6

Figure 7.5: Scene rating, LCD

7.5 EEG data

7.5.1 Expected results

As mentioned in the earlier chapters, optical illusions have been a subject of interest for neuroscientists for some time, and several experiments focusing on brain activity during their perception have been conducted in the past. For example, a study of bistable images such as the Necker cube revealed that perceiving the illusion leads to the destruction of alpha and gamma patterns [50]. On the other hand, a positive reversal might occur primarily on beta frequencies in parietal areas and on gamma frequencies measured by frontal electrodes [51]. A study conducted by H. Zhang and Z. Tang discovered negative peaks on the Pz and POz channels [71]. These results suggest how the response might change over time; however, the size of the change might depend on the type of illusion and the selected medium.

7.5.2 Preprocessing

The gathered data was processed using the EEGLAB toolbox for MATLAB and evaluated in the SPSS Statistics software. The first step was to import the recorded files in the .gdf format and assign the correct channel labels. An FIR filter was then used to remove noise below 1 Hz and above 40 Hz. After this, each file was manually checked for signal quality and artifacts. Using the event tags described in chapter 5, the epochs containing the individual optical illusions were extracted and saved in separate files.
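The listing below is a minimal sketch of this preprocessing pipeline in MATLAB, assuming EEGLAB with its BioSig plugin is available; the file name, channel location lookup file and event tag are illustrative placeholders, not the exact values used in the experiment.

    % Minimal preprocessing sketch (assumes EEGLAB and its BioSig plugin on the MATLAB path)
    [ALLEEG, EEG, CURRENTSET] = eeglab;                       % start EEGLAB

    EEG = pop_biosig('participant01.gdf');                    % import one .gdf recording
    EEG = pop_chanedit(EEG, 'lookup', 'standard-10-5-cap385.elp');  % assign channel labels/locations (placeholder lookup file)

    EEG = pop_eegfiltnew(EEG, 1, 40);                         % FIR band-pass: keep 1-40 Hz

    % (manual inspection of signal quality and artifacts takes place here, in the EEGLAB GUI)

    % Extract the epoch of one illusion around a placeholder event tag and save it
    EEG_epoch = pop_epoch(EEG, {'illusion_start'}, [0 30]);   % 0-30 s after the tag
    pop_saveset(EEG_epoch, 'filename', 'scene2_epochs.set');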

Each illusion in every scene was analyzed individually. To do so, pairs of intervals were compared. The majority of the scenes consist of the following intervals:

- A: Base state. The model is visible but remains static.
- B: The model is rotated, either causing or revealing the illusion.
- C: Resting state. Another point of view is displayed.
- D: The models return to the base state.
- E: Base state after the illusion was revealed.

The first two periods compared in each scene were intervals A and B, either to analyze the change from the baseline to the state where the illusion occurs, or to compare the interval where the illusion is already observable with the interval where the explanation is revealed. However, other combinations which could potentially provide interesting results were tested as well.

To extract essential data, several MATLAB scripts were created. They applied an additional filter and imported event-related spectral perturbation data to a matrix [49], which was then subdivided to correspond desired intervals and directed. A difference between intervals was calculated using the following formula:

diff = (post-pre)/pre*100

These values are the percentual differences in brain activity between two intervals in the selected frequency band; from this point on they will be referred to as ‘percentual changes’, as they represent the observed brain reaction during the selected intervals of the illusion. In the end, the results were saved to an .xlsx file with one sheet for each combination of selected medium and frequency band, and evaluated in the IBM SPSS Statistics software using analysis of variance to determine differences among channels and between the media used. The generated table usually had 15 columns (one for each participant on a medium) and 8 rows (one for each channel). Additionally, Pearson’s correlation coefficients between the recorded data and the participants’ answers were calculated to determine dependencies between the changes on individual frequencies and the grades given by the participants.
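As a rough illustration of how these percentual changes could be computed, the following MATLAB sketch obtains the event-related spectral perturbation of one channel with EEGLAB’s newtimef function, averages it over two intervals in the alpha band and applies the formula above; the interval boundaries, band limits and file names are assumptions made only for this example.

    % Sketch: percentual change of ERSP between two intervals (one channel, alpha band)
    EEG = pop_loadset('filename', 'scene2_epochs.set');

    chan = 1;                                       % channel index (e.g. O1)
    [ersp, ~, ~, times, freqs] = newtimef(EEG.data(chan, :, :), EEG.pnts, ...
        [EEG.xmin EEG.xmax] * 1000, EEG.srate, 0, ...
        'plotersp', 'off', 'plotitc', 'off');       % ERSP matrix: frequencies x time points

    alphaBand = freqs >= 8 & freqs <= 12;           % 8-12 Hz
    intervalA = times >= 0    & times < 5000;       % baseline interval in ms (placeholder)
    intervalB = times >= 5000 & times < 15000;      % illusion interval in ms (placeholder)

    pre  = mean(mean(ersp(alphaBand, intervalA)));
    post = mean(mean(ersp(alphaBand, intervalB)));
    diffPct = (post - pre) / pre * 100;             % percentual change between the intervals

    % One such value per participant and channel fills the 8 x 15 table that was
    % exported to .xlsx and analysed in SPSS, e.g.:
    % writematrix(resultMatrix, 'results.xlsx', 'Sheet', 'VR_alpha');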

7.5.3 Ambiguous cylinders

In the end, the first scene was not analyzed due to the negative feedback and the lack of any events.


7.5.4 Ames Room

When the scene begins, the two vases are displayed for 5 seconds before moving, thus creating the illusion. A first glance at the data revealed positive percentual changes of alpha waves, especially on the occipital channels. The difference between channels was confirmed using the Kruskal-Wallis test (p=0.027), with the average change on O1 being +5.61 % and on O2 +7.63 %. This was, however, observable only in the VR data sample, as the change of activity for the subjects participating with the LCD was close to zero. Analysis of variance showed that there was a statistically significant difference between the media used (p=0.000) on alpha frequencies. A similar phenomenon could be observed for beta (p=0.000) and theta waves (p=0.001), although in these cases the occipital channels were not statistically different when the test was applied to the individual channels. Finally, an increase in gamma activity was discovered, but the distribution of data across media was proven to be the same.

Nevertheless, similar data were extracted when comparing intervals A-C and A-D, indicating that the brain activity remained high even after the illusion was no longer present. The same phenomenon was observed between intervals A-E; however, a decrease of delta activity in terms of percentual changes was measured across all channels. The reduction was similar for both media, with an average value of -2.89 % for VR and -2.86 % for LCD.

Finally, a decrease of alpha activity across all channels (-1.28 % in VR, -0.27 % for LCD) occurred between intervals D-E. An increase on these intervals was also observed on beta frequencies for LCD (+1.38 %). Although average percentual changes were observed for LCD, these differences were not considered statistically significant in any frequency band.


Figure 7.6: Ames Room – Comparing increase of alpha waves

7.5.5 Following Eyes

The third scene is another illusion starting with a several-second-long interval, which could be used as a baseline. In this case, a change in brain activity might be expected when comparing interval A and interval B, where the object rotates and the illusion is visible. However, no significant differences were found among channels on any frequency for either medium. Only the differences on delta frequencies were found statistically significant (p=0.001) when comparing the VR (+1.02 %) and LCD (-0.99 %) results. Comparing the intervals before and after the illusion (A-C), on the other hand, suggested average percentual differences between media on alpha (+1.19 % VR, +0.08 % LCD, p=0.022) and on beta frequencies (+1.58 % VR, -0.13 % LCD, p=0.001). Subsequently, intervals C and D were compared using identical methods, and conformity of values among channels was confirmed. On the other hand, significantly higher values on gamma frequencies, averaged across all channels, could be observed in VR (+0.47 %) compared to LCD (-1.01 %), with p=0.022.

7.5.6 Magnetic slopes

In the data recorded for this scene, another interval, A0, was considered and used as a baseline for determining the change in brain activity. This interval offered some time for the participant to adjust before gravity was enabled, causing the balls to roll down the slopes. In this case, the evaluation was based on two different figures presenting a similar impossible-motion illusion. The recordings of both illusions suggest percentual changes of around +1 % on average across all channels on alpha and beta frequencies. No significant differences were found when comparing the VR and LCD outputs. Regarding gamma frequencies, a similar increase of event-related shifts in the power spectrum was recorded for both media in either illusion; however, only the first recording showed statistically significant differences between VR and LCD (p=0.049). Apart from the first illusion for the LCD participants, a considerable change in delta activity was discovered, ranging from -3.29 % to -4.68 % on average across all channels, although a difference between media was found in the delta frequency band in the second illusion (p=0.048).

On the other hand, almost no differences in brain activity between intervals A and B were found, implying that revealing the true shape of the slopes triggered no reaction, as values very close to zero were observed for both media across all frequencies and channels.

7.5.7 Ambiguous statues

As the fifth scene received the best grading among all participants, the most noticeable changes in brain activity would be expected to occur here. Between intervals A and B, a growth of alpha activity was measured. Comparably to the Ames Room, the increase was most distinct in the occipital areas (+6.45 % and +7.28 %). Analysis of variance confirmed that the distribution of values representing percentual changes between intervals recorded on the alpha channels O1 and O2 in VR differs from some frontal and parietal channels (p=0.2). In comparison to VR, the percentual differences measured for the LCD participants were less significant in most cases, which was supported by ANOVA (p=0.000). Percentual changes between media were also significantly different on beta (p=0.000), gamma (p=0.000) and theta frequencies (p=0.039), where the changes in VR were more perceptible compared to LCD. The Kruskal-Wallis test also revealed similar differences between media in the second illusion on alpha (p=0.000), beta (p=0.005), gamma (p=0.003) and delta waves (p=0.004), but not in the third, where the only difference could be observed on theta waves (p=0.13). In the latter illusions, considerably lower percentual changes and values close to zero could be observed for both media.


Figure 7.7: Comparison of alpha channels, VR

Comparison of intervals B and C confirmed that the brain activity after observing the illusion in VR remained increased while the word ‘NO’ was observable. In fact, yet another growth of alpha and beta activity was measured in the occipital areas, although these values were lower (from +1.5 % to +2 %) than between intervals A and B. Only a minor increase on alpha frequencies was observed on the frontal channels when comparing intervals C and D. Finally, some destruction of brain patterns was found on all channels of all frequencies when considering intervals D-E. These values, however, are relatively small compared to the change in activity after the illusion began to be displayed.

7.5.8 Impossible objects

In the previous scenes, there was a brief interval when the user was not exposed to the illusion, which served as a baseline to determine changes on individual frequencies. In this scene, however, the illusion was present from the very beginning, thus reversing the process; comparing intervals A and B in the Penrose Triangle and the Escher Stairs might therefore provide some insight into the brain’s reaction to the revelation of the true shape of the figure.

As in the previous cases, the Penrose Triangle illusion revealed a slight increase of alpha waves in VR (between +2 % and +3 % on each channel). This was also observed on beta and gamma frequencies, although the values there were lower. The analysis of variance confirmed that the values measured across all channels belong to the same group; however, the null hypothesis about the equality of variance was rejected when comparing the differences between media on alpha (p=0.002), beta (p=0.000), gamma (p=0.002) and theta (p=0.000) waves.

Figure 7.8: ANOVA – Comparing values between channels on frequencies 8-12Hz, VR

Nevertheless, these results were not confirmed as expected when comparing data from the Escher Stairs illusion, where no significant changes could be found across media and frequencies.

In the end, intervals C and D were compared to analyze the states before and during the Penrose Triangle illusion. According to the Kruskal-Wallis test, the values from all channels came from the same distribution on all frequencies. However, differences between VR and LCD were found on alpha frequencies (+0.22 % VR and -1.44 % LCD, p=0.004) and on gamma frequencies (+0.12 % VR, -2.17 % LCD, p=0.001). Statistically different values between media, however, were not found for the Escher Stairs illusion.

7.5.9 Ambiguous Shapes

The Rubin’s Vase illusion proved to be rather problematic in terms of evaluation, as the model was static and it was not possible to explicitly determine when the change in the perception of the figure occurred. Some participants with the VR headset also claimed they did not see that there were faces until they were given the questionnaire on scene quality. According to the experiment with the Necker cube (see chapter 7.5.1), some decrease on alpha and gamma frequencies would be expected, although manual analysis of several recordings discovered no such phenomenon.

Nevertheless, the second illusion could be separated into five intervals like in the previous scenes. A statistically significant difference between media was observed on delta frequencies (p=0.025). The average change in VR was +4.15 % and it was apparent mainly on the occipital and parietal channels, although the Kruskal-Wallis test retained the null hypothesis about the equality of groups. In comparison, the values on delta frequencies for LCD were near zero on all channels. Another difference was found on theta waves (p=0.003): while in VR the average change across all channels was -0.53 %, for LCD the value was +1.02 %. A difference among groups was also found between intervals A and C, although this was confirmed only for theta waves (p=0.004).

7.5.10 Correlations

In the end, the values from all scenes were collected and Pearson’s correlation coefficient was calculated between the percentual changes in each frequency band and the rating given to the scenes. The objective of this test was to determine how the individual frequencies correlate with each other across all scenes and whether the percentual changes on a specific channel could be related to the score obtained from the questionnaire. While positive correlations were found between each pair of channels, no relation between the change of brain activity and the participants’ rating was found in the 8-12 Hz band (see Figure 7.9). Regarding beta frequencies, all channels were correlated; moreover, a positive correlation (r=0.178, p=0.017) between the rating and the percentual increase on the P3 channel was discovered. A similar phenomenon was observed in the gamma band, with a positive correlation (r=0.155, p=0.038) between the rating and the changes on channel P4. Likewise, delta and theta frequencies confirmed correlations between the individual channels, but no further relation between the participants’ scores and the change in brain activity was found.
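For illustration, this correlation step could look like the following MATLAB fragment, assuming the percentual changes and the corresponding scene ratings have already been collected into vectors; the data and variable names are placeholders invented for the example (the corr function requires the Statistics and Machine Learning Toolbox).

    % Sketch: Pearson correlation between percentual changes on one channel and ratings
    changeP3 = randn(30, 1);              % placeholder: percentual changes on channel P3
    ratings  = randi([1 7], 30, 1);       % placeholder: scene order given by participants

    [r, p] = corr(changeP3, ratings, 'Type', 'Pearson');
    fprintf('P3 vs. rating: r = %.3f, p = %.3f\n', r, p);

    % Correlations between all channels at once: each column of changeMatrix is one channel
    changeMatrix = randn(30, 8);          % placeholder: 30 observations x 8 channels
    R = corr(changeMatrix, 'Type', 'Pearson');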

Linear correlation coefficients were then calculated to compare the answers collected from the NASA TLX questionnaire with the percentual changes on each channel across all scenes. Although a high number of dependencies could be found, only correlations detected in at least two scenes were considered, as the participants’ workload was measured for the whole experiment rather than for individual scenes. As a result, positive correlations were discovered in virtual reality on beta frequencies between temporal demand and the percentual changes on channels P3 and O2. Besides, frustration seemed to correlate with channels P3 and F4, although one case of a negative correlation was observed for the LCD medium. A positive change on gamma frequencies was reported to correlate with channel O1 in scene 3 (r=0.521, p=0.047) and in scene 6 (r=0.593, p=0.02). A correlation between mental demand and the percentual changes on channel O2 was also suggested on theta waves in virtual reality. Finally, the effort made by the participants correlates with frustration (r=0.635, p=0.011).


Figure 7.9: Correlations between channels and rating on alpha frequencies


8 Conclusion

The aim of this work was to examine optical illusions in three-dimensional space using two different media: a virtual reality HMD and an ordinary LCD monitor. A comparative study with 30 healthy volunteers (20 males and 10 females) was conducted at the Faculty of Informatics of Masaryk University. Neural oscillations were recorded during the testing sessions using the 8-channel Enobio EEG device. The participants were also asked to answer several questions regarding the impression of the scenes and the cognitive workload during the experiment.

One of the main goals was to examine differences between the media used for displaying the illusions. As illusions in 3D space were the focus of this work, it could be expected that illusions perceived in VR would feel more convincing than the same illusions perceived on an LCD. This hypothesis was confirmed when the analysis of variance suggested statistically significant differences between the values recorded in virtual reality and on the LCD monitor in multiple scenes across different frequency bands. The compared values represented percentual changes in event-related shifts in the power spectrum between two intervals, usually before and during the illusion, but other combinations were tested as well. Differences between media on alpha and beta frequencies were discovered together in scene 2 (Ames Room), scene 3 (Following Eyes), scene 5 (Yes/No sculpture) and scene 6 (Penrose Triangle). Percentual changes on gamma waves were found to vary in scene 4 (Magnetic Slopes), scene 5 (Yes/No) and scene 6 (Penrose Triangle). A different distribution between media was discovered in the delta band in scenes 3 (Following Eyes), 4 (Magnetic Slopes) and 7 (Rabbit or Duck), while values on theta frequencies varied in scenes 2 (Ames Room), 5 (Yes/No), 6 (Penrose Triangle) and 7 (Rabbit or Duck). The percentual change in brain activity was noticeably higher in virtual reality on all frequency bands, while values close to zero were observed on the majority of frequencies for the monitor display. Even though some of these differences might be caused by the VR headset itself (weight of the HMD, body movement, etc.) rather than by the immersive display of the optical illusions, our claim that the illusions are more impressive in VR was supported by the qualitative feedback of the participants, who were more prone to provide a detailed evaluation of the scenes, whether positive or negative.

Another goal was to determine the actual percentual changes in brain activity in individual frequency bands. Multiple combinations of intervals were compared; however, the analysis suggested very subtle changes in most cases (around +2 %). Such values were concentrated mainly in the alpha, beta and gamma frequency bands in the majority of the scenes. On delta frequencies, less consistent results were observed: the percentual changes across all channels were close to zero in most cases, except in scene 4 (Magnetic Slopes), where a decrease in brain activity was detected. Finally, the analysis of theta waves suggested a percentual increase in all scenes, most significantly in scene 2 (Ames Room). Using statistical methods, we could observe that the distribution of values representing changes in brain activity was usually comparable among all channels; however, scene 2 (Ames Room) and scene 5 (Yes/No) displayed considerably higher values on channels O1 and O2 on alpha frequencies, where the changes exceeded +6 %. The occipital areas were also reported to correlate with mental demand and frustration, indicating that they might play a vital role in perceiving optical illusions. Another discovery was the observation that the brain activity remained increased in the following intervals rather than reverting to the original levels. This was examined more thoroughly in scene 5, where very subtle negative percentual changes, averaged across all channels, were discovered on the alpha, beta, gamma and theta frequencies at the very end of the scene.

Future work on this topic could build on the patterns discovered in the data recorded in virtual reality. To achieve more reliable results, more channels could be used in order to more precisely classify significantly different groups and the corresponding changes in each frequency band. It would also be appropriate to increase the number of participants, as frequent deviations in the data prevented us from drawing unambiguous conclusions. Lastly, several improvements to the application could be made according to the participants’ recommendations, such as longer intervals between scenes to give the eyes more time to rest, and an even more consistent baseline from which to calculate the results.


9 References

[1] NUGENT, PAM M.S., 2013, What is VISUAL ILLUSION? definition of VISUAL ILLUSION (Psychology Dictionary). Psychology Dictionary [online]. 2013. [Accessed 6 November 2018]. Available from: https://psychologydictionary.org/visual-illusion/

[2] LUCKIESH, MATTHEW, 1965, Visual Illusions; Their Causes, Characteristics and Applications. Dover.

[3] CHERRY, KENDRA, 2018, 6 Fascinating Optical Illusions. Verywell Mind [online]. 2018. [Accessed 6 November 2018]. Available from: https://www.verywellmind.com/optical-illusions-4020333

[4] SINCERO, SARAH MAE, 2013, Optical Illusions. Explorable.com [online]. 2013. [Accessed 6 November 2018]. Available from: https://explorable.com/optical- illusions

[5] BIEN, SARAH, 2015, Literal Optical Illusions| Project| IDeATe. Ideate.xsead.cmu.edu [online]. 2015. [Accessed 6 November 2018]. Available from: http://ideate.xsead.cmu.edu/gallery/projects/literal-illusions

[6] GREGORY, RICHARD L, 1991, Putting Illusions in their Place. Perception. 1991. Vol. 20, no. 1, p. 1-4. DOI 10.1068/p200001. SAGE Publications

[7] EAGLEMAN, DAVID, 2011, Incognito. New York : Pantheon Books.

[8] SUGIHARA, KOKICHI, 2016, Anomalous Mirror Symmetry Generated by Optical Illusion. Symmetry. 2016. Vol. 8, no. 4, p. 21. DOI 10.3390/sym8040021. MDPI AG

[9] FOLEY, MADDY, 2016, You'll Never Guess How This Optical Illusion Works. Bustle[online]. 2016. [Accessed 6 November 2018]. Available from: https://www.bustle.com/articles/170704-how-does-the-ambiguous-cylinder-illusion- work-this-mystery-has-the-internet-stumped-video

[10] Physiological Illusions explained - Examples and Illustrations, 2017. World Mysteries Blog [online]. 2017. [Accessed 6 November 2018]. Available from: http://blog.world-mysteries.com/science/physiological-illusions/


[11] SUGIHARA, KOKICHI, 2014, Design of solids for antigravity motion illusion. Computational Geometry. 2014. Vol. 47, no. 6, p. 675-682. DOI 10.1016/j.comgeo.2013.12.007. Elsevier BV

[12] PLAIT, PHIL, 2013, Slate’s Use of Your Data. Slate Magazine [online]. 2013. [Accessed 6 November 2018]. Available from: https://slate.com/technology/2013/12/another-brain-melting-illusion-the-dragon-that- follows-your-gaze.html

[13] DONALDSON, J., 2018, Rubin's Vase - The Illusions Index. The Illusions Index[online]. 2018. [Accessed 6 November 2018]. Available from: https://www.illusionsindex.org/i/rubin-s-vase

[14] DONALDSON, J. and MACPHERSON, F., 2017, Impossible Triangle - The Illusions Index. The Illusions Index [online]. 2017. [Accessed 6 November 2018]. Available from: https://www.illusionsindex.org/i/30-penrose-triangle

[15] Illusion - New World Encyclopedia, 2018. Newworldencyclopedia.org [online]. 2018. [Accessed 6 November 2018]. Available from: http://www.newworldencyclopedia.org/entry/Illusion

[16] DEAN, JAMES, 2011, Yes or No? Make Up Your Mind!. Mighty Optical Illusions[online]. 2011. [Accessed 9 November 2018]. Available from: https://www.moillusions.com/yes-or-no-make-up-your-mind/

[17] ŠAJDÍKOVÁ, MARTINA, MAĎA, PATRIK and FONTANA, JOSEF, 2013, 1. Visual System • Functions of Cells and Human Body. Fblt.cz [online]. 2013. [Accessed 9 November 2018]. Available from: http://fblt.cz/en/skripta/xiii- smysly/1-zrakovy-system/

[18] TYLEY, JODIE, 2015, The science of vision: How do our eyes see?. The Independent[online]. 2015. [Accessed 9 November 2018]. Available from: https://www.independent.co.uk/life-style/health-and-families/features/the-science-of- vision-how-do-our-eyes-see-10513902.html


[19] GIBSON, MELANIE, 2018, The Optic Nerve (CN II) and Visual Pathway. TeachMeAnatomy [online]. 2018. [Accessed 9 November 2018]. Available from: https://teachmeanatomy.info/head/cranial-nerves/optic-cnii/

[20] BURDEA, GRIGORE and COIFFET, PHILIPPE, 2013, Virtual reality technology. 2. New Jersey : John Wiley & Sons.

[21] HEIM, MICHAEL, 1994, The Metaphysics of Virtual Reality. New York : Oxford University Press.

[22] SHERMAN, WILLIAM R and CRAIG, ALAN B, 2003, Understanding virtual reality. San Francisco, CA : Morgan Kaufmann.

[23] STEUER, JONATHAN, 1992, Defining Virtual Reality: Dimensions Determining Telepresence. Journal of Communication. 1992. Vol. 42, no. 4, p. 73-93. DOI 10.1111/j.1460-2466.1992.tb00812.x. Oxford University Press (OUP)

[24] MAZURYK, TOMASZ and GERVAUTZ, MICHAEL, [no date], Virtual Reality History, Applications, Technology and Future. [online]. [Accessed 9 November 2018]. Available from: https://www.cg.tuwien.ac.at/research/publications/1996/mazuryk-1996-VRH/TR- 186-2-96-06Paper.pdf

[25] CRUZ-NEIRA, Carolina, SANDIN, Daniel J. and DEFANTI, Thomas A. Surround- screen projection-based virtual reality. Proceedings of the 20th annual conference on Computer graphics and interactive techniques - SIGGRAPH 93. 1993. DOI 10.1145/166117.166134.

[26] STEINICKE, FRANK, 2016, Being Really Virtual. Hamburg : Springer.

[27] ATCHISON, DAVID A and SMITH, GEORGE, 2000, Optics of the Human Eye. Oxford : Butterworth-Heinemann.

[28] ALLISON, BRENDAN, GRAIMANN, BERNHARD and PFURTSCHELLER, GERT, 2010, Brain-computer interfaces. Heidelberg : Springer.

[29] TAN, DESNEY S and NIJHOLT, ANTON, 2010, Brain-computer interfaces. London : Springer.


[30] CLERC, MAUREEN, BOUGRAIN, LAURENT and LOTTE, FABIEN, 2016, Brain-computer interfaces. Hoboken : John Wiley & Sons.

[31] RAO, RAJESH P. N, 2013, Brain-computer interfacing. New York : Cambridge University Press.

[32] LOTTE, FABIEN, NAM, CHANG S and NIJHOLT, ANTON. Introduction: Evolution of Brain-Computer Interfaces. In: Chang S. Nam, Anton Nijholt and Fabien Lotte (eds.), Brain-Computer Interfaces Handbook: Technological and Theoretical Advances. Taylor & Francis.

[33] LIAROKAPIS, FOTIOS, 2016, Interfaces Lecture 2: Brain Computer Interfaces for Virtual and Augmented Reality. Lecture. 2016.

[34] TONG, SHANBAO and THAKOR, NITISH VYOMESH, 2009, Quantitative EEG analysis methods and clinical applications. Boston : Artech House.

[35] TATUM, WILLIAM O, 2008, Handbook of EEG interpretation. New York : Demos Medical Pub.

[36] SANEI, SAEID and CHAMBERS, JONATHON A, 2009, EEG signal processing. Chichester : John Wiley & Sons.

[37] SCHALK, GERWIN and MELLINGER, JÜRGEN, 2010, A practical guide to brain-computer interfacing with BCI2000. New York : Springer.

[38] What are Brainwaves ? Types of Brain waves | EEG sensor and brain wave – UK. Brainworksneurotherapy.com [online]. [Accessed 9 November 2018]. Available from: https://brainworksneurotherapy.com/what-are-brainwaves

[39] SAXTON-SWEET, JENNIFER, 2018, The Effect of Binaural Beats on Memory Through the Induction of Gamma Brain Waves. Owlcation [online]. 2018. [Accessed 9 November 2018]. Available from: https://owlcation.com/social- sciences/The-Effect-of-Binaural-Beats-on-Memory-through-the-Induction-of-of- Gamma-Brain-Waves


[40] PENROSE, L. S. and PENROSE, R., 1958, IMPOSSIBLE OBJECTS: A SPECIAL TYPE OF VISUAL ILLUSION. British Journal of Psychology. 1958. Vol. 49, no. 1, p. 31-33. DOI 10.1111/j.2044-8295.1958.tb00634.x. Wiley

[41] 2018, Products / ENOBIO / ENOBIO 8. Neuroelectrics [online]. 2018. [Accessed 13 November 2018]. Available from: https://www.neuroelectrics.com/products/enobio/enobio-8/

[42] NEUROELECTRICS. Neuroelectrics User Manual. 2018. Barcelona, Spain.

[43] MALIK, AAMIR SAEED and AMIN, HAFEEZ ULAH. Designing EEG experiments for studying the brain: design code and example datasets. London : Elsevier AP, 2017.

[44] Command line vs. GUI. Computer Hope [online]. 29 December 2017. [Accessed 13 November 2018]. Available from: https://www.computerhope.com/issues/ch000619.htm

[45] KOZARSKY, ALAN, 2017, How the Human Eye Sees. WebMD [online]. 2017. [Accessed 13 November 2018]. Available from: https://www.webmd.com/eye- health/amazing-human-eye

[46] FAZEL-REZAI, REZA, 2011, Recent advances in brain-computer interface systems. Rijeka : InTech.

[47] KAISA TIIPPANA. What is the McGurk effect? Frontiers [online]. 23 June 2014. [Accessed 18 November 2018]. Available from: https://www.frontiersin.org/articles/10.3389/fpsyg.2014.00725/full

[48] 2008, 105 Mind-Bending Illusions. Scientific American Special [online]. 2008. Vol. 18, no. 2. [Accessed 3 December 2018]. Available from: http://www.psy.ritsumei.ac.jp/~akitaoka/ScientificAmerican2008Illusions.pdf

[49] 2014, Chapter 11: Time/Frequency decomposition - SCCN. Sccn.ucsd.edu[online]. 2014. [Accessed 3 December 2018]. Available from: https://sccn.ucsd.edu/wiki/Chapter_11:_Time/Frequency_decomposition


[50] RUNNOVA, ANASTASIYA E., 2016, Experimental Study of Oscillatory Patterns in the Human EEG During the Perception of Bistable Images. Opera Medica et Physiologica. 2016. Vol. 2, no. 2. DOI 10.20388/OMP2016.002.0033.

[51] THOMPSON, G. and MACPHERSON, F., 2017, Kanizsa Triangle - The Illusions Index. The Illusions Index [online]. 2017. [Accessed 3 December 2018]. Available from: https://www.illusionsindex.org/i/kanizsa-triangle

[52] CHOLKAR, KISHORE, PATEL, ASHABEN, DUTT VADLAPUDI, ASWANI and K. MITRA, ASHIM, 2012, Novel Nanomicellar Formulation Approaches for Anterior and Posterior Segment Ocular Drug Delivery. Recent Patents on Nanomedicine. 2012. Vol. 2, no. 2, p. 82-95. DOI 10.2174/1877912311202020082. Bentham Science Publishers Ltd.

[53] ROJAS, GONZALO M., ALVAREZ, CAROLINA, MONTOYA, CARLOS E., DE LA IGLESIA-VAYÁ, MARÍA, CISTERNAS, JAIME E. and GÁLVEZ, MARCELO, 2018, Study of Resting-State Functional Connectivity Networks Using EEG Electrodes Position As Seed. Frontiers in Neuroscience. 2018. Vol. 12. DOI 10.3389/fnins.2018.00235. Frontiers Media SA

[54] A, A, 2016, HTC Vive Now Up For Pre-Order. Flickr [online]. 2016. [Accessed 3 December 2018]. Available from: https://www.flickr.com/photos/bagogames/25845851080

[55] VALAVANIS, ALEX, 2007, File:Ames room.svg - Wikimedia Commons. Commons.wikimedia.org [online]. 2007. [Accessed 3 December 2018]. Available from: https://commons.wikimedia.org/wiki/File:Ames_room.svg

[56] R, TOBIAS, 2007, File:Penrose-dreieck.svg - Wikimedia Commons. Commons.wikimedia.org [online]. 2007. [Accessed 3 December 2018]. Available from: https://commons.wikimedia.org/wiki/File:Penrose-dreieck.svg

[57] HANUSEK, NICOLE, 2011, The Permanent Collection at The Art Gallery. Pinterest[online]. 2011. [Accessed 3 December 2018]. Available from: https://cz.pinterest.com/pin/221591244142289169/


[58] 2018, Products / ENOBIO / ENOBIO 8. Neuroelectrics [online]. 2018. [Accessed 3 December 2018]. Available from: https://www.neuroelectrics.com/products/enobio/enobio-8/

[59] OJALLA, DAVINDER, 2017, How Your Brain Waves Mould Your Success In Life & Biz - Davinder Ojalla. Davinderojalla.com [online]. 2017. [Accessed 3 December 2018]. Available from: https://www.davinderojalla.com/how- your-brain-waves-mould-your-success-in-life-biz/

[60] NICHOLLS, MICHAEL E.R. and SEARLE, DARA A., 2006, Asymmetries for the visual expression and perception of speech. Brain and Language. 2006. Vol. 97, no. 3, p. 322-331. DOI 10.1016/j.bandl.2005.11.007. Elsevier BV

[61] FOLEY, MADDY, 2016, You'll Never Guess How This Optical Illusion Works. Bustle[online]. 2016. [Accessed 4 December 2018]. Available from: https://www.bustle.com/articles/170704-how-does-the-ambiguous-cylinder-illusion- work-this-mystery-has-the-internet-stumped-video

[62] ABRAMS, AVI, 2018, Dark Roasted Blend: Mind-Blowing Optical Illusions, Part 6. Dark Roasted Blend [online]. 2018. [Accessed 4 December 2018]. Available from: http://www.darkroastedblend.com/2014/04/mind-blowing-optical-illusions-part-6.html

[63] DEAN, JAMES, 2018, Dragon Illusion. Mighty Optical Illusions [online]. 2018. [Accessed 4 December 2018]. Available from: https://www.moillusions.com/dragon- illusion/

[64] SMITHSON, JOHN, 2018, File:Rubin2.jpg - Wikimedia Commons. Commons.wikimedia.org [online]. 2018. [Accessed 4 December 2018]. Available from: https://commons.wikimedia.org/wiki/File:Rubin2.jpg

[65] GIL HERNÁNDEZ, JESÚS, 2016, Avoid the Yes/No Trap - Jesús Gil Hernández. Jesús Gil Hernández [online]. 2016. [Accessed 4 December 2018]. Available from: http://jesusgilhernandez.com/2016/12/11/avoid-yesno-trap/

[66] VINCE, JOHN, 2001, Essential virtual reality fast. London : Springer.


[67] ZHANG, HUI, 2017, Head-mounted display-based intuitive virtual reality training system for the mining industry. . 2017.

[68] SPENCE, CHARLES, OBRIST, MARIANNA, VELASCO, CARLOS and RANASINGHE, NIMESHA, 2017, Digitizing the chemical senses: Possibilities & pitfalls. International Journal of Human-Computer Studies. 2017. Vol. 107, p. 62-74. DOI 10.1016/j.ijhcs.2017.06.003. Elsevier BV

[69] BROOKS, FREDERICK P., OUH-YOUNG, MING, BATTER, JAMES J. and JEROME KILPATRICK, P., 1990, Project GROPE - Haptic displays for scientific visualization. ACM SIGGRAPH Computer Graphics. 1990. Vol. 24, no. 4, p. 177-185. DOI 10.1145/97880.97899. Association for Computing Machinery (ACM)

[70] HAVIG, PAUL, MCINTIRE, JOHN and GEISELMAN, ERIC, 2011, Virtual reality in a cave: limitations and the need for HMDs?. Head- and Helmet-Mounted Displays XVI: Design and Applications. 2011. DOI 10.1117/12.883855. SPIE

[71] ZHANG, HONGSUO, CAO, QIPING and TANG, ZHENG, 2011, The brainwave response of optical illusion stimulus. International Journal of Computer Science and Network Security. 2011. Vol. 11, no. 12.

[72] BACH, M. and POLOSCHEK, C. (2006). Optical Illusions. Advances in Clinical Neuroscience & Rehabilitation, 6(2).

[73] H. LENTS, N. (2015). The Poor Design of the Human Eye. [online] The Human Evolution Blog. Available at: https://thehumanevolutionblog.com/2015/01/12/the- poor-design-of-the-human-eye/ [Accessed 4 Dec. 2018].

[74] AMEMIYA, H. (2017). More than Meets the Eye: How Optical Illusions Stump Our Brains. [online] MiSciWriters. Available at: https://misciwriters.com/2017/01/10/more-than-meets-the-eye-how-optical-illusions- stump-our-brains/ [Accessed 4 Dec. 2018].

[75] 2018. What is Virtual Reality (VR)? Ultimate Guide to Virtual Reality (VR) Technology. [online] Reality Technologies. 2018. [Accessed 9 Dec. 2018]. Available at: https://www.realitytechnologies.com/virtual-reality/


[76] WEIR, K. (2018). Virtual reality expands its reach. Monitor on Psychology, 49(2).


A Demographics Questionnaire Results

     Medium  Age  Gender  PC use  Status       Qualification
1    LCD     30   Female  4       Academic     PhD
2    LCD     24   Female  5       Student MA   Highschool
3    LCD     19   Male    5       Student BA   Highschool
4    LCD     24   Male    5       Academic     MA
5    VR      20   Male    4       Student BA   Highschool
6    VR      24   Female  3       Student MA   Highschool
7    VR      34   Male    5       Student PhD  MA
8    LCD     21   Female  3       Student MA   Highschool
9    VR      20   Male    2       Student BA   Highschool
10   VR      30   Male    5       Employed     Highschool
11   LCD     22   Male    5       Student BA   Highschool
12   VR      21   Male    5       Student MA   Highschool
13   VR      41   Male    3       Employed     Highschool
14   LCD     19   Male    4       Employed     Highschool
15   LCD     24   Male    5       Employed     BA
16   LCD     24   Female  5       Student MA   BA
17   LCD     24   Female  4       Student BA   Highschool
18   VR      21   Female  5       Student BA   Highschool
19   VR      22   Male    4       Student BA   Highschool
20   VR      20   Female  4       Student BA   Highschool
21   VR      29   Male    5       Technical    BA
22   LCD     19   Male    5       Student BA   Highschool
23   VR      20   Male    4       Student BA   Highschool
24   LCD     21   Male    5       Student BA   Highschool
25   VR      20   Female  5       Student BA   Highschool
26   VR      20   Female  5       Student BA   Highschool
27   LCD     28   Male    5       Student MA   BA
28   LCD     24   Male    5       Employed     Highschool
29   LCD     31   Male    5       Employed     BA
30   VR      22   Male    5       Employed     Highschool


B NASA TLX Results

     Mental  Physical  Temporal  Perf.  Effort  Frust.
1    -9      -10       -8        -1     -10     -10
2    -6      -10       -1        -9     -10     -5
3    -1      -10       -6        -8     -2      -6
4    -10     -8        -8        -8     -10     -10
5    -7      -10       -1        -7     -10     -10
6    -10     -10       -10       -8     -10     -10
7    2       -3        -10       -2     -4      -8
8    -7      -10       1         -4     -8      -8
9    -2      -10       -7               -9      -4
10   -8      -6        -2        1      -7      -10
11   -9      -5        -7        -10    -10     -9
12   -3      -10       -10       8      -5      -10
13   -8      -10       -8        -10    -9      -10
14   -5      -10       -10       -8     -7      -2
15   -4      -9        -10       -8     -8      -10
16   2       7         -2        5      6       -4
17   4       -9        -10       -1     1       -10
18   -8      -2        2         5      6       -3
19   -1      1         3         5      -6      -10
20   -3      -9        -1        -8     -5      -7
21   -8      -10       -10       3      -7      -10
22   -3      3         -9        -5     -3      -10
23   -5      -3        -6        6      -7      -8
24   -7      -3        -1        -5     -1      -6
25   -8      -4        -4        -7     -10     -10
26   -1      -4        -8        4      -1      -10
27   -7      6         -8        -6     -10     -10
28   1       -10       -7        -10    2       -10
29   4       -10       -7        -5     -1      -10
30   -7      -9        -2        -3     3       -4


C Attachments

• VR build of the application
• Desktop build of the application
• NASA TLX questionnaire
• Scene rating questionnaire
• Consent form and personal information document
