Threefolded Motion Perception During Immersive Walkthroughs
Gerd Bruder∗, Frank Steinicke†
Human-Computer Interaction
Department of Computer Science
University of Hamburg
∗e-mail: [email protected]
†e-mail: [email protected]

Abstract

Locomotion is one of the most fundamental processes in the real world, and its consideration in immersive virtual environments (IVEs) is of major importance for many application domains requiring immersive walkthroughs. From a simple physics perspective, such self-motion can be defined by the three components speed, distance, and time. Determining motions in the frame of reference of a human observer imposes a significant challenge to the perceptual processes in the human brain, and the resulting speed, distance, and time percepts are not always veridical. In previous work in the area of IVEs, these components were evaluated in separate experiments, i. e., using largely different hardware, software and protocols.

In this paper we analyze the perception of the three components of locomotion during immersive walkthroughs using the same setup and similar protocols. We conducted experiments in an Oculus Rift head-mounted display (HMD) environment which showed that subjects largely underestimated virtual distances, slightly underestimated virtual speed, and slightly overestimated elapsed time.

CR Categories: H.5.1 [Information Interfaces and Presentation]: Multimedia Information Systems—Artificial, augmented, and virtual realities; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Virtual reality

Keywords: Locomotion, real walking, perception, immersive virtual environments.

1 Introduction

The motion of an observer or scene object in the real world or in a virtual environment (VE) is of great interest for many research and application fields. This includes computer-generated imagery, e. g., in movies or games, in which a sequence of individual images evokes the illusion of a moving picture [Thompson et al. 2011]. In the real world, humans move, for example, by walking or running, physical objects move and sometimes actuate each other, and, finally, the earth spins around itself as well as around the sun. From a simple physics perspective, such motions can be defined by three main components: (i) (linear or angular) speed and (ii) distance, as well as (iii) time. The interrelation between these components is given by the speed of motion, which is defined as the change in position or orientation of an object with respect to time:

    s = d / t    (1)

with speed s, distance d, and time t. Motion can be observed by attaching a frame of reference to an object and measuring its change relative to another reference frame. As there is no absolute frame of reference, absolute motion cannot be determined; everything in the universe can be considered to be moving [Bottema and Roth 2012]. Determining motions in the frame of reference of a human observer imposes a significant challenge to the perceptual processes in the human brain, and the resulting percepts of motions are not always veridical [Berthoz 2000]. Misperception of speed, distances, and time has been observed for different forms of self-motion in the real world [Efron 1970; Gibb et al. 2010; Mao et al. 2010].

In the context of self-motion, walking is often regarded as the most basic and intuitive form of locomotion in an environment. Self-motion estimates from walking are often found to be more accurate than for other forms of motion [Ruddle and Lessels 2009]. This may be explained by adaptation and training since early childhood and by evolutionary tuning of the human brain to the physical affordances for locomotion of the body [Choi and Bastian 2007]. While walking in the real world, sensory information such as vestibular, proprioceptive, and efferent copy signals as well as visual and auditive information creates consistent multi-sensory cues that indicate one's own motion [Wertheim 1994].

However, a large body of studies has shown that spatial perception in IVEs differs from the real world. Empirical data shows that static and walked distances are often systematically over- or underestimated in IVEs (cf. [Loomis and Knapp 2003] and [Renner et al. 2013] for thorough reviews), even if the displayed world is modeled as an exact replica of the real laboratory in which the experiments are conducted [Interrante et al. 2006]. Less empirical data exists on speed misperception in IVEs; the available results show a tendency for visual speed during walking to be underestimated [Banton et al. 2005; Bruder et al. 2012; Durgin et al. 2005; Steinicke et al. 2010].

We are not aware of any empirical data which has been collected on time misperception in IVEs. However, there is evidence that time perception can deviate from veridical judgments due to visual or auditive stimulation related to motion misperception [Grondin 2008; Roussel et al. 2009; Sarrazin et al. 2004]. Different causes of motion misperception in IVEs have been identified, including hardware characteristics [Jones et al. 2011; Willemsen et al. 2009], rendering issues [Thompson et al. 2011], and miscalibration [Kuhl et al. 2009; Willemsen et al. 2008].

Considering that self-motion perception is affected by different hardware and software factors, it is unfortunate that to this day no comprehensive analysis exists in which the different components were tested using the same setup. Hence, it remains unclear whether or not there is a correlation or causal relation between speed, distance, and time misperception in current-state IVEs.
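The coupling in Equation (1) also suggests how biases in the three percepts could interact. The following sketch uses purely hypothetical bias values (not measured data) to illustrate how a distance underestimation and a time overestimation would propagate to a derived speed estimate, under the simplifying assumption that the percepts combine according to the same physical relation:

```python
def speed(distance_m: float, time_s: float) -> float:
    """Equation (1): speed s = d / t."""
    return distance_m / time_s

# A physical walk of 10 m completed in 8 s.
actual = speed(10.0, 8.0)  # 1.25 m/s

# Hypothetical biased percepts (illustrative values only):
# distance underestimated by 20%, elapsed time overestimated by 5%.
perceived_distance = 10.0 * 0.80
perceived_time = 8.0 * 1.05
derived = speed(perceived_distance, perceived_time)  # ~0.95 m/s

print(f"actual: {actual:.2f} m/s, derived percept: {derived:.2f} m/s")
```

Whether human speed percepts actually follow from distance and time percepts in this way is precisely the kind of question that motivates testing all three components in the same setup.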
Our main contributions in this paper are:

• We compare self-motion estimation in the three components using the same setup and similar experimental designs.
• We introduce novel action-based two-alternative forced-choice methods for distance, speed, and time judgments.
• We provide empirical data on motion estimation that shows researchers and practitioners what they may expect when they present a virtual world to users with an Oculus Rift head-mounted display (HMD).

The paper is structured as follows. In Section 2 we provide background information on self-motion perception in virtual reality (VR) setups. We describe the experiments in Section 3 and discuss the results in Section 4. Section 5 concludes the paper.

2 Background

As stated above, research on distance, speed, and time perception in IVEs is divided into separate experiments using largely different hardware, software and experimental protocols. In this section we give an overview of the results for distance, speed, and time perception in IVEs.

2.1 Distance Perception

During self-motion in IVEs, different cues provide information about the travel distance with respect to the motion speed or time [Mohler 2007]. Humans can use these cues to keep track of the distance they traveled, the remaining distance to a goal, or to discriminate travel distance intervals [Bremmer and Lappe 1999]. Although humans are considerably good at making distance judgments in the real world, experiments in IVEs show that characteristic estimation errors occur such that distances are often severely overestimated for very short distances and underestimated for longer distances [Loomis et al. 1993]. Different misperception effects were found over a large range of IVEs and experimental methods to measure distance estimation. While verbal estimates of the distance to a target can be used to assess distance perception, methods based on visually directed actions have been found to generally provide more accurate results [Loomis and Knapp 2003]. The most widely used action-based method is blind walking, in which subjects are asked to walk without vision to a previously seen target. Several experiments have shown that over medium range distances subjects can accurately indicate distances using the blind walking method.

2.2 Speed Perception

Different sensory motion cues support the perception of the speed of walking in an IVE (cf. [Durgin et al. 2005]). Visual motion information is often estimated as most reliable by the perceptual system, but can cause incorrect motion percepts. For example, in the illusion of linear vection [Berthoz 2000] observers feel their body moving although they are physically stationary, because they are presented with large-field visual motion that resembles the motion pattern normally experienced during self-motion. Humans use such optic flow patterns to determine the speed of movement, although the speed of retinal motion signals is not uniquely related to movement speed. For any translational motion the visual velocity of any point in the scene depends on the distance of this point from the eye, i. e., points farther away move slower over the retina than points closer to the eye [Bremmer and Lappe 1999; Warren, Jr. 1998].

Banton et al. [Banton et al. 2005] observed for subjects walking on a treadmill with an HMD that optic flow fields at the speed of the treadmill were estimated as approximately 53% slower than their walking speed. Durgin et al. [Durgin et al. 2005] reported on a series of experiments with subjects wearing HMDs while walking on a treadmill or over solid ground. Their results show that subjects often estimated subtracted speeds of displayed optic flow fields as matching their walking speed. Steinicke et al. [Steinicke et al. 2010] evaluated speed estimation of subjects in an HMD environment with a real walking user interface in which they manipulated the subjects' self-motion speed in the VE compared to their walking speed in the real world. Their results show that on average subjects underestimated their walking speed by approximately 7%. Bruder et al. [Bruder et al. 2012; Bruder et al. 2013] showed that visual illusions related to optic flow perception can change self-motion speed estimates in IVEs.

2.3 Time Perception

As discussed for distance and speed perception in IVEs, over- or underestimations have been well documented by researchers with different hardware, software and experimental protocols. In contrast, perception of time has not been extensively researched in IVEs so far.
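Judgment methods such as the two-alternative forced-choice (2AFC) tasks mentioned above are commonly analyzed by estimating the point of subjective equality (PSE) from the proportion of responses at each stimulus level. The following is a minimal sketch of such an analysis with hypothetical response data; the actual experiments use their own stimulus levels and fitting procedure:

```python
# 2AFC speed judgments: rendering gain applied to the virtual speed
# mapped to the proportion of "faster than my walking speed" responses
# (hypothetical data for illustration).
responses = {
    0.8: 0.10,
    0.9: 0.25,
    1.0: 0.40,
    1.1: 0.65,
    1.2: 0.85,
}

def pse(data: dict) -> float:
    """Point of subjective equality: the stimulus level at which the
    'faster' proportion crosses 50%, found by linear interpolation
    between the two bracketing measured levels."""
    levels = sorted(data)
    for lo, hi in zip(levels, levels[1:]):
        p_lo, p_hi = data[lo], data[hi]
        if p_lo <= 0.5 <= p_hi:
            return lo + (0.5 - p_lo) * (hi - lo) / (p_hi - p_lo)
    raise ValueError("50% point not bracketed by the measured levels")

print(f"PSE = {pse(responses):.2f}")  # 1.04 for this data
```

With this (made-up) data the PSE lies above a gain of 1.0, i.e., the virtual speed would have to exceed the physical walking speed to be judged as matching, which corresponds to the underestimation of visual speed summarized in Section 2.2.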