Threefolded Motion Perception During Immersive Walkthroughs

Gerd Bruder∗, Frank Steinicke†
Human-Computer Interaction, Department of Computer Science, University of Hamburg
∗e-mail: [email protected]
†e-mail: [email protected]

Abstract

Locomotion is one of the most fundamental processes in the real world, and its consideration in immersive virtual environments (IVEs) is of major importance for many application domains requiring immersive walkthroughs. From a simple physics perspective, such self-motion can be defined by the three components speed, distance, and time. Determining motions in the frame of reference of a human observer imposes a significant challenge to the perceptual processes in the human brain, and the resulting speed, distance, and time percepts are not always veridical. In previous work in the area of IVEs, these components were evaluated in separate experiments, i.e., using largely different hardware, software, and protocols.

In this paper we analyze the perception of the three components of locomotion during immersive walkthroughs using the same setup and similar protocols. We conducted experiments in an Oculus Rift head-mounted display (HMD) environment which showed that subjects largely underestimated virtual distances, slightly underestimated virtual speed, and slightly overestimated elapsed time.

CR Categories: H.5.1 [Information Interfaces and Presentation]: Multimedia Information Systems—Artificial, augmented, and virtual realities; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Virtual reality

Keywords: Locomotion, real walking, perception, immersive virtual environments.

1 Introduction

The motion of an observer or scene object in the real world or in a virtual environment (VE) is of great interest for many research and application fields. This includes computer-generated imagery, e.g., in movies or games, in which a sequence of individual images evokes the illusion of a moving picture [Thompson et al. 2011]. In the real world, humans move, for example, by walking or running, physical objects move and sometimes actuate each other, and, finally, the earth spins around itself as well as around the sun. From a simple physics perspective, such motions can be defined by three main components: (i) (linear or angular) speed and (ii) distances, as well as (iii) time. The interrelation between these components is given by the speed of motion, which is defined as the change in position or orientation of an object with respect to time:

    s = d / t    (1)

with speed s, distance d, and time t. Motion can be observed by attaching a frame of reference to an object and measuring its change relative to another reference frame. As there is no absolute frame of reference, absolute motion cannot be determined; everything in the universe can be considered to be moving [Bottema and Roth 2012]. Determining motions in the frame of reference of a human observer imposes a significant challenge to the perceptual processes in the human brain, and the resulting percepts of motions are not always veridical [Berthoz 2000]. Misperception of speed, distances, and time has been observed for different forms of self-motion in the real world [Efron 1970; Gibb et al. 2010; Mao et al. 2010].
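To ground Equation (1) in the walkthrough setting: with a tracked HMD, the traveled distance d is the summed length of the walked path and t the elapsed time, so average self-motion speed follows directly from tracking samples. The following minimal Python sketch is purely illustrative; the function name and the sample data are our own and not part of the system described in the paper.

```python
import math

def average_speed(samples):
    """Average self-motion speed s = d / t (Equation 1) computed from
    (timestamp, x, y, z) head-tracking samples."""
    # d: summed Euclidean length of the walked path segments.
    d = sum(math.dist(p0[1:], p1[1:]) for p0, p1 in zip(samples, samples[1:]))
    # t: elapsed time between first and last sample.
    t = samples[-1][0] - samples[0][0]
    return d / t

# Hypothetical samples: (time in s, head position x, y, z in m).
samples = [(0.0, 0.0, 1.7, 0.0), (1.0, 0.7, 1.7, 0.0), (2.0, 1.4, 1.7, 0.1)]
print(average_speed(samples))  # ~0.70 m/s average walking speed
```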
In the context of self-motion, walking is often regarded as the most basic and intuitive form of locomotion in an environment. Self-motion estimates from walking are often found to be more accurate than for other forms of motion [Ruddle and Lessels 2009]. This may be explained by adaptation and training since early childhood and by evolutionary tuning of the human brain to the physical affordances for locomotion of the body [Choi and Bastian 2007]. While walking in the real world, sensory information such as vestibular, proprioceptive, and efferent copy signals as well as visual and auditive information create consistent multi-sensory cues that indicate one's own motion [Wertheim 1994].

However, a large body of studies has shown that spatial perception in IVEs differs from that in the real world. Empirical data show that static and walked distances are often systematically over- or underestimated in IVEs (cf. [Loomis and Knapp 2003] and [Renner et al. 2013] for thorough reviews), even if the displayed world is modeled as an exact replica of the real laboratory in which the experiments are conducted [Interrante et al. 2006]. Less empirical data exist on speed misperception in IVEs; the available results show a tendency for visual speed during walking to be underestimated [Banton et al. 2005; Bruder et al. 2012; Durgin et al. 2005; Steinicke et al. 2010].

We are not aware of any empirical data which has been collected for time misperception in IVEs. However, there is evidence that time perception can deviate from veridical judgments due to visual or auditive stimulation related to motion misperception [Grondin 2008; Roussel et al. 2009; Sarrazin et al. 2004]. Different causes of motion misperception in IVEs have been identified, including hardware characteristics [Jones et al. 2011; Willemsen et al. 2009], rendering issues [Thompson et al. 2011], and miscalibration [Kuhl et al. 2009; Willemsen et al. 2008].

Considering that self-motion perception is affected by different hard- and software factors, it is unfortunate that no comprehensive analysis exists to this day in which the different components were tested using the same setup. Hence, it remains unclear whether or not there is a correlation or causal relation between speed, distance, and time misperception in current-state IVEs. Our main contributions in this paper are:

• We compare self-motion estimation in the three components using the same setup and similar experimental designs.
• We introduce novel action-based two-alternative forced-choice methods for distance, speed, and time judgments (a sketch of the basic trial logic appears after Section 2.1 below).
• We provide empirical data on motion estimation that show researchers and practitioners what they may expect when they present a virtual world to users with an Oculus Rift head-mounted display (HMD).

The paper is structured as follows. In Section 2 we provide background information on self-motion perception in virtual reality (VR) setups. We describe the experiments in Section 3 and discuss the results in Section 4. Section 5 concludes the paper.

2 Background

As stated above, research on distance, speed, and time perception in IVEs is divided into separate experiments using largely different hardware, software, and experimental protocols. In this section we give an overview of the results for distance, speed, and time perception in IVEs.

2.1 Distance Perception

During self-motion in IVEs, different cues provide information about the travel distance with respect to the motion speed or time [Mohler 2007]. Humans can use these cues to keep track of the distance they traveled, the remaining distance to a goal, or to discriminate travel distance intervals [Bremmer and Lappe 1999]. Although humans are considerably good at making distance judgments in the real world, experiments in IVEs show that characteristic estimation errors occur such that distances are often severely overestimated for very short distances and underestimated for longer distances [Loomis et al. 1993]. Different misperception effects were found over a large range of IVEs and experimental methods to measure distance estimation. While verbal estimates of the distance to a target can be used to assess distance perception, methods based on visually directed actions have been found to generally provide more accurate results [Loomis and Knapp 2003]. The most widely used action-based method is blind walking, in which subjects are asked to walk without vision to a previously seen target. Several experiments have shown that over medium-range distances subjects can accurately indicate distances using the blind walking method.
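As an illustration of the action-based two-alternative forced-choice (2AFC) protocols mentioned in Section 1: each trial presents one stimulus level (here, a hypothetical virtual-to-physical distance gain), the subject gives one of two responses, and the proportion of responses per level yields a psychometric function whose 50% crossing marks the point of subjective equality. This is a minimal sketch with a simulated observer standing in for real subject input; it is not the authors' actual protocol, and all levels and parameters are invented for illustration.

```python
import random

# Hypothetical stimulus levels: virtual distance as a fraction of the
# physical reference distance (1.0 = veridical rendering).
LEVELS = [0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3]
TRIALS_PER_LEVEL = 10

def simulated_judgment(gain, pse=0.96, noise=0.08):
    """Stand-in for a subject's 'virtual distance seemed longer' response.
    pse < 1.0 would model underestimation of virtual distances; a real
    experiment records a forced-choice key press here instead."""
    return gain + random.gauss(0.0, noise) > pse

def run_block():
    trials = LEVELS * TRIALS_PER_LEVEL
    random.shuffle(trials)  # randomized presentation order
    longer = dict.fromkeys(LEVELS, 0)
    for gain in trials:
        if simulated_judgment(gain):
            longer[gain] += 1
    # Proportion of 'longer' responses per level; the 50% crossing
    # estimates the point of subjective equality (PSE).
    return {lvl: longer[lvl] / TRIALS_PER_LEVEL for lvl in LEVELS}

print(run_block())
```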
2.2 Speed Perception

Different sensory motion cues support the perception of the speed of walking in an IVE (cf. [Durgin et al. 2005]). Visual motion information is often estimated as most reliable by the perceptual system, but can cause incorrect motion percepts. For example, in the illusion of linear vection [Berthoz 2000], observers feel their body moving although they are physically stationary, because they are presented with large-field visual motion that resembles the motion pattern normally experienced during self-motion. Humans use such optic flow patterns to determine the speed of movement, although the speed of retinal motion signals is not uniquely related to movement speed. For any translational motion, the visual velocity of any point in the scene depends on the distance of this point from the eye, i.e., points farther away move slower over the retina than points closer to the eye [Bremmer and Lappe 1999; Warren, Jr. 1998].

Banton et al. [2005] observed for subjects walking on a treadmill with an HMD that optic flow fields at the speed of the treadmill were estimated as approximately 53% slower than their walking speed. Durgin et al. [2005] reported on a series of experiments with subjects wearing HMDs while walking on a treadmill or over solid ground. Their results show that subjects often estimated optic flow fields displayed at subtracted (i.e., reduced) speeds as matching their walking speed. Steinicke et al. [2010] evaluated speed estimation of subjects in an HMD environment with a real walking user interface in which they manipulated the subjects' self-motion speed in the VE compared to their walking speed in the real world. Their results show that on average subjects underestimated their walking speed by approximately 7%. Bruder et al. [2012; 2013] showed that visual illusions related to optic flow perception can change self-motion speed estimates in IVEs.

2.3 Time Perception

As discussed for distance and speed perception in IVEs, over- or underestimations have been well documented by researchers using different hardware, software, and experimental protocols. In contrast, perception of time has not been extensively researched in IVEs so far.
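To make the inverse-distance relation described in Section 2.2 concrete: for an observer translating at speed v, a static scene point at distance d and at angle θ from the heading direction moves across the retina at angular speed ω = v·sin(θ)/d, so doubling the distance halves the retinal speed. The sketch below is our own illustration of this geometric fact, not code from the paper.

```python
import math

def retinal_speed(v, d, theta_deg):
    """Angular speed (rad/s) of a static scene point seen by an observer
    translating at v (m/s); d is the point's distance (m) and theta the
    angle between heading and the point. Only the transverse velocity
    component v*sin(theta) produces motion over the retina."""
    return v * math.sin(math.radians(theta_deg)) / d

v = 1.4  # typical walking speed in m/s
for d in (1.0, 2.0, 4.0, 8.0):
    print(f"d = {d} m -> {retinal_speed(v, d, 45.0):.3f} rad/s")
# Points farther from the eye move slower over the retina (~1/d).
```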