Relationship Between Binocular Disparity and Motion Parallax in Surface Detection

Total Pages: 16

File Type: PDF, Size: 1020 KB

Perception & Psychophysics, 1997, 59 (3), 370-380

Relationship between binocular disparity and motion parallax in surface detection

JESSICA TURNER and MYRON L. BRAUNSTEIN
University of California, Irvine, California

and

GEORGE J. ANDERSEN
University of California, Riverside, California

The ability to detect surfaces was studied in a multiple-cue condition in which binocular disparity and motion parallax could specify independent depth configurations. On trials on which binocular disparity and motion parallax were presented together, either binocular disparity or motion parallax could indicate a surface in one of two intervals; in the other interval, both sources indicated a volume of random points. Surface detection when the two sources of information were present and compatible was not better than detection in baseline conditions, in which only one source of information was present. When binocular disparity and motion specified incompatible depths, observers' ability to detect a surface was severely impaired if motion indicated a surface but binocular disparity did not. Performance was not as severely degraded when binocular disparity indicated a surface and motion did not. This dominance of binocular disparity persisted in the presence of foreknowledge about which source of information would be relevant.

This research was supported by National Science Foundation Grants DBS-9209973, SBR-9510431, and SBR-9511198. Portions of this research were presented at the 1995 meeting of the Association for Research in Vision and Ophthalmology. The authors are grateful to Jeffrey C. Liter, Asad Saidpour, and Matthew D. Turner. Correspondence should be addressed to J. Turner, Department of Cognitive Sciences, School of Social Sciences, University of California, Irvine, CA 92717-5100 (e-mail: [email protected]).

The ability to detect three-dimensional (3-D) surfaces has been studied previously using static stereo displays (e.g., Uttal, 1985), motion parallax displays (Andersen, 1996; Andersen & Wuestefeld, 1993), and structure-from-motion (SFM) displays (Turner, Braunstein, & Andersen, 1995).1 This research has examined the signal-to-noise ratio required to detect a surface in the presence of added "noise" points in stereo displays of smooth, quadratic surfaces (Uttal, 1985) and in motion parallax displays of sinusoidal gratings (Andersen & Wuestefeld, 1993). Other studies have considered the minimum number of points required to detect a smooth surface in motion parallax displays (Andersen, 1996) as a function of frequency and amplitude for sinusoidally corrugated surfaces and in SFM displays (Turner et al., 1995) as a function of frequency and amplitude or of shape index and curvedness (see Koenderink, 1990). This research has documented the ability of human observers to detect a variety of surfaces with a small number of feature points or with a small ratio of feature points to noise points, from either binocular disparity alone or motion alone. The interaction of binocular disparity and motion in surface detection, however, has not been previously investigated.
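The detection task summarized in the abstract, and the earlier single-cue studies cited above, use two-interval displays: one interval contains dots lying on a smooth surface (for example, a sinusoidally corrugated one) together with added noise dots, while the other contains only dots scattered through a volume. A minimal sketch of how such dot stimuli might be generated follows; the corrugation frequency, amplitude, dot counts, and volume depth are illustrative assumptions, not the parameters used in this paper.

```python
import numpy as np

def surface_interval(n_surface=60, n_noise=30, freq=2.0, amp=0.5,
                     extent=4.0, depth=1.0, rng=None):
    """Dots on a sinusoidally corrugated surface plus volume noise dots.

    freq : corrugation frequency (cycles across the display width)
    amp  : corrugation amplitude in depth (z) units
    All values are illustrative, not the parameters from the paper.
    """
    rng = np.random.default_rng(rng)
    x = rng.uniform(-extent / 2, extent / 2, n_surface)
    y = rng.uniform(-extent / 2, extent / 2, n_surface)
    z = amp * np.sin(2 * np.pi * freq * x / extent)       # surface dots
    xn = rng.uniform(-extent / 2, extent / 2, n_noise)
    yn = rng.uniform(-extent / 2, extent / 2, n_noise)
    zn = rng.uniform(-depth, depth, n_noise)               # added noise dots
    return np.column_stack([np.r_[x, xn], np.r_[y, yn], np.r_[z, zn]])

def noise_interval(n_points=90, extent=4.0, depth=1.0, rng=None):
    """All dots scattered uniformly through a volume (the 'noise' interval)."""
    rng = np.random.default_rng(rng)
    return np.column_stack([
        rng.uniform(-extent / 2, extent / 2, n_points),
        rng.uniform(-extent / 2, extent / 2, n_points),
        rng.uniform(-depth, depth, n_points),
    ])

# One trial: the observer must say which interval contained the surface.
trial = [surface_interval(rng=1), noise_interval(rng=2)]
```

The signal-to-noise ratio manipulated in the earlier studies corresponds here to the ratio of surface dots to noise dots (n_surface / n_noise).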
Although the interaction of binocular disparity and motion has not been investigated in detection tasks, the interaction of depth cues in determining the perceived shape of objects has been a subject of considerable attention. Bülthoff and Mallot (1988) discussed four ways in which depth information from different cues may be combined: accumulation, veto, disambiguation, and cooperation. In accumulation, the depths computed from the various cues are combined additively to provide an overall perceived depth. Veto implies that the depth computed from one cue determines the perceived depth, overriding the depths computed from other cues. Disambiguation refers to the use of information from one cue to disambiguate the depth information provided by another cue. In cooperation, the effectiveness of one cue can be enhanced by information from another cue, resulting in a combined depth that is greater than the sum of the depths provided by each cue in isolation. Accumulation and veto are consistent with a model of cue combination in which depths are computed separately by each cue, that is, a strict modularity or weak fusion model. Disambiguation and cooperation, on the other hand, involve interactions of the cues prior to the final depth computation and are not consistent with weak fusion. Disambiguation and cooperation can occur in the absence of modularity, that is, with strong fusion, but they can also occur with modules that interact prior to computing a final depth map but that still compute separate depth maps. Landy, Maloney, Johnston, and Young (1995) refer to this as modified weak fusion.

All four types of interactions discussed by Bülthoff and Mallot (1988) have been found in research concerned with recovering 3-D shape from combinations of binocular disparity and motion information. Johnston, Cumming, and Landy (1994) found that when binocular disparity and motion parallax indicated different curvatures, the perceived shape was a linear combination of the curvatures simulated, with the weights constrained to sum to one. Norman and Todd (1995) found that one source of information would veto another when binocular disparity and motion indicated conflicting shapes, but the dominant source varied between subjects. Binocular disparity was the dominant source of information for determining surface depth when placed in conflict with motion parallax (Rogers & Collett, 1989). Motion information that was not compatible with the perceived shape was interpreted as relative motion within the surface. Tittle and Braunstein (1993) found a cooperative relationship between binocular disparity and SFM for transparent displays with high disparity. For some conditions, judged depth from combined binocular disparity and SFM was greater than the sum of the judged depths with either cue alone. Landy et al. (1995) interpreted this result in terms of changing the weights within a linear-combination approach. The role of binocular disparity in disambiguating SFM displays has been the subject of both theoretical analyses (Richards, 1985) and empirical investigations (Braunstein, Andersen, Rouse, & Tittle, 1986; Rouse, Tittle, & Braunstein, 1989).
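Johnston, Cumming, and Landy (1994), as summarized above, found perceived shape to be a weighted linear combination of the curvatures specified by disparity and motion, with the weights constrained to sum to one; Landy et al. (1995) treat conflicts as changes in those weights. The sketch below contrasts that combination rule with a simple veto (winner-take-all) rule; the weight value and curvature numbers are made up for illustration and are not estimates from these studies.

```python
def linear_combination(curv_disparity, curv_motion, w_disparity=0.7):
    """Weighted-average rule: weights constrained to sum to one.

    w_disparity is an illustrative weight, not a value estimated in the papers.
    """
    w_motion = 1.0 - w_disparity
    return w_disparity * curv_disparity + w_motion * curv_motion

def veto(curv_disparity, curv_motion, dominant="disparity"):
    """Veto / winner-take-all rule: one cue alone determines the percept."""
    return curv_disparity if dominant == "disparity" else curv_motion

# Cues in conflict: disparity signals a curvature of 1.0, motion signals 0.2.
print(linear_combination(1.0, 0.2))   # ~0.76: an intermediate percept
print(veto(1.0, 0.2))                 # 1.0: disparity dominates outright
```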
In examining the relationship between binocular disparity and motion parallax in the detection of smooth surfaces, we will consider three combinations of information from these cues. Each of these combinations will be compared with detection with each cue presented in isolation. First, both cues can indicate the presence of a surface. In this case, the question of interest is how detection from the combined information compares with detection from each cue separately. The possible outcomes are that (1) detection is enhanced over that found with either cue alone, (2) detection is no better than that found with either cue separately (assuming that the effectiveness of the separate cues is matched), or (3) detection is inferior to that found with the separate cues. The first two outcomes could occur with any degree of modularity and would depend on the combination rules, with the second outcome consistent with a "winner-take-all" or veto rule. The third outcome, although unlikely, would suggest a form of strong fusion involving interference between the cues.

… scattered in a volume. This combination provided the "noise" condition for our detection experiments.

In the three experiments presented here, the subject's task was to determine which of two sequentially presented displays contained information indicating the presence of a smooth surface. In the first experiment, detection was examined for binocular disparity and motion parallax separately and for the two sources of information in combination. The results of this experiment indicated a veto of motion information by binocular disparity when the two sources were present and only one indicated a smooth surface. The second and third experiments examined two methods of improving surface detection when motion carried the surface information: providing a pretrial cue indicating whether disparity or motion would carry the surface information on each trial, or presenting the trials in blocks and informing the subject of the relevant cue for each block.

EXPERIMENT 1

In Experiment 1, we examined the interaction of binocular disparity and motion information in determining the accuracy of surface detection both when the two cues provided disparate information about the presence of a smooth surface and when the two cues provided corresponding information. In a strict weak fusion model of cue interaction, the ability to register depth from one cue is independent of the information provided by other cues. If the two cues create independent depth maps, and the decision process can work independently on the two depth maps, then performance when the two cues agree is predicted by a probability-summation model. If the probability of detecting the surface using motion parallax information only is P(M), and the probability of detecting the surface using disparity information only is P(S), and the two are independent, then the probability of detecting the surface when both sources indicate the surface should be P(M & S) = P(M) + P(S) - P(M) * P(S). To determine whether probability summation occurs for surface detection from combined disparity and motion cues, we included conditions in which only disparity information …
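The probability-summation prediction stated above, P(M & S) = P(M) + P(S) - P(M) * P(S), is straightforward to compute from the single-cue detection rates. A minimal sketch follows, with single-cue probabilities chosen purely for illustration (they are not values reported in the paper), alongside the prediction of a veto or winner-take-all rule for comparison.

```python
def probability_summation(p_motion, p_disparity):
    """P(M & S) = P(M) + P(S) - P(M)P(S): detection from two independent cues."""
    return p_motion + p_disparity - p_motion * p_disparity

def veto_prediction(p_motion, p_disparity):
    """Winner-take-all: performance limited to the better single cue."""
    return max(p_motion, p_disparity)

# Illustrative single-cue detection probabilities (not data from the paper).
p_m, p_s = 0.75, 0.80
print(probability_summation(p_m, p_s))  # ~0.95: combined cues should help
print(veto_prediction(p_m, p_s))        # 0.80: no benefit beyond the best cue
```

The abstract's finding that compatible-cue detection was no better than the single-cue baselines is the pattern predicted by the second rule rather than by probability summation.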
Recommended publications
  • Binocular Disparity - Difference Between Two Retinal Images
    Depth II + Motion I. Lecture 13 (Chapters 6+8). Jonathan Pillow, Sensation & Perception (PSY 345 / NEU 325), Spring 2015. Depth from focus: tilt-shift photography (Keith Loutit: tilt-shift + time-lapse photography, http://vimeo.com/keithloutit/videos). Monocular depth cues - pictorial: occlusion, relative size, shadow, texture gradient, height in plane, linear perspective; non-pictorial: motion parallax, accommodation ("depth from focus"). Binocular depth cue: a depth cue that relies on information from both eyes. Two retinas capture different images. Finger-sausage illusion. Pen test: hold a pen out at half arm's length; with the other hand, see how rapidly you can place the cap on the pen, first using two eyes, then with one eye closed. Binocular depth cues: 1. Vergence angle - the angle between the eyes (convergence, divergence); if you know the angles, you can deduce the distance. 2. Binocular disparity - the difference between the two retinal images. Stereopsis - depth perception that results from binocular disparity information (this is what they're offering in "3D movies"). Retinal images in the left and right eyes: figuring out the depth from these two images is a challenging computational problem (can you reason it out?). Horopter: the circle of points that fall at zero disparity (i.e., they land on corresponding parts of the two retinas); a bit of geometric reasoning will convince you that this surface is a circle containing the fixation point and the two eyes. Points with uncrossed disparity, points with crossed …
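The excerpt above notes that if you know the vergence angles, you can deduce the distance. As a rough sketch of that geometry (assuming symmetric fixation on the midline and an interpupillary distance IPD), the fixation distance D and the vergence angle theta are related by

\[ D \;\approx\; \frac{\mathrm{IPD}}{2\tan(\theta/2)} \;\approx\; \frac{\mathrm{IPD}}{\theta} \quad (\theta \ \text{small, in radians}), \]

so, for example, an IPD of 6.5 cm and a vergence angle of about 0.065 rad (roughly 3.7 degrees) corresponds to a fixation distance of about 1 m.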
  • Recovery of 3-D Shape from Binocular Disparity and Structure from Motion
    Perception & Psychophysics, 1993, 54 (2), 157-169. Recovery of 3-D shape from binocular disparity and structure from motion. JAMES S. TITTLE, The Ohio State University, Columbus, Ohio, and MYRON L. BRAUNSTEIN, University of California, Irvine, California. Four experiments were conducted to examine the integration of depth information from binocular stereopsis and structure from motion (SFM), using stereograms simulating transparent cylindrical objects. We found that the judged depth increased when either rotational or translational motion was added to a display, but the increase was greater for rotating (SFM) displays. Judged depth decreased as texture element density increased for static and translating stereo displays, but it stayed relatively constant for rotating displays. This result indicates that SFM may facilitate stereo processing by helping to resolve the stereo correspondence problem. Overall, the results from these experiments provide evidence for a cooperative relationship between SFM and binocular disparity in the recovery of 3-D relationships from 2-D images. These findings indicate that the processing of depth information from SFM and binocular disparity is not strictly modular, and thus theories of combining visual information that assume strong modularity or independence cannot accurately characterize all instances of depth perception from multiple sources. Human observers can perceive the 3-D shape and orientation in depth of objects in a natural environment with impressive accuracy. Prior work demonstrates that information about shape and orientation can be recovered from … Both sources of information (along with many others) occur together in the natural environment. Thus, it is important to understand what interaction (if any) exists in the visual processing of binocular disparity and SFM.
  • Binocular Vision
    BINOCULAR VISION. Rahul Bhola, MD, Pediatric Ophthalmology Fellow, The University of Iowa, Department of Ophthalmology & Visual Sciences. Posted Jan. 18, 2006, updated Jan. 23, 2006. Binocular vision is one of the hallmarks of the human race that has bestowed on it the supremacy in the hierarchy of the animal kingdom. It is an asset with normal alignment of the two eyes, but becomes a liability when the alignment is lost. Binocular Single Vision may be defined as the state of simultaneous vision, which is achieved by the coordinated use of both eyes, so that separate and slightly dissimilar images arising in each eye are appreciated as a single image by the process of fusion. Thus binocular vision implies fusion, the blending of sight from the two eyes to form a single percept. Binocular Single Vision can be: 1. Normal - Binocular Single Vision can be classified as normal when it is bifoveal and there is no manifest deviation. 2. Anomalous - Binocular Single Vision is anomalous when the images of the fixated object are projected from the fovea of one eye and an extrafoveal area of the other eye, i.e. when the visual direction of the retinal elements has changed. A small manifest strabismus is therefore always present in anomalous Binocular Single Vision. Normal Binocular Single Vision requires: 1. A clear visual axis leading to reasonably clear vision in both eyes. 2. The ability of the retino-cortical elements to function in association with each other to promote the fusion of two slightly dissimilar images, i.e. sensory fusion. 3. The precise co-ordination of the two eyes for all directions of gaze, so that corresponding retino-cortical elements are placed in a position to deal with two images, i.e. …
  • Motion Parallax Is Asymptotic to Binocular Disparity
    Motion Parallax is Asymptotic to Binocular Disparity, by Keith Stroyan, Mathematics Department, University of Iowa, Iowa City, Iowa. Abstract: Researchers, especially beginning with Rogers & Graham (1982), have noticed important psychophysical and experimental similarities between the neurologically different motion parallax and stereopsis cues. Their quantitative analysis relied primarily on the "disparity equivalence" approximation. In this article we show that retinal motion from lateral translation satisfies a strong ("asymptotic") approximation to binocular disparity. This precise mathematical similarity is also practical in the sense that it applies at normal viewing distances. The approximation is an extension to peripheral vision of Cormac & Fox's (1985) well-known non-trig central vision approximation for binocular disparity. We hope our simple algebraic formula will be useful in analyzing experiments outside central vision where less precise approximations have led to a number of quantitative errors in the vision literature. Introduction: A translating observer viewing a rigid environment experiences motion parallax, the relative movement on the retina of objects in the scene. Previously we (Nawrot & Stroyan, 2009; Stroyan & Nawrot, 2009) derived mathematical formulas for relative depth in terms of the ratio of the retinal motion rate over the smooth pursuit eye tracking rate, called the motion/pursuit law. The need for inclusion of the extra-retinal information from the pursuit eye movement system for depth perception from motion …
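The motion/pursuit law mentioned above expresses relative depth as the ratio of the retinal motion rate to the smooth pursuit rate. As a hedged geometric sketch of the form that relation takes for lateral observer translation (the exact statement and sign conventions in Nawrot & Stroyan, 2009, may differ), with f the fixation distance and d the depth of a second point beyond fixation,

\[ \frac{d\theta/dt}{d\alpha/dt} \;=\; \frac{d}{f+d} \;\approx\; \frac{d}{f} \quad (d \ll f), \]

where d\theta/dt is the relative (parallax) motion of the non-fixated point on the retina and d\alpha/dt is the pursuit rate required to hold fixation during the translation.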
  • Depth Perception, Part II
    Depth Perception, part II. Lecture 13 (Chapter 6). Jonathan Pillow, Sensation & Perception (PSY 345 / NEU 325), Fall 2017. Nice illusions video - car ad (2013) (anamorphosis, linear perspective, accidental viewpoints, shadows, depth/size illusions): https://www.youtube.com/watch?v=dNC0X76-QRI. Accommodation - "depth from focus" (near vs. far). Depth and scale estimation from accommodation: "tilt shift photography". More on tilt shift: Van Gogh; tilt shift on Van Gogh paintings: http://www.mymodernmet.com/profiles/blogs/van-goghs-paintings-get. Countering the depth-from-focus cue. Monocular depth cues - pictorial: occlusion, relative size, shadow, texture gradient, height in plane, linear perspective; non-pictorial: motion parallax, accommodation ("depth from focus"). Binocular depth cue: a depth cue that relies on information from both eyes. Two retinas capture different images. Finger-sausage illusion. Pen test: hold a pen out at half arm's length; with the other hand, see how rapidly you can place the cap on the pen, first using two eyes, then with one eye closed. Binocular depth cues: 1. Vergence angle - the angle between the eyes (convergence, divergence); if you know the angles, you can deduce the distance. 2. Binocular disparity - the difference between the two retinal images. Stereopsis - depth perception that results from binocular disparity information (this is what they're offering in 3D movies). Retinal images in left & right eyes: figuring out the depth from these two images is a challenging computational problem.
  • Perceiving Depth and Size Depth
    Perceiving Depth and Size (Visual Perception; slides by Aditi Majumder, UCI). Cue approach: identifies information on the retina and correlates it with the depth of the scene, using different cues and previous knowledge. Depth cues: oculomotor, monocular, binocular. Oculomotor cues: convergence (inward movement for nearby objects, outward movement for farther objects) and accommodation (tightening of the muscles that change the shape of the lens). Monocular cues (pictorial): occlusion, atmospheric perspective, relative height, linear perspective, cast shadows, texture gradient, relative size, familiar size; movement-based cues: motion parallax, deletion and accretion. Conflicting cues. Presence of ground: it helps in perception; removal of the ground causes difficulty in perception. Motion parallax: farther points have less parallax, closer points move more. Binocular cues: binocular disparity and stereopsis, corresponding points, correspondence problems, disparity information in the brain.
  • Disparity Channels in Early Vision
    The Journal of Neuroscience, October 31, 2007, 27(44):11820-11831. Symposium. Disparity Channels in Early Vision. Anna W. Roe (Department of Psychology, Vanderbilt University, Nashville, Tennessee 37203), Andrew J. Parker (Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, United Kingdom), Richard T. Born (Department of Neurobiology, Harvard Medical School, Boston, Massachusetts 02115), and Gregory C. DeAngelis (Brain and Cognitive Sciences, University of Rochester, Rochester, New York 14627). The past decade has seen a dramatic increase in our knowledge of the neural basis of stereopsis. New cortical areas have been found to represent binocular disparities, new representations of disparity information (e.g., relative disparity signals) have been uncovered, the first topographic maps of disparity have been measured, and the first causal links between neural activity and depth perception have been established. Equally exciting is the finding that training and experience affect how signals are channeled through different brain areas, a flexibility that may be crucial for learning, plasticity, and recovery of function. The collective efforts of several laboratories have established stereo vision as one of the most productive model systems for elucidating the neural basis of perception. Much remains to be learned about how the disparity signals that are initially encoded in primary visual cortex are routed to and processed by extrastriate areas to mediate the diverse capacities of three-dimensional vision that enhance our daily experience of the world. Key words: stereopsis; binocular disparity; MT primate; topography; choice probability. Linking psychology and physiology in binocular vision …
  • Depth Perception 0
    Depth Perception. Suggested reading: Stephen E. Palmer (1999) Vision Science; Read JCA (2005) Early computational processing in binocular vision and depth perception, Progress in Biophysics and Molecular Biology 87: 77-108. Contents: the problem of depth perception; depth cues; Panum's fusional area; binocular disparity; random dot stereograms; the correspondence problem; early models; disparity sensitive cells in V1; disparity tuning curve; binocular energy model; the role of higher visual areas. The problem of depth perception: our 3-dimensional environment gets reduced to two-dimensional images on the two retinae (viewing geometry, retinal image). The "lost" dimension is commonly called depth. As depth is crucial for almost all of our interaction with our environment, and we perceive the 2D images as three dimensional, the brain must try to estimate the depth from the retinal images as well as it can with the help of different cues and mechanisms. Depth cues - monocular: texture accretion/deletion (optical), accommodation (ocular), motion parallax (optical), convergence of parallels (optical), position relative to horizon (optical), familiar size (optical), relative size (optical), texture gradients (optical), edge interpretation (optical), shading and shadows (optical), aerial perspective (optical). Depth cues - binocular: convergence (ocular), binocular disparity (optical). As the eyes are horizontally displaced, we see the world from two slightly different viewpoints.
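The contents above mention disparity-sensitive cells in V1 and the binocular energy model. As a rough illustration of the standard energy-model idea (a binocular simple cell sums left- and right-eye Gabor filter responses, and a complex cell sums the squares of a quadrature pair), here is a hedged numpy sketch; the filter parameters and the use of a position-shifted right-eye field are illustrative choices, not the specific model in these notes.

```python
import numpy as np

def gabor(x, sigma=1.0, freq=1.0, phase=0.0):
    """1-D Gabor receptive-field profile."""
    return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x + phase)

def disparity_energy(left_img, right_img, x, preferred_disparity, sigma=1.0, freq=1.0):
    """Energy-model response of a unit tuned to `preferred_disparity`.

    Simple-cell stage: linear sum of left and right filter outputs, with the
    right-eye field shifted by the preferred disparity (a position-shift model).
    Complex-cell stage: sum of squared responses of a quadrature (0/90 deg) pair.
    """
    energy = 0.0
    for phase in (0.0, np.pi / 2):                        # quadrature pair
        f_left = gabor(x, sigma, freq, phase)
        f_right = gabor(x - preferred_disparity, sigma, freq, phase)
        simple = f_left @ left_img + f_right @ right_img
        energy += simple**2
    return energy

# Toy stimulus: a 1-D random-dot pattern; the right eye's image is the left
# eye's image shifted by 2 samples.
rng = np.random.default_rng(0)
x = np.linspace(-8, 8, 257)
dx = x[1] - x[0]
pattern = rng.standard_normal(x.size)
left, right = pattern, np.roll(pattern, 2)

tuning = {d: disparity_energy(left, right, x, d * dx) for d in range(-4, 5)}
print(max(tuning, key=tuning.get))   # expected to peak near d = 2, the true shift
```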
  • Binocular Eye Movements Are Adapted to the Natural Environment
    The Journal of Neuroscience, April 10, 2019, 39(15):2877-2888. Systems/Circuits. Binocular Eye Movements Are Adapted to the Natural Environment. Agostino Gibaldi and Martin S. Banks, Vision Science Program, School of Optometry, University of California, Berkeley, Berkeley, California 94720. Humans and many animals make frequent saccades requiring coordinated movements of the eyes. When landing on the new fixation point, the eyes must converge accurately or double images will be perceived. We asked whether the visual system uses statistical regularities in the natural environment to aid eye alignment at the end of saccades. We measured the distribution of naturally occurring disparities in different parts of the visual field. The central tendency of the distributions was crossed (nearer than fixation) in the lower field and uncrossed (farther) in the upper field in male and female participants. It was uncrossed in the left and right fields. We also measured horizontal vergence after completion of vertical, horizontal, and oblique saccades. When the eyes first landed near the eccentric target, vergence was quite consistent with the natural-disparity distribution. For example, when making an upward saccade, the eyes diverged to be aligned with the most probable uncrossed disparity in that part of the visual field. Likewise, when making a downward saccade, the eyes converged to enable alignment with crossed disparity in that part of the field. Our results show that rapid binocular eye movements are adapted to the statistics of the 3D environment, minimizing the need for large corrective vergence movements at the end of saccades. The results are relevant to the debate about whether eye movements are derived from separate saccadic and vergence neural commands that control both eyes or from separate monocular commands that control the eyes independently.
  • Visually Imbalanced Stereo Matching
    Visually Imbalanced Stereo Matching. Yicun Liu, Jimmy Ren, Jiawei Zhang, Jianbo Liu, Mude Lin (SenseTime Research; The Chinese University of Hong Kong; Columbia University). {rensijie,zhangjiawei,linmude}@sensetime.com, [email protected], [email protected]. Abstract: Understanding of the human vision system (HVS) has inspired many computer vision algorithms. Stereo matching, which borrows the idea from human stereopsis, has been extensively studied in the existing literature. However, scant attention has been drawn to a typical scenario where binocular inputs are qualitatively different (e.g., high-res master camera and low-res slave camera in a dual-lens module). Recent advances in human optometry reveal the capability of the human visual system to maintain coarse stereopsis under such visually imbalanced conditions. Bionically aroused, it is natural to question: do stereo machines share the same capability? In this paper, we carry out a systematic comparison to investigate the effect of various imbalanced conditions on current popular stereo matching algorithms. We show that, resembling the human visual system, those algorithms can handle limited degrees of monocular downgrading but are also prone to collapse beyond a certain threshold. To avoid such collapse, we propose a solution to recover the stereopsis by a joint guided-view- … Figure 1. Illustration of visually imbalanced scenarios: (a) input left view; (b) input downgraded right view, from top to bottom: monocular blur, monocular blur with rectification error, monocular noise; (c) disparity predicted from mainstream monocular depth (only left view as input) / stereo matching (stereo views as input) algorithms: from top to bottom are DORN [9], PSMNet [7] and CRL [31].
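The excerpt describes stereo matching, in which correspondences between the left and right views are used to estimate disparity, and asks how such algorithms behave when one view is degraded. As a minimal, generic illustration (plain block matching with a sum-of-squared-differences cost, not the method of this paper), the sketch below estimates a disparity for one pixel and shows the effect of blurring one view.

```python
import numpy as np

def disparity_ssd(left, right, row, col, max_disp=16, patch=5):
    """Estimate disparity at (row, col) by block matching.

    Compares a patch in the left image against horizontally shifted patches
    in the right image and returns the shift with the lowest SSD cost.
    """
    h = patch // 2
    ref = left[row - h:row + h + 1, col - h:col + h + 1]
    costs = []
    for d in range(max_disp + 1):
        cand = right[row - h:row + h + 1, col - d - h:col - d + h + 1]
        costs.append(np.sum((ref - cand) ** 2))
    return int(np.argmin(costs)), costs

# Toy pair: the right view is the left view shifted 6 pixels to the left.
rng = np.random.default_rng(0)
left = rng.random((64, 128))
right = np.roll(left, -6, axis=1)

d, _ = disparity_ssd(left, right, row=32, col=64)
print(d)   # 6 for the clean pair (the candidate patch matches exactly)

# Degrade one view (crude box blur) to mimic an imbalanced input.
kernel = np.ones(5) / 5
right_blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, right)
d_blur, _ = disparity_ssd(left, right_blurred, row=32, col=64)
print(d_blur)  # the cost minimum is much shallower; heavier degradation can collapse it
```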
  • Depth Interval Estimates from Motion Parallax and Binocular Disparity Beyond Interaction Space Barbara Gillam University of New South Wales
    University of Wollongong Research Online Faculty of Health and Behavioural Sciences - Papers Faculty of Science, Medicine and Health (Archive) 2011 Depth interval estimates from motion parallax and binocular disparity beyond interaction space Barbara Gillam University of New South Wales Stephen Palmisano University of Wollongong, [email protected] Donovan Govan University of New South Wales Publication Details Gillam, B., Palmisano, S. A. & Govan, D. G. (2011). Depth interval estimates from motion parallax and binocular disparity beyond interaction space. Perception, 40 (1), 39-49. Research Online is the open access institutional repository for the University of Wollongong. For further information contact the UOW Library: [email protected] Depth interval estimates from motion parallax and binocular disparity beyond interaction space Abstract Static and dynamic observers provided binocular and monocular estimates of the depths between real objects lying well beyond interaction space. On each trial, pairs of LEDs were presented inside a dark railway tunnel. The nearest LED was always 40 m from the observer, with the depth separation between LED pairs ranging from 0 up to 248 m. Dynamic binocular viewing was found to produce the greatest (ie most veridical) estimates of depth magnitude, followed next by static binocular viewing, and then by dynamic monocular viewing. (No significant depth was seen with static monocular viewing.) We found evidence that both binocular and monocular dynamic estimates of depth were scaled for the observation distance when the ground plane and walls of the tunnel were visible up to the nearest LED. We conclude that both motion parallax and stereopsis provide useful long-distance depth information and that motion parallax information can enhance the degree of stereoscopic depth seen.
  • Stability of Binocular Depth Perception with Moving Head and Eyes RAYMOND Van EE,*T CASPER J
    Vision Res., Vol. 36, No. 23, pp. 3827-3842, 1996. Pergamon. Copyright © 1996 Elsevier Science Ltd. All rights reserved. PII: S0042-6989(96)00103-4. Printed in Great Britain. 0042-6989/96 $15.00 + 0.00. Stability of Binocular Depth Perception with Moving Head and Eyes. RAYMOND van EE, CASPER J. ERKELENS. Received 23 February 1995; in revised form 22 December 1995; in final form 26 March 1996. We systematically analyse the binocular disparity field under various eye, head and stimulus positions and orientations. From the literature we know that certain classes of disparity which involve the entire disparity field (such as those caused by horizontal lateral shift, differential rotation, horizontal scale and horizontal shear between the entire half-images of a stereogram) lead to relatively poor depth perception in the case of limited observation periods. These classes of disparity are found to be similar to the classes of disparities which are brought about by eye and head movements. Our analysis supports the suggestion that binocular depth perception is based primarily (for the first few hundred milliseconds) on classes of disparity that do not change as a result of ego-movement. Copyright © 1996 Elsevier Science Ltd. Binocular disparity; binocular vision; depth perception; stereopsis; robot vision. INTRODUCTION: The separation of the human eyes causes each eye to see a disparate image of the outside world. Generally, it has been accepted that positional disparities are sufficient to generate a three-dimensional (3D) percept (e.g. … movement, the disparity field before and after the movements will also change. However, this time the disparity of the object and environment change together. In short, during eye and head movements, the images of the entire visual world are continuously changing on both …