Recovery of 3-D Shape from Binocular Disparity and Structure from Motion

Total Pages: 16

File Type: PDF, Size: 1020 KB

Perception & Psychophysics, 1993, 54 (2), 157-169. Copyright 1993 Psychonomic Society, Inc.

Recovery of 3-D shape from binocular disparity and structure from motion

JAMES S. TITTLE, The Ohio State University, Columbus, Ohio, and MYRON L. BRAUNSTEIN, University of California, Irvine, California

Four experiments were conducted to examine the integration of depth information from binocular stereopsis and structure from motion (SFM), using stereograms simulating transparent cylindrical objects. We found that the judged depth increased when either rotational or translational motion was added to a display, but the increase was greater for rotating (SFM) displays. Judged depth decreased as texture element density increased for static and translating stereo displays, but it stayed relatively constant for rotating displays. This result indicates that SFM may facilitate stereo processing by helping to resolve the stereo correspondence problem. Overall, the results from these experiments provide evidence for a cooperative relationship between SFM and binocular disparity in the recovery of 3-D relationships from 2-D images. These findings indicate that the processing of depth information from SFM and binocular disparity is not strictly modular, and thus theories of combining visual information that assume strong modularity or independence cannot accurately characterize all instances of depth perception from multiple sources.

Human observers can perceive the 3-D shape and orientation in depth of objects in a natural environment with impressive accuracy. Prior work demonstrates that information about shape and orientation can be recovered from binocular disparity (e.g., Ono & Comerford, 1977) and structure from motion (SFM) (e.g., Braunstein, 1976). However, neither SFM nor binocular disparity in isolation specifies completely both an object's shape and orientation relative to an observer. Structure from motion can provide information about object shape, but near/far relations between the object and the observer are not specified (Wallach & O'Connell, 1953).1 Binocular disparity, on the other hand, unambiguously specifies near/far relations, but the shape of the object is only determined up to a scale factor: perceived fixation distance (Gogel, 1960; Ogle, 1953; Wallach & Zuckerman, 1962). Because each of these sources of information specifies what the other lacks, the combination of SFM and binocular disparity could provide a more robust representation of an object than would either source alone (Richards, 1985). In addition, both sources of information (along with many others) occur together in the natural environment. Thus, it is important to understand what interaction (if any) exists in the visual processing of binocular disparity and SFM.

Recent research has considered how the human visual system combines depth information from multiple sources (Braunstein, Andersen, Rouse, & Tittle, 1986; Bruno & Cutting, 1988; Bülthoff & Mallot, 1988; Dosher, Sperling, & Wurst, 1986; Rogers & Collett, 1989; Rouse, Tittle, & Braunstein, 1989). Specifically, Bülthoff and Mallot discuss four types of interactions between different sources of depth information: accumulation, disambiguation, veto, and cooperation. Accumulation refers to an additive relationship between the various processes that is consistent with modularity in visual information processing (Fodor, 1983). The perceived depth from this type of interaction is a weighted summation or integration of the depth specified by each separate process. Empirical evidence for an accumulative relationship has been provided for stereo and relative brightness (Dosher et al., 1986), motion parallax and pictorial depth cues (Bruno & Cutting, 1988), and motion parallax and binocular disparity (Rogers & Collett, 1989). A veto relationship between visual processes is one in which one source of information completely dominates others. It is also consistent with a modular conceptualization of visual depth processing. An example of a disambiguating relationship between two sources of information comes from a study by Andersen and Braunstein (1983), who used dynamic occlusion to disambiguate the order of perceived depth in orthographic SFM displays. Disambiguation of an SFM display can also result from adaptation to a stereo display (Nawrot & Blake, 1991). A cooperative interaction between visual processes is one in which strict modularity does not hold. Instead, information from one process facilitates the functioning of another process.

Richards (1985) has proposed a specific relationship between stereopsis and SFM that eliminates the ambiguities inherent in each source of information. According to this analysis, SFM provides information for 3-D shape and stereopsis disambiguates depth order. The derivation of 3-D shape from motion assumes parallel projection and planar motion of the points. Using this approach, Richards shows that with two stereo views of three points, the depth coordinates of those points are uniquely determined. Such a relationship between stereo and SFM would achieve shape constancy while still preserving unambiguous viewer-relative depth relations. However, one assumption of this analysis is that the magnitude of perceived depth from stereopsis and SFM depends solely on the SFM information. Early empirical research on conflicting stereo and SFM information is consistent with Richards's (1985) theoretical analysis.

Specifically, Wallach, Moore, and Davidson (1963), and Epstein (1968) found that stereopsis is recalibrated by prolonged adaptation to conflicting SFM information. In these studies, wire-frame objects were viewed through a telestereoscope that exaggerated the binocular disparities projected onto the retinae. Thus, when the objects were rotated, the binocular disparity information was consistent with an object much deeper than that indicated by the SFM information. Observers made depth judgments of static stereo displays before and after a 10-min adaptation period. Epstein found that the same amount of disparity was judged to have less depth after the adaptation period than before it. These findings seem to provide evidence that SFM dominates stereopsis, and may indeed solely determine the magnitude of perceived depth when combined with binocular disparities, as suggested by Richards.

However, Fisher and Ebenholtz (1986) showed that depth aftereffects, of the type reported in the previous research, occurred even when stereo and SFM were not presented together in the adapting stimulus. Their results … still able to make accurate curvature discriminations on the basis of SFM.

In a study examining the contributions of relative brightness and stereopsis to the perception of sign of depth, Dosher et al. (1986) presented observers with perspective projections of rotating wire-frame Necker cubes. They found that direction-of-rotation judgments were primarily determined by binocular disparity. However, the relative brightness cue did have some influence on the rotation judgments, and the results were fit closely by a model in which the strength of evidence from each source of information was algebraically added. Their weighted linear summation model was similar to a more general approach for describing information integration developed by Anderson (1981, 1982).

Braunstein et al. (1986) showed that binocular disparity can disambiguate the sign of depth for computer-generated displays consisting of orthographic projections of texture elements on the surface of rotating spheres. They also found that, for opaque stimuli, dynamic occlusion information usually vetoed conflicting binocular disparity information in determining perceived sign of depth (measured by direction-of-rotation judgments). However, for transparent stimuli, direction-of-rotation judgments were primarily determined by binocular disparities. Braunstein et al. also found that 6 subjects who could not detect any targets on a static stereo test (Randot circles) were able to make accurate direction-of-rotation judgments for stereo-SFM displays on over 90% of the trials. Because the direction of rotation is inherently ambiguous in an orthographic projection, these static stereo-deficient observers must have been able to use the binocular disparity information in the stereo-SFM displays, even though they did not respond correctly to binocular disparities on a standard static stereo test.

Rouse et al. (1989) examined this effect further, and found that 7 out of 11 static stereo-deficient observers could judge, with perfect accuracy, direction of rotation with combined stereo-SFM displays similar to those used by Braunstein …

Author note: This research was supported by National Science Foundation Grant BNS-8819565. Portions of this research were presented at the SPIE Sensor Fusion III Conference (Tittle & Braunstein, 1991) and at the Annual Meeting of the Association for Research in Vision and Ophthalmology (Tittle & Braunstein, 1989). This research was completed as partial fulfillment of the requirements for the Ph.D. at the University of California, Irvine by the first author. We would like to thank Asad Saidpour for assistance in developing the stimulus displays and Joseph Lappin for helpful comments on an earlier draft of this paper. Correspondence should be addressed to J. S. Tittle, Department of Psychology, Ohio State University, 140D Lazenby Hall, 1827 Neil Avenue, Columbus, Ohio 43210 (e-mail: [email protected]).
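The "accumulation" relationship discussed above, in which perceived depth is a weighted summation of the depth specified by each separate cue, can be made concrete with a small sketch. This is only an illustration of the general idea: the function name `combine_depth_estimates`, the weights, and the cue values below are invented for the example and do not come from the paper.

```python
# Minimal sketch of weighted linear cue combination ("accumulation").
# Weights and cue values are illustrative, not from the paper.

def combine_depth_estimates(cues):
    """Combine per-cue depth estimates by reliability-weighted summation.

    cues: list of (depth_estimate, weight) pairs; weights need not sum to 1.
    Returns the weighted average depth.
    """
    total_weight = sum(w for _, w in cues)
    if total_weight == 0:
        raise ValueError("at least one cue must have nonzero weight")
    return sum(d * w for d, w in cues) / total_weight

# Example: stereo signals 10 cm of depth, SFM signals 14 cm;
# stereo is trusted three times as much as SFM.
depth = combine_depth_estimates([(10.0, 0.75), (14.0, 0.25)])
print(depth)  # 11.0
```

Normalizing by the total weight keeps the combined estimate inside the range of the individual cue estimates, which is the behavior a weighted linear summation model of this kind predicts.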
Recommended publications
  • Binocular Disparity - Difference Between Two Retinal Images
    Depth II + Motion I. Lecture 13 (Chapters 6+8). Jonathan Pillow, Sensation & Perception (PSY 345 / NEU 325), Spring 2015. Depth from focus: tilt-shift photography (Keith Loutit: tilt shift + time-lapse photography, http://vimeo.com/keithloutit/videos). Monocular depth cues: Pictorial: occlusion, relative size, shadow, texture gradient, height in plane, linear perspective; Non-Pictorial: motion parallax, accommodation ("depth from focus"). Binocular depth cue: a depth cue that relies on information from both eyes. Two retinas capture different images. Finger-Sausage Illusion. Pen Test: hold a pen out at half arm's length; with the other hand, see how rapidly you can place the cap on the pen, first using two eyes, then with one eye closed. Binocular depth cues: 1. Vergence angle - the angle between the eyes (convergence, divergence); if you know the angles, you can deduce the distance. 2. Binocular disparity - the difference between the two retinal images. Stereopsis - depth perception that results from binocular disparity information (this is what they're offering in "3D movies"...). Retinal images in left and right eyes: figuring out the depth from these two images is a challenging computational problem. (Can you reason it out?) Horopter: the circle of points that fall at zero disparity (i.e., they land on corresponding parts of the two retinas); a bit of geometric reasoning will convince you that this surface is a circle containing the fixation point and the two eyes. Point with uncrossed disparity; point with crossed
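The slides' claim that knowing the vergence angles lets you deduce distance follows from simple triangle geometry. A minimal sketch, assuming symmetric fixation straight ahead and a typical interocular distance of about 6.3 cm; the function name and the numeric values are illustrative assumptions, not from the lecture:

```python
import math

# Vergence-to-distance geometry sketch. For symmetric fixation, each eye
# rotates inward by half the vergence angle, so the fixated point, the
# two eyes, and the midline form two right triangles.

def distance_from_vergence(vergence_deg, ipd_m=0.063):
    """Distance (meters) to a symmetrically fixated point.

    vergence_deg: total angle between the two lines of sight.
    ipd_m: interocular distance in meters (illustrative default).
    """
    half = math.radians(vergence_deg) / 2.0
    return (ipd_m / 2.0) / math.tan(half)

# About 3.6 degrees of vergence with a 6.3 cm IPD puts the target
# near 1 meter; doubling the vergence roughly halves the distance.
print(round(distance_from_vergence(3.6), 2))
```

The inverse relation is why vergence is an effective depth cue only at near distances: beyond a few meters the angle changes too little to measure reliably.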
  • Relationship Between Binocular Disparity and Motion Parallax in Surface Detection
    Perception & Psychophysics, 1997, 59 (3), 370-380. Relationship between binocular disparity and motion parallax in surface detection. JESSICA TURNER and MYRON L. BRAUNSTEIN, University of California, Irvine, California, and GEORGE J. ANDERSEN, University of California, Riverside, California. The ability to detect surfaces was studied in a multiple-cue condition in which binocular disparity and motion parallax could specify independent depth configurations. On trials on which binocular disparity and motion parallax were presented together, either binocular disparity or motion parallax could indicate a surface in one of two intervals; in the other interval, both sources indicated a volume of random points. Surface detection when the two sources of information were present and compatible was not better than detection in baseline conditions, in which only one source of information was present. When binocular disparity and motion specified incompatible depths, observers' ability to detect a surface was severely impaired if motion indicated a surface but binocular disparity did not. Performance was not as severely degraded when binocular disparity indicated a surface and motion did not. This dominance of binocular disparity persisted in the presence of foreknowledge about which source of information would be relevant. The ability to detect three-dimensional (3-D) surfaces has been studied previously using static stereo displays (e.g., Uttal, 1985), motion parallax displays (Andersen, 1996; Andersen & Wuestefeld, 1993), and structure-from-motion (SFM) displays (Turner, Braunstein, & Andersen, …). The interaction of depth cues in determining the perceived shape of objects has been a subject of considerable attention. Bülthoff and Mallot (1988) discussed four ways in which depth information from different cues may be combined: accumulation, veto, disambiguation, and cooperation.
  • Binocular Vision
    BINOCULAR VISION Rahul Bhola, MD Pediatric Ophthalmology Fellow The University of Iowa Department of Ophthalmology & Visual Sciences posted Jan. 18, 2006, updated Jan. 23, 2006 Binocular vision is one of the hallmarks of the human race that has bestowed on it the supremacy in the hierarchy of the animal kingdom. It is an asset with normal alignment of the two eyes, but becomes a liability when the alignment is lost. Binocular Single Vision may be defined as the state of simultaneous vision, which is achieved by the coordinated use of both eyes, so that separate and slightly dissimilar images arising in each eye are appreciated as a single image by the process of fusion. Thus binocular vision implies fusion, the blending of sight from the two eyes to form a single percept. Binocular Single Vision can be: 1. Normal – Binocular Single vision can be classified as normal when it is bifoveal and there is no manifest deviation. 2. Anomalous - Binocular Single vision is anomalous when the images of the fixated object are projected from the fovea of one eye and an extrafoveal area of the other eye i.e. when the visual direction of the retinal elements has changed. A small manifest strabismus is therefore always present in anomalous Binocular Single vision. Normal Binocular Single vision requires: 1. Clear Visual Axis leading to a reasonably clear vision in both eyes 2. The ability of the retino-cortical elements to function in association with each other to promote the fusion of two slightly dissimilar images i.e. Sensory fusion. 3. The precise co-ordination of the two eyes for all direction of gazes, so that corresponding retino-cortical element are placed in a position to deal with two images i.e.
  • Abstract Computer Vision in the Space of Light Rays
    ABSTRACT. Title of dissertation: COMPUTER VISION IN THE SPACE OF LIGHT RAYS: PLENOPTIC VIDEOGEOMETRY AND POLYDIOPTRIC CAMERA DESIGN. Jan Neumann, Doctor of Philosophy, 2004. Dissertation directed by: Professor Yiannis Aloimonos, Department of Computer Science. Most of the cameras used in computer vision, computer graphics, and image processing applications are designed to capture images that are similar to the images we see with our eyes. This enables an easy interpretation of the visual information by a human observer. Nowadays though, more and more processing of visual information is done by computers. Thus, it is worth questioning if these human inspired "eyes" are the optimal choice for processing visual information using a machine. In this thesis I will describe how one can study problems in computer vision without reference to a specific camera model by studying the geometry and statistics of the space of light rays that surrounds us. The study of the geometry will allow us to determine all the possible constraints that exist in the visual input and could be utilized if we had a perfect sensor. Since no perfect sensor exists we use signal processing techniques to examine how well the constraints between different sets of light rays can be exploited given a specific camera model. A camera is modeled as a spatio-temporal filter in the space of light rays which lets us express the image formation process in a function approximation framework. This framework then allows us to relate the geometry of the imaging camera to the performance of the vision system with regard to the given task.
  • Motion Parallax Is Asymptotic to Binocular Disparity
    Motion Parallax is Asymptotic to Binocular Disparity, by Keith Stroyan, Mathematics Department, University of Iowa, Iowa City, Iowa. Abstract: Researchers, especially beginning with Rogers and Graham (1982), have noticed important psychophysical and experimental similarities between the neurologically different motion parallax and stereopsis cues. Their quantitative analysis relied primarily on the "disparity equivalence" approximation. In this article we show that retinal motion from lateral translation satisfies a strong ("asymptotic") approximation to binocular disparity. This precise mathematical similarity is also practical in the sense that it applies at normal viewing distances. The approximation is an extension to peripheral vision of Cormack and Fox's (1985) well-known non-trig central vision approximation for binocular disparity. We hope our simple algebraic formula will be useful in analyzing experiments outside central vision, where less precise approximations have led to a number of quantitative errors in the vision literature. Introduction: A translating observer viewing a rigid environment experiences motion parallax, the relative movement on the retina of objects in the scene. Previously we (Nawrot & Stroyan, 2009; Stroyan & Nawrot, 2009) derived mathematical formulas for relative depth in terms of the ratio of the retinal motion rate over the smooth pursuit eye tracking rate, called the motion/pursuit law. The need for inclusion of the extra-retinal information from the pursuit eye movement system for depth perception from motion
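The motion/pursuit law mentioned in this abstract relates relative depth to the ratio of retinal motion rate to smooth-pursuit rate. The sketch below computes only that ratio as a first-order depth proxy; it is an illustrative simplification, not the exact formula derived by Nawrot and Stroyan, and the function name and the rate values are invented for the example:

```python
# Hedged sketch of the motion/pursuit idea: to a first approximation,
# relative depth is indexed by retinal motion rate / pursuit rate.
# This is a simplified illustration, not the cited papers' exact law.

def motion_pursuit_ratio(retinal_motion_rate, pursuit_rate):
    """Relative-depth proxy: retinal motion rate over pursuit rate.

    Both rates must be in the same angular units (e.g., deg/s), and
    pursuit_rate must be nonzero.
    """
    if pursuit_rate == 0:
        raise ValueError("pursuit rate must be nonzero")
    return retinal_motion_rate / pursuit_rate

# A point whose image slips at 0.5 deg/s while the eye pursues the
# fixated point at 5 deg/s gives a motion/pursuit ratio of 0.1.
print(motion_pursuit_ratio(0.5, 5.0))  # 0.1
```

The point of the abstract is that this ratio, which requires the extra-retinal pursuit signal, plays a role for motion parallax analogous to the one disparity plays for stereopsis.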
  • Stereopsis by a Neural Network Which Learns the Constraints
    Stereopsis by a Neural Network Which Learns the Constraints. Alireza Khotanzad and Ying-Wung Lee, Image Processing and Analysis Laboratory, Electrical Engineering Department, Southern Methodist University, Dallas, Texas 75275. Abstract: This paper presents a neural network (NN) approach to the problem of stereopsis. The correspondence problem (finding the correct matches between the pixels of the epipolar lines of the stereo pair from amongst all the possible matches) is posed as a non-iterative many-to-one mapping. A two-layer feed-forward NN architecture is developed to learn and code this nonlinear and complex mapping using the back-propagation learning rule and a training set. The important aspect of this technique is that none of the typical constraints such as uniqueness and continuity are explicitly imposed. All the applicable constraints are learned and internally coded by the NN, enabling it to be more flexible and more accurate than the existing methods. The approach is successfully tested on several random-dot stereograms. It is shown that the net can generalize its learned mapping to cases outside its training set. Advantages over the Marr-Poggio algorithm are discussed, and it is shown that the NN performance is superior. 1 INTRODUCTION: Three-dimensional image processing is an indispensable property for any advanced computer vision system. Depth perception is an integral part of 3-D processing. It involves computation of the relative distances of the points seen in the 2-D images to the imaging device. There are several methods to obtain depth information. A common technique is stereo imaging. It uses two cameras displaced by a known distance to generate two images of the same scene taken from these two different viewpoints.
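For contrast with the learned approach in this abstract, the correspondence problem it poses (matching points along epipolar lines) is classically attacked with block matching. Below is a minimal 1-D sum-of-absolute-differences matcher, a non-neural baseline; it is not the paper's network, and the function name, rows, and disparity range are illustrative:

```python
# Minimal 1-D SAD (sum of absolute differences) matcher along one
# scanline: a classical baseline for the stereo correspondence problem.

def best_disparity(left_row, right_row, x, window, max_disp):
    """Disparity in [0, max_disp] minimizing SAD around column x.

    left_row / right_row: equal-length intensity lists on one epipolar
    line; a point at x in the left image is assumed to appear at x - d
    in the right image for some disparity d >= 0.
    """
    half = window // 2
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        if x - half - d < 0 or x + half >= len(right_row):
            continue  # window would fall off the row for this shift
        cost = sum(abs(left_row[x + k] - right_row[x + k - d])
                   for k in range(-half, half + 1))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# A bright bar centered at x=6 in the left row appears at x=4 in the
# right row, so the recovered disparity at x=6 is 2.
left  = [0, 0, 0, 0, 0, 9, 9, 9, 0, 0]
right = [0, 0, 0, 9, 9, 9, 0, 0, 0, 0]
print(best_disparity(left, right, x=6, window=3, max_disp=4))  # 2
```

Methods like this hard-code the search; the abstract's point is that a trained network can instead learn constraints such as uniqueness and continuity from data.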
  • Distance Estimation from Stereo Vision: Review and Results
    DISTANCE ESTIMATION FROM STEREO VISION: REVIEW AND RESULTS. A project presented to the faculty of the Department of Computer Engineering, California State University, Sacramento, submitted in partial satisfaction of the requirements for the degree of MASTER OF SCIENCE in Computer Engineering, by Sarmad Khalooq Yaseen, SPRING 2019. © 2019 Sarmad Khalooq Yaseen. ALL RIGHTS RESERVED. Approved by: Dr. Fethi Belkhouche, Committee Chair; Dr. Preetham B. Kumar, Second Reader. Graduate Coordinator: Dr. Behnam S. Arad, Department of Computer Engineering. Abstract: Stereo vision is one of the major researched domains of computer vision, and it can be used for different applications, among them extraction of the depth of a scene. This project provides a review of stereo vision and matching algorithms used to solve the correspondence problem [22]. A stereo vision system consists of two cameras with parallel optical axes which are separated from each other by a small distance. The system is used to produce two stereoscopic pictures of a given object. Distance estimation between the object and the cameras depends on two factors: the distance between the positions of the object in the pictures and the focal lengths of the cameras [37].
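The two factors named in this abstract (the shift of the object between the two pictures, i.e., the disparity, and the focal length) combine with the camera baseline in the standard rectified-stereo relation Z = f * B / d. A minimal sketch; the function name and the numeric values are illustrative, not taken from the project:

```python
# Standard parallel-axis (rectified) stereo geometry: depth is focal
# length times baseline divided by disparity. Values are illustrative.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth (meters) of a point seen in a rectified stereo pair.

    focal_px: focal length in pixels; baseline_m: camera separation in
    meters; disparity_px: horizontal shift of the point between images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# f = 700 px, baseline = 0.1 m, disparity = 35 px  ->  Z = 2.0 m
print(depth_from_disparity(700.0, 0.1, 35.0))
```

The inverse dependence on disparity is why stereo range error grows quadratically with distance: a one-pixel disparity error costs far more depth accuracy for far objects than for near ones.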
  • Depth Perception, Part II
    Depth Perception, part II. Lecture 13 (Chapter 6). Jonathan Pillow, Sensation & Perception (PSY 345 / NEU 325), Fall 2017. Nice illusions video: car ad (2013) (anamorphosis, linear perspective, accidental viewpoints, shadows, depth/size illusions), https://www.youtube.com/watch?v=dNC0X76-QRI. Accommodation: "depth from focus" (near, far). Depth and scale estimation from accommodation: "tilt shift photography"; more on tilt shift: Van Gogh; tilt shift on Van Gogh paintings, http://www.mymodernmet.com/profiles/blogs/van-goghs-paintings-get. Countering the depth-from-focus cue. Monocular depth cues: Pictorial: occlusion, relative size, shadow, texture gradient, height in plane, linear perspective; Non-Pictorial: motion parallax, accommodation ("depth from focus"). Binocular depth cue: a depth cue that relies on information from both eyes. Two retinas capture different images. Finger-Sausage Illusion. Pen Test: hold a pen out at half arm's length; with the other hand, see how rapidly you can place the cap on the pen, first using two eyes, then with one eye closed. Binocular depth cues: 1. Vergence angle - the angle between the eyes (convergence, divergence); if you know the angles, you can deduce the distance. 2. Binocular disparity - the difference between the two retinal images. Stereopsis - depth perception that results from binocular disparity information (this is what they're offering in 3D movies…). Retinal images in left and right eyes: figuring out the depth from these two images is a challenging computational problem.
  • Perceiving Depth and Size Depth
    PERCEIVING DEPTH AND SIZE. Visual Perception. DEPTH. Cue approach: identifies information on the retina and correlates it with the depth of the scene; different cues; previous knowledge. (Slides: Aditi Majumder, UCI.) Depth cues: oculomotor, monocular, binocular. Oculomotor cues: convergence (inward movement for nearby objects, outward movement for farther objects); accommodation (tightening of muscles that change the shape of the lens). Monocular cues (pictorial): occlusion, atmospheric perspective, relative height, linear perspective, cast shadows, texture gradient, relative size, familiar size; movement-based cues: motion parallax, deletion and accretion. Occlusion. Relative height. Conflicting cues. Cast shadows. Relative size. Familiar size. Atmospheric perspective. Linear perspective. Texture gradient. Presence of ground: it helps in perception; removal of ground causes difficulty in perception. Motion parallax: farther points have lesser parallax; closer points move more. Deletion and accretion. Binocular cues: binocular disparity and stereopsis, corresponding points, correspondence problems, disparity information in the brain.
  • Disparity Channels in Early Vision
    11820 • The Journal of Neuroscience, October 31, 2007 • 27(44):11820–11831. Symposium. Disparity Channels in Early Vision. Anna W. Roe,1 Andrew J. Parker,2 Richard T. Born,3 and Gregory C. DeAngelis4. 1Department of Psychology, Vanderbilt University, Nashville, Tennessee 37203; 2Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, United Kingdom; 3Department of Neurobiology, Harvard Medical School, Boston, Massachusetts 02115; and 4Brain and Cognitive Sciences, University of Rochester, Rochester, New York 14627. The past decade has seen a dramatic increase in our knowledge of the neural basis of stereopsis. New cortical areas have been found to represent binocular disparities, new representations of disparity information (e.g., relative disparity signals) have been uncovered, the first topographic maps of disparity have been measured, and the first causal links between neural activity and depth perception have been established. Equally exciting is the finding that training and experience affect how signals are channeled through different brain areas, a flexibility that may be crucial for learning, plasticity, and recovery of function. The collective efforts of several laboratories have established stereo vision as one of the most productive model systems for elucidating the neural basis of perception. Much remains to be learned about how the disparity signals that are initially encoded in primary visual cortex are routed to and processed by extrastriate areas to mediate the diverse capacities of three-dimensional vision that enhance our daily experience of the world. Key words: stereopsis; binocular disparity; MT; primate; topography; choice probability. Linking psychology and physiology in binocular vision … inspiration for many others.
  • Depth Perception 0
    Depth Perception. Suggested reading: Stephen E. Palmer (1999) Vision Science; Read JCA (2005) Early computational processing in binocular vision and depth perception, Progress in Biophysics and Molecular Biology 87: 77-108. Contents: the problem of depth perception; depth cues; Panum's fusional area; binocular disparity; random dot stereograms; the correspondence problem; early models; disparity sensitive cells in V1; disparity tuning curve; binocular energy model; the role of higher visual areas. The problem of depth perception: our 3-dimensional environment gets reduced to two-dimensional images on the two retinae (viewing geometry, retinal image). The "lost" dimension is commonly called depth. As depth is crucial for almost all of our interaction with our environment, and we perceive the 2D images as three dimensional, the brain must try to estimate the depth from the retinal images as well as it can with the help of different cues and mechanisms. Depth cues - monocular: texture accretion/deletion (monocular, optical); accommodation (monocular, ocular); motion parallax (monocular, optical); convergence of parallels (monocular, optical); position relative to horizon (monocular, optical); familiar size (monocular, optical); relative size (monocular, optical); texture gradients (monocular, optical); edge interpretation (monocular, optical); shading and shadows (monocular, optical); aerial perspective (monocular, optical). Depth cues - binocular: convergence (binocular, ocular); binocular disparity (binocular, optical). As the eyes are horizontally displaced, we see the world from two slightly different viewpoints.
  • Binocular Eye Movements Are Adapted to the Natural Environment
    The Journal of Neuroscience, April 10, 2019 • 39(15):2877–2888 • 2877. Systems/Circuits. Binocular Eye Movements Are Adapted to the Natural Environment. Agostino Gibaldi and Martin S. Banks, Vision Science Program, School of Optometry, University of California, Berkeley, Berkeley, California 94720. Humans and many animals make frequent saccades requiring coordinated movements of the eyes. When landing on the new fixation point, the eyes must converge accurately or double images will be perceived. We asked whether the visual system uses statistical regularities in the natural environment to aid eye alignment at the end of saccades. We measured the distribution of naturally occurring disparities in different parts of the visual field. The central tendency of the distributions was crossed (nearer than fixation) in the lower field and uncrossed (farther) in the upper field in male and female participants. It was uncrossed in the left and right fields. We also measured horizontal vergence after completion of vertical, horizontal, and oblique saccades. When the eyes first landed near the eccentric target, vergence was quite consistent with the natural-disparity distribution. For example, when making an upward saccade, the eyes diverged to be aligned with the most probable uncrossed disparity in that part of the visual field. Likewise, when making a downward saccade, the eyes converged to enable alignment with crossed disparity in that part of the field. Our results show that rapid binocular eye movements are adapted to the statistics of the 3D environment, minimizing the need for large corrective vergence movements at the end of saccades. The results are relevant to the debate about whether eye movements are derived from separate saccadic and vergence neural commands that control both eyes or from separate monocular commands that control the eyes independently.