A Comparative Study of Stereovision Algorithms

Total Pages: 16

File Type: pdf, Size: 1020 KB

A Comparative Study of Stereovision Algorithms
(IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 8, No. 11, 2017

Elena Bebeşelea-Sterp, NTT DATA ROMANIA, Sibiu, Romania
Raluca Brad, Faculty of Engineering, Lucian Blaga University of Sibiu, Sibiu, Romania
Remus Brad, Faculty of Engineering, Lucian Blaga University of Sibiu, Sibiu, Romania

Abstract—Stereo vision has been and continues to be one of the most researched domains of computer vision, with many applications, among them the extraction of depth from a scene. This paper provides a comparative study of stereo vision and matching algorithms used to solve the correspondence problem. The study of matching algorithms was followed by experiments on the Middlebury benchmarks. The tests focused on a comparison of six stereovision methods. To assess their performance, the RMS error and related statistics were computed. In order to emphasize the advantages of each stereo algorithm considered, two-frame methods have been employed, both local and global. The experiments conducted have shown that the best results are obtained by Graph Cuts; unfortunately, this comes at a higher computational cost. If the highest quality is not a requirement, local methods provide reasonable results within a much lower time-frame and offer the possibility of parallel implementations.

Keywords—Stereo vision; disparity; correspondence; comparative study; Middlebury benchmark

I. INTRODUCTION

Stereovision is an area of computer vision focusing on the extraction of 3D information from digital images. The most researched aspect of this field is stereo matching: given two or more images as input, matching pixels have to be found across all images so that their 2D positions can be converted into 3D depths, producing as a result a 3D estimation of the scene. Like many other breakthrough ideas in computer science, stereovision is strongly related to a biological concept, namely stereopsis, which is the impression of depth that is perceived when a scene is viewed by someone with both eyes and normal binocular vision [1]. With the aid of stereoscopic vision, we can perceive the surroundings in relation to our bodies and detect objects that are moving towards or away from us. While the entire process seems an easy task for us and other biological systems, the same does not apply to computer systems: finding the correspondences across images is a challenging task. The earliest stereo matching algorithms were developed in the field of photogrammetry for automatically constructing topographic elevation maps from overlapping aerial images [2].

Stereo matching has been one of the most studied topics, starting with the work of D. Marr and T. Poggio [3], which focuses on human stereopsis. Lane and Thacker’s study on stereo matching [4] presents a dozen algorithms from 1973 up to 1992, but no comparison between them was made. Another important study is the one by Scharstein and Szeliski [5], in which several algorithms were compared and their performance assessed with different metrics.

Stereo vision has a wide range of applications nowadays, especially in domains where realistic object models are needed. Depending on how important the processing time is, these applications can be classified into two categories: static scene description and dynamic scene description. For the first category, accuracy is of higher importance than processing time; usually, image pairs are acquired by means of a special device and reconstructed afterwards for cartography, crime scene reconstruction, car crash scene reconstruction or 3D models for architecture. In dynamic scene description, real-time processing of the data is critical, although a certain level of accuracy must still be met. Possible applications are obstacle (e.g. pedestrian) avoidance, autonomous navigation in which an internal map is continuously updated, height estimation, etc.

This paper aims to review the different stereo methods available and to compare several stereo matching algorithms across sets of images. Section 2 describes the fundamentals of stereo correspondence that make depth estimation possible and discusses some of the stereo methods available nowadays, classifying them into local, global and semi-global algorithms, while Section 3 analyses in more detail the algorithms considered for the experiments. The results are discussed in Section 4.
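The conversion from matched 2D positions to depth mentioned above is, for a rectified two-camera setup, plain triangulation: depth is inversely proportional to disparity, Z = f·B/d. The sketch below is not taken from the paper; the focal length and baseline values are purely illustrative assumptions.

    import numpy as np

    def depth_from_disparity(disparity, focal_length_px, baseline_m, eps=1e-6):
        """Convert a disparity map (in pixels) to metric depth for a rectified
        stereo pair using Z = f * B / d. Pixels with (near-)zero disparity are
        mapped to infinity, since no triangulation is possible there."""
        disparity = np.asarray(disparity, dtype=np.float64)
        depth = np.full(disparity.shape, np.inf)
        valid = disparity > eps
        depth[valid] = focal_length_px * baseline_m / disparity[valid]
        return depth

    # Example: a 0.12 m baseline and a 700 px focal length; a disparity of
    # 35 px corresponds to a depth of 700 * 0.12 / 35 = 2.4 m.
    print(depth_from_disparity([[35.0]], focal_length_px=700, baseline_m=0.12))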
II. A CLASSIFICATION OF STEREOVISION ALGORITHMS

This section reviews some of the stereo correspondence algorithms that have been proposed over the last decades. These can be classified according to multiple criteria, such as the method that assigns disparities to pixels, occlusion handling, color usage, matching cost computation, etc.

A. Local Methods

Local methods assign disparities depending on the information provided by the neighboring pixels; they are usually fast and yield good results. One broad category of local methods is represented by the block matching algorithms, which try to find a correspondence for a point in the reference image by comparing a small region surrounding it with small regions in the target image. For rectified images, the search region is reduced to a single line, called the epipolar line. Block matching algorithms are used not only in stereo vision, but also in visual tracking and video compression.

Among the first techniques to appear was box filtering, which replaces each pixel of an image with the average over a surrounding box and is an efficient general-purpose tool for image processing. In [6] a procedure for mean calculation has been proposed. The advantage of box filtering is its speed: by accumulating the sums of pixel values along rows and columns, the sum over any rectangle can be computed in constant time, independent of its size.
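The cumulative-sum trick behind box filtering can be sketched as follows. This is a generic illustration of the integral-image idea, not code from [6]; in SAD-based block matching the same trick is applied to the per-pixel absolute-difference image of each candidate disparity, so the aggregation cost no longer depends on the window size.

    import numpy as np

    def box_sums(image, radius):
        """Sum of pixel values over a (2*radius+1) x (2*radius+1) box around
        every pixel (truncated at the borders), using an integral image so
        that each box costs O(1) regardless of the radius."""
        img = np.asarray(image, dtype=np.float64)
        h, w = img.shape
        # Integral image with a zero border: ii[y, x] = sum of img[:y, :x].
        ii = np.zeros((h + 1, w + 1))
        ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)

        y0 = np.clip(np.arange(h) - radius, 0, h)      # top edge of each box
        y1 = np.clip(np.arange(h) + radius + 1, 0, h)  # bottom edge (exclusive)
        x0 = np.clip(np.arange(w) - radius, 0, w)
        x1 = np.clip(np.arange(w) + radius + 1, 0, w)

        # Four-corner lookup: sum = A - B - C + D.
        return (ii[y1[:, None], x1[None, :]] - ii[y0[:, None], x1[None, :]]
                - ii[y1[:, None], x0[None, :]] + ii[y0[:, None], x0[None, :]])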
Another algorithm that achieves high speed and reasonable quality is the one proposed by Mühlmann et al. [7]. It works on rectified images and can be used by other, more sophisticated algorithms that need an initial guess of the disparity map. In order to eliminate false matches, it makes use of a left-right consistency check and a uniqueness validation.

A new block matching algorithm has been proposed in [8], and even though the focus of that paper was video compression, the algorithm can also be applied to stereo vision. It is based on a winner-update strategy which uses a lower-bound list of the matching error to determine the temporary winner.

Efficient usage of color information can improve results; therefore, some of the best stereo correspondence algorithms use color segmentation techniques. One of the most popular color segmentation algorithms has been proposed in [9], based on the mean-shift algorithm, which dates back to 1975 but was extended to computer vision only later. The algorithm proposed in [10] is one of the top performing algorithms in the Middlebury ranking [11] and makes use of this segmentation technique. It is based on inter-regional cooperative optimization and uses regions as matching primitives; more exactly, it uses color statistics of the regions and constraints on smoothness and occlusion between adjacent regions. A similar algorithm, currently ranked one place above the aforementioned [10], is the one proposed by Klaus et al. in [12]. Firstly, homogeneous regions are extracted from the reference image, also with the method of [9]. Secondly, local window-based matching is applied using a self-adapting dissimilarity measure that combines SAD and a gradient-based measure. Using the reliable correspondences,

Another class of local algorithms is represented by the gradient-based methods, also known as optical flow. In this case, the matching costs are insensitive to bias and camera noise. These methods determine the disparities between two images by formulating a differential equation relating motion and image brightness; in order to do this, the assumption is made that the image brightness of a point in the scene is constant between the two views [16].

One of the earliest methods for optical flow estimation is the one developed by Kanade and Lucas [17]. It solves the flow equations for all the pixels in a neighborhood using the least squares criterion. However, it cannot provide flow information inside uniform regions. The method can be used in many applications of image registration, including stereo vision. More related information can be found in [18].

A newer algorithm from the same class is the one proposed by Zhou and Boulanger [19], based on relative gradients in order to eliminate the radiometric variance. Most stereo matching methods assume that the object surface is Lambertian, meaning that the color of every point in the scene will be constant in the views captured by two separate cameras. However, this does not apply to most real-world objects, which reflect light in a view-dependent way. The algorithm is able to deal with both view-dependent and view-independent colors, and with both color and gray scale images.

Block matching and gradient-based methods are sensitive to depth discontinuities, since the region of support near a discontinuity contains points from more than one depth. These methods are also sensitive to regions of uniform texture [16]. Another class of algorithms that aims to overcome these drawbacks is represented by the feature-matching algorithms, which limit the support region to reliable features in the image.
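Returning to the left-right consistency check used in [7] to eliminate false matches: a disparity is accepted only if matching from left to right and from right to left land on (roughly) the same correspondence. The sketch below is a generic illustration of this check, not the authors' implementation.

    import numpy as np

    def left_right_check(disp_left, disp_right, max_diff=1):
        """Invalidate pixels whose left and right disparity maps disagree.

        disp_left[y, x]  : left-image pixel (y, x) matches (y, x - d) on the right
        disp_right[y, x] : right-image pixel (y, x) matches (y, x + d) on the left
        Returns a copy of disp_left with inconsistent pixels set to -1.
        """
        disp_left = np.asarray(disp_left, dtype=np.int32)
        disp_right = np.asarray(disp_right, dtype=np.int32)
        h, w = disp_left.shape
        out = disp_left.copy()
        for y in range(h):
            for x in range(w):
                d = disp_left[y, x]
                xr = x - d
                # Out of bounds, or the right image points back somewhere else: reject.
                if xr < 0 or xr >= w or abs(disp_right[y, xr] - d) > max_diff:
                    out[y, x] = -1
        return out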
Recommended publications
  • CS 534: Computer Vision Stereo Imaging Outlines
    CS 534: Computer Vision, Stereo Imaging. Ahmed Elgammal, Dept. of Computer Science, Rutgers University. Outlines: • Depth Cues • Simple Stereo Geometry • Epipolar Geometry • Stereo correspondence problem • Algorithms. Recovering the World From Images. We know: 2D images are projections of the 3D world; a given image point is the projection of any world point on the line of sight; so how can we recover depth information? Why recover depth? Recover 3D structure, reconstruct a 3D scene model, many computer graphics applications; visual robot navigation; aerial reconnaissance; medical applications. The Stanford Cart, H. Moravec, 1979. The INRIA Mobile Robot, 1990. Motion parallax. Depth Cues: • Monocular Cues – Occlusion – Interposition – Relative height: the object closer to the horizon is perceived as farther away, and the object further from the horizon is perceived as closer. – Familiar size: when an object is familiar to us, our brain compares the perceived size of the object to this expected size and thus acquires information about the distance of the object. – Texture Gradient: all surfaces have a texture, and as the surface goes into the distance, it becomes smoother and finer. – Shadows – Perspective – Focus • Motion Parallax (also Monocular) • Binocular Cues • In computer vision: large research on shape-from-X (should be called depth from X).
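    The epipolar-geometry item above is the constraint that reduces the stereo correspondence search to one dimension: a point x in one image can only match points on its epipolar line l' = F x in the other image. A minimal sketch, where the fundamental matrix is the standard form for a purely horizontally translated (rectified) pair and the pixel coordinates are illustrative assumptions:

        import numpy as np

        # Fundamental matrix (up to scale) of a rectified pair: matching rows are equal.
        F = np.array([[0.0, 0.0, 0.0],
                      [0.0, 0.0, -1.0],
                      [0.0, 1.0, 0.0]])

        x_left = np.array([120.0, 45.0, 1.0])  # homogeneous pixel coordinates (u, v, 1)
        line_right = F @ x_left                # epipolar line l' = F x, as (a, b, c) with a*u + b*v + c = 0
        print(line_right)                      # [ 0. -1. 45.] -> the horizontal line v = 45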
  • Recovery of 3-D Shape from Binocular Disparity and Structure from Motion
    Perception & Psychophysics, 1993, 54 (2). Recovery of 3-D shape from binocular disparity and structure from motion. JAMES S. TITTLE, The Ohio State University, Columbus, Ohio, and MYRON L. BRAUNSTEIN, University of California, Irvine, California. Four experiments were conducted to examine the integration of depth information from binocular stereopsis and structure from motion (SFM), using stereograms simulating transparent cylindrical objects. We found that the judged depth increased when either rotational or translational motion was added to a display, but the increase was greater for rotating (SFM) displays. Judged depth decreased as texture element density increased for static and translating stereo displays, but it stayed relatively constant for rotating displays. This result indicates that SFM may facilitate stereo processing by helping to resolve the stereo correspondence problem. Overall, the results from these experiments provide evidence for a cooperative relationship between SFM and binocular disparity in the recovery of 3-D relationships from 2-D images. These findings indicate that the processing of depth information from SFM and binocular disparity is not strictly modular, and thus theories of combining visual information that assume strong modularity or independence cannot accurately characterize all instances of depth perception from multiple sources. Human observers can perceive the 3-D shape and orientation in depth of objects in a natural environment with impressive accuracy. Prior work demonstrates that information about shape and orientation can be recovered from … both sources of information (along with many others) occur together in the natural environment. Thus, it is important to understand what interaction (if any) exists in the visual processing of binocular disparity and SFM.
  • Stereopsis and Stereoblindness
    Exp. Brain Res. 10, 380-388 (1970) Stereopsis and Stereoblindness WHITMAN RICHARDS Department of Psychology, Massachusetts Institute of Technology, Cambridge (USA) Received December 20, 1969 Summary. Psychophysical tests reveal three classes of wide-field disparity detectors in man, responding respectively to crossed (near), uncrossed (far), and zero disparities. The probability of lacking one of these classes of detectors is about 30% which means that 2.7% of the population possess no wide-field stereopsis in one hemisphere. This small percentage corresponds to the probability of squint among adults, suggesting that fusional mechanisms might be disrupted when stereopsis is absent in one hemisphere. Key Words: Stereopsis -- Squint -- Depth perception -- Visual perception Stereopsis was discovered in 1838, when Wheatstone invented the stereoscope. By presenting separate images to each eye, a three dimensional impression could be obtained from two pictures that differed only in the relative horizontal disparities of the components of the picture. Because the impression of depth from the two combined disparate images is unique and clearly not present when viewing each picture alone, the disparity cue to distance is presumably processed and interpreted by the visual system (Ogle, 1962). This conjecture by the psychophysicists has received recent support from microelectrode recordings, which show that a sizeable portion of the binocular units in the visual cortex of the cat are sensitive to horizontal disparities (Barlow et al., 1967; Pettigrew et al., 1968a). Thus, as expected, there in fact appears to be a physiological basis for stereopsis that involves the analysis of spatially disparate retinal signals from each eye. Without such an analysing mechanism, man would be unable to detect horizontal binocular disparities and would be "stereoblind".
  • Algorithms for Single Image Random Dot Stereograms
    Displaying 3D Images: Algorithms for Single Image Random Dot Stereograms Harold W. Thimbleby,† Stuart Inglis,‡ and Ian H. Witten§* Abstract This paper describes how to generate a single image which, when viewed in the appropriate way, appears to the brain as a 3D scene. The image is a stereogram composed of seemingly random dots. A new, simple and symmetric algorithm for generating such images from a solid model is given, along with the design parameters and their influence on the display. The algorithm improves on previously-described ones in several ways: it is symmetric and hence free from directional (right-to-left or left-to-right) bias, it corrects a slight distortion in the rendering of depth, it removes hidden parts of surfaces, and it also eliminates a type of artifact that we call an “echo”. Random dot stereograms have one remaining problem: difficulty of initial viewing. If a computer screen rather than paper is used for output, the problem can be ameliorated by shimmering, or time-multiplexing of pixel values. We also describe a simple computational technique for determining what is present in a stereogram so that, if viewing is difficult, one can ascertain what to look for. Keywords: Single image random dot stereograms, SIRDS, autostereograms, stereoscopic pictures, optical illusions † Department of Psychology, University of Stirling, Stirling, Scotland. Phone (+44) 786–467679; fax 786–467641; email [email protected] ‡ Department of Computer Science, University of Waikato, Hamilton, New Zealand. Phone (+64 7) 856–2889; fax 838–4155; email [email protected]. § Department of Computer Science, University of Waikato, Hamilton, New Zealand.
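    For context, the simplest construction that such algorithms improve on repeats a random pattern along each row and shortens the repeat distance where the depth map is nearer. This naive left-to-right copy exhibits exactly the directional bias and artifacts that the paper's symmetric algorithm is designed to remove; the parameters below are arbitrary assumptions, not values from the paper.

        import numpy as np

        def naive_autostereogram(depth, pattern_width=80, max_shift=20, seed=0):
            """depth: 2D array in [0, 1], 1 = nearest. Returns a grayscale image
            that fuses into a 3D shape when viewed with the eyes converged so
            that columns `pattern_width` apart overlap."""
            h, w = depth.shape
            rng = np.random.default_rng(seed)
            img = rng.integers(0, 256, size=(h, w)).astype(np.uint8)
            for y in range(h):
                for x in range(pattern_width, w):
                    # Nearer points get a smaller repeat distance (larger disparity).
                    shift = int(round(max_shift * depth[y, x]))
                    img[y, x] = img[y, x - pattern_width + shift]
            return img

        # Example: a flat background with a raised square in the middle.
        d = np.zeros((200, 320))
        d[70:130, 130:190] = 0.6
        stereogram = naive_autostereogram(d)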
  • Chromostereo.Pdf
    ChromoStereoscopic Rendering for Trichromatic Displays. Leïla Schemali 1,2 (Telecom ParisTech CNRS LTCI, XtremViz), Elmar Eisemann 3 (Delft University of Technology). Figure 1: ChromaDepth(R) glasses act like a prism that disperses incoming light and induces a differing depth perception for different light wavelengths. As most displays are limited to mixing three primaries (RGB), the depth effect can be significantly reduced when using the usual mapping of depth to hue. Our red to white to blue mapping and shading cues achieve a significant improvement. Abstract: The chromostereopsis phenomenon leads to a differing depth perception of different color hues, e.g., red is perceived slightly in front of blue. In chromostereoscopic rendering, 2D images are produced that encode depth in color. While the natural chromostereopsis of our human visual system is rather low, it can be enhanced via ChromaDepth(R) glasses, which induce chromatic aberrations in one eye by refracting light of different wavelengths differently, hereby offsetting the projected position slightly in one eye. Although it might seem natural to map depth linearly to hue, which was also the basis of previous solutions, we demonstrate that such a mapping reduces the stereoscopic effect when using standard trichromatic displays or printing systems. We propose an algorithm which enables an improved stereoscopic experience with reduced artifacts. Figure 2: Chromostereopsis can be due to: (a) longitudinal chromatic aberration, the focus of blue shifts forward with respect to red, or (b) transverse chromatic aberration, blue shifts further toward the nasal part of the retina than red. (c) A shift in position leads to a depth impression.
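    The red-to-white-to-blue mapping argued for in this abstract can be illustrated with a simple linear ramp. This is only a rough sketch of the idea; the paper's actual mapping and shading cues are more elaborate.

        import numpy as np

        def red_white_blue(depth):
            """Map depth in [0, 1] (0 = near, 1 = far) to RGB: near -> red,
            mid -> white, far -> blue, a ramp that stays within what a
            trichromatic (RGB) display can reproduce."""
            d = np.clip(np.asarray(depth, dtype=np.float64), 0.0, 1.0)
            t = np.where(d < 0.5, d * 2.0, (d - 0.5) * 2.0)
            near = np.stack([np.ones_like(d), t, t], axis=-1)             # red -> white
            far = np.stack([1.0 - t, 1.0 - t, np.ones_like(d)], axis=-1)  # white -> blue
            return np.where(d[..., None] < 0.5, near, far)

        print(red_white_blue([0.0, 0.5, 1.0]))  # [[1, 0, 0], [1, 1, 1], [0, 0, 1]]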
  • Binocular Vision
    BINOCULAR VISION Rahul Bhola, MD Pediatric Ophthalmology Fellow The University of Iowa Department of Ophthalmology & Visual Sciences posted Jan. 18, 2006, updated Jan. 23, 2006 Binocular vision is one of the hallmarks of the human race that has bestowed on it the supremacy in the hierarchy of the animal kingdom. It is an asset with normal alignment of the two eyes, but becomes a liability when the alignment is lost. Binocular Single Vision may be defined as the state of simultaneous vision, which is achieved by the coordinated use of both eyes, so that separate and slightly dissimilar images arising in each eye are appreciated as a single image by the process of fusion. Thus binocular vision implies fusion, the blending of sight from the two eyes to form a single percept. Binocular Single Vision can be: 1. Normal – Binocular Single vision can be classified as normal when it is bifoveal and there is no manifest deviation. 2. Anomalous - Binocular Single vision is anomalous when the images of the fixated object are projected from the fovea of one eye and an extrafoveal area of the other eye i.e. when the visual direction of the retinal elements has changed. A small manifest strabismus is therefore always present in anomalous Binocular Single vision. Normal Binocular Single vision requires: 1. Clear Visual Axis leading to a reasonably clear vision in both eyes 2. The ability of the retino-cortical elements to function in association with each other to promote the fusion of two slightly dissimilar images i.e. Sensory fusion. 3. The precise co-ordination of the two eyes for all direction of gazes, so that corresponding retino-cortical element are placed in a position to deal with two images i.e.
  • Abstract Computer Vision in the Space of Light Rays
    ABSTRACT. Title of dissertation: COMPUTER VISION IN THE SPACE OF LIGHT RAYS: PLENOPTIC VIDEOGEOMETRY AND POLYDIOPTRIC CAMERA DESIGN. Jan Neumann, Doctor of Philosophy, 2004. Dissertation directed by: Professor Yiannis Aloimonos, Department of Computer Science. Most of the cameras used in computer vision, computer graphics, and image processing applications are designed to capture images that are similar to the images we see with our eyes. This enables an easy interpretation of the visual information by a human observer. Nowadays though, more and more processing of visual information is done by computers. Thus, it is worth questioning if these human inspired “eyes” are the optimal choice for processing visual information using a machine. In this thesis I will describe how one can study problems in computer vision without reference to a specific camera model by studying the geometry and statistics of the space of light rays that surrounds us. The study of the geometry will allow us to determine all the possible constraints that exist in the visual input and could be utilized if we had a perfect sensor. Since no perfect sensor exists we use signal processing techniques to examine how well the constraints between different sets of light rays can be exploited given a specific camera model. A camera is modeled as a spatio-temporal filter in the space of light rays which lets us express the image formation process in a function approximation framework. This framework then allows us to relate the geometry of the imaging camera to the performance of the vision system with regard to the given task.
  • From Stereogram to Surface: How the Brain Sees the World in Depth
    Spatial Vision, Vol. 22, No. 1, pp. 45–82 (2009) © Koninklijke Brill NV, Leiden, 2009. Also available online - www.brill.nl/sv From stereogram to surface: how the brain sees the world in depth LIANG FANG and STEPHEN GROSSBERG ∗ Department of Cognitive and Neural Systems, Center for Adaptive Systems and Center of Excellence for Learning in Education, Science, and Technology, Boston University, 677 Beacon St., Boston, MA 02215, USA Received 19 April 2007; accepted 31 August 2007 Abstract—When we look at a scene, how do we consciously see surfaces infused with lightness and color at the correct depths? Random-Dot Stereograms (RDS) probe how binocular disparity between the two eyes can generate such conscious surface percepts. Dense RDS do so despite the fact that they include multiple false binocular matches. Sparse stereograms do so even across large contrast-free regions with no binocular matches. Stereograms that define occluding and occluded surfaces lead to surface percepts wherein partially occluded textured surfaces are completed behind occluding textured surfaces at a spatial scale much larger than that of the texture elements themselves. Earlier models suggest how the brain detects binocular disparity, but not how RDS generate conscious percepts of 3D surfaces. This article proposes a neural network model that predicts how the layered circuits of visual cortex generate these 3D surface percepts using interactions between visual boundary and surface representations that obey complementary computational rules. The model clarifies how interactions between layers 4, 3B and 2/3A in V1 and V2 contribute to stereopsis, and proposes how 3D perceptual grouping laws in V2 interact with 3D surface filling-in operations in V1, V2 and V4 to generate 3D surface percepts in which figures are separated from their backgrounds.
  • The Effects of 3D Movies to Our Eyes
    Bridge, Sept 2013, Issue No. 60. Published by the Student Health Service, Department of Health. This Newsletter aims to promote communication between schools and the Student Health Service of the Department of Health. Editorial: Continuous improvement in technology has made 3D films and 3D television programmes available. It rewards our life with a more advanced visual experience. But at the same time, we should be aware of the risk factors of 3D films while we are enjoying them. In this issue, we would like to provide some information on “Evolution of Binocular Vision”, “Principle of 3D Films” and “Common Side Effects of Viewing 3D Films”. Most important, we would like to enhance readers’ awareness of taking the necessary precautions. We hope our readers can enjoy 3D films as well as keep their eyes healthy. Editorial Board Members: Dr. HO Chun-luen, David, Ms. LAI Chiu-wah, Phronsie, Ms. CHAN Shuk-yi, Karindi, Ms. CHOI Choi-fung, Ms. CHAN Kin-pui. Tel: 2349 4212 / 3163 4600, Fax: 2348 3968, Website: http://www.studenthealth.gov.hk. Health Decoding. Optometrists: Lam Kin Shing, Wallace Cheung, Rita Wong, Roy Wong. Introduction: 3D movies are becoming more popular nowadays. The brilliant box office record of the 3D movie “Avatar” has led to the fast development of 3D technology. 3D technology not only affects the film industry but also extends to home televisions, video game products and many other goods. It certainly enriches our visual perception and gives us entertainment. Evolution of Binocular Vision: Mammals have two eyes. Some prey animals, e.g. rabbits, buffaloes, and sheep, have their two eyes positioned on opposite sides of their heads to give the widest possible field of view.
  • Stereopsis by a Neural Network Which Learns the Constraints
    Stereopsis by a Neural Network Which Learns the Constraints. Alireza Khotanzad and Ying-Wung Lee, Image Processing and Analysis Laboratory, Electrical Engineering Department, Southern Methodist University, Dallas, Texas 75275. Abstract: This paper presents a neural network (NN) approach to the problem of stereopsis. The correspondence problem (finding the correct matches between the pixels of the epipolar lines of the stereo pair from amongst all the possible matches) is posed as a non-iterative many-to-one mapping. A two-layer feed forward NN architecture is developed to learn and code this nonlinear and complex mapping using the back-propagation learning rule and a training set. The important aspect of this technique is that none of the typical constraints such as uniqueness and continuity are explicitly imposed. All the applicable constraints are learned and internally coded by the NN enabling it to be more flexible and more accurate than the existing methods. The approach is successfully tested on several random-dot stereograms. It is shown that the net can generalize its learned mapping to cases outside its training set. Advantages over the Marr-Poggio Algorithm are discussed and it is shown that the NN performance is superior. 1 INTRODUCTION. Three-dimensional image processing is an indispensable property for any advanced computer vision system. Depth perception is an integral part of 3-d processing. It involves computation of the relative distances of the points seen in the 2-d images to the imaging device. There are several methods to obtain depth information. A common technique is stereo imaging. It uses two cameras displaced by a known distance to generate two images of the same scene taken from these two different viewpoints.
  • Construction of Autostereograms Taking Into Account Object Colors and Its Applications for Steganography
    Construction of Autostereograms Taking into Account Object Colors and its Applications for Steganography. Yusuke Tsuda, Yonghao Yue, Tomoyuki Nishita, The University of Tokyo. ftsuday,yonghao,[email protected]. Abstract: Information on appearances of three-dimensional objects are transmitted via the Internet, and displaying objects plays an important role in a lot of areas such as movies and video games. An autostereogram is one of the ways to represent three-dimensional objects taking advantage of binocular parallax, by which depths of objects are perceived. In previous methods, the colors of objects were ignored when constructing autostereogram images. In this paper, we propose a method to construct autostereogram images taking into account color variation of objects, such as shading. We also propose a technique to embed color information in autostereogram images. Using our technique, autostereogram images shown on a display change, and viewers can enjoy perceiving a three-dimensional object and its colors in two stages. Color information is embedded in a monochrome autostereogram image, and colors appear when the correct color table is used. Figure 1. Relations on an autostereogram. E_L and E_R indicate left and right eyes, respectively. (a) SS-relation: a pair of points on the screen and a single point on the object correspond with each other. (b) OS-relation: a pair of points on the object and a single point on the screen correspond with each other. … we propose an approach to take into account color variation of objects, such as shading. Our approach assign slightly
  • Stereogram Design for Testing Local Stereopsis
    Stereogram design for testing local stereopsis. Gerald Westheimer and Suzanne P. McKee. The basic ability to utilize disparity as a depth cue, local stereopsis, does not suffice to respond to random dot stereograms. They also demand complex perceptual processing to resolve the depth ambiguities which help mask their pattern from monocular recognition. This requirement may cause response failure in otherwise good stereo subjects and seems to account for the long durations of exposure required for target disparities which are still much larger than the best stereo thresholds. We have evaluated spatial stimuli that test local stereopsis and have found that targets need to be separated by at least 10 min arc to obtain good stereo thresholds. For targets with optimal separation of components and brief exposure duration (250 msec), thresholds fall in the range of 5 to 15 sec arc. Intertrial randomization of target lateral placement can effectively eliminate monocular cues and yet allow excellent stereoacuity. Stereoscopic acuity is less seriously affected by a small amount of optical defocus when target elements are widely separated than when they are crowded, though in either case the performance decrement is higher than for ordinary visual acuity. Key words: stereoscopic acuity, local stereopsis, global stereopsis, optical defocus. A subject is said to have stereopsis if he can detect binocular disparity of retinal images and associate the quality of depth with it. In its purest form this would manifest itself by … are very low indeed, certainly under 10 sec arc in a good observer, and such low values can be attained with exposures of half a second or even less.