The Interplay Between Stereopsis and Structure from Motion


Perception & Psychophysics, 1991, 49 (3), 230-244

The interplay between stereopsis and structure from motion

MARK NAWROT and RANDOLPH BLAKE
Vanderbilt University, Nashville, Tennessee

In a series of psychophysical experiments, an adaptation paradigm was employed to study the influence of stereopsis on perception of rotation in an ambiguous kinetic depth (KD) display. Without prior adaptation or stereopsis, a rotating globe undergoes spontaneous reversals in perceived direction of rotation, with successive durations of perceived rotation being random variables. Following 90 sec of viewing a stereoscopic globe undergoing unambiguous rotation, the KD globe appeared to rotate in a direction opposite that experienced during the stereoscopic adaptation period. This adaptation aftereffect was short-lived, and it occurred only when the adaptation and test figures stimulated the same retinal areas, and only when the adaptation and test figures rotated about the same axis. The aftereffect was just as strong when the test and adaptation figures had different shapes, as long as the adaptation figure contained multiple directions of motion imaged at different retinal disparities. Nonstereoscopic adaptation figures had no effect on the perceived direction of rotation of the ambiguous KD figure. These results imply that stereopsis and motion strongly interact in the specification of structure from motion, a result that complements earlier work on this problem.

Preliminary results from some of these experiments have been reported in Nawrot and Blake, 1989. This work was supported by NIH Grant EY07760 and NIH Vision Core Grant P30-EY08126. We thank Myron Braunstein for helpful discussion. Correspondence should be addressed to Randolph Blake, Department of Psychology, Vanderbilt University, Nashville, TN 37240.

This paper documents the interplay between stereopsis and motion information in the generation of three-dimensional (3-D) surface perception in human vision. It has long been known that retinal disparity can create a robust sensation of depth and solidity (Wheatstone, 1838), and its effectiveness is most dramatically revealed in the case of texture stereograms devoid of monocular information about shape (Julesz, 1971). Equally impressive is the recovery of shape information when an object is viewed under conditions of motion (Wallach & O'Connell, 1953). Motion information can even reveal the surface structure of an object that is literally invisible when stationary. This well-known effect, sometimes called the kinetic depth effect (KDE), has received much attention in recent years (Braunstein & Andersen, 1984; Lappin & Fuqua, 1983; Sperling, Landy, Dosher, & Perkins, 1989; Todd, 1984, 1985).

As numerous authors, including Helmholtz (1909/1962), have pointed out, stereopsis and motion parallax are comparable geometrically. In the case of stereopsis, the brain utilizes two views of a scene from slightly different perspectives (i.e., the difference in location of the two eyes) to derive a description of an object's depth and shape. Similarly, from a single vantage point, slightly different views of an object over time form the basis for the KDE. In one instance, information is integrated over space; in the other instance, information is integrated over time. Besides their geometric similarities, stereopsis and motion parallax also yield comparable levels of performance on tasks involving the measurement of depth sensitivity (Rogers & Graham, 1982). Moreover, both stereopsis and motion parallax yield equivalent variations in perceived depth that are dependent on the retinal orientation of the depth gradient (Rogers & Graham, 1983). Stereopsis and motion are also similar in that both are degraded at equiluminance. Lu and Fender (1972) found that stereopsis with random-dot stereograms was markedly impaired when the chromatic stereopair was equiluminant. Similarly, Ramachandran and Gregory (1978) reported that moving contours at equiluminance yield very feeble motion perception. The exact extent to which stereopsis and motion perception are impaired at equiluminance remains unsettled (Shapley, 1990).
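The geometric comparability noted above can be made concrete with a standard small-angle approximation. This is a textbook sketch, not an equation from the paper, and the symbols are ours:

```latex
% Textbook small-angle approximations (illustrative; symbols are ours,
% not the paper's). Two points separated in depth by \Delta z, viewed
% from distance z, with baseline I (interocular separation) or
% T (lateral translation accumulated over time):
\eta \approx \frac{I \, \Delta z}{z^{2}} \qquad \text{(binocular disparity)}
\qquad
\theta \approx \frac{T \, \Delta z}{z^{2}} \qquad \text{(motion parallax)}
```

The two expressions are identical in form. Only the baseline differs: it is fixed in space for stereopsis and accumulated over time for motion parallax, which is exactly the space/time contrast drawn above.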
Are these similarities between motion parallax and stereopsis coincidental, or do they reveal important linkages in the underlying processing architecture? Several lines of evidence suggest that the similarities are more than coincidental. For example, a significant correlation has been reported between an observer's level of competence on a stereopsis task and that person's ability to judge depth when given motion information. Specifically, Richards and Leiberman (1985) tested observers with varying degrees of impairment in stereopsis, as indexed by a deficient ability to judge depth from disparity information. Next, Richards and Leiberman measured the ability of these observers to identify and judge the apparent 3-D size of a dynamic 2-D display. The correlation between stereo performance and kinetic depth was 0.72. Richards and Leiberman also observed that people who performed particularly well on stereo displays containing crossed disparity information also tended to exhibit better performance on the KD task, whereas stereo performance with uncrossed disparities was less predictive of KD performance. Richards and Leiberman concluded that the outputs from kinetic depth mechanisms and from convergent stereoscopic depth mechanisms are combined prior to an object's assignment to a given depth. While not quarreling with this conclusion, Bradshaw, Frisby, and Mayhew (1987) found that observers were adept at detecting structure from motion in displays imaged entirely at uncrossed disparities; this led them to question the tightness of the linkage between crossed disparities and KD.

Quite recently, Howard and Simpson (1989) reported that the gain of optokinetic nystagmus varies inversely with binocular disparity, which they attributed to neurons sensitive to both disparity and direction of motion. As Richards (1985) pointed out, linkage between stereopsis and motion processing could serve to resolve ambiguities inherent when disparity information or motion parallax information is available on its own.

Cross-adaptation studies also imply some form of interaction in the processing of stereoscopic information and kinetic depth information. Smith (1976) found what he termed a "contingent depth aftereffect" with Lissajous figures (a type of KD display easily generated on an oscilloscope by applying the same signal to the horizontal and vertical amplifiers; see Braunstein, 1962). Viewed normally, a Lissajous figure undergoes spontaneous reversals in the direction of apparent rotation. However, placement of a neutral density (ND) filter in front of one eye produces a strong bias in the perceived direction of rotation, presumably by generating a retinal disparity cue from interocular delay (i.e., the Pulfrich effect). Smith found that when the ND filter was removed following a 30-sec observation period of unambiguous motion, the Lissajous figure appeared to rotate in the direction opposite that experienced with the filter in place. The stereo cue, in other words, disambiguated the motion cue, and this disambiguation in turn yielded a pronounced visual aftereffect. A rather similar cross-adaptation result was described by Wallach and colleagues. In their study, Wallach et al. used a mirror stereoscope to exaggerate the retinal disparities associated with viewing a rotating wire figure. Following prolonged inspection of this exaggerated stereoscopic depth, observers perceived a stationary version of the figure to be flattened, or compressed in depth, even though it contained the same exaggerated disparities. This so-called satiation effect was not observed, however, when the test figure was a monocularly viewed KD version of the figure, implying that stereoscopic satiation did not generalize to motion. In a later paper, Wallach and Karsh (1963) proposed that the initial effect of adaptation to exaggerated disparity results from the discrepancy between the depth signaled by KD and the depth signaled by the exaggerated disparities. According to this view, then, kinetic depth information plays an essential role in the modification of stereoscopic depth perception.
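To make the oscilloscope construction and the Pulfrich cue concrete, the fragment below traces a Lissajous figure and computes the effective disparity created by an interocular delay. It is an illustrative sketch: the frequencies, the delay, and the variable names are our assumptions, not Smith's actual stimulus parameters.

```python
# Illustrative sketch of a Lissajous KD display (assumed parameters,
# not Smith's, 1976, apparatus). The same waveform, slightly detuned,
# drives the horizontal and vertical axes.
import numpy as np

t = np.linspace(0.0, 1.0, 2000)      # one second of trace
f_h, f_v = 30.0, 31.0                # slightly different frequencies (Hz)
x = np.sin(2 * np.pi * f_h * t)      # horizontal amplifier input
y = np.sin(2 * np.pi * f_v * t)      # vertical amplifier input
# The (x, y) trace is a slowly evolving Lissajous pattern that observers
# see as a form rotating in depth, with ambiguous direction.

# Pulfrich-style bias: an ND filter delays one eye's signal by dt, so a
# horizontally moving point acquires an effective disparity that grows
# with its velocity and flips sign with its direction of motion.
dt = 0.005                           # assumed interocular delay (s)
x_delayed = np.sin(2 * np.pi * f_h * (t - dt))
effective_disparity = x - x_delayed
print(f"peak effective disparity: {np.abs(effective_disparity).max():.4f}")
```

Setting dt to zero removes the disparity cue; in Smith's experiment, it was at this point (filter removed) that the aftereffect appeared.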
In the present paper, we give the results of a series of psychophysical experiments in which we examined in closer detail the interactions between stereopsis and motion information in the specification of structure from motion. We utilized a version of the cross-adaptation paradigm, whereby adaptation to a stereoscopically defined, rotating object subsequently biases the perception of rotation of an object defined solely by motion information. In particular, we have assessed the necessary and sufficient stimulus conditions for adaptation by various 3-D figures to affect the perception of subsequently presented 2-D figures.

METHOD

Stimuli and Displays

The stimuli for these experiments consisted of computer-generated random-dot cinematograms depicting objects rotating in depth. Each frame in a given cinematogram was a 2-D, parallel projection "snapshot" of the object at some degree of rotation around a stationary axis in the mathematically defined 3-D coordinate system. Presenting the cinematogram frames quickly in succession on the face of the video monitor produced the KDE of a rotating object. Although the KD figure presented on a single monitor appears vividly to be 3-D, it is devoid of retinal
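The snapshot construction described above can be sketched in a few lines of code. The following is a minimal illustration, assuming NumPy; the function name, dot count, frame count, and rotation step are invented for the example and are not values from the paper.

```python
# Illustrative sketch (not the authors' code): random-dot cinematogram
# frames of a rotating globe, built by parallel (orthographic)
# projection of dots on a sphere rotated about a stationary axis.
import numpy as np

def make_globe_frames(n_dots=200, n_frames=36, axis="y", seed=0):
    """Return a list of (n_dots, 2) arrays of projected dot positions,
    one per frame, each a 2-D "snapshot" of the rotated sphere."""
    rng = np.random.default_rng(seed)
    # Uniform random dots on the unit sphere.
    pts = rng.normal(size=(n_dots, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)

    frames = []
    for i in range(n_frames):
        theta = 2 * np.pi * i / n_frames   # rotation angle for this frame
        c, s = np.cos(theta), np.sin(theta)
        if axis == "y":    # vertical rotation axis
            rot = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
        else:              # horizontal ("x") rotation axis
            rot = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
        p = pts @ rot.T
        # Parallel projection: drop the depth (z) coordinate. A single
        # frame therefore carries no disparity, so the direction of
        # rotation is ambiguous, which is the basis of the bistable
        # KD display.
        frames.append(p[:, :2])
    return frames

frames = make_globe_frames()
print(len(frames), frames[0].shape)  # 36 frames of 200 projected dots
```

For the stereoscopic adaptation figures, the same geometry would presumably be rendered as a stereo pair, with each dot offset horizontally in proportion to its depth coordinate before the depth is discarded; it is this disparity that makes the direction of rotation unambiguous.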