Motion Parallax in Stereo 3D: Model and Applications


Petr Kellnhofer 1   Piotr Didyk 1,2   Tobias Ritschel 1,2,3   Belen Masia 1,4   Karol Myszkowski 1   Hans-Peter Seidel 1
1 MPI Informatik   2 Saarland University, MMCI   3 University College London   4 Universidad de Zaragoza

Figure 1: Starting from stereoscopic video content with a static observer in a moving train (Left), our method detects regions where motion parallax acts as an additional depth cue (Center, white color) and uses our model to redistribute the disparity depth budget from such regions (the countryside) to regions where it is more needed (the train interior) (Right).

Abstract

Binocular disparity is the main depth cue that makes stereoscopic images appear 3D. However, in many scenarios, the range of depth that can be reproduced by this cue is greatly limited and typically fixed due to constraints imposed by displays. For example, due to the low angular resolution of current automultiscopic screens, they can only reproduce a shallow depth range. In this work, we study the motion parallax cue, which is a relatively strong depth cue, and can be freely reproduced even on a 2D screen without any limits. We exploit the fact that in many practical scenarios, motion parallax provides sufficiently strong depth information that the presence of binocular depth cues can be reduced through aggressive disparity compression. To assess the strength of the effect we conduct psychovisual experiments that measure the influence of motion parallax on depth perception and relate it to the depth resulting from binocular disparity. Based on the measurements, we propose a joint disparity-parallax computational model that predicts apparent depth resulting from both cues. We demonstrate how this model can be applied in the context of stereo and multiscopic image processing, and propose new disparity manipulation techniques, which first quantify depth obtained from motion parallax, and then adjust binocular disparity information accordingly. This allows us to manipulate the disparity signal according to the strength of motion parallax to improve the overall depth reproduction. This technique is validated in additional experiments.

1 Introduction

Perceiving the layout of a scene is one of the main tasks performed by the human visual system. To this end, different depth cues [Cutting and Vishton 1995] are analyzed and combined into a common understanding of the scene. Not all cues, however, provide reliable information. Pictorial cues, such as shadows, aerial perspective or defocus, can often be misleading. Other cues, such as occlusion, provide only a depth ordering. There are also very strong cues, such as ocular convergence and binocular disparity, which provide a true stereoscopic impression. These, however, can only be reproduced in a limited fashion due to significant limitations of the current display technology. The first problem is the lack of a correct accommodation cue in most of the current stereoscopic displays, which leads to visual discomfort [Hoffman et al. 2008; Lambooij et al. 2009] when a large disparity range is shown to observers. The limited range of displayable disparity that does not cause such issues defines the depth budget of the display suitable for practical applications. While current research tries to address this problem with new light-field displays [Masia et al. 2013b], these solutions have even stronger requirements regarding the depth range. This is due to the apparent blur for objects that are located at a significant distance from the screen plane [Zwicker et al. 2006; Wetzstein et al. 2012]. Because of the aforementioned reasons, it is crucial to take any opportunity that allows us to improve depth reproduction without using additional disparity range.
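The notion of a display depth budget, and of spending it where binocular disparity is needed most, can be illustrated with a toy disparity remapping. This is only a sketch under assumed inputs: the function name, the fixed 0.8 attenuation factor, and the peak normalization are illustrative and are not the authors' algorithm.

```python
import numpy as np

def reallocate_disparity(disparity, parallax_strength, depth_budget):
    """Toy disparity remapping: compress disparity where motion parallax
    already conveys depth, then rescale the result to the display budget.

    disparity         -- per-pixel disparity map (e.g. arcmin), any range
    parallax_strength -- per-pixel motion-parallax strength in [0, 1]
    depth_budget      -- maximum displayable |disparity| (same units)
    """
    # Attenuate disparity in regions where parallax is a strong depth cue.
    compressed = disparity * (1.0 - 0.8 * parallax_strength)
    # Rescale so the remaining disparity fills the display's depth budget.
    peak = np.max(np.abs(compressed))
    if peak > 0:
        compressed *= depth_budget / peak
    return compressed
```

For example, two pixels with equal input disparity end up with very different output disparity once one of them is covered by strong motion parallax, freeing budget for the static pixel.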
Keywords: stereoscopic 3D, gaze tracking, gaze-contingent display, eye tracking, retargeting, remapping

Concepts: • Computing methodologies → Perception; Image processing;

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. © 2016 Copyright held by the owner/author(s). Publication rights licensed to ACM. SA '16 Technical Papers, December 05-08, 2016, Macao. ISBN: 978-1-4503-4514-9/16/12. DOI: http://dx.doi.org/10.1145/2980179.2980230

One of the strongest monocular depth cues is motion parallax. It arises when the location of features at different depths results in different retinal velocities. The strength of this cue is relatively high when compared to other monocular cues and also when compared to binocular disparity [Cutting and Vishton 1995]. This fact has been exploited in several applications, such as wiggle stereoscopy [Wikipedia 2015b] where motion parallax is used as a metaphor for stereoscopic images, or parallax scrolling [Wikipedia 2015a] used in games where, by moving foreground and background at different speeds, a depth sensation is evoked. Striking examples of motion parallax efficiency are species that introduce subtle head movements to enable motion parallax [Kral 2003]. This mechanism has been incorporated into cameras where apparent depth is enhanced by subtle motion of the sensor [Proffitt and Banton 1999; v3 Imaging 2015]. Interestingly, motion parallax is not limited to observer motion, but also provides depth information whenever local motion in the scene follows a predictable transformation [Ullman 1983; Luca et al. 2007] (Fig. 2). These facts suggest that motion parallax is a very strong source of depth information for the human visual system (HVS), but it has never been explored in the context of stereoscopic image manipulations.

Parallax = dθ/dα ≈ ∆f/f,    Disparity = |D(A) − D(B)| = 2θ    (1)

Figure 2: Motion parallax mechanics [Nawrot and Stroyan 2009]. Translation of a pair of points A and B to the right (or, equivalently, observer translation to the left) produces motion parallax. Assuming that A is the fixation point, its retinal position does not change, due to the pursuit eye rotation α at angular velocity dα/dt, while at the same time the retinal position of B changes with respect to A by the angular distance θ, producing retinal motion with velocity dθ/dt. The relation dθ/dα ≈ ∆f/f = m approximately holds, which, given the fixation distance f, enables recovery of depth ∆f from motion parallax. The right side of the figure shows the relation of motion parallax to the angular measure of binocular disparity d = |D(A) − D(B)| [Howard and Rogers 2012, Fig.

In this work, we address this opportunity and propose a computational model for detecting motion parallax and quantifying the amount of apparent depth it induces together with binocular disparity. To this end, we conduct a series of psychovisual experiments that measure apparent depth in stereoscopic stimuli in the presence of motion parallax. This is done for simple, sinusoidal corrugation stimuli. Based on the measurements, we propose a computational model that predicts apparent depth induced by these cues in complex images. Furthermore, we demonstrate how the model can be used to improve depth reproduction on current display devices. To this end, we develop a new, motion-aware disparity manipulation technique. The key idea is to reallocate the disparity range from regions that exhibit motion parallax to static parts of the scene so that the overall depth perceived by observers is maximized. To evaluate the effectiveness of our manipulations, we perform additional validation experiments which confirm that by taking the motion parallax depth cue into account, the overall depth impression can be enhanced without extending the disparity budget (Fig. 1). More precisely, we make the following contributions: