WO 2013/158322 A1, 24 October 2013 (24.10.2013), PCT


(12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT)
(19) World Intellectual Property Organization, International Bureau
(10) International Publication Number: WO 2013/158322 A1
(43) International Publication Date: 24 October 2013 (24.10.2013)
(51) International Patent Classification: G06T 15/08 (2011.01)
(21) International Application Number: PCT/US2013/032821
(22) International Filing Date: 18 March 2013 (18.03.2013)
(25) Filing Language: English
(26) Publication Language: English
(30) Priority Data: 61/635,075, 18 April 2012 (18.04.2012), US
(71) Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA [US/US]; 1111 Franklin Street, Twelfth Floor, Oakland, CA 94607 (US).
(72) Inventors: DAVIS, James E.; 1156 High Street: SOE3, Santa Cruz, CA 95064 (US). SCHER, Steven; 543 Beacon Place, Chula Vista, CA 91910 (US). LIU, Jing; 1200 Dale Avenue, Apartment 87, Mountain View, CA 94040 (US). VAISH, Rajan; 1200 Dale Avenue, Apartment 87, Mountain View, CA 94040 (US). GUNAWARDANE, Prabath; 216 Mountain View, Apartment 3, Mountain View, CA 94041 (US).
(74) Agent: STEIN, Michael, D.; Woodcock Washburn LLP, Cira Centre, 2929 Arch Street, 12th Floor, Philadelphia, PA 19104-2891 (US).
(81) Designated States (unless otherwise indicated, for every kind of national protection available): AE, AG, AL, AM, AO, AT, AU, AZ, BA, BB, BG, BH, BN, BR, BW, BY, BZ, CA, CH, CL, CN, CO, CR, CU, CZ, DE, DK, DM, DO, DZ, EC, EE, EG, ES, FI, GB, GD, GE, GH, GM, GT, HN, HR, HU, ID, IL, IN, IS, JP, KE, KG, KM, KN, KP, KR, KZ, LA, LC, LK, LR, LS, LT, LU, LY, MA, MD, ME, MG, MK, MN, MW, MX, MY, MZ, NA, NG, NI, NO, NZ, OM, PA, PE, PG, PH, PL, PT, QA, RO, RS, RU, RW, SC, SD, SE, SG, SK, SL, SM, ST, SV, SY, TH, TJ, TM, TN, TR, TT, TZ, UA, UG, US, UZ, VC, VN, ZA, ZM, ZW.
(84) Designated States (unless otherwise indicated, for every kind of regional protection available): ARIPO (BW, GH, GM, KE, LR, LS, MW, MZ, NA, RW, SD, SL, SZ, TZ, UG, ZM, ZW), Eurasian (AM, AZ, BY, KG, KZ, RU, TJ, TM), European (AL, AT, BE, BG, CH, CY, CZ, DE, DK, EE, ES, FI, FR, GB, GR, HR, HU, IE, IS, IT, LT, LU, LV, MC, MK, MT, NL, NO, PL, PT, RO, RS, SE, SI, SK, SM, TR), OAPI (BF, BJ, CF, CG, CI, CM, GA, GN, GQ, GW, ML, MR, NE, SN, TD, TG).
(54) Title: SIMULTANEOUS 2D AND 3D IMAGES ON A DISPLAY
(57) Abstract: Many 3D displays show 3D images to viewers wearing special eyeglasses, while showing a double image to viewers without eyeglasses (13e). We demonstrate a method (15a-15h) that provides those with eyeglasses a 3D experience while viewers without eyeglasses see a 2D image without artifacts (Fig. 1, image 1b). In addition to separate Left ("L") and Right ("R") images in each frame, we add a third image ("N"), invisible to those with glasses. In the combined view seen by those without glasses, this cancels the Right image, leaving only the Left. If the Left and Right images are of equal brightness, this approach results in low contrast for viewers without glasses. Allowing differential brightness between the Left and Right images improves 2D contrast. We determine that viewers with glasses maintain a strong 3D experience, even when one eye is significantly darker than the other. FIG. 15

SIMULTANEOUS 2D AND 3D IMAGES ON A DISPLAY

CROSS REFERENCE

[0001] This application claims the benefit of U.S. Provisional Application No. 61/635,075, filed on April 18, 2012, the content of which is hereby incorporated by reference in its entirety.
TECHNICAL FIELD

[0002] The disclosed subject matter relates generally to three-dimensional television (3DTV) technology, and more particularly to a method and system that provide viewers with 3D glasses a 3D experience while viewers without glasses see a 2D image without artifacts such as ghosting. Our approach is applicable to displays using either active-shutter glasses or passive glasses.

BACKGROUND

[0003] With a 3DTV, depth perception is conveyed to the viewer by employing techniques such as stereoscopic display, multi-view display, 2D-plus-depth, or any other form of 3D display. Most modern 3D television sets use an active-shutter 3D system or a polarized 3D system, and some are autostereoscopic, requiring no glasses.

[0004] There are several techniques for producing and displaying 3D moving pictures. A basic requirement for display technologies is to display offset images that are filtered separately to the left and right eye. Two approaches have been used to accomplish this: (1) have the viewer wear 3D eyeglasses that filter the separately offset images to each eye, or (2) have the light source split the images directionally into the viewer's eyes, with no 3D glasses required.

[0005] As explained in the detailed description below, many 3D displays show 3D images to viewers wearing the special 3D eyeglasses, but show an incomprehensible double image (called "ghosting") to viewers without glasses. A goal of the present invention is to devise a method and system for providing viewers with glasses a 3D experience while also providing viewers without glasses a 2D image without artifacts.

SUMMARY

[0006] Many 3D displays show 3D images to viewers wearing special eyeglasses, while showing an incomprehensible double image to viewers without glasses. We demonstrate a method that provides those with eyeglasses a 3D experience while viewers without glasses see a 2D image without artifacts.
In addition to separate Left and Right images in each frame, we add a third image, invisible to those with glasses. In the combined view seen by those without glasses, this cancels the Right image, leaving only the Left. If the Left and Right images are of equal brightness, this approach results in low contrast for viewers without glasses. Allowing differential brightness between the Left and Right images improves 2D contrast. We determine that viewers with glasses maintain a strong 3D experience, even when one eye is significantly darker than the other. Since viewers with glasses see a darker image in one eye, they experience a small distortion of perceived depths due to the Pulfrich Effect. This produces illusions similar to those caused by a time delay in one eye. We find that a 40% brightness difference cancels an opposing distortion caused by the typical 8-millisecond delay between the Left and Right images of sequential active-shutter stereoscopic displays. Our technique is applicable to displays using either active-shutter glasses or passive glasses.

[0007] Other aspects of the inventive method and system are described below.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] Figure 1, part (a), depicts how a typical glasses-based 3DTV shows a different image to each eye of viewers wearing stereo glasses. Part (b) depicts how an inventive 3D+2DTV shows a different image to each eye of viewers wearing stereo glasses, but shows only one of these images to those without glasses, removing the "ghosted" double image. Part (c) illustrates that this is accomplished by cancelling out one image of the stereo pair.

[0009] Figure 2 depicts a comparison of various displays that show a sequence of frames.

[0010] Figure 3 illustrates how different amounts of wasted light result from different frame lengths for the L, R and inverse-R frames.
[0011] Figure 4 is a graph showing how max2D, the brightness of the composite image seen by viewers not wearing stereo glasses, improves when aR, the brightness of the image shown to the right eye of 3D viewers, is decreased.

[0012] Figure 5 depicts two versions of an image: (left) the ghosted double image that would be seen on a typical 3D display if the viewer did not wear stereo glasses, and (right) the lower-contrast image without ghosting that the viewer would see on a display in accordance with the present invention.

[0013] Figure 6 is a graph of viewer preference data.

[0014] Figure 7 depicts images used in an experiment to quantify viewers' ability to perceive depth in static images on a stereoscopic display when one eye is presented with a darker image than the other eye.

[0015] Figure 8 is a graph of the results of the experiment of Figure 7, showing that as one eye's brightness decreases, viewers' ability to perceive depth is not affected until the brightness of the darker eye is below 20% of the brightness of the brighter eye.

[0016] Figure 9 depicts images used in an experiment to measure viewers' ability to perceive depth when the images shown to one eye are darker than those shown to the other eye.

[0017] Figure 10 is a graph of data from the experiment of Figure 9, showing that viewers' ability to perceive depth differences was undisturbed by one eye seeing a darker image than the other, provided the dark image was at least 10% as bright as the image seen by the brighter eye.

[0018] Figure 11 depicts images used in an experiment to quantify the impact of the Pulfrich Effect on depth perception.

[0019] Figure 12 is made up of two graphs of data showing that, when one eye is brighter than the other, the depth of moving objects is misperceived.

[0020] Figure 13 depicts a prototype of the inventive system including two projectors and a polarization-preserving screen.

[0021] Figure 14 depicts several example images of the inventive prototype in use.
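The cancellation scheme of the Summary and the contrast trade-off shown in Figure 4 can be sketched numerically. The model below is a hypothetical, unit-normalized illustration, not the patent's exact formulation: per display cycle, the three sub-frames are L, the Right image dimmed to brightness aR, and an inverse frame N = aR(1 - R) that glasses block. Viewers without glasses temporally integrate all three, so R cancels and they see L plus a uniform offset aR; that offset is the black-level floor that limits 2D contrast.

```python
import numpy as np

def build_frames(left, right, a_r=0.6):
    """Three sub-frames per cycle: L, the dimmed R, and the inverse frame N.

    left, right: stereo pair as float arrays with values in [0, 1].
    a_r: relative brightness of the Right image (the aR of Figure 4).
    N is shown while both shutter lenses are closed (or in the blocked
    polarization state), so viewers wearing glasses never see it.
    """
    return left, a_r * right, a_r * (1.0 - right)

def contrast_2d(l_max, a_r):
    """Contrast ratio of the summed 2D view: brightest over darkest pixel.

    The summed view is L + a_r, so its black level is the uniform
    offset a_r contributed jointly by the R and N sub-frames.
    """
    return (l_max + a_r) / a_r

left = np.random.rand(4, 4)
right = np.random.rand(4, 4)
fl, fr, fn = build_frames(left, right, a_r=0.5)

# Without glasses, the three sub-frames sum: R cancels against N,
# leaving the Left image plus a uniform brightness offset of a_r.
combined = fl + fr + fn
assert np.allclose(combined, left + 0.5)

# Dimming the Right image lowers the offset and raises 2D contrast.
print(contrast_2d(1.0, 1.0))   # 2.0 (equal L/R brightness)
print(contrast_2d(1.0, 0.5))   # 3.0
print(contrast_2d(1.0, 0.25))  # 5.0
```

Under this toy model, halving aR from 1.0 to 0.5 already raises the without-glasses contrast ratio from 2:1 to 3:1, which is the qualitative behavior Figure 4 reports; the cost, as the experiments of Figures 7 through 12 explore, is a darker image in one eye for viewers wearing glasses.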