United States Patent: (10) Patent No. US 10,520,782 B2; Busch; (45) Date of Patent: Dec. 31, 2019

Total Pages: 16

File Type: PDF, Size: 1020 KB

(12) United States Patent
(10) Patent No.: US 10,520,782 B2
Busch
(45) Date of Patent: Dec. 31, 2019

(54) DISPLAY DEVICES, SYSTEMS AND METHODS CAPABLE OF SINGLE-SIDED, DUAL-SIDED, AND TRANSPARENT MIXED REALITY APPLICATIONS

(71) Applicant: James David Busch, Tempe, AZ (US)

(72) Inventor: James David Busch, Tempe, AZ (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 0 days.

(21) Appl. No.: 15/887,882

(22) Filed: Feb. 2, 2018

(65) Prior Publication Data: US 2018/0217429 A1, Aug. 2, 2018

Related U.S. Application Data
(60) Provisional application No. 62/454,020, filed on Feb. 2, 2017.

(51) Int. Cl.: G02F 1/139 (2006.01); G02F 1/07 (2006.01); G02B 27/00 (2006.01); H01L 27/32 (2006.01); G06T 19/00 (2011.01)

(52) U.S. Cl.: CPC G02F 1/1395 (2013.01); G02B 27/00 (2013.01); G02F 1/076 (2013.01); G02F 2201/44 (2013.01); G06T 19/006 (2013.01); H01L 27/323 (2013.01); H01L 27/3232 (2013.01); H01L 27/3267 (2013.01); H01L 2251/5323 (2013.01)

(58) Field of Classification Search: CPC G02F 2201/52; G02F 1/133514; G02F 1/19; G02F 2001/212; B82Y 20/00; USPC 359/259. See application file for complete search history.

Primary Examiner: Mohammed A Hasan
(74) Attorney, Agent, or Firm: James David Busch

(57) ABSTRACT
The display includes a transparent display (TD) element or layer, such as at least one Transparent Organic Light Emitting Diode (TOLED) element, and at least one active shutter (AS) element or layer, such as a liquid crystal shutter. The AS may be opaque or transparent. The TD may be transparent, or may display or emit light in mixtures of color. Black may be displayed by a TD pixel by having the TD be transparent and turning an AS opaque. Each display is composed of elements that can have many states. The states can be used to create different display modes. Devices with these displays can have many form factors and configurations. These devices may change display modes based on spatial context as determined by any one of various environmental and configuration sensors.

19 Claims, 28 Drawing Sheets
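As a reading aid only: the abstract describes each pixel as a pairing of a transparent-display (TD) layer and an active-shutter (AS) layer whose combined states produce transparent, colored, or black output, with device-level display modes chosen from sensor context. The Python sketch below restates that state logic under those assumptions; it is not taken from the patent, and every name (PixelOutput, LayerStates, choose_layer_states, pick_display_mode, the lux threshold) is hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto


class PixelOutput(Enum):
    TRANSPARENT = auto()   # see-through: TD off, AS open
    COLOR = auto()         # emissive color: TD emits
    BLACK = auto()         # TD transparent, AS opaque (per the abstract)


@dataclass
class LayerStates:
    td_emitting: bool   # transparent-display (e.g., TOLED) layer emitting light
    as_opaque: bool     # active-shutter (e.g., liquid crystal) layer blocking light


def choose_layer_states(desired: PixelOutput) -> LayerStates:
    """Map a desired per-pixel output to TD/AS layer states (illustrative only)."""
    if desired is PixelOutput.TRANSPARENT:
        return LayerStates(td_emitting=False, as_opaque=False)
    if desired is PixelOutput.BLACK:
        # Black is produced by leaving the TD transparent and turning the AS opaque.
        return LayerStates(td_emitting=False, as_opaque=True)
    # Color: TD emits; here the AS is closed for single-sided viewing (one choice
    # among the many mode combinations the patent describes).
    return LayerStates(td_emitting=True, as_opaque=True)


def pick_display_mode(ambient_lux: float, facing_user: bool) -> str:
    """Hypothetical spatial-context rule: sensor readings select a display mode."""
    if not facing_user:
        return "dual-sided"
    return "transparent-mixed-reality" if ambient_lux > 50 else "single-sided"


if __name__ == "__main__":
    print(choose_layer_states(PixelOutput.BLACK))
    print(pick_display_mode(ambient_lux=120.0, facing_user=True))
```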
Recommended publications
  • Evaluating Stereoacuity with 3D Shutter Glasses Technology, by Huang Wu, Han Jin, Ying Sun, Yang Wang, Min Ge, Yang Chen and Yunfeng Chi
    Wu et al., BMC Ophthalmology (2016) 16:45, DOI 10.1186/s12886-016-0223-3. Research article, open access. Abstract. Background: To determine the stereoacuity threshold with a 3D laptop equipped with 3D shutter glasses, and to evaluate the effect of different shapes and sizes of test symbols and different types of disparity on stereoacuity. Methods: Thirty subjects with a visual acuity in each eye of at least 0 logMAR and a stereoacuity of at least 32 arcsec (as assessed with the Fly Stereo Acuity Test) were recruited. Three target symbols, a tumbling "E", a tumbling "C", and a "□", were displayed, each at six sizes representing visual acuities from 0.5 to 0 logMAR when tested at 4.1 m, and with both crossed and uncrossed disparities. Two test systems were designed: one with a fixed distance of 4.1 m and one with a variable distance. The former presents disparities ranging from 10 to 1000 arcsec. Each subject completed 36 trials to investigate the effect of symbol size, symbol shape, and disparity type on stereoacuity. In the variable-distance system, each subject was tested 12 times for the same purposes, both proximally and distally (the points where the 3D effect just appears and just disappears, respectively), and stereoacuity was calculated from the mean proximal and distal distances. Results: No significant difference was found among the groups in the fixed-distance test system (Kruskal-Wallis test; chi-square = 29.844, P = 0.715).
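The fixed-distance system above presents disparities from 10 to 1000 arcsec at 4.1 m, and the variable-distance system derives a threshold from the mean of the proximal and distal distances. The Python sketch below illustrates the underlying small-angle geometry (relative disparity in radians is approximately the on-screen half-image separation divided by the viewing distance). It is illustrative only: the function names and the example proximal/distal readings are hypothetical, and this is not the authors' test software.

```python
import math

ARCSEC_PER_RADIAN = 180 * 3600 / math.pi  # about 206,265


def separation_for_disparity(disparity_arcsec: float, distance_m: float) -> float:
    """On-screen horizontal separation (m) between the half-images that yields
    the requested relative disparity at the given viewing distance
    (small-angle approximation: disparity [rad] ~= separation / distance)."""
    return (disparity_arcsec / ARCSEC_PER_RADIAN) * distance_m


def disparity_for_separation(separation_m: float, distance_m: float) -> float:
    """Inverse mapping: relative disparity (arcsec) produced by a fixed separation
    when viewed from a given distance (as in a variable-distance protocol)."""
    return (separation_m / distance_m) * ARCSEC_PER_RADIAN


if __name__ == "__main__":
    # 10 arcsec at the paper's 4.1 m test distance -> roughly 0.2 mm separation.
    s = separation_for_disparity(10, 4.1)
    print(f"separation for 10 arcsec at 4.1 m: {s * 1000:.3f} mm")

    # Variable-distance estimate: average hypothetical proximal and distal
    # distances (metres) at which the 3D percept appears/disappears.
    d_mean = (3.2 + 5.0) / 2
    print(f"threshold at that separation: {disparity_for_separation(s, d_mean):.1f} arcsec")
```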
  • Real-Time 2D to 3D Video Conversion Using Compressed Video Based on Depth-From Motion and Color Segmentation
    International Journal of Latest Trends in Engineering and Technology (IJLTET). Real-Time 2D to 3D Video Conversion using Compressed Video based on Depth-From Motion and Color Segmentation. N. Nivetha, Research Scholar, Dept. of MCA, VELS University, Chennai. Dr. S. Prasanna, Asst. Prof., Dept. of MCA, VELS University, Chennai. Dr. A. Muthukumaravel, Professor & Head, Department of MCA, BHARATH University, Chennai. Abstract: This paper addresses the conversion of two-dimensional (2D) video to three-dimensional (3D) video. Three-dimensional video is becoming increasingly popular, especially for home entertainment, so conversion techniques are needed that can deliver 3D video efficiently and effectively. In this paper, block-matching-based depth-from-motion estimation and color segmentation are used to build a conversion scheme from automatic monoscopic video to stereoscopic 3D. Color-based segmentation provides good region-boundary information and is fused with the block-based depth map to assign consistent depth values within each segmented region and eliminate the staircase effect. Experimental results show that the scheme can produce stereoscopic 3D video output of relatively high quality. Keywords: Depth from Motion, 3D-TV, Stereo vision, Color Segmentation. I. INTRODUCTION: 3DTV is television that conveys depth perception to the viewer by employing techniques such as stereoscopic display, multiview display, 2D-plus-depth, or any other form of 3D display. In 2010, 3DTV was widely regarded as one of the next big things, and many well-known TV brands such as Sony and Samsung released 3D-enabled TV sets using shutter-glasses-based 3D flat panel display technology.
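The abstract above outlines a pipeline: block-matching depth from motion, colour segmentation, fusion of the two to remove the staircase effect, and synthesis of a stereoscopic pair. The numpy-only sketch below illustrates that pipeline under simplifying assumptions (a crude colour quantisation stands in for the paper's segmentation, and hole filling after pixel shifting is omitted); all function names are hypothetical and this is not the authors' implementation.

```python
import numpy as np


def quantize_labels(frame_rgb: np.ndarray, levels: int = 4) -> np.ndarray:
    """Stand-in colour segmentation: quantise RGB into a small integer label map."""
    q = frame_rgb.astype(np.int32) // (256 // levels)
    return q[..., 0] * levels * levels + q[..., 1] * levels + q[..., 2]


def block_motion_depth(prev_gray: np.ndarray, curr_gray: np.ndarray,
                       block: int = 16, search: int = 8) -> np.ndarray:
    """Depth from motion via exhaustive block matching (SAD): larger motion
    magnitude is treated as nearer, i.e. a larger depth value."""
    h, w = curr_gray.shape
    depth = np.zeros((h // block, w // block), dtype=np.float32)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = curr_gray[y:y + block, x:x + block].astype(np.int32)
            best_sad, best_mag = None, 0.0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    cand = prev_gray[yy:yy + block, xx:xx + block].astype(np.int32)
                    sad = int(np.abs(ref - cand).sum())
                    if best_sad is None or sad < best_sad:
                        best_sad, best_mag = sad, float(np.hypot(dy, dx))
            depth[by, bx] = best_mag
    return depth / max(float(depth.max()), 1e-6)  # normalise to [0, 1]


def fuse_with_segments(block_depth: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Give every colour segment the mean block depth inside it, smoothing the
    blocky 'staircase' artefacts along true object boundaries."""
    h, w = labels.shape
    sy = int(np.ceil(h / block_depth.shape[0]))
    sx = int(np.ceil(w / block_depth.shape[1]))
    up = np.repeat(np.repeat(block_depth, sy, axis=0), sx, axis=1)[:h, :w]
    fused = np.zeros_like(up)
    for seg in np.unique(labels):
        mask = labels == seg
        fused[mask] = up[mask].mean()
    return fused


def synthesize_stereo(frame_rgb: np.ndarray, depth: np.ndarray, max_shift: int = 12):
    """Very simple depth-image-based rendering: shift pixels horizontally in
    proportion to depth to obtain a left/right pair (holes left unfilled here)."""
    h, w = depth.shape
    left, right = np.zeros_like(frame_rgb), np.zeros_like(frame_rgb)
    shifts = (depth * max_shift).astype(int)
    for y in range(h):
        for x in range(w):
            s = shifts[y, x]
            if x + s < w:
                left[y, x + s] = frame_rgb[y, x]
            if x - s >= 0:
                right[y, x - s] = frame_rgb[y, x]
    return left, right


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.integers(0, 256, (96, 128), dtype=np.uint8)
    curr = np.roll(prev, 4, axis=1)              # fake horizontal motion
    rgb = np.dstack([curr, curr, curr])
    d = fuse_with_segments(block_motion_depth(prev, curr), quantize_labels(rgb))
    left, right = synthesize_stereo(rgb, d)
    print(d.shape, left.shape, right.shape)
```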
  • Towards a Unified Model for Autostereoscopic Image Display (Vers un modèle unifié pour l'affichage autostéréoscopique d'images)
    Towards a Unified Model for the Autostereoscopic Display of Images (Vers un modèle unifié pour l'affichage autostéréoscopique d'images), by Clément Proust. Thesis presented to the Département d'informatique for the degree of Master of Science (M.Sc.), Faculté des sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada, February 24, 2016. The jury accepted the final version of Clément Proust's thesis on February 24, 2016. Jury members: Professor Djemel Ziou, supervisor, Département d'informatique, Université de Sherbrooke; Professor Bessam Abdulrazak, president-rapporteur, Département d'informatique, Université de Sherbrooke; Professor Marie-Flavie Auclair-Fortier, internal member, Département d'informatique, Université de Sherbrooke. Summary (translated from the French): In the first chapter we describe a model of the formation of an image shown on a display, revisiting the concepts of light, geometry and optics. We then detail the stereoscopic display techniques currently in use, covering stereoscopy, autostereoscopy and, in particular, the principle of integral imaging. The second chapter introduces a new model of stereoscopic image formation. It lets us follow the two images of a stereoscopic pair as they undergo possible transformations and the effect of one or more particular optical elements in order to reproduce the perception of three dimensions. We discuss the unifying nature of this model: it can describe and explain many existing stereoscopic display techniques. Finally, in the third chapter we discuss a particular method for creating stereoscopic image pairs from a light field.
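The thesis summary concerns a unified model of stereoscopic image formation. For orientation, the elementary viewing geometry that any such model must reproduce can be stated as a standard textbook relation; the notation below is ours, not the thesis's.

```latex
% Standard stereoscopic viewing geometry, stated as an illustrative sketch:
% e = interocular distance, D = viewing distance to the screen,
% p = signed on-screen parallax between the two images of a point
% (p > 0 uncrossed, p < 0 crossed).
\[
  \frac{Z - D}{Z} = \frac{p}{e}
  \qquad\Longrightarrow\qquad
  Z = \frac{eD}{e - p}
\]
% Hence p = 0 is perceived in the screen plane (Z = D), uncrossed parallax
% places the point behind the screen (Z > D), and crossed parallax places it
% in front (Z < D); a display model in the thesis's sense must reproduce this
% mapping after its transformations and optics are applied.
```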
  • The Effect of Audio Cues and Sound Source Stimuli on Looming Perception
    The Effect of Audio Cues and Sound Source Stimuli on Looming Perception. Sonia Wilkie. Submitted in partial fulfilment of the requirements of the Degree of Doctor of Philosophy, School of Electronic Engineering and Computer Science, Queen Mary, University of London, January 28, 2015. Statement of Originality: I, Sonia Wilkie, confirm that the research included within this thesis is my own work or that where it has been carried out in collaboration with, or supported by, others, this is duly acknowledged below and my contribution indicated. Previously published material is also acknowledged below. I attest that I have exercised reasonable care to ensure that the work is original, and does not to the best of my knowledge break any UK law, infringe any third party's copyright or other Intellectual Property Right, or contain any confidential material. I accept that the College has the right to use plagiarism detection software to check the electronic version of the thesis. I confirm that this thesis has not been previously submitted for the award of a degree by this or any other university. The copyright of this thesis rests with the author, and no quotation from it or information derived from it may be published without the prior written consent of the author. Signature: Date: Abstract: Objects that move in depth (looming) are ubiquitous in the real and virtual worlds. How humans interact and respond to these approaching objects may affect their continued survival in both the real and virtual worlds, and depends on the individual's capacity to accurately interpret depth and movement cues.
  • 3D Television - Wikipedia
    3D television - Wikipedia, https://en.wikipedia.org/wiki/3D_television. From Wikipedia, the free encyclopedia. 3D television (3DTV) is television that conveys depth perception to the viewer by employing techniques such as stereoscopic display, multi-view display, 2D-plus-depth, or any other form of 3D display. Most modern 3D television sets use an active shutter 3D system or a polarized 3D system, and some are autostereoscopic without the need of glasses. According to DisplaySearch, 3D television shipments totaled 41.45 million units in 2012, compared with 24.14 million in 2011 and 2.26 million in 2010.[1] As of late 2013, the number of 3D TV viewers started to decline.[2][3][4][5][6] (Figure: an example of three-dimensional television.) Contents: 1 History; 2 Technologies (2.1 Displaying technologies, 2.2 Producing technologies, 2.3 3D production); 3 TV sets (3.1 3D-ready TV sets, 3.2 Full 3D TV sets); 4 Standardization efforts (4.1 DVB 3D-TV standard); 5 Broadcasts (5.1 3D Channels, 5.2 List of 3D Channels, 5.3 3D episodes and shows: 5.3.1 1980s, 5.3.2 1990s, 5.3.3 2000s, 5.3.4 2010s); 6 World record; 7 Health effects; 8 See also; 9 References; 10 Further reading. History: The stereoscope was first invented by Sir Charles Wheatstone in 1838.[7][8] It showed that when two pictures are viewed stereoscopically, they are combined by the brain to produce 3D depth perception. The stereoscope was improved by Louis Jules Duboscq, and a famous picture of Queen Victoria was displayed at The Great Exhibition in 1851.
  • Stereoscopic 3D in Games
    Degree project in Computer Science, second level, Stockholm, Sweden, 2015. Stereoscopic 3D in games. Henning Bennet and David Lindström. KTH Royal Institute of Technology, School of Computer Science and Communication (CSC). Degree project at CSC, KTH. Stereoscopic 3D in games (Stereoskopisk 3D i spel). Bennet, Henning & Lindström, David. Email addresses at KTH: [email protected] & [email protected]. Degree project in: Computer Science. Programme: Civilingenjör Datateknik (Degree Programme in Computer Science and Engineering). Supervisor: Arnborg, Stefan. Examiner: Arnborg, Stefan. Commissioned by: Fabrication Games. Date: 2015-06-17. Summary (translated from the Swedish): This report investigates stereoscopic 3D. We examine how a game should be adapted to produce as good and clear a stereoscopic 3D effect as possible, so that the viewer perceives clear depth without discomfort caused by the effect. The report looks in depth at the technical aspects that must be taken into account when developing games in stereoscopic 3D, as well as the performance limitations that should be considered. We describe the process of developing the prototype Kodo with anaglyph stereoscopic 3D; the prototype was built to test and analyze the resulting stereoscopic 3D effect. Abstract: In this report we investigate the technique of stereoscopic 3D. The report investigates the steps needed to create a game adapted for an improved stereoscopic 3D effect, and what choices should be made to keep the viewer from experiencing discomfort due to the effect. It discusses the technical aspects one needs to consider when using stereoscopic 3D, as well as performance issues that may need to be taken into consideration.
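The prototype Kodo mentioned above used anaglyph stereoscopic 3D. A common way to compose a red-cyan anaglyph is to take the red channel from the left-eye render and the green and blue channels from the right-eye render; the minimal numpy sketch below shows that composition. It assumes RGB channel order and random placeholder images, and is not the authors' code.

```python
import numpy as np


def red_cyan_anaglyph(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
    """Compose a red-cyan anaglyph: red from the left-eye render, green and blue
    from the right-eye render. Both inputs are HxWx3 uint8 arrays in RGB order."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out


if __name__ == "__main__":
    # Toy inputs standing in for left/right renders of the same scene.
    left = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    right = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    print(red_cyan_anaglyph(left, right).shape)
```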
  • The Current State of the Consumer 3D Experience
    The Current State of the Consumer 3D Experience. By Philip Lelyveld, Program Manager, Consumer 3D Experience Lab / Program, USC Entertainment Technology Center, June 7, 2012. Table of Contents: Introduction; Theatrical 3D; 3D TVs; 3D Cameras, Laptops, Phones, Tablets; 3D Gaming; Head Mounted Displays; Theme Parks; Other Markets for 3D; Conclusion: Looking Backward, Looking Forward; Appendix 1 – 3D Lab Presentation / Tour Log; Appendix 2 – 3D Lab Event Speaker / Moderator Log; Appendix 3 – 3D Audio; Appendix 4 – 3D Printers. Introduction: The introduction of non-anaglyph, digital S3D as a consumer experience started with Disney's Chicken Little in 2005. But the 2009 theatrical release of Avatar, with James Cameron's strong marketing of the 3D aspect of the feature, was the benchmark event that defined consumer expectations for a 3D experience. Mr. Cameron and company also laid the groundwork for the rapid descent into the trough of disillusionment by telegraphing what consumers should expect before those expectations could be adequately met. At that moment the production, theatrical exhibition, distribution, and consumer electronics technology infrastructures were still being developed, and content creation was just ramping up. The ETC's Consumer 3D Experience Lab and Program has ridden the heart of the Hype Cycle: from a few years after digital stereoscopic 3D's (S3D's) launch in cinemas through the peak of inflated expectations, down to the trough of disillusionment, and now upward along the slope of enlightenment. We are approaching, but not yet at, the plateau of productivity for 3D.
  • Learning in Virtual 3D Environments: All About Immersive 3D Interfaces
    LEARNING IN VIRTUAL 3D ENVIRONMENTS: ALL ABOUT IMMERSIVE 3D INTERFACES. Vojtěch Juřík (1,2), Čeněk Šašinka (1,2). 1: Department of Psychology, Faculty of Arts, Masaryk University, Arna Nováka 1/1, 602 00 Brno, Czech Republic. 2: HUME lab - Experimental Humanities Laboratory, Masaryk University, Arna Nováka 1/1, 602 00 Brno, Czech Republic. Abstract: Human-computer interaction has entered the 3D era. The use of immersive virtual 3D environments in education and learning is an important field of research interest. 3D virtual worlds can be dynamically modified, updated or customized, and multiple operators can cooperate in solving a specific task regardless of their physical presence. With respect to the features of immersive human-computer interaction, as well as the specific cues included in virtual environments, it is necessary to consider every aspect of a 3D interface in terms of how it influences the process of learning. In particular, the social and communicational aspects of Collaborative Virtual Learning Environments (CVLEs) are the state of the art for current research. Based on our research on interactive geographical 3D environments, we summarize and discuss the theoretical and technological aspects relevant to interaction with 3D virtual environments. The operators' manipulation and evaluation of the displayed content is discussed with regard to such phenomena as fidelity, presence/immersion, informational and computational equivalence, situation awareness, cognitive workload and human error. We also describe a specific interface developed for recording and measuring participants' behavior in virtual spaces. Further, we suggest a methodological background for the research of 3D virtual interfaces with respect to virtual collaboration and learning. Keywords: 3D technology, situation awareness, user interface, human error, human-computer interaction, fidelity, presence. 1 INTRODUCTION: With the rapid development of information technologies, communication and collaboration in virtual worlds (VWs) have entered a 3D era (Boughzala et al., 2012).
  • WO 2013/158322 A1, published 24 October 2013 (24.10.2013)
    (12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT). (19) World Intellectual Property Organization, International Bureau. (10) International Publication Number: WO 2013/158322 A1. (43) International Publication Date: 24 October 2013 (24.10.2013). (51) International Patent Classification: G06T 15/08 (2011.01). (21) International Application Number: PCT/US2013/032821. (22) International Filing Date: 18 March 2013 (18.03.2013). (25) Filing Language: English. (26) Publication Language: English. (30) Priority Data: 61/635,075, 18 April 2012 (18.04.2012), US. (71) Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA [US/US]; 1111 Franklin Street, Twelfth Floor, Oakland, CA 94607 (US). (72) Inventors: DAVIS, James E.; 1156 High Street: SOE3, Santa Cruz, CA 95064 (US). (74) Agent: STEIN, Michael, D.; Woodcock Washburn LLP, Cira Centre, 2929 Arch Street, 12th Floor, Philadelphia, PA 19104-2891 (US). (81) Designated States (unless otherwise indicated, for every kind of national protection available): AE, AG, AL, AM, AO, AT, AU, AZ, BA, BB, BG, BH, BN, BR, BW, BY, BZ, CA, CH, CL, CN, CO, CR, CU, CZ, DE, DK, DM, DO, DZ, EC, EE, EG, ES, FI, GB, GD, GE, GH, GM, GT, HN, HR, HU, ID, IL, IN, IS, JP, KE, KG, KM, KN, KP, KR, KZ, LA, LC, LK, LR, LS, LT, LU, LY, MA, MD, ME, MG, MK, MN, MW, MX, MY, MZ, NA, NG, NI, NO, NZ, OM, PA, PE, PG, PH, PL, PT, QA, RO, RS, RU, RW, SC, SD, SE, SG, SK, SL, SM, ST, SV, SY, TH, TJ, TM, TN, TR, TT, TZ, UA, UG, US, UZ, VC, VN, ZA, ZM, ZW. (84) Designated States (unless otherwise indicated, for every kind of regional protection available): ARIPO (BW, GH,
  • The Difference of Distance Stereoacuity Measured with Different Separating Methods
    Original Article. The difference of distance stereoacuity measured with different separating methods. Lingzhi Zhao (1), Yu Zhang (2), Huang Wu (2), Jun Xiao (3). 1: Department of Medical Equipment; 2: Department of Optometry; 3: Department of Medical Retina, the Second Hospital of Jilin University, Changchun 130041, China. Contributions: (I) Conception and design: H Wu, J Xiao; (II) Administrative support: L Zhao; (III) Provision of study materials or patients: All authors; (IV) Collection and assembly of data: All authors; (V) Data analysis and interpretation: H Wu, J Xiao; (VI) Manuscript writing: All authors; (VII) Final approval of manuscript: All authors. Correspondence to: Jun Xiao, Department of Medical Retina, the Second Hospital of Jilin University, No. 218, Ziqiang Street, Nanguan District, Changchun 130041, China. Email: [email protected]. Background: Most tests that evaluate stereopsis must first separate the views of the two eyes. The main purpose of this study was to determine whether different binocular separating methods affect the measured stereopsis. Red-green anaglyphs, polarized light technology, an active shutter 3D system, and an auto-stereoscopic technique were chosen to evaluate distance stereoacuity. Methods: The red-green anaglyph test system was established on an ASUS laptop with the aid of TNO Stereotest glasses. The active shutter 3D system was set up on the same ASUS laptop with the NVIDIA 3D Vision 2 Wireless Glasses Kit. The polarized 3D system used an AOC display. A Samsung naked-eye 3D laptop was used to set up the auto-stereoscopic system. Thirty subjects were recruited, and distance stereoacuity was measured with each of these computer systems. Results: The auto-stereoscopic system failed to measure distance stereopsis.
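Because all four systems above present disparity on flat panels, the smallest nonzero disparity they can show is bounded by a one-pixel horizontal offset at the viewing distance. The Python sketch below converts a target disparity in arcsec to a pixel offset and computes that one-pixel floor; the pixel pitch, viewing distance, and function names are illustrative assumptions, not values from the paper.

```python
import math

ARCSEC_PER_RADIAN = 180 * 3600 / math.pi  # about 206,265


def pixels_for_disparity(disparity_arcsec: float, distance_m: float,
                         pixel_pitch_mm: float) -> float:
    """Horizontal offset, in display pixels, needed to show a given relative
    disparity at a given viewing distance (small-angle approximation)."""
    separation_m = (disparity_arcsec / ARCSEC_PER_RADIAN) * distance_m
    return separation_m * 1000 / pixel_pitch_mm


def smallest_displayable_disparity(distance_m: float, pixel_pitch_mm: float) -> float:
    """One-pixel offset expressed in arcsec: a hard floor on the disparities a
    flat-panel test can present at that distance."""
    return (pixel_pitch_mm / 1000 / distance_m) * ARCSEC_PER_RADIAN


if __name__ == "__main__":
    # Hypothetical 15.6-inch full-HD laptop panel (~0.18 mm pixel pitch) at 4.1 m.
    print(f"{pixels_for_disparity(60, 4.1, 0.18):.2f} px needed for 60 arcsec")
    print(f"one-pixel floor: {smallest_displayable_disparity(4.1, 0.18):.1f} arcsec")
```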
  • Book V Camera
    Basic Photography in 180 Days, Book V - Camera. Editor: Ramon F. (aeroramon.com). Contents: 1 Day 1; 1.1 Camera; 1.1.1 Functional description; 1.1.2 History; 1.1.3 Mechanics; 1.1.4 Formats; 1.1.5 Camera accessories; 1.1.6 Camera design history; 1.1.7 Image gallery; 1.1.8 See also; 1.1.9 References; 1.1.10 Bibliography; 1.1.11 External links. 2 Day 2; 2.1 Camera obscura; 2.1.1 Physical explanation; 2.1.2 Technology; 2.1.3 History; 2.1.4 Role in the modern age; 2.1.5 Examples; 2.1.6 Public access; 2.1.7 See also; 2.1.8 Notes; 2.1.9 References; 2.1.10 Sources