Achieving Near-Correct Focus Cues Using Multiple Image Planes
Total Pages: 16
File Type: PDF, Size: 1,020 KB
Recommended publications
20 Years of OpenGL
20 Years of OpenGL, Kurt Akeley (Khronos Group, 2010).

So many deprecations! • Application-generated object names • Depth texture mode • Color index mode • Texture wrap mode • SL versions 1.10 and 1.20 • Texture borders • Begin/End primitive specification • Automatic mipmap generation • Edge flags • Fixed-function fragment processing • Client vertex arrays • Alpha test • Rectangles • Accumulation buffers • Current raster position • Pixel copying • Two-sided color selection • Auxiliary color buffers • Non-sprite points • Context framebuffer size queries • Wide lines and line stipple • Evaluators • Quad and polygon primitives • Selection and feedback modes • Separate polygon draw mode • Display lists • Polygon stipple • Hints • Pixel transfer modes and operation • Attribute stacks • Pixel drawing • Unified text string • Bitmaps • Token names and queries • Legacy pixel formats

Technology and culture. OpenGL is an architecture in the Blaauw/Brooks sense, and the slides compare it with the IBM 360: different implementations (IBM 360 models 30/40/50/65/75 and Amdahl, versus SGI Indy/Indigo/InfiniteReality, NVIDIA GeForce, ATI Radeon, …); a top-level goal of compatibility, meaning code runs equivalently on all implementations (conformance tests, …); intentional design (it is an architecture whether it was planned or not, and it was carefully planned, though mistakes were made); and configuration (the amount of a resource such as memory can vary, but there is no feature subsetting, only config attributes, e.g. for the framebuffer). Not a formal …
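The deprecation list above is essentially a catalogue of the fixed-function and immediate-mode features that the OpenGL 3.x core profile removed. As a concrete illustration (not taken from the slides), the sketch below contrasts deprecated Begin/End primitive specification with the buffer-object path that replaced it. It assumes a GL context already exists, that an extension loader such as GLEW supplies the modern entry points, and that `prog` is a hypothetical, already-linked shader program with a vertex attribute named "position".

```cpp
// Sketch only: deprecated immediate mode vs. the core-profile path.
// Assumes a context exists and GLEW (or a similar loader) is initialized.
#include <GL/glew.h>

// Deprecated (compatibility profile only): Begin/End primitive
// specification, removed from the OpenGL 3.x core profile.
void drawTriangleImmediate() {
    glBegin(GL_TRIANGLES);
    glVertex3f(-0.5f, -0.5f, 0.0f);
    glVertex3f( 0.5f, -0.5f, 0.0f);
    glVertex3f( 0.0f,  0.5f, 0.0f);
    glEnd();
}

// Core-profile replacement: vertex data lives in a buffer object and is
// drawn through a generic attribute bound to a shader input. Objects are
// created inside the call only for brevity; real code creates them once.
void drawTriangleCore(GLuint prog) {
    static const float verts[] = {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.0f,  0.5f, 0.0f,
    };
    GLuint vao = 0, vbo = 0;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

    GLint loc = glGetAttribLocation(prog, "position");
    if (loc < 0) return;                      // attribute not found
    glEnableVertexAttribArray(static_cast<GLuint>(loc));
    glVertexAttribPointer(static_cast<GLuint>(loc), 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

    glUseProgram(prog);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}
```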
A Novel Walk-Through 3D Display
A Novel Walk-through 3D Display. Stephen DiVerdi (a), Ismo Rakkolainen (a, b), Tobias Höllerer (a), Alex Olwal (a, c). (a) University of California at Santa Barbara, Santa Barbara, CA 93106, USA; (b) FogScreen Inc., Tekniikantie 12, 02150 Espoo, Finland; (c) Kungliga Tekniska Högskolan, 100 44 Stockholm, Sweden.

ABSTRACT: We present a novel walk-through 3D display based on the patented FogScreen, an "immaterial" indoor 2D projection screen, which enables high-quality projected images in free space. We extend the basic 2D FogScreen setup in three major ways. First, we use head tracking to provide correct perspective rendering for a single user. Second, we add support for multiple types of stereoscopic imagery. Third, we present the front and back views of the graphics content on the two sides of the FogScreen, so that the viewer can cross the screen to see the content from the back. The result is a wall-sized, immaterial display that creates an engaging 3D visual. Keywords: Fog screen, display technology, walk-through, two-sided, 3D, stereoscopic, volumetric, tracking.

1. INTRODUCTION: Stereoscopic images have captivated a wide scientific, media and public interest for well over 100 years. The principle of stereoscopic images was invented by Wheatstone in 1838 [1]. The general public has been excited about 3D imagery since the 19th century – 3D movies and View-Master images in the 1950's, holograms in the 1960's, and 3D computer graphics and virtual reality today. Science fiction movies and books have also featured many 3D displays, including the popular Star Wars and Star Trek series. In addition to entertainment opportunities, 3D displays also have numerous applications in scientific visualization, medical imaging, and telepresence.
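The head-tracked perspective rendering described in the abstract is typically achieved with an off-axis (asymmetric) view frustum computed from the tracked eye position relative to the physical screen. The sketch below is a generic illustration of that idea under simplifying assumptions (the screen lies in the z = 0 plane of the tracking space and spans known extents; fixed-function OpenGL is used for brevity). It is not the authors' implementation, and the parameter names are invented for the example.

```cpp
// Minimal off-axis frustum sketch for head-tracked rendering.
// Assumptions (not from the paper): the projection surface lies in the
// z = 0 plane of tracking space, spanning [xMin,xMax] x [yMin,yMax],
// and the tracked eye is at (ex, ey, ez) with ez > 0 in front of it.
#include <GL/gl.h>

void applyHeadTrackedProjection(float ex, float ey, float ez,
                                float xMin, float xMax,
                                float yMin, float yMax,
                                float zNear, float zFar) {
    // Project the screen edges (measured from the eye) onto the near plane.
    float s = zNear / ez;
    float left   = (xMin - ex) * s;
    float right  = (xMax - ex) * s;
    float bottom = (yMin - ey) * s;
    float top    = (yMax - ey) * s;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(left, right, bottom, top, zNear, zFar);

    // View transform: move the world so the eye sits at the origin,
    // looking down -z toward the screen plane.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(-ex, -ey, -ez);
}
```

Rendering the scene twice, with the eye position offset by half the interocular distance for each eye, gives the kind of stereoscopic imagery the abstract mentions.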
RealityEngine Graphics
RealityEngine Graphics. Kurt Akeley, Silicon Graphics Computer Systems.

Abstract: The RealityEngine™ graphics system is the first of a new generation of systems designed primarily to render texture mapped, antialiased polygons. This paper describes the architecture of the RealityEngine graphics system, then justifies some of the decisions made during its design. The implementation is near-massively parallel, employing 353 independent processors in its fullest configuration, resulting in a measured fill rate of over 240 million antialiased, texture mapped pixels per second. Rendering performance exceeds 1 million antialiased, texture mapped triangles per second. In addition to supporting the functions required of a general purpose, high-end graphics workstation, the system enables realtime, "out-the-window" image generation and interactive image processing. CR Categories and Subject Descriptors: I.3.1 [Computer …

From the introduction: … Silicon Graphics Iris 3000 (1985) and the Apollo DN570 (1985). Toward the end of the first-generation period advances in technology allowed lighting, smooth shading, and depth buffering to be implemented, but only with an order of magnitude less performance than was available to render flat-shaded lines and polygons. Thus the target capability of these machines remained first-generation. The Silicon Graphics 4DG (1986) is an example of such an architecture. Because first-generation machines could not efficiently eliminate hidden surfaces, and could not efficiently shade surfaces even if the application was able to eliminate them, they were more effective at rendering wireframe images than at rendering solids. Beginning in 1988 a second generation of graphics systems, primarily workstations rather than terminals, became available. These machines took advantage of reduced memory costs and the increased availability of ASICs to implement deep framebuffers with multiple rendering processors.
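The hidden-surface elimination that the excerpt says first-generation machines could not do efficiently is exactly what hardware depth buffering later made cheap. A minimal software sketch of the per-fragment depth test is shown below; it is a generic illustration, not RealityEngine's implementation, which performs the equivalent test per subsample for antialiasing.

```cpp
// Minimal per-fragment depth-buffer (z-buffer) test: keep a fragment only
// if it is nearer than what the framebuffer already holds at that pixel.
// Generic sketch; multisampling, clipping, and blending are not modeled.
#include <cstdint>
#include <limits>
#include <vector>

struct Framebuffer {
    int width, height;
    std::vector<uint32_t> color;
    std::vector<float>    depth;

    Framebuffer(int w, int h)
        : width(w), height(h),
          color(w * h, 0),
          depth(w * h, std::numeric_limits<float>::infinity()) {}

    // Write the fragment only if it passes the depth test.
    void writeFragment(int x, int y, float z, uint32_t rgba) {
        int i = y * width + x;
        if (z < depth[i]) {   // nearer than the stored depth?
            depth[i] = z;     // update hidden-surface information
            color[i] = rgba;  // and the visible color
        }
    }
};
```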
State-of-the-Art in Holography and Auto-Stereoscopic Displays
State-of-the-art in Holography and Auto-stereoscopic Displays. Daniel Jönsson, 2019-05-13. Contents: Introduction; Auto-stereoscopic Displays (Two-View Autostereoscopic Displays; Multi-view Autostereoscopic Displays; Light Field Displays); Market (Display Panels; AR); Application Fields; Companies …
Demystifying the Future of the Screen
Demystifying the Future of the Screen, by Natasha Dinyar Mody. A thesis exhibition presented to OCAD University in partial fulfillment of the requirements for the degree of Master of Design in Digital Futures. 49 McCaul St, April 12-15, 2018, Toronto, Ontario, Canada. © Natasha Dinyar Mody, 2018.

AUTHOR'S DECLARATION: I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including any required final revisions, as accepted by my examiners. I authorize OCAD University to lend this thesis to other institutions or individuals for the purpose of scholarly research. I understand that my thesis may be made electronically available to the public. I further authorize OCAD University to reproduce this thesis by photocopying or by other means, in total or in part, at the request of other institutions or individuals for the purpose of scholarly research.

ABSTRACT: Natasha Dinyar Mody, 'Demystifying the Future of the Screen,' Master of Design, Digital Futures, 2018, OCAD University. Demystifying the Future of the Screen explores the creation of a 3D representation of volumetric display (a graphical display device that produces 3D objects in mid-air), a technology that doesn't yet exist in the consumer realm, using current technologies. It investigates the conceptual possibilities and technical challenges of prototyping a future, speculative technology with currently available materials. Cultural precedents, technical antecedents, economic challenges, and industry adaptation all contribute to this thesis proposal. It pedals back to the past to examine the probable widespread integration of this future technology. By employing a detailed horizon scan, analyzing science fiction theories, and extensive user testing, I fabricated a prototype that simulates an immersive volumetric display experience, using a holographic display fan.
Vers des supports d'exécution capables d'exploiter les machines multicœurs hétérogènes (Toward runtime systems able to exploit heterogeneous multicore machines)
Vers des supports d'exécution capables d'exploiter les machines multicœurs hétérogènes (Toward runtime systems able to exploit heterogeneous multicore machines), Cédric Augonnet.

To cite this version: Cédric Augonnet. Vers des supports d'exécution capables d'exploiter les machines multicœurs hétérogènes. [Travaux universitaires] 2008, pp. 48. inria-00289361. HAL Id: inria-00289361, https://hal.inria.fr/inria-00289361, submitted on 20 Jun 2008. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Master's research thesis (Mémoire de master recherche) presented by Cédric Augonnet, advised by Raymond Namyst, February-June 2008. Laboratoire Bordelais de Recherche en Informatique (LaBRI), INRIA Bordeaux, Université Bordeaux 1.

Table of contents: Acknowledgements; 1 Introduction; 2 State of the art; 2.1 Architecture: from multicore to heterogeneous (2.1.1 Adoption of multicore; 2.1.2 Use of accelerators; 2.1.3 Toward heterogeneous multicore); 2.2 Homogeneous multicore programming (2.2.1 Explicit management of parallelism; 2.2.2 Relying on a language); 2.3 Programming accelerators (2.3.1 GPGPU …)
NETRA: Interactive Display for Estimating Refractive Errors and Focal Range
NETRA: Interactive Display for Estimating Refractive Errors and Focal Range. The MIT Faculty has made this article openly available. Citation: Vitor F. Pamplona, Ankit Mohan, Manuel M. Oliveira, and Ramesh Raskar. 2010. NETRA: interactive display for estimating refractive errors and focal range. In ACM SIGGRAPH 2010 Papers (SIGGRAPH '10), Hugues Hoppe (Ed.). ACM, New York, NY, USA, Article 77, 8 pages. As published: http://dx.doi.org/10.1145/1778765.1778814. Publisher: Association for Computing Machinery (ACM). Version: author's final manuscript. Citable link: http://hdl.handle.net/1721.1/80392. Terms of use: Creative Commons Attribution-Noncommercial-Share Alike 3.0, http://creativecommons.org/licenses/by-nc-sa/3.0/.

Authors: Vitor F. Pamplona (1, 2), Ankit Mohan (1), Manuel M. Oliveira (1, 2), Ramesh Raskar (1); (1) Camera Culture Group, MIT Media Lab; (2) Instituto de Informática, UFRGS. http://cameraculture.media.mit.edu/netra

Abstract: We introduce an interactive, portable, and inexpensive solution for estimating refractive errors in the human eye. While expensive optical devices for automatic estimation of refractive correction exist, our goal is to greatly simplify the mechanism by putting the human subject in the loop. Our solution is based on a high-resolution programmable display and combines inexpensive optical elements, interactive GUI, and computational reconstruction. The key idea is to interface a lenticular view-dependent display with the human eye in close range - a few millimeters apart. Via this platform, we create a new range of interactivity that is extremely sensitive to parameters of the human eye, like refractive errors, focal range, focusing speed, lens opacity, etc.
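As rough context for the quantities NETRA estimates (and explicitly not the paper's measurement algorithm), refractive error and focal range are stated in diopters, the reciprocal of a focusing distance in meters. The toy helper below only makes that bookkeeping concrete, with made-up example numbers.

```cpp
// Illustrative only: converts focusing distances to diopters to show what
// "refractive error" and "focal range" mean numerically. Basic optics
// bookkeeping, not NETRA's interactive reconstruction.
#include <cstdio>

// Vergence (in diopters, D) of light from a point at distanceMeters.
double diopters(double distanceMeters) {
    return 1.0 / distanceMeters;
}

int main() {
    // An eye that can focus from 0.25 m (near point) to 2.0 m (far point)
    // has an accommodation (focal) range of 1/0.25 - 1/2.0 = 3.5 D.
    double nearPoint = 0.25, farPoint = 2.0;
    std::printf("focal range: %.2f D\n",
                diopters(nearPoint) - diopters(farPoint));

    // A -2 D myope has a far point at 1/2 = 0.5 m; a -2 D corrective lens
    // makes distant objects (0 D vergence) appear to come from there.
    std::printf("far point of a -2 D myope: %.2f m\n", 1.0 / 2.0);
    return 0;
}
```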
Present Status of 3D Display
Recent 3D Display Technologies (excluding holography, which is covered in a later lecture). Byoungho Lee, School of Electrical Engineering, Seoul National University, Seoul, Korea.

Contents: Introduction to 3D display; Present status of 3D display; Hardware systems (stereoscopic display, autostereoscopic display, volumetric display, other recent techniques); Software (3D information processing: depth extraction, depth-plane image reconstruction, view image reconstruction; 3D correlator using 2D sub-images; 2D-to-3D conversion).

Outline of presentation (diagram): image pickup devices feed display devices (stereoscopic and autostereoscopic displays, high-resolution pickup) and image processing (2D to 3D), which reach the consumer, where depth perception and visual fatigue are the concerns.

Brief history of 3D display (timeline): stereoscope, Wheatstone (1838); lenticular stereoscope (prism), Brewster (1844); autostereoscopic display, Maxwell (1868); stereoscopic movie camera, Edison & Dickson (1891); anaglyph, Du Hauron (1891); 3D movie, L'arrivée du train (1903); integral photography, Lippmann (1908); lenticular, Hess (1915); parallax barrier, Kanolt (1915); hologram, Gabor (1948); Integram, de Montebello (1970); electro-holography, Benton (1989).

3D movies: Star Wars (1977), Minority Report (2002), Superman Returns (2006), Avatar (2009).

Cues for depth perception of the human visual system (I): physiological cues (accommodation, convergence, binocular parallax, motion …); psychological cues (linear perspective, overlapping/occlusion, shading and shadow, …)
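Among the physiological cues listed, binocular parallax is the one stereoscopic displays reproduce directly: each eye sees a view whose horizontal on-screen offset encodes depth. The sketch below works out that geometry with a standard textbook model (similar triangles between the two eyes and the screen plane); the interocular and viewing distances are illustrative values, not figures from the lecture.

```cpp
// Screen disparity for a point at depth zObject when the viewer sits at
// distance zScreen from the display, with interocular distance ipd.
// Positive values mean uncrossed disparity (point behind the screen),
// negative values crossed disparity (point in front of the screen).
// Generic stereoscopic-display geometry, not taken from the lecture.
#include <cstdio>

double screenDisparity(double ipd, double zScreen, double zObject) {
    return ipd * (zObject - zScreen) / zObject;
}

int main() {
    double ipd = 0.065;   // 65 mm interocular distance (typical adult value)
    double zScreen = 2.0; // viewer 2 m from the screen
    // A point 1 m behind the screen plane (3 m from the viewer):
    // 0.065 * (3 - 2) / 3 ~= 0.0217 m, i.e. about 21.7 mm of uncrossed disparity.
    std::printf("disparity: %.1f mm\n",
                1000.0 * screenDisparity(ipd, zScreen, 3.0));
    return 0;
}
```

Crossed or uncrossed disparity places content off the screen plane while the eyes still accommodate to the screen itself, which is the accommodation-convergence mismatch behind the visual fatigue noted in the outline.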
Applications of Pixel Textures in Visualization and Realistic Image Synthesis
Applications of Pixel Textures in Visualization and Realistic Image Synthesis. Wolfgang Heidrich, Rüdiger Westermann, Hans-Peter Seidel, Thomas Ertl, Computer Graphics Group, University of Erlangen.

Abstract: With fast 3D graphics becoming more and more available even on low-end platforms, the focus in developing new graphics hardware is beginning to shift towards higher quality rendering and additional functionality instead of simply higher performance implementations of the traditional graphics pipeline. On this search for improved quality it is important to identify a powerful set of orthogonal features to be implemented in hardware, which can then be flexibly combined to form new algorithms. Pixel textures are an OpenGL extension by Silicon Graphics that fits into this category. In this paper, we demonstrate the benefits of this extension by presenting several different algorithms exploiting its functionality to achieve high quality, high performance solutions for a variety of different applications from scientific visualization and realistic image synthesis.

From the introduction: … which can be used for volume rendering, and the imaging subset, a set of extensions useful not only for image processing, have been added in this version of the specification. Bump mapping and procedural shaders are only two examples for features that are likely to be implemented at some point in the future. On this search for improved quality it is important to identify a powerful set of orthogonal building blocks to be implemented in hardware, which can then be flexibly combined to form new algorithms. We think that the pixel texture extension by Silicon Graphics [9, 12] is a building block that can be useful for many applications, especially when combined with the imaging subset. In this paper, we use pixel textures to implement four different algorithms for applications from visualization and realistic image synthesis: fast line integral convolution (Section 3), shadow mapping (Section 4), realistic fog models (Section 5), and finally en…
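Of the four applications listed in the introduction, shadow mapping is the simplest to sketch: a depth image rendered from the light's viewpoint records the nearest occluder along each light ray, and a surface point is shadowed when it lies farther from the light than that stored depth. The CPU-side test below is a generic illustration of this comparison only; the paper's contribution is performing it with the SGI pixel texture extension, which is not shown here.

```cpp
// Generic shadow-map lookup: a point is lit only if it is not farther
// from the light than the nearest occluder recorded in the depth map.
// Illustrative CPU sketch, not the paper's pixel-texture implementation.
#include <vector>

struct ShadowMap {
    int size;                  // square depth map, size x size texels
    std::vector<float> depth;  // depth from the light, in light space

    // lx, ly in [0,1] are the point's coordinates in the light's image
    // plane; lz is its depth from the light. `bias` avoids self-shadowing
    // ("shadow acne") caused by limited depth precision.
    bool inShadow(float lx, float ly, float lz, float bias = 1e-3f) const {
        int x = static_cast<int>(lx * (size - 1));
        int y = static_cast<int>(ly * (size - 1));
        float nearestOccluder = depth[y * size + x];
        return lz - bias > nearestOccluder;
    }
};
```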
Information Display Magazine September/October V34 N5 2018
Cover advertisement (Radiant): The future looks radiant. Ensuring quality for the next generation of automotive displays. Radiant light and color measurement solutions replicate human visual perception to evaluate new technologies like head-up and free-form displays. Visit Radiant at Table #46, Vehicle Displays Detroit, Sept. 25-26, Livonia, Michigan.

Information Display, Society for Information Display (SID), September/October 2018, Vol. 34, No. 5.

On the cover: Scenes from Display Week 2018 in Los Angeles include (center and then clockwise starting at upper right): the exhibit hall floor; AUO's 8-in. microLED display (Photo: AUO); JDI's cockpit demo, with dashboard and center-console displays (Photo: Karlheinz Blankenbach); the 2018 I-Zone; LG Display's flexible OLED screen (Photo: Ken Werner); Women in Tech second annual conference, with moderator and panelists; the entrance to the conference center at Display Week 2018; foldable AMOLED e-Book (Photo: Visionox).

Contents: 2 Editorial: Looking Back at Display Week and Summer, Looking Ahead to Fall, by Stephen Atwood; 4 President's Corner: Goals for a Sustainable Society, by Helge Seetzen; 6 Industry News, by Jenny Donelan; 8 Display Week Review: Best in Show Winners. The Society for Information Display honored four exhibiting companies with Best in Show awards at Display Week 2018 in Los Angeles: Ares Materials, AU Optronics, Tianma, and Visionox, by Jenny Donelan; 10 Display Week Review: Emissive Materials Generate Excitement at the Show. MicroLEDs created the most buzz at Display Week 2018, but quantum dots and OLEDs sparked a lot of interest too.
Rasterization Pipeline Aaron Lefohn - Intel / University of Washington Mike Houston – AMD / Stanford
A Trip Down the (2003) Rasterization Pipeline. Aaron Lefohn (Intel / University of Washington), Mike Houston (AMD / Stanford). Winter 2011 – Beyond Programmable Shading.

Acknowledgements: In addition to a little content by Aaron Lefohn and Mike Houston, this slide deck is based on slides from Tomas Akenine-Möller (Lund University / Intel), Eric Demers (AMD), and Kurt Akeley (Microsoft / Refocus Imaging), CS248 Autumn Quarter 2007.

This talk: an overview of the real-time rendering pipeline available in roughly 2003, corresponding to the graphics APIs DirectX 9 and OpenGL 2.x. To clarify, there are many rendering pipelines in existence (REYES, ray tracing, DirectX 11, …); today's lecture is about the ~2003 GPU hardware rendering pipeline.

If you need a deeper refresher: see Kurt Akeley's CS248 from Stanford (http://www-graphics.stanford.edu/courses/cs248-07/schedule.php); this material should serve as a solid refresher. For an excellent "quick" review of programmable shading in OpenCL, see Andrew Adams' lecture at the above link. GLSL tutorial: http://www.lighthouse3d.com/opengl/glsl/. Direct3D 9 tutorials: http://www.directxtutorial.com/ and http://msdn.microsoft.com/en-us/library/bb944006(v=vs.85).aspx. More references at the end of this deck.

The General Rasterization Pipeline. Rendering problem statement: rendering is the process of creating an image from a computer representation …
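To anchor the pipeline the slides go on to describe, the heart of the rasterization stage can be expressed with edge functions: a sample is inside a triangle when it lies on the interior side of all three edges. The deliberately simplified software version below illustrates just that coverage-and-shade step; it omits vertex transformation, clipping, the depth test, blending, and the top-left fill rule that the DirectX 9 / OpenGL 2.x hardware pipeline handles.

```cpp
// Minimal software rasterizer core: edge-function coverage test for one
// triangle over a framebuffer of width x height pixels, filling covered
// pixels with a flat color. Sketch only; not the hardware pipeline.
#include <cstdint>
#include <vector>

struct Vec2 { float x, y; };

// Twice the signed area of triangle (a, b, c); its sign says which side
// of the directed edge a->b the point c lies on.
static float edge(const Vec2& a, const Vec2& b, const Vec2& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

void rasterizeTriangle(const Vec2& v0, const Vec2& v1, const Vec2& v2,
                       uint32_t color, int width, int height,
                       std::vector<uint32_t>& framebuffer) {
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            Vec2 p{x + 0.5f, y + 0.5f};      // sample at the pixel center
            float w0 = edge(v1, v2, p);
            float w1 = edge(v2, v0, p);
            float w2 = edge(v0, v1, p);
            // Inside if the sample is on the same side of all three edges
            // (counter-clockwise triangles give all non-negative weights).
            if (w0 >= 0 && w1 >= 0 && w2 >= 0)
                framebuffer[y * width + x] = color;  // "fragment shading"
        }
    }
}
```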
Kurt Akeley Interview
Kurt Akeley Interview. Interviewed by Dag Spicer. Recorded: November 5, 2010, Mountain View, California. CHM Reference number: X5984.2011. © 2010 Computer History Museum.

Dag Spicer: Okay. So today is Friday, November the 11th, 2010 in Mountain View, California at the Computer History Museum and we're delighted today to have Kurt Akeley, one of the co-founders of Silicon Graphics [Silicon Graphics, Inc.]--a world leader in computer graphic systems, with us today. Thank you for being with us, Kurt.

Kurt Akeley: You're very welcome.

Spicer: I wanted to ask you what it was like at SGI [Silicon Graphics, Inc.] in the early days in terms of the work ethic and company culture.

Akeley: Yeah. Well, we… there was a strong work ethic. We all worked really hard, but I think, right from the beginning we also worked hard and played hard. And it was a very collegial environment. We were-- a lot of us weren't very old-- I mean, Jim Clark, the real founder of the company, was, I don't know, 35 or something, I forget how old at that point. But the rest of us were quite a bit younger. A lot of us were just either graduate students or very recently graduate students. So young, not so much family, and so I think the company became our home as well as our place of work. And Jim, at least to me, [became] very much of a father figure. He was out to do something great and he was very ambitious about it but he was-- I think he really enjoyed building this little family and kind of bringing us with him where we were going.