Achieving Near-Correct Focus Cues Using Multiple Image Planes


ACHIEVING NEAR-CORRECT FOCUS CUES USING MULTIPLE IMAGE PLANES

A DISSERTATION SUBMITTED TO THE DEPARTMENT OF ELECTRICAL ENGINEERING AND THE COMMITTEE ON GRADUATE STUDIES OF STANFORD UNIVERSITY IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

Kurt Akeley
June 2004

© Copyright by Kurt Akeley 2004. All Rights Reserved.

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.
Patrick Hanrahan (Principal Adviser)

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.
Martin S. Banks (Visual Space Perception Laboratory, University of California, Berkeley)

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.
Mark A. Horowitz

Approved for the University Committee on Graduate Studies.

Abstract

Typical stereo displays stimulate incorrect focus cues because the light comes from a single surface. The consequences of incorrect focus cues include discomfort, difficulty perceiving the depth of stereo images, and incorrect perception of distance. This thesis describes a prototype stereo display comprising two independent volumetric displays. Each volumetric display is implemented with three image planes, which are fixed in position relative to the viewer, and are arranged so that the retinal image is the sum of light from three different focal distances. Scene geometry is rendered separately to the two fixed-viewpoint volumetric displays, using projections that are exact for each eye.
Rendering is filtered in the depth dimension such that object radiance is apportioned to the two nearest image planes, in linear proportion based on reciprocal distance. Fixed-viewpoint volumetric displays with adequate depth resolution are shown to generate near-correct stimulation of focus cues. Depth filtering, which is necessary to avoid visible artifacts, also improves the accuracy of the stimulus that directs changes in the focus of the eye. Specifically, the stimulus generated by depth filtering an object whose simulated distance falls between image-plane distances closely matches the stimulus that would be generated by an image plane at the desired distance. Viewers of the prototype display required substantially less time to perceive the depth of stereo images that were rendered with depth filtering to approximate correct focal distance.

Fixed-viewpoint volumetric displays are shown to be a potentially practical solution for virtual reality viewing. In addition to near-correct stimulation of focus cues, and unlike more familiar autostereoscopic volumetric displays, fixed-viewpoint volumetric displays retain important qualities of 3-D projective graphics. These include correct depiction of occlusions and reflections, utilization of modern graphics processor and 2-D display technology, and implementation of realistic fields of view and depths of field. The design and verification of the prototype display are fully described. While not a practical solution for general-purpose viewing, the prototype is a proof of concept and a platform for ongoing vision research.

Acknowledgement

I have been blessed with wonderful parents, teachers, mentors, and colleagues. My parents, David and Marcy Akeley, inspired my appreciation of education, allowed me to disassemble the family car and washing machine, and brought our family together for thousands of evening meals where we shared the excitements of our day.
Peter Warter, chairman of the University of Delaware electrical engineering department when I was an undergraduate there during the late ’70s, trusted me with project work at the graduate-student level, took me into his family, and encouraged me to attend graduate school at Stanford. James Clark, a new faculty member in the Stanford electrical engineering department when I met him in 1980, created the professional opportunities at Silicon Graphics that made my career in computer graphics possible. When I left Silicon Graphics after 20 years, Mark Horowitz and Pat Hanrahan made it possible for me to return to Stanford to complete my degree; and Martin Banks provided me with my thesis topic and the laboratory infrastructure at Berkeley in which much of the work was done. Without the support of all of these people this thesis would not have been possible.

Many colleagues contributed directly to my thesis work. Simon Watt taught me the basics of vision-science experimental methods, and ran many of the experiments that are described in this thesis. Ahna Reza Girshick picked up where Simon left off, designing and running experiments that will be reported in future publications. Sergei Gepshtein improved my MATLAB proficiency with many useful tips and techniques. Ian Buck shared his deep knowledge of modern graphics systems with me, and set up the environment that allowed this old-time Unix user to run make and vi on his PC. Mike Cammarano helped to define the external image interface to the prototype software, and provided the ray traced images that are included in this thesis. And NVIDIA, my part-time employer, gave me the schedule flexibility I needed to get my thesis work done.

My immediate family supported me emotionally, financially, and parentally throughout this work. Jian Zhao and Manli Wei, my parents-in-law, created an extended family that gave my children four parents, allowing me to spend many long days in the labs at Stanford and Berkeley.
My deepest appreciation and love go to my wife, Jenny Zhao, and to my children, David and Scarlett. They gave me the freedom to go back to school, and the love and support that allowed me to enjoy every moment of it.

Contents

Abstract
Acknowledgement
1 Introduction
  1.1 Related Work
  1.2 My Contribution
2 Fixed-viewpoint Volumetric Display
  2.1 Principles and Optimizations
    2.1.1 Non-homogeneous Voxel Distribution
    2.1.2 Collapsing the Image Stack
    2.1.3 Solid-angle Filtering
    2.1.4 Reduced Depth Resolution
    2.1.5 Wide Field of View
  2.2 Attributes
    2.2.1 Tractable Voxel Count
    2.2.2 Standard Components
    2.2.3 Multiple Focal Distances
    2.2.4 No Eye Tracking
  2.3 Concerns
    2.3.1 Aperture-related Lighting Errors
    2.3.2 Viewpoint Instability
    2.3.3 Silhouette Artifacts
    2.3.4 Incorrect Retinal Focus Cues
    2.3.5 Head Mounting
3 Prototype Display
  3.1 Design Decisions
  3.2 Implementation Details
    3.2.1 Overlapping Fields of View
    3.2.2 Ergonomics
    3.2.3 Software
    3.2.4 Construction
  3.3 Validation
    3.3.1 Intensity Constancy
    3.3.2 Image Alignment
    3.3.3 Silhouette Visibility
    3.3.4 High-quality Imagery
  3.4 Issues
4 User Performance
  4.1 The Fuse Experiment
  4.2 Fuse Experiment Results
  4.3 Results Related to Analysis
5 Discussion and Future Work
A Apparent Focal Distance
B Blur Radius
C Incorrect Accommodation
D Summed Images
E Dilated Pupil
F Software Configuration
G Design Drawings
H Prototype Display Specifications
I Glossary
Bibliography

List of Tables
2.1 Box depth-filter discontinuity as a function of depth resolution
2.2 Relationship between vergence angle and fixation distance
3.1 Prototype image-plane distances
D.1 ATF-sum maxima distances for various intensity ratios and spatial frequencies
H.1 Prototype specifications
H.2 Prototype spatial resolution

List of Figures
2.1 View-dependent lighting effects
2.2 Actual and ideal linespread functions
2.3 Modulation transfer functions of the actual and ideal linespread functions
2.4 Ideal display
2.5 Box filter shapes
2.6 Voxel lighting with box and tent depth filters
2.7 Box and tent depth-filter shapes
2.8 Modeling the visual discontinuity due to box depth filtering
2.9 Tent depth filtering eliminates visual discontinuities
2.10 Multiple focal distances along a visual line
2.11 Alignment error due to optical-center movement
2.12 Intensity discontinuity at foreground/background transitions
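The depth filtering the abstract describes — apportioning an object's radiance to the two image planes that bracket it, in linear proportion to reciprocal distance — can be sketched in a few lines. This is a hypothetical helper of my own, not the thesis's code, and the plane distances in the example are illustrative rather than the prototype's actual values:

```python
def depth_filter_weights(obj_dist_m, near_plane_m, far_plane_m):
    """Split an object's radiance between the two image planes that
    bracket it, linearly in reciprocal distance (diopters).
    Returns (weight_near, weight_far)."""
    obj_d, near_d, far_d = 1 / obj_dist_m, 1 / near_plane_m, 1 / far_plane_m
    # Linear interpolation in diopter space: 0 at the near plane, 1 at the far plane.
    t = (near_d - obj_d) / (near_d - far_d)
    t = min(max(t, 0.0), 1.0)
    return (1.0 - t, t)
```

An object dioptrically midway between planes at 0.5 m (2 D) and 1.0 m (1 D) — i.e., at 1/1.5 m — receives equal weights; the two weights always sum to one, so total radiance is preserved as an object moves between planes.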
Recommended publications
  • 20 Years of OpenGL
20 Years of OpenGL
Kurt Akeley
© Copyright Khronos Group, 2010

So many deprecations!
• Application-generated object names
• Depth texture mode
• Color index mode
• Texture wrap mode
• SL versions 1.10 and 1.20
• Texture borders
• Begin / End primitive specification
• Automatic mipmap generation
• Edge flags
• Fixed-function fragment processing
• Client vertex arrays
• Alpha test
• Rectangles
• Accumulation buffers
• Current raster position
• Pixel copying
• Two-sided color selection
• Auxiliary color buffers
• Non-sprite points
• Context framebuffer size queries
• Wide lines and line stipple
• Evaluators
• Quad and polygon primitives
• Selection and feedback modes
• Separate polygon draw mode
• Display lists
• Polygon stipple
• Hints
• Pixel transfer modes and operation
• Attribute stacks
• Pixel drawing
• Unified text string
• Bitmaps
• Token names and queries
• Legacy pixel formats

Technology and culture

Technology

OpenGL is an architecture, in the Blaauw/Brooks sense:
• Different implementations — IBM 360: models 30/40/50/65/75, Amdahl; OpenGL: SGI Indy/Indigo/InfiniteReality, NVIDIA GeForce, ATI Radeon, …
• Top-level goal — IBM 360: compatibility; OpenGL: code runs equivalently on all implementations (conformance tests, …)
• Intentional design — IBM 360: carefully planned, though mistakes were made; OpenGL: it’s an architecture, whether it was planned or not
• Configuration — IBM 360: can vary amount of resource (e.g., memory); OpenGL: no feature subsetting, config attributes (e.g., FB)
• Not a formal …
  • A Novel Walk-Through 3D Display
A Novel Walk-through 3D Display
Stephen DiVerdi (a), Ismo Rakkolainen (a,b), Tobias Höllerer (a), Alex Olwal (a,c)
(a) University of California at Santa Barbara, Santa Barbara, CA 93106, USA
(b) FogScreen Inc., Tekniikantie 12, 02150 Espoo, Finland
(c) Kungliga Tekniska Högskolan, 100 44 Stockholm, Sweden

ABSTRACT
We present a novel walk-through 3D display based on the patented FogScreen, an “immaterial” indoor 2D projection screen, which enables high-quality projected images in free space. We extend the basic 2D FogScreen setup in three major ways. First, we use head tracking to provide correct perspective rendering for a single user. Second, we add support for multiple types of stereoscopic imagery. Third, we present the front and back views of the graphics content on the two sides of the FogScreen, so that the viewer can cross the screen to see the content from the back. The result is a wall-sized, immaterial display that creates an engaging 3D visual.

Keywords: Fog screen, display technology, walk-through, two-sided, 3D, stereoscopic, volumetric, tracking

1. INTRODUCTION
Stereoscopic images have captivated a wide scientific, media and public interest for well over 100 years. The principle of stereoscopic images was invented by Wheatstone in 1838 [1]. The general public has been excited about 3D imagery since the 19th century – 3D movies and View-Master images in the 1950's, holograms in the 1960's, and 3D computer graphics and virtual reality today. Science fiction movies and books have also featured many 3D displays, including the popular Star Wars and Star Trek series. In addition to entertainment opportunities, 3D displays also have numerous applications in scientific visualization, medical imaging, and telepresence.
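Head tracking enables correct perspective for a single user because the projection frustum can be skewed to follow the eye. A minimal sketch of such a head-coupled, off-axis projection (my own illustration, not the paper's code) assumes the screen is centered at the origin in the z = 0 plane and the tracked eye sits at z > 0:

```python
def off_axis_frustum(eye, screen_w, screen_h, near):
    """Asymmetric view-frustum extents for a head-tracked viewer of a
    fixed screen. Returns (left, right, bottom, top) at the near plane,
    suitable for a glFrustum-style projection."""
    ex, ey, ez = eye
    scale = near / ez  # similar triangles: near plane vs. screen plane
    left = (-screen_w / 2 - ex) * scale
    right = (screen_w / 2 - ex) * scale
    bottom = (-screen_h / 2 - ey) * scale
    top = (screen_h / 2 - ey) * scale
    return left, right, bottom, top
```

When the eye is centered the frustum is symmetric; as the viewer walks to one side, the frustum skews so the rendered image stays registered with the physical screen.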
  • RealityEngine Graphics
RealityEngine Graphics
Kurt Akeley
Silicon Graphics Computer Systems

Abstract
The RealityEngine™ graphics system is the first of a new generation of systems designed primarily to render texture mapped, antialiased polygons. This paper describes the architecture of the RealityEngine graphics system, then justifies some of the decisions made during its design. The implementation is near-massively parallel, employing 353 independent processors in its fullest configuration, resulting in a measured fill rate of over 240 million antialiased, texture mapped pixels per second. Rendering performance exceeds 1 million antialiased, texture mapped triangles per second. In addition to supporting the functions required of a general purpose, high-end graphics workstation, the system enables realtime, “out-the-window” image generation and interactive image processing.

CR Categories and Subject Descriptors: I.3.1 [Computer …

Silicon Graphics Iris 3000 (1985) and the Apollo DN570 (1985). Toward the end of the first-generation period advances in technology allowed lighting, smooth shading, and depth buffering to be implemented, but only with an order of magnitude less performance than was available to render flat-shaded lines and polygons. Thus the target capability of these machines remained first-generation. The Silicon Graphics 4DG (1986) is an example of such an architecture. Because first-generation machines could not efficiently eliminate hidden surfaces, and could not efficiently shade surfaces even if the application was able to eliminate them, they were more effective at rendering wireframe images than at rendering solids. Beginning in 1988 a second-generation of graphics systems, primarily workstations rather than terminals, became available. These machines took advantage of reduced memory costs and the increased availability of ASICs to implement deep framebuffers with multiple rendering processors.
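As a back-of-envelope check (my arithmetic, not the paper's), the two quoted peak rates are mutually consistent only when the average rendered triangle covers a couple hundred pixels:

```python
fill_rate = 240e6     # peak antialiased, texture-mapped pixels per second (quoted)
triangle_rate = 1e6   # peak antialiased, texture-mapped triangles per second (quoted)

# Pixels available per triangle when both peaks are sustained simultaneously.
pixels_per_triangle = fill_rate / triangle_rate
print(pixels_per_triangle)  # 240.0
```

Smaller triangles would leave fill rate idle; larger ones would leave triangle rate idle — a trade-off that still shapes GPU benchmarking today.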
  • State-Of-The-Art in Holography and Auto-Stereoscopic Displays
State-of-the-art in holography and auto-stereoscopic displays
Daniel Jönsson
2019-05-13

Contents
Introduction
Auto-stereoscopic displays
  Two-View Autostereoscopic Displays
  Multi-view Autostereoscopic Displays
  Light Field Displays
Market
  Display panels
  AR
Application Fields
Companies
  • Demystifying the Future of the Screen
    Demystifying the Future of the Screen by Natasha Dinyar Mody A thesis exhibition presented to OCAD University in partial fulfillment of the requirements for the degree of Master of Design in DIGITAL FUTURES 49 McCaul St, April 12-15, 2018 Toronto, Ontario, Canada, April, 2018 © Natasha Dinyar Mody, 2018 AUTHOR’S DECLARATION I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including any required final revisions, as accepted by my examiners. I authorize OCAD University to lend this thesis to other institutions or individuals for the purpose of scholarly research. I understand that my thesis may be made electronically available to the public. I further authorize OCAD University to reproduce this thesis by photocopying or by other means, in total or in part, at the request of other institutions or individuals for the purpose of scholarly research. Signature: ii ABSTRACT Natasha Dinyar Mody ‘Demystifying the Future of the Screen’ Master of Design, Digital Futures, 2018 OCAD University Demystifying the Future of the Screen explores the creation of a 3D representation of volumetric display (a graphical display device that produces 3D objects in mid-air), a technology that doesn’t yet exist in the consumer realm, using current technologies. It investigates the conceptual possibilities and technical challenges of prototyping a future, speculative, technology with current available materials. Cultural precedents, technical antecedents, economic challenges, and industry adaptation, all contribute to this thesis proposal. It pedals back to the past to examine the probable widespread integration of this future technology. By employing a detailed horizon scan, analyzing science fiction theories, and extensive user testing, I fabricated a prototype that simulates an immersive volumetric display experience, using a holographic display fan.
  • Vers Des Supports D'exécution Capables D'exploiter Les Machines
Vers des supports d’exécution capables d’exploiter les machines multicœurs hétérogènes (Toward runtime systems able to exploit heterogeneous multicore machines)

Cédric Augonnet. Vers des supports d’exécution capables d’exploiter les machines multicœurs hétérogènes. [Travaux universitaires] 2008, pp. 48. inria-00289361
HAL Id: inria-00289361
https://hal.inria.fr/inria-00289361
Submitted on 20 Jun 2008

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Master's research thesis presented by Cédric Augonnet. Advisor: Raymond Namyst. February–June 2008. Laboratoire Bordelais de Recherche en Informatique (LaBRI), INRIA Bordeaux, Université Bordeaux 1.

Contents
Acknowledgements
1 Introduction
2 State of the art
  2.1 Architecture: from multicore to heterogeneous
    2.1.1 Adoption of multicore
    2.1.2 Use of accelerators
    2.1.3 Toward heterogeneous multicore
  2.2 Homogeneous multicore programming
    2.2.1 Explicit parallelism management
    2.2.2 Relying on a language
  2.3 Accelerator programming
    2.3.1 GPGPU …
  • NETRA: Interactive Display for Estimating Refractive Errors and Focal Range
NETRA: Interactive Display for Estimating Refractive Errors and Focal Range

The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters.

Citation: Vitor F. Pamplona, Ankit Mohan, Manuel M. Oliveira, and Ramesh Raskar. 2010. NETRA: interactive display for estimating refractive errors and focal range. In ACM SIGGRAPH 2010 papers (SIGGRAPH '10), Hugues Hoppe (Ed.). ACM, New York, NY, USA, Article 77, 8 pages.
As Published: http://dx.doi.org/10.1145/1778765.1778814
Publisher: Association for Computing Machinery (ACM)
Version: Author's final manuscript
Citable link: http://hdl.handle.net/1721.1/80392
Terms of Use: Creative Commons Attribution-Noncommercial-Share Alike 3.0
Detailed Terms: http://creativecommons.org/licenses/by-nc-sa/3.0/

Vitor F. Pamplona (1,2), Ankit Mohan (1), Manuel M. Oliveira (1,2), Ramesh Raskar (1)
(1) Camera Culture Group, MIT Media Lab; (2) Instituto de Informática, UFRGS
http://cameraculture.media.mit.edu/netra

Abstract
We introduce an interactive, portable, and inexpensive solution for estimating refractive errors in the human eye. While expensive optical devices for automatic estimation of refractive correction exist, our goal is to greatly simplify the mechanism by putting the human subject in the loop. Our solution is based on a high-resolution programmable display and combines inexpensive optical elements, interactive GUI, and computational reconstruction. The key idea is to interface a lenticular view-dependent display with the human eye in close range – a few millimeters apart. Via this platform, we create a new range of interactivity that is extremely sensitive to parameters of the human eye, like refractive errors, focal range, focusing speed, lens opacity, etc.
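The estimate such a tool produces ultimately reduces to diopter arithmetic. As a simplified illustration of my own (not NETRA's actual reconstruction algorithm): a myopic eye focuses no farther than its far point, and the corrective spherical power is approximately minus the reciprocal of that distance in meters:

```python
def spherical_correction_diopters(far_point_m):
    """First-order spherical correction for a myopic eye whose far
    point lies at far_point_m meters: power (D) = -1 / distance (m).
    E.g. a far point of 0.5 m implies roughly a -2 D prescription."""
    return -1.0 / far_point_m
```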
  • Present Status of 3D Display
Recent 3D Display Technologies (Excluding Holography [that will be lectured later])
Byoungho Lee
School of Electrical Engineering, Seoul National University, Seoul, Korea
[email protected]

Contents
• Introduction to 3D display
• Present status of 3D display
  • Hardware system: stereoscopic display, autostereoscopic display, volumetric display, other recent techniques
  • Software: 3D information processing (depth extraction, depth-plane image reconstruction, view-image reconstruction), 3D correlator using 2D sub-images, 2D-to-3D conversion

Outline of presentation
• Display device (3D display): stereoscopic display, autostereoscopic display; high-resolution pickup device
• Image pickup and consumer: depth perception, image processing (2D : 3D), visual fatigue

Brief history of 3D display (1830–2000)
• Stereoscope: Wheatstone (1838)
• Lenticular stereoscope (prism): Brewster (1844)
• Autostereoscopic: Maxwell (1868)
• Anaglyph: Du Hauron (1891)
• Stereoscopic movie camera: Edison & Dickson (1891)
• 3D movie: L’arrivée du train (1903)
• Integral photography: Lippmann (1908)
• Lenticular: Hess (1915)
• Parallax barrier: Kanolt (1915)
• Hologram: Gabor (1948)
• Integram: de Montebello (1970)
• Electro-holography: Benton (1989)
• 3D movies: Star Wars (1977), Minority Report (2002), Superman Returns (2006), Avatar (2009)

Cues for depth perception of human (I)
• Physiological cues: accommodation, convergence, binocular parallax, motion
• Psychological cues: linear perspective, overlapping (occlusion), shading and shadow
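Among the physiological cues listed above, convergence is easy to quantify; the Akeley thesis likewise tabulates the relationship between vergence angle and fixation distance. A small sketch of that geometry (my own helper; the 65 mm interpupillary distance is a typical adult value, not a figure from either document):

```python
import math

def vergence_deg(ipd_m, fixation_dist_m):
    """Vergence angle (degrees) between the two lines of sight when
    both eyes fixate a point straight ahead at the given distance."""
    return math.degrees(2 * math.atan(ipd_m / (2 * fixation_dist_m)))
```

For a 65 mm IPD, fixating at 0.65 m demands roughly 5.7 degrees of convergence, while the angle falls toward zero as fixation distance goes to infinity — which is why vergence is only a useful distance cue at near range.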
  • Applications of Pixel Textures in Visualization and Realistic Image Synthesis
Applications of Pixel Textures in Visualization and Realistic Image Synthesis
Wolfgang Heidrich, Rüdiger Westermann, Hans-Peter Seidel, Thomas Ertl
Computer Graphics Group, University of Erlangen

Abstract
With fast 3D graphics becoming more and more available even on low end platforms, the focus in developing new graphics hardware is beginning to shift towards higher quality rendering and additional functionality instead of simply higher performance implementations of the traditional graphics pipeline. On this search for improved quality it is important to identify a powerful set of orthogonal features to be implemented in hardware, which can then be flexibly combined to form new algorithms. Pixel textures are an OpenGL extension by Silicon Graphics that fits into this category. In this paper, we demonstrate the benefits of this extension by presenting several different algorithms exploiting its functionality to achieve high quality, high performance solutions for a variety of different applications from scientific visualization and realistic image synthesis.

… which can be used for volume rendering, and the imaging subset, a set of extensions useful not only for image processing, have been added in this version of the specification. Bump mapping and procedural shaders are only two examples for features that are likely to be implemented at some point in the future. On this search for improved quality it is important to identify a powerful set of orthogonal building blocks to be implemented in hardware, which can then be flexibly combined to form new algorithms. We think that the pixel texture extension by Silicon Graphics [9, 12] is a building block that can be useful for many applications, especially when combined with the imaging subset. In this paper, we use pixel textures to implement four different algorithms for applications from visualization and realistic image synthesis: fast line integral convolution (Section 3), shadow mapping (Section 4), realistic fog models (Section 5), and finally en…
  • Information Display Magazine September/October V34 N5 2018
THE FUTURE LOOKS RADIANT — ENSURING QUALITY FOR THE NEXT GENERATION OF AUTOMOTIVE DISPLAYS. Radiant light & color measurement solutions replicate human visual perception to evaluate new technologies like head-up and free-form displays. Visit Radiant at Table #46, Vehicle Displays Detroit, Sept. 25-26, Livonia, Michigan.

Information Display — SOCIETY FOR INFORMATION DISPLAY (SID), SEPTEMBER/OCTOBER 2018, VOL. 34, NO. 5

ON THE COVER: Scenes from Display Week 2018 in Los Angeles include (center and then clockwise starting at upper right): the exhibit hall floor; AUO’s 8-in. microLED display (Photo: AUO); JDI’s cockpit demo, with dashboard and center-console displays (Photo: Karlheinz Blankenbach); the 2018 I-Zone; LG Display’s flexible OLED screen (Photo: Ken Werner); Women in Tech second annual conference, with moderator and panelists; the entrance to the conference center at Display Week 2018; foldable AMOLED e-Book (Photo: Visionox).

Contents
2 Editorial: Looking Back at Display Week and Summer, Looking Ahead to Fall — By Stephen Atwood
4 President’s Corner: Goals for a Sustainable Society — By Helge Seetzen
6 Industry News — By Jenny Donelan
8 Display Week Review: Best in Show Winners. The Society for Information Display honored four exhibiting companies with Best in Show awards at Display Week 2018 in Los Angeles: Ares Materials, AU Optronics, Tianma, and Visionox. — By Jenny Donelan
10 Display Week Review: Emissive Materials Generate Excitement at the Show. MicroLEDs created the most buzz at Display Week 2018, but quantum dots and OLEDs sparked a lot of interest too.
  • Rasterization Pipeline Aaron Lefohn - Intel / University of Washington Mike Houston – AMD / Stanford
A Trip Down the (2003) Rasterization Pipeline
Aaron Lefohn – Intel / University of Washington
Mike Houston – AMD / Stanford
Winter 2011 – Beyond Programmable Shading

Acknowledgements
In addition to a little content by Aaron Lefohn and Mike Houston, this slide deck is based on slides from:
• Tomas Akenine-Möller (Lund University / Intel)
• Eric Demers (AMD)
• Kurt Akeley (Microsoft / Refocus Imaging) – CS248 Autumn Quarter 2007

This talk
• Overview of the real-time rendering pipeline available in ~2003, corresponding to the graphics APIs DirectX 9 and OpenGL 2.x
• To clarify: there are many rendering pipelines in existence – REYES, ray tracing, DirectX 11, … Today’s lecture is about the ~2003 GPU hardware rendering pipeline.

If you need a deeper refresher
• See Kurt Akeley’s CS248 from Stanford – http://www-graphics.stanford.edu/courses/cs248-07/schedule.php – this material should serve as a solid refresher
• For an excellent “quick” review of programmable shading in OpenCL, see Andrew Adams’ lecture at the above link
• GLSL tutorial – http://www.lighthouse3d.com/opengl/glsl/
• Direct3D 9 tutorials – http://www.directxtutorial.com/ and http://msdn.microsoft.com/en-us/library/bb944006(v=vs.85).aspx
• More references at the end of this deck

The General Rasterization Pipeline
• Rendering Problem Statement: rendering is the process of creating an image from a computer representation
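The heart of the rasterization stage these slides cover can be illustrated with the standard edge-function coverage test — a brute-force sketch of my own (real hardware uses hierarchical traversal and careful fill rules for shared edges, which this inclusive test ignores):

```python
def edge(a, b, p):
    """Signed edge function: positive when p lies to the left of a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def covers(tri, p):
    """True when sample point p is inside (or on an edge of) a
    counter-clockwise triangle tri."""
    a, b, c = tri
    return edge(a, b, p) >= 0 and edge(b, c, p) >= 0 and edge(c, a, p) >= 0

def rasterize(tri, width, height):
    """Brute-force rasterizer: test every pixel center in the frame."""
    return [(x, y) for y in range(height) for x in range(width)
            if covers(tri, (x + 0.5, y + 0.5))]
```

The same edge values, normalized, double as barycentric coordinates for interpolating vertex attributes — one reason this formulation maps so well to hardware.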
  • Kurt Akeley Interview
Kurt Akeley Interview
Interviewed by: Dag Spicer
Recorded: November 5, 2010
Mountain View, California
CHM Reference number: X5984.2011
© 2010 Computer History Museum

Dag Spicer: Okay. So today is Friday, November the 11th, 2010 in Mountain View, California at the Computer History Museum and we’re delighted today to have Kurt Akeley, one of the co-founders of Silicon Graphics--a world leader in computer graphic systems--with us today. Thank you for being with us, Kurt.

Kurt Akeley: You’re very welcome.

Spicer: I wanted to ask you what it was like at SGI [Silicon Graphics, Inc.] in the early days in terms of the work ethic and company culture.

Akeley: Yeah. Well, we … there was a strong work ethic. We all worked really hard, but I think, right from the beginning we also worked hard and played hard. And it was a very collegial environment. We were-- a lot of us weren’t very old-- I mean, Jim Clark, the real founder of the company was, I don’t know, 35 or something, I forget how old at that point. But the rest of us were quite a bit younger. A lot of us were just either graduate students or very recently graduate students. So young, not so much family, and so I think the company became our home as well as our place of work. And Jim, at least to me, [became] very much of a father figure. He was out to do something great and he was very ambitious about it but he was-- I think he really enjoyed building this little family and kind of bringing us with him where we were going.