A Shading Reuse Method for Efficient Micropolygon Ray Tracing

Qiming Hou    Kun Zhou
State Key Lab of CAD&CG, Zhejiang University

ACM Trans. Graph. 30, 6, Article 151 (December 2011). DOI: 10.1145/2024156.2024185

Abstract

We present a shading reuse method for micropolygon ray tracing. Unlike previous shading reuse methods, which require an explicit object-to-image space mapping for shading density estimation or shading accuracy, our method performs shading density control and actual shading reuse in different spaces, with uncorrelated criteria. Specifically, we generate the shading points by shooting a user-controlled number of shading rays from the image space, while the evaluated shading values are assigned to antialiasing samples through object-space nearest-neighbor searches. Shading samples are generated in separate layers corresponding to first-bounce ray paths to reduce spurious reuse across very different ray paths. This method eliminates the need for an explicit object-to-image space mapping, enabling the elegant handling of ray tracing effects such as reflection and refraction. The overhead of our shading reuse operations is minimized by a highly parallel implementation on the GPU. Compared to the state-of-the-art micropolygon ray tracing algorithm, our method reduces the required shading evaluations by an order of magnitude and achieves significant performance gains.

Figure 1: An animated scene rendered with motion blur and reflection. This scene contains 1.56M primitives and is rendered with 2 reflective bounces at 1920 × 1080 resolution and 11 × 11 supersampling. The total render time is 289 seconds. On average only 3.48 shading evaluations are performed per pixel and are reused for other samples.

Keywords: micropolygon, GPU, Reyes, ray tracing

Links: DL PDF

1 Introduction

Shading is typically the performance bottleneck in cinematic-quality rendering, which is often based on the Reyes architecture and uses micropolygons to represent high-order surfaces and highly detailed objects [Cook et al. 1987]. To reduce shading costs, state-of-the-art micropolygon renderers (e.g., Pixar's RenderMan) perform shading computation on micropolygon vertices and reuse the shading values to evaluate the color of each visibility sample (or antialiasing sample) when compositing the final image. Such a shading reuse strategy enables a shading rate significantly lower than the visibility sampling rate, which is vital for efficient high-quality rendering where extremely high supersampling of visibility is necessary, especially when rendering defocus and motion blur.

Existing shading reuse methods for micropolygon rendering are primarily designed for rasterization-based pipelines. Ray tracing effects such as reflection and refraction are typically treated as part of shading in such methods. Consequently, all reflected/refracted samples have to be shaded, incurring significant overhead. As ray tracing gains significance in modern high-quality rendering [Parker et al. 2010], this may become a major obstacle in future applications.

In this paper, we introduce a simple but effective method to reuse shading evaluations for efficient micropolygon ray tracing. Compared to the state-of-the-art micropolygon ray tracing algorithm, our method reduces the required shading evaluations by an order of magnitude and achieves significant performance gains.

1.1 Related Work

Extensive research has been done on micropolygon rendering and ray tracing.

Researchers have explored efficient parallel implementations of micropolygon rendering on GPUs [Wexler et al. 2005; Patney and Owens 2008; Zhou et al. 2009; Hou et al. 2010]. In particular, Hou et al. [2010] introduced a GPU-based micropolygon ray tracing algorithm. They demonstrated that for high-quality defocus and motion blur, ray tracing can greatly outperform rasterization methods. In their method, ray tracing is only used for visibility sampling, and shading is still performed on micropolygon vertices. Another branch of research seeks to accelerate micropolygon rendering using GPUs [Fisher et al. 2009; Fatahalian et al. 2009; Fatahalian et al. 2010; Ragan-Kelley et al. 2011; Burns et al. 2010]. The key difference between our work and theirs is that while they propose new GPU architecture designs that support real-time micropolygon rasterization, we aim to accelerate high-quality, off-line ray tracing using software approaches on current GPU hardware.
A majority of micropolygon rendering algorithms are designed to reuse the expensive shading computations across multiple visibility samples, assuming that shading is continuous and does not vary significantly across adjacent visibility samples. Existing shading reuse methods can be categorized into object-space methods [Cook et al. 1987; Burns et al. 2010] and image-space methods [Ragan-Kelley et al. 2011]. In [Cook et al. 1987], shading is performed on micropolygon vertices and reused within the same micropolygon. Burns et al. [2010] define a shading grid based on an a priori shading density estimation, while shading values are evaluated lazily after visibility is resolved and reused within the same shading grid cell. Decoupled sampling [Ragan-Kelley et al. 2011] implicitly partitions antialiasing samples into equivalence classes using shading-density-controlling hash functions. Shading is reused within the same equivalence class.

Stoll et al. [2006] introduced object-space shading reuse into a ray tracing pipeline. They use ray derivatives [Igehy 1999] to control shading computation rates. Specifically, they conservatively discretize the minimum width of the derivative beam cross section into a few predefined object-space tessellation grids. Shading is then computed for all tessellation grids hit by at least one ray. As noted in their paper, such an approach may result in considerable over-shading when there are highly anisotropic derivatives or significant over-tessellation. To avoid such reliance on ray derivative behaviors, our method controls shading reuse using nearest-neighbor searches, which are independent of tessellation and automatically adapt to anisotropy.

1.2 Challenge and Contribution

The main challenge in generalizing existing shading reuse methods to ray tracing is that ray tracing complicates the object-to-image space mapping. Such a mapping is inherently required for shading reuse, as the ideal shading evaluation density is defined in the image space while the shading continuity assumption is only valid in the object space. Object-space methods such as [Burns et al. 2010] reuse shading values based on object-space proximity and rely on the image-space polygon size to control the shading evaluation density. In a rasterization-based pipeline, this size can be computed directly by projecting polygons to the image space. However, ray tracing may introduce arbitrary distortion, and the image-space polygon size is no longer practical to compute. Image-space methods (e.g., [Ragan-Kelley et al. 2011]) reuse shading values based on image-space proximity and rely on a continuous object-to-image space mapping for shading accuracy. This assumption is trivially valid for direct rasterization. Defocus and motion blur effects can be handled by using the non-blurred image space for shading reuse.
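To give a concrete flavor of the image-space approach discussed above, here is a minimal sketch of hash-based shading reuse in the spirit of decoupled sampling. This is our illustration, not the published implementation: the quantization rule and all names are assumptions. Samples whose non-blurred image-space positions quantize to the same key on the same primitive form one equivalence class and share one shading evaluation.

    # Hypothetical sketch: shading memoized per equivalence class.
    shading_cache = {}

    def class_key(sample, shading_rate):
        # Quantize the non-blurred image-space position by the shading
        # rate; include the primitive id so reuse never crosses surfaces.
        return (sample.prim_id,
                int(sample.x // shading_rate),
                int(sample.y // shading_rate))

    def shade_with_reuse(sample, shading_rate, shade_fn):
        key = class_key(sample, shading_rate)
        if key not in shading_cache:
            shading_cache[key] = shade_fn(sample)  # first sample of its class
        return shading_cache[key]                  # all others reuse it

Coarsening shading_rate merges more samples into each class, which is how such hash functions control shading density.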
2 Shading Reuse Method

Our shading reuse method works as the top-level rendering loop of a ray tracer. For an image rendered at a resolution of n pixels with m× supersampling, Listing 1 shows the pseudocode of our method.

Listing 1: Pseudocode of our shading reuse method

    // Generate and shade seeding rays
    shading_rays = generateShadingRays(n * shading_rate)
    shading_samples = pathTraceAll(shading_rays)
    shade(shading_samples)
    // Trace antialiasing rays and reuse shading
    aa_rays = generateAntialiasRays(n * m)
    aa_samples = pathTraceAll(aa_rays)
    failed = new List
    for each s in aa_samples
        s.shading = shading_samples.findNearest(s)
        if s.shading == NOT_FOUND:
            failed.add(s)
    // Shade failure samples with image space reuse
    cache = new Hash
    for each s in failed
        if cache.canReuseShadingAt(s):
            s.shading = cache.fetch(s)
        else
            shade(s)
            cache.add(s)
    // Filter all samples to produce the final image
    final_image = downSampleAndFilter(aa_samples)

The method mainly consists of four steps. First, a certain number of shading rays are generated uniformly in the image space and traced to obtain the shading samples, which are then shaded. Second, antialiasing rays are generated and traced to obtain the antialiasing samples. For each antialiasing sample, the nearest reusable shading sample is located and its shading value is assigned to the antialiasing sample. Third, for those antialiasing samples that cannot find a reusable shading sample, shading is evaluated directly, with an image-space cache used to share evaluations among such failure samples. Finally, all antialiasing samples are filtered and downsampled to produce the final image.
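To make the reuse loop concrete, below is a minimal single-threaded sketch of the second and third steps, following our reading of Listing 1 rather than the authors' parallel GPU implementation. The per-layer separation of first-bounce ray paths is omitted, the reuse radius and cache cell size are assumed user parameters, and a SciPy k-d tree stands in for the paper's object-space nearest-neighbor search; all helper names are ours.

    import numpy as np
    from scipy.spatial import cKDTree

    def reuse_shading(shading_pos, shading_vals, aa_samples,
                      reuse_radius, cache_cell, shade_fn):
        """shading_pos: (k, 3) object-space hit points of shaded samples.
        shading_vals: list of k shading values.
        aa_samples: objects with .hit (3D hit point) and .xy (image position).
        """
        tree = cKDTree(shading_pos)
        cache = {}   # image-space cache for failure samples
        out = []
        for s in aa_samples:
            # Object-space nearest-neighbor search; queries farther than
            # reuse_radius report index == len(shading_pos), i.e. a failure.
            dist, idx = tree.query(s.hit, k=1,
                                   distance_upper_bound=reuse_radius)
            if idx < len(shading_pos):
                out.append(shading_vals[idx])
                continue
            # Failure sample: reuse within a quantized image-space cell
            # (one assumed realization of canReuseShadingAt), shading
            # directly only on a cache miss.
            key = (int(s.xy[0] // cache_cell), int(s.xy[1] // cache_cell))
            if key not in cache:
                cache[key] = shade_fn(s)
            out.append(cache[key])
        return out

The radius bound keeps reuse local in object space, mirroring the NOT_FOUND branch of the pseudocode; the failure cache bounds the worst case when no nearby shading sample exists.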
Recommended publications
  • RenderMan: An Advanced Path Tracing Architecture for Movie Rendering
    RenderMan: An Advanced Path Tracing Architecture for Movie Rendering. Per Christensen, Julian Fong, Jonathan Shade, Wayne Wooten, Brenden Schubert, Andrew Kensler, Stephen Friedman, Charlie Kilpatrick, Cliff Ramshaw, Marc Bannister, Brenton Rayner, Jonathan Brouillat, and Max Liani, Pixar Animation Studios.
    Fig. 1. Path-traced images rendered with RenderMan: Dory and Hank from Finding Dory (© 2016 Disney•Pixar). McQueen's crash in Cars 3 (© 2017 Disney•Pixar). Shere Khan from Disney's The Jungle Book (© 2016 Disney). A destroyer and the Death Star from Lucasfilm's Rogue One: A Star Wars Story (© & ™ 2016 Lucasfilm Ltd. All rights reserved. Used under authorization.)
    Pixar's RenderMan renderer is used to render all of Pixar's films, and by many film studios to render visual effects for live-action movies. RenderMan started as a scanline renderer based on the Reyes algorithm, and was extended over the years with ray tracing and several global illumination algorithms. This paper describes the modern version of RenderMan, a new architecture for an extensible and programmable path tracer with many features that are essential to handle the fiercely complex scenes in movie production. Users can write their own materials using a bxdf interface, and their own light transport algorithms using an integrator interface – or they can use the materials and light transport algorithms provided with RenderMan.
    1 INTRODUCTION. Pixar's movies and short films are all rendered with RenderMan. The first computer-generated (CG) animated feature film, Toy Story, was rendered with an early version of RenderMan in 1995. The most recent Pixar movies – Finding Dory, Cars 3, and Coco – were rendered using RenderMan's modern path tracing architecture. The two left images in Figure 1 show high-quality rendering of two challenging CG movie scenes with many bounces of specular reflections and
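    The bxdf and integrator interfaces mentioned in this excerpt are extension points for user code. As a loose illustration of that plugin pattern only (this is not RenderMan's actual API; every name here is invented for exposition), such interfaces might be structured as:

        from abc import ABC, abstractmethod

        class Bxdf(ABC):
            """Material plugin: describes how a surface scatters light."""
            @abstractmethod
            def evaluate(self, wi, wo):
                """Return reflectance for an incoming/outgoing direction pair."""

            @abstractmethod
            def sample(self, wo, u1, u2):
                """Importance-sample an incoming direction from randoms (u1, u2)."""

        class Integrator(ABC):
            """Light transport plugin: turns rays into radiance estimates."""
            @abstractmethod
            def trace(self, ray, scene):
                """Return a radiance estimate for a camera or bounce ray."""

        # A renderer accepting any Bxdf/Integrator implementation lets users
        # swap in their own materials and transport algorithms.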
  • Computer Graphics on a Stream Architecture
    COMPUTER GRAPHICS ON A STREAM ARCHITECTURE. A dissertation submitted to the Department of Electrical Engineering and the Committee on Graduate Studies of Stanford University in partial fulfillment of the requirements for the degree of Doctor of Philosophy. John Douglas Owens, November 2002. © Copyright by John Douglas Owens 2003. All Rights Reserved.
    "I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy." Signed: William J. Dally (Principal Adviser); Patrick Hanrahan; Matthew Regan. Approved for the University Committee on Graduate Studies.
    Abstract: Media applications, such as signal processing, image and video processing, and graphics, are an increasing and important part of the way people use computers today. However, modern microprocessors cannot provide the performance necessary to meet the demands of these media applications, and special purpose processors lack the flexibility and programmability necessary to address the wide variety of media applications. For the processors of the future, we must design and implement architectures and programming models that meet the performance and flexibility requirements of these applications. Streams are a powerful programming abstraction suitable for efficiently implementing complex and high-performance media applications.
  • The RenderMan Interface Specification, V3.1
    The RenderMan Interface, Version 3.1, September 1989 (with typographical corrections through May 1995). Copyright 1987, 1988, 1989, 1995 Pixar. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of Pixar. The information in this publication is furnished for informational use only, is subject to change without notice, and should not be construed as a commitment by Pixar. Pixar assumes no responsibility or liability for any errors or inaccuracies that may appear in this publication. RenderMan is a registered trademark of Pixar. Pixar, 1001 W. Cutting Blvd., Richmond, CA 94804. (510) 236-4000, Fax: (510) 236-0388.
    Table of contents (excerpt): List of Illustrations; List of Tables; Preface; Part I, The RenderMan Interface; Section 1, Introduction; Features and Capabilities; Required features; …
  • RenderMan for Artists #01
    RenderMan For Artists #01: RenderMan Architecture. Wanho Choi (wanochoi.com).
    The Road Ahead: "Learning RenderMan is not easy or quick. However, it is not rocket science either." (Rudy Cortes)
    We will explore: The RenderMan Shading Language Guide (Rudy Cortes and Saty Raghavachary); Rendering for Beginners: Image Synthesis Using RenderMan (Saty Raghavachary); Advanced RenderMan: Creating CGI for Motion Pictures (Anthony A. Apodaca and Larry Gritz); Essential RenderMan (Ian Stephenson); The RenderMan Companion: A Programmer's Guide to Realistic Computer Graphics (Steve Upstill); SIGGRAPH course notes (1992, 1995, 2000, 2001, 2002, 2003, 2006); PDF files from the web; etc.
    Rendering: the process of determining the color and opacity value of each pixel, turning a 3D scene (objects, lights, camera) into a 2D image. Rendering algorithms include scanline, ray tracing, radiosity, etc. Commercial renderers include RenderMan, Mental Ray, V-Ray, POV-Ray, FurryBall, etc.
    Ray Tracing Algorithm: a technique for generating an image by tracing the path of light through pixels in an image plane.
    RenderMan: a standard technical specification created by Pixar for 3D scene description (the RiSpec, RenderMan Interface Specification). There are some RenderMan-compliant renderers: PRMan, AIR, Pixie, 3Delight, Aqsis, RenderDotC, BMRT, Mantra, etc. Each must meet all of the standard requirements laid out …
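    As a minimal concrete instance of the ray tracing definition in this excerpt (our sketch, not code from the slides; the camera setup and the single-sphere scene are assumptions), tracing one primary ray per pixel might look like:

        import numpy as np

        def hit_sphere(o, d, center, radius):
            # Solve |o + t*d - center|^2 = radius^2; d is unit length, so a == 1.
            oc = o - center
            b = 2.0 * np.dot(oc, d)
            c = np.dot(oc, oc) - radius * radius
            return b * b - 4.0 * c >= 0.0   # real roots exist -> the ray hits

        def render(width=64, height=64):
            img = np.zeros((height, width))
            eye = np.array([0.0, 0.0, 3.0])          # assumed camera position
            for y in range(height):
                for x in range(width):
                    # Map the pixel to a point on an image plane one unit
                    # in front of the eye, then shoot a ray through it.
                    px = 2.0 * (x + 0.5) / width - 1.0
                    py = 1.0 - 2.0 * (y + 0.5) / height
                    d = np.array([px, py, -1.0])
                    d /= np.linalg.norm(d)
                    img[y, x] = 1.0 if hit_sphere(eye, d, np.zeros(3), 1.0) else 0.0
            return img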
  • Motion Blur Rendering: State of the Art
    DOI: 10.1111/j.1467-8659.2010.01840.x. Computer Graphics Forum, Volume 30 (2011), number 1, pp. 3–26.
    Motion Blur Rendering: State of the Art. Fernando Navarro (Lionhead Studios, Microsoft Games Studios), Francisco J. Serón and Diego Gutierrez (Universidad de Zaragoza, Instituto de Investigación en Ingeniería de Aragón (I3A), Spain).
    Abstract: Motion blur is a fundamental cue in the perception of objects in motion. This phenomenon manifests as a visible trail along the trajectory of the object and is the result of the combination of relative motion and light integration taking place in film and electronic cameras. In this work, we analyse the mechanisms that produce motion blur in recording devices and the methods that can simulate it in computer generated images. Light integration over time is one of the most expensive processes to simulate in high-quality renders; as such, we make an in-depth review of the existing algorithms and we categorize them in the context of a formal model that highlights their differences, strengths and limitations. We finalize this report proposing a number of alternative classifications that will help the reader identify the best technique for a particular scenario.
    Keywords: motion blur, temporal antialiasing, sampling and reconstruction, rendering, shading, visibility, analytic methods, geometric substitution, Monte Carlo sampling, postprocessing, hybrid methods. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Picture/Image Generation, Antialiasing.
    1. Introduction: Motion blur for synthetic images has been an active area of research from the early 1980s. Unlike recorded footage that automatically integrates motion blur, rendering algorithms need to explicitly simulate it. Motion blur is an effect that manifests as a visible streak generated by the movement of an object in front of a record…
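    The "light integration over time" this survey describes can be pictured with a tiny Monte Carlo estimator: a pixel's value is the average of the scene radiance sampled at random times across the shutter interval, which is what produces the visible trail for moving objects. A minimal sketch, assuming a renderer callback radiance_at(x, y, t) exists:

        import random

        def motion_blurred_pixel(x, y, radiance_at, shutter_open,
                                 shutter_close, num_samples=16):
            total = 0.0
            for _ in range(num_samples):
                # Stochastic time sample within the shutter interval.
                t = random.uniform(shutter_open, shutter_close)
                total += radiance_at(x, y, t)
            return total / num_samples   # average = integral estimate

    More samples reduce noise at linear cost, which is why the survey calls this one of the most expensive processes in high-quality rendering.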
  • Micropolygon Rendering on the GPU
    Micropolygon Rendering on the GPU. Master's thesis (Diplomarbeit) submitted in partial fulfillment of the requirements for the degree of Diplom-Ingenieur in Visual Computing by Thomas Weber (registration number 0526341) to the Faculty of Informatics at the Vienna University of Technology. Advisor: Associate Prof. Dipl.-Ing. Dipl.-Ing. Dr.techn. Michael Wimmer. Assistance: Associate Prof. John D. Owens. Vienna, 4.12.2014.
    Declaration of authorship (translated from the German): I hereby declare that I have written this thesis independently, that I have fully cited all sources and aids used, and that I have clearly marked all passages of the work (including tables, maps and figures) taken from other works or from the Internet, whether verbatim or in substance, with a reference to the source.
    Acknowledgements: I'd like to thank Anjul Patney, Stanley Tzeng, Julian Fong, and Tim Foley for their valuable input. Another "thank you" goes to Nuwan Jayasena of AMD for supplying me with testing hardware and giving support on driver issues.
  • Design and Applications of a Real-Time Shading System (Master's Thesis)
    TARTU UNIVERSITY, Faculty of Mathematics, Institute of Computer Science. Design and Applications of a Real-Time Shading System. Master's Thesis. Mark Tehver. Supervisor: Eero Vainikko. Tartu, May 2004.
    Contents (excerpt): 1 Introduction (Prolog; Overview of the Thesis). 2 OpenGL Pipeline and Shading Extensions (Overview of OpenGL; OpenGL Pipeline; Shading Extensions: standard transformation and lighting model, issues with the standard lighting model; Vertex Programs: parallelism; ARB Vertex Program Extension: registers, texture coordinates, instructions, operand modifiers, resource limits, other vertex programming extensions; Fragment Programs; ARB Fragment Program Extension: registers, instructions, dependent texturing, operand modifiers and limits, other fragment programming extensions). 3 High-Level Shading Languages (Overview of Illumination Models; Bidirectional Reflectance Distribution Function; Shading Languages: shade trees, Pixel Stream Editor; RenderMan: types, built-in functions, execution environment; Real-Time High-Level Shading Languages …)
  • Ray Tracing for the Movie 'Cars'
    Ray Tracing for the Movie 'Cars'. Per H. Christensen, Julian Fong, David M. Laur, Dana Batali. Pixar Animation Studios.
    ABSTRACT: This paper describes how we extended Pixar's RenderMan renderer with ray tracing abilities. In order to ray trace highly complex scenes we use multiresolution geometry and texture caches, and use ray differentials to determine the appropriate resolution. With this method we are able to efficiently ray trace scenes with much more geometry and texture data than there is main memory. Movie-quality rendering of scenes of such complexity had only previously been possible with pure scanline rendering algorithms. Adding ray tracing to the renderer enables many additional effects such as accurate reflections, detailed shadows, and ambient occlusion. The ray tracing functionality has been used in many recent movies, including Pixar's latest movie 'Cars'.
    From the introduction: …the texture cache keeps recently accessed texture tiles ready for fast access. This combination of ray differentials and caching makes ray tracing of very complex scenes feasible. This paper first gives a more detailed motivation for the use of ray tracing in 'Cars', and lists the harsh rendering requirements in the movie industry. It then gives an overview of how the REYES algorithm deals with complex scenes and goes on to explain our work on efficient ray tracing of equally complex scenes. An explanation of our hybrid rendering approach, combining REYES with ray tracing, follows. Finally we measure the efficiency of our method on a test scene, and present a few production details from the use of ray tracing for 'Cars'. Please refer to Christensen et al. [3] for an overview of previous…
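    The excerpt's key mechanism is using ray differentials to pick an appropriate resolution from multiresolution geometry and texture caches. A hedged sketch of that selection rule follows; this is our simplification for intuition, not the paper's actual criterion or data layout. The footprint estimate (how wide the ray's differential beam is at the hit, in texture/surface units) is assumed to be given.

        import math

        def cache_level(footprint_width, base_texel_size, num_levels):
            # Choose the level whose texel (or geometry cell) size roughly
            # matches the ray's footprint: wide beams read coarse data,
            # sharp camera rays read the finest level.
            if footprint_width <= base_texel_size:
                return 0                                   # finest resolution
            level = int(math.log2(footprint_width / base_texel_size))
            return min(level, num_levels - 1)              # clamp to coarsest

    Matching data resolution to the beam footprint is what lets incoherent reflection rays touch only coarse, cache-friendly tiles instead of full-detail geometry and textures.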
  • Aqsis Documentation Release 1.6
    Aqsis Documentation, Release 1.6. Paul Gregory. May 12, 2010.
    Contents: 1 Welcome to Aqsis (Features; What's New; Legal). 2 The Aqsis Tools (Getting Started; Using the Aqsis Tool Suite; Aqsis and the Ri Standard; Further Reading). 3 Aqsis Programmers Guide (Building from Source; Using the RenderMan Interface C API; The Display Driver API; Creating DSO Shadeops; Creating Procedural Plugins; Texture Library Reference). 4 RenderMan Tutorials & Examples (Tutorials; Example Content; External Tools).
    Chapter One, Welcome to Aqsis: Aqsis is a high-quality, open-source renderer that adheres to the RenderMan™ standard. Aqsis provides a suite of tools that together enable the generation of production quality images from 3D scene data. Based on the Reyes rendering architecture, …
  • Micropolygon Ray Tracing with Defocus and Motion Blur
    Micropolygon Ray Tracing With Defocus and Motion Blur. Qiming Hou (Tsinghua University), Hao Qin (State Key Lab of CAD&CG, Zhejiang University), Wenyao Li (State Key Lab of CAD&CG, Zhejiang University), Baining Guo (Tsinghua University and Microsoft Research Asia), Kun Zhou (State Key Lab of CAD&CG, Zhejiang University).
    Figure 1: (a) Perfect focus. (b) Motion blur + defocus. A car rendered with defocus, motion blur, mirror reflection and ambient occlusion at 1280 × 720 resolution with 23 × 23 supersampling. The scene is tessellated into 48.9M micropolygons (i.e., 53.1 micropolygons per pixel). The blurred image is rendered in 4 minutes on an NVIDIA GTX 285 GPU. The image rendered in perfect focus takes 2 minutes and is provided to help the reader assess the defocus and motion blur effects.
    Abstract: We present a micropolygon ray tracing algorithm that is capable of efficiently rendering high quality defocus and motion blur effects. A key component of our algorithm is a BVH (bounding volume hierarchy) based on 4D hyper-trapezoids that project into 3D OBBs (oriented bounding boxes) in the spatial dimensions. This acceleration structure is able to provide tight bounding volumes for scene geometries, and is thus efficient in pruning intersection tests during ray traversal.
    1 Introduction: Current cinematic-quality rendering systems are based on the Reyes architecture [Cook et al. 1987], which uses micropolygons to represent high order surfaces and highly detailed objects. In a Reyes pipeline, all input geometric primitives are first tessellated into micropolygons (i.e., quads less than one pixel in size). Shading is then calculated on micropolygon vertices. In the subsequent sampling stage, micropolygons are rasterized into fragments, which are …
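    One idea behind the time-dependent bounding volumes this abstract describes can be sketched as follows: store a node's bounds at shutter open and close, interpolate them at the ray's time, and run a standard slab test. This is a deliberate simplification; the paper's actual structure uses 4D hyper-trapezoids projecting to oriented bounding boxes, whereas axis-aligned boxes are assumed here for brevity.

        import numpy as np

        def hit_moving_aabb(ray_o, ray_inv_d, t_ray, lo0, hi0, lo1, hi1):
            """ray_o: ray origin; ray_inv_d: 1/direction (3-vectors).
            t_ray in [0, 1]: the ray's stochastic shutter time.
            (lo0, hi0), (lo1, hi1): node bounds at shutter open/close."""
            # Bounds at the ray's time, by linear interpolation.
            lo = (1.0 - t_ray) * lo0 + t_ray * lo1
            hi = (1.0 - t_ray) * hi0 + t_ray * hi1
            # Standard slab test against the interpolated box.
            t0 = (lo - ray_o) * ray_inv_d
            t1 = (hi - ray_o) * ray_inv_d
            tmin = np.minimum(t0, t1).max()
            tmax = np.maximum(t0, t1).min()
            return tmax >= max(tmin, 0.0)

    Interpolating bounds per ray keeps the volumes tight for motion-blurred geometry, instead of bounding the entire swept path with one loose box.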
  • DirectX 11 Reyes Rendering
    DirectX 11 Reyes Rendering. Lihan Bin (University of Florida), Vineet Goel (AMD/ATI), Jorg Peters (University of Florida).
    ABSTRACT: We map reyes-rendering to the GPU by leveraging new features of modern GPUs exposed by the Microsoft DirectX11 API. That is, we show how to accelerate reyes-quality rendering for realistic real-time graphics. In detail, we discuss the implementation of the split-and-dice phase via the Geometry Shader, including bounds on displacement mapping; micropolygon rendering without standard rasterization; and the incorporation of motion blur and camera defocus as one pass rather than a multi-pass. Our implementation achieves interactive rendering rates on the Radeon 5000 series.
    Keywords: reyes, motion blur, camera defocus, DirectX11, real-time, rasterization, displacement.
    Figure 1: The three types of passes in our framework: the split stage (consisting of N adaptive refinement passes), one pre-rendering pass and one final composition pass.
    1 INTRODUCTION: The reyes (render everything you ever saw) architecture was designed by Pixar [CCC87] in 1987 to provide photo-realistic rendering of complex animated scenes. Pixar's PhotoRealistic RenderMan is an Academy Award-winning offline renderer that implements reyes and is widely used in the film industry. A main feature of the reyes architecture is its adaptive tessellation and micropolygon rendering of higher-order (not linear) surfaces such as Bézier patches, NURBS patches and Catmull-Clark subdivision surfaces. (A micropolygon is a polygon that is expected to project to no more than 1 pixel.) The contribution of our paper is to judiciously choose … Section 3 shows how our implementation takes specific advantage of the DirectX11 pipeline to obtain real-time performance.
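    The split-and-dice phase named in this abstract is the classic Reyes front end: recursively split a patch while its projected screen extent is too large, then dice it into a grid of micropolygons. A minimal CPU sketch of that control flow (our illustration under assumed helpers on a patch object; the paper itself runs this in the Geometry Shader with displacement-aware bounds):

        MAX_DICE = 16  # assumed dice-grid resolution limit per patch

        def split_and_dice(patch, pixels_per_micropolygon=1.0):
            # screen_size(): projected extent in pixels (assumed helper).
            size = patch.screen_size()
            if size > MAX_DICE * pixels_per_micropolygon:
                # Too big to dice at the target rate: subdivide and recurse.
                for child in patch.split():
                    yield from split_and_dice(child, pixels_per_micropolygon)
            else:
                # Dice into a grid of roughly pixel-sized micropolygons.
                n = max(1, int(size / pixels_per_micropolygon))
                yield patch.dice(n, n)

    Capping the dice rate per patch keeps grids uniformly small, which is what makes the stage map well to a bounded-output pipeline stage like the Geometry Shader.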