The Path to Path-Traced Movies
16 pages, PDF, 1,020 KB
Recommended publications
- Physically Based Rendering: From Theory to Implementation (PDF, EPUB, eBook)
Matt Pharr, Greg Humphreys, Wenzel Jakob | 1266 pages | 18 Oct 2016 | Elsevier Science & Technology | ISBN 9780128006450 | English | San Francisco, United States

Errata notes: The text should make clearer that we are making use of the tristimulus theory of color perception in that step. Also, in the code in question, the negation of 'd' should be removed, and in the places where 'd' is added in the computations of tx and ty, subtraction should be used instead.

From the authors' notes: I wrote a technical perspective, "The Ray-Tracing Engine That Could", that introduced the article and helped frame the work's achievements for a non-graphics audience. I recently had a great time teaching two installments of CS348b, the graduate-level rendering course at Stanford. Wenzel is also the lead developer of the Mitsuba renderer, a research-oriented rendering system.

Given its unconventional preparation style, this textbook stands out for its descriptions of the tradeoffs involved in developing a complete working renderer. Physically Based Rendering, Second Edition describes both the mathematical theory behind a modern photorealistic rendering system and its practical implementation.
- Domain-Specific Programming Systems
Lecture 22: Domain-Specific Programming Systems — Parallel Computer Architecture and Programming, CMU 15-418/15-618, Spring 2020. Slide acknowledgments: Pat Hanrahan, Zach DeVito (Stanford University), Jonathan Ragan-Kelley (MIT, Berkeley).

Course themes: designing computer systems that scale (running faster given more resources) and designing computer systems that are efficient (running faster under constraints on resources). Techniques discussed: exploiting parallelism in applications, exploiting locality in applications, and leveraging hardware specialization (earlier lecture).

Claim: most software uses modern hardware resources inefficiently. Consider a piece of sequential C code and call its performance our "baseline performance". Well-written sequential C code: ~5-10x faster. An assembly language program: maybe another small constant factor faster. Java, Python, PHP, etc.: ?? (Credit: Pat Hanrahan)

[Bar chart: slowdown relative to C (single core, GCC -O3, no manual vector optimizations) for Java, Scala, C# (Mono), Haskell, Go, JavaScript (V8), Lua, PHP, Python 3, and Ruby (JRuby) across four benchmarks — NBody, Mandelbrot, Tree Alloc/Dealloc, and power method (compute eigenvalue); some language/benchmark pairs have no data. Data from The Computer Language Benchmarks Game: http://shootout.alioth.debian.org]

Even good C code is inefficient. Recall Assignment 1's Mandelbrot program, executed on a high-end laptop: quad-core Intel Core i7 with AVX instructions. A single core with AVX vector instructions gives a 5.8x speedup over the C implementation; multi-core plus hyper-threading plus AVX gives a 21.7x speedup. Conclusion: a basic C implementation compiled with -O3 leaves a lot of performance on the table (see the sketch below).

Making efficient use of modern machines is challenging (proof by assignments 2, 3, and 4). In our assignments, you only programmed homogeneous parallel computers.
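To make the lecture's point concrete, here is a minimal sketch (my own, not the course's assignment code) of a Mandelbrot kernel with rows parallelized via OpenMP; vectorizing the inner loop with AVX intrinsics would stack a further speedup, per the 5.8x/21.7x numbers above.

```cpp
// A minimal sketch (not the course's assignment code): a portable Mandelbrot
// kernel with rows parallelized across cores via OpenMP.
// Compile with e.g.: g++ -O3 -fopenmp mandel.cpp
#include <complex>
#include <cstdio>
#include <vector>

std::vector<int> mandelbrot(int width, int height, int maxIter) {
    std::vector<int> iters(width * height);
    #pragma omp parallel for schedule(dynamic)   // rows are independent work
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            std::complex<float> c(-2.0f + 3.0f * x / width,
                                  -1.0f + 2.0f * y / height);
            std::complex<float> z = 0.0f;
            int i = 0;
            while (i < maxIter && std::norm(z) < 4.0f) {  // |z|^2 < 4
                z = z * z + c;
                ++i;
            }
            iters[y * width + x] = i;
        }
    }
    return iters;
}

int main() {
    auto img = mandelbrot(1200, 800, 256);
    long sum = 0;
    for (int v : img) sum += v;   // checksum so the work isn't optimized away
    std::printf("checksum: %ld\n", sum);
}
```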
- Rendering Complex Scenes with Memory-Coherent Ray Tracing
To appear in Proceedings of SIGGRAPH 1997. Matt Pharr, Craig Kolb, Reid Gershbein, Pat Hanrahan — Computer Science Department, Stanford University.

Abstract: Simulating realistic lighting and rendering complex scenes are usually considered separate problems with incompatible solutions. Accurate lighting calculations are typically performed using ray tracing algorithms, which require that the entire scene database reside in memory to perform well. Conversely, most systems capable of rendering complex scenes use scan-conversion algorithms that access memory coherently, but are unable to incorporate sophisticated illumination. We have developed algorithms that use caching and lazy creation of texture and geometry to manage scene complexity. To improve cache performance, we increase locality of reference by dynamically reordering the rendering computation based on the contents of the cache. We have used these algorithms to compute images of scenes containing millions of primitives, while storing ten percent of the scene description in memory.

[From the introduction:] … both. For example, Z-buffer rendering algorithms operate on a single primitive at a time, which makes it possible to build rendering hardware that does not need access to the entire scene. More recently, the Talisman architecture was designed to exploit frame-to-frame coherence as a means of accelerating rendering and reducing memory bandwidth requirements [21]. Kajiya has written a whitepaper that proposes an architecture for Monte Carlo ray tracing systems that is designed to improve coherence across all levels of the memory hierarchy, from processor caches to disk storage [13]. The rendering computation is decomposed into parts — ray-object intersections, shading calculations, and calculating spawned rays — that are performed independently. The coherence of memory references is increased through careful management of the interaction of the computation and the memory that it references, reducing overall running time and facilitating …
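The paper's core scheduling idea — trace rays against whatever geometry is already cached, and batch the rest — can be sketched in a few lines. This is my own illustration under simplifying assumptions (single thread, whole-region granularity, crude eviction), not the paper's system:

```cpp
// Sketch of cache-driven reordering: rays queue by scene region, and the
// scheduler prefers regions whose geometry is already resident, so each
// expensive geometry load is amortized over many queued rays.
#include <deque>
#include <map>

struct Ray { /* origin, direction, ... */ };
struct Geometry { /* lazily created primitives for one region */ };

class ReorderingTracer {
    std::map<int, std::deque<Ray>> queued;  // regionId -> rays waiting to trace
    std::map<int, Geometry> resident;       // geometry currently cached
    std::size_t maxResident = 64;           // cache capacity, in regions

    Geometry load(int region) { return Geometry{}; }   // disk fetch/tessellation
    void trace(const Geometry &, Ray &) { /* intersection tests */ }

public:
    void enqueue(int region, const Ray &r) { queued[region].push_back(r); }

    void run() {
        while (!queued.empty()) {
            // Prefer a resident region; otherwise the one with the most rays.
            int best = queued.begin()->first;
            std::size_t bestScore = 0;
            for (const auto &[region, rays] : queued) {
                std::size_t score = rays.size() +
                    (resident.count(region) ? 1u << 20 : 0);  // resident bonus
                if (score > bestScore) { bestScore = score; best = region; }
            }
            if (!resident.count(best)) {
                if (resident.size() >= maxResident)
                    resident.erase(resident.begin());         // crude eviction
                resident.emplace(best, load(best));
            }
            for (Ray &r : queued[best]) trace(resident[best], r);
            queued.erase(best);  // spawned rays would be re-enqueued by trace()
        }
    }
};
```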
Course Overview
Course overview: Digital Image Synthesis — Yung-Yu Chuang, with slides by Mario Costa Sousa, Pat Hanrahan, and Ravi Ramamoorthi.

Logistics: Meeting time: 2:20pm-5:20pm, Thursday. Classroom: CSIE Room 111. Instructor: Yung-Yu Chuang ([email protected]). TA: 陳育聖. Webpage: http://www.csie.ntu.edu.tw/~cyy/rendering (id/password required). Mailing list: [email protected]; please subscribe via https://cmlmail.csie.ntu.edu.tw/mailman/listinfo/rendering/

Prerequisites: C++ programming experience is required; basic knowledge of algorithms and data structures is essential; knowledge of linear algebra, probability, calculus, and numerical methods is a plus; though not required, background knowledge of computer graphics is recommended.

Requirements (subject to change): 3 programming assignments (60%); class participation (5%); final project (35%).

Textbook: Physically Based Rendering: From Theory to Implementation, 2nd ed., by Matt Pharr and Greg Humphreys. The authors have a lot of experience with ray tracing; the complete (educational) code makes the material concrete; the book has been used in many courses and papers; it implements some advanced or difficult-to-implement methods (subdivision surfaces, Metropolis sampling, BSSRDF, PRT); a 3rd edition is coming next year.

pbrt won an Oscar in 2014: "To Matt Pharr, Greg Humphreys and Pat Hanrahan for their formalization and reference implementation of the concepts behind physically based rendering, as shared in their book Physically Based Rendering. Physically based rendering has transformed computer graphics lighting by more accurately simulating materials and lights, allowing digital artists to focus on cinematography rather than the intricacies of rendering. First published in 2004, Physically Based Rendering is both a textbook and a complete source-code implementation that has provided a widely adopted practical roadmap for most physically based shading and lighting systems used in film production."
- An Advanced Path Tracing Architecture for Movie Rendering
RenderMan: An Advanced Path Tracing Architecture for Movie Rendering. Per Christensen, Julian Fong, Jonathan Shade, Wayne Wooten, Brenden Schubert, Andrew Kensler, Stephen Friedman, Charlie Kilpatrick, Cliff Ramshaw, Marc Bannister, Brenton Rayner, Jonathan Brouillat, and Max Liani, Pixar Animation Studios.

Fig. 1. Path-traced images rendered with RenderMan: Dory and Hank from Finding Dory (© 2016 Disney•Pixar); McQueen's crash in Cars 3 (© 2017 Disney•Pixar); Shere Khan from Disney's The Jungle Book (© 2016 Disney); a destroyer and the Death Star from Lucasfilm's Rogue One: A Star Wars Story (© & ™ 2016 Lucasfilm Ltd. All rights reserved. Used under authorization.)

Pixar's RenderMan renderer is used to render all of Pixar's films, and by many film studios to render visual effects for live-action movies. RenderMan started as a scanline renderer based on the Reyes algorithm, and was extended over the years with ray tracing and several global illumination algorithms. This paper describes the modern version of RenderMan, a new architecture for an extensible and programmable path tracer with many features that are essential to handle the fiercely complex scenes in movie production. Users can write their own materials using a bxdf interface, and their own light transport algorithms using an integrator interface — or they can use the materials and light transport algorithms provided with RenderMan.

1 INTRODUCTION. Pixar's movies and short films are all rendered with RenderMan. The first computer-generated (CG) animated feature film, Toy Story, was rendered with an early version of RenderMan in 1995. The most recent Pixar movies — Finding Dory, Cars 3, and Coco — were rendered using RenderMan's modern path tracing architecture. The two left images in Figure 1 show high-quality rendering of two challenging CG movie scenes with many bounces of specular reflections and …
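The extensibility claim above centers on two plugin interfaces. The sketch below is a guess at their shape, for illustration only — hypothetical names and signatures, not RenderMan's actual plugin API:

```cpp
// Illustrative only: the architectural idea that materials (bxdfs) and light
// transport algorithms (integrators) are independent plugin interfaces.
struct Vec3 { float x, y, z; };
struct ShadingContext { Vec3 P, N, wo; };  // hit point, normal, outgoing dir

// A material: evaluates and samples scattering at a shading point.
class Bxdf {
public:
    virtual ~Bxdf() = default;
    virtual Vec3 Evaluate(const ShadingContext &ctx, const Vec3 &wi) const = 0;
    virtual Vec3 Sample(const ShadingContext &ctx, Vec3 *wi, float *pdf) const = 0;
};

// A light transport algorithm: decides which rays to trace and how radiance
// is accumulated. It sees materials only through the Bxdf interface, which is
// what lets users swap either side independently.
class Integrator {
public:
    virtual ~Integrator() = default;
    virtual Vec3 Radiance(const ShadingContext &ctx, const Bxdf &bxdf) const = 0;
};
```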
- The Mission of IUPUI Is to Provide for Its Constituents Excellence in Teaching and Learning; Research, Scholarship, and Creative Activity; and Civic Engagement
INFO H517 Visualization Design, Analysis, and Evaluation. Department of Human-Centered Computing, Indiana University School of Informatics and Computing, Indianapolis. Fall 2016.

Section No.: 35557. Credit Hours: 3. Time: Wednesdays 12:00-2:40 PM. Location: IT 257, Informatics & Communications Technology Complex, 535 West Michigan Street, Indianapolis, IN 46202 [map]. First Class: August 24, 2016. Website: http://vis.ninja/teaching/h590/

Instructor: Khairi Reda, Ph.D. in Computer Science (University of Illinois, Chicago), Assistant Professor, Human-Centered Computing. Office Hours: Mondays, 1:00-2:30 PM, or by appointment. Office: IT 581, Informatics & Communications Technology Complex, 535 West Michigan Street, Indianapolis, IN 46202 [map]. Phone: (317) 274-5788 (office). Email: [email protected]. Website: http://vis.ninja/

Prerequisites: Prior programming experience in a high-level language (e.g., Java, JavaScript, Python, C/C++, C#).

COURSE DESCRIPTION: This is an introductory course in the design and evaluation of interactive visualizations for data analysis. Topics include human visual perception, visualization design, interaction techniques, and evaluation methods. Students develop projects to create their own web-based visualizations and develop competence to undertake independent research in visualization and visual analytics.

EXTENDED COURSE DESCRIPTION: This course introduces students to interactive data visualization from a human-centered perspective. Students learn how to apply principles from perceptual psychology, cognitive science, and graphic design to create effective visualizations for a variety of data types and analytical tasks. Topics include fundamentals of human visual perception and cognition, graphical data encoding, visual representations (including statistical plots, maps, graphs, small multiples), task abstraction, interaction techniques, data analysis methods (e.g., clustering and dimensionality reduction), and evaluation methods.
- Computational Photography
Computational Photography — George Legrady © 2020, Experimental Visualization Lab, Media Arts & Technology, University of California, Santa Barbara (http://vislab.ucsb.edu, http://georgelegrady.com).

Definition of computational photography: an R&D development of the mid-2000s. Computational photography refers to digital image-capture cameras where computers are integrated into the image-capture process within the camera. Examples of computational photography results include in-camera computation of digital panoramas, high-dynamic-range images, light-field cameras, and other unconventional optics; correlating one's image with others through the internet (Photosynth); and sensing in other parts of the electromagnetic spectrum. Three leading labs: Marc Levoy, Stanford (light fields, FrankenCamera, etc.); Ramesh Raskar, Camera Culture, MIT Media Lab; Shree Nayar, CV Lab, Columbia University. https://en.wikipedia.org/wiki/Computational_photography

[Slide diagram, from Shree K. Nayar, Columbia University: sensor system, optics, software processing, image.]

Shree K. Nayar, Director, Computer Vision Laboratory (lab description video): the lab focuses on vision sensors, physics-based models for vision, and algorithms for scene interpretation, with applications in digital imaging, machine vision, robotics, human-machine interfaces, computer graphics, and displays [URL]. Refraction (lens/dioptrics) and reflection (mirror/catoptrics) are combined in an optical system, as used in searchlights, headlamps, optical telescopes, microscopes, and telephoto lenses. A catadioptric camera enables 360-degree imaging: reflector | recorded image | unwrapped [dance video]; 360-degree cameras [various projects].

Marc Levoy, Computer Science Lab, Stanford: light fields were introduced into computer graphics in 1996 by Marc Levoy and Pat Hanrahan.
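One of the in-camera computations named above, high-dynamic-range imaging, reduces to a small amount of per-pixel arithmetic. Below is a simplified sketch of the classic exposure-stack merge; it assumes a linear camera response (real pipelines, e.g. Debevec-Malik, first recover the response curve), and all names are illustrative:

```cpp
// Simplified HDR merge: each pixel's radiance is a weighted average of
// (value / exposureTime) across exposures, with weights favoring mid-range
// values that are neither under- nor over-exposed.
#include <cstdint>
#include <vector>

std::vector<float> mergeHDR(const std::vector<std::vector<uint8_t>> &exposures,
                            const std::vector<float> &exposureTimes) {
    size_t numPixels = exposures.at(0).size();
    std::vector<float> radiance(numPixels, 0.0f);
    for (size_t p = 0; p < numPixels; ++p) {
        float sum = 0.0f, weightSum = 0.0f;
        for (size_t e = 0; e < exposures.size(); ++e) {
            float v = exposures[e][p];
            // Tent weight: trust mid-tones; distrust clipped shadows/highlights.
            float w = (v <= 127.0f) ? v + 1.0f : 256.0f - v;
            sum += w * (v / 255.0f) / exposureTimes[e];  // exposure-normalized
            weightSum += w;
        }
        radiance[p] = weightSum > 0.0f ? sum / weightSum : 0.0f;
    }
    return radiance;
}
```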
- Surfacing Visualization Mirages
Andrew McNutt, University of Chicago, Chicago, IL, [email protected] · Gordon Kindlmann, University of Chicago, Chicago, IL, [email protected] · Michael Correll, Tableau Research, Seattle, WA, [email protected]

ABSTRACT: Dirty data and deceptive design practices can undermine, invert, or invalidate the purported messages of charts and graphs. These failures can arise silently: a conclusion derived from a particular visualization may look plausible unless the analyst looks closer and discovers an issue with the backing data, visual specification, or their own assumptions. We term such silent but significant failures visualization mirages. We describe a conceptual model of mirages and show how they can be generated at every stage of the visual analytics process. We adapt a methodology from software testing, metamorphic testing, as a way of automatically surfacing potential mirages at the visual encoding stage of analysis through modifications to the underlying data and chart specification.

In this paper, we present a conceptual model of these visualization mirages and show how users' choices can cause errors in all stages of the visual analytics (VA) process that can lead to untrue or unwarranted conclusions from data. Using our model we observe a gap in automatic techniques for validating visualizations, specifically in the relationship between data and chart specification. We address this gap by developing a theory of metamorphic testing for visualization which synthesizes prior work on metamorphic testing [92] and algebraic visualization errors [54]. Through this combination we seek to alert viewers to situations where minor changes to the visualization design or backing data have large (but illusory) effects on the resulting visualization, or where potentially important …
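The metamorphic-testing idea is mechanical enough to sketch: apply a change to the backing data that should not alter the chart's message, re-render, and flag differences. This is an illustration of the general technique, not the paper's implementation, and the renderChartHash helper is hypothetical:

```cpp
// Metamorphic test sketch: shuffling row order should not change a chart
// whose encoding aggregates over rows; if the output changes, the chart is
// silently sensitive to row order — a candidate "mirage".
#include <algorithm>
#include <random>
#include <string>
#include <vector>

struct Row { std::string category; double value; };

// Assumed to exist for this sketch: renders a chart spec over data and
// returns a hash of the resulting image.
size_t renderChartHash(const std::string &chartSpec, const std::vector<Row> &data);

bool rowOrderMirage(const std::string &chartSpec, std::vector<Row> data) {
    size_t before = renderChartHash(chartSpec, data);
    std::mt19937 rng(42);
    std::shuffle(data.begin(), data.end(), rng);  // meaning-preserving change
    size_t after = renderChartHash(chartSpec, data);
    return before != after;  // changed output => chart encodes row order
}
```

If the test fires — say, because an aggregation silently keeps only the last value per category — the chart encodes something the analyst likely did not intend, which is exactly the kind of silent failure the paper targets.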
- Assessment of Graphic Processing Units (GPUs) for Department of Defense (DoD) Digital Signal Processing (DSP) Applications
Assessment of Graphic Processing Units (GPUs) for Department of Defense (DoD) Digital Signal Processing (DSP) Applications. John D. Owens, Shubhabrata Sengupta, and Daniel Horn† — University of California, Davis; †Stanford University.

Abstract: In this report we analyze the performance of the fast Fourier transform (FFT) on graphics hardware (the GPU), comparing it to the best-of-class CPU implementation, FFTW. We describe the FFT, the architecture of the GPU, and how general-purpose computation is structured on the GPU. We then identify the factors that influence FFT performance and describe several experiments that compare these factors between the CPU and the GPU. We conclude that the overheads of transferring data and initiating GPU computation are substantially higher than on the CPU, and thus for latency-critical applications the CPU is a superior choice. We show that the CPU implementation is limited by computation and the GPU implementation by GPU memory bandwidth and its lack of a writable cache. The GPU is comparatively better suited for larger FFTs with many FFTs computed in parallel, in applications where FFT throughput is most important; on these applications GPU and CPU performance is roughly on par. We also demonstrate that adding additional computation to an application that includes the FFT, particularly computation that is GPU-friendly, puts the GPU at an advantage compared to the CPU.

The future of desktop processing is parallel. The last few years have seen an explosion of single-chip commodity parallel architectures that promise to be the centerpieces of future computing platforms. Parallel hardware on the desktop — new multicore microprocessors, graphics processors, and stream processors — has the potential to greatly increase the computational power available to today's computer users, with a resulting impact in computation domains such as multimedia, entertainment, signal and image processing, and scientific computation.
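For concreteness, here is a minimal sketch of the CPU side of such a comparison: timing batched 1-D complex FFTs with FFTW, the report's CPU baseline. The sizes and harness are my own illustrative choices, not the report's benchmark code:

```cpp
// Throughput timing of batched 1-D complex FFTs with FFTW.
// Compile with: g++ -O3 fft_bench.cpp -lfftw3
#include <chrono>
#include <cstdio>
#include <fftw3.h>

int main() {
    const int n = 1024, batches = 10000;
    fftw_complex *in  = fftw_alloc_complex(n);
    fftw_complex *out = fftw_alloc_complex(n);

    // FFTW_ESTIMATE keeps planning cheap and leaves 'in' untouched;
    // FFTW_MEASURE plans better but may overwrite the input while planning.
    fftw_plan plan = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_ESTIMATE);

    for (int i = 0; i < n; ++i) { in[i][0] = i % 7; in[i][1] = 0.0; }

    auto t0 = std::chrono::steady_clock::now();
    for (int b = 0; b < batches; ++b) fftw_execute(plan);
    auto t1 = std::chrono::steady_clock::now();

    double sec = std::chrono::duration<double>(t1 - t0).count();
    std::printf("%.0f FFTs/sec at n=%d\n", batches / sec, n);

    fftw_destroy_plan(plan);
    fftw_free(in);
    fftw_free(out);
    return 0;
}
```

Note that plan creation happens once, up front: FFTW's planning cost is exactly the kind of initialization overhead the report contrasts with per-transform execution time on the GPU.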
- Physically-Based Rendering Uses Physics to Simulate the Interaction Between Matter and Light; Realism Is the Primary Goal
Course overview: Digital Image Synthesis — Yung-Yu Chuang, with slides by Mario Costa Sousa, Pat Hanrahan, and Ravi Ramamoorthi.

Logistics: Meeting time: 2:20pm-5:20pm, Thursday. Classroom: CSIE Room 111. Instructor: Yung-Yu Chuang ([email protected]). TA: 賴威昇. Webpage: http://www.csie.ntu.edu.tw/~cyy/rendering (id/password required). Mailing list: [email protected]; please subscribe via https://cmlmail.csie.ntu.edu.tw/mailman/listinfo/rendering/

Prerequisites: C++ programming experience is required; basic knowledge of algorithms and data structures is essential; knowledge of linear algebra, probability, calculus, and numerical methods is a plus; though not required, background knowledge of computer graphics is recommended.

Requirements (subject to change): 3 programming assignments (60%); class participation (5%); final project (35%).

Textbook: Physically Based Rendering: From Theory to Implementation, 2nd ed., by Matt Pharr and Greg Humphreys. The authors have a lot of experience with ray tracing; the complete (educational) code makes the material concrete; the book has been used in many courses and papers; it implements some advanced or difficult-to-implement methods (subdivision surfaces, Metropolis sampling, BSSRDF, PRT); a 3rd edition is coming next year.

pbrt won an Oscar in 2014: "To Matt Pharr, Greg Humphreys and Pat Hanrahan for their formalization and reference implementation of the concepts behind physically based rendering, as shared in their book Physically Based Rendering. Physically based rendering has transformed computer graphics lighting by more accurately simulating materials and lights, allowing digital artists to focus on cinematography rather than the intricacies of rendering. First published in 2004, Physically Based Rendering is both a textbook and a complete source-code implementation that has provided a widely adopted practical roadmap for most physically based shading and lighting systems used in film production."
- To Draw a Tree
To Draw a Tree — Pat Hanrahan, Computer Science Department, Stanford University.

Motivation — hierarchies are everywhere: file systems and web sites; organization charts; categorical classifications; similarity and clustering; branching processes; genealogy and lineages; phylogenetic trees; decision processes; indices or search trees; decision trees; tournaments.

Tree drawing: simple tree drawing uses a preorder or inorder traversal (a sketch follows below); the Reingold-Tilford algorithm produces tidier layouts. Information visualization. Tree representations: the most common is tournaments; the second most common is lineages (http://www.royal.gov.uk/history/trees.htm). Demonstrations: Saito-Sederberg genealogy viewer; C. elegans cell lineage [Sulston]. Evolutionary trees: [Haeckel]; [Agassiz, 1883]; Chapple and Garofolo, in Tufte; [Furbringer]; [Simpson]; [Gould]. Tree of life: [Haeckel], [Tufte]; Janvier, 1812. "Graphical excellence is nearly always multivariate" — Edward Tufte. Phenograms to cladograms: GeneBase; http://www.gwu.edu/~clade/spiders/peet.htm. The shape of trees. Patterns of evolution.

Hierarchical databases: Stolte and Hanrahan, Polaris, InfoVis 2000. Generalization: aggregation, simplification, filtering. Abstraction hierarchies. Datacubes: star and snowflake schemas.

Memory & code: cache misses for a procedure over 10 million cycles (white = not run, grey = no misses, red = number of misses); the y-dimension is source code and the x-dimension is cycles (time). Zooming on y zooms from file to procedure to line to assembly code; zooming on x increases time resolution down to one cycle per bar.

Themes — cognitive principles for design. Congruence principle: the structure and content of the external representation should correspond to the desired structure and content of the internal representation.
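As a concrete illustration of the "simple tree drawing" approach mentioned above (a sketch of the naive method, not Hanrahan's slide code): assign y by depth, give leaves successive x slots in traversal order, and center each parent over its children. The Reingold-Tilford algorithm refines exactly this layout to be tidier and more symmetric.

```cpp
// Naive tidy-ish tree layout via one traversal.
#include <vector>

struct Node {
    std::vector<Node*> children;
    float x = 0, y = 0;   // layout output
};

void layout(Node *n, int depth, int &nextX) {
    n->y = static_cast<float>(depth);
    if (n->children.empty()) {
        n->x = static_cast<float>(nextX++);   // leaves get successive slots
        return;
    }
    for (Node *c : n->children) layout(c, depth + 1, nextX);
    // Center an interior node over its first and last children.
    n->x = 0.5f * (n->children.front()->x + n->children.back()->x);
}
```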
- The Implementation of a Scalable Texture Cache
The Implementation of a Scalable Texture Cache — Matt Pharr, 5 March 2017.

Abstract: Texture caching — loading rectangular tiles of texture maps on demand and keeping a fixed number of them cached in memory — has proven to be an effective approach for limiting memory use when rendering complex scenes; scenes with gigabytes of texture can often be rendered with just tens or hundreds of megabytes of texture resident in memory. However, implementing an efficient texture cache is challenging in the presence of tens of rendering threads; each traced ray generally performs multiple cache lookups, one for each texel accessed during texture filtering, all while new tiles are being added to the cache and old tiles are recycled. Traditional thread synchronization approaches based on mutexes give poor parallel scalability with this workload. We present a texture cache implementation based on lock-free algorithms and the "read-copy update" technique, and show that it scales linearly up to (at least) 32 threads. Our implementation performs just as well as pbrt does with preloaded textures, where there are no such cache maintenance complexities.

Note: This document describes new functionality that may be included in a future version of pbrt. The implementation is available in the texcache branch in the pbrt-v3 repository on GitHub. We're interested in feedback (especially bug reports) about our implementation; please send a note to [email protected] or post to the pbrt Google Groups list if you have comments or questions. (Given that the implementation does some fairly tricky things to minimize locking and inter-thread coordination, it is entirely possible that subtle bugs remain.)

1.1 INTRODUCTION. Most visually complex computer graphics scenes make extensive use of texture maps to represent variation in surface appearance.
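The abstract's combination of lock-free reads with read-copy update can be sketched as follows. This is my own illustration of the pattern, not pbrt's texcache code, and it deliberately elides the hard part — safely reclaiming the old table after a grace period:

```cpp
// RCU-style tile lookup: readers find tiles through an atomically published
// table and never lock; a single writer copies the table, inserts the new
// tile, and publishes the copy.
#include <atomic>
#include <cstddef>
#include <unordered_map>
#include <vector>

struct TileId {
    int texture, level, x, y;
    bool operator==(const TileId &o) const {
        return texture == o.texture && level == o.level && x == o.x && y == o.y;
    }
};
struct TileIdHash {
    std::size_t operator()(const TileId &t) const {
        std::size_t h = std::hash<int>()(t.texture);
        h = h * 31 + std::hash<int>()(t.level);
        h = h * 31 + std::hash<int>()(t.x);
        return h * 31 + std::hash<int>()(t.y);
    }
};
struct Tile { std::vector<float> texels; };

using TileTable = std::unordered_map<TileId, const Tile *, TileIdHash>;
std::atomic<const TileTable *> gTiles{new TileTable};

// Hot path, executed once per filtered texel: one acquire load, no locks.
const Tile *Lookup(const TileId &id) {
    const TileTable *t = gTiles.load(std::memory_order_acquire);
    auto it = t->find(id);
    return it == t->end() ? nullptr : it->second;
}

// Slow path; assumes one writer at a time (serialized by the caller).
void Publish(const TileId &id, const Tile *tile) {
    const TileTable *old = gTiles.load(std::memory_order_relaxed);
    auto *next = new TileTable(*old);                   // copy
    (*next)[id] = tile;                                 // update
    gTiles.store(next, std::memory_order_release);      // publish
    // 'old' is intentionally leaked here; real RCU defers deletion until
    // no reader can still hold a pointer into it (a grace period).
}
```

The key property is that the read path costs a single atomic load regardless of thread count, which is what makes the reported linear scaling to 32 threads plausible.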