
CMSC740 Advanced Computer Graphics

Spring 2021, Matthias Zwicker

Today • Course overview • Organization • Introduction to rendering

2 Computer graphics “Algorithms to create and manipulate images”

3 Core areas • Rendering: computing images given 3D scene descriptions • Modeling: creating 3D scene descriptions • Animation: making things move

This course • First half: rendering (realistic) • Second half: modeling using AI/deep learning methods

Other graphics courses at UMD • CMSC425, Game Programming • CMSC427, Computer Graphics • CMSC828X, Physically-based Modeling, Simulation, and Animation

4 Today • Course overview • Organization • Introduction to rendering

5 Organization

• Class material – Course page on UMD Canvas – Log in with directory ID • Teaching assistant – Saeed Hadadan, 2nd year CS PhD student

6 Organization • Lectures Tuesdays and Thursdays, 11:00am–12:15pm, online, synchronous • Office hours – TA: TBA – Instructor: By appointment (send me email)

7 Grading • Four programming assignments – 40% of final grade – Individual projects – During first half of course • Final project – 40% of final grade – Groups of two – During second half of course • Final exam – 15% of final grade • Class participation – Answering clicker questions – 5% of final grade

8 Policies • Standard UMD academic integrity policies https://www.studentconduct.umd.edu/academic-dishonesty • Programming assignments – Each student submits their own solution (no copying, no direct collaboration on code) – Penalty for late submission: 50% of original score – Public posting (github etc.) of solutions prohibited

9 What you should know • Programming experience in Java or C++, data structures (CMSC420) • Linear algebra (MATH240) – Vectors, matrices, systems of linear equations, coordinate transformations • (Basic background in 3D graphics) – E.g., Shirley, Fundamentals of Computer Graphics

10 What you should know

• Mathematics – Vectors in 3D (dot product, vector product) – Matrix algebra (addition, multiplication, inverse, determinant, transpose, orthogonal matrices) – Coordinate systems (homogeneous coordinates, change of coordinates) – Transformation matrices (scaling, rotation, translation, shearing) – Plane equations – Systems of linear equations (matrix representation, elementary solution procedures) • Graphics – Color – 3D scene representation (triangle meshes, scene graphs)

11 Learning objectives • Understand and apply mathematical concepts and advanced algorithms in computer graphics – Rendering (rendering equation, Monte Carlo integration, path tracing, 3D geometry processing, numerical techniques, etc.) – Deep learning in computer graphics • Sharpen your programming skills (C++)

12 C/C++ programming experience

A. Never used B. Beginner (worked with it less than 16 hours/2 days) C. Intermediate (worked with it 16-80 hours/2-10 days) D. Advanced

13 I can explain matrix multiplication

A. Strongly Agree B. Agree C. Somewhat Agree D. Neutral E. Somewhat Disagree F. Disagree G. Strongly Disagree

14 I can explain the concept of the basis of a vector space

A. Strongly Agree B. Agree C. Somewhat Agree D. Neutral E. Somewhat Disagree F. Disagree G. Strongly Disagree

15 Today • Course overview • Organization • Introduction to rendering

16 First half of course • 7-week crash course on the "theory and practice of photo-realistic rendering for computer graphics"

17 Rendering algorithms

• Problem statement – Given a computer representation of a 3D scene, generate the image of the scene that would be captured by a virtual camera at a given location • Applications – Movies, games, design, architecture, virtual/augmented reality, tele-collaboration, etc.

18 Challenges • "Photo-realism" – Realistic simulation of light transport

• Efficiency – 2,300 CPU years for "Cars", Pixar (2006) – 10,000 CPU years for "Monsters University", Pixar (2013) https://venturebeat.com/2013/04/24/the-making-of-pixars-latest-technological-marvel-monsters-university/

19 Challenges • Real-time rendering – Games, VR, AR (90 frames per second (fps), 7-15ms motion-to-photon latency) • Modeling & animation complexity – Avatar: budget of several hundred million dollars, 1000 people – Production mostly busy with modeling and animating complex 3D scenes

20 Rendering strategies

• Two fundamentally different strategies for image generation (rendering) 1. Z-buffering, rasterization 2. Ray tracing • In both cases, the most popular object representation is triangle meshes – Data structure: array of x,y,z vertex positions; index array indicating triplets of vertices forming triangles (see the sketch below)
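To make the data structure concrete, here is a minimal C++ sketch of such an indexed triangle mesh; the struct and member names are illustrative, not the layout of any particular renderer.

#include <cstddef>
#include <cstdint>
#include <vector>

// Minimal indexed triangle mesh: a flat array of vertex positions plus an
// index array in which each consecutive triple of indices forms one triangle.
struct TriangleMesh {
    std::vector<float>         positions; // x0,y0,z0, x1,y1,z1, ... (3 floats per vertex)
    std::vector<std::uint32_t> indices;   // i0,i1,i2, i3,i4,i5, ... (3 indices per triangle)

    std::size_t vertexCount()   const { return positions.size() / 3; }
    std::size_t triangleCount() const { return indices.size() / 3; }
};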

21 Z-buffering

Rendering pipeline https://en.wikipedia.org/wiki/Graphics_pipeline
Input data (triangles, etc.) → Transformation, projection → Rasterization → Shading → Display

Object-order algorithm: transform, project, and rasterize one object (usually a triangle) after another; visibility resolved via the z-buffer

22 Z-buffering • Implemented in graphics hardware (AMD, NVIDIA GPUs) • Standardized APIs (OpenGL, Direct3D, Vulkan) • Interactive rendering, AR/VR, games • Limited photo-realism

23 Ray tracing

Image-order algorithm: loop over pixels and paint one pixel after another. For each pixel, generate a primary view ray, find its first intersection with the 3D scene representation, and shade the hit point (using a shadow ray) to set the output image pixel.

[Figure: primary view ray per pixel, http://en.wikipedia.org/wiki/Ray_tracing_(graphics)]

24 Ray tracing vs. rasterization

• Rasterization – Desirable memory access pattern ("streaming", data locality, avoids random scene access) – Suitable for real-time rendering (OpenGL, DirectX) – Full global illumination not possible with a purely object-order algorithm • Ray tracing – Undesirable memory access pattern (random scene access) – Requires sophisticated data structures for fast scene access – Full global illumination possible – Most popular for photo-realistic rendering – Recently with hardware support https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/technologies/turing-architecture/NVIDIA-Turing-Architecture-Whitepaper.pdf

25 Goal of first half of class

• Learn theory and practice of light transport simulation for computer graphics – “Rendering algorithms” – Spatial data structures for ray tracing, physics background, rendering equation, (quasi) Monte Carlo integration, (multiple) importance sampling, Markov Chain Monte Carlo sampling, etc.

26 Realistic rendering

[Figure: ray tracing, soft shadows, global illumination, caustics — Wann Jensen]

27 Caustics

[Wann Jensen] http://graphics.ucsd.edu/~henrik/papers/book/

28 Indirect illumination

[Wann Jensen]

29 Atmospheric effects

[Wann Jensen] http://graphics.ucsd.edu/~henrik/papers/nightsky/

30 Participating media

[Novak et al.] http://zurich.disneyresearch.com/~wjarosz/publications/novak12vrls.html

31 Complex materials

Reflection and transmission in a hair fiber [http://www.cs.cornell.edu/~srm/publications/SG03-hair-abstract.html]
Maxwell renderer http://www.maxwellrender.com/index.php/features

32 Rendering artifacts, noise

A big challenge; addressed with more computation time and more sophisticated algorithms

Cornell box http://en.wikipedia.org/wiki/Cornell_Box

33 Recommended book

• "Physically Based Rendering", by Pharr, Jakob, Humphreys, http://pbrt.org/ – "Bible" of realistic rendering – Available freely online http://www.pbr-book.org/3ed-2018/contents.html – Detailed, useful as reference – Includes many advanced concepts – With C++ code

34 Other books

• "Ray Tracing from the Ground Up", by Kevin Suffern – Good intro, with Java code • "Realistic Image Synthesis Using Photon Mapping", by Henrik Wann Jensen – Focus on photon mapping • "Advanced Global Illumination", by Dutré et al. – More on the theoretical side – Many advanced concepts

35 Other books

• "Realistic Ray Tracing", by Peter Shirley – Good/short intro, with C code

36 Ray tracing pseudocode

rayTrace() {
    construct scene representation
    for each pixel {
        ray = computePrimaryViewRay( pixel )
        hit = first intersection with scene
        color = shade( hit )  // using shadow ray
        set pixel color
    }
}

[Figure: primary ray and hit point, http://en.wikipedia.org/wiki/Ray_tracing_(graphics)]

37 Computational complexity of ray tracing depends on

A. Number of pixels in image B. Number of triangles in 3D triangle meshes C. Product of number of pixels and number of triangles

38 Computing primary rays • Primary rays sometimes called "camera rays" or "view rays" • Need to define camera coordinate system – Specifies position and orientation of virtual camera in 3D space • Need to define viewing frustum – Pyramid-shaped volume that contains the 3D volume seen by the camera • Given an image pixel 1. Determine ray in camera coordinates 2. Transform ray to world coordinates (sometimes also called "canonical" coordinates)

39 Camera coordinate system

User-provided parameters (up, from, to vectors) place the camera in the world

Position & orientation of virtual camera

Scene object; world coordinates given by x, y, z basis vectors

40 Camera coordinate system

Need to construct basis of camera coordinate system (u,v,w) based on user parameters (up, from, to)

u,v,w camera coordinates

x, y, z world coordinates; right-handed coordinate systems! https://en.wikipedia.org/wiki/Right-hand_rule

41 Camera coordinate system • Given from, to, up, construct the camera basis u, v, w (a sketch follows below) • Use homogeneous coordinates to enable affine transformations (including rotation, translation) with matrix multiplications https://en.wikipedia.org/wiki/Transformation_matrix#Affine_transformations

[Figure: camera coordinates u, v, w; scene object; world coordinates x, y, z]
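A minimal C++ sketch of this construction, assuming the right-handed convention above where the camera looks along -w; the Vec3 helpers and function names are illustrative, not the camera code of Nori or any other renderer.

#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 a) {
    double len = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return {a.x / len, a.y / len, a.z / len};
}

// Right-handed camera frame: the camera sits at 'from', looks towards 'to',
// and 'up' fixes the roll. The viewing direction is -w; u points right, v up.
struct CameraFrame { Vec3 u, v, w, e; };

CameraFrame makeCameraFrame(Vec3 from, Vec3 to, Vec3 up) {
    CameraFrame cam;
    cam.e = from;                         // camera origin
    cam.w = normalize(sub(from, to));     // w points opposite the viewing direction
    cam.u = normalize(cross(up, cam.w));  // right vector, perpendicular to up and w
    cam.v = cross(cam.w, cam.u);          // completes the right-handed basis (already unit length)
    return cam;
}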

42 Viewing frustum • 3D volume that is projected within image boundaries – "Pyramidal cone" that contains all camera rays • Defined by – Vertical field-of-view θ – Aspect ratio aspect = width/height – Image plane at w = -1

43 Viewing frustum

Vertical field-of-view

Image plane, at w=-1 by convention

44 From pixel i,j to primary viewing ray

Vertical field-of-view; primary viewing ray (in camera coordinates); pixel with integer index i, j

Image plane

45 Primary rays in camera coords. Primary ray

Pixel i,j corresponds to point

Image plane. Caution: origin in homogeneous coordinates

46 Bounds of image plane t, b, l, r

• Vertical field-of-view θ • Aspect ratio aspect = width/height • Image coordinates u, v

Top, bottom, left, right coords. of image corners
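A small sketch of these bounds, assuming the conventions above (image plane at w = -1, vertical field of view given in radians); the struct and names are illustrative.

#include <cmath>

// Bounds of the image plane at w = -1 for vertical field-of-view thetaRad
// (in radians) and aspect ratio width/height: top/bottom follow from the
// field of view, left/right scale with the aspect ratio.
struct ImageBounds { double t, b, l, r; };

ImageBounds imageBounds(double thetaRad, double aspect) {
    ImageBounds bounds;
    bounds.t = std::tan(0.5 * thetaRad);  // half-height of the image plane
    bounds.b = -bounds.t;
    bounds.r = aspect * bounds.t;         // half-width = aspect * half-height
    bounds.l = -bounds.r;
    return bounds;
}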

47 Image plane position u,v given pixel index i,j

• Image resolution m × n pixels

Pixel (i=m-1, j=n-1)

Pixel (i, j)

Camera coords. of center of pixel i, j

Pixel (i=0, j=0)

48 Area of pixel i, j in u, v coords.

• Image resolution m × n pixels

Pixel (i=m-1, j=n-1)

Pixel (i, j)

Square area around pixel i, j

Pixel (i=0, j=0)

Often, we want the primary ray for a pixel to pass through a random position within the pixel (see the sketch below)
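A small sketch of this mapping, assuming the bounds l, r, b, t from the previous sketch and pixel indices i in [0, m), j in [0, n); the jitter parameters are a hypothetical way to sample a random position within the pixel (0.5 gives the pixel center).

// Map pixel (i, j) of an m x n image to image-plane coordinates (u, v),
// given the bounds l, r, b, t of the image plane. jitter = 0.5 gives the
// pixel center; a random jitter in [0, 1) gives a random position in the pixel.
struct UV { double u, v; };

UV pixelToImagePlane(int i, int j, int m, int n,
                     double l, double r, double b, double t,
                     double jitterU = 0.5, double jitterV = 0.5) {
    UV p;
    p.u = l + (r - l) * (i + jitterU) / m;
    p.v = b + (t - b) * (j + jitterV) / n;
    return p;
}

// Example with a random sample inside the pixel:
//   std::mt19937 rng(42);
//   std::uniform_real_distribution<double> uni(0.0, 1.0);
//   UV p = pixelToImagePlane(i, j, m, n, l, r, b, t, uni(rng), uni(rng));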

49 Primary rays in camera coords.

• Given location (u,v) on image plane, get ray p_uvw(t) in camera coords. through (u,v)

Image plane

50 Transformation to world coords.

• World coordinates also called "canonical" coordinates • Camera coords. given by basis vectors u, v, w and origin e, as column vectors • Change-of-basis matrix (camera to world) https://en.wikipedia.org/wiki/Change_of_basis

• Ray p_xyz(t) in world coordinates (see the sketch below)
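A sketch combining the two steps, assuming the camera frame and image-plane conventions from the earlier sketches. Since the basis vectors u, v, w and origin e are already stored in world coordinates, applying the change-of-basis matrix with columns u, v, w, e amounts to the linear combination below; the ray direction is left unnormalized here, although renderers often normalize it.

struct Vec3 { double x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }

struct Ray { Vec3 origin, dir; };         // p(t) = origin + t * dir

// Camera frame as in the earlier sketch: basis u, v, w and origin e,
// all already expressed in world coordinates.
struct CameraFrame { Vec3 u, v, w, e; };

// Primary ray through image-plane point (u, v): in camera coordinates the
// direction is (u, v, -1), since the image plane sits at w = -1. Expressing
// that direction in world coordinates is exactly the change of basis:
//   world direction = u * cam.u + v * cam.v + (-1) * cam.w,  origin = cam.e.
Ray primaryRay(const CameraFrame &cam, double u, double v) {
    Ray ray;
    ray.origin = cam.e;
    ray.dir = add(add(scale(u, cam.u), scale(v, cam.v)), scale(-1.0, cam.w));
    return ray;
}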

51 Ray tracing pseudocode

rayTrace() {
    construct scene representation
    for each pixel {
        ray = computePrimaryViewRay( pixel )
        hit = first intersection with scene
        color = shade( hit )  // using shadow ray
        set pixel color
    }
}
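For concreteness, a self-contained C++ sketch of this loop that traces a single unit sphere, shades hits with a simple n·l term, and writes a grayscale PGM image. The scene, camera, and shading are stand-ins chosen for illustration only, not the structure of Nori or any other renderer.

#include <algorithm>
#include <cmath>
#include <fstream>
#include <vector>

struct Vec3 { double x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 a) { return scale(1.0 / std::sqrt(dot(a, a)), a); }

struct Ray { Vec3 origin, dir; };                        // p(t) = origin + t * dir

// First intersection of a ray with the unit sphere at the origin:
// solve |origin + t dir|^2 = 1 for the smallest t > 0.
static bool intersectUnitSphere(const Ray &ray, double &tHit) {
    double a = dot(ray.dir, ray.dir);
    double b = 2.0 * dot(ray.origin, ray.dir);
    double c = dot(ray.origin, ray.origin) - 1.0;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return false;
    double t = (-b - std::sqrt(disc)) / (2.0 * a);       // nearer root first
    if (t <= 0.0) t = (-b + std::sqrt(disc)) / (2.0 * a);
    if (t <= 0.0) return false;
    tHit = t;
    return true;
}

int main() {
    const int m = 256, n = 256;                          // image resolution
    const double kPi = 3.14159265358979323846;
    const double fov = 60.0 * kPi / 180.0, aspect = double(m) / n;
    const double t = std::tan(0.5 * fov), b = -t, r = aspect * t, l = -r;
    const Vec3 eye = {0.0, 0.0, 3.0};                    // camera on +z axis, looking along -z
    const Vec3 lightDir = normalize({1.0, 1.0, 1.0});

    std::vector<double> image(m * n, 0.0);
    for (int j = 0; j < n; ++j) {                        // loop over pixels
        for (int i = 0; i < m; ++i) {
            double u = l + (r - l) * (i + 0.5) / m;      // pixel center on image plane
            double v = b + (t - b) * (j + 0.5) / n;
            Ray ray = {eye, normalize({u, v, -1.0})};    // primary view ray
            double tHit;
            if (intersectUnitSphere(ray, tHit)) {        // first intersection with scene
                Vec3 p = add(ray.origin, scale(tHit, ray.dir));
                Vec3 normal = normalize(p);              // unit sphere: normal = position
                double shade = std::max(0.0, dot(normal, lightDir)); // simple n.l "shading"
                image[j * m + i] = shade;                // set pixel color
            }
        }
    }

    std::ofstream out("sphere.pgm");                     // write grayscale PGM image
    out << "P2\n" << m << " " << n << "\n255\n";
    for (int j = n - 1; j >= 0; --j)                     // flip rows so +v points up
        for (int i = 0; i < m; ++i)
            out << int(255.0 * image[j * m + i]) << "\n";
    return 0;
}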

52 Ray-surface intersection

• Implicit surfaces • Parametric surfaces

53 Implicit surfaces http://en.wikipedia.org/wiki/Surface#Extrinsically_defined_surfaces_and_embeddings • Defined as set of points p such that f(p) = 0

• Implicit representation defines a solid – Including the interior volume – Not just the surface

54 Implicit surfaces • 3D visualization, p=(x,y,z)

f (x,y,z) = 0

f (x,y,z) < 0

f (x,y,z) > 0

55 Implicit surfaces: ray-surface intersection http://en.wikipedia.org/wiki/Surface#Extrinsically_defined_surfaces_and_embeddings

• Given a ray with origin e, direction d: p(t) = e + t d

• Ray-surface intersection: solve for t such that f(p(t)) = 0

• In general, non-linear equation of variable t – Use numerical root-finding algorithm http://en.wikipedia.org/wiki/Root-finding_algorithm – For example, secant method http://en.wikipedia.org/wiki/Secant_method
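A sketch of this idea using the secant method named above; f is passed as a generic function, the two starting guesses t0, t1 are assumed to be reasonable, and a production implementation would bracket the root and use a safeguarded method instead.

#include <cmath>
#include <functional>

struct Vec3 { double x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }

// Find t with f(e + t d) = 0 using the secant method, starting from two
// initial guesses t0, t1. Returns true on convergence. The plain secant
// method needs reasonable starting values and offers no guarantees.
bool intersectImplicit(const std::function<double(Vec3)> &f,
                       Vec3 e, Vec3 d, double t0, double t1,
                       double &tHit, int maxIter = 50, double eps = 1e-9) {
    auto g = [&](double t) { return f(add(e, scale(t, d))); };  // g(t) = f(p(t))
    double g0 = g(t0), g1 = g(t1);
    for (int i = 0; i < maxIter; ++i) {
        if (std::fabs(g1 - g0) < 1e-300) return false;          // avoid division by ~0
        double t2 = t1 - g1 * (t1 - t0) / (g1 - g0);            // secant update
        if (std::fabs(t2 - t1) < eps) { tHit = t2; return true; }
        t0 = t1; g0 = g1;
        t1 = t2; g1 = g(t1);
    }
    return false;
}

// Example: unit sphere f(p) = x^2 + y^2 + z^2 - 1 and a ray from (0,0,3)
// along (0,0,-1); starting from t0 = 1, t1 = 3 the iteration converges to
// the first hit at t = 2.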

56 Infinite plane • Unit normal n, arbitrary point a on the plane • Plane defined as the set of points p where f(p) = (p − a) · n = 0

• f(p) is the signed distance to the plane! • Intersection with p(t): solve f(p(t)) = 0 for t (see the sketch below)

[Figure: plane with normal n, point a, point p, vector p − a]

Point on ray
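A minimal sketch of the resulting closed-form solution: plugging p(t) = e + t d into f(p) = n · (p − a) gives a linear equation in t. Names and the parallel-ray tolerance are illustrative.

#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Intersect the ray p(t) = e + t d with the plane through point a with unit
// normal n: n . (e + t d - a) = 0  =>  t = n . (a - e) / (n . d).
// Returns false if the ray is (nearly) parallel to the plane or the
// intersection lies behind the ray origin.
bool intersectPlane(Vec3 e, Vec3 d, Vec3 a, Vec3 n, double &tHit) {
    double denom = dot(n, d);
    if (std::fabs(denom) < 1e-12) return false;   // ray parallel to the plane
    double t = dot(n, sub(a, e)) / denom;
    if (t <= 0.0) return false;                   // intersection behind the origin
    tHit = t;
    return true;
}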

57 Parametric surfaces • Parametric surfaces are boundary representations (b-reps) http://en.wikipedia.org/wiki/Boundary_representation – "A solid object is given by describing its boundary between solid and non-solid" • In general, surfaces can be described either implicitly or parametrically

58 Parametric surfaces • Parameters (u, v)

Point p on surface: p = (x, y, z) = (f(u,v), g(u,v), h(u,v))

– Complicated surfaces require fitting together many parametric patches (charts)

http://en.wikipedia.org/wiki/B%C3%A9zier_surface

59 Parametric surfaces • Visualization

[Figure: the point (u,v) = (0.8, 0.7) in the 2D parameter domain maps to the surface point x = f(0.8,0.7), y = g(0.8,0.7), z = h(0.8,0.7)]

60 Parametric surfaces • Ray-surface intersection: point on ray = point on surface – Ray origin e, direction d: e + t d = (f(u,v), g(u,v), h(u,v))

• Solve 3 equations for 3 unknowns t, u, v • Easy to solve for parametric planes (see ray-triangle intersection) • Non-linear equations for higher-order surfaces (e.g., NURBS) – Requires iterative solution

61 Ray-triangle intersection

• Parametric description of a triangle with vertices a, b, c (parameters α, β instead of u, v)

• Convention: vertices are ordered counter-clockwise as seen from the outside

[Figure: triangle with vertices a, b, c]

62 Ray-triangle intersection • Outward-pointing normal: n ∝ (b − a) × (c − a)

• Triangle in barycentric coordinates α, β, γ

• Given two of the coordinates, can easily compute third, e.g. α = 1 − β − γ

63 Ray-triangle intersection • Intersection condition

Point on ray = point on triangle: e + t d = a + β (b − a) + γ (c − a)

• One equation for each coordinate x,y,z

64 Ray-triangle intersection • In matrix form: [a − b   a − c   d] (β, γ, t)ᵀ = a − e

• Solve for β, γ, t using Cramer's rule (Shirley, page 157) • Ray intersects triangle if t > 0, β > 0, γ > 0, and β + γ < 1 (see the sketch below)
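A sketch of this solve, assuming the column-vector form above; the determinants are evaluated as scalar triple products, and the tolerance for a (near-)parallel ray is an arbitrary choice.

#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
// Determinant of the 3x3 matrix with columns c0, c1, c2 (scalar triple product).
static double det3(Vec3 c0, Vec3 c1, Vec3 c2) { return dot(c0, cross(c1, c2)); }

// Solve  beta (a-b) + gamma (a-c) + t d = a - e  with Cramer's rule and test
// the conditions t > 0, beta > 0, gamma > 0, beta + gamma < 1.
bool intersectTriangle(Vec3 e, Vec3 d, Vec3 a, Vec3 b, Vec3 c,
                       double &tHit, double &beta, double &gamma) {
    Vec3 ab = sub(a, b), ac = sub(a, c), ae = sub(a, e);
    double det = det3(ab, ac, d);
    if (std::fabs(det) < 1e-12) return false;    // ray parallel to the triangle plane
    beta  = det3(ae, ac, d) / det;               // replace 1st column by a - e
    gamma = det3(ab, ae, d) / det;               // replace 2nd column by a - e
    double t = det3(ab, ac, ae) / det;           // replace 3rd column by a - e
    if (t <= 0.0 || beta < 0.0 || gamma < 0.0 || beta + gamma > 1.0) return false;
    tHit = t;
    return true;
}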

65 Example: unit sphere • Radius 1, centered at origin (0,0,0) • Implicit – f(x,y,z) = x² + y² + z² − 1 • Parametric – Spherical coordinates (polar, azimuthal angles) θ, φ – x = sin(θ)cos(φ), y = sin(θ)sin(φ), z = cos(θ)

66 What is easier to ray trace, implicit or parametric sphere?

A. Implicit equations for sphere B. Parametric equations for sphere

67 Surface normals https://en.wikipedia.org/wiki/Normal_(geometry) • Unit vector perpendicular to tangent plane at each point on surface (i.e., the surface normal is the normal of the tangent plane at each point) • Required for shading calculations

[Figure: the surface normal is perpendicular to the tangent plane at each point; normal field of a curved surface]

68 Surface normals • How to compute for – Parametric surfaces? – Implicit surfaces?

69 Surface normals • Implicit surfaces: gradient of f, n ∝ ∇f(p)

• Parametric surfaces: cross product of tangent vectors (partial derivatives), n ∝ ∂p/∂u × ∂p/∂v (see the sketch below)
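A sketch of both recipes using central finite differences, so it works for any implicit function f or parametric map p(u, v) passed in; analytic gradients and partial derivatives would normally be preferred when they are available.

#include <cmath>
#include <functional>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 scale(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 a) { return scale(1.0 / std::sqrt(dot(a, a)), a); }

// Implicit surface f(p) = 0: the (unnormalized) normal is the gradient of f,
// approximated here with central differences.
Vec3 implicitNormal(const std::function<double(Vec3)> &f, Vec3 p, double h = 1e-5) {
    Vec3 grad = {
        (f({p.x + h, p.y, p.z}) - f({p.x - h, p.y, p.z})) / (2.0 * h),
        (f({p.x, p.y + h, p.z}) - f({p.x, p.y - h, p.z})) / (2.0 * h),
        (f({p.x, p.y, p.z + h}) - f({p.x, p.y, p.z - h})) / (2.0 * h)};
    return normalize(grad);
}

// Parametric surface p(u, v): the normal is the cross product of the two
// tangent vectors dp/du and dp/dv, again via central differences.
Vec3 parametricNormal(const std::function<Vec3(double, double)> &surf,
                      double u, double v, double h = 1e-5) {
    Vec3 du = scale(1.0 / (2.0 * h), sub(surf(u + h, v), surf(u - h, v)));
    Vec3 dv = scale(1.0 / (2.0 * h), sub(surf(u, v + h), surf(u, v - h)));
    return normalize(cross(du, dv));
}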

70 Transforming Normal Vectors

• Normal n is perpendicular to tangent t

• Under a transformation of the tangent, t → M t, find the transformation X of the normal n that satisfies the normal constraint (X n) · (M t) = 0; the solution is X = (M⁻¹)ᵀ

• Note: X = M if and only if M includes only rotation and translation (no shear); see the sketch below
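A small sketch demonstrating the inverse-transpose rule with a shear matrix: transforming the normal by M itself breaks perpendicularity with the transformed tangent, while the inverse transpose (built here from the adjugate) preserves it. The Mat3 representation is ad hoc for this example.

#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// 3x3 matrix stored by columns; M * v = v.x*c0 + v.y*c1 + v.z*c2.
struct Mat3 { Vec3 c0, c1, c2; };
static Vec3 apply(const Mat3 &M, Vec3 v) {
    return add(add(scale(v.x, M.c0), scale(v.y, M.c1)), scale(v.z, M.c2));
}

// Normal matrix: transpose of the inverse, built from the adjugate.
// The columns of (M^-1)^T are the cross products of M's columns over det(M).
static Mat3 inverseTranspose(const Mat3 &M) {
    double inv = 1.0 / dot(M.c0, cross(M.c1, M.c2));
    return {scale(inv, cross(M.c1, M.c2)),
            scale(inv, cross(M.c2, M.c0)),
            scale(inv, cross(M.c0, M.c1))};
}

int main() {
    // A shear along x (x' = x + 0.5 y), stored as columns of M.
    Mat3 M = {{1, 0, 0}, {0.5, 1, 0}, {0, 0, 1}};
    Vec3 tangent = {0, 1, 0}, normal = {1, 0, 0};        // perpendicular pair
    Vec3 tT = apply(M, tangent);                         // transformed tangent
    Vec3 nWrong = apply(M, normal);                      // normal transformed by M
    Vec3 nRight = apply(inverseTranspose(M), normal);    // normal transformed correctly
    std::printf("dot with M n:        %g\n", dot(tT, nWrong));  // generally != 0
    std::printf("dot with (M^-1)^T n: %g\n", dot(tT, nRight));  // 0, as required
    return 0;
}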

71 Rendering assignments

• Assignments available on UMD ELMS/Canvas https://myelms.umd.edu • Extend an educational renderer with additional features, step-by-step – "Nori", developed by Wenzel Jakob http://rgl.epfl.ch/people/wjakob/ • First assignment – Getting started with base code – Deadline: Tuesday, Feb. 2nd – Turn in via file upload to ELMS/Canvas

72 "Nori" notes • Construction of basis vectors for camera: parser.cpp, loadFromXML • Construction of primary rays – Loop over pixels: main.cpp, renderBlock (block-wise multi-threaded rendering via thread building blocks, TBB, https://www.threadingbuildingblocks.org/) – Make ray in world space based on camera pose and image location: .cpp, sampleRay • Ray-triangle intersection: mesh.cpp, rayIntersect • Shading: integrator.h, Integrator::Li

73 Homework policies • No public git forks of your homework! – Of course, a private git repo on github or elsewhere in the cloud is a good idea • Adhere to the academic integrity policies of UMD/UMD CS http://osc.umd.edu/OSC/Default.aspx http://www.cs.umd.edu/class/resources/academicIntegrity.html – Discussing homework with colleagues is fine, but no sharing of code – No copying of code found online

74 Next • Acceleration structures for ray tracing
