CMSC 23700 Introduction to Computer Graphics
Project 4 (Autumn 2017)
November 2, 2017

Deferred Rendering
Due: Wednesday November 15 at 10pm

1 Summary

This assignment uses the same application architecture as the earlier projects, but it replaces the forward-rendering approach with deferred rendering, which means that the rendering engine must be rewritten.

The rendering technique that you have implemented in the previous projects is sometimes called forward rendering. While it is very effective for many applications, forward rendering does not handle large numbers of light sources well. In a scene with many lights (e.g., street lights going down a highway), we might have to run a forward-rendering pass per light (or per fixed-size group of lights), resulting in substantial extra work. Furthermore, we may have to compute lighting for many fragments that are later overdrawn by other, closer, fragments. (Note: some hardware uses an early Z-buffer test to avoid running the fragment shader on fragments that will be discarded later, but this optimization only helps with fragments that are farther away than the ones already drawn.) Deferred rendering is a solution to this problem.

2 Deferred rendering

The idea of deferred shading (or deferred rendering) is to do one pass of geometry rendering, storing the resulting per-fragment geometric information in the G-buffer. A G-buffer is essentially a frame buffer with multiple attachments or Multiple Render Targets (MRT). These include

• fragment positions (i.e., world-space coordinates)
• the diffuse surface color without lighting
• the specular surface color and exponent without lighting
• the emissive surface color
• world-space surface normals
• depth values

We then use additional passes (typically one per light) to compute lighting information and to blend the results into the final render target. Figure 1 shows the G-buffer contents and final result for a G-buffer with three components (diffuse color, normal, and depth).

Figure 1: The G-buffer contents (diffuse-color buffer, depth buffer, and normal buffer) and the final result.

In the remainder of this section, we describe deferred rendering in more detail. You can also find additional information in Chapter 13 of the OpenGL SuperBible (pp. 613–624).

2.1 The Lighting Equation

Recall that the general form of the lighting equation for a surface S is

    I = L_a S_a + S_e + \sum_i (L_{d_i} S_d + L_{s_i} S_s)

where

• L_a is the ambient light
• S_a is the surface's ambient color (usually the same as S_d)
• S_e is the surface's emissive color
• S_d is the surface's diffuse color
• S_s is the surface's specular color
• L_{d_i} is the diffuse intensity of the ith light at the point being lit
• L_{s_i} is the specular intensity of the ith light at the point being lit, which is computed using the surface's specular exponent (a.k.a. Phong exponent)

The idea of deferred rendering is that we record sufficient information in the G-buffer that we can then do the lighting computations (i.e., the summation) in per-light passes.

2.2 The Geometry Pass

The first pass is called the geometry pass and consists of a vertex shader that transforms the vertices, normals, and texture coordinates as usual, but which also outputs the world-space coordinates. The geometry-pass fragment shader takes the interpolated outputs from the rasterizer and writes the various pieces of information, plus sampled texture values, into individual off-screen buffers. The collection of these buffers is called the G-buffer.

[Diagram: scene objects pass through the vertex shader, rasterizer, and fragment shader, which writes the G-buffer textures: diffuse, emissive, position, normal, and depth.]

Note that the geometry phase does not do any calculations involving lights, but it does do texture lookups to get the diffuse and specular colors. If normal mapping is being used, then it also computes the surface normal using the tangent-space normal map and then converts it to world space and writes that value to the normal buffer.
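To make the MRT structure concrete, here is a minimal sketch of allocating such a G-buffer as an OpenGL framebuffer object. The attachment formats, the GBuffer struct, and the helper names are illustrative assumptions, not the layout required by the sample code; a combined depth-stencil attachment is chosen so that the stencil optimization of Section 3.2 remains possible.

    #include <GL/glew.h>   // or whichever OpenGL loader the project uses
    #include <cassert>

    // Hypothetical G-buffer: one FBO with a color attachment per component
    // plus a combined depth-stencil texture.
    struct GBuffer {
        GLuint fbo;
        GLuint position, normal, diffuse, emissive, specular;  // color textures
        GLuint depthStencil;                                   // depth (+ stencil) texture
    };

    static GLuint allocTexture (GLenum internalFmt, GLenum fmt, GLenum ty, int wid, int ht)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, internalFmt, wid, ht, 0, fmt, ty, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        return tex;
    }

    GBuffer makeGBuffer (int wid, int ht)
    {
        GBuffer gb;
        glGenFramebuffers(1, &gb.fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, gb.fbo);
        // world-space positions and normals need more precision than 8 bits per channel
        gb.position = allocTexture(GL_RGBA16F, GL_RGBA, GL_FLOAT, wid, ht);
        gb.normal   = allocTexture(GL_RGBA16F, GL_RGBA, GL_FLOAT, wid, ht);
        gb.diffuse  = allocTexture(GL_RGBA8,   GL_RGBA, GL_UNSIGNED_BYTE, wid, ht);
        gb.emissive = allocTexture(GL_RGBA8,   GL_RGBA, GL_UNSIGNED_BYTE, wid, ht);
        // specular color in RGB; the specular exponent is packed into alpha
        gb.specular = allocTexture(GL_RGBA16F, GL_RGBA, GL_FLOAT, wid, ht);
        gb.depthStencil = allocTexture(GL_DEPTH24_STENCIL8, GL_DEPTH_STENCIL,
                                       GL_UNSIGNED_INT_24_8, wid, ht);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, gb.position, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, gb.normal,   0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, gb.diffuse,  0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT3, GL_TEXTURE_2D, gb.emissive, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT4, GL_TEXTURE_2D, gb.specular, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D,
                               gb.depthStencil, 0);
        // tell OpenGL which attachments the geometry-pass fragment shader writes
        GLenum bufs[5] = {
            GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2,
            GL_COLOR_ATTACHMENT3, GL_COLOR_ATTACHMENT4
        };
        glDrawBuffers(5, bufs);
        assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        return gb;
    }

The geometry-pass fragment shader then declares one output per color attachment (e.g., layout(location = 0) out vec4 gPosition in GLSL) and writes the position, normal, and sampled material colors to them.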
2.3 Lighting passes

The second phase of the technique is to run a rendering pass for each light in the scene. For a given light, we want to compute the lighting equation for any pixel in the frame buffer whose final color will be affected by the light. To iterate over this set of pixels, we render a triangle mesh that encapsulates the light volume. Since the light's volume is contained in the mesh, we know that any visible pixel affected by the light will be contained in the projection of the light to screen space.¹

The vertex shader for the lighting pass is trivial; it just transforms the vertex into clip-space coordinates. The rasterizer then generates fragments that provide the iteration structure for the fragment shader to compute lighting. We blend the output of each lighting pass into an accumulation buffer (called the final buffer).

¹In the case of directional lights, we render a quad that covers the whole screen.

[Diagram: each light object passes through its own vertex shader, rasterizer, and fragment shader, which reads the G-buffer and accumulates into the final buffer; a screen quad is then rendered through a vertex shader, rasterizer, and final fragment shader to produce the render buffer.]

At this point, we might also compute screen-space ambient occlusion in a separate pass (see Section 5.1 below).

2.4 Final blending

Once we have computed the lighting contributions for all of the lights, we need to combine the contents of the final buffer with the ambient and emissive lighting factors and then display the results on the screen. We do this by drawing a screen-sized quad (i.e., two triangles) and using the generated fragment coordinates to drive the blending of the lighting information in the final buffer with the ambient and emissive color information from the G-buffer.
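Putting the pieces together, the per-frame pass structure might look like the sketch below. The types, globals, and drawing helpers (Scene, Light, gBuffer, finalFBO, drawSceneObjects, drawLightVolume, drawScreenQuad, and the shader objects) are hypothetical stand-ins for whatever the project's application architecture provides.

    #include <vector>

    struct Shader { GLuint prog; };          // stand-in for the project's shader class
    struct Light { /* position, color, attenuation, ... */ };
    struct Scene { std::vector<Light> lights; /* geometry, materials, ... */ };

    extern GBuffer gBuffer;                  // from the allocation sketch above
    extern GLuint finalFBO;                  // accumulation target with one color attachment
    extern Shader geomShader, lightShader, blendShader;
    void drawSceneObjects (Scene const &scene, Shader const &sh);
    void drawLightVolume (Light const &light, Shader const &sh);
    void drawScreenQuad (Shader const &sh);
    void bindGBufferTextures (GBuffer const &gb);

    void renderFrame (Scene const &scene)
    {
        // 1. Geometry pass: fill the G-buffer; no lighting computations here.
        glBindFramebuffer(GL_FRAMEBUFFER, gBuffer.fbo);
        glEnable(GL_DEPTH_TEST);
        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawSceneObjects(scene, geomShader);

        // 2. Lighting passes: one pass per light, accumulated into the final buffer.
        glBindFramebuffer(GL_FRAMEBUFFER, finalFBO);
        glClear(GL_COLOR_BUFFER_BIT);
        glDisable(GL_DEPTH_TEST);             // Section 3.2 refines this with depth+stencil tests
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);          // additive: one term of the summation per pass
        bindGBufferTextures(gBuffer);
        for (auto const &light : scene.lights) {
            drawLightVolume(light, lightShader);  // sphere, cone, or screen quad
        }

        // 3. Final blending: combine the accumulated lighting with the ambient and
        //    emissive terms from the G-buffer by drawing a screen-sized quad.
        glBindFramebuffer(GL_FRAMEBUFFER, 0);     // the default framebuffer (the screen)
        glDisable(GL_BLEND);
        drawScreenQuad(blendShader);
    }

The essential state change between phases is the blend mode: additive blending during the lighting passes is what realizes the summation in the lighting equation of Section 2.1.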
2.5 An alternative approach

A slightly different approach is to accumulate the diffuse and specular intensities in separate buffers during the lighting passes. These values are then multiplied by the diffuse and specular surface colors when compositing the framebuffer in the final blending phase. This approach is used for High Dynamic Range (HDR) lighting, but requires significantly more bandwidth.

3 Implementation details

Given the rendering approach described in the previous section, there are a number of implementation details that we need to resolve.

3.1 Light volumes

The first implementation detail is determining the representation of the light volumes. A conservative approach would be to render the whole screen, but that would be wasteful. Instead, we want to generate a mesh that encloses any point in space that might be affected by a given light. Rendering this mesh will then produce the fragments that we need to consider for lighting.²

²Note that we can further speed things up by using view-frustum culling on light volumes to avoid rendering off-screen lights.

Figure 2: Rendering a light volume. The left side shows the view frustum from above, while the right side shows the color buffer. The yellow circle on the right is the region of screen-space fragments for which we have to perform a lighting calculation for the light.

The basic idea is to render point lights and spot lights as geometric objects representing the light volume (i.e., spheres and cones), where we use the light's attenuation factor to determine the size of the volume. In the case of point lights, we can determine the sphere's radius as follows. Let

    I_{max} = \max(I_r, I_g, I_b)        (the maximum per-channel intensity of the light)
    At(d) = \frac{1}{a d^2 + b d + c}    (the attenuation as a function of distance d)    (1)

Assuming that we have 8 bits per channel, we solve for the distance d at which the attenuated intensity drops below one representable intensity level (assuming a \neq 0):

    At(d) \, I_{max} = \frac{1}{255}
    255 I_{max} = a d^2 + b d + c
    0 = a d^2 + b d + (c - 255 I_{max})
    d = \frac{-b + \sqrt{b^2 - 4a(c - 255 I_{max})}}{2a}

For example, if a = 0.3, b = 0, and c = 0, then we get d ≈ 29, so a sphere of radius 29 can be used to represent the light's volume.

The lights in the project are spot lights, so they are represented as cones (the sample code includes code for generating a cone mesh that can represent a spot-light volume). The attenuation calculation from above determines the height of the cone.

Once we have the light-volume mesh, we can render it as illustrated in Figure 2. In this figure, we would compute lighting information for all of the fragments in the yellow region of the screen. Although we would compute lighting for the green box, since it is too far away from the light, the resulting lighting contribution would be zero. Only the region covered by the blue object would have non-zero lighting.

3.2 Avoiding unnecessary lighting computations

Rendering light volumes reduces the number of fragments for which we must compute lighting information, but it can still result in unnecessary lighting computations. For example, the green object in Figure 2 is outside the light volume, but we would still spend time computing lighting for it. We can use a technique similar to stencil shadow volumes to further reduce the lighting work. The basic idea is to do two passes for the light. The first puts non-zero values into the stencil buffer for those fragments whose world-space position is inside the light volume. The second pass uses this stencil information to discard fragments before lighting. Specifically, we do the following for each point light or spot light in the scene:³

1. Clear the stencil buffer.
2. Enable the stencil test (GL_ALWAYS) and the depth test (GL_LESS) against the depth buffer computed in the geometry pass.
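As a point of reference for the steps above, here is one common way (in the spirit of stencil shadow volumes) that the two passes for a single light can be set up in OpenGL, reusing the hypothetical helpers from the earlier sketches. The specific stencil-op choices are an assumption, not necessarily the exact recipe that the remaining steps of this handout prescribe.

    extern Shader nullShader;  // minimal pass-through shader (hypothetical)

    // Hypothetical two-pass handling of one light volume. Assumes the final
    // buffer's FBO shares the depth-stencil attachment written by the geometry pass.
    void renderOneLight (Light const &light)
    {
        // Pass 1: mark fragments whose geometry lies inside the light volume.
        glEnable(GL_STENCIL_TEST);
        glClear(GL_STENCIL_BUFFER_BIT);                      // step 1: clear the stencil buffer
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LESS);                                // step 2: test against geometry-pass depth
        glDepthMask(GL_FALSE);                               // depth is read-only here
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // no color output in this pass
        glDisable(GL_CULL_FACE);                             // rasterize both faces of the volume
        glStencilFunc(GL_ALWAYS, 0, 0);                      // step 2: stencil test always passes
        // Shadow-volume-style counting: on depth-test failure, back faces increment
        // and front faces decrement, leaving non-zero stencil values exactly where
        // scene geometry is inside the volume.
        glStencilOpSeparate(GL_BACK,  GL_KEEP, GL_INCR_WRAP, GL_KEEP);
        glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP);
        drawLightVolume(light, nullShader);                  // depth/stencil only; no shading

        // Pass 2: compute lighting only where the stencil value is non-zero.
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glStencilFunc(GL_NOTEQUAL, 0, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        glDisable(GL_DEPTH_TEST);
        glEnable(GL_CULL_FACE);
        glCullFace(GL_FRONT);        // draw back faces so this works with the eye inside the volume
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE); // accumulate this light's contribution
        drawLightVolume(light, lightShader);
        glCullFace(GL_BACK);
        glDisable(GL_STENCIL_TEST);
    }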