CMSC 23700 Introduction to Computer Graphics
Project 4, Autumn 2017
November 2, 2017

Deferred Rendering
Due: Wednesday November 15 at 10pm

1 Summary

This assignment uses the same application architecture as the earlier projects, but it replaces the forward-rendering approach with deferred rendering, which means that the rendering engine must be rewritten.

The rendering technique that you implemented in the previous projects is sometimes called forward rendering. While it is very effective for many applications, forward rendering does not handle large numbers of light sources well. In a scene with many lights (e.g., street lights going down a highway), we might have to run a forward-rendering pass per light (or per fixed-size group of lights), resulting in substantial extra work. Furthermore, we may have to compute lighting for many fragments that are later overdrawn by other, closer fragments. (Note: some hardware uses an early Z-buffer test to avoid running the fragment shader on fragments that will be discarded later, but this optimization only helps with fragments that are farther away than the ones already drawn.) Deferred rendering is a solution to this problem.

2 Deferred rendering

The idea of deferred shading (or deferred rendering) is to do one pass of geometry rendering, storing the resulting per-fragment geometric information in the G-buffer. A G-buffer is essentially a frame buffer with multiple attachments, or Multiple Render Targets (MRT). These include

• fragment positions (i.e., world-space coordinates)
• the diffuse surface color without lighting
• the specular surface color and exponent without lighting
• the emissive surface color
• world-space surface normals
• depth values

We then use additional passes (typically one per light) to compute lighting information and to blend the results into the final render target. Figure 1 shows the G-buffer contents and final result for a G-buffer with three components (diffuse color, normal, and depth).

[Figure 1: The G-buffer contents and final result (panels: diffuse-color buffer, depth buffer, normal buffer, final result).]

In the remainder of this section, we describe deferred rendering in more detail. You can also find additional information in Chapter 13 of the OpenGL SuperBible (pp. 613–624).

2.1 The Lighting Equation

Recall that the general form of the lighting equation for a surface S is

    I = L_a S_a + S_e + \sum_i (L_{d_i} S_d + L_{s_i} S_s)

where

• L_a is the ambient light
• S_a is the surface's ambient color (usually the same as S_d)
• S_e is the surface's emissive color
• S_d is the surface's diffuse color
• S_s is the surface's specular color
• L_{d_i} is the diffuse intensity of the ith light at the point being lit
• L_{s_i} is the specular intensity of the ith light at the point being lit, which is computed using the surface's specular exponent (a.k.a. Phong exponent).

The idea of deferred rendering is that we record sufficient information in the G-buffer that we can then do the lighting computations (i.e., the summation) in per-light passes.

2.2 The Geometry Pass

The first pass is called the geometry pass and consists of a vertex shader that transforms the vertices, normals, and texture coordinates as usual, but which also outputs the world-space coordinates. The geometry-pass fragment shader takes the interpolated outputs from the rasterizer and writes the various pieces of information, plus sampled texture values, into individual off-screen buffers. The collection of these buffers is called the G-buffer.
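To make the geometry pass concrete, the following is a minimal sketch (C++ with OpenGL 4.x, assuming a context and extension loader are already set up) of how a G-buffer with multiple render targets might be created. The function name, attachment formats, and attachment order are illustrative assumptions, not part of the project code.

    #include <cassert>
    // assumes an OpenGL loader header (e.g., glad or GLEW) is included

    // Create a G-buffer: four color attachments (world-space position,
    // world-space normal, diffuse color, specular color + exponent) and
    // a combined depth-stencil attachment that the lighting passes can
    // test against later.  The formats chosen here are assumptions.
    GLuint createGBuffer (int wid, int ht, GLuint colorTexs[4], GLuint *depthTex)
    {
        GLuint fbo;
        glGenFramebuffers (1, &fbo);
        glBindFramebuffer (GL_FRAMEBUFFER, fbo);

        const GLenum fmts[4] = { GL_RGBA16F, GL_RGBA16F, GL_RGBA8, GL_RGBA8 };
        GLenum drawBufs[4];
        for (int i = 0;  i < 4;  i++) {
            glGenTextures (1, &colorTexs[i]);
            glBindTexture (GL_TEXTURE_2D, colorTexs[i]);
            glTexStorage2D (GL_TEXTURE_2D, 1, fmts[i], wid, ht);
            glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glFramebufferTexture2D (GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                GL_TEXTURE_2D, colorTexs[i], 0);
            drawBufs[i] = GL_COLOR_ATTACHMENT0 + i;
        }
        glDrawBuffers (4, drawBufs);   // route fragment-shader outputs 0..3

        glGenTextures (1, depthTex);   // depth (+ stencil for Section 3.2)
        glBindTexture (GL_TEXTURE_2D, *depthTex);
        glTexStorage2D (GL_TEXTURE_2D, 1, GL_DEPTH24_STENCIL8, wid, ht);
        glFramebufferTexture2D (GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
            GL_TEXTURE_2D, *depthTex, 0);

        assert (glCheckFramebufferStatus (GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
        glBindFramebuffer (GL_FRAMEBUFFER, 0);
        return fbo;
    }

The geometry-pass fragment shader then declares one output per color attachment (e.g., layout(location = 0) out vec4 gPosition for the first target, a hypothetical name); the emissive color can either get its own attachment or be packed into spare channels.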
[Figure: the geometry pass. Scene objects pass through the vertex shader, rasterizer, and fragment shader to produce the G-buffer textures: diffuse, emissive, position, normal, and depth.]

Note that the geometry phase does not do any calculations involving lights, but it does do texture lookups to get the diffuse and specular colors. If normal mapping is being used, then it also computes the surface normal using the tangent-space normal map, converts it to world space, and writes that value to the normal buffer.

2.3 Lighting passes

The second phase of the technique is to run a rendering pass for each light in the scene. For a given light, we want to compute the lighting equation for any pixel in the frame buffer whose final color will be affected by the light. To iterate over this set of pixels, we render a triangle mesh that encapsulates the light volume (in the case of directional lights, we render a quad that covers the whole screen). Since the light's volume is contained in the mesh, we know that any visible pixel affected by the light will be contained in the projection of the light to screen space. The vertex shader for the lighting pass is trivial; it just transforms the vertex into clip-space coordinates. The rasterizer then generates fragments that provide the iteration structure for the fragment shader to compute lighting. We blend the output of each lighting pass into an accumulation buffer (called the final buffer).

[Figure: the lighting passes. Each light object passes through its own vertex shader, rasterizer, and fragment shader, reading from the G-buffer and blending into the final buffer; a screen quad then drives a final fragment shader that writes to the render buffer.]

At this point, we might also compute screen-space ambient occlusion in a separate pass (see Section 5.1 below).

2.4 Final blending

Once we have computed the lighting contributions for all of the lights, we need to combine the contents of the final buffer with the ambient and emissive lighting factors and then display the results on the screen. We do this by drawing a screen-sized quad (i.e., two triangles) and using the generated fragment coordinates to drive the blending of the lighting information in the final buffer with the ambient and emissive color information from the G-buffer.

2.5 An alternative approach

A slightly different approach is to accumulate the diffuse and specular intensities in separate buffers during the lighting passes. These values are then multiplied by the diffuse and specular surface colors when compositing the framebuffer in the final blending phase. This approach is used for High Dynamic Range (HDR) lighting, but requires significantly more bandwidth.

3 Implementation details

Given the rendering approach described in the previous section, there are a number of implementation details that we need to resolve.

3.1 Light volumes

The first implementation detail is determining the representation of the light volumes. A conservative approach would be to render the whole screen, but that would be wasteful. Instead, we want to generate a mesh that encloses any point in space that might be affected by a given light. Rendering this mesh will then produce the fragments that we need to consider for lighting. (Note that we can further speed things up by using view-frustum culling on light volumes to avoid rendering off-screen lights.)

[Figure 2: Rendering a light volume. The left side shows the view frustum from above (eye, light, and objects); the right side shows the color buffer. The yellow circle on the right is the region of screen-space fragments for which we have to perform a lighting calculation for the light.]

The basic idea is to render point lights and spot lights as geometric objects representing the light volume (i.e., spheres and cones), where we use the light's attenuation factor to determine the size of the volume. In the case of point lights, we can determine the sphere's radius as follows. Let

    I_max = max(I_r, I_g, I_b)       (maximum per-channel intensity for the light)
    At(d) = 1 / (a d^2 + b d + c)    (attenuation as a function of distance d)    (1)

Assuming that we have 8 bits per channel, we solve for d (assuming a ≠ 0):

    At(d) I_max = 1/255
    255 I_max = a d^2 + b d + c
    0 = a d^2 + b d + (c - 255 I_max)
    d = (-b + sqrt(b^2 - 4a(c - 255 I_max))) / (2a)

For example, if a = 0.3, b = 0, and c = 0, then we get d ≈ 29, so a sphere of radius 29 can be used to represent the light's volume.
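As a sanity check on this derivation, here is a small C++ sketch of the radius computation; the struct and function names are hypothetical, not from the sample code.

    #include <algorithm>
    #include <cmath>

    // Hypothetical description of a point light: per-channel intensity
    // plus the attenuation coefficients a, b, c from Equation (1).
    struct PointLight {
        float ir, ig, ib;   // per-channel light intensity
        float a, b, c;      // At(d) = 1 / (a*d^2 + b*d + c)
    };

    // Distance d at which At(d) * Imax falls to 1/255, i.e., below one
    // 8-bit step; assumes a != 0 so the quadratic formula applies.
    float lightVolumeRadius (const PointLight &light)
    {
        float iMax = std::max (light.ir, std::max (light.ig, light.ib));
        float disc = light.b * light.b
            - 4.0f * light.a * (light.c - 255.0f * iMax);
        return (-light.b + std::sqrt (disc)) / (2.0f * light.a);
    }

For the example above (a = 0.3, b = 0, c = 0, and I_max = 1), this returns sqrt(255/0.3) ≈ 29.2, matching the radius computed by hand.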
The lights in the project are spot lights, so they are represented as cones (the sample code includes code for generating a cone mesh that can represent a spot-light volume). The attenuation calculation from above determines the height of the cone.

Once we have the light-volume mesh, we can render it as illustrated in Figure 2. In this figure, we would compute lighting information for all of the fragments in the yellow region of the screen. Although we would compute lighting for the green box, the resulting lighting contribution would be zero, since the box is too far away from the light. Only the region covered by the blue object would have non-zero lighting.

3.2 Avoiding unnecessary lighting computations

Rendering light volumes reduces the number of fragments for which we must compute lighting information, but it can still result in unnecessary lighting computations. For example, the green object in Figure 2 is outside the light volume, but we would still spend time computing lighting for it. We can use a technique similar to stencil shadow volumes to further reduce the lighting work. The basic idea is to do two passes for the light. The first puts non-zero values into the stencil buffer for those fragments whose world-space position is inside the light volume. The second pass uses this stencil information to discard fragments before lighting. Specifically, we do the following for each point light or spot light in the scene:

1. Clear the stencil buffer.
2. Enable the stencil test (GL_ALWAYS) and the depth test (GL_LESS) against the depth buffer computed in the geometry pass (see the sketch after this list).
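The following sketches the OpenGL state for these two steps. It assumes the G-buffer's depth-stencil attachment is bound as part of the current framebuffer; disabling depth writes is a typical choice for stencil light-volume passes rather than something specified in the steps above.

    // Step 1: clear the stencil buffer.
    glClearStencil (0);
    glClear (GL_STENCIL_BUFFER_BIT);

    // Step 2: the stencil test always passes, while the depth test
    // (GL_LESS) runs against the depth values written by the geometry
    // pass.  Depth writes are disabled so that rendering the light
    // volume does not overwrite the G-buffer depth (an assumption).
    glEnable (GL_STENCIL_TEST);
    glStencilFunc (GL_ALWAYS, 0, 0xff);
    glEnable (GL_DEPTH_TEST);
    glDepthFunc (GL_LESS);
    glDepthMask (GL_FALSE);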