Geometry-Aware Framebuffer Level of Detail
Eurographics Symposium on Rendering 2008, Volume 27 (2008), Number 4
Steve Marschner and Michael Wimmer (Guest Editors)

Lei Yang (1), Pedro V. Sander (1), Jason Lawrence (2)
(1) Hong Kong University of Science and Technology  (2) University of Virginia

Abstract

This paper introduces a framebuffer level of detail algorithm for controlling the pixel workload in an interactive rendering application. Our basic strategy is to evaluate the shading in a low resolution buffer and, in a second rendering pass, resample this buffer at the desired screen resolution. The size of the lower resolution buffer provides a trade-off between rendering time and the level of detail in the final shading. In order to reduce approximation error we use a feature-preserving reconstruction technique that more faithfully approximates the shading near depth and normal discontinuities. We also demonstrate how intermediate components of the shading can be selectively resized to provide finer-grained control over resource allocation. Finally, we introduce a simple control mechanism that continuously adjusts the amount of resizing necessary to maintain a target framerate. These techniques do not require any preprocessing, are straightforward to implement on modern GPUs, and are shown to provide significant performance gains for several pixel-bound scenes.

Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Picture/Image Generation

1. Introduction and Related Work

Programmable graphics hardware has brought with it a steady increase in the complexity of real-time rendering applications, often expressed as a greater number of arithmetic operations and texture accesses required to shade each pixel. In response, researchers have begun investigating fully general and automatic methods for reducing the pixel processing workload at an acceptable amount of approximation error. These include techniques for automatically simplifying shader programs [OKS03, Pel05] and for data reuse [ZWL05, DWWL05, NSL∗07], which exploit the spatio-temporal coherence in animated image sequences by reusing data generated in previously rendered frames to accelerate the calculation of the shading in the current frame. A related approach is to exploit the spatial coherence within just a single frame by simply rendering the scene to a lower resolution buffer that is then upsampled to the desired screen resolution in a second rendering pass. Introduced as dynamic video resizing in the InfiniteReality rendering system [MBDM97], this simple technique is effective for controlling the pixel workload, but suffers from noticeable sampling artifacts near silhouettes and other discontinuities in the shading (Figure 2).

This paper extends conventional dynamic resizing in three key ways to provide a framebuffer level of detail (LOD) system. First, following Laine et al. [LSK∗07] and Sloan et al. [SGNS07], we apply the joint bilateral filter [TM98, KCLU07] to consider differences in scene depth and surface orientation during framebuffer upsampling. Second, we extend traditional resizing to allow computing partial components of the shading at different resolutions. This allows, for example, evaluating a detailed texture map at the full framebuffer resolution while still reaping the performance benefits of resizing a more computationally intensive intermediate calculation. Third, we present a feedback control mechanism that automatically determines the degree of resizing necessary to achieve a target framerate. This approach is analogous to geometry LOD methods [LWC∗02] in that it controls one aspect of the rendering workload by reducing the number of pixels that are processed. We evaluate these techniques using several pixel-bound test scenes to demonstrate their benefits over conventional resizing.

Our approach bears a number of similarities to interleaved sampling techniques [SIMP06, LSK∗07], which reconstruct a smooth approximation of some component of the shading (e.g., indirect illumination) from a noisy estimate. As this noisy approximation is smoothed to produce the final frame, care must be taken to avoid integrating across discontinuities in scene geometry. The method proposed by Laine et al. [LSK∗07] uses a bilateral filter similar to the one we apply here. However, interleaved sampling reduces the average number of samples processed in the shader at every pixel (e.g., virtual point light sources accumulated [LSK∗07]), whereas dynamic resizing reduces the number of expensive shader invocations within a single frame. In this respect, they are complementary approaches for addressing the same problem of reducing shading costs.

The benefit of selectively resizing an expensive component of the shading has been demonstrated in interactive systems that simulate certain aspects of global illumination [SGNS07, HPB06]. In fact, Sloan et al. [SGNS07] also apply a reconstruction filter similar to ours. However, these systems have all focused on optimizing a specific rendering task in which resizing some partial value brings a profitable speed/error trade-off. This paper focuses on the role geometry-aware resizing plays within a general strategy for regulating shading costs, and we present a number of different effects for which resizing has clear benefits. Finally, a number of hybrid CPU/GPU cache-based interactive rendering systems have also explored edge-aware framebuffer reconstruction [BWG03, VALBW06]. Although these methods may apply in this context as well, our emphasis is on providing a lightweight method that runs entirely on the GPU and can be easily incorporated into existing real-time systems.
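Before turning to the pipeline itself, the following is a minimal sketch of the flavor of feedback control mentioned above: a proportional update that nudges the resizing factor toward a target frame time. The update rule, gain, and clamping bounds are placeholder choices for illustration only; the actual control mechanism is presented later in the paper.

```cpp
#include <algorithm>

// Illustrative proportional update of the resizing factor r based on the
// measured frame time. The gain and the clamping bounds are placeholder
// values chosen for this sketch; they are not taken from the paper.
float updateResizeFactor(float r, float frameTimeMs, float targetFrameTimeMs)
{
    const float gain = 0.1f;                              // reaction strength
    float error = frameTimeMs / targetFrameTimeMs - 1.0f; // > 0 when too slow
    r *= 1.0f + gain * error;                             // enlarge r when over budget
    return std::clamp(r, 1.0f, 4.0f);                     // r = 1 disables resizing
}
```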
2. Geometry-Aware Framebuffer Resizing

In our approach each frame is rendered in two passes. In the first pass, the scene geometry is rendered to a low resolution buffer L which stores the color, depth, and surface normal at each pixel. In the second pass, the scene geometry is rendered to the framebuffer H at the desired screen resolution using a pixel shader that reconstructs the shading from L. The resizing factor r is equal to the ratio of the width of H relative to that of L; resizing the framebuffer by a factor of r = 2 will thus reduce the pixel load by roughly a factor of 4. We next describe each of these passes in detail.

2.1. Rendering L

The original pixel shader is used to render the scene to the low resolution buffer L. In addition to surface color, we also store the depth and surface normal at each pixel (this is similar to the G-buffer used in deferred shading [DWS∗88, ST90]). This information allows the second pass to consider discontinuities in the scene geometry when reconstructing the shading. Our current implementation packs this data into a single 16-bit RGBA render target (8-bit RGBA color packed on RG, 16-bit depth on B, and 8 bits for each of the normal's x- and y-components on A).

2.2. Rendering H

In the second pass, a pixel shader reconstructs the shading at each pixel of H from the values stored in L. A straightforward resampling of L (e.g., bilinear interpolation) blends shading values across depth and orientation discontinuities, blurring silhouettes and other sharp features of the shading. We use a geometry-aware reconstruction filter inspired by the bilateral filter [TM98] to reduce these artifacts. The basic idea is to modify the reconstruction kernel at each pixel to avoid integrating across shading data associated with different regions in the scene. Specifically, we weight the contribution of each pixel in L toward the reconstruction of each pixel in H based not only on their distance in the image plane, but also on the difference between their respective scene depths and surface normals. At each rendered pixel x_i in H, a pixel shader first computes the surface normal n_i^H and scene depth z_i^H and reconstructs the surface color c_i^H according to

    c_i^H = \frac{\sum_j c_j^L \, f(\hat{x}_i, x_j) \, g(|n_i^H - n_j^L|, \sigma_n) \, g(|z_i^H - z_j^L|, \sigma_z)}{\sum_j f(\hat{x}_i, x_j) \, g(|n_i^H - n_j^L|, \sigma_n) \, g(|z_i^H - z_j^L|, \sigma_z)},    (1)

where c_j^L, n_j^L, and z_j^L are the respective counterparts in L and \hat{x}_i is the position of x_i after having been downsampled in each dimension by r. Note that the normals are defined in screen space, so the vector n = (0, 0, 1) points toward the viewer. This sum is carried out over all the pixels in L. The domain filter f weights the contribution of each pixel in L according to its distance in the image plane. Instead of modeling this as a Gaussian, as is commonly done with the bilateral filter, we instead use a simple tent function which restricts this sum to only the 2 × 2 grid of nearest pixels. We found this gave acceptable results at a fraction of the compute overhead. The range filters g(·, σ_n) and g(·, σ_z) are modeled as Gaussians and weight each pixel based on differences in surface normals and scene depths, respectively.
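For concreteness, the following is a minimal CPU-side reference sketch of this reconstruction. The paper evaluates Equation 1 in a pixel shader over packed G-buffer data; here the LSample layout, the half-pixel alignment of x̂_i, and the fallback when all weights vanish are assumptions made for illustration, and the foreshortening correction for normals described below is omitted.

```cpp
#include <algorithm>
#include <cmath>

// One sample of the low resolution buffer L: shaded color plus the scene
// depth and screen-space normal used by the range filters.
struct LSample {
    float r, g, b;     // shaded color
    float nx, ny, nz;  // screen-space normal
    float z;           // scene depth
};

// Gaussian range filter g(d, sigma).
static float rangeWeight(float d, float sigma)
{
    return std::exp(-(d * d) / (2.0f * sigma * sigma));
}

// Plain Euclidean distance between two screen-space normals; the
// foreshortening correction discussed below is omitted here for brevity.
static float normalDistance(const LSample& s, float nx, float ny, float nz)
{
    float dx = s.nx - nx, dy = s.ny - ny, dz = s.nz - nz;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Reconstruct the color of one pixel of H (Equation 1). (hx, hy) is the pixel
// position in H, rFactor the resizing factor, and (nx, ny, nz), depth the
// normal and depth rendered at that pixel in the second pass.
void reconstructPixel(float hx, float hy, float rFactor,
                      const LSample* L, int lw, int lh,
                      float nx, float ny, float nz, float depth,
                      float sigmaN, float sigmaZ, float outColor[3])
{
    // x-hat: the pixel position mapped into the coordinates of L
    // (pixel-center alignment is an assumption of this sketch).
    float lx = (hx + 0.5f) / rFactor - 0.5f;
    float ly = (hy + 0.5f) / rFactor - 0.5f;
    int x0 = static_cast<int>(std::floor(lx));
    int y0 = static_cast<int>(std::floor(ly));

    float sum[3] = {0.0f, 0.0f, 0.0f};
    float sumW = 0.0f;

    // Tent domain filter f restricted to the 2 x 2 grid of nearest pixels in L,
    // multiplied by the Gaussian range filters on normal and depth differences.
    for (int j = 0; j <= 1; ++j) {
        for (int i = 0; i <= 1; ++i) {
            int xi = std::clamp(x0 + i, 0, lw - 1);
            int yi = std::clamp(y0 + j, 0, lh - 1);
            const LSample& s = L[yi * lw + xi];

            float fx = 1.0f - std::fabs(lx - static_cast<float>(x0 + i));
            float fy = 1.0f - std::fabs(ly - static_cast<float>(y0 + j));
            float w = fx * fy
                    * rangeWeight(normalDistance(s, nx, ny, nz), sigmaN)
                    * rangeWeight(std::fabs(depth - s.z), sigmaZ);

            sum[0] += w * s.r; sum[1] += w * s.g; sum[2] += w * s.b;
            sumW += w;
        }
    }

    // Guard against all weights vanishing (an implementation choice of this
    // sketch): fall back to the nearest sample in L.
    if (sumW > 1e-6f) {
        for (int k = 0; k < 3; ++k) outColor[k] = sum[k] / sumW;
    } else {
        const LSample& s = L[std::clamp(y0, 0, lh - 1) * lw + std::clamp(x0, 0, lw - 1)];
        outColor[0] = s.r; outColor[1] = s.g; outColor[2] = s.b;
    }
}
```

Because the tent weights reproduce bilinear interpolation when the range terms are constant, the filter reduces to conventional upsampling in smooth regions and only deviates from it near geometric discontinuities.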
A similar generalization of the bilateral filter was proposed by Toler-Franklin et al. [TFFR07] for manipulating natural images augmented with per-pixel surface normals. They note the importance of accounting for foreshortening when computing the distance between screen-space normals. Following that work, we transform each normal vector (n_x, n_y, n_z) by its z-component, giving (n_x/n_z, n_y/n_z, 1), before computing the Euclidean distance.

The standard deviations σ_n and σ_z are important parameters that determine the extent to which shading information is shared between surface regions with different depths and orientations. Figure 1 illustrates the effect of these parameters at different resizing factors. Smaller values of σ_z limit the amount of blending across depth boundaries, as can be seen along the silhouette edges of the car. Smaller values of σ_n limit the amount of blending across orientation boundaries, like where the rear-view mirror meets the car's body.
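As a small illustration of the foreshortening-corrected normal term described above, the range weight can be computed as follows; the epsilon guard against normals nearly perpendicular to the view direction is an assumption of this sketch rather than a detail given in the paper.

```cpp
#include <algorithm>
#include <cmath>

// Foreshortening-aware normal weight: both screen-space normals are mapped to
// (nx/nz, ny/nz, 1) before the Euclidean distance is fed to the Gaussian
// range filter g(., sigma_n). The epsilon guard for near-zero nz is an
// assumption of this sketch.
float normalRangeWeight(const float nH[3], const float nL[3], float sigmaN)
{
    const float eps = 1e-4f;
    float hx = nH[0] / std::max(nH[2], eps);
    float hy = nH[1] / std::max(nH[2], eps);
    float lx = nL[0] / std::max(nL[2], eps);
    float ly = nL[1] / std::max(nL[2], eps);
    float d2 = (hx - lx) * (hx - lx) + (hy - ly) * (hy - ly);
    return std::exp(-d2 / (2.0f * sigmaN * sigmaN));
}
```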