Accelerated Stereo Rendering with Hybrid Reprojection-Based Rasterization and Adaptive Ray-Tracing

2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)
DOI: 10.1109/VR46266.2020.00009

Niko Wißmann*† (TH Köln, Computer Graphics Group), Martin Mišiak*‡ (TH Köln, Computer Graphics Group; University Würzburg, HCI Group), Arnulph Fuhrmann§ (TH Köln, Computer Graphics Group), Marc Erich Latoschik¶ (University Würzburg, HCI Group)

*Authors contributed equally to this work. †e-mail: [email protected] ‡e-mail: [email protected] §e-mail: [email protected] ¶e-mail: [email protected]

Figure 1: The proposed hybrid rendering system reprojects the source image of the left viewpoint into the right viewpoint by using an adaptive 3D grid warping approach (left), detects disoccluded regions (middle, marked in green), and resolves them correctly through adaptive real-time ray-tracing (right).

ABSTRACT

Stereoscopic rendering is a prominent feature of virtual reality applications to generate depth cues and to provide depth perception in the virtual world. However, straightforward stereo rendering methods are usually expensive, since they render the scene from two eye points, which in general doubles the frame times. This is particularly problematic since virtual reality sets high requirements for real-time capabilities and image resolution. Hence, this paper presents a hybrid rendering system that combines classic rasterization and real-time ray-tracing to accelerate stereoscopic rendering. The system reprojects the pre-rendered left half of the stereo image pair into the right perspective using a forward grid warping technique and identifies the resulting reprojection errors, which are then efficiently resolved by adaptive real-time ray-tracing. A final analysis shows that the system achieves a significant performance gain, has a negligible quality impact, and is suitable even for higher rendering resolutions.

Index Terms: Computing methodologies—Computer Graphics—Rendering—Ray tracing; Computing methodologies—Computer Graphics—Rendering—Rasterization; Computing methodologies—Computer Graphics—Graphics systems and interfaces—Virtual reality

1 INTRODUCTION

As recent years have shown, the Virtual Reality (VR) community continues to grow steadily. More and more hardware manufacturers are launching new head-mounted displays (HMDs) on the market. This includes mobile systems such as the Oculus Go or the more advanced Oculus Quest, but classic desktop systems like the HTC Vive Pro will also remain on the market. Nevertheless, all systems have one characteristic in common: they require stereoscopic rendering with a high spatial and temporal resolution. Despite continuous improvements in the underlying GPU hardware, the increasing rendering requirements continue to pose a challenge for the latest VR devices. Not only increasing resolutions and refresh rates but also increasing demands on visualization fidelity can have a considerable impact on rendering performance. A high degree of immersion is essential for every VR application. A low frame rate or a high system latency can have a negative impact on it and can lead to a reduced sense of presence [29], or even cause motion sickness. That is why the underlying rendering of a VR application has to meet the given requirements.

Optimization methods for stereoscopic rendering, frequently used in popular game engines, often target only the CPU-side application loop or improve geometry processing to a certain extent, for example, single-pass stereo rendering using geometry duplication in a geometry shader or instanced API draw calls. But these optimization options do not reduce the workload in the fragment shaders: all pixels of both output frames of the stereo image pair still have to be shaded entirely in the fragment shaders.

This paper presents a rendering system that accelerates the rendering process of one image of the stereo pair using spatial reprojection. The reprojection technique is integrated into a classic deferred rendering pipeline, which generates the left reference image via regular rasterization and fragment shading. This is followed by a forward reprojection step using an adaptive 3D grid warping approach [16, 34] to reproject the pixel information of the left image into the right viewpoint. Reprojection errors and the resulting visual artifacts are correctly resolved via an adaptive real-time ray-tracing hole-filling technique. To this end, the DirectX Raytracing (DXR) API is used to perform hardware-accelerated ray-tracing on supported hardware, hence eliminating the typical bottleneck that arises from performing compute-intensive ray-tracing operations in parallel to rasterization. The resulting image (cf. Fig. 1) correctly captures disocclusions and is very close to a reference rendering.

In our paper we make the following contributions:

• A hybrid reprojection system that combines rasterized rendering with adaptive ray-tracing to accelerate stereoscopic rendering
• A perceptually motivated heuristic for an adaptive grid subdivision used during the 3D grid warping stage
• A comprehensive performance analysis of said system, including a comparison between hardware-accelerated ray-tracing and rasterization for the shading of disoccluded regions
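The following minimal Python/NumPy sketch illustrates the overall idea described above: the left-eye render is forward-reprojected into the right eye using its depth buffer, and every right-eye pixel that receives no data is recorded in a disocclusion mask for a subsequent hole-filling pass. The function names, per-pixel splatting loop, and matrix conventions are assumptions made for this sketch only; the actual system warps a connected adaptive 3D grid on the GPU and shades the masked pixels with hardware-accelerated ray-tracing (DXR).

```python
# Minimal sketch (not the authors' implementation): forward reprojection of a
# left-eye render into the right eye, plus a disocclusion mask that a ray-traced
# hole-filling pass would resolve. For brevity, pixels are splatted individually
# instead of being warped as a connected 3D grid.
import numpy as np

def unproject(depth, inv_proj_view, width, height):
    """Reconstruct world-space positions from an NDC depth buffer (hypothetical helper)."""
    ys, xs = np.mgrid[0:height, 0:width]
    ndc = np.stack([(xs + 0.5) / width * 2 - 1,
                    1 - (ys + 0.5) / height * 2,
                    depth,                       # depth assumed to be NDC depth
                    np.ones_like(depth)], axis=-1)
    world = ndc @ inv_proj_view.T
    return world[..., :3] / world[..., 3:4]

def reproject_left_to_right(color_l, depth_l, inv_pv_left, pv_right, width, height):
    world = unproject(depth_l, inv_pv_left, width, height)
    clip = np.concatenate([world, np.ones_like(world[..., :1])], axis=-1) @ pv_right.T
    ndc = clip[..., :3] / clip[..., 3:4]
    px = ((ndc[..., 0] + 1) * 0.5 * width).astype(int)
    py = ((1 - ndc[..., 1]) * 0.5 * height).astype(int)

    color_r = np.zeros_like(color_l)
    depth_r = np.full((height, width), np.inf)
    covered = np.zeros((height, width), dtype=bool)
    inside = (clip[..., 3] > 0) & (px >= 0) & (px < width) & (py >= 0) & (py < height)
    for y, x in zip(*np.nonzero(inside)):          # simple z-buffered splat
        tx, ty = px[y, x], py[y, x]
        if ndc[y, x, 2] < depth_r[ty, tx]:
            depth_r[ty, tx] = ndc[y, x, 2]
            color_r[ty, tx] = color_l[y, x]
            covered[ty, tx] = True

    holes = ~covered                               # disoccluded pixels
    # In the paper's system these pixels are shaded by adaptive real-time
    # ray-tracing (DXR); here we only return the mask.
    return color_r, holes
```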
Figure 2: Overview of the hybrid rendering system, consisting of a standard deferred rasterization pass for the left viewpoint and a subsequent reprojection pass with ray-traced hole-filling for the right viewpoint.

2 RELATED WORK

Increasing display resolutions and higher frame rates do not necessarily imply an equal increase in the required computational effort to render a frame. Reprojection algorithms leverage existing spatial and temporal coherences within a scene to reuse previously computed values. A large body of work exists for these algorithms, extending over various fields such as view interpolation of video footage, acceleration of preview image rendering for offline renderers, and performance optimization for real-time rendering. Motivated by our use case of stereoscopic rendering, we focus primarily on reviewing reprojection algorithms aimed at real-time rendering.

2.1 Reprojection

Graphics rendering was accelerated via reprojection very early [1–…]. […] A warping grid can be constructed whose vertices mirror the depth values of the pixels. Such a warping grid can then be transformed via a vertex shader into another perspective [15], where rasterization and shading take place. This approach does not suffer from the aforementioned pixel holes. Instead, foreground and background objects are connected by so-called "rubber-sheets". A native-resolution warping grid (a one-to-one mapping between vertices and pixels) is not necessary, as adjacent pixels with similar attributes can be warped together as bigger patches into the target perspective [12]. Didyk et al. [16] subdivide only grid cells which cover pixels of a higher depth disparity. This adaptive subdivision approach results in fewer grid triangles and hence in better warping performance without compromising quality. Schollmeyer et al. [34] subdivide the warping grid based on a colinearity measure, which prevents over-tessellation of slanted surfaces and reduces the number of grid triangles even further. In addition, the authors also handle the reprojection of transparent geometry via ray-casting into an A-buffer.
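As a concrete illustration of the depth-disparity-driven subdivision just described, the sketch below recursively splits a grid cell whenever the depth range it covers exceeds a threshold, while coherent cells are kept as single warp patches. The threshold, power-of-two cell layout, and recursion limit are assumptions for this example; it does not reproduce the perceptually motivated heuristic contributed by this paper.

```python
# Illustrative sketch of depth-disparity-driven grid subdivision in the spirit of
# Didyk et al. [16]: a coarse cell is split only if the depth values it covers
# diverge by more than a threshold.
import numpy as np

def subdivide(depth, x, y, size, max_disparity=0.01, min_size=2):
    """Return a list of (x, y, size) grid cells covering the given square region."""
    tile = depth[y:y + size, x:x + size]
    if size <= min_size or tile.max() - tile.min() <= max_disparity:
        return [(x, y, size)]          # cell is depth-coherent: warp it as one patch
    half = size // 2
    cells = []
    for oy in (0, half):
        for ox in (0, half):
            cells.extend(subdivide(depth, x + ox, y + oy, half, max_disparity, min_size))
    return cells

# Usage: build an adaptive warping grid for a 512x512 depth buffer.
depth_buffer = np.random.rand(512, 512).astype(np.float32)   # placeholder depth values
cells = subdivide(depth_buffer, 0, 0, 512)
print(f"{len(cells)} cells instead of {512 * 512} per-pixel quads")
```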
A reprojection can also be done in backward order, where each pixel in the target image searches for corresponding pixels in one or more source images. Nehab et al. [30] presented the reverse reprojection cache, which allows fragments to reuse shading results from previous frames. This is achieved by explicitly storing and transforming vertex attributes for multiple frames. Another approach is proposed by Bowles et al. [11], which extends the work of Yang et al. [41] on fixed-point iteration to search for a given pixel's location in the previous frame. The advantage of this method is that no additional vertex attribute is needed for the position in the previous frame, and a second transform of the vertex into the previous position is not required either. A more recent use of fixed-point iteration is proposed by Lee et al. [24], but only in the context of depth-buffer warping to accelerate, e.g., occlusion culling, and not for the rendering of color images.

2.2 Hole-Filling

The common problem of all mentioned reprojection methods is that they cannot render a perfect image for a new camera perspective. If previously hidden geometry becomes visible in the new perspective, disocclusions occur, as no color information is available in the source image. These pixel regions cannot be filled properly by reprojection alone, and erroneous pixels, also called holes, remain in the image. A large body of work on hole-filling, or inpainting, techniques exists in the field of digital image and video post-processing [10, 14, 40, 42]. However, due to performance considerations, these are rarely used in a real-time context. Instead, authors rely on simpler techniques, as they are required to execute in a fraction of the available frame-time budget.

The correct, but also the most expensive, solution is to re-render the missing information. While the shading calculations can be restricted to specific pixels only [30], in raster-based rendering the entire captured scene geometry has to be processed nonetheless, introducing a significant overhead. Didyk et al. [16] use a simple inpainting strategy, which selects random […]
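To make the cost argument concrete, the sketch below shows the kind of lightweight fallback such "simpler techniques" amount to: each hole pixel marches along a few directions until it hits a valid warped pixel and copies the most distant (background) candidate. It is a generic illustration only, with hypothetical function and parameter names; it is neither the specific strategy of Didyk et al. [16] nor the adaptive ray-traced hole-filling proposed in this paper, which re-shades the disoccluded pixels correctly.

```python
# Minimal sketch of a lightweight, inpainting-style hole fill: each disoccluded
# pixel copies the most distant valid pixel found along a few search directions
# (larger depth = farther here, i.e. background-like). Generic illustration only.
import numpy as np

DIRECTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def fill_holes(color, depth, holes, max_steps=32):
    filled = color.copy()
    h, w = holes.shape
    for y, x in zip(*np.nonzero(holes)):
        best_depth, best_color = -np.inf, None
        for dy, dx in DIRECTIONS:
            ny, nx = y, x
            for _ in range(max_steps):             # march until a valid pixel is hit
                ny, nx = ny + dy, nx + dx
                if not (0 <= ny < h and 0 <= nx < w):
                    break
                if not holes[ny, nx]:
                    if depth[ny, nx] > best_depth: # prefer the background surface
                        best_depth, best_color = depth[ny, nx], color[ny, nx]
                    break
        if best_color is not None:
            filled[y, x] = best_color
    return filled
```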
