Chapter 7 Visual Rendering
Steven M. LaValle, University of Oulu
Copyright Steven M. LaValle 2019
Available for downloading at http://vr.cs.uiuc.edu/

This chapter addresses visual rendering, which specifies what the visual display should show through an interface to the virtual world generator (VWG). Chapter 3 already provided the mathematical parts, which express where the objects in the virtual world should appear on the screen. This was based on geometric models, rigid body transformations, and viewpoint transformations. We next need to determine how these objects should appear, based on knowledge about light propagation, visual physiology, and visual perception. These were the topics of Chapters 4, 5, and 6, respectively. Thus, visual rendering is a culmination of everything covered so far.

Sections 7.1 and 7.2 cover the basic concepts; these are considered the core of computer graphics, but VR-specific issues also arise. They mainly address the case of rendering for virtual worlds that are formed synthetically. Section 7.1 explains how to determine the light that should appear at a pixel based on light sources and the reflectance properties of materials that exist purely in the virtual world. Section 7.2 explains rasterization methods, which efficiently solve the rendering problem and are widely used in specialized graphics hardware, called GPUs. Section 7.3 addresses VR-specific problems that arise from imperfections in the optical system. Section 7.4 focuses on latency reduction, which is critical to VR so that virtual objects appear in the right place at the right time; otherwise, many side effects could arise, such as VR sickness, fatigue, adaptation to the flaws, or simply an unconvincing experience. Finally, Section 7.5 explains rendering for captured, rather than synthetic, virtual worlds; this covers VR experiences that are formed from panoramic photos and videos.

7.1 Ray Tracing and Shading Models

Suppose that a virtual world has been defined in terms of triangular primitives. Furthermore, a virtual eye has been placed in the world to view it from some particular position and orientation. Using the full chain of transformations from Chapter 3, the location of every triangle is correctly positioned onto a virtual screen (this was depicted in Figure 3.13). The next steps are to determine which screen pixels are covered by the transformed triangle and then illuminate them according to the physics of the virtual world.

An important condition must also be checked: For each pixel, is the triangle even visible to the eye, or will it be blocked by part of another triangle? This classic visibility computation problem dramatically complicates the rendering process. The general problem is to determine, for any pair of points in the virtual world, whether the line segment that connects them intersects any objects (triangles). If an intersection occurs, then the line-of-sight visibility between the two points is blocked. The main difference between the two major families of rendering methods is how visibility is handled.

Object-order versus image-order rendering  For rendering, we need to consider all combinations of objects and pixels. This suggests a nested loop. One way to resolve the visibility is to iterate over the list of all triangles and attempt to render each one to the screen. This is called object-order rendering, and is the main topic of Section 7.2.
For each triangle that falls into the field of view of the screen, the pixels are updated only if the corresponding part of the triangle is closer to the eye than any triangles that have been rendered so far. In this case, the outer loop iterates over triangles whereas the inner loop iterates over pixels. The other family of methods is called image-order rendering, and it reverses the order of the loops: Iterate over the image pixels, and for each one determine which triangle should influence its RGB values. To accomplish this, the path of light waves that would enter each pixel is traced out through the virtual environment. This method will be covered first, and many of its components apply to object-order rendering as well.
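The two loop orders can be contrasted with a toy sketch. Everything here is hypothetical scaffolding, not the book's code: the "image" is one-dimensional and each flat "triangle" is a (first_pixel, last_pixel, depth, color) tuple, so that the depth comparison is easy to see. Both orders produce the same picture; they differ only in which loop is outermost.

```python
def render_object_order(segments, width):
    # Outer loop over objects; a depth buffer keeps the closest hit per pixel.
    image = [None] * width
    zbuf = [float("inf")] * width
    for (lo, hi, depth, color) in segments:      # outer loop: objects
        for px in range(lo, hi + 1):             # inner loop: covered pixels
            if depth < zbuf[px]:                 # closer than anything so far?
                zbuf[px] = depth
                image[px] = color
    return image

def render_image_order(segments, width):
    # Outer loop over pixels; each pixel searches for its nearest object.
    image = [None] * width
    for px in range(width):                      # outer loop: pixels
        best = float("inf")
        for (lo, hi, depth, color) in segments:  # inner loop: objects
            if lo <= px <= hi and depth < best:
                best = depth
                image[px] = color
    return image

segs = [(0, 3, 2.0, "red"), (2, 5, 1.0, "blue")]
assert render_object_order(segs, 6) == render_image_order(segs, 6)
```

The nearer blue segment wins wherever the two overlap, regardless of loop order; real renderers differ from this toy mainly in how the per-pixel coverage and depth are computed.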
Ray tracing  To calculate the RGB values at a pixel, a viewing ray is drawn from the focal point through the center of the pixel on a virtual screen that is placed in the virtual world; see Figure 7.1. The process is divided into two phases:

1. Ray casting, in which the viewing ray is defined and its nearest point of intersection among all triangles in the virtual world is calculated.

2. Shading, in which the pixel RGB values are calculated based on lighting conditions and material properties at the intersection point.

The first step is based entirely on the virtual world geometry. The second step uses simulated physics of the virtual world. Both the material properties of objects and the lighting conditions are artificial, and are chosen to produce the desired effect, whether realism or fantasy. Remember that the ultimate judge is the user, who interprets the image through perceptual processes.

Figure 7.1: The first step in a ray tracing approach is called ray casting, which extends a viewing ray that corresponds to a particular pixel on the image. The ray starts at the focal point, which is the origin after the eye transform Teye has been applied. The task is to determine what part of the virtual world model is visible. This is the closest intersection point between the viewing ray and the set of all triangles.
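As a concrete illustration, a viewing ray for pixel (i, j) might be generated as follows. This is a sketch under assumed conventions (the eye transform has been applied, so the focal point is the origin; the virtual screen is a plane at distance `focal` along the -z axis; pixels are indexed from the top-left) rather than the book's exact setup, which is defined by the transformation chain of Chapter 3.

```python
import math

def viewing_ray(i, j, width, height, focal=1.0):
    # Map the pixel center (i + 0.5, j + 0.5) to screen coordinates in
    # [-1, 1] x [-1, 1]; the y axis is flipped so that row 0 is at the top.
    x = 2.0 * (i + 0.5) / width - 1.0
    y = 1.0 - 2.0 * (j + 0.5) / height
    dx, dy, dz = x, y, -focal            # direction through the pixel center
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    origin = (0.0, 0.0, 0.0)             # the focal point
    direction = (dx / norm, dy / norm, dz / norm)
    return origin, direction
```

For a one-pixel image the ray points straight down the -z axis; off-center pixels tilt the direction accordingly while keeping it unit length.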
Ray casting  Calculating the first triangle hit by the viewing ray after it leaves the image pixel (Figure 7.1) is straightforward if we neglect computational performance. Starting with the triangle coordinates, focal point, and ray direction (vector), the closed-form solution involves basic operations from analytic geometry, including dot products, cross products, and the plane equation [26]. For each triangle, it must be determined whether the ray intersects it. If not, then the next triangle is considered. If it does, then the intersection is recorded as the candidate solution only if it is closer than the closest intersection encountered so far. After all triangles have been considered, the closest intersection point will be found. Although this is simple, it is far more efficient to arrange the triangles into a 3D data structure. Such structures are usually hierarchical so that many triangles can be eliminated from consideration by quick coordinate tests. Popular examples include BSP-trees and bounding volume hierarchies [7, 11]. Algorithms that sort geometric information to obtain greater efficiency generally fall under computational geometry [9]. In addition to eliminating many triangles by quick tests, many methods of calculating the ray-triangle intersection have been developed to reduce the number of operations. One of the most popular is the Möller-Trumbore intersection algorithm [19].
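A minimal brute-force ray caster in this spirit, using a Möller-Trumbore intersection test. The tuple-based vector helpers and the triangle representation (three vertex tuples) are illustrative choices, and no hierarchical acceleration structure is used, so this is the simple O(n) search described above:

```python
EPS = 1e-9

def sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b):   return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                         a[2] * b[0] - a[0] * b[2],
                         a[0] * b[1] - a[1] * b[0])

def intersect(orig, direc, tri):
    """Moller-Trumbore test: distance t along the ray to triangle tri,
    or None if the ray misses (or is parallel to) the triangle."""
    v0, v1, v2 = tri
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direc, e2)
    a = dot(e1, h)
    if abs(a) < EPS:                 # ray parallel to the triangle plane
        return None
    f = 1.0 / a
    s = sub(orig, v0)
    u = f * dot(s, h)                # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(direc, q)            # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)
    return t if t > EPS else None    # hit must lie in front of the origin

def cast_ray(orig, direc, triangles):
    """Return (t, triangle) for the closest intersection, or (None, None)."""
    best_t, best_tri = None, None
    for tri in triangles:
        t = intersect(orig, direc, tri)
        if t is not None and (best_t is None or t < best_t):
            best_t, best_tri = t, tri
    return best_t, best_tri
```

The candidate-solution bookkeeping in `cast_ray` is exactly the "closest so far" test from the paragraph above; the hierarchical structures mentioned there replace the linear loop, not the intersection test.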
Lambertian shading  Now consider lighting each pixel, and recall the basic behavior of light from Section 4.1. The virtual world simulates the real-world physics, which includes the spectral power distribution and the spectral reflection function. Suppose that a point-sized light source is placed in the virtual world. Using the trichromatic theory from Section 6.3, its spectral power distribution is sufficiently represented by R, G, and B values. If the viewing ray hits the surface as shown in Figure 7.2, then how should the object appear? Assumptions about the spectral reflection function are taken into account by a shading model. The simplest case is Lambertian shading, for which the resulting pixel R, G, B values are independent of the angle at which the viewing ray strikes the surface. This corresponds to the case of diffuse reflection, which is suitable for a "rough" surface (recall Figure 4.4). All that matters is the angle that the surface makes with respect to the light source.

Figure 7.2: In the Lambertian shading model, the light reaching the pixel depends on the angle θ between the incoming light and the surface normal, but is independent of the viewing angle.

Let n be the outward surface normal and let ℓ be a vector from the surface intersection point to the light source. Assume both n and ℓ are unit vectors, and let θ denote the angle between them. The dot product n · ℓ = cos θ yields the amount of attenuation (between 0 and 1) due to the tilting of the surface relative to the light source. Think about how the effective area of the triangle is reduced due to its tilt. A pixel under the Lambertian shading model is illuminated as

    R = dR IR max(0, n · ℓ)
    G = dG IG max(0, n · ℓ)        (7.1)
    B = dB IB max(0, n · ℓ),

in which (dR, dG, dB) represents the spectral reflectance property of the material (triangle) and (IR, IG, IB) represents the spectral power distribution of the light source. Under the typical case of white light, IR = IG = IB. For a white or gray material, we would also have dR = dG = dB.

Using vector notation, (7.1) can be compressed into

    L = d I max(0, n · ℓ)        (7.2)

in which L = (R, G, B), d = (dR, dG, dB), and I = (IR, IG, IB). Each triangle is assumed to be on the surface of an object, rather than the object itself. Therefore, if the light source is behind the triangle, then the triangle should not be illuminated because it is facing away from the light (it cannot be lit from behind). To handle this case, the max function appears in (7.1) and (7.2) to avoid n · ℓ < 0.
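Equations (7.1) and (7.2) translate directly into code. A sketch with illustrative names and tuple representations (the material and light values in the example are hypothetical):

```python
def lambertian(d, I, n, l):
    """Pixel RGB under (7.1): d = (dR, dG, dB) spectral reflectance,
    I = (IR, IG, IB) light power, n = unit outward surface normal,
    l = unit vector from the intersection point toward the light."""
    cos_theta = n[0] * l[0] + n[1] * l[1] + n[2] * l[2]  # n . l = cos(theta)
    atten = max(0.0, cos_theta)       # a triangle cannot be lit from behind
    return tuple(dc * Ic * atten for dc, Ic in zip(d, I))
```

For example, a gray material d = (0.5, 0.5, 0.5) under white light I = (1, 1, 1) with the surface tilted so that n · ℓ = 0.8 yields (0.4, 0.4, 0.4), while a light source behind the surface yields black, matching the role of the max function in (7.2).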