Chapter 15: Per-Pixel Operations
Introduction

So far in the book we have focused on graphics APIs that are strongly polygon-oriented, and we have learned how to use these tools to make effective images. However, this is not the only kind of computer graphics that can be done, nor the only kind that can be effective. Another kind of computer graphics works with a geometric model to determine the color of each pixel in a scene independently. We call this per-pixel graphics, and several different approaches can be taken to it. There is ray casting, a straightforward approach to generating an image in which rays are generated from the eye point through each pixel of a virtual screen, and the closest intersection of each ray with the geometry of the scene defines the color of that pixel. There is ray tracing, a more sophisticated approach that begins in the same way as ray casting, but in which more than one ray may be generated for each pixel and, when the rays intersect the geometry, they may spawn secondary rays by reflection or refraction that help define the color the pixel will eventually be given. There are also several uses of pixels as ways to record some computational value, such as one might find in a study of iterated functions or of fractals. With such a wide variety of ways to take advantage of this kind of graphics, we believe it should be included in a beginning course.

In this chapter you will see examples of several per-pixel operations that can be used to handle several kinds of computer graphics problems. We will not go into these operations in great depth, but we will introduce them so that you can understand their capabilities and decide whether you want to pursue them in more depth with other reading.
To get the best from this chapter, you should be familiar with geometric operations in an algebraic setting, such as were described in the earlier chapter on mathematical concepts for modeling.

Definitions

The terms ray tracing and ray casting are not always well-defined in computer graphics. We will define ray casting as the process of generating an image from a model by sending out one ray per pixel and computing the color of that pixel from its intersection with the model, with no shadow, reflection, or refraction computation. We will define ray tracing as an extension of ray casting without those limitations: multiple rays may be cast for each pixel as needed, and any kind of shadow, refraction, and/or reflection computation may be used. We will discuss both of these processes, although we will not go into much depth on either; for more experience we refer you to a resource such as the Glassner text or a public-domain ray tracer such as POV-Ray.

Ray casting

If we recall the standard viewing setup and viewing volume for the perspective transformation from the chapter on viewing and projection, we see that the front of the viewing frustum can serve as an effective equivalent of the actual display screen. We can make this more concrete by applying a reverse window-to-viewport operation and, in effect, creating a screen-to-viewplane operation that gives us a virtual screen at the front of the frustum. To do this, we can take advantage of the perspective viewing volume that we saw in the viewing and projection chapter. We can create the viewing transformation as we illustrated in the chapter on mathematics for modeling, which allows us to place the eye point at the origin and the front of the viewing volume in the plane z = –1. Just as the perspective projection in OpenGL is defined in terms of the vertical field of view a and the aspect ratio r, we can determine the half-extents of the front of the view volume as y = tan(a/2) and x = r*y.
This space is the virtual screen, and you can divide the screen into pixels by using a step size of 2*x/N, where N is the number of pixels across the horizontal screen. In order to preserve the aspect ratio you have set, you should use the same step size for the vertical screen. A virtual screen in world space, along with the eye point and a sample ray, is shown in Figure 15.1. The final parameters of the OpenGL perspective projection are the front and back clipping planes for the view; you may use these parameters to limit the range where you check for intersections, or you may choose to use your model with no limits.

Figure 15.1: the eye point and virtual screen in 3D space

The key point of ray casting is to generate a ray, which you can model as a parametric line segment with only positive parameter values, from the eye point through each pixel in the virtual screen. You may choose what point you want to use within the pixel, because each pixel actually corresponds to a small quad determined by adjacent horizontal and vertical lines in the virtual grid. In the general discussion it does not matter which point you choose, but you might start with the middle point of the quad. The ray you generate will start at the origin and will go through a point (x, y, –1), so the parametric equations for the ray are (xt, yt, –t). Once you have the parametric equation for a ray, you must determine the point in your model where the ray intersects some object. If there is no such point, the pixel simply gets the background color; if there is an intersection, you must determine the nearest intersection point. The pixel is then given the color of your model at that point of intersection. The ray-object intersection problem is the heart of ray casting, and a great deal of time is spent in calculating intersections. We discussed collision testing in the chapter on mathematics for modeling, and the ray-object intersection is just a special case of collision testing.
The easiest intersection test is made with a sphere, where you can use the formula for the distance between a point and a line to see how close the ray comes to the center of the sphere; if that distance is less than the radius of the sphere, there is an intersection, and a simple quadratic equation will give you the intersection points.

Determining the color depends a great deal on the model you are viewing. If your model does not use a lighting model, then each object will have its own color, and the color of the pixel will simply be the color of the object the ray intersects. If you do have lighting, however, the local lighting model is not provided by the system; you must develop it yourself. This means that at the point where a ray meets an object, you must do the computations for ambient, diffuse, and specular lighting that we described in the chapter on lighting and shading. This will require computing the four vectors involved in local lighting: the normal vector, the eye vector, the reflection vector, and the light vector. The eye vector is simple, because it is the ray we are using; the normal vector is calculated using analytic techniques or geometric computations, just as it was for the OpenGL glNormal*(...) function; the reflection vector is calculated as in the discussion on mathematics for modeling; and it is straightforward to create a vector from the point to each of the lights in the scene. Details of the lighting computations are sketched in the chapter on lighting and shading, so we will not go into them in more detail here.

Because you are creating your own lighting, you have the opportunity to use more sophisticated shading models than OpenGL provides. These shading models generally use the standard ambient and diffuse light, but they can change the way specular highlights appear on an object by modifying the way light reflects from a surface.
This is called anisotropic shading, and it involves creating a different reflection vector through a bidirectional reflectance distribution function, or BRDF. This kind of function takes the direction of the incoming light relative to the surface and determines the direction in which light will be reflected; the standard specular lighting computation is then done with that reflection direction vector. The details are much more complex than are appropriate here, but you can consult advanced references for more information.

Ray casting can also be used to view pre-computed models. Where each ray intersects the model, the color of that point is returned to the pixel in the image. This technique, or a more sophisticated ray-tracing equivalent, is often used to view a model that is lighted with a global lighting model such as radiosity or photon mapping. The lighting process calculates the light at each surface in the model, other visual properties such as texture mapping are applied, and then, for each ray, the color of the intersection point is read from the model or computed from information in the model. The resulting image carries all the sophistication of the model, even though it is created by a simple ray casting process.

Ray casting has aliasing problems because we sample with only one ray per pixel; the screen coordinates are quite coarse when compared to the real-valued fine detail of the real world.
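One standard remedy for this aliasing, which anticipates the multiple-rays-per-pixel idea of ray tracing, is to cast several jittered rays inside each pixel's quad and average the resulting colors. The sketch below illustrates the idea under our own assumptions: trace_ray is a hypothetical stand-in for a full tracer (here it just returns a constant gray), and rand() is used only for illustration.

```c
/* Sketch: averaging several jittered rays per pixel to reduce the
 * aliasing that one-ray-per-pixel sampling produces. trace_ray is a
 * hypothetical placeholder for the real tracer. */
#include <stdlib.h>

typedef struct { double r, g, b; } Color;

/* Stand-in for the real tracer: every ray returns mid-gray here. */
Color trace_ray(double px, double py)
{
    (void)px; (void)py;
    Color c = {0.5, 0.5, 0.5};
    return c;
}

Color sample_pixel(int i, int j, double step,
                   double half_x, double half_y, int samples)
{
    Color sum = {0.0, 0.0, 0.0};
    for (int s = 0; s < samples; s++) {
        /* jitter the sample point inside the pixel's quad */
        double jx = (double)rand() / RAND_MAX;
        double jy = (double)rand() / RAND_MAX;
        double px = -half_x + (i + jx) * step;
        double py =  half_y - (j + jy) * step;
        Color c = trace_ray(px, py);
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    sum.r /= samples; sum.g /= samples; sum.b /= samples;
    return sum;
}
```

With a constant trace_ray the average is of course constant; with a real tracer, the averaging smooths the staircase edges that a single center-of-pixel ray would produce.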