Hardware Accelerated Displacement Mapping for Image Based Rendering
Jan Kautz and Hans-Peter Seidel
Max-Planck-Institut für Informatik, Saarbrücken, Germany

Abstract

In this paper, we present a technique for rendering displacement mapped geometry using current graphics hardware. Our method renders a displacement by slicing through the enclosing volume. The α-test is used to render only the appropriate parts of every slice. The slices need not be aligned with the base surface; for example, it is possible to do screen-space aligned slicing.

We then extend the method to render the intersection between several displacement mapped polygons. This is used to render a new kind of image-based object based on images with depth, which we call image based depth objects.

This technique can also directly be used to accelerate the rendering of objects using the image-based visual hull. Other warping based IBR techniques can be accelerated in a similar manner.

Key words: Displacement Mapping, Image Warping, Hardware Acceleration, Texture Mapping, Frame-Buffer Tricks, Image-Based Rendering.

Figure 1: Furry donut (625 polygons) using displacement mapping. It was rendered at 35 Hz on a PIII/800 using an NVIDIA GeForce 2 GTS.

1 Introduction

Displacement mapping is an effective technique to add detail to a polygon-based surface model while keeping the polygon count low. For every pixel on a polygon, a value is given that defines the displacement of that particular pixel along the normal direction, effectively encoding a heightfield. So far, displacement mapping has mainly been used in software rendering [21, 29], since the graphics hardware was not capable of rendering displacement maps, although ideas exist on how to extend the hardware with this feature [9, 10, 20].

A similar technique used in a different context is image warping. It is very similar to displacement mapping, except that in image warping adjacent pixels need not be connected, which allows the viewer to see through them for certain viewing directions. Displacement mapping is usually applied to a larger number of polygons, whereas image warping is often done for a few images only. Techniques that use image warping are also traditionally software-based [12, 16, 19, 25, 26, 27].

Displacement mapping recently made its way into hardware accelerated rendering using standard features. The basic technique was introduced by Schaufler [24] in the context of warping for layered impostors and was then reintroduced in the context of displacement mapping by Dietrich [8]. This algorithm encodes the displacement in the α-channel of a texture. It then draws surface-aligned slices through the volume defined by the maximum displacement, and the α-test is used to render only the appropriate parts of every slice. Occlusions are handled properly by this method.

This algorithm works well only for surface-aligned slices: at grazing angles it is possible to look through the slices. In this case, Schaufler [24] regenerates the layered impostor, i.e. the texture and the displacement map, according to the new viewpoint, which is possible since he has the original model that the layered impostor represents.

We will introduce an enhanced method that supports arbitrary slicing planes, allowing orthogonal slicing directions or the screen-space aligned slicing commonly used in volume rendering, and eliminating the need to regenerate the texture and displacement map.
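To make the α-channel encoding used by this basic technique concrete, here is a minimal C++/OpenGL sketch of how a heightfield could be packed into the alpha channel of an RGBA texture. It is an illustrative assumption rather than code from the paper; the names createDisplacementTexture, heightfield, and rgb are invented for this example.

#include <GL/gl.h>
#include <cstdint>
#include <vector>

// Pack per-texel colour and displacement into one RGBA texture: the colour
// goes into RGB and the (normalized) displacement value into alpha.
GLuint createDisplacementTexture(const std::vector<float>& heightfield,  // displacement in [0,1]
                                 const std::vector<std::uint8_t>& rgb,   // 3 bytes per texel
                                 int width, int height)
{
    std::vector<std::uint8_t> rgba(width * height * 4);
    for (int i = 0; i < width * height; ++i) {
        rgba[4 * i + 0] = rgb[3 * i + 0];
        rgba[4 * i + 1] = rgb[3 * i + 1];
        rgba[4 * i + 2] = rgb[3 * i + 2];
        // Alpha encodes the displacement: 0 = base surface, 255 = maximum displacement.
        rgba[4 * i + 3] = static_cast<std::uint8_t>(heightfield[i] * 255.0f);
    }

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
    return tex;
}

With the displacement stored in alpha, the α-test can later discard the parts of each slice that lie above the displaced surface; Section 3.1 returns to this in detail.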
On the one hand, we use this new method to render traditional displacement mapped objects; see Figure 1. This works at interactive rates even for large textures and displacements, employing current graphics hardware.

On the other hand, this new method can be extended to render a new kind of image-based object, based on images with depth, which we will refer to as image based depth objects. How to reconstruct an object from several images with depth has been known for many years now [2, 3, 6]. The existing methods are purely software based, very slow, and often work on a memory-consuming full volumetric representation. We introduce a way to directly render these objects at interactive rates using graphics hardware, without the need to reconstruct them in a preprocessing step. The input images are assumed to be registered beforehand.

We will also show how the image-based visual hull algorithm [13] can be implemented using this new method, which then runs much faster than the original algorithm. Many other image based rendering algorithms also use some kind of image warping [12, 16, 19, 25, 26, 27]; accelerating these algorithms with our technique is conceivable as well.

2 Prior Work

We will briefly review previous work from the areas of displacement mapping, image warping, object reconstruction, and image-based objects.

Displacement mapping was introduced by Cook [4] and has traditionally been used in software based methods, e.g. raytracing or micro-polygons. Patterson et al. [21] introduced a method that can raytrace displacement mapped polygons by applying the inverse of the mapping to the rays. Pharr and Hanrahan [22] used geometry caching to accelerate displacement mapping. Smits et al. [29] used an approach similar to intersecting a ray with a heightfield. The REYES rendering architecture subdivides displacement maps into micro-polygons, which are then rendered [5].

On the other hand, many image-based rendering (IBR) techniques revolve around image warping, which was used in this context e.g. by McMillan et al. [16]. There are two different ways to implement the warping: forward and backward mapping. Forward mapping loops over all pixels in the original image and projects them into the desired image. Backward mapping loops over all pixels in the desired image and searches for the corresponding pixels in the original image. Forward mapping is usually preferred, since the search process used by backward mapping is expensive, although forward mapping may introduce holes in the final image. Many algorithms have been proposed to efficiently warp images [1, 15, 20, 28]. All of them work in software, but some are designed to be turned into hardware.

The only known hardware accelerated method for image warping was introduced by Schaufler [24]; Dietrich [8] later used it for displacement mapping. This algorithm will be explained in more detail in the next section. Its main problem is that it introduces severe artifacts at grazing viewing angles.

Many IBR techniques employ (forward) image warping [12, 19, 20, 25, 26, 27], but they, too, rely on software implementations.

New hardware has also been proposed that would allow displacement mapping [9, 10, 20], but none of these proposals have found their way into actual hardware.

The reconstruction of objects from images with depth has been researched for many years. Various algorithms have been proposed [2, 3, 6], using two different approaches: reconstruction from unorganized point clouds, and reconstruction that uses the underlying structure. None of these algorithms, using either approach, can reconstruct and display such an object in real time, whereas our method is capable of doing this.

There are many publications on image based objects; we will briefly review the closely related ones. Pulli et al. [23] hand-model sparse view-dependent meshes from images with depth in a preprocessing step and recombine them on-the-fly using a soft z-buffer. McAllister et al. [14] use images with depth to render complex environments; every seen surface is stored in exactly one of the images, and rendering is done using splatting or with triangles. Layered depth images (LDI) [27] store an image plus multiple depth values along the direction the image was taken; reconstruction is done in software. Image-based objects [19] combine six LDIs arranged as a cube with a single center of projection to represent objects. An object defined by its image-based visual hull [13] can be rendered interactively using a software renderer.

Our method for rendering image-based objects is one of the first purely hardware accelerated methods achieving high frame rates and quality. It does not need any preprocessing such as mesh generation; it only takes images with depth.

3 Displacement Mapping

The basic idea of displacement mapping is simple. A base geometry is displaced according to a displacement function, which is usually sampled and stored in an array, the so-called displacement map. The displacement is performed along the interpolated normals across the base geometry. See Figure 2 for a 2D example where a flat line is displaced according to a displacement map along the interpolated normals.

Figure 2: Displacement Mapping. (The figure shows the base geometry with its normals n, the displacement map, and the resulting displacement mapped geometry.)

3.1 Basic Hardware Accelerated Method

First we would like to explain the basic algorithm for doing displacement mapping using graphics hardware, as it was introduced by Dietrich [8] (and in a similar way by Schaufler [24]). The input data for our displacement mapping algorithm is an RGBα-texture, which we call displacement […]

[…] the inside of a displacement (which will be needed later on).

Schaufler [24] used a slightly different method. In every slice, only those pixels are drawn whose α-values lie within a certain bound of the slice's height. For many viewpoints this allows the viewer to see through neighboring pixels whose displacement values differ by more than the used bound. This is suited to image warping in the traditional sense, where it is assumed that pixels with very different depth values are not connected. The method we described is more suited to displacement mapping, where it is assumed that neighboring pixels are always connected.

Both methods have the problem that at grazing angles it is possible to look through the slices; see Figure 4 for an example.
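The surface-aligned slicing described in this section can be summarized in a short C++/OpenGL sketch, given here as a hedged illustration under assumptions rather than the paper's implementation; renderDisplacement and the helper drawSliceQuad (assumed to emit one textured slice offset by the given height along the interpolated normals) are hypothetical names.

#include <GL/gl.h>

// Assumed helper: draws one textured slice of the base polygon, offset by
// height h (in [0,1]) along the interpolated normals.
void drawSliceQuad(float h);

void renderDisplacement(GLuint displacementTexture, int numSlices)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, displacementTexture);
    glEnable(GL_ALPHA_TEST);

    for (int s = 0; s < numSlices; ++s) {
        float h = (numSlices > 1) ? float(s) / float(numSlices - 1) : 0.0f;

        // Dietrich-style test: keep fragments whose displacement (alpha) is at
        // least the slice height, which also fills the inside of the volume.
        // Schaufler's variant would instead keep only alphas close to h.
        glAlphaFunc(GL_GEQUAL, h);

        drawSliceQuad(h);
    }

    glDisable(GL_ALPHA_TEST);
    glDisable(GL_TEXTURE_2D);
}

The enhanced method presented in this paper allows the slice orientation to differ from the surface-aligned case sketched here, so the grazing-angle problem can be addressed without regenerating the texture and displacement map.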