Texture Mapping
Tom Shermer, Richard (Hao) Zhang
Introduction to Computer Graphics
CMPT 361 – Lecture 18

Pixel pipeline
- OpenGL has a pixel pipeline (from processor memory to the frame buffer) alongside the geometry pipeline.
- There is a whole set of OpenGL functions for pixel manipulation; see the OpenGL reference cards.
- Many of these functions are for texture mapping and texture manipulation.

Texture mapping
- The problem: how to map an image (a texture) onto a surface.
- It is image-related (2D), but 3D data enters in between: the texture is mapped from the image onto the surface, and from there onto fragments.
- Textures are attached to geometry by texture coordinates associated with points on the surface.

Texture mapping in OpenGL
- Older versions supported 1D and 2D texture mapping; 3D (solid) textures are now supported as well.
- Basic steps of texture mapping (see Chapter 7 in the text):
  1. Generate a texture image and place it in texture memory on the GPU, e.g., with glGenTextures(), glBindTexture(), etc.
  2. Assign texture coordinates (references into the texture image) to each fragment (pixel).
  3. Apply the texture to each fragment.

Why texture mapping?
- Real-world objects often have complex surface details, even of a stochastic nature, e.g., the surface of an orange or the grain patterns of a wood table.
- Using many small polygons and smooth shading to capture such details is too expensive.
- A "cheat": use an image (it is easy to take a photo and digitize it) to capture the details and patterns, and paint it onto simple surfaces.

Effect of texture mapping
[Figures: example renderings with textures applied to simple surfaces.]

Textures
- Textures are patterns, e.g., stripes, a checkerboard, or patterns representing and characterizing natural materials (wood, a grass field).
- They can be 1D, 2D, or 3D; a 2D texture given as an image is the most common.
- The key problem is to map a 2D texture onto an arbitrary (polygonal or curved) surface; 2D texturing techniques can then be generalized.

Texture synthesis
- A single captured image is often not rich enough for applications.
- Textures are therefore often synthesized from a small texture exemplar, e.g., by example-based synthesis.
- Most natural textures appear random, so noise models such as Perlin noise are also used.

Some terminology
- Texture map: a 2D image to paint onto a surface.
- Texels (texture elements): the elements of the texture map, to distinguish them from pixels.
- Textures are defined in texture coordinates (s, t); think of the texture as a continuous image.
- Mathematically, texture mapping assigns a unique texture point (s, t) to each point on a surface (one-to-one).
- One pixel may be covered by many texels, or vice versa.
[Figure: a pixel covering many texels, and a texel covering many pixels.]

Rendering with texture mapping
[Figure: texel space, object space, and pixel space, and the maps between them.]

Rendering approach
1. Map the four corners of a screen pixel onto a surface in the scene (e.g., by ray casting).
2. Use the texture mapping to map the four surface points to points in the texture map.
3. Use a weighted sum of the texels covered by the resulting quadrilateral in the texture map to color the pixel. This is called area sampling.

A more simplistic approach
- Area sampling is much better than the following simpler scheme:
  1. Cast a single ray through the center of a pixel P.
  2. Find the point T in the texture map corresponding to the first ray-surface intersection.
  3. Use the color of T in the texture map for the pixel P.
- This is called point sampling; it can be a severe case of under-sampling (aliasing).
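Not from the slides: to make the contrast concrete, here is a minimal C sketch of the two strategies, under simplifying assumptions: a grayscale texture stored as a row-major float array, texture coordinates in [0, 1], and a pixel footprint approximated by an axis-aligned box [s0, s1] x [t0, t1] in texture space instead of the projected quadrilateral. The type and function names are illustrative.

    #include <math.h>

    /* Illustrative texture representation: tw x th grayscale texels in [0, 1]. */
    typedef struct { const float *texels; int tw, th; } Texture;

    /* Point sampling: use the single texel hit by the texture coordinate (s, t).
       Cheap, but a severe under-sampling when the pixel covers many texels. */
    float sample_point(const Texture *tex, float s, float t)
    {
        int x = (int)(s * tex->tw);
        int y = (int)(t * tex->th);
        if (x >= tex->tw) x = tex->tw - 1;   /* clamp s = 1.0 to the last texel */
        if (y >= tex->th) y = tex->th - 1;
        return tex->texels[y * tex->tw + x];
    }

    /* Area sampling with a box filter: average every texel covered by the
       pixel footprint [s0, s1] x [t0, t1] in texture space. */
    float sample_area(const Texture *tex, float s0, float t0, float s1, float t1)
    {
        int x0 = (int)(s0 * tex->tw), x1 = (int)ceilf(s1 * tex->tw);
        int y0 = (int)(t0 * tex->th), y1 = (int)ceilf(t1 * tex->th);
        if (x0 < 0) x0 = 0;
        if (y0 < 0) y0 = 0;
        if (x1 > tex->tw) x1 = tex->tw;
        if (y1 > tex->th) y1 = tex->th;

        float sum = 0.0f;
        int count = 0;
        for (int y = y0; y < y1; ++y)
            for (int x = x0; x < x1; ++x) {
                sum += tex->texels[y * tex->tw + x];
                ++count;
            }
        return count > 0 ? sum / count : 0.0f;
    }

In a real renderer the footprint comes from mapping the pixel's four corners through the surface into texture space, and the weights need not be uniform; the equal-weight average above is simply the easiest weighted sum to write down.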
Point vs. area sampling
[Figure: aliasing resulting from point sampling of pixels.]

Aliasing in texture mapping
[Figure: result of area sampling of pixels.]
- With area sampling we get a shade of gray without seeing the true patterns. This is still aliasing, but there is not much we can do about it, given the limited screen resolution.

Two main issues
- Finding the right mapping between the 2D texture and the surface in 3D:
  - it should be one-to-one,
  - it should minimize distortion, e.g., angle-, area-, or distance-preserving maps,
  - it may have to honor certain constraints.
- Finding the right color for each pixel, i.e., the right way to sample and combine texels to shade a pixel. The key is to reduce aliasing effects.

Texture mapping on surfaces
- Surfaces are modeled using triangle meshes.
- We assume the texture (image) is sufficiently large, i.e., that it has been synthesized or replicated.
- See Greiner and Hormann (IMA Workshop, Geometric Design, 2001) and [Zigelman et al. 02].

Interpolation of texture coordinates
- Given texture coordinates (u1, v1), (u2, v2), (u3, v3) at the three vertices of a triangle, how do we find the texture coordinates (?, ?) of an interior point, say the midpoint?
- Note that the triangle is in 3D. Bilinear interpolation in 3D would be right, but is it right to interpolate in screen (2D) space instead?

Texture interpolation in 2D
- Why interpolate in screen space? Because we can use the scanline, incremental algorithm.
- This is OK for orthographic projections but wrong in perspective: the textures appear warped, and there is no foreshortening.
- (Illustration recreated from the Java applet at http://graphics.lcs.mit.edu/classes/6.837)

Color interpolation
- But remember Gouraud shading: did we not do interpolation in screen space there?

Texture vs. color interpolation
- Yes, Gouraud shading was done in screen space. That is not right either, but we barely notice it because color variation is smooth; not so for textures.
- We could likewise do texture color interpolation in 2D, like Gouraud.

Texture interpolation in perspective
- How do we do the interpolation correctly under perspective projection?
- First recall how projections are computed. Assume the projection plane is at d = 1 and focus on x (the case for y is the same): a point (x, y, z) projects to x/z.

Perspective-correct interpolation
- Let P' and Q' be the projections of the points P = (xp, yp, zp) and Q = (xq, yq, zq); again, concentrate on x.
[Figure: plane of projection with P, Q, their projections P', Q', the interpolated world-space point R(s), and its projection R'(t).]
- Interpolating between P' and Q' in screen space gives
    R'(t)_x = P'_x + t (Q'_x - P'_x) = xp/zp + t (xq/zq - xp/zp),
  where xp, xq, zp, zq are in world space.
- Problem statement: given t and the texture coordinates at P' and Q', find the texture coordinate corresponding to the point R'(t).
- First, find the world-space points P and Q corresponding to the screen-space P' and Q'. A lerp between P and Q gives
    R(s) = P + s (Q - P)    (y is ignored).
- Projecting R(s) onto the image plane z = 1, the x-component of the projection is
    proj(R(s))_x = (xp + s (xq - xp)) / (zp + s (zq - zp)).
- Setting this equal to the screen-space lerp R'(t)_x above and solving for s, grinding through the algebra, gives
    s = t zp / (zq + t (zp - zq)).
- So s depends only on t and the z values.
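Not from the slides: a small C helper that implements this result, converting the screen-space parameter t into the world-space parameter s and using it to interpolate a vertex attribute such as a texture coordinate. The function names are illustrative.

    /* World-space parameter s corresponding to screen-space parameter t, for
       segment endpoints at depths zp and zq (projection plane at z = 1):
       s = t*zp / (zq + t*(zp - zq)), exactly the expression derived above. */
    float world_param_from_screen(float t, float zp, float zq)
    {
        return (t * zp) / (zq + t * (zp - zq));
    }

    /* Perspective-correct interpolation of an attribute with values ap at P
       and aq at Q, given the screen-space parameter t: lerp with s, not t. */
    float interp_attribute(float ap, float aq, float zp, float zq, float t)
    {
        float s = world_param_from_screen(t, zp, zq);
        return ap + s * (aq - ap);
    }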
Perspective-correct interpolation (continued)
- We can use this s to interpolate any attribute given at the vertices of our 3D triangle. To do that for a texture coordinate with value up at P and uq at Q:
    u(s) = up + s (uq - up).
- A comparison: substituting the expression for s and simplifying gives
    u = (up/zp + t (uq/zq - up/zp)) / (1/zp + t (1/zq - 1/zp)),
  that is, a lerp of u/z with parameter t divided by a lerp of 1/z with parameter t.
- So we can still interpolate in screen space (with respect to t), but instead of interpolating u and v we interpolate u/z, v/z, and 1/z. At each pixel we then divide the interpolated values to recover u and v.
- Next question: what texel value (color) do we get, given a texture coordinate or a texture region?

Sampling of texture maps
- The resolution of screen space (the size of a pixel) rarely matches the resolution of the texture map (the size of a texel).
- Under-sampling of the texture map, or minification: one pixel corresponds to a region of texels.
- Over-sampling of the texture map, or magnification: one pixel corresponds to a portion of a texel.
[Figure: minification (many texels per pixel) vs. magnification (many pixels per texel).]

Magnification of texture map
- What to do with over-sampling? Interpolation (over the triangle, after determining the color at the vertices).
- How do we determine a texel (color) value given a texture coordinate (u, v)?
  - Nearest point sampling: use the texel closest to (hit by) the point sample.
  - Linear filtering: use the average of a group (2 x 2) of texels close to the point sample.

Point sampling vs. linear filtering
- Linear filtering reduces the jaggies over areas where there is texture over-sampling (magnification).
- In areas where there is under-sampling, moiré patterns (false high frequencies) still appear, because a 2 x 2 filter is too small. By enlarging the filter mask we can get a smoother shade of gray, but we cannot eliminate the moiré altogether.

Minification: fast averaging of texels
- When one pixel covers a region of texels, we need to sum up the contributions of all the covered texels.
- We need to do this quickly, in O(1) time per pixel; computing texel averages during rasterization can slow things down significantly.
- So we do some preprocessing of the texture map, called prefiltering, to enable faster computation:
  1. Use of mipmaps.
  2. Use of a summed area table (SAT).

Mipmaps
- MIP = multum in parvo (Latin): many in a small place.
- A mipmap stores a texture map in a multiresolution manner, as an image pyramid. How much extra storage is required?
- Each texel at level i+1 is the average of a 2 x 2 area of the texture map at level i, so level i+1 is a blurred version of level i at reduced size.

How to use mipmaps
- The rough idea: to rasterize a pixel P in screen space, estimate the size of the texel region that P covers and use the mipmap level whose texel size best matches it.
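Not from the slides: in OpenGL, building and using the mipmap pyramid is largely automatic. A minimal C sketch of creating a mipmapped 2D texture, assuming desktop OpenGL 3.0 or later (for glGenerateMipmap), a header or loader that exposes it, and width x height RGBA8 pixel data already decoded in memory; the function name is illustrative.

    #include <GL/gl.h>   /* or a loader header such as glad/GLEW exposing GL 3.0+ */

    GLuint make_mipmapped_texture(const unsigned char *image, int width, int height)
    {
        GLuint tex;
        glGenTextures(1, &tex);               /* step 1: create a texture object     */
        glBindTexture(GL_TEXTURE_2D, tex);    /* and make it the current 2D texture  */

        /* Upload level 0 of the pyramid into texture memory on the GPU. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, image);

        /* Ask the driver to derive levels 1, 2, ... from level 0; the downsampling
           filter is implementation-defined, commonly a 2 x 2 box average as above. */
        glGenerateMipmap(GL_TEXTURE_2D);

        /* Minification: trilinear filtering, i.e., bilinear within each of the two
           nearest mipmap levels plus a blend between them.
           Magnification: bilinear filtering within level 0. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        /* Repeat (tile) the texture outside the [0, 1] coordinate range. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

        return tex;
    }

Assigning texture coordinates to vertices and applying the texture per fragment (steps 2 and 3 from the start of the lecture) happen when the geometry is specified and in the fragment shader; the hardware then selects mipmap levels per fragment based on the texel footprint of the pixel.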