CS-C3100, Fall 2019, Jaakko Lehtinen (many slides from Frédo Durand): Sampling II, Antialiasing, Texture Filtering

Admin and Other Things

• Give class feedback – please!
  – You will get an automated email with a link
  – I take feedback seriously
  – One additional assignment point for submitting
• 2nd midterm / exam on Wed Dec 18
  – Two sheets of two-sided A4 paper allowed
  – Those who took the 1st midterm: complete only the 1st half
  – Those who didn’t: answer all questions

• Last lecture today

Sampling

• The process of mapping a function defined on a continuous domain to a discrete one is called sampling
• The process of mapping a continuous variable to a discrete one is called quantization
• To represent or render an image using a computer, we must both sample and quantize
  – Today we focus on the effects of sampling
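As a minimal sketch of the two steps (the signal, sample count, and level count below are illustrative, not from the slides), sampling maps a continuous domain to discrete positions, and quantization maps continuous values to discrete levels:

```python
import numpy as np

def sample(f, x0, x1, n):
    """Evaluate a continuous function f at n evenly spaced positions."""
    xs = np.linspace(x0, x1, n)
    return xs, f(xs)

def quantize(values, levels, vmin=0.0, vmax=1.0):
    """Map continuous values in [vmin, vmax] to the nearest of `levels` steps."""
    idx = np.round((values - vmin) / (vmax - vmin) * (levels - 1))
    return vmin + idx / (levels - 1) * (vmax - vmin)

# sample an (assumed) smooth function, then quantize to 8-bit-like levels
xs, ys = sample(lambda x: 0.5 + 0.4 * np.sin(x), 0.0, 10.0, 21)
yq = quantize(ys, levels=256)
```

The quantization error is at most half a step, i.e. 0.5/255 here.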

[Figure: continuous vs. discrete positions and values, illustrating sampling and quantization]

Sampling & Reconstruction

The visual array of light is a continuous function
1/ We sample it...
  – with a digital camera, or with our ray tracer
  – This gives us a finite set of numbers (the samples)
  – We are now in a discrete world
2/ We need to get this back to the physical world: we reconstruct a continuous function from the samples
  – For example, the point spread of a pixel on a CRT or LCD
  – Reconstruction also happens inside the rendering process
• Both steps can create problems
  – Insufficient sampling, or pre-aliasing (our focus today)
  – Poor reconstruction, or post-aliasing

Sampling Example

[Plot: a continuous function]

Sampling Example

[Plot: the sampled discrete function]

Reconstruction Example

[Plot: nearest-neighbor reconstruction]

Reconstruction Example

[Plot: piecewise linear reconstruction]

Reconstruction Example

As you can see, there are many ways of reconstructing a function from the samples. Some are more accurate than others.

[Plot: piecewise cubic reconstruction]
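The reconstruction comparison can be sketched numerically. This is an illustrative example (the function and sample spacing are my assumptions, not the slides'): nearest-neighbor and piecewise linear reconstruction of the same samples, with linear giving the smaller maximum error for a smooth signal:

```python
import numpy as np

f = lambda x: 0.5 + 0.4 * np.sin(x)        # assumed smooth test function
xs = np.linspace(0.0, 10.0, 11)            # sample positions
ys = f(xs)                                 # the samples

def reconstruct_nearest(x, xs, ys):
    # each output takes the value of the closest sample
    return ys[np.argmin(np.abs(xs[:, None] - x), axis=0)]

def reconstruct_linear(x, xs, ys):
    # piecewise linear interpolation between neighboring samples
    return np.interp(x, xs, ys)

x = np.linspace(0.0, 10.0, 201)
err_nn = np.max(np.abs(reconstruct_nearest(x, xs, ys) - f(x)))
err_lin = np.max(np.abs(reconstruct_linear(x, xs, ys) - f(x)))
```

For this signal the piecewise linear reconstruction is clearly more accurate than nearest neighbor, matching the slides' point.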

Wiggliness vs. Sampling Density

[Plot: a wiggly function sampled too sparsely to capture its oscillations]

Wiggliness vs. Sampling Density

2x sampling density

[Plot: the same function at 2x sampling density]

Sampling Density

• If we sample too coarsely, the samples can be mistaken for something simpler during reconstruction
• This is why it’s called aliasing
  – The new (erroneous) low-frequency sine wave is an alias/ghost of the high-frequency one
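A tiny numerical demonstration of the alias (the frequencies are chosen for illustration): a 9 Hz cosine sampled at 10 samples per second produces exactly the same samples as a 1 Hz cosine, so the two are indistinguishable after sampling:

```python
import numpy as np

n = np.arange(20)                    # sample indices, sampling rate 10 Hz
t = n / 10.0
high = np.cos(2 * np.pi * 9 * t)     # 9 Hz: above the Nyquist limit of 5 Hz
low = np.cos(2 * np.pi * 1 * t)      # 1 Hz: the alias/ghost
print(np.allclose(high, low))        # -> True
```

Algebraically, cos(2π·9·n/10) = cos(2πn − 2πn/10) = cos(2π·1·n/10) at integer n.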

[Figure: input high-frequency sine vs. reconstructed low-frequency alias]

Pre- vs. Post-Aliasing

• If the sampling density is too low, no reconstruction method can recover the original function
  – This is pre-aliasing: information was lost in sampling
• Poor reconstruction is post-aliasing

[Plots: pre-aliasing (information lost in sampling) vs. post-aliasing (poor reconstruction)]

Recap: Texture Aliasing

Aliasing – What Does it Look Like?

[Images]
“Nearest neighbor”: a nasty mess
“Tri-linear MIP-mapping”: the mess gets blurred out

Questions?

Sampling Density Revisited

• If we sample too coarsely, the samples can be mistaken for something simpler during reconstruction
• When does this happen?
  – “The signal must not contain frequencies higher than half the sampling density” (Nyquist, Shannon, and others)

[Figure: input vs. reconstructed signal]

Sampling Density Revisited

• So, in order to prevent aliasing, we should either:
  – Filter the high frequencies out from the signal before sampling (this is called “low-pass filtering”)
  – Sample at a higher rate

[Figure: input vs. reconstructed signal]

Solution?

• Removing high frequencies = low-pass filtering = blur
• In practice, we blur before sampling to stop high frequencies from messing up our image
  – In audio, use an analog low-pass filter before sampling
  – For ray tracing/rasterization: compute the image at a higher resolution, blur, then resample at a lower resolution
    • Not exactly a proper low-pass prefilter, because aliasing still happens at the higher resolution, but it’s all we can do
  – For textures, we blur the texture before doing the lookup (no need to compute at a higher resolution first)
• To understand what really happens, we need semi-serious math (Fourier transforms)

Digital Cameras

• Most digital cameras have an optical low-pass filter in front of the sensor
  – Yes, it’s essentially a piece of frosted glass (maitolasi)
• Some cameras don’t
  – See the comparison between the Nikon D800 and D800E

Blurring Removes High Frequencies

[Images: original image vs. high frequencies removed by blur]

Types of Blur

• Blur = local weighted average
• Different set of weights => different blur
  – e.g.: box, Gaussian, bilinear, bicubic

Box filter: Equally-weighted average of N samples
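A box blur as a discrete convolution can be sketched in a few lines (the 5-tap width and the impulse input are illustrative choices): every output value is the equal-weight average of five neighboring input samples.

```python
import numpy as np

signal = np.array([0.0, 0.0, 0.0, 10.0, 0.0, 0.0, 0.0])  # an impulse, for illustration
box = np.ones(5) / 5.0                                    # equal weights summing to 1
blurred = np.convolve(signal, box, mode='same')
print(blurred)   # the impulse is spread into five equal values of 2.0
```

Swapping `box` for Gaussian or bicubic weights gives the other blur types listed above.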

Types of Blur

• Blur = local weighted average
• Different set of weights => different blur
  – e.g.: box, Gaussian, bilinear, bicubic

Bicubic filter: A wider filter with small negative lobes

Filter Examples

Notice that there are still sharp features in the result

[Images: original vs. box filter]

Filter Examples

Notice that there are still sharp features in the result

[Images: Gaussian blur vs. box filter]

Filter Examples

No sharp edges like box, but overall sharper (good)

[Images: Gaussian blur vs. bicubic filter]

Blurring: (Discrete) Convolution

[Figure: convolution of an input signal with a kernel]

Convolution of a function and a kernel gives another function. All values in the output function are locally weighted averages of the input function.

Blurring: (Discrete) Convolution

[Figure: convolution of a sine wave with a kernel]

Same shape, just reduced contrast!!!

The sine wave is an eigenvector (the output is the input multiplied by a constant).
This means sines and cosines are particularly simple for convolution.

Animated Convolution

• See Wikipedia

(Convolutional Neural Nets)

• Yes, the CNNs everyone is talking about now are based on precisely this operation
  – Many layers of convolutions where the kernels are learned
  – Layers separated by point-wise nonlinearities (e.g. ReLU)

• See wikipedia or Goodfellow’s Deep Learning book

Types of Convolution

• That was discrete; the functions and the kernel were sampled representations defined on a grid
• There is also a continuous convolution
  – input and output functions and the kernel are defined on a continuous domain
  – the weighted sum is replaced by an integral
• And actually, a semi-discrete one as well!
  – Reconstructing a continuous signal from samples is a convolution too: you convolve the “spiky” samples with a continuous reconstruction kernel

[Figure sequence: continuous convolution, step by step]
• Filter f(x−xj, y−yj) centered at (xj, yj)
• The filter f(x−xj, y−yj) centered at (xj, yj) times the underlying signal
• The value of the convolution at (xj, yj) is the integral of the filter times the underlying signal: ∫∫ f(x−xj, y−yj) g(x, y) dx dy, where g is the underlying signal
• Low-pass filtered continuous image (convolution of f and the input image) = the previous integral evaluated at all points
• Samples at pixel centers: the samples evaluate the convolution result at the pixel centers

Example: Samples

Example: Reconstruction Kernel

1 Sample * Translated Reconstruction Kernel

All Samples * Translated Kernels

Final Reconstruction = sum of samples * translated kernels

Questions?

Signal Processing 101

• Sampling and filtering are best understood in terms of Fourier analysis
• We already saw aliasing for sine waves: a high-frequency sine wave turns into a low-frequency one when undersampled

Remember Fourier Analysis?

• A signal in the spatial domain has a dual representation in the frequency domain

[Images: the signal in the spatial domain and in the frequency domain]

• This particular signal is band-limited, meaning it has no frequencies above some threshold

Remember Fourier Analysis?

• We can transform from one domain to the other using the Fourier Transform
  – Just a change of basis: represent the image in terms of sines and cosines of different frequencies and orientations
  – An invertible transform

[Diagram: spatial domain ↔ frequency domain, via the Fourier Transform and the Inverse Fourier Transform]

The Fourier basis functions are sines and cosines, each with a certain orientation and frequency

[Images: Fourier basis functions at different points (u, v) of the frequency domain]

Fourier Transforms and Series

• ...are bread-and-butter signal processing tools you learned in your introductory math classes
• You can refresh at Wikipedia, again
  – If the complex exponentials feel weird to you, you can just think of them as doing sines and cosines at once through Euler’s formula exp(ix) = cos(x) + i sin(x)

• Great animated GIF that illustrates the decomposition

Questions?

Convolution

• Some operations that are difficult in the spatial domain are simpler in the frequency domain
• For example, convolution in the spatial domain (local weighted average) is the same as multiplication in the frequency domain
  – This is because sine waves are eigenfunctions!

• And, convolution in the frequency domain is the same as multiplication in the spatial domain
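The convolution theorem is easy to verify numerically for discrete (circular) signals. This is a sketch with an arbitrary random signal and kernel: convolving in the spatial domain matches pointwise multiplication of the spectra.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)   # arbitrary signal
h = rng.standard_normal(64)   # arbitrary kernel

# direct circular convolution: y[m] = sum_j x[j] * h[(m - j) mod N]
direct = np.array([np.sum(x * np.roll(h[::-1], m + 1)) for m in range(64)])

# the same thing via the frequency domain: multiply the spectra
via_fft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

print(np.allclose(direct, via_fft))   # -> True
```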

Low-Pass Filter (image: http://www.reindeergraphics.com)

black means 0, white means 1 => frequencies outside circle get set to zero

High-Pass Filter (image: http://www.reindeergraphics.com)

black means 0, white means 1 => frequencies inside circle get set to zero

Sampling in the Frequency Domain

[Figure: sampling in the frequency domain]
• The original signal and its Fourier transform
• The sampling grid (“impulse train”) and its Fourier transform
• Sampling = multiplication by the impulse train in the spatial domain = convolution with its transform in the frequency domain
• The sampled signal: its Fourier transform contains replicated spectra

Reconstruction

• Sampling gives us discrete samples at the cost of frequency replicas
• If we can get rid of the replicas, we can reconstruct the original signal in the spatial domain
• How do we get rid of the replicas?
  – Low-pass filter, i.e., blur the “spiky” sampled function!

• But there may be overlap between the copies
  – This is aliasing

[Figure: spectrum of the sampled function]

Guaranteeing Proper Reconstruction

• Separate the replicas by removing high frequencies from the original signal (low-pass pre-filtering before sampling)...
• ...or separate the replicas by increasing the sampling density.

• If we can't separate the copies, we will have overlapping frequency spectrum during reconstruction → aliasing.

Recap: Sampling/Reconstruction

• 1/ Prefilter
  – Blur the signal with a given low-pass filter to remove frequencies that would alias when sampled
• 2/ Sample
  – From the continuous to the discrete domain
  – Creates replicas in the frequency domain
• 3/ Reconstruct
  – Get rid of the replicas
  – Another blur: apply a low-pass filter to the “spiky” discrete sampled function

The “Sampling Theorem”

• When sampling a signal at discrete intervals, the sampling frequency must be greater than twice the highest frequency of the input signal in order to be able to reconstruct the original perfectly from the sampled version (Shannon, Nyquist, Whittaker, Kotelnikov)

Questions?

Supersampling in Graphics

• Pre-filtering (blurring before sampling) is hard
  – Our primitives have infinite frequencies (e.g., a mathematical triangle has perfectly sharp edges)
  – And we can usually only take point samples of the image
    • Think of your ray tracer!
  – And it’s still difficult to integrate analytically with the prefilter
    • Possible for lines, or if visibility is ignored – not very interesting...

• Usually, we fall back to supersampling
  – Just bite the bullet and sample more

In Practice: Supersampling

• Your intuitive solution is to compute multiple color values per pixel (at slightly different locations) and average them

[Images: jaggies vs. with antialiasing]

In Practice: Supersampling

• Your intuitive solution is to compute multiple color values per pixel and average them
• A better interpretation of the same idea is that
  – You first render a high-resolution image
  – You blur it (low-pass, prefilter)
  – You resample it at a lower resolution

Uniform Supersampling

• Compute the image at resolution (k*width, k*height)
• Downsample using a low-pass filter (e.g. Gaussian, bicubic, etc.)
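The two steps can be sketched as follows. This is a toy illustration: the "scene" is just a hypothetical point-sampled edge, and a box filter stands in for the better (Gaussian, bicubic) downsampling filter the slide recommends.

```python
import numpy as np

def render(w, h):
    """Toy point-sampled image: white above the line y = x/2, black below."""
    ys, xs = np.mgrid[0:h, 0:w]
    return ((ys + 0.5) > (xs + 0.5) / 2.0).astype(float)

def supersample(w, h, k=4):
    hi = render(w * k, h * k)                    # 1) render at k-times resolution
    # 2)+3) blur with a k*k box and resample: average each k*k block
    return hi.reshape(h, k, w, k).mean(axis=(1, 3))

img = supersample(8, 8, k=4)
# pixels along the edge get fractional coverage instead of hard jaggies
```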

Uniform Supersampling

• Advantage
  – The first (super)sampling captures more high frequencies that are not aliased
  – Downsampling can use a good filter
• Issue
  – Frequencies above the (super)sampling limit are still aliased
  – Works well for edges ✔
  – Not as well for repetitive textures ✘
  – We only pushed the problem farther
  – But solutions exist

Related Idea: Multisampling

• Problem
  – Shading is very expensive today (complicated shaders)
  – Full supersampling has linear cost in the number of samples (k*k)
• Goal: high-quality edge antialiasing at lower cost
• Solution
  – Compute shading only once per pixel for each primitive, but resolve visibility at the “sub-pixel” level
    • Store (k*width, k*height) frame and z-buffers, but share shading results between sub-pixels within a real pixel
  – When visibility samples within a pixel hit different primitives, we get an average of their colors
    • Edges get antialiased without large shading cost

Multisampling, Visually

= sub-pixel visibility sample

One pixel

Multisampling, Visually

= sub-pixel visibility sample

One pixel

Multisampling, Visually

= sub-pixel visibility sample

The color is only computed once per pixel per triangle and reused for all the visibility samples that are covered by the triangle.

One pixel

Supersampling, Visually

= sub-pixel visibility sample

When supersampling, we compute colors independently for all the visibility samples.

One pixel

Multisampling Pseudocode

For each triangle
  For each pixel
    if pixel overlaps triangle
      color = shade()  // only once per pixel!
      for each sub-pixel sample
        compute edge equations & z
        if subsample passes edge equations && z < zbuffer[subsample]
          zbuffer[subsample] = z
          framebuffer[subsample] = color

Multisampling Pseudocode

For each triangle
  For each pixel
    if pixel overlaps triangle
      color = shade()  // only once per pixel!
      for each sub-pixel sample
        compute edge equations & z
        if subsample passes edge equations && z < zbuffer[subsample]
          zbuffer[subsample] = z
          framebuffer[subsample] = color

At display time:  // this is called “resolving”
For each pixel
  color = average of subsamples

Multisampling vs. Supersampling

• Supersampling
  – Compute an entire image at a higher resolution, then downsample (blur + resample at a lower resolution)
• Multisampling
  – Supersample visibility, compute expensive shading only once per pixel, reuse shading across visibility samples
• But why?
  – Visibility edges are where supersampling really works
  – Shading can be prefiltered more easily than visibility
• This is how GPUs perform antialiasing these days
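The multisampling pseudocode above can be sketched as a toy software rasterizer. Everything here (the edge function, the flat white stand-in for shade(), the k*k sample grid, the single triangle with no z-buffer) is invented for illustration, not how a real GPU works:

```python
import numpy as np

def edge(ax, ay, bx, by, px, py):
    # >= 0 when (px, py) lies on the inner side of the directed edge a -> b
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_msaa(tri, w, h, k=2):
    (x0, y0), (x1, y1), (x2, y2) = tri          # vertices in pixel coordinates
    fb = np.zeros((h, w, k * k))                # sub-sample framebuffer (gray)
    offsets = [((i + 0.5) / k, (j + 0.5) / k) for j in range(k) for i in range(k)]
    for y in range(h):
        for x in range(w):
            color = 1.0                         # "shade()" once per pixel per triangle
            for s, (dx, dy) in enumerate(offsets):
                px, py = x + dx, y + dy         # sub-pixel visibility sample
                if (edge(x0, y0, x1, y1, px, py) >= 0 and
                        edge(x1, y1, x2, y2, px, py) >= 0 and
                        edge(x2, y2, x0, y0, px, py) >= 0):
                    fb[y, x, s] = color         # coverage: reuse the pixel's color
    return fb.mean(axis=2)                      # resolve: average the sub-samples

img = rasterize_msaa([(0, 0), (8, 0), (0, 8)], 8, 8, k=2)
```

Pixels fully inside the triangle come out 1.0, pixels outside 0.0, and pixels on the edge get fractional coverage, i.e. an antialiased edge at one shading evaluation per pixel.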

Questions?

BUT: Supersampling has its limits

1 Sample / Pixel

Supersampling, 16 Samples / Pixel

100 Samples / Pixel

Even this sampling rate cannot get rid of all aliasing artifacts!

We are really only pushing the problem farther.

Questions?

Texture Filtering

• Problem: prefiltering is impossible when you can only take point samples
  – This is why visibility (edges) needs supersampling
• Texture mapping is simpler
  – Imagine again that we are looking at an infinite textured plane

textured plane

Texture Filtering

• We should pre-filter the image function before sampling
  – That means blurring the image function with a low-pass filter (a convolution of the image function and the filter)

textured plane

Low-pass filter

Texture Filtering

• We can combine low-pass filtering and sampling
  – The value of a sample is the integral of the product of the image f and the filter h centered at the sample location
    • “A local average of the image f weighted by the filter h”

f̂_i = ∫ f(x) h(x) dx

[Figure: textured plane, low-pass filter]

Texture Filtering

• Well, we can just as well change variables and compute this integral on the textured plane instead
  – In effect, we are projecting the pre-filter onto the plane

textured plane

Low-pass filter

Texture Filtering

• Well, we can just as well change variables and compute this integral on the textured plane instead
  – In effect, we are projecting the pre-filter onto the plane
  – It’s still a weighted average of the texture under the filter

f̂_i = ∫_plane f(x̄) h(x̄) |J(x̄)| dx̄    (J = Jacobian of the change of variables)

textured plane

Just the usual change-of-variables formula you learned in math class

Low-pass filter

Texture Pre-Filtering, Visually

[Figure: textured surface (texture map) and image plane; image adapted from McCormack et al.]

[Figure labels: image-space filter; image-space filter projected onto the plane]
• We must still integrate the product of the projected filter and the texture
  – That doesn’t sound any easier...

Solution: Precomputation

• We’ll precompute and store a set of prefiltered results from each texture with different sizes of prefilters

Solution: Precomputation

• We’ll precompute and store a set of prefiltered results from each texture with different sizes of prefilters

Solution: Precomputation

• We’ll precompute and store a set of prefiltered results from each texture with different sizes of prefilters – Because it’s low-passed, we can also subsample

Solution: Precomputation

• We’ll precompute and store a set of prefiltered results from each texture with different sizes of prefilters – Because it’s low-passed, we can also subsample

This is Called “MIP-Mapping”

• Construct a pyramid of images that are pre-filtered and re-sampled at 1/2, 1/4, 1/8, etc., of the original image’s sampling rate
• During rasterization we compute the index of the decimated image that is sampled at a rate closest to our desired sampling rate
• MIP stands for multum in parvo, “much in a small place”
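Building the pyramid can be sketched in a few lines. This toy version assumes a square, power-of-two, single-channel texture and uses repeated 2x2 box averaging (real mipmap generation may use better filters):

```python
import numpy as np

def build_mipmaps(tex):
    """Repeatedly 2x2-box-filter and 2x-subsample a square power-of-two texture."""
    levels = [tex]
    while levels[-1].shape[0] > 1:
        t = levels[-1]
        h, w = t.shape
        levels.append(t.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return levels

tex = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 texture
pyramid = build_mipmaps(tex)
print([lvl.shape for lvl in pyramid])   # -> [(8, 8), (4, 4), (2, 2), (1, 1)]
```

The 1x1 top level is simply the average of the whole texture, as box averaging preserves the mean.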

MIP-Mapping

• When a pixel wants an integral of the pre-filtered texture, we must find the “closest” results from the precomputed MIP-map pyramid
  – We must compute the “size” of the projected pre-filter in the texture UV domain

[Figure: projected pre-filter]

MIP-Mapping

• When a pixel wants an integral of the pre-filtered texture, we must find the “closest” results from the precomputed MIP-map pyramid
  – We must compute the “size” of the projected pre-filter in the texture UV domain
  – Simplest method: just pick the one scale that is closest (in some sense), then perform the usual reconstruction on that level (e.g. bilinear)

[Figure: projected pre-filter]

Tri-Linear MIP-Mapping

• Next-simplest method: see which two scales are closest, compute reconstruction results from both, and linearly interpolate between them
  – When using bilinear reconstruction on each scale, this is called “tri-linear MIP-mapping”

[Figure: projected pre-filter]

• Note that we are not getting the correct answer, because the precomputed filters are not the right shape

Anisotropic MIP-Mapping

• Anisotropic MIP-mapping approximates the true prefilter with a number of samples from different levels of the pyramid
  – Mostly black magic

[Figure: projected pre-filter]

MIP Mapping Example

[Images: nearest neighbor vs. MIP-mapped (tri-linear)]

MIP Mapping Example

• Small details may "pop" in and out of view

[Images: nearest neighbor vs. MIP-mapped (tri-linear)]

MIP Mapping Example

[Images: nearest neighbor / point sampling vs. linear interpolation (tri-linear)]

Storing MIP Maps

• Can be stored compactly: Only 1/3 more space!
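The "only 1/3 more" figure follows from a geometric series: each level has 1/4 as many texels as the one below it, so the extra levels together cost 1/4 + 1/16 + 1/64 + ... = 1/3 of the base texture. A quick numerical check:

```python
# each pyramid level above the base has 1/4 the texels of the level below
extra = sum(0.25 ** i for i in range(1, 30))   # 1/4 + 1/16 + 1/64 + ...
print(round(extra, 6))   # -> 0.333333, i.e. one third of the base level
```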

Finding the MIP Level

• Often we think of the pre-filter as a box
  – What is the projection of the rectangular pixel “window” in texture space?
  – MIP-map prefilters are symmetric (isotropic), so there is no one right way to pick the level

[Figure: projected pre-filter]

Finding the MIP Level

• Often we think of the pre-filter as a box
  – What is the projection of the rectangular pixel “window” in texture space?
  – MIP-map prefilters are symmetric (isotropic), so there is no one right way to pick the level
  – The answer is in the partial derivatives p_x = (du/dx, dv/dx) and p_y = (du/dy, dv/dy) of (u,v) w.r.t. screen (x,y)

[Figure: projection of the pixel center and the projected pre-filter]

Finding the MIP Level

• The two most common approaches are
  – Pick the level according to the length (in texels) of the longer partial: log₂ max{ w·|p_x|, h·|p_y| }
  – Pick the level according to the length of their sum: log₂ √( (w·|p_x|)² + (h·|p_y|)² )
• Here p_x = (du/dx, dv/dx), p_y = (du/dy, dv/dy), and w, h are the texture dimensions in texels

[Figure: projection of the pixel center and the projected pre-filter]

Questions?

How Are Partials Computed?

• You can derive closed-form formulas based on the uv and xyw coordinates of the vertices...
  – This is what used to be done
• ...but shaders may compute texture coordinates programmatically, not necessarily interpolated
  – There is no way of getting analytic derivatives!

• In practice, use finite differences
  – GPUs process pixels in blocks of (at least) 4 anyway
    • These 2x2 blocks are called quads
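A hypothetical sketch of the quad trick (the layout and values are invented for illustration; real GPUs do this in hardware): with the (u,v) coordinates of a 2x2 quad in hand, horizontal and vertical neighbor differences approximate du/dx, dv/dx and du/dy, dv/dy.

```python
import numpy as np

def quad_derivatives(uv):
    """Finite-difference derivatives over a 2x2 quad; uv[y][x] = (u, v)."""
    uv = np.asarray(uv, dtype=float)        # shape (2, 2, 2)
    ddx = uv[:, 1] - uv[:, 0]               # horizontal neighbor difference
    ddy = uv[1, :] - uv[0, :]               # vertical neighbor difference
    return ddx, ddy

# toy quad: u grows by 0.1 per pixel in x, v grows by 0.2 per pixel in y
ddx, ddy = quad_derivatives([[(0.0, 0.0), (0.1, 0.0)],
                             [(0.0, 0.2), (0.1, 0.2)]])
```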

Texture Filtering: Magnification

• Pre-aliasing problems happen when the function varies too fast for the samples to capture
• But when we are looking at a texture closer than its sampling rate, poor reconstruction (post-aliasing) is apparent
  – Must also use proper filtering...
  – If there is not enough texture resolution, the result is blurry
    • Oh well, not much you can do there!

[Images: nearest neighbor vs. bilinear magnification]

Elliptical Weighted Average (EWA)
• Isotropic filter w.r.t. screen space
  – Becomes anisotropic in texture space
• Use e.g. an anisotropic Gaussian
• Very, very high quality, but slow
  – No hardware support

EWA Filtering in Action

Image Quality Comparison

• EWA vs. tri-linear MIP-mapping

[Images: EWA vs. trilinear mipmapping]

Further Reading

• Paul Heckbert published seminal work on texture mapping and filtering in his master’s thesis (!)
  – Including EWA
  – Highly recommended reading!
  – See http://www.cs.cmu.edu/~ph/texfund/texfund.pdf
• More reading
  – “Feline: Fast Elliptical Lines for Anisotropic Texture Mapping”, McCormack, Perry, Farkas, Jouppi, SIGGRAPH 1999
  – “Texram: A Smart Memory for Texturing”, Schilling, Knittel, Strasser, IEEE CG&A, 16(3): 32–41

That’s All for Today!

• Let’s see a cool real-time video to get back from mathland!
