Sampling and Monte-Carlo Integration

MIT EECS 6.837

Last Time

• Images are samples
• Sampling theorem
• Convolution & multiplication
• Aliasing: spectrum replication
• Ideal filter
  – And its problems
• Reconstruction
• Texture prefiltering, mipmaps

Quiz solution: Homogeneous sum

• (x1, y1, z1, 1) + (x2, y2, z2, 1) = (x1+x2, y1+y2, z1+z2, 2)
  ≈ ((x1+x2)/2, (y1+y2)/2, (z1+z2)/2)
• This is the average of the two points
• General case: consider the homogeneous version of (x1, y1, z1) and (x2, y2, z2) with w coordinates w1 and w2
• (x1w1, y1w1, z1w1, w1) + (x2w2, y2w2, z2w2, w2) = (x1w1+x2w2, y1w1+y2w2, z1w1+z2w2, w1+w2)
  ≈ ((x1w1+x2w2)/(w1+w2), (y1w1+y2w2)/(w1+w2), (z1w1+z2w2)/(w1+w2))
• This is the weighted average of the two geometric points
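To make the quiz answer concrete, here is a minimal Python sketch (the function name and tuple representation are illustrative, not from the lecture):

    def homogeneous_average(p1, p2):
        # p1, p2 are homogeneous 4-vectors (x*w, y*w, z*w, w); adding them
        # and dividing by the summed w gives the w-weighted average of
        # the two geometric points
        x1, y1, z1, w1 = p1
        x2, y2, z2, w2 = p2
        w = w1 + w2
        return ((x1 + x2) / w, (y1 + y2) / w, (z1 + z2) / w)

    # Point (1,0,0) with weight 1 and point (3,0,0) with weight 3:
    print(homogeneous_average((1, 0, 0, 1), (9, 0, 0, 3)))  # (2.5, 0.0, 0.0)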


Today’s lecture

• Antialiasing in graphics
• Sampling patterns
• Monte-Carlo Integration
• Probabilities and variance
• Analysis of Monte-Carlo Integration

Ideal sampling/reconstruction

• Pre-filter with a perfect low-pass filter
  – Box in frequency
  – Sinc in time
• Sample at Nyquist limit
  – Twice the frequency cutoff
• Reconstruct with perfect filter
  – Box in frequency, sinc in time
• And everything is great!
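As a concrete illustration of the reconstruction step (a minimal sketch, not from the lecture), ideal reconstruction sums shifted sincs weighted by the sample values:

    import numpy as np

    def sinc_reconstruct(samples, T, t):
        # Ideal reconstruction: sum of sincs centered at the sample
        # positions k*T, weighted by the sample values
        # (np.sinc(x) = sin(pi x) / (pi x))
        return sum(s * np.sinc((t - k * T) / T)
                   for k, s in enumerate(samples))

    # A 1 Hz sine sampled at 8 Hz (above the Nyquist rate) is recovered:
    T = 1 / 8
    samples = np.sin(2 * np.pi * np.arange(32) * T)
    t = np.linspace(1, 3, 200)          # evaluate away from the boundary
    reconstructed = sinc_reconstruct(samples, T, t)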


Difficulties with perfect sampling

• Hard to prefilter
• Perfect filter has infinite support
  – Fourier analysis assumes infinite signal and complete knowledge
  – Not enough focus on local effects
• And negative lobes
  – Emphasizes the two problems above
  – Negative light is bad
  – Ringing artifacts if prefiltering or supports are not perfect

At the end of the day

• Fourier analysis is great to understand aliasing
• But practical problems kick in
• As a result there is no perfect solution
• Compromises between
  – Finite support
  – Avoid negative lobes
  – Avoid high-frequency leakage
  – Avoid low-frequency attenuation
• Everyone has their favorite cookbook recipe
  – Gaussian, tent, Mitchell bicubic


The special case of edges

• An edge is poorly captured by Fourier analysis
  – It is a local feature
  – It combines all frequencies (sinc)
• Practical issues with edge aliasing lie more in the jaggies (tilted lines) than in the actual spectrum replication

Anisotropy of the sampling grid

• More vertical and horizontal bandwidth
  – E.g. less bandwidth in diagonal directions
• A hexagonal grid would be better
  – Max anisotropy
  – But less practical


Anisotropy of the sampling grid

• More vertical and horizontal bandwidth
• A hexagonal grid would be better
  – But less practical
• Practical effect: vertical and horizontal directions show when doing bicubic upsampling

[Figure: low-res image and its bicubic upsampling]

Philosophy about mathematics

• Mathematics are great tools to model (i.e. describe) your problems
• They afford incredible power, formalism, generalization
• However it is equally important to understand the practical problem and how much the mathematical model fits

Questions?

Today’s lecture

• Antialiasing in graphics
• Sampling patterns
• Monte-Carlo Integration
• Probabilities and variance
• Analysis of Monte-Carlo Integration


Supersampling in graphics

• Pre-filtering is hard
  – Requires analytical visibility
  – Then difficult to integrate analytically with the filter
• Possible for lines, or if visibility is ignored
• Usually, fall back to supersampling

Uniform supersampling

• Compute image at resolution k*width, k*height
• Downsample using a low-pass filter (e.g. Gaussian, sinc, bicubic)
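A minimal Python sketch of this pipeline (illustrative only: render_at is a hypothetical renderer, and a box filter stands in for the Gaussian/sinc/bicubic options above):

    import numpy as np

    def supersample(render_at, width, height, k):
        # Render at k*width x k*height; render_at is assumed to return
        # a float image of shape (height, width)
        hi = render_at(k * width, k * height)
        # Box-filter downsample: average each k x k block of subpixels
        return hi.reshape(height, k, width, k).mean(axis=(1, 3))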


Uniform supersampling

• Advantage:
  – The first (super)sampling captures more high frequencies that are not aliased
  – Downsampling can use a good filter
• Issues
  – Frequencies above the (super)sampling limit are still aliased
• Works well for edges, since spectrum replication is less an issue
• Not as well for repetitive textures
  – But mipmapping can help

Multisampling vs. supersampling

• Observation:
  – Edge aliasing mostly comes from visibility/rasterization issues
  – Texture aliasing can be prevented using prefiltering
• Multisampling idea:
  – Sample rasterization/visibility at a higher rate than shading/texture
• In practice, same as supersampling, except that all the subpixels get the same color if visible


Multisampling vs. supersampling

For each triangle
    For each pixel
        Compute pixelcolor   // only once for all subpixels
        For each subpixel
            If (all edge equations positive && zbuffer[subpixel] > currentz)
                Then framebuffer[subpixel] = pixelcolor

• The subpixels of a pixel get different colors only at edges of triangles or at occlusion boundaries

[Figure: example with 2 Gouraud-shaded triangles; subpixels in supersampling vs. subpixels in multisampling]

Questions?

Uniform supersampling

• Problem: supersampling only pushes the problem further: the signal is still not bandlimited
• Aliasing happens

[Figure: Moiré patterns]

Jittering

• Uniform sample + random perturbation
• Sampling is now non-uniform
  – Fourier analysis gets more complex
• In practice, adds noise to image
• But noise is better than aliasing
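A minimal Python sketch of jittered sampling over a pixel grid (names and defaults are illustrative):

    import random

    def jittered_samples(nx, ny, jitter=1.0):
        # One sample per grid cell, displaced by a random fraction of
        # the cell size; jitter=1 fills the whole cell (stratified sampling)
        samples = []
        for j in range(ny):
            for i in range(nx):
                dx = 0.5 + jitter * (random.random() - 0.5)
                dy = 0.5 + jitter * (random.random() - 0.5)
                samples.append(((i + dx) / nx, (j + dy) / ny))
        return samples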


Jittered supersampling

[Figure: regular vs. jittered supersampling patterns]

Jittering

• Displaced by a vector a fraction of the size of the subpixel distance
• Low-frequency Moiré (aliasing) pattern replaced by noise
• Extremely effective
• Patented by Pixar!
• When the jittering amount is 1, equivalent to stratified sampling (cf. later)


Poisson disk sampling and blue noise

• Essentially random points that are not allowed to be closer than some radius r
• Dart-throwing algorithm:

    Initialize sampling pattern as empty
    Do
        Get random point P
        If P is farther than r from all samples
            Add P to sampling pattern
    Until unable to add samples for a long time

[Figure: Poisson disk pattern with minimum distance r]

Poisson disk sampling and blue noise

• Essentially random points that are not allowed to be closer than some radius r
• The spectrum of the Poisson disk pattern is called blue noise:
  – No low frequency
• Other criterion: isotropy (frequency content must be the same for all directions)

[Figure: Fourier transform (power spectrum) and anisotropy per direction, from Hiller et al.]
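A direct Python transcription of the dart-throwing pseudocode above (a sketch; the failure cap is one possible reading of "unable to add samples for a long time"):

    import random

    def dart_throwing(r, max_failures=10000):
        # Poisson disk sampling in the unit square
        samples, failures = [], 0
        while failures < max_failures:
            p = (random.random(), random.random())
            # Accept P only if it is farther than r from all current samples
            if all((p[0] - q[0])**2 + (p[1] - q[1])**2 >= r * r
                   for q in samples):
                samples.append(p)
                failures = 0
            else:
                failures += 1
        return samples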


Recap

• Uniform supersampling
  – Not so great
• Jittering
  – Great, replaces aliasing by noise
• Poisson disk sampling
  – Equally good, but harder to generate
  – Blue noise and good (lack of) anisotropy

Adaptive supersampling

• Use more sub-pixel samples around edges


Adaptive supersampling

• Use more sub-pixel samples around edges

    Compute color at a small number of samples
    If their variance is high
        Compute a larger number of samples

Adaptive supersampling

• Use more sub-pixel samples around edges

    Compute color at a small number of samples
    If variance with neighbor pixels is high
        Compute a larger number of samples
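A minimal Python sketch of the first variant (shade is a hypothetical per-sample shading function; the threshold and sample counts are arbitrary):

    import random, statistics

    def adaptive_pixel(shade, n_initial=4, n_more=16, threshold=0.01):
        # Shade a small number of samples in the pixel (unit square here)
        pts = [(random.random(), random.random()) for _ in range(n_initial)]
        values = [shade(x, y) for x, y in pts]
        # If the initial samples disagree, the pixel likely contains an
        # edge: shade a larger number of samples before averaging
        if statistics.pvariance(values) > threshold:
            values += [shade(random.random(), random.random())
                       for _ in range(n_more)]
        return sum(values) / len(values)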


Problem with non-uniform distribution

• Reconstruction can be complicated

[Figure: 80% of the samples are black, yet the pixel should be light grey]

Problem with non-uniform distribution

• Reconstruction can be complicated
• Solution: do a multi-level reconstruction
  – Reconstruct uniform sub-pixels
  – Filter those uniform sub-pixels


Recap

• Uniform supersampling
  – Not so great
• Jittering
  – Great, replaces aliasing by noise
• Poisson disk sampling
  – Equally good, but harder to generate
  – Blue noise and good (lack of) anisotropy
• Adaptive sampling
  – Focus computation where needed
  – Beware of false negatives
  – Complex reconstruction

Questions?


Today’s lecture

• Antialiasing in graphics
• Sampling patterns
• Monte-Carlo Integration
• Probabilities and variance
• Analysis of Monte-Carlo Integration

Shift of perspective

• So far, antialiasing as signal processing
• Now, antialiasing as integration
• Complementary yet not always the same


Why integration?

• Simple version: compute pixel coverage
• More advanced: filtering (convolution) is an integral
  pixel = ∫ filter * color
• And integration is useful in tons of places in graphics

Monte-Carlo computation of π

• Take a square
• Take a random point (x, y) in the square
• Test if it is inside the ¼ disc (x² + y² < 1)
• The probability is π/4

[Figure: quarter disc inside the unit square, axes x and y]

Monte-Carlo computation of π

• The probability is π/4
• Count the inside ratio n = # inside / total # trials
• π ≈ n * 4
• The error depends on the number of trials

Why not use Simpson integration?

• Yeah, to compute π, Monte Carlo is not very efficient
• But convergence is independent of dimension
• Better to integrate high-dimensional functions
• For d dimensions, Simpson requires N^d domains


Dumbest Monte-Carlo integration

• Compute 0.5 by flipping a coin
• 1 flip: 0 or 1 => average error = 0.5
• 2 flips: 0, 0.5, 0.5, or 1 => average error = 0.25
• 4 flips: 0 (×1), 0.25 (×4), 0.5 (×6), 0.75 (×4), 1 (×1) => average error = 0.1875
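These average errors can be verified by enumerating all equally likely flip sequences (a Python sketch):

    from itertools import product

    def average_error(n_flips):
        # Mean |estimate - 0.5| over all 2^n equally likely flip sequences
        total = sum(abs(sum(seq) / n_flips - 0.5)
                    for seq in product((0, 1), repeat=n_flips))
        return total / 2**n_flips

    print([average_error(n) for n in (1, 2, 4)])   # [0.5, 0.25, 0.1875]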

• Does not converge very fast
• Doubling the number of samples does not double accuracy

Questions?


Today’s lecture

• Antialiasing in graphics
• Sampling patterns
• Monte-Carlo Integration
• Probabilities and variance
• Analysis of Monte-Carlo Integration

Review of probability (discrete)

• Random variable can take discrete values xi
• Probability pi for each xi
  – 0 ≤ pi ≤ 1
  – If the events are mutually exclusive, Σ pi = 1
• Expected value: E[x] = Σ pi xi
• Expected value of a function of the random variable: E[f(x)] = Σ pi f(xi)
  – f(xi) is also a random variable


Ex: fair dice

• Six values 1…6, each with probability 1/6
• Expected value: E[x] = (1 + 2 + 3 + 4 + 5 + 6) / 6 = 3.5

Variance & standard deviation

• Variance σ²: measure of deviation from the expected value
• Expected value of the squared difference (MSE):
  σ² = E[(x - E[x])²]
• Also: σ² = E[x²] - (E[x])²
• Standard deviation σ: square root of the variance (notion of error, RMS)
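These definitions can be checked on the fair-die example (a Python sketch):

    values = [1, 2, 3, 4, 5, 6]
    p = 1 / 6                                        # fair die
    E = sum(p * x for x in values)                   # 3.5
    var = sum(p * (x - E)**2 for x in values)        # E[(x - E[x])^2] ~ 2.917
    var_alt = sum(p * x * x for x in values) - E**2  # E[x^2] - E[x]^2, same
    sigma = var ** 0.5                               # ~1.708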


Questions?

Continuous random variables

• Real-valued random variable x
• Probability density function (PDF) p(x)
  – Probability of a value between x and x+dx is p(x) dx
• Cumulative distribution function (CDF) P(y):
  – Probability to get a value lower than y: P(y) = ∫ p(x) dx, integrated from -∞ to y
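For example (a sketch, anticipating the uniform distribution of the next slide), the CDF can be accumulated numerically from the PDF:

    import numpy as np

    a, b = 2.0, 5.0
    x = np.linspace(a, b, 1001)
    pdf = np.full_like(x, 1 / (b - a))     # p(x) = 1/(b-a) on [a, b]
    dx = x[1] - x[0]
    cdf = np.cumsum(pdf) * dx              # accumulates to ~1 at x = b
    # Analytically: P(y) = (y - a) / (b - a) on [a, b]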


Properties

• p(x) ≥ 0 but can be greater than 1 !!!!
  – E.g. a uniform density on [0, 0.1] has p(x) = 10 on that interval
• P is positive and non-decreasing


Example

• Uniform distribution between a and b:
  p(x) = 1/(b-a) for a ≤ x ≤ b, 0 otherwise
• Dirac distribution δ(x-a):
  all the probability mass concentrated at the single value a

Expected value

• E[f(x)] = ∫ f(x) p(x) dx
• Expected value is linear:
  E[f1(x) + a f2(x)] = E[f1(x)] + a E[f2(x)]
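A quick numerical sanity check of both points (a Python sketch): estimate E[f(x)] by averaging f over random samples, and observe linearity:

    import random

    def expected(f, sample, n=100_000):
        # Monte-Carlo estimate of E[f(x)] for x drawn from sample()
        return sum(f(sample()) for _ in range(n)) / n

    u = random.random                          # uniform on [0, 1)
    e1 = expected(lambda x: x, u)              # ~0.5
    e2 = expected(lambda x: x * x, u)          # ~1/3
    e3 = expected(lambda x: x + 2 * x * x, u)  # ~e1 + 2*e2, by linearity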


Variance

• Variance is not linear !!!!
• σ²[x+y] = σ²[x] + σ²[y] + 2 Cov[x,y]
• where Cov is the covariance
  – Cov[x,y] = E[xy] - E[x] E[y]
  – Measures how much x and y tend to be large at the same time
  – Null if the variables are independent
• But σ²[ax] = a² σ²[x]
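Both identities are easy to check numerically (a Python sketch, using independent uniform variables so that the covariance term vanishes):

    import random, statistics

    xs = [random.random() for _ in range(100_000)]
    ys = [random.random() for _ in range(100_000)]

    # Independent variables: Var[x + y] ~ Var[x] + Var[y] (Cov term ~ 0)
    print(statistics.pvariance([x + y for x, y in zip(xs, ys)]),
          statistics.pvariance(xs) + statistics.pvariance(ys))

    # Scaling: Var[3x] = 9 Var[x]
    print(statistics.pvariance([3 * x for x in xs]),
          9 * statistics.pvariance(xs))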

