CS6640 Computational Photography
11. Gradient Domain Image Processing
© 2012 Steve Marschner
Cornell CS6640, Fall 2012

Problems with direct copy/paste
[figure: seams from naive copy/paste; slide by Frédo Durand, MIT; from Pérez et al. 2003]

Image gradient
• Gradient: derivative of a function Rⁿ → R (n = 2 for images)

    ∇f = [ ∂f/∂x   ∂f/∂y ] = [ f_x   f_y ]

• Note it turns a function R² → R into a function R² → R².
• Most such functions are not the derivative of anything!
• How do you tell if some function g is the derivative of something? In 2D it is simple: the mixed partials must be equal (g is conservative),

    ∂g_x/∂y = ∂g_y/∂x,   because g = ∇f and f_xy = f_yx.

A nonconservative gradient?
[figure: M.C. Escher, Ascending and Descending, 1960, lithograph, 35.5 × 28.5 cm]

Gradient: intuition
[figure sequence: slides by Frédo Durand, MIT]

Key gradient domain idea
1. Construct a vector field that we wish was the gradient of our output image.
2. Look for an image that has that gradient.
3. That won't work, so look for an image that has approximately the desired gradient.

Gradient domain image processing is all about clever choices for (1) and efficient algorithms for (3).

Solution: paste gradient
[figure: hacky visualization of the pasted gradient; slide by Frédo Durand, MIT]

Problem setup
Given a desired gradient g on a domain D, and some constraints on a subset B of the domain:

    g : D → R²,   B ⊂ D,   f* : B → R

find a function f on D that fits the constraints and has a gradient close to g:

    min_f ‖∇f − g‖²   subject to   f|_B = f*

Since the gradient is a linear operator, this is a (constrained) linear least squares problem.
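The conservativeness test can be checked numerically. A minimal sketch (the function names `gradient` and `curl_residual` are mine, not from the slides) using forward differences:

```python
import numpy as np

def gradient(f):
    """Forward-difference gradient of a 2D array f (an image)."""
    gx = np.diff(f, axis=1)  # shape (H, W-1): f[i, j+1] - f[i, j]
    gy = np.diff(f, axis=0)  # shape (H-1, W): f[i+1, j] - f[i, j]
    return gx, gy

def curl_residual(gx, gy):
    """Discrete mixed-partial mismatch d(gx)/dy - d(gy)/dx.
    Identically zero iff (gx, gy) is the gradient of some image."""
    return np.diff(gx, axis=0) - np.diff(gy, axis=1)

# The gradient of any image passes the test exactly ...
f = np.arange(30.0).reshape(5, 6) ** 2
gx, gy = gradient(f)
print(np.abs(curl_residual(gx, gy)).max())           # 0.0

# ... but an arbitrary vector field generally does not.
bad_gx = np.zeros((5, 5))
bad_gy = np.tile(np.arange(6.0), (4, 1))  # d(bad_gy)/dx = 1, d(bad_gx)/dy = 0
print(np.abs(curl_residual(bad_gx, bad_gy)).max())   # 1.0
```

The residual is just the discrete curl: differencing the gradient of a real image in either order gives the same result, so it cancels exactly.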
Discretization
• Of course images are made up of finitely many pixels.
• Use the discrete derivative [−1 1] to approximate the gradient (there are other choices, but this works fine here).
• Minimize the sum-squared rather than the integral-squared difference (the sum is over edges joining neighboring pixels).
• The result is a matrix that maps f to its derivative:

    min_f ‖G f − g‖²   subject to   Π_B f = f*

  where f is the discrete pixels listed as a vector, g the desired gradient, G the discrete gradient operator, and Π_B the projection that selects the constrained pixels in B.

Handling constraints
• To deal with constraints, just leave out the constrained pixels. Split f into its constrained and free parts:

    f = Π_Bᵀ f* + Π_B̄ᵀ f′

    min_f′ ‖G (Π_Bᵀ f* + Π_B̄ᵀ f′) − g‖²
    min_f′ ‖(G Π_B̄ᵀ) f′ − (g − G Π_Bᵀ f*)‖²

  Here G Π_B̄ᵀ is G without the constrained columns, f′ is the shorter vector of unknowns, and g − G Π_Bᵀ f* is the augmented right-hand side.
• The result is an unconstrained problem to be solved for the unknown variables in f′: one column per unknown pixel; one row per neighbor edge (any zero rows can be left out).

Discrete 1D example: minimization
[figure: a 1D signal f1…f6 with boundary values f1 = 6 and f6 = 1 (blue, in the background), unknowns f2…f5 marked "?", and a source signal whose edge differences +1, −1, +2, −1, −1 are to be pasted; orange: pixels outside the mask; red: source pixels to be pasted. Slide by Frédo Durand, MIT.]

Minimize, over the unknowns f2…f5,

    [(f2 − f1) − 1]²
    + [(f3 − f2) − (−1)]²
    + [(f4 − f3) − 2]²
    + [(f5 − f4) − (−1)]²
    + [(f6 − f5) − (−1)]²

with f1 = 6 and f6 = 1.
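The projection trick — drop the constrained columns of G and fold the known values into the right-hand side — fits in a few lines of NumPy. This is my illustration, not code from the course; the example call reproduces the slides' 1D setup (0-based indices):

```python
import numpy as np

def gradient_matrix_1d(n):
    """(n-1) x n forward-difference matrix: (G f)[i] = f[i+1] - f[i]."""
    G = np.zeros((n - 1, n))
    for i in range(n - 1):
        G[i, i], G[i, i + 1] = -1.0, 1.0
    return G

def solve_constrained(n, g, fixed):
    """Minimize ||G f - g||^2 subject to f[i] = v for (i, v) in fixed.
    Constrained pixels are eliminated; the reduced least-squares
    problem is solved for the free ones."""
    G = gradient_matrix_1d(n)
    free = [i for i in range(n) if i not in fixed]
    f = np.zeros(n)
    for i, v in fixed.items():
        f[i] = v
    # Fold the known columns into the right-hand side:
    #   G_free f' = g - G_fixed f*
    rhs = g - G[:, sorted(fixed)] @ np.array([fixed[i] for i in sorted(fixed)])
    f[free] = np.linalg.lstsq(G[:, free], rhs, rcond=None)[0]
    return f

# Boundary f1 = 6, f6 = 1; desired edge differences +1, -1, +2, -1, -1.
f = solve_constrained(6, np.array([1., -1., 2., -1., -1.]), {0: 6.0, 5: 1.0})
# Boundary values are kept: f = [6, 6, 4, 5, 3, 1].
```

A real 2D implementation would build G sparse, but the elimination step is identical.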
1D example: big quadratic
Expanding each squared term (using f1 = 6 and f6 = 1) gives

    [(f2 − f1) − 1]²    = f2² + 49 − 14 f2
    [(f3 − f2) − (−1)]² = f3² + f2² + 1 − 2 f3 f2 + 2 f3 − 2 f2
    [(f4 − f3) − 2]²    = f4² + f3² + 4 − 2 f3 f4 − 4 f4 + 4 f3
    [(f5 − f4) − (−1)]² = f5² + f4² + 1 − 2 f5 f4 + 2 f5 − 2 f4
    [(f6 − f5) − (−1)]² = f5² + 4 − 4 f5

Denote the sum of these five terms Q.

1D example: set derivatives to zero
Setting ∂Q/∂f2 = ∂Q/∂f3 = ∂Q/∂f4 = ∂Q/∂f5 = 0 gives one linear equation per unknown:

    ∂Q/∂f2 = 4 f2 − 2 f3 − 16 = 0
    ∂Q/∂f3 = 4 f3 − 2 f2 − 2 f4 + 6 = 0
    ∂Q/∂f4 = 4 f4 − 2 f3 − 2 f5 − 6 = 0
    ∂Q/∂f5 = 4 f5 − 2 f4 − 2 = 0
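As a numerical check on the expansion, here is the quadratic Q written directly, with "set the derivatives to zero" verified by central differences at the minimizer (f2, f3, f4, f5) = (6, 4, 5, 3); the helper names are mine:

```python
import numpy as np

def Q(f):
    """The 'big quadratic' from the 1D example, with the boundary
    values f1 = 6 and f6 = 1 substituted; f = (f2, f3, f4, f5)."""
    f2, f3, f4, f5 = f
    return ((f2 - 7.0)**2 + (f3 - f2 + 1.0)**2 + (f4 - f3 - 2.0)**2
            + (f5 - f4 + 1.0)**2 + (2.0 - f5)**2)

def grad_Q(f, h=1e-6):
    """Central-difference estimate of the gradient of Q."""
    g = np.zeros(len(f))
    for k in range(len(f)):
        e = np.zeros(len(f)); e[k] = h
        g[k] = (Q(f + e) - Q(f - e)) / (2.0 * h)
    return g

f_star = np.array([6.0, 4.0, 5.0, 3.0])  # the minimizer
print(Q(f_star))                    # 5.0: each of the five edges misses by 1
print(np.abs(grad_Q(f_star)).max()) # ~0: the derivatives vanish at the minimum
```

The residual 5 is no accident: the desired differences sum to 0 but the boundary forces f6 − f1 = −5, so the mismatch of −5 is spread evenly, one unit per edge.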
1D example: recap
[figure: the solved 1D paste; the boundary values are kept and the pasted gradient is reproduced approximately. Slide by Frédo Durand, MIT.]

Matrix structure
• The least squares system reads

    ⎡  1   0   0   0 ⎤            ⎡  7 ⎤
    ⎢ −1   1   0   0 ⎥ ⎡ f2 ⎤    ⎢ −1 ⎥
    ⎢  0  −1   1   0 ⎥ ⎢ f3 ⎥  = ⎢  2 ⎥
    ⎢  0   0  −1   1 ⎥ ⎢ f4 ⎥    ⎢ −1 ⎥
    ⎣  0   0   0  −1 ⎦ ⎣ f5 ⎦    ⎣ −2 ⎦

  and the solution to (GᵀG) f = Gᵀb is the minimizer. (This system is the normal equations for the linear least squares problem.)
• Interesting that GᵀG looks like a second derivative…

Matrix structure in 2D
• The matrix G has:
    one column for each pixel (one per unknown pixel after projection)
    one row for each neighbor edge joining two pixels
    a 1 and a −1 in each row (some rows have just the 1, or are all zero, after projection)
• The matrix GᵀG has one row and one column for each (unknown) pixel.
• Away from constraints, GᵀG implements a convolution with the discrete Laplacian filter

     0  −1   0
    −1   4  −1
     0  −1   0

  No surprise this is a second derivative: it is the derivative applied twice.

The same structure appears if we write the pasting energy directly. To reproduce the gradient of a source image s in the unknown image x, minimize

    xᵀA x − 2 xᵀA s + sᵀA s,   where A = GᵀG.   (1)

We will call this energy Q. Since A = GᵀG is more or less the application of the gradient twice, it corresponds to the second derivative of the image, often called the Laplacian Δ = ∂²/∂x² + ∂²/∂y²; in the discrete world of digital images, the Laplacian is the convolution by the filter above. We can minimize (1) by setting its derivative with respect to the unknown image x to zero.
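The claim that GᵀG acts as the discrete Laplacian away from constraints can be verified directly. A small dense sketch (a real implementation would use sparse matrices; the helper name is mine):

```python
import numpy as np

def gradient_matrix_2d(h, w):
    """Rows = grid edges, columns = pixels in row-major order.
    Each row holds the -1/+1 pair of one neighboring-pixel difference."""
    idx = lambda i, j: i * w + j
    rows = []
    for i in range(h):                 # horizontal edges
        for j in range(w - 1):
            r = np.zeros(h * w); r[idx(i, j)] = -1.0; r[idx(i, j + 1)] = 1.0
            rows.append(r)
    for i in range(h - 1):             # vertical edges
        for j in range(w):
            r = np.zeros(h * w); r[idx(i, j)] = -1.0; r[idx(i + 1, j)] = 1.0
            rows.append(r)
    return np.array(rows)

G = gradient_matrix_2d(5, 5)
A = G.T @ G
# The row of A for the interior pixel (2, 2), reshaped onto the grid,
# is exactly the Laplacian stencil: 4 at the pixel, -1 at its 4 neighbors.
stencil = A[2 * 5 + 2].reshape(5, 5)
print(stencil[1:4, 1:4])
```

Boundary pixels get smaller diagonal entries (their degree is 2 or 3), which is how the natural boundary conditions show up in the matrix.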