
FFTs in Graphics and Vision

The Laplacian Operator

1 Outline
- Math Stuff
  - Symmetric/Hermitian Matrices
  - Lagrange Multipliers
  - Diagonalizing Symmetric Matrices
- The Laplacian Operator

2 Linear Operators
Definition: Given a real inner product space $(V, \langle\cdot,\cdot\rangle)$ and a linear operator $L: V \to V$, the adjoint of $L$ is the linear operator $L^*$ with the property that:
$$\langle v, Lw\rangle = \langle L^* v, w\rangle \qquad \forall v, w \in V$$

3 Linear Operators
Note: If V is the space of n-dimensional, real-valued arrays with the standard inner product:
$$\langle v, w\rangle = \sum_{i=1}^n v(i)\, w(i) = v^t w$$
then the adjoint of a matrix M is its transpose:
$$\langle v, Mw\rangle = v^t M w = (M^t v)^t w = \langle M^t v, w\rangle$$

4 Linear Operators
Definition: A real linear operator L is self-adjoint if it is its own adjoint, i.e.:
$$\langle v, Lw\rangle = \langle Lv, w\rangle \qquad \forall v, w \in V$$

5 Linear Operators
Note: If V is the space of n-dimensional, real-valued arrays with the standard inner product:
$$\langle v, w\rangle = \sum_{i=1}^n v(i)\, w(i) = v^t w$$
then a matrix M is self-adjoint if it is symmetric: $M = M^t$
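(Supplementary note, not part of the original slides.) A minimal NumPy sketch of the two notes above: for the standard inner product the adjoint of a matrix is its transpose, and a symmetric matrix is self-adjoint.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))   # arbitrary real matrix
v = rng.standard_normal(n)
w = rng.standard_normal(n)

# Adjoint w.r.t. the standard inner product is the transpose: <v, M w> = <M^t v, w>
assert np.isclose(v @ (M @ w), (M.T @ v) @ w)

# A symmetric matrix is self-adjoint: <v, S w> = <S v, w>
S = M + M.T                       # symmetrize to get a self-adjoint example
assert np.isclose(v @ (S @ w), (S @ v) @ w)
```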

6 Linear Operators
Definition: Given a complex inner product space $(V, \langle\cdot,\cdot\rangle)$ and a linear operator $L: V \to V$, the adjoint of $L$ is the linear operator $L^*$ with the property that:
$$\langle v, Lw\rangle = \langle L^* v, w\rangle \qquad \forall v, w \in V$$

7 Linear Operators
Note: If V is the space of n-dimensional, complex-valued arrays with the standard inner product:
$$\langle v, w\rangle = \sum_{i=1}^n \overline{v(i)}\, w(i) = \bar v^t w$$
then the adjoint of a matrix M is just the complex conjugate of the transpose:
$$\langle v, Mw\rangle = \bar v^t M w = \overline{\left(\bar M^t v\right)}^t w = \langle \bar M^t v, w\rangle$$

8 Linear Operators
Definition: A complex linear operator L is self-adjoint if it is its own adjoint, i.e.:
$$\langle v, Lw\rangle = \langle Lv, w\rangle \qquad \forall v, w \in V$$

9 Linear Operators
Note: If V is the space of n-dimensional, complex-valued arrays with the standard inner product:
$$\langle v, w\rangle = \sum_{i=1}^n \overline{v(i)}\, w(i) = \bar v^t w$$
then a matrix M is self-adjoint if it is Hermitian: $M = \bar M^t$
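(Supplementary note, not part of the original slides.) The same check in the complex case, where the adjoint is the conjugate transpose and self-adjoint means Hermitian; np.vdot conjugates its first argument, matching the inner product $\langle v, w\rangle = \bar v^t w$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
w = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Adjoint of A w.r.t. <v, w> = conj(v)^t w is the conjugate transpose
assert np.isclose(np.vdot(v, A @ w), np.vdot(A.conj().T @ v, w))

# A Hermitian matrix is self-adjoint: <v, H w> = <H v, w>
H = A + A.conj().T
assert np.isclose(np.vdot(v, H @ w), np.vdot(H @ v, w))
```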

10 Outline
- Math Stuff
  - Symmetric/Hermitian Matrices
  - Lagrange Multipliers
  - Diagonalizing Symmetric Matrices
- The Laplacian Operator

11 Implicit Surfaces
Definition: Given a function g(x, y, z), the implicit surface (or iso-surface) defined by g is the set of points in 3D satisfying the condition:
$$g(x, y, z) = 0$$

[Figure: example iso-surfaces: the unit sphere $g(x, y, z) = x^2 + y^2 + z^2 - 1$ and the torus $g(x, y, z) = \left(x^2 + y^2 + z^2 - (R^2 + r^2)\right)^2 - 4R^2(r^2 - z^2)$.]

12 Lagrange Multipliers
Goal: Given an implicit surface defined by a function g(x, y, z) and a function f(x, y, z), we want to find the extrema of f on the surface. This can be done by finding the points on the surface where the gradient of f is parallel to the surface normal.

13 Lagrange Multipliers
Since the implicit surface is the set of points with:
$$g(x, y, z) = 0$$
the normal at a point on the surface is parallel to the gradient of g. Finding the extrema amounts to finding the points (x, y, z) such that:
- $g(x, y, z) = 0$ (the point is on the surface)
- $\lambda \nabla f = \nabla g$ (the point is a local extremum)
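(Supplementary sketch, not from the slides.) A small SymPy example of these two conditions for a hypothetical choice of f: optimizing f(x, y, z) = x + 2y + 3z on the unit sphere by solving $g = 0$ and $\lambda\nabla f = \nabla g$ directly.

```python
import sympy as sp

x, y, z, lam = sp.symbols('x y z lambda', real=True)

f = x + 2*y + 3*z            # example function to optimize
g = x**2 + y**2 + z**2 - 1   # unit sphere: g = 0

# Lagrange conditions: lambda * grad(f) = grad(g), together with g = 0
eqs = [lam * sp.diff(f, v) - sp.diff(g, v) for v in (x, y, z)] + [g]
solutions = sp.solve(eqs, [x, y, z, lam], dict=True)

for s in solutions:
    print(s, '  f =', sp.simplify(f.subs(s)))
# Two antipodal critical points; the one with f = sqrt(14) is the maximum.
```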

14 Outline
- Math Stuff
  - Symmetric/Hermitian Matrices
  - Lagrange Multipliers
  - Diagonalizing Symmetric Matrices
- The Laplacian Operator

15 Diagonalizing Symmetric Matrices
Claim: Given the space of n-dimensional, real-valued arrays and given a symmetric matrix M: the eigenvectors of M form an orthogonal basis.

16 Diagonalizing Symmetric Matrices
The Eigenvectors Form an Orthogonal Basis: To show this we will show two things:
1. If v is an eigenvector, then the space of vectors orthogonal to v is fixed by M.
2. At least one eigenvector exists.

17 Diagonalizing Symmetric Matrices
1. If v is an eigenvector, then the space of vectors orthogonal to v is fixed by M.
Suppose that v is an eigenvector and w is some other vector that is orthogonal to v:
$$\langle v, w\rangle = 0$$
Since v is an eigenvector, this implies that:
$$\langle Mv, w\rangle = \lambda\langle v, w\rangle = 0$$
Since M is symmetric, we have:
$$\langle v, Mw\rangle = \langle Mv, w\rangle = 0$$

18 Diagonalizing Symmetric Matrices
1. If v is an eigenvector, then the space of vectors orthogonal to v is fixed by M.
If W is the subspace of vectors orthogonal to v:
$$W = \{\, w \in V \mid \langle w, v\rangle = 0 \,\}$$
then we have:
$$\langle v, Mw\rangle = 0 \;\;\forall w \in W \quad\Longleftrightarrow\quad Mw \in W \;\;\forall w \in W$$

19 Diagonalizing Symmetric Matrices
1. If v is an eigenvector, then the space of vectors perpendicular to v is fixed by M.
Implications: If we can find one eigenvector v, we can consider the restriction of M to W. We know that:
- $M(W) \subset W$
- $\langle Mu, w\rangle = \langle u, Mw\rangle \quad \forall u, w \in W$
So we can repeat the argument to find the next eigenvector.

20 Diagonalizing Symmetric Matrices
2. At least one eigenvector must exist.
We will show this using Lagrange multipliers:
- The implicit surface will be the unit sphere in $\mathbb{R}^n$:
  $$g(x_1, \cdots, x_n) = x_1^2 + \cdots + x_n^2 - 1 \quad\Longleftrightarrow\quad g(v) = \|v\|^2 - 1$$
- The function we optimize will be:
  $$f(v) = \langle v, Mv\rangle$$
Because the sphere is compact, an extremum must exist.

21 Diagonalizing Symmetric Matrices
2. At least one eigenvector must exist.
The normal of a point on the sphere is parallel to the gradient of g:
$$\nabla g(x_1, \cdots, x_n) = 2\,(x_1, \cdots, x_n) \quad\Longleftrightarrow\quad \nabla g(v) = 2v$$
[Figure: the unit circle $g(x, y) = x^2 + y^2 - 1 = 0$.]

22 Diagonalizing Symmetric Matrices
2. At least one eigenvector must exist.
Claim: The gradient of f is:
$$\nabla f(v) = 2Mv$$

23 Diagonalizing Symmetric Matrices
2. At least one eigenvector must exist.
Proof: Let $e_i$ be the vector with zeros everywhere but in the $i$-th entry:
$$e_i = (0, \cdots, 0, \underbrace{1}_{i\text{-th entry}}, 0, \cdots, 0)$$

24 Diagonalizing Symmetric Matrices
2. At least one eigenvector must exist.
Proof: Let $e_i$ be the vector with zeros everywhere but in the $i$-th entry. Then the $i$-th coefficient of the gradient is:
$$\frac{\partial f}{\partial x_i}(v) = \left.\frac{d}{ds}\, f(v + s e_i)\right|_{s=0} = \left.\frac{d}{ds}\,(v + s e_i)^t M (v + s e_i)\right|_{s=0} = \left.\frac{d}{ds}\left(v^t M v + s\, e_i^t M v + s\, v^t M e_i + s^2\, e_i^t M e_i\right)\right|_{s=0}$$

25 Diagonalizing Symmetric Matrices
2. At least one eigenvector must exist.
Proof: Let $e_i$ be the vector with zeros everywhere but in the $i$-th entry. Then the $i$-th coefficient of the gradient is:
$$\frac{\partial f}{\partial x_i}(v) = e_i^t M v + v^t M e_i = \langle e_i, Mv\rangle + \langle M^t v, e_i\rangle = 2\langle e_i, Mv\rangle = 2\,(Mv)_i$$
(using $M = M^t$ in the last-but-one step).

26 Diagonalizing Symmetric Matrices
2. At least one eigenvector must exist.
Proof: Since the $i$-th coefficient of the gradient of f at v is twice the $i$-th coefficient of Mv, we have:
$$\nabla f(v) = 2Mv$$
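(Supplementary check, not from the slides.) Comparing a central finite difference of f(v) = ⟨v, Mv⟩ against the formula ∇f(v) = 2Mv:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
M = rng.standard_normal((n, n))
M = M + M.T                      # make M symmetric
v = rng.standard_normal(n)

f = lambda u: u @ (M @ u)        # f(v) = <v, M v>

h = 1e-6                         # central finite differences, coordinate by coordinate
grad_fd = np.array([(f(v + h * e) - f(v - h * e)) / (2 * h) for e in np.eye(n)])

assert np.allclose(grad_fd, 2 * M @ v, atol=1e-5)
```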

27 Diagonalizing Symmetric Matrices
2. At least one eigenvector must exist.
We know that the normal at the point v on the sphere is parallel to the gradient of g, which is:
$$\nabla g(v) = 2v$$
And we know that the gradient of f is:
$$\nabla f(v) = 2Mv$$

28 Diagonalizing Symmetric Matrices
2. At least one eigenvector must exist.
Since the function f must have a maximum on the sphere, we know that there exists v such that:
$$\lambda \nabla g(v) = \nabla f(v) \quad\Longleftrightarrow\quad \lambda v = Mv$$
So at the maximum we have our eigenvector, with eigenvalue λ.
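(Supplementary sketch, not from the slides.) The argument can be mimicked numerically: projected gradient ascent on f(v) = ⟨v, Mv⟩ over the unit sphere lands on an eigenvector with the largest eigenvalue. The step size and iteration count here are arbitrary choices, and M is taken positive semi-definite so the ascent is well behaved.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
A = rng.standard_normal((n, n))
M = A.T @ A                        # symmetric (and PSD)

v = rng.standard_normal(n)
v /= np.linalg.norm(v)
step = 0.1
for _ in range(2000):
    v = v + step * 2 * (M @ v)     # gradient of <v, M v> is 2 M v
    v /= np.linalg.norm(v)         # project back onto the unit sphere

lam = v @ M @ v                    # Rayleigh quotient at the critical point
evals, _ = np.linalg.eigh(M)

assert np.isclose(lam, evals[-1])              # largest eigenvalue of M
assert np.allclose(M @ v, lam * v, atol=1e-6)  # v is an eigenvector: M v = lambda v
```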

29 Outline
- Math Stuff
  - Symmetric/Hermitian Matrices
  - Lagrange Multipliers
  - Diagonalizing Symmetric Matrices
- The Laplacian Operator

30 The Laplacian Operator
Recall: The Laplacian of a function f at a point measures how the value of f at that point differs from the average value of its neighbors.

[Figure: a circular function f(θ) and its Laplacian Δf(θ).]

31 The Laplacian Operator
Recall: Formally, for a function in 2D, the Laplacian is the sum of the unmixed second partial derivatives:
$$\Delta f(x, y) = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}$$
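(Supplementary check, not from the slides.) The standard 5-point finite-difference stencil realizes this definition on a grid; applied to f(x, y) = x² + y², whose Laplacian is identically 4:

```python
import numpy as np

h = 0.01
x = np.arange(-1, 1, h)
X, Y = np.meshgrid(x, x, indexing='ij')
F = X**2 + Y**2                    # sample f(x, y) = x^2 + y^2 on a grid

# 5-point stencil: (f(x+h,y) + f(x-h,y) + f(x,y+h) + f(x,y-h) - 4 f(x,y)) / h^2
lap = (F[2:, 1:-1] + F[:-2, 1:-1] + F[1:-1, 2:] + F[1:-1, :-2]
       - 4 * F[1:-1, 1:-1]) / h**2

assert np.allclose(lap, 4.0)       # d^2f/dx^2 + d^2f/dy^2 = 2 + 2 = 4
```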

32 The Laplacian Operator
Observation 1: The Laplacian is a self-adjoint operator. To show this, we need to show that for any two functions f and g, we have:
$$\langle f, \Delta g\rangle = \langle \Delta f, g\rangle$$

33 The Laplacian Operator
Observation 1: First, we show this in the 1D case, for functions f(θ) and g(θ) on the circle:
$$\langle f, g''\rangle = \langle f'', g\rangle$$
Writing the dot-product as an integral gives:
$$\langle f, g''\rangle = \int_0^{2\pi} f(\theta)\, g''(\theta)\, d\theta$$

34 The Laplacian Operator
Observation 1: Using the product rule for derivatives:
$$(f \cdot g)' = f' \cdot g + f \cdot g'$$
$$\Downarrow$$
$$\int_0^{2\pi} (f \cdot g)'(\theta)\, d\theta = \int_0^{2\pi} f'(\theta)\, g(\theta)\, d\theta + \int_0^{2\pi} f(\theta)\, g'(\theta)\, d\theta$$
Since f and g are functions on a circle, their values at 0 and 2π are the same:
$$\int_0^{2\pi} (f \cdot g)'(\theta)\, d\theta = (f \cdot g)(2\pi) - (f \cdot g)(0) = 0$$

35 The Laplacian Operator
Observation 1: Thus, we have:
$$\int_0^{2\pi} f(\theta)\, g'(\theta)\, d\theta = -\int_0^{2\pi} f'(\theta)\, g(\theta)\, d\theta$$
"Moving" the derivative twice gives:
$$\int_0^{2\pi} f''(\theta)\, g(\theta)\, d\theta = -\int_0^{2\pi} f'(\theta)\, g'(\theta)\, d\theta = (-1)^2 \int_0^{2\pi} f(\theta)\, g''(\theta)\, d\theta$$
$$\Updownarrow$$
$$\langle f'', g\rangle = \langle f, g''\rangle$$

36 The Laplacian Operator
Observation 1: To generalize this to higher dimensions, we write out the dot-product as:
$$\langle \Delta f, g\rangle = \int_0^{2\pi}\!\!\int_0^{2\pi} \frac{\partial^2 f}{\partial\theta^2}\, g\; d\theta\, d\phi + \int_0^{2\pi}\!\!\int_0^{2\pi} \frac{\partial^2 f}{\partial\phi^2}\, g\; d\phi\, d\theta = \int_0^{2\pi}\!\!\int_0^{2\pi} f\, \frac{\partial^2 g}{\partial\theta^2}\; d\theta\, d\phi + \int_0^{2\pi}\!\!\int_0^{2\pi} f\, \frac{\partial^2 g}{\partial\phi^2}\; d\phi\, d\theta = \langle f, \Delta g\rangle$$
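(Supplementary note, not from the slides.) In the discrete setting the same statement is just matrix symmetry: the periodic second-difference matrix, a natural stand-in for the 1D circular Laplacian, is symmetric, so ⟨f, Lg⟩ = ⟨Lf, g⟩.

```python
import numpy as np

n = 64
theta = 2 * np.pi * np.arange(n) / n
h = 2 * np.pi / n

# Periodic second differences: (L f)_i = (f_{i+1} - 2 f_i + f_{i-1}) / h^2, indices wrapping
L = (-2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
     + np.eye(n, k=n-1) + np.eye(n, k=-(n-1))) / h**2

f = np.exp(np.cos(theta))          # two arbitrary periodic test functions
g = np.sin(3 * theta) + np.cos(5 * theta)

assert np.allclose(L, L.T)                   # L is symmetric ...
assert np.isclose(f @ (L @ g), (L @ f) @ g)  # ... so <f, L g> = <L f, g>
```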

37 The Laplacian Operator
Observation 2: The Laplacian operator commutes with rotation, i.e. computing the Laplacian and then rotating gives the same function as first rotating and then computing the Laplacian:
$$\Delta(\rho_R f) = \rho_R(\Delta f)$$

38 The Laplacian Operator
Implications:
- Observation 1: Since the Laplacian operator is self-adjoint, it must be diagonalizable.
  - There is an orthogonal basis of eigenvectors.
  - If we group the eigenvectors with the same eigenvalue together, we get a set of vector spaces $F_\lambda$ such that for any function $f \in F_\lambda$:
    $$\Delta f = \lambda f$$

39 The Laplacian Operator
Implications:
- Observation 2: Since the Laplacian operator commutes with rotation, rotations map vectors in $F_\lambda$ back into $F_\lambda$:
  $$\Delta(\rho_R f) = \rho_R(\Delta f) = \rho_R(\lambda f) = \lambda\,(\rho_R f)$$
  - The space $F_\lambda$ is fixed under the action of rotation.
  - The space $F_\lambda$ is a sub-representation of the group of rotations.

40 The Laplacian Operator Going back to the problem of finding the irreducible representations, this means we can begin by looking for the eigenspaces of the Laplacian operator.
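(Supplementary note, not from the slides, tying this to the course theme.) On the discrete circle the Laplacian used above is a circulant matrix, so the FFT basis is exactly its eigenbasis: the eigenvalues are the DFT of its first column, and the frequencies k and n−k share an eigenvalue, giving the (generically) two-dimensional eigenspaces $F_\lambda$.

```python
import numpy as np

n = 64
h = 2 * np.pi / n

# Discrete circular Laplacian (periodic second differences), a circulant matrix
L = (-2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
     + np.eye(n, k=n-1) + np.eye(n, k=-(n-1))) / h**2

eigvals = np.fft.fft(L[:, 0])      # eigenvalues of a circulant matrix = DFT of its first column

for k in [0, 1, 5, 17]:
    e_k = np.exp(2j * np.pi * k * np.arange(n) / n)   # k-th Fourier mode
    assert np.allclose(L @ e_k, eigvals[k] * e_k)     # an eigenvector of L

assert np.isclose(eigvals[5], eigvals[n - 5])         # frequencies k and n-k pair up
```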

41 Computing the Laplacian
We know how to compute the Laplacian of a circular function represented by parameter:
$$\Delta f(\theta) = f''(\theta)$$
How do we compute the Laplacian for a function represented by restriction?

42 Computing the Laplacian
If we define a function on a circle as the restriction of a 2D function, the 2D Laplacian is not the same as the circular Laplacian!

Example: Consider the function f(x, y) = x.
- In the plane, the Laplacian is: $\Delta f(x, y) = 0$
- On the circle this is the function $f(\theta) = \cos\theta$, and: $\Delta f(\theta) = -\cos(\theta)$
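(Supplementary check, not from the slides.) Sampling the restriction f(θ) = cos θ and applying periodic second differences recovers −cos θ, even though the planar Laplacian of f(x, y) = x vanishes.

```python
import numpy as np

n = 256
theta = 2 * np.pi * np.arange(n) / n
h = 2 * np.pi / n

f = np.cos(theta)                  # restriction of f(x, y) = x to the unit circle

# Circular Laplacian via periodic second differences, approximating f''(theta)
lap_circle = (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / h**2

# Matches -cos(theta) up to O(h^2) discretization error,
# while the 2D Laplacian of f(x, y) = x is identically 0.
assert np.allclose(lap_circle, -np.cos(theta), atol=1e-3)
```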

43 Computing the Laplacian
If we define a function on a circle as the restriction of a 2D function, the 2D Laplacian is not the same as the circular Laplacian!
Intuitively: The Laplacian measures the difference between the value at a point and the average value of its "neighbors". Who the "neighbors" are changes depending on whether we are considering the plane or the circle.

44 Computing the Laplacian
Recall: For a vector field
$$\vec F(x, y) = \big(F_1(x, y), F_2(x, y)\big)$$
the divergence is defined as:
$$\mathrm{div}(\vec F) = \nabla \cdot \vec F = \frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y}$$
We can also express the Laplacian as the divergence of the gradient:
$$\Delta f = \nabla \cdot (\nabla f)$$

45 Computing the Gradient
In general, the gradient of the function f(x, y) need not lie along the unit circle.
[Figure: the gradient field $\nabla f$ of f(x, y) = x, drawn along the unit circle.]

46 Computing the Gradient
In general, the gradient of the function f(x, y) need not lie along the unit circle. We can fix this by projecting the gradient onto the unit circle:
$$\pi(\nabla f) = \nabla f - \langle \nabla f, (x, y)\rangle\,(x, y)$$

[Figure: $\nabla f$ and its projection $\pi(\nabla f)$ onto the unit circle.]

47 Computing the Laplacian
The divergence of a vector field $\vec F$ can be expressed as the sum of partials:
$$\mathrm{div}(\vec F) = \nabla \cdot \vec F = \frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y}$$

48 Computing the Laplacian
More generally, given any orthonormal basis {v, w}, the divergence is the derivative of the v-component of the vector field in the v-direction, plus the derivative of the w-component of the vector field in the w-direction:
$$\mathrm{div}(\vec F) = \nabla \cdot \vec F = \frac{\partial \langle \vec F, v\rangle}{\partial v} + \frac{\partial \langle \vec F, w\rangle}{\partial w}$$

49 Computing the Laplacian
Thus, to compute the divergence of a vector field along the circle, we can compute the 2D divergence and subtract off the contribution from the normal direction:
$$\mathrm{div}_{\text{circle}}(\vec F) = \mathrm{div}_{2D}(\vec F) - \frac{\partial \langle \vec F, \vec n\rangle}{\partial \vec n}$$
Since the component of $\vec F$ in the normal direction is itself a function, its derivative in the normal direction can be expressed in terms of its gradient:
$$\frac{\partial \langle \vec F, \vec n\rangle}{\partial \vec n} = \big\langle \nabla\langle \vec F, \vec n\rangle, \vec n\big\rangle$$

50 Computing the Laplacian (f(x, y) = x)
Example: Consider the function:
$$f(x, y) = x$$


51 Computing the Laplacian (f(x, y) = x)
Example: Its gradient is:
$$\nabla f = (1, 0)$$


52 Computing the Laplacian (f(x, y) = x)
Example: $\nabla f = (1, 0)$
Projecting the gradient onto the unit circle we get:
$$\pi(\nabla f) = \nabla f - \langle \nabla f, \vec n\rangle\,\vec n = \nabla f - \langle \nabla f, (x, y)\rangle\,(x, y) = (1, 0) - x\,(x, y)$$


53 Computing the Laplacian (f(x, y) = x)
Example: $\pi(\nabla f) = (1, 0) - x\,(x, y)$
The 2D divergence of the vector field $\pi(\nabla f)$ is:
$$\mathrm{div}_{2D}\big(\pi(\nabla f)\big) = -2x - x = -3x$$


54 Computing the Laplacian (f(x, y) = x)
Example: $\pi(\nabla f) = (1, 0) - x\,(x, y)$
Projecting $\pi(\nabla f)$ onto the normal direction gives:
$$\langle \pi(\nabla f), \vec n\rangle = \big\langle (1, 0) - x\,(x, y),\; (x, y)\big\rangle = x - x(x^2 + y^2) = x - (x^3 + x y^2)$$


55 Computing the Laplacian (f(x, y) = x)
Example: $\langle \pi(\nabla f), \vec n\rangle = x - (x^3 + x y^2)$
The gradient of this projection is:
$$\nabla \langle \pi(\nabla f), \vec n\rangle = (1, 0) - (3x^2 + y^2,\; 2xy)$$


56 Computing the Laplacian (f(x, y) = x)
Example: $\nabla \langle \pi(\nabla f), \vec n\rangle = (1, 0) - (3x^2 + y^2,\; 2xy)$
So the divergence in the normal direction is:
$$\mathrm{div}_{\vec n}\big(\pi(\nabla f)\big) = \big\langle (1, 0) - (3x^2 + y^2,\; 2xy),\; (x, y)\big\rangle = x - 3x^3 - x y^2 - 2x y^2 = x - 3x^3 - 3x y^2$$


57 Computing the Laplacian (f(x, y) = x)
Example:
$$\mathrm{div}_{2D}\big(\pi(\nabla f)\big) = -3x \qquad \mathrm{div}_{\vec n}\big(\pi(\nabla f)\big) = x - 3x^3 - 3x y^2$$
Thus the circular Laplacian can be expressed as the difference between the 2D divergence and the divergence in the normal direction:
$$\Delta_{\text{circle}} f(x, y) = \mathrm{div}_{2D}\big(\pi(\nabla f)\big) - \mathrm{div}_{\vec n}\big(\pi(\nabla f)\big) = -3x - (x - 3x^3 - 3x y^2) = -4x + 3x(x^2 + y^2)$$
Since points on the circle satisfy $x^2 + y^2 = 1$, this implies that for (x, y) on the circle:
$$\Delta_{\text{circle}} f(x, y) = -x = -f(x, y)$$

59 Computing the Laplacian (f(x, y) = x)
Example: Thus, just as in the parameter case, the function f is an eigenvector of the circular Laplacian operator, with eigenvalue −1.
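(Supplementary check, not from the slides.) The whole computation on slides 50 through 58 can be reproduced symbolically with SymPy:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

f = x                                  # the function from the example
n = sp.Matrix([x, y])                  # outward normal of the unit circle at (x, y)

grad_f = sp.Matrix([sp.diff(f, x), sp.diff(f, y)])
proj = grad_f - grad_f.dot(n) * n      # pi(grad f): tangential part of the gradient

div_2d = sp.diff(proj[0], x) + sp.diff(proj[1], y)            # -3x

normal_comp = proj.dot(n)                                      # <pi(grad f), n>
grad_nc = sp.Matrix([sp.diff(normal_comp, x), sp.diff(normal_comp, y)])
div_normal = grad_nc.dot(n)                                    # x - 3x^3 - 3x y^2

lap_circle = sp.expand(div_2d - div_normal)                    # -4x + 3x(x^2 + y^2)

# On the unit circle x^2 + y^2 = 1 this reduces to -x = -f
on_circle = sp.simplify(lap_circle.subs(y**2, 1 - x**2))
assert on_circle == -f
print(div_2d, div_normal, lap_circle, on_circle)
```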
