
The Laplacian in polar coordinates

We now consider the Laplace operator with Dirichlet boundary conditions on a circular region Ω = {(x,y) | x² + y² ≤ A²}. Our goal is to compute eigenvalues and eigenfunctions of the Laplace operator on this region.

-Δ u = λ u

As before, we seek a solution by the method of separation of variables. However, given the geometry of the region it does not make sense to seek solutions of the form

u(x,y) = u1(x) u2(y)

Instead, we will change to polar coordinates and seek solutions of the form

u(r,θ) = R(r) Θ(θ)

Given the geometry of the problem at hand, this is a more natural coordinate system to use, and as we will see, the problem does separate cleanly into ODEs in r and θ.

Our first task is to rewrite the Laplace operator in polar coordinates. Initially we have that

Δu = ∂²u/∂x² + ∂²u/∂y²

To change coordinates from (x,y) to (r,θ) we have to introduce change of variable formulas and use the chain rule to rewrite x and y derivatives as r and θ derivatives. Here are some of the details.

r = √(x² + y²)

tan(θ) = y/x

To start the process we develop some basic partial derivative facts.

∂r/∂x = x/√(x² + y²) = r cos(θ)/r = cos(θ)

∂r/∂y = y/√(x² + y²) = r sin(θ)/r = sin(θ)

sec²(θ) ∂θ/∂x = -y/x²

∂θ/∂x = cos²(θ) (-r sin(θ))/(r² cos²(θ)) = -sin(θ)/r

sec²(θ) ∂θ/∂y = 1/x

∂θ/∂y = cos²(θ) · 1/(r cos(θ)) = cos(θ)/r

We are now in position to start applying the chain rule:

∂u/∂x = (∂u/∂r)(∂r/∂x) + (∂u/∂θ)(∂θ/∂x) = (∂u/∂r) cos(θ) + (∂u/∂θ)(-sin(θ)/r)

∂u/∂y = (∂u/∂r)(∂r/∂y) + (∂u/∂θ)(∂θ/∂y) = (∂u/∂r) sin(θ) + (∂u/∂θ)(cos(θ)/r)

The higher order derivatives proceed similarly. I refer you to the text for further details. At the end of the process we learn that

∂²u/∂x² + ∂²u/∂y² = ∂²u/∂r² + (1/r²) ∂²u/∂θ² + (1/r) ∂u/∂r
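As a quick sanity check, we can let a computer algebra system confirm this formula on a concrete function. The sketch below uses sympy; the test function u = x³y - y² is an arbitrary choice.

```python
import sympy as sp

x, y, r, th = sp.symbols('x y r theta', positive=True)

u = x**3 * y - y**2                                  # arbitrary test function
lap_cart = sp.diff(u, x, 2) + sp.diff(u, y, 2)       # Cartesian Laplacian

# rewrite u in polar coordinates and apply the polar form of the Laplacian
up = u.subs({x: r * sp.cos(th), y: r * sp.sin(th)})
lap_polar = sp.diff(up, r, 2) + sp.diff(up, th, 2) / r**2 + sp.diff(up, r) / r

# the difference simplifies to zero, confirming the polar formula
print(sp.simplify(lap_polar - lap_cart.subs({x: r * sp.cos(th), y: r * sp.sin(th)})))
```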

Separation of Variables

We are now in position to implement the separation of variables. Once again, we want to compute eigenvalues and eigenfunctions of the Laplace operator:

-Δ u = λ u

To apply separation of variables we assume that

u(r,θ) = R(r) Θ(θ)

and substitute into the polar form of the Laplace equation:

-Δu = -∂²u/∂r² - (1/r²) ∂²u/∂θ² - (1/r) ∂u/∂r = -∂²[R(r)Θ(θ)]/∂r² - (1/r²) ∂²[R(r)Θ(θ)]/∂θ² - (1/r) ∂[R(r)Θ(θ)]/∂r

-Θ(θ) d²R(r)/dr² - (1/r²) R(r) d²Θ(θ)/dθ² - (1/r) Θ(θ) dR(r)/dr = λ u = λ R(r) Θ(θ)

Dividing both sides by a factor of R(r) Θ(θ) and multiplying both sides by r² gives

-(r²/R(r)) d²R(r)/dr² - (1/Θ(θ)) d²Θ(θ)/dθ² - (r/R(r)) dR(r)/dr = λ r²

If we move all of the terms that depend on r and R(r) to one side and all of the terms that depend on θ and Θ(θ) to the other we get

(r²/R(r)) d²R(r)/dr² + (r/R(r)) dR(r)/dr + λ r² = -(1/Θ(θ)) d²Θ(θ)/dθ²

The only way for these two expressions to be equal for all possible values of r and θ is to have them both equal a constant, γ.

-(1/Θ(θ)) d²Θ(θ)/dθ² = γ

(r²/R(r)) d²R(r)/dr² + (r/R(r)) dR(r)/dr + λ r² = γ

For convenience, these two equations are typically rewritten as

d²Θ(θ)/dθ² + γ Θ(θ) = 0

d²R(r)/dr² + (1/r) dR(r)/dr + (λ - γ/r²) R(r) = 0

The problem is now fully separated. The first of these two problems is the easier to work with. The appropriate boundary conditions to apply to this problem state that Θ(θ) and its first derivative with respect to θ are periodic in θ:

Θ(- π) = Θ(π)

dΘ/dθ(-π) = dΘ/dθ(π)

The ODE

d²Θ(θ)/dθ² + γ Θ(θ) = 0

Θ(- π) = Θ(π)

dΘ/dθ(-π) = dΘ/dθ(π)

has eigenvalue γ = 0 with associated eigenfunction

Θ(θ) = 1

and eigenvalues γ = n² for n = 1, 2, … with associated eigenfunctions

Θ(θ) = sin(n θ)

Θ(θ) = cos(n θ)

As a side effect of solving the Θ(θ) problem we have now determined all of the legal values of γ:

γ = n² for n = 0, 1, 2, …
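A minimal sympy check confirms that each of these eigenfunctions satisfies the Θ equation with the stated value of γ (all three printed expressions reduce to zero):

```python
import sympy as sp

th = sp.symbols('theta')
n = sp.symbols('n', integer=True, positive=True)

# each (Theta, gamma) pair should satisfy Theta'' + gamma*Theta = 0
for Theta, gamma in ((sp.Integer(1), 0), (sp.sin(n * th), n**2), (sp.cos(n * th), n**2)):
    print(sp.simplify(sp.diff(Theta, th, 2) + gamma * Theta))
```

Note that it is the periodic boundary conditions that force n to be an integer; for non-integer n the functions sin(n θ) and cos(n θ) fail to match up at θ = ±π.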

We can now focus our attention on the R(r) equation.

d²R(r)/dr² + (1/r) dR(r)/dr + (λ - n²/r²) R(r) = 0

The standard technique for solving this equation starts by multiplying both sides of the equation by a factor of r².

r² d²R(r)/dr² + r dR(r)/dr + (λr² - n²) R(r) = 0

Next, and for reasons that will become clear below, we introduce a simple change of variables.

s = √λ r

With this change of variables we will rewrite the equation in terms of a function

S(s) = R(r) = R(s/√λ)

The equation uses various derivatives of R(r), so we will have to determine what happens to those derivatives once we make the change of variables.

dR(r)/dr = (dS(s)/ds)(ds/dr) = √λ dS(s)/ds

d²R(r)/dr² = d(dR(r)/dr)/ds · ds/dr = (√λ)² d²S(s)/ds² = λ d²S(s)/ds²

We now have

(s/√λ)² λ d²S(s)/ds² + (s/√λ) √λ dS(s)/ds + (s² - n²) S(s) = 0

s² d²S(s)/ds² + s dS(s)/ds + (s² - n²) S(s) = 0

This equation is a well-known ODE, the Bessel equation of order n.
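Anticipating the next section, where the bounded solution of this equation is identified as the Bessel function J_n, we can check numerically that SciPy's implementation of J_n (scipy.special.jv, with derivatives from scipy.special.jvp) satisfies the equation. The order n = 2 and the sample points below are arbitrary choices.

```python
import numpy as np
from scipy.special import jv, jvp   # J_n and its derivatives

n = 2
s = np.linspace(0.5, 20.0, 7)       # arbitrary sample points

# residual of s^2 S'' + s S' + (s^2 - n^2) S with S = J_n
residual = s**2 * jvp(n, s, 2) + s * jvp(n, s, 1) + (s**2 - n**2) * jv(n, s)
print(np.max(np.abs(residual)))     # ~ 1e-12
```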

Solving the Bessel Equation

The Bessel equation can be solved by assuming that the solution takes the form

S(s) = s^α ∑_{k=0}^∞ a_k s^k

Substituting this into the equation gives

s² ∑_{k=0}^∞ a_k (k+α)(k+α-1) s^{k+α-2} + s ∑_{k=0}^∞ a_k (k+α) s^{k+α-1} + s² ∑_{k=0}^∞ a_k s^{k+α} - n² ∑_{k=0}^∞ a_k s^{k+α} = 0

To facilitate consolidating these sums into a single sum we shift the indices of summation in the third term.

∑_{k=0}^∞ a_k (k+α)(k+α-1) s^{k+α} + ∑_{k=0}^∞ a_k (k+α) s^{k+α} + ∑_{k=2}^∞ a_{k-2} s^{k+α} - n² ∑_{k=0}^∞ a_k s^{k+α} = 0

If we consider only terms with the form s^α we see that

a_0 (α(α-1) + α - n²) = 0

or

α² - n² = 0

This tells us that solutions must have the form

S(s) = s^n ∑_{k=0}^∞ a_k s^k

or

S(s) = s^{-n} ∑_{k=0}^∞ a_k s^k

The latter form does not produce reasonable solutions, since for n ≥ 1 this function is unbounded at s = 0. Thus we will continue with the assumption that α = n. With this assumption we have that

∑_{k=0}^∞ a_k (k+n)(k+n-1) s^{k+n} + ∑_{k=0}^∞ a_k (k+n) s^{k+n} + ∑_{k=2}^∞ a_{k-2} s^{k+n} - n² ∑_{k=0}^∞ a_k s^{k+n} = 0

Considering terms with factors of s^{n+1} we see that

a_1 ((n+1)n + (n+1) - n²) = 0

or

a_1 (2n + 1) = 0

or

a_1 = 0

For the terms s^{k+n} with k ≥ 2 we have

a_k ((k+n)(k+n-1) + (k+n) - n²) + a_{k-2} = 0

or

a_k ((k+n)² - n²) + a_{k-2} = 0

This leads to a recurrence relation that allows us to compute the coefficients a_k in terms of a_{k-2}.

a_k = -a_{k-2}/((k+n)² - n²) = -a_{k-2}/(k(k + 2n))

Since a_1 = 0, the recurrence relation tells us that a_k = 0 for all odd k. For the even k we have that

a_2 = -a_0/(2(2 + 2n))

a_4 = -a_2/(4(4 + 2n)) = a_0/(2! · 2⁴ (n+1)(n+2))

a_6 = -a_4/(6(6 + 2n)) = -a_0/(3! · 2⁶ (n+1)(n+2)(n+3))

More generally, for k = 2 j we have that

a_{2j} = (-1)^j a_0/(j! · 2^{2j} (n+1)(n+2)⋯(n+j))

Since we are free to pick a_0, we select for our convenience

a_0 = 1/(2^n n!)

so that

a_{2j} = (-1)^j/(2^{2j+n} j! (n+j)!)
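It is easy to verify with exact rational arithmetic that this closed form really does satisfy the recurrence a_k = -a_{k-2}/(k(k + 2n)). A small sketch (the order n = 3 is an arbitrary choice):

```python
from fractions import Fraction
from math import factorial

def a(j, n):
    # closed-form coefficient a_{2j} = (-1)^j / (2^(2j+n) j! (n+j)!)
    return Fraction((-1)**j, 2**(2 * j + n) * factorial(j) * factorial(n + j))

n = 3
for j in range(1, 6):
    k = 2 * j
    print(a(j, n) == -a(j - 1, n) / (k * (k + 2 * n)))   # True every time
```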

Substituting these coefficients into the series gives

S(s) = ∑_{j=0}^∞ ((-1)^j/(j! (n+j)!)) (s/2)^{2j+n}

This function is known as the Bessel function of order n, usually written J_n(s).
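Partial sums of this series converge quickly, and we can compare them against SciPy's built-in implementation scipy.special.jv; the orders, the evaluation point, and the 30 terms below are arbitrary choices.

```python
from math import factorial
from scipy.special import jv

def J_series(n, s, terms=30):
    # partial sum of the series derived above
    return sum((-1)**j / (factorial(j) * factorial(n + j)) * (s / 2)**(2 * j + n)
               for j in range(terms))

for n in (0, 1, 5):
    print(J_series(n, 3.7), jv(n, 3.7))   # each pair agrees closely
```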

Eigenvalues and Eigenfunctions of the Laplacian

Following the reasoning above we have that

u(r,θ) = R(r) Θ(θ)

where

Θ(θ) = sin(n θ)

or

Θ(θ) = cos(n θ)

and

R(r) = S(√λ r) = J_n(√λ r)

If we are going to require that u(A,θ) = 0 we must have that

R(A) = S(√λ A) = J_n(√λ A) = 0

or that √λ A must be a zero of J_n(s). It turns out that for each value of n ≥ 0 the Bessel function J_n(s) has an infinite sequence s_{n,m} of zeroes. Corresponding to each of these zeroes is an eigenvalue of the Laplacian:

√λ_{n,m} A = s_{n,m}

λ_{n,m} = (s_{n,m})²/A²

Corresponding to each of these eigenvalues are two eigenfunctions

φ^{(1)}_{n,m}(r,θ) = J_n(s_{n,m} r / A) cos(n θ)

φ^{(2)}_{n,m}(r,θ) = J_n(s_{n,m} r / A) sin(n θ)

In the n = 0 case we have instead

φ_{0,m}(r,θ) = J_0(s_{0,m} r / A)
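SciPy provides the zeroes s_{n,m} directly through scipy.special.jn_zeros, so we can spot-check that one of these eigenfunctions really satisfies -Δφ = λφ. This is a sketch assuming disk radius A = 1, with the indices and the test point chosen arbitrarily.

```python
import numpy as np
from scipy.special import jv, jvp, jn_zeros

A = 1.0                                  # disk radius (assumed)
n, m = 1, 2
s = jn_zeros(n, m)[m - 1]                # s_{n,m}, the m-th zero of J_n
lam = (s / A)**2                         # eigenvalue lambda_{n,m}

r, th = 0.6, 1.1                         # arbitrary test point
phi = jv(n, s * r / A) * np.cos(n * th)  # phi^(1)_{n,m}

# polar Laplacian of phi: phi_rr + phi_r / r + phi_thth / r^2
lap = ((s / A)**2 * jvp(n, s * r / A, 2) * np.cos(n * th)
       + (s / A) * jvp(n, s * r / A, 1) * np.cos(n * th) / r
       - n**2 * phi / r**2)
print(-lap, lam * phi)                   # the two values agree
```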

Orthogonality Properties of the Eigenfunctions

Since the operator -Δ is a symmetric operator, eigenfunctions corresponding to distinct eigenvalues must be orthogonal. It is also possible to show that eigenfunctions corresponding to the same eigenvalue are orthogonal:

(φ^{(1)}_{n,m}(r,θ), φ^{(2)}_{n,m}(r,θ)) = 0
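We can confirm this orthogonality numerically by integrating over the disk, remembering that the inner product carries the polar area element r dr dθ. A sketch with radius A = 1 and the indices n = 2, m = 1 chosen arbitrarily:

```python
import numpy as np
from scipy.special import jv, jn_zeros
from scipy.integrate import dblquad

A = 1.0
n, m = 2, 1
s = jn_zeros(n, m)[m - 1]                # s_{n,m}

phi1 = lambda r, th: jv(n, s * r / A) * np.cos(n * th)
phi2 = lambda r, th: jv(n, s * r / A) * np.sin(n * th)

# inner product over the disk; the extra factor r is the polar area element
val, err = dblquad(lambda r, th: phi1(r, th) * phi2(r, th) * r,
                   -np.pi, np.pi, 0.0, A)
print(val)                               # ~ 0
```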

Solving PDEs on the disk

We are now in a position to solve PDEs containing the Laplace operator on the disk. Consider for example the Poisson equation with Dirichlet boundary conditions on the disk.

-Δ u = f(r,θ)

Using the method of eigenfunction expansions, we start by assuming that

u(r,θ) = ∑_{m=1}^∞ a_{0,m} φ_{0,m}(r,θ) + ∑_{n=1}^∞ ∑_{m=1}^∞ (a_{n,m} φ^{(1)}_{n,m}(r,θ) + b_{n,m} φ^{(2)}_{n,m}(r,θ))

Since φ^{(1)}_{n,m}(r,θ) and φ^{(2)}_{n,m}(r,θ) are both eigenfunctions of the negative Laplacian with eigenvalue λ_{n,m} we have that

-Δu = ∑_{m=1}^∞ a_{0,m} λ_{0,m} φ_{0,m}(r,θ) + ∑_{n=1}^∞ ∑_{m=1}^∞ λ_{n,m} (a_{n,m} φ^{(1)}_{n,m}(r,θ) + b_{n,m} φ^{(2)}_{n,m}(r,θ))

To solve for the coefficients a_{n,m} and b_{n,m} we need to construct the eigenfunction expansion for the forcing function

f(r,θ) = ∑_{m=1}^∞ c_{0,m} φ_{0,m}(r,θ) + ∑_{n=1}^∞ ∑_{m=1}^∞ (c_{n,m} φ^{(1)}_{n,m}(r,θ) + d_{n,m} φ^{(2)}_{n,m}(r,θ))

The coefficients c_{n,m} and d_{n,m} are computed in the usual way via the orthogonality properties of the eigenfunctions.

c_{0,m} = (f, φ_{0,m}(r,θ)) / (φ_{0,m}(r,θ), φ_{0,m}(r,θ))

c_{n,m} = (f, φ^{(1)}_{n,m}(r,θ)) / (φ^{(1)}_{n,m}(r,θ), φ^{(1)}_{n,m}(r,θ))

d_{n,m} = (f, φ^{(2)}_{n,m}(r,θ)) / (φ^{(2)}_{n,m}(r,θ), φ^{(2)}_{n,m}(r,θ))

Once we have computed these coefficients, matching the two expansions term by term in -Δu = f allows us to solve for the coefficients a_{n,m} and b_{n,m}.

a_{0,m} = c_{0,m}/λ_{0,m} = A² c_{0,m}/(s_{0,m})²

a_{n,m} = c_{n,m}/λ_{n,m} = A² c_{n,m}/(s_{n,m})²

b_{n,m} = d_{n,m}/λ_{n,m} = A² d_{n,m}/(s_{n,m})²
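As a concrete test of the whole procedure, take f = 1 on the unit disk. Only the radially symmetric modes φ_{0,m} appear in the expansion, and the exact solution of -Δu = 1 with u(A,θ) = 0 is u = (A² - r²)/4, so we can compare the truncated eigenfunction series against it. A sketch (the radius A = 1, the 20 modes, and the evaluation point are all arbitrary choices):

```python
import numpy as np
from scipy.special import jv, jn_zeros
from scipy.integrate import quad

A, M = 1.0, 20
zeros = jn_zeros(0, M)                    # s_{0,m} for m = 1..M

# c_{0,m} = (f, phi_{0,m}) / (phi_{0,m}, phi_{0,m}) with f = 1; the theta
# integrals cancel for radial modes, leaving the weight r dr in both integrals
c = []
for s in zeros:
    num, _ = quad(lambda r: jv(0, s * r / A) * r, 0, A)
    den, _ = quad(lambda r: jv(0, s * r / A)**2 * r, 0, A)
    c.append(num / den)

# a_{0,m} = c_{0,m} / lambda_{0,m} = A^2 c_{0,m} / s_{0,m}^2
r = 0.3
u = sum(cm * (A / s)**2 * jv(0, s * r / A) for cm, s in zip(c, zeros))
print(u, (A**2 - r**2) / 4)               # truncated series vs exact solution
```

The two printed values agree to several decimal places, and adding more modes improves the match.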
