3 Laplace's Equation
We now turn to studying Laplace's equation
\[
\Delta u = 0
\]
and its inhomogeneous version, Poisson's equation,
\[
-\Delta u = f.
\]
We say a function $u$ satisfying Laplace's equation is a harmonic function.

3.1 The Fundamental Solution

Consider Laplace's equation in $\mathbb{R}^n$,
\[
\Delta u = 0, \qquad x \in \mathbb{R}^n.
\]
Clearly, there are a lot of functions $u$ which satisfy this equation. In particular, any constant function is harmonic. In addition, any function of the form $u(x) = a_1 x_1 + \cdots + a_n x_n$ for constants $a_i$ is also a solution. Of course, we can list a number of others. Here, however, we are interested in finding a particular solution of Laplace's equation which will allow us to solve Poisson's equation.

Given the symmetric nature of Laplace's equation, we look for a radial solution. That is, we look for a harmonic function $u$ on $\mathbb{R}^n$ such that $u(x) = v(|x|)$. In addition to being a natural choice due to the symmetry of Laplace's equation, radial solutions are natural to look for because they reduce a PDE to an ODE, which is generally easier to solve.

If $u(x) = v(|x|)$, then
\[
u_{x_i} = \frac{x_i}{|x|}\, v'(|x|), \qquad |x| \neq 0,
\]
which implies
\[
u_{x_i x_i} = \frac{1}{|x|}\, v'(|x|) - \frac{x_i^2}{|x|^3}\, v'(|x|) + \frac{x_i^2}{|x|^2}\, v''(|x|), \qquad |x| \neq 0.
\]
Therefore,
\[
\Delta u = \frac{n-1}{|x|}\, v'(|x|) + v''(|x|).
\]
Letting $r = |x|$, we see that if $u(x) = v(|x|)$ is a radial solution of Laplace's equation, then $v$ satisfies
\[
\frac{n-1}{r}\, v'(r) + v''(r) = 0.
\]
Therefore,
\[
\begin{aligned}
v'' &= \frac{1-n}{r}\, v' \\
\implies \frac{v''}{v'} &= \frac{1-n}{r} \\
\implies \ln v' &= (1-n)\ln r + C \\
\implies v'(r) &= \frac{C}{r^{n-1}},
\end{aligned}
\]
which implies
\[
v(r) = \begin{cases} c_1 \ln r + c_2 & n = 2 \\[4pt] \dfrac{c_1}{(2-n)r^{n-2}} + c_2 & n \ge 3. \end{cases}
\]
From these calculations, we see that for any constants $c_1, c_2$, the function
\[
u(x) \equiv \begin{cases} c_1 \ln |x| + c_2 & n = 2 \\[4pt] \dfrac{c_1}{(2-n)|x|^{n-2}} + c_2 & n \ge 3 \end{cases} \tag{3.1}
\]
for $x \in \mathbb{R}^n$, $|x| \neq 0$, is a solution of Laplace's equation in $\mathbb{R}^n - \{0\}$. We notice that the function $u$ defined in (3.1) satisfies $\Delta u(x) = 0$ for $x \neq 0$, but at $x = 0$, $\Delta u(0)$ is undefined. We claim that we can choose constants $c_1$ and $c_2$ appropriately so that
\[
-\Delta_x u = \delta_0
\]
in the sense of distributions. Recall that $\delta_0$ is the distribution which is defined as follows. For all $\phi \in \mathcal{D}$,
\[
(\delta_0, \phi) = \phi(0).
\]
Below, we will prove this claim. For now, though, assume we can prove this. That is, assume we can find constants $c_1, c_2$ such that $u$ defined in (3.1) satisfies
\[
-\Delta_x u = \delta_0. \tag{3.2}
\]
Let $\Phi$ denote the solution of (3.2). Then, define
\[
v(x) = \int_{\mathbb{R}^n} \Phi(x-y) f(y)\, dy.
\]
Formally, we compute the Laplacian of $v$ as follows,
\[
-\Delta_x v = -\int_{\mathbb{R}^n} \Delta_x \Phi(x-y) f(y)\, dy = -\int_{\mathbb{R}^n} \Delta_y \Phi(x-y) f(y)\, dy = \int_{\mathbb{R}^n} \delta_x f(y)\, dy = f(x).
\]
That is, $v$ is a solution of Poisson's equation! Of course, this set of equalities above is entirely formal. We have not proven anything yet. However, we have motivated a solution formula for Poisson's equation from a solution to (3.2).

We now return to using the radial solution (3.1) to find a solution of (3.2). Define the function $\Phi$ as follows. For $|x| \neq 0$, let
\[
\Phi(x) = \begin{cases} -\dfrac{1}{2\pi} \ln |x| & n = 2 \\[4pt] \dfrac{1}{n(n-2)\alpha(n)} \dfrac{1}{|x|^{n-2}} & n \ge 3, \end{cases} \tag{3.3}
\]
where $\alpha(n)$ is the volume of the unit ball in $\mathbb{R}^n$. We see that $\Phi$ satisfies Laplace's equation on $\mathbb{R}^n - \{0\}$. As we will show in the following claim, $\Phi$ satisfies $-\Delta_x \Phi = \delta_0$. For this reason, we call $\Phi$ the fundamental solution of Laplace's equation.
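As a quick symbolic sanity check (a sketch, not part of the original notes), the following verifies with sympy that the radial profile found above for $n \ge 3$ solves $v'' + \frac{n-1}{r}v' = 0$, and that the $n = 3$ fundamental solution $\Phi(x) = \frac{1}{4\pi|x|}$ (here $n(n-2)\alpha(n) = 4\pi$) is harmonic away from the origin.

```python
# Symbolic sanity check (a sketch, not part of the notes): verify that the
# radial profile v(r) satisfies v'' + (n-1)/r v' = 0, and that the n = 3
# fundamental solution Phi(x) = 1/(4*pi*|x|) is harmonic for x != 0.
import sympy as sp

r, c1, c2 = sp.symbols('r c1 c2', positive=True)
n = sp.Symbol('n', positive=True)

v = c1 / ((2 - n) * r**(n - 2)) + c2                  # radial solution, n >= 3
ode = sp.diff(v, r, 2) + (n - 1) / r * sp.diff(v, r)
print(sp.simplify(ode))                               # 0

x, y, z = sp.symbols('x y z', real=True)
Phi3 = 1 / (4 * sp.pi * sp.sqrt(x**2 + y**2 + z**2))  # n = 3: n(n-2)*alpha(n) = 4*pi
lap = sp.diff(Phi3, x, 2) + sp.diff(Phi3, y, 2) + sp.diff(Phi3, z, 2)
print(sp.simplify(lap))                               # 0 (valid away from the origin)
```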
Claim 1. For $\Phi$ defined in (3.3), $\Phi$ satisfies $-\Delta_x \Phi = \delta_0$ in the sense of distributions. That is, for all $g \in \mathcal{D}$,
\[
-\int_{\mathbb{R}^n} \Phi(x) \Delta_x g(x)\, dx = g(0).
\]

Proof. Let $F_\Phi$ be the distribution associated with the fundamental solution $\Phi$. That is, let $F_\Phi : \mathcal{D} \to \mathbb{R}$ be defined such that
\[
(F_\Phi, g) = \int_{\mathbb{R}^n} \Phi(x) g(x)\, dx
\]
for all $g \in \mathcal{D}$. Recall that the derivative of a distribution $F$ is defined as the distribution $G$ such that
\[
(G, g) = -(F, g')
\]
for all $g \in \mathcal{D}$. Therefore, the distributional Laplacian of $\Phi$ is defined as the distribution $F_{\Delta\Phi}$ such that
\[
(F_{\Delta\Phi}, g) = (F_\Phi, \Delta g)
\]
for all $g \in \mathcal{D}$. We will show that
\[
(F_\Phi, \Delta g) = -(\delta_0, g) = -g(0),
\]
and, therefore,
\[
(F_{\Delta\Phi}, g) = -g(0),
\]
which means $-\Delta_x \Phi = \delta_0$ in the sense of distributions.

By definition,
\[
(F_\Phi, \Delta g) = \int_{\mathbb{R}^n} \Phi(x) \Delta g(x)\, dx.
\]
Now, we would like to apply the divergence theorem, but $\Phi$ has a singularity at $x = 0$. We get around this by breaking up the integral into two pieces: one piece consisting of the ball of radius $\delta$ about the origin, $B(0,\delta)$, and the other piece consisting of the complement of this ball in $\mathbb{R}^n$. Therefore, we have
\[
(F_\Phi, \Delta g) = \int_{\mathbb{R}^n} \Phi(x) \Delta g(x)\, dx = \int_{B(0,\delta)} \Phi(x) \Delta g(x)\, dx + \int_{\mathbb{R}^n - B(0,\delta)} \Phi(x) \Delta g(x)\, dx \equiv I + J.
\]

We look first at term $I$. For $n = 2$, term $I$ is bounded as follows,
\[
\left| \int_{B(0,\delta)} -\frac{1}{2\pi} \ln |x|\, \Delta g(x)\, dx \right| \le C |\Delta g|_{L^\infty} \left| \int_{B(0,\delta)} \ln |x|\, dx \right| \le C \left| \int_0^{2\pi}\!\! \int_0^\delta (\ln r)\, r\, dr\, d\theta \right| \le C \left| \int_0^\delta (\ln r)\, r\, dr \right| \le C\, |\ln \delta|\, \delta^2.
\]
For $n \ge 3$, term $I$ is bounded as follows,
\[
\left| \int_{B(0,\delta)} \frac{1}{n(n-2)\alpha(n)} \frac{1}{|x|^{n-2}}\, \Delta g(x)\, dx \right| \le C |\Delta g|_{L^\infty} \int_{B(0,\delta)} \frac{dx}{|x|^{n-2}} \le C \int_0^\delta \left( \int_{\partial B(0,r)} \frac{dS(y)}{|y|^{n-2}} \right) dr = C \int_0^\delta \frac{1}{r^{n-2}} \left( \int_{\partial B(0,r)} dS(y) \right) dr = C \int_0^\delta \frac{1}{r^{n-2}}\, n\alpha(n) r^{n-1}\, dr = C\, n\alpha(n) \int_0^\delta r\, dr = \frac{C\, n\alpha(n)}{2}\, \delta^2.
\]
Therefore, as $\delta \to 0^+$, $|I| \to 0$.

Next, we look at term $J$. Applying the divergence theorem, we have
\[
\begin{aligned}
\int_{\mathbb{R}^n - B(0,\delta)} \Phi(x) \Delta_x g(x)\, dx &= \int_{\mathbb{R}^n - B(0,\delta)} \Delta_x \Phi(x)\, g(x)\, dx - \int_{\partial(\mathbb{R}^n - B(0,\delta))} \frac{\partial \Phi}{\partial \nu}\, g(x)\, dS(x) + \int_{\partial(\mathbb{R}^n - B(0,\delta))} \Phi(x)\, \frac{\partial g}{\partial \nu}\, dS(x) \\
&= -\int_{\partial(\mathbb{R}^n - B(0,\delta))} \frac{\partial \Phi}{\partial \nu}\, g(x)\, dS(x) + \int_{\partial(\mathbb{R}^n - B(0,\delta))} \Phi(x)\, \frac{\partial g}{\partial \nu}\, dS(x) \\
&\equiv J_1 + J_2,
\end{aligned}
\]
using the fact that $\Delta_x \Phi(x) = 0$ for $x \in \mathbb{R}^n - B(0,\delta)$.

We first look at term $J_1$. Now, by assumption, $g \in \mathcal{D}$, and, therefore, $g$ vanishes at $\infty$. Consequently, we only need to calculate the integral over $\partial B(0,\delta)$, where $\nu$ is the outer normal to $\mathbb{R}^n - B(0,\delta)$. By a straightforward calculation, we see that
\[
\nabla_x \Phi(x) = -\frac{x}{n\alpha(n)|x|^n}.
\]
The outer unit normal to $\mathbb{R}^n - B(0,\delta)$ on $\partial B(0,\delta)$ is given by
\[
\nu = -\frac{x}{|x|}.
\]
Therefore, the normal derivative of $\Phi$ on $\partial B(0,\delta)$ is given by
\[
\frac{\partial \Phi}{\partial \nu} = \left( -\frac{x}{n\alpha(n)|x|^n} \right) \cdot \left( -\frac{x}{|x|} \right) = \frac{1}{n\alpha(n)|x|^{n-1}}.
\]
Therefore, $J_1$ can be written as
\[
-\int_{\partial B(0,\delta)} \frac{1}{n\alpha(n)|x|^{n-1}}\, g(x)\, dS(x) = -\frac{1}{n\alpha(n)\delta^{n-1}} \int_{\partial B(0,\delta)} g(x)\, dS(x),
\]
which is minus the average of $g$ over $\partial B(0,\delta)$, since $n\alpha(n)\delta^{n-1}$ is the surface area of $\partial B(0,\delta)$. Now if $g$ is a continuous function, then this average converges to $g(0)$ as $\delta \to 0$, and, therefore, $J_1 \to -g(0)$.

Lastly, we look at term $J_2$. Now using the fact that $g$ vanishes as $|x| \to +\infty$, we only need to integrate over $\partial B(0,\delta)$. Using the fact that $g \in \mathcal{D}$, and, therefore, infinitely differentiable, we have
\[
\left| \int_{\partial B(0,\delta)} \Phi(x)\, \frac{\partial g}{\partial \nu}\, dS(x) \right| \le \left| \frac{\partial g}{\partial \nu} \right|_{L^\infty(\partial B(0,\delta))} \int_{\partial B(0,\delta)} |\Phi(x)|\, dS(x) \le C \int_{\partial B(0,\delta)} |\Phi(x)|\, dS(x).
\]
Now first, for $n = 2$,
\[
\int_{\partial B(0,\delta)} |\Phi(x)|\, dS(x) = C \int_{\partial B(0,\delta)} |\ln |x||\, dS(x) \le C\, |\ln \delta| \int_{\partial B(0,\delta)} dS(x) = C\, |\ln \delta|\, (2\pi\delta) \le C\, \delta\, |\ln \delta|.
\]
Next, for $n \ge 3$,
\[
\int_{\partial B(0,\delta)} |\Phi(x)|\, dS(x) = C \int_{\partial B(0,\delta)} \frac{dS(x)}{|x|^{n-2}} \le \frac{C}{\delta^{n-2}} \int_{\partial B(0,\delta)} dS(x) = \frac{C}{\delta^{n-2}}\, n\alpha(n)\delta^{n-1} \le C\delta.
\]
Therefore, we conclude that term $J_2$ is bounded in absolute value by
\[
\begin{cases} C\,\delta\,|\ln \delta| & n = 2 \\ C\,\delta & n \ge 3. \end{cases}
\]
Therefore, $|J_2| \to 0$ as $\delta \to 0^+$.

Combining these estimates, we see that
\[
\int_{\mathbb{R}^n} \Phi(x) \Delta_x g(x)\, dx = \lim_{\delta \to 0^+} (I + J_1 + J_2) = -g(0).
\]
Therefore, our claim is proved.
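The claim can also be checked numerically. The following sketch (not part of the original notes; the Gaussian test function $g(x) = e^{-|x|^2}$ and the truncation radius are illustrative choices) approximates $-\int_{\mathbb{R}^2} \Phi(x)\Delta g(x)\, dx$ in polar coordinates and compares it with $g(0) = 1$. For this radial $g$ one has $\Delta g = (4r^2 - 4)e^{-r^2}$.

```python
# Numerical sanity check of Claim 1 in n = 2 (a sketch, not part of the notes):
#   -∫_{R^2} Φ(x) Δg(x) dx = -∫_0^∞ Φ(r) Δg(r) 2πr dr ≈ g(0) = 1
# for the illustrative test function g(x) = exp(-|x|^2).
import numpy as np

r = np.linspace(1e-8, 10.0, 200_001)             # radial grid; integrand -> 0 as r -> 0
Phi = -np.log(r) / (2.0 * np.pi)                 # fundamental solution, n = 2
lap_g = (4.0 * r**2 - 4.0) * np.exp(-r**2)       # Laplacian of g(x) = exp(-|x|^2)
integrand = -Phi * lap_g * 2.0 * np.pi * r       # polar-coordinate integrand
lhs = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))  # trapezoid rule
print(lhs)                                       # ≈ 1.0 = g(0)
```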
Solving Poisson's Equation. We now return to solving Poisson's equation
\[
-\Delta u = f, \qquad x \in \mathbb{R}^n.
\]
From our discussion before the above claim, we expect the function
\[
v(x) \equiv \int_{\mathbb{R}^n} \Phi(x-y) f(y)\, dy
\]
to give us a solution of Poisson's equation. We now prove that this is in fact true. First, we make a remark.

Remark. If we hope that the function $v$ defined above solves Poisson's equation, we must first verify that this integral actually converges. If we assume $f$ has compact support on some bounded set $K$ in $\mathbb{R}^n$, then we see that
\[
\left| \int_{\mathbb{R}^n} \Phi(x-y) f(y)\, dy \right| \le |f|_{L^\infty} \int_K |\Phi(x-y)|\, dy.
\]
If we additionally assume that $f$ is bounded, then $|f|_{L^\infty} \le C$.
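The remaining integral is finite because $\Phi$ is locally integrable despite its singularity at the origin. A short symbolic sketch (not part of the notes; the unit ball stands in for the bounded region of integration) computes $\int_{B(0,1)} |\Phi(y)|\, dy$ in polar and spherical coordinates for $n = 2$ and $n = 3$:

```python
# Local integrability of Phi near its singularity (a sketch, not part of the
# notes; the unit ball is an illustrative domain).  In polar/spherical
# coordinates, ∫_{B(0,1)} |Phi(y)| dy reduces to a 1-D integral in r.
import sympy as sp

r = sp.symbols('r', positive=True)

# n = 2:  ∫_{B(0,1)} |Phi(y)| dy = ∫_0^1 (1/(2π)) |ln r| · 2πr dr = ∫_0^1 (-ln r) r dr
I2 = sp.integrate(-sp.log(r) * r, (r, 0, 1))
print(I2)   # 1/4, finite

# n = 3:  ∫_{B(0,1)} |Phi(y)| dy = ∫_0^1 (1/(4πr)) · 4πr² dr = ∫_0^1 r dr
I3 = sp.integrate(r, (r, 0, 1))
print(I3)   # 1/2, finite
```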