Laplace's Equation

1. Equilibrium Phenomena

Consider a general conservation statement for a region $U$ in $R^n$ containing a material which is being transported through $U$ by a flux field $\vec F = \vec F(x,t)$. Let $u = u(x,t)$ denote the scalar concentration field for the material ($u$ equals the concentration at $(x,t)$). Note that $u$ is a scalar valued function, while $\vec F(x,t)$ is a vector valued function whose value at each $(x,t)$ is a vector whose direction is the direction of the material flow at $(x,t)$ and whose magnitude is proportional to the speed of the flow at $(x,t)$. In addition, suppose there is a scalar source density field denoted by $s(x,t)$. The value of this scalar at $(x,t)$ indicates the rate at which material is being created or destroyed at $(x,t)$. If $B$ denotes an arbitrary ball inside $U$, then for any time interval $[t_1,t_2]$ conservation of material requires that

$$\int_B u(x,t_2)\,dx = \int_B u(x,t_1)\,dx - \int_{t_1}^{t_2}\!\!\int_{\partial B} \vec F(x,t)\cdot\vec n(x)\,dS(x)\,dt + \int_{t_1}^{t_2}\!\!\int_B s(x,t)\,dx\,dt.$$

Now

$$\int_B u(x,t_2)\,dx - \int_B u(x,t_1)\,dx = \int_{t_1}^{t_2}\!\!\int_B \partial_t u(x,t)\,dx\,dt,$$

and, by the divergence theorem,

$$\int_{t_1}^{t_2}\!\!\int_{\partial B} \vec F(x,t)\cdot\vec n(x)\,dS(x)\,dt = \int_{t_1}^{t_2}\!\!\int_B \operatorname{div}\vec F(x,t)\,dx\,dt,$$

hence

$$\int_{t_1}^{t_2}\!\!\int_B \left[\partial_t u(x,t) + \operatorname{div}\vec F(x,t) - s(x,t)\right]dx\,dt = 0 \quad\text{for all } B \subset U \text{ and all } t_1,t_2. \tag{1.1}$$

Since the integrand here is assumed to be continuous, it follows that

$$\partial_t u(x,t) + \operatorname{div}\vec F(x,t) - s(x,t) = 0 \quad\text{for all } x \in U \text{ and all } t. \tag{1.2}$$
Equation (1.1) is the integral form of the conservation statement, while (1.2) is the differential form of the same statement. This conservation statement describes a large number of physical processes. We consider now a few special cases.

a) Transport: $u = u(x,t)$, $\vec F(x,t) = u(x,t)\vec V$ where $\vec V$ is constant, and $s(x,t) = 0$. In this case, the equation becomes

$$\partial_t u(x,t) + \vec V\cdot\operatorname{grad} u(x,t) = 0.$$

b) Steady Diffusion: $u = u(x)$, $\vec F(x,t) = -K\nabla u(x)$ where $K$ is a positive constant, and $s = s(x)$. In this case, the equation becomes

$$-K\operatorname{div}(\operatorname{grad} u(x)) = s(x), \quad\text{or}\quad -K\nabla^2 u(x) = s(x).$$

This is the equation that governs steady state diffusion of the contaminant through the region $U$. The equation is called Poisson's equation if $s(x) \neq 0$, and Laplace's equation when $s(x) = 0$. These are the equations we will study in this section.
Another situation which leads to Laplace's equation involves a steady state vector field $\vec V = \vec V(x)$ having the property that $\operatorname{div}\vec V(x) = 0$. When $\vec V$ denotes the velocity field for an incompressible fluid, the vanishing divergence expresses that $\vec V$ conserves mass. When $\vec V$ denotes the magnetic force field in a magnetostatic field, the vanishing divergence asserts that there are no magnetic sources. In the case that $\vec V$ represents the vector field of electric force, the equation is the statement that $U$ contains no electric charges. In addition to the equation $\operatorname{div}\vec V(x) = 0$, it may happen that $\vec V$ satisfies the equation $\operatorname{curl}\vec V(x) = 0$. This condition asserts that the field $\vec V$ is conservative (energy conserving). Moreover, it is a standard result in vector calculus that $\operatorname{curl}\vec V(x) = 0$ implies that $\vec V = -\operatorname{grad}\,u(x)$ for some scalar field $u = u(x)$. Then the pair of equations

$$\operatorname{div}\vec V(x) = 0 \quad\text{and}\quad \operatorname{curl}\vec V(x) = 0,$$

taken together, imply that

$$\nabla^2 u(x) = 0 \quad\text{and}\quad \vec V = -\operatorname{grad}\,u(x).$$

We say that the conservative field $\vec V$ is "derivable from the potential $u = u(x)$". To say that $u$ is a potential is to say that it satisfies Laplace's equation. The unifying feature of all of these physical models that lead to Laplace's equation is the fact that they are all in a state of equilibrium. Whatever forces are acting in each model, they have come to a state of equilibrium so that the state of the system remains constant in time. If the balance of the system is disturbed, then it will have to go through another transient process until the forces once again all balance each other and the system is in a new equilibrium state.

2. Harmonic Functions

A function $u = u(x)$ is said to be harmonic in $U \subset R^n$ if:

i) $u \in C^2(U)$; i.e., $u$, together with all its derivatives of order $\le 2$, is continuous in $U$

ii) $\nabla^2 u(x) = 0$ at each point of $U$

Note that in Cartesian coordinates,
$$\nabla^2 u(x) = \operatorname{div}(\nabla u(x)) = \left(\frac{\partial}{\partial x_1},\dots,\frac{\partial}{\partial x_n}\right)\cdot\begin{pmatrix}\partial u/\partial x_1\\ \vdots\\ \partial u/\partial x_n\end{pmatrix} = \frac{\partial^2 u}{\partial x_1^2} + \dots + \frac{\partial^2 u}{\partial x_n^2}.$$

It is clear from this that all linear functions are harmonic. A function depending on $x$ only through the radial variable $r = \sqrt{x_1^2 + \dots + x_n^2}$ is said to be a radial function. If $u$ is a radial function, then
$$\frac{\partial u}{\partial x_i} = u'(r)\frac{\partial r}{\partial x_i} \quad\text{and}\quad \frac{\partial r}{\partial x_i} = \frac12\left(x_1^2 + \dots + x_n^2\right)^{-1/2}2x_i = \frac{x_i}{r},$$

$$\frac{\partial^2 u}{\partial x_i^2} = u''(r)\left(\frac{\partial r}{\partial x_i}\right)^2 + u'(r)\frac{\partial}{\partial x_i}\left(\frac{x_i}{r}\right) = u''(r)\frac{x_i^2}{r^2} + u'(r)\left(\frac{1}{r} - \frac{x_i^2}{r^3}\right),$$

and

$$\nabla^2 u(x) = \sum_{i=1}^n \frac{\partial^2 u}{\partial x_i^2} = u''(r)\sum_{i=1}^n\frac{x_i^2}{r^2} + u'(r)\sum_{i=1}^n\left(\frac{1}{r} - \frac{x_i^2}{r^3}\right) = u''(r) + u'(r)\left(\frac{n}{r} - \frac{1}{r}\right) = u''(r) + \frac{n-1}{r}\,u'(r).$$
We see from this computation that the radial function $u = u_n(r)$ is harmonic for various $n$ if:
$n = 1$: $u_1''(r) = 0$; i.e., $u_1(r) = Ar + B$

$n = 2$: $u_2''(r) + \frac{1}{r}u_2'(r) = \frac{1}{r}\frac{d}{dr}\left(r\,u_2'(r)\right) = 0$; i.e., $u_2(r) = C\ln r$

$n > 2$: $u_n''(r) + \frac{n-1}{r}u_n'(r) = r^{1-n}\frac{d}{dr}\left(r^{n-1}u_n'(r)\right) = 0$; i.e., $u_n(r) = C\,r^{2-n}$

Note also that since $\nabla^2(\partial u/\partial x_i) = (\partial/\partial x_i)\nabla^2 u$ for any $i$, it follows that every derivative of a harmonic function is itself harmonic. Of course this presupposes that the derivative exists, but it will be shown that every harmonic function is automatically infinitely differentiable, so every derivative exists and is therefore harmonic.
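The radial solutions above can be checked numerically. The following sketch (not part of the notes) applies a central-difference Laplacian to $\ln r$ in two dimensions and to $r^{-1}$ in three, verifying that both are harmonic away from the origin, and that $\ln r$ is not harmonic in three dimensions.

```python
import math

def laplacian(f, p, h=1e-4):
    """Central-difference approximation of the Laplacian of f at the point p."""
    total = 0.0
    for i in range(len(p)):
        q_plus = list(p);  q_plus[i] += h
        q_minus = list(p); q_minus[i] -= h
        total += (f(q_plus) - 2.0 * f(p) + f(q_minus)) / h**2
    return total

r = lambda p: math.sqrt(sum(c * c for c in p))

print(laplacian(lambda p: math.log(r(p)), [0.7, 0.4]))       # ~ 0 : log r, n = 2
print(laplacian(lambda p: 1.0 / r(p), [0.5, 0.3, 0.8]))      # ~ 0 : 1/r, n = 3
print(laplacian(lambda p: math.log(r(p)), [0.5, 0.3, 0.8]))  # 1/r^2, not harmonic for n = 3
```

The point $(0.7, 0.4)$ and the step size $h$ are arbitrary choices; any point away from the origin gives the same conclusion.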
It is interesting to note that if $u$ and $u^2$ are both harmonic, then $u$ must be constant. To see this, write

$$\nabla^2(u^2) = \operatorname{div}(\operatorname{grad}(u^2)) = \operatorname{div}(2u\nabla u) = 2\nabla u\cdot\nabla u + 2u\nabla^2 u = 2|\nabla u|^2.$$

Then $\nabla^2(u^2) = 0$ implies $|\nabla u|^2 = 0$, which is to say, $u$ is constant. Evidently, then, the product of harmonic functions need not be harmonic.
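A quick numerical illustration of this remark (an assumed example, in the spirit of Problem 6 below): $u = x$ and $v = x$ are harmonic, but their product $x^2$ is not, while the product $xy$ of the harmonic functions $x$ and $y$ happens to be harmonic.

```python
def lap2(f, x, y, h=1e-4):
    """Five-point finite-difference Laplacian of f(x, y)."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / h**2

print(lap2(lambda x, y: x * x, 0.3, 0.7))  # ~ 2 : x^2 is not harmonic
print(lap2(lambda x, y: x * y, 0.3, 0.7))  # ~ 0 : xy is harmonic
```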
It is easy to see that any linear combination of harmonic functions is harmonic, so the harmonic functions form a linear space. It is also easy to see that if $u = u(x)$ is harmonic on $R^n$, then for any $z \in R^n$ the translate $v(x) = u(x - z)$ is harmonic, as is the scaled function $w(x) = u(\lambda x)$ for any scalar $\lambda$. Finally, $\nabla^2$ is invariant under orthogonal transformations. To see this, suppose coordinates $x$ and $y$ are related by
$$Qx = y = \begin{pmatrix} Q_{11}x_1 + \dots + Q_{1n}x_n\\ \vdots\\ Q_{n1}x_1 + \dots + Q_{nn}x_n \end{pmatrix}.$$

Then $\nabla_x = (\partial/\partial x_1,\dots,\partial/\partial x_n)$ and

$$\frac{\partial}{\partial x_i} = \frac{\partial y_1}{\partial x_i}\frac{\partial}{\partial y_1} + \dots + \frac{\partial y_n}{\partial x_i}\frac{\partial}{\partial y_n} = Q_{1i}\frac{\partial}{\partial y_1} + \dots + Q_{ni}\frac{\partial}{\partial y_n} = (\text{i-th column of } Q)\cdot\nabla_y;$$

i.e., $\nabla_x = Q^T\nabla_y$ and $\nabla_x^T = \nabla_y^T\,Q$. Then

$$\nabla_x^2 = \nabla_x^T\cdot\nabla_x = \nabla_y^T\,QQ^T\,\nabla_y = \nabla_y^T\nabla_y = \nabla_y^2, \quad\text{since } QQ^T = I.$$

A transformation $Q$ with this property, $QQ^T = I$, is said to be an orthogonal transformation. Such transformations include rotations and reflections.
Problem 6: Suppose $u$ and $v$ are both harmonic on $R^3$. Show that, in general, the product of $u$ times $v$ is not harmonic. Give one or more examples of a special case where the product does turn out to be harmonic.

3. Integral Identities

Let $U$ denote a bounded, open, connected set in $R^n$ having a smooth boundary $\partial U$. This is sufficient in order for the divergence theorem to be valid on $U$. That is, if $\vec F(x)$ denotes a smooth vector field over $U$ (i.e., $\vec F \in C(\bar U)\cap C^1(U)$), and if $\vec n(x)$ denotes the outward unit normal to $\partial U$ at $x \in \partial U$, then the divergence theorem asserts that

$$\int_U \operatorname{div}\vec F\,dx = \int_{\partial U}\vec F\cdot\vec n\,dS(x). \tag{3.1}$$

Consider the integral identity (3.1) in the special case that $\vec F(x) = \nabla u(x)$ for $u \in C^1(\bar U)\cap C^2(U)$. Then

$$\operatorname{div}\vec F(x) = \operatorname{div}(\nabla u(x)) = \nabla^2 u(x)$$

and $\vec F\cdot\vec n = \nabla u\cdot\vec n = \partial_N u(x)$ (the normal derivative of $u$). Then (3.1) becomes

$$\int_U \nabla^2 u(x)\,dx = \int_{\partial U}\partial_N u(x)\,dS(x). \tag{3.2}$$

The identity (3.2) is known as Green's first identity. If functions $u$ and $v$ both belong to $C^1(\bar U)\cap C^2(U)$ and if $\vec F(x) = v(x)\nabla u(x)$, then

$$\operatorname{div}\vec F(x) = \operatorname{div}(v(x)\nabla u(x)) = v(x)\nabla^2 u(x) + \nabla u\cdot\nabla v$$

and $\vec F\cdot\vec n = v(x)\nabla u\cdot\vec n = v\,\partial_N u(x)$, and, with this choice for $\vec F$, (3.1) becomes Green's second identity,
$$\int_U \left(v(x)\nabla^2 u(x) + \nabla u\cdot\nabla v\right)dx = \int_{\partial U} v(x)\,\partial_N u(x)\,dS(x). \tag{3.3}$$

Finally, writing (3.3) with $u$ and $v$ reversed, and subtracting the result from (3.3), we obtain Green's symmetric identity,

$$\int_U \left(v(x)\nabla^2 u(x) - u(x)\nabla^2 v(x)\right)dx = \int_{\partial U}\left(v(x)\,\partial_N u(x) - u(x)\,\partial_N v(x)\right)dS(x). \tag{3.4}$$

Problem 7: Let $u = u(x,y,z)$ be a smooth function on $R^3$ and let $A$ denote a 3 by 3 matrix whose entries are all smooth functions on $R^3$. Let $\vec F = A\nabla u$. If $U$ denotes a bounded open set in $R^3$ having smooth boundary $\partial U$, then find a surface integral over the boundary whose value equals the integral of the divergence of $\vec F$ over $U$. If $v = v(x,y,z)$ is also a smooth function on $R^3$, then write the integral of $v\operatorname{div}\vec F$ over $U$ as the sum of 2 integrals, one of which
is a surface integral over $\partial U$.

4. The Mean Value Theorem for Harmonic Functions

We begin by introducing some notation:
$B_r(a) = \{x \in R^n : |x - a| < r\}$, the open ball of radius $r$ with center at $x = a$

$\bar B_r(a) = \{x \in R^n : |x - a| \le r\}$, the closed ball of radius $r$ with center at $x = a$

$S_r(a) = \{x \in R^n : |x - a| = r\}$, the surface of the ball of radius $r$ with center at $x = a$

Let $A_n$ denote the n-dimensional volume of $B_1(0)$. Then $A_2 = \pi$, $A_3 = 4\pi/3$, and, in general, $A_n = \pi^{n/2}/\Gamma(n/2+1)$. Then the volume of the n-ball of radius $r$ is $r^n A_n$. Also let $S_n$ denote the area of the (n-1)-dimensional surface of $B_1(0)$ in $R^n$ (i.e., $S_n$ is the area of $\partial B_1(0)$). Then $S_n = nA_n$ and the area of $\partial B_r(0)$ is equal to $nA_n r^{n-1}$. In particular, $S_2(r) = 2\pi r$, $S_3(r) = 4\pi r^2$, etc. We will also find it convenient to introduce the notation

$$\fint_{B_r(a)} f(x)\,dx = \frac{1}{A_n r^n}\int_{B_r(a)} f(x)\,dx = \text{average value of } f(x) \text{ over } B_r(a)$$

and

$$\fint_{\partial B_r(a)} f(x)\,dS(x) = \frac{1}{S_n r^{n-1}}\int_{\partial B_r(a)} f(x)\,dS(x) = \text{average value of } f(x) \text{ over } \partial B_r(a).$$

Recall that it follows from Green's first identity that if $u(x)$ is harmonic in $U$, then for any ball $B_r(a)$ contained in $U$ we have
$$\int_{\partial B_r(a)}\partial_N u(x)\,dS(x) = \int_{B_r(a)}\nabla^2 u(x)\,dx = 0.$$

This simple observation is the key to the proof of the following theorem.

Theorem 4.1 (Mean Value Theorem for Harmonic Functions): Suppose $u \in C^2(U)$ and $\nabla^2 u(x) = 0$ for every $x$ in the bounded, open set $U$ in $R^n$. Then for every $B_r(x) \subset U$,

$$u(x) = \fint_{\partial B_r(x)} u(y)\,dS(y) = \fint_{B_r(x)} u(y)\,dy; \tag{4.1}$$

i.e., (4.1) asserts that for every $x$ in $U$, and $r > 0$ sufficiently small that $B_r(x)$ is contained in $U$, $u(x)$ is equal to the average value of $u$ over the surface $\partial B_r(x)$, and $u(x)$ is also equal to the average value of $u$ over the entire ball $B_r(x)$. A function with the property asserted by (4.1) is said to have the mean value property.
Proof: Fix a point $x$ in $U$ and an $r > 0$ such that $B_r(x)$ is contained in the open set $U$. Let

$$g(r) = \fint_{\partial B_r(x)} u(y)\,dS(y) = \fint_{\partial B_1(0)} u(x + rz)\,dS(z).$$

Here we used the change of variable $y = x + rz$, or $z = (y - x)/r$, so as $y$ ranges over $\partial B_r(x)$, $z$ ranges over $\partial B_1(0)$. Then

$$g'(r) = \fint_{\partial B_1(0)}\nabla u(x + rz)\cdot z\,dS(z) = \fint_{\partial B_r(x)}\nabla u(y)\cdot\frac{y - x}{r}\,dS(y).$$

It is evident that as $y$ ranges over $\partial B_r(x)$, $|y - x| = r$, hence $(y - x)/r$ is just the outward unit normal to the surface $\partial B_r(x)$, which means that

$$\nabla u(y)\cdot\frac{y - x}{r} = \partial_N u(y).$$

Then

$$g'(r) = \fint_{\partial B_r(x)}\partial_N u(y)\,dS(y) = \frac{r}{n}\fint_{B_r(x)}\nabla^2 u(y)\,dy = 0 \quad\text{(since $u$ is harmonic in $U$)}.$$

Now $g'(r) = 0$ implies that $g(r) = $ constant, which leads to

$$g(r) = \lim_{t\to 0} g(t) = \lim_{t\to 0}\fint_{\partial B_1(0)} u(x + tz)\,dS(z) = u(x);$$

i.e.,

$$u(x) = \fint_{\partial B_r(x)} u(y)\,dS(y) \quad\text{for all } r > 0 \text{ such that } B_r(x) \subset U.$$

Notice that this result also implies

$$\int_{B_r(x)} u(y)\,dy = \int_0^r\!\!\int_{\partial B_t(x)} u(y)\,dS(y)\,dt = \int_0^r u(x)\,S_n t^{n-1}\,dt = u(x)\,A_n r^n,$$

or,

$$u(x) = \frac{1}{A_n r^n}\int_{B_r(x)} u(y)\,dy = \fint_{B_r(x)} u(y)\,dy,$$

which completes the proof of the theorem. ■
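The surface mean value property is easy to test numerically. The following sketch (an assumed example, not from the notes) averages the harmonic polynomial $u(x,y) = x^2 - y^2$ over a circle and compares the result with the value at the center.

```python
import math

def u(x, y):
    """A harmonic polynomial: u_xx + u_yy = 2 - 2 = 0."""
    return x * x - y * y

def circle_average(f, a, b, r, n=1000):
    """Average of f over the circle of radius r centered at (a, b),
    computed with n equally spaced sample points."""
    total = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        total += f(a + r * math.cos(t), b + r * math.sin(t))
    return total / n

print(u(1.2, 0.5))                        # value at the center
print(circle_average(u, 1.2, 0.5, 0.7))   # the same value, for any radius
```

The center $(1.2, 0.5)$ and radius $0.7$ are arbitrary; by (4.1) the two printed values agree for every choice.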
The converse of theorem 4.1 is also true.

Theorem 4.2: Suppose $U$ is a bounded, open, connected set in $R^n$ and $u \in C^2(U)$ has the mean value property; i.e., for every $x$ in $U$ and for each $r > 0$ such that $B_r(x) \subset U$,

$$u(x) = \fint_{\partial B_r(x)} u(y)\,dS(y).$$

Then $\nabla^2 u(x) = 0$ in $U$.

Proof: If it is not the case that $\nabla^2 u(x) = 0$ throughout $U$, then there is some $B_r(x) \subset U$ such that $\nabla^2 u$ is (say) positive on $B_r(x)$. Then for $g(r)$ as in the proof of theorem 4.1,

$$0 = g'(r) = \fint_{\partial B_r(x)}\partial_N u(y)\,dS(y) = \frac{r}{n}\fint_{B_r(x)}\nabla^2 u(y)\,dy > 0.$$

This contradiction shows there can be no $B_r(x) \subset U$ on which $\nabla^2 u > 0$, and hence no point in $U$ where $\nabla^2 u(x)$ is different from zero. ■
For $u = u(x,y)$ a smooth function of two variables, we have

$$\partial_{xx} u(x,y) \approx \left(u(x+h,y) - 2u(x,y) + u(x-h,y)\right)/h^2,$$
$$\partial_{yy} u(x,y) \approx \left(u(x,y+h) - 2u(x,y) + u(x,y-h)\right)/h^2,$$

hence

$$h^2\nabla^2 u(x,y) \approx -4u(x,y) + u(x+h,y) + u(x-h,y) + u(x,y+h) + u(x,y-h).$$

Then the equation $\nabla^2 u(x,y) = 0$ in $U$ is approximated by the equation

$$u(x,y) = \left(u(x+h,y) + u(x-h,y) + u(x,y+h) + u(x,y-h)\right)/4.$$

The expression on the right side of this equation is recognizable as an approximation for $\fint_{\partial B_r(x)} u(y)\,dS(y)$. Thus, in the discrete setting, the connection between the property of being harmonic and
the mean value property is more immediate.

5. Maximum-minimum Principles

The following theorem, known as the strong maximum-minimum principle, is an immediate consequence of the mean value property.

Theorem 5.1 (strong maximum-minimum principle): Suppose $U$ is a bounded, open, connected set in $R^n$ and $u$ is harmonic in $U$ and continuous on $\bar U$, the closure of $U$. Let $M$ and $m$ denote, respectively, the maximum and minimum values of $u$ on $\partial U$. Then either $u(x)$ is constant on $\bar U$ (so that $u(x) = m = M$), or else for every $x$ in $U$ we have $m < u(x) < M$.
Proof: Let $M$ denote the maximum value of $u(x)$ on $\bar U$ and suppose $u(x_0) = M$. If $x_0$ is inside $U$, then there exists an $r > 0$ such that $B_r(x_0) \subset U$ and $u(x) \le u(x_0)$ for all $x \in B_r(x_0)$. Suppose there is some $y_0$ in $B_r(x_0)$ such that $u(y_0) < u(x_0)$. But this contradicts the mean value property, since it implies

$$M = u(x_0) = \fint_{B_r(x_0)} u(y)\,dy < M.$$

It follows that $u(x) = u(x_0)$ for all $x$ in $B_r(x_0)$. Similarly, for any other point $y_0 \in U$, the assumption that $u(y_0) < u(x_0)$ leads to a contradiction of the mean value property. Then if $x_0$ is an interior point of $U$, we are forced to conclude that $u(x)$ is identically equal to $M$ on $U$ and, by continuity, on the closure $\bar U$. On the other hand, if $u$ is not constant on $U$, then $x_0$ must lie on the boundary of $U$. ■

Note that if $u = u(x,y)$ satisfies the discrete Laplace equation

$$u(x,y) = \left(u(x+h,y) + u(x-h,y) + u(x,y+h) + u(x,y-h)\right)/4$$

on a square grid, then $u$ can have neither a max nor a min at an interior point of the grid, since at such a point the left side of the equation could not equal the right side. At an interior maximum, the left side would be greater than all four of the values on the right side, preventing equality. A similar situation would apply at an interior minimum. Unless $u$ is constant on the grid, the only possible location for an extreme value is at a boundary point of the grid.

There is a weaker version of theorem 5.1 that is based on simple calculus arguments.

Theorem 5.2 (Weak Maximum-minimum principle): Suppose $U$ is a bounded, open, connected set in $R^n$ and $u \in C(\bar U)\cap C^2(U)$. Let $M$ and $m$ denote, respectively, the maximum and minimum values of $u$ on $\partial U$. Then

(a) $-\nabla^2 u(x) \le 0$ in $U$ implies $u(x) \le M$ for all $x \in \bar U$

(b) $-\nabla^2 u(x) \ge 0$ in $U$ implies $u(x) \ge m$ for all $x \in \bar U$

(c) $-\nabla^2 u(x) = 0$ in $U$ implies $m \le u(x) \le M$ for all $x \in \bar U$

Proof of (a): The argument we plan to use cannot be applied directly to $u(x)$. Instead, for $\varepsilon > 0$, let $v(x) = u(x) + \varepsilon|x|^2$ for $x \in U$ and note that

$$-\nabla^2 v(x) = -\nabla^2 u(x) - 2n\varepsilon < 0 \quad\text{for all } x \text{ in } U.$$
It follows that $v(x)$ can have no interior maximum, since at such a point $x_0$ we would have

$$\frac{\partial v}{\partial x_i} = 0 \quad\text{and}\quad \frac{\partial^2 v}{\partial x_i^2} \le 0, \quad 1 \le i \le n, \text{ at } x = x_0.$$

This is in contradiction to the previous inequality, since it implies $-\nabla^2 v(x_0) \ge 0$. This allows us to conclude that $v(x)$ has no interior max, and $v(x)$ must therefore assume its maximum value at a point on the boundary of $U$. Now $U$ is bounded, so for some $R$ sufficiently large we have $U \subset B_R(0)$, and this implies the following bound on $\max_{x\in\bar U} v(x)$:

$$\max_{x\in\bar U} v(x) \le \max_{x\in\partial U} v(x) \le \max_{x\in\partial U}\left(u(x) + \varepsilon|x|^2\right) \le M + \varepsilon R^2.$$

Finally, we have

$$u(x) \le v(x) \le M + \varepsilon R^2 \quad\text{for all } x \text{ in } \bar U \text{ and all } \varepsilon > 0.$$

Since this holds for all $\varepsilon > 0$, it follows that $u(x) \le M$ for all $x$ in $\bar U$. Statement (b) can be proved by a similar argument, or by applying (a) to $-u$. Then (c) follows from (a) and (b). ■
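The discrete Laplace equation on a square grid, discussed above, can be solved by simple relaxation: each interior value is repeatedly replaced by the average of its four neighbors. The sketch below (an assumed setup, not from the notes) does this and then checks that the converged solution satisfies the discrete mean value property, with every interior value lying strictly between the boundary extremes.

```python
def solve_discrete_laplace(boundary, n, sweeps=4000):
    """Gauss-Seidel relaxation for the discrete Laplace equation on an n x n
    grid; boundary(i, j) supplies the values along the edge of the grid."""
    u = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i in (0, n - 1) or j in (0, n - 1):
                u[i][j] = boundary(i, j)
    for _ in range(sweeps):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                u[i][j] = 0.25 * (u[i + 1][j] + u[i - 1][j]
                                  + u[i][j + 1] + u[i][j - 1])
    return u

n = 10
# Boundary data: 1 along one edge, 0 elsewhere.
u = solve_discrete_laplace(lambda i, j: 1.0 if i == 0 else 0.0, n)

# Each interior value equals the average of its four neighbors ...
res = max(abs(u[i][j] - 0.25 * (u[i+1][j] + u[i-1][j] + u[i][j+1] + u[i][j-1]))
          for i in range(1, n - 1) for j in range(1, n - 1))
# ... and lies strictly between the boundary extremes 0 and 1:
vals = [u[i][j] for i in range(1, n - 1) for j in range(1, n - 1)]
print(res, min(vals), max(vals))
```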
In the special case $n = 1$, it is easy to see why theorem 5.2 holds. In that case $U = (a,b)$ and $\nabla^2 u = u''(x)$, and the figure illustrates (a), (b) and (c).

[Figure: graphs over $(a,b)$ illustrating (a) $u(x) \le M$, (b) $u(x) \ge m$, (c) $m \le u(x) \le M$]
The following figure illustrates why it is necessary to have both of the hypotheses, $u \in C(\bar U)$ and $u \in C^2(U)$.

[Figure: two graphs, one with $u \in C(\bar U)$ but $u \notin C^2(U)$, the other with $u \notin C(\bar U)$ but $u \in C^2(U)$]
If $U$ is not bounded, then the max-min principle fails in general. For example, if $U$ denotes the unbounded wedge $\{(x,y) : y > |x|\}$ in $R^2$, then $u(x,y) = y^2 - x^2$ is harmonic in $U$, equals zero on the boundary of $U$, but is not the zero function inside $U$. An extended version of the max-min principle, due to E. Hopf, is frequently useful.
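The wedge counterexample is concrete enough to check directly (a sketch, not from the notes): $u = y^2 - x^2$ vanishes on the boundary rays $y = |x|$, is harmonic everywhere ($u_{xx} = -2$ and $u_{yy} = 2$ cancel), yet is large at interior points of the unbounded wedge.

```python
u = lambda x, y: y * y - x * x   # harmonic: u_xx + u_yy = -2 + 2 = 0

# On the boundary rays y = |x| the function vanishes identically:
boundary_vals = [u(t, abs(t)) for t in (-3.0, -1.0, 0.5, 2.0)]
# At an interior point of the wedge it is large and positive:
interior_val = u(0.0, 10.0)

print(boundary_vals, interior_val)
```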
Theorem 5.3: Suppose $U$ is a bounded, open, connected set in $R^n$ and $u \in C(\bar U)\cap C^2(U)$. Suppose also that $\nabla^2 u(x) = 0$ in $U$ and that $u$ is not constant. Finally, suppose $U$ is such that for each point $y$ on the boundary of $U$ there is a ball contained in $U$ with $y$ lying on the boundary of the ball. If $u(y) = M$, then $\partial_N u(y) > 0$, and if $u(y) = m$, then $\partial_N u(y) < 0$. (I.e., at a point on the boundary of $U$ where $u(x)$ assumes an extreme value, the normal derivative does not vanish.)
Problem 8: Let $u(x)$ be harmonic on $U$ and let $v(x) = |\nabla u(x)|^2$. Show that $v(x) \le \max_{x\in\partial U} v(x)$ for $x \in \bar U$. (Hint: compute $\nabla^2 v$ and show that it is non-negative on $U$.)
6. Consequences of the Mean Value Theorem and M-m Principles

Throughout this section, $U$ is assumed to be a bounded, open, connected set in $R^n$. We list now several consequences of the results of the previous two sections.
It is a standard result in elementary real analysis that if a sequence of continuous functions $\{u_m\}$ converges uniformly to a limit $u$ on a compact set $K$, then $u$ is also continuous. Moreover, for any open subset $W$ in $K$, we have

$$\lim_{m\to\infty}\int_W u_m\,dx = \int_W u\,dx.$$
Lemma 6.1: Suppose $\{u_m(x)\}$ is a sequence of functions which are harmonic in $U$ and which converge uniformly on $\bar U$. Then $u = \lim_{m\to\infty} u_m$ is harmonic in $U$.
Proof: Since each $u_m$ is harmonic in $U$, theorem 4.1 implies that for every ball $B_r(x) \subset U$ we have

$$u_m(x) = \fint_{\partial B_r(x)} u_m(y)\,dS(y) = \fint_{B_r(x)} u_m(y)\,dy.$$

The uniform convergence of the sequence on $\bar U$ implies that

$$u_m(x) \to u(x), \qquad \fint_{\partial B_r(x)} u_m(y)\,dS(y) \to \fint_{\partial B_r(x)} u(y)\,dS(y), \qquad \fint_{B_r(x)} u_m(y)\,dy \to \fint_{B_r(x)} u(y)\,dy,$$

hence

$$u(x) = \fint_{\partial B_r(x)} u(y)\,dS(y) = \fint_{B_r(x)} u(y)\,dy.$$

But this says $u$ has the mean value property, and so, by theorem 4.2, $u$ is harmonic. ■
Lemma 6.2: Suppose $u \in C(\bar U)\cap C^2(U)$ satisfies the conditions $\nabla^2 u(x) = 0$ in $U$ and $u(x) = 0$ on $\partial U$. Then $u(x) = 0$ for all $x$ in $U$.

Proof: The hypotheses $u \in C(\bar U)\cap C^2(U)$ and $\nabla^2 u(x) = 0$ in $U$ imply that $m \le u(x) \le M$ in $\bar U$. Then $u(x) = 0$ on $\partial U$ implies $m = M = 0$. ■
Lemma 6.2 asserts that the so-called Dirichlet boundary value problem

$$\nabla^2 u(x) = F(x), \quad x \in U, \qquad u(x) = g(x), \quad x \in \partial U,$$

has at most one solution in the class $C(\bar U)\cap C^2(U)$. Solutions having this degree of smoothness are called classical solutions of the Dirichlet boundary value problem. The partial differential equation is satisfied at each point of $U$ and the boundary condition is satisfied at each point of the boundary. Later we are going to consider solutions in a wider sense.

Lemma 6.3: For any $F \in C(U)$ and $g \in C(\partial U)$, there exists at most one $u \in C(\bar U)\cap C^2(U)$ satisfying

$$-\nabla^2 u(x) = F \text{ in } U, \qquad u(x) = g \text{ on } \partial U.$$

Proof: Suppose $u_1, u_2 \in C(\bar U)\cap C^2(U)$ both satisfy the conditions of the boundary value problem. Then $w = u_1 - u_2$ satisfies the hypotheses of lemma 6.2 and is therefore zero on the closure of $U$. Then $u_1 = u_2$ on the closure of $U$. ■
Lemma 6.4: Suppose $u \in C(\bar U)\cap C^2(U)$ satisfies $\nabla^2 u(x) = 0$ in $U$ and $u(x) = g$ on $\partial U$, where $g(x) \ge 0$. If $g(x_0) > 0$ at some point $x_0 \in \partial U$, then $u(x) > 0$ at every $x \in U$.

Proof: First, $g(x) \ge 0$ implies that $m = 0$. Then $g(x_0) > 0$ at some point $x_0 \in \partial U$ implies $M > 0$. It follows now from the strong M-m principle that $0 < u(x) < M$ at every $x \in U$. ■
Note that lemma 6.4 asserts that if a harmonic function that is non-negative on the boundary of its domain is positive at some point of the boundary, then it must be positive at every point inside the domain; i.e., a local stimulus applied to the "skin" of the body produces a global response felt everywhere inside the body. This could be referred to as the organic behavior of harmonic functions. This mathematical behavior is related to the fact that Laplace's equation models physical systems that are in a state of equilibrium. If the boundary state of a system in equilibrium is disturbed, even if the disturbance is very local, then the system must readjust itself at each point inside the boundary to achieve a new state of equilibrium. This is the physical interpretation of "organic behavior".
Lemma 6.5: For $F \in C(\bar U)$ and $g \in C(\partial U)$, suppose $u \in C(\bar U)\cap C^2(U)$ satisfies

$$-\nabla^2 u(x) = F(x), \quad x \in U, \qquad u(x) = g(x), \quad x \in \partial U.$$

Then

$$\max_{x\in\bar U}|u(x)| \le C_g + MC_F,$$

where $C_g = \max_{x\in\partial U}|g(x)|$, $C_F = \max_{x\in\bar U}|F(x)|$, and $M$ is a constant depending on $U$.

Proof: The estimate asserts that $-(C_g + MC_F) \le u(x) \le C_g + MC_F$ for $x \in \bar U$. First, let

$$v(x) = u(x) + |x|^2\frac{C_F}{2n}.$$

Then

$$-\nabla^2 v(x) = -\nabla^2 u(x) - C_F = F(x) - C_F \le 0 \quad\text{in } U,$$

and, by theorem 5.2(a),

$$v(x) \le \max_{x\in\partial U}\left(u(x) + |x|^2\frac{C_F}{2n}\right) \quad\text{for } x \in \bar U.$$

Since $U$ is bounded, there exists some $R > 0$ such that $|x|^2 \le R^2$ for $x \in U$. Then

$$v(x) \le C_g + R^2\frac{C_F}{2n}, \quad\text{and so}\quad u(x) \le v(x) \le C_g + MC_F \quad\text{for } x \in \bar U,$$

with the constant $M = R^2/(2n)$. Similarly, let

$$w(x) = u(x) - |x|^2\frac{C_F}{2n}$$

and show that $u(x) \ge w(x) \ge -(C_g + MC_F)$ for $x \in \bar U$. ■
If we define a mapping $S : C(\bar U)\times C(\partial U) \to C(\bar U)\cap C^2(U)$ that associates the data pair $(F,g)$ for the boundary value problem of lemma 6.5 to the solution $u(x)$, then we would write $u = S(F,g)$. Evidently, lemma 6.5 asserts that the mapping $S$ is continuous. To make this statement precise, we must explain how to measure distance between data pairs $(F_1,g_1)$, $(F_2,g_2)$ in the data space $C(\bar U)\times C(\partial U)$ and between solutions $u_1, u_2$ in the solution space $C(\bar U)$. Although we know that the solutions belong to the space $C(\bar U)\cap C^2(U)$, this is a subspace of the larger space $C(\bar U)$, so we are entitled to view the solutions as belonging to this larger space. We are using the term "space" to mean a linear space of functions; that is, a set that is closed under the operation of forming linear combinations.
Define the distance between $u_1, u_2$ in the solution space $C(\bar U)$ as follows:

$$\|u_1 - u_2\|_{C(\bar U)} = \max_{x\in\bar U}|u_1(x) - u_2(x)|.$$

Similarly, define the distance from $(F_1,g_1)$ to $(F_2,g_2)$ in the data space $C(\bar U)\times C(\partial U)$ by

$$\|(F_1,g_1) - (F_2,g_2)\|_{C(\bar U)\times C(\partial U)} = \max_{x\in\bar U}|F_1(x) - F_2(x)| + \max_{x\in\partial U}|g_1(x) - g_2(x)|.$$

Each of these "distance functions" defines what is called a norm on the linear space where it has been defined. In order to be called a norm, the function has to satisfy the following conditions:

i) $\|\alpha u\| = |\alpha|\,\|u\|$ for all scalars $\alpha$ and for all functions $u$

ii) $\|u + v\| \le \|u\| + \|v\|$ for all functions $u, v$

iii) $\|u\| \ge 0$ for all $u$, and $\|u\| = 0$ if and only if $u = 0$.

One can check that the distance functions defined above both satisfy all three of these conditions, and they therefore qualify as norms on the spaces where they have been defined. Now the estimate of lemma 6.5 asserts that if $u_j$ solves the boundary value problem with data $(F_j,g_j)$, $j = 1,2$, then

$$\max_{x\in\bar U}|u_1(x) - u_2(x)| \le \max_{x\in\partial U}|g_1(x) - g_2(x)| + M\max_{x\in\bar U}|F_1(x) - F_2(x)|;$$

i.e.,

$$\|u_1 - u_2\|_{C(\bar U)} \le \max(1,M)\,\|(F_1,g_1) - (F_2,g_2)\|_{C(\bar U)\times C(\partial U)}.$$

Evidently, if the data pairs are close in the data space, then the solutions are correspondingly close in the solution space. This is what is meant by continuous dependence of the solution on the data. Note that if we were to change the definition of the norm in one or the other (or both) of the spaces, the solution might no longer depend continuously on the data. Consider the solution for the following boundary value problem:
$$\nabla^2 u(x,y) = 0 \quad\text{for } 0 < x < \pi,\ y > 0,$$
$$u(x,0) = 0, \qquad \partial_y u(x,0) = g(x) = \tfrac{1}{n}\sin nx, \quad 0 < x < \pi,$$
$$u(0,y) = u(\pi,y) = 0, \quad y > 0.$$

For any integer $n$, the solution is given by

$$u(x,y) = \frac{1}{n^2}\sin nx\,\sinh ny.$$

Evidently, the distance between $g$ and zero in the data space is

$$\|g - 0\|_C = \max_x\left|\tfrac{1}{n}\sin nx\right| \le \tfrac{1}{n},$$

while the distance between $u(x,y)$ and zero in the solution space is

$$\|u - 0\|_C = \max_{x,y}\left|\tfrac{1}{n^2}\sin nx\,\sinh ny\right| \approx \frac{e^{ny}}{2n^2} \quad\text{at height } y.$$

As $n \to \infty$ the data tends uniformly to zero, while at any fixed $y > 0$ the solution grows without bound; with these norms, the solution of this (Cauchy) problem does not depend continuously on the data.
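The divergence in this example is easy to tabulate. The sketch below (an assumed illustration, not from the notes) prints the data size $1/n$ against the solution size $\sinh(ny)/n^2$ at the fixed height $y = 1$.

```python
import math

def data_size(n):
    """Sup norm of the data g_n(x) = sin(n x)/n."""
    return 1.0 / n

def solution_size(n, y=1.0):
    """Size of u_n(x, y) = sin(n x) sinh(n y)/n^2 at height y,
    attained where sin(n x) = 1."""
    return math.sinh(n * y) / n ** 2

for n in (1, 5, 10, 20):
    print(n, data_size(n), solution_size(n))
```

The data column shrinks while the solution column explodes, which is exactly the failure of continuous dependence described above.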
7. Uniqueness from Integral Identities

Integral identities can be used to prove that various boundary value problems cannot have more than one solution. For example, consider the following boundary value problem:
$$\nabla^2 u(x) = F(x), \quad x \in U, \qquad \partial_N u(x) = g(x), \quad x \in \partial U.$$

This is known as the Neumann boundary value problem for Poisson's equation. Green's first identity leads to

$$\int_U F(x)\,dx = \int_U \nabla^2 u(x)\,dx = \int_{\partial U}\partial_N u(x)\,dS(x) = \int_{\partial U} g(x)\,dS(x).$$

Then a necessary condition for the existence of a solution to this problem is that the data $(F,g)$ satisfies

$$\int_U F(x)\,dx = \int_{\partial U} g(x)\,dS(x).$$
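The compatibility condition can be checked on a concrete case (an assumed example, not from the notes): on the unit disk take $u = x^2 + y^2$, so $F = \nabla^2 u = 4$ and $g = \partial_N u = \partial_r(r^2) = 2$ on the unit circle. Both sides of the condition come out to $4\pi$.

```python
import math

def disk_integral(F, n=400):
    """Midpoint-rule integral of F(x, y) over the unit disk, in polar form."""
    total = 0.0
    for i in range(n):
        r = (i + 0.5) / n
        for j in range(n):
            t = 2 * math.pi * (j + 0.5) / n
            total += F(r * math.cos(t), r * math.sin(t)) * r * (2 * math.pi / n) * (1.0 / n)
    return total

def circle_integral(g, n=4000):
    """Integral of g over the unit circle."""
    return sum(g(math.cos(2 * math.pi * (j + 0.5) / n),
                 math.sin(2 * math.pi * (j + 0.5) / n))
               for j in range(n)) * (2 * math.pi / n)

lhs = disk_integral(lambda x, y: 4.0)    # integral of F over U
rhs = circle_integral(lambda x, y: 2.0)  # integral of g over the boundary
print(lhs, rhs)                          # both ~ 4*pi
```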
If this condition is satisfied, and if $u_1, u_2$ denote two solutions to the problem, then $w = u_1 - u_2$ satisfies the problem with $F = g = 0$. Then we have
$$0 = \int_U w\nabla^2 w\,dx = \int_{\partial U} w\,\partial_N w\,dS(x) - \int_U \nabla w\cdot\nabla w\,dx = -\int_U |\nabla w|^2\,dx.$$

But this implies that $|\nabla w| = 0$, which is to say, $w$ is constant in $U$. Then the solutions to this boundary value problem may differ by a constant; they are not unique. We should point out that in order for the equation and the boundary condition to have meaning in the classical sense, we must assume that the solutions to this problem belong to the class $C^1(\bar U)\cap C^2(U)$.
On the other hand, consider the problem,
$$\nabla^2 u(x) = F(x), \quad x \in U, \qquad u(x) = g_1(x), \quad x \in \partial U_1, \qquad \partial_N u(x) = g_2(x), \quad x \in \partial U_2,$$

where $\partial U$ is composed of two distinct pieces, $\partial U_1$ and $\partial U_2$. Now if $u_1, u_2$ denote two solutions to the problem, and $w = u_1 - u_2$, then we have, as before,

$$0 = \int_U w\nabla^2 w\,dx = \int_{\partial U} w\,\partial_N w\,dS(x) - \int_U |\nabla w|^2\,dx = \int_{\partial U_1} w\,\partial_N w\,dS(x) + \int_{\partial U_2} w\,\partial_N w\,dS(x) - \int_U |\nabla w|^2\,dx.$$

In this case, $w = 0$ on $\partial U_1$ and $\partial_N w = 0$ on $\partial U_2$, so we again reach the conclusion that $w$ is constant in $U$. Since $w \in C^1(\bar U)\cap C^2(U)$, it follows that if $w = 0$ on $\partial U_1$, then $w = 0$ on $\bar U$. Then the solution to this problem is unique.
Finally, consider the Dirichlet problem for the so-called Helmholtz equation,

$$-\nabla^2 u(x) + c(x)u(x) = F(x), \quad x \in U, \qquad u(x) = g(x), \quad x \in \partial U,$$

where we suppose that $c(x) \ge C_0 > 0$ for $x \in U$. We can use integral identities to show that this problem has at most one smooth solution. As usual, we begin by supposing the problem has two solutions and we let $w(x)$ denote their difference. Then

$$-\nabla^2 w(x) + c(x)w(x) = 0, \quad x \in U, \qquad w(x) = 0, \quad x \in \partial U,$$

and

$$0 = \int_U w(x)\left(-\nabla^2 w(x) + c(x)w(x)\right)dx = -\int_{\partial U} w\,\partial_N w\,dS(x) + \int_U \nabla w\cdot\nabla w\,dx + \int_U c(x)w(x)^2\,dx.$$

Since $w = 0$ on $\partial U$, it follows that

$$0 = \int_U\left(|\nabla w|^2 + c(x)w(x)^2\right)dx \ge C_0\int_U w(x)^2\,dx,$$

and this implies that $w(x)$ vanishes at every point of $\bar U$. Notice that this proof of uniqueness doesn't work if we don't know that the coefficient $c(x)$ is strictly positive. (How would the proof have to be modified if we knew only that $c(x) \ge 0$?)
Problem 9: Prove that the following problem has at most one smooth solution:

$$-\nabla^2 u(x) = F(x), \quad x \in U, \qquad u(x) = g(x), \quad x \in \partial U.$$

Use first the Green's identity approach and then use the result in lemma 6.5. Note that this result was already established by means of the M-m principle.

Problem 10: Prove that the following problem has at most one smooth solution:

$$-\nabla^2 u(x) = F(x) \text{ in } U, \qquad u(x) + \partial_N u(x) = g(x) \text{ on } \partial U.$$
Eigenvalues for the Laplacian

The eigenvalues for the Dirichlet problem for the Laplace operator are any scalars $\lambda$ for which there exist nontrivial solutions to the Dirichlet boundary value problem

$$-\nabla^2 u(x) = \lambda u(x), \quad x \in U, \qquad u(x) = 0, \quad x \in \partial U.$$

Note that if $u(x) = 0$, then any choice of $\lambda$ will satisfy the conditions of the problem. Therefore we allow only nontrivial solutions, and we refer to these as eigenfunctions. If $u(x)$ is an eigenfunction for this problem corresponding to an eigenvalue $\lambda$, then

$$\lambda\int_U u(x)^2\,dx = -\int_U u(x)\nabla^2 u(x)\,dx = -\int_{\partial U} u\,\partial_N u\,dS(x) + \int_U |\nabla u|^2\,dx.$$

Then, since $u = 0$ on $\partial U$, $\lambda$ satisfies

$$\lambda = \frac{\displaystyle\int_U |\nabla u|^2\,dx}{\displaystyle\int_U u(x)^2\,dx} > 0.$$

Note that $|\nabla u| \not\equiv 0$, since this would lead to $u = 0$, which is not allowed if $u$ is an eigenfunction. We have shown that all eigenvalues of the Dirichlet problem for the Laplace operator are strictly positive.
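The Rayleigh quotient above can be evaluated numerically in the one-dimensional case (a sketch under assumed data, not from the notes): for the Dirichlet eigenfunctions $u_k(x) = \sin kx$ on $(0,\pi)$, the quotient $\int |u'|^2 / \int u^2$ recovers the eigenvalue $k^2$, which is indeed strictly positive.

```python
import math

def rayleigh_quotient(k, n=20000):
    """Midpoint-rule approximation of (integral of u'^2)/(integral of u^2)
    for u(x) = sin(k x) on (0, pi)."""
    h = math.pi / n
    num = sum((k * math.cos(k * (i + 0.5) * h)) ** 2 for i in range(n)) * h
    den = sum(math.sin(k * (i + 0.5) * h) ** 2 for i in range(n)) * h
    return num / den

for k in (1, 2, 3):
    print(k, rayleigh_quotient(k))   # ~ k^2, always positive
```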
Problem 11: Show that the Neumann problem

$$-\nabla^2 u(x) = \lambda u(x), \quad x \in U, \qquad \partial_N u(x) = 0, \quad x \in \partial U,$$

has a zero eigenvalue, with corresponding eigenfunction $u(x) = $ constant.

Problem 12: Under what conditions on the function $\alpha(x)$ does the boundary value problem

$$-\nabla^2 u(x) = \lambda u(x), \quad x \in U, \qquad \alpha(x)u(x) + \partial_N u(x) = 0, \quad x \in \partial U,$$

have only positive eigenvalues?

Problem 13: Show that for each of the eigenvalue problems considered here, if $u(x)$ is an eigenfunction corresponding to an eigenvalue $\lambda$, then for any nonzero constant $k$, $v(x) = k\,u(x)$ is also an eigenfunction corresponding to the eigenvalue $\lambda$.
8. Fundamental Solutions for the Laplacian

Let $\delta(x)$ denote the "function" with the property that for any continuous function $f(x)$,

$$\int_{R^n}\delta(x)f(x)\,dx = f(0), \quad\text{or, equivalently,}\quad \int_{R^n}\delta(x - y)f(y)\,dy = f(x).$$

Of course this is a purely formal definition, since there is no function $\delta(x)$ which could have this property. Later, we will see that $\delta(x)$ can be given a rigorous, consistent meaning in the context of generalized functions. However, using the delta in this formal way, we can give a formal definition of a fundamental solution for the negative Laplacian as the solution of

$$-\nabla_x^2 E(x - y) = \delta(x - y), \quad x, y \in R^n. \tag{8.1}$$

Formally, this definition implies

$$-\nabla_x^2\int_{R^n} E(x - y)f(y)\,dy = \int_{R^n}\delta(x - y)f(y)\,dy = f(x).$$

Then the solution of the equation $-\nabla^2 u(x) = f(x)$, $x \in R^n$, is given by

$$u(x) = \int_{R^n} E(x - y)f(y)\,dy. \tag{8.2}$$

Although these steps are only formal, they can be made rigorous. Note that since there are no side conditions imposed on $E(x)$ or on $u(x)$, neither of these functions is unique. For example, any harmonic function could be added to either of them and the resulting function would still satisfy the same equation.
Since $\delta(x)$ and $\nabla^2$ are both radially symmetric, it seems reasonable to assume that $E(x)$ is radially symmetric as well; i.e., $E(x) = E(r)$ for $r = \sqrt{x_1^2 + \dots + x_n^2}$. Then a definition for $E(x)$ which does not make use of $\delta(x)$ can be stated as follows:

$E_n(x)$ is a fundamental solution for $-\nabla^2$ on $R^n$ if:

i) $E_n(r) \in C^2(R^n\setminus\{0\})$

ii) $\nabla^2 E_n(r) = 0$ for $r > 0$  (8.3)

iii) $\lim_{\varepsilon\to 0}\displaystyle\int_{\partial B_\varepsilon(0)}\partial_N E_n(x)\,dS(x) = -1$

The properties i) and ii) in the definition imply that

$$\nabla^2 E_n(r) = E_n''(r) + \frac{n-1}{r}E_n'(r) = 0 \quad\text{for } r > 0;$$

i.e.,

$$E_n''(r)/E_n'(r) = -(n-1)/r,$$
$$\log E_n'(r) = -(n-1)\log r + C,$$
$$E_n'(r) = C\,r^{1-n},$$

$$E_n(r) = \begin{cases} C_2\log r & \text{if } n = 2,\\ C_n\,r^{2-n} & \text{if } n > 2. \end{cases}$$
The constant $C_n$ can be determined from part iii) of the definition. It is this part of the definition that causes $-\nabla^2 E_n(x)$ to behave like $\delta(x)$. For $n = 2$ we have

$$\int_{\partial B_\varepsilon(0)}\partial_N E_2(x)\,dS(x) = \int_0^{2\pi}\partial_r\left(C_2\log r\right)\Big|_{r=\varepsilon}\,\varepsilon\,d\theta = C_2\int_0^{2\pi}d\theta = 2\pi C_2.$$

Then

$$\lim_{\varepsilon\to 0}\int_{\partial B_\varepsilon(0)}\partial_N E_2(x)\,dS(x) = 2\pi C_2 = -1,$$

so $C_2 = -1/2\pi$ and $E_2(r) = -\frac{1}{2\pi}\log r$. When $n = 3$ we have

$$\int_{\partial B_\varepsilon(0)}\partial_N E_3(x)\,dS(x) = \int_{\partial B_\varepsilon(0)}\partial_r\left(\frac{C_3}{r}\right)\Big|_{r=\varepsilon}dS = -\frac{C_3}{\varepsilon^2}\cdot 4\pi\varepsilon^2 = -4\pi C_3.$$

Then

$$\lim_{\varepsilon\to 0}\int_{\partial B_\varepsilon(0)}\partial_N E_3(x)\,dS(x) = -4\pi C_3 = -1,$$

so $C_3 = 1/4\pi$ and $E_3(r) = 1/(4\pi r)$.
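The normalization iii) can also be confirmed by brute-force quadrature. The following sketch (not part of the notes) numerically integrates the outward normal derivative of $E_2 = -\frac{1}{2\pi}\log r$ over circles of two different radii; the flux is $-1$ in both cases.

```python
import math

E2 = lambda x, y: -math.log(math.hypot(x, y)) / (2 * math.pi)

def flux_through_circle(E, eps, n=2000, h=1e-7):
    """Integrate the outward normal derivative of E over the circle of
    radius eps about the origin, using central differences for dE/dn."""
    total = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        nx, ny = math.cos(t), math.sin(t)        # outward unit normal
        x, y = eps * nx, eps * ny
        dE = (E(x + h * nx, y + h * ny) - E(x - h * nx, y - h * ny)) / (2 * h)
        total += dE * (2 * math.pi * eps / n)    # dS element
    return total

print(flux_through_circle(E2, 0.01))   # ~ -1
print(flux_through_circle(E2, 0.5))    # ~ -1, independent of the radius
```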
We will now show that condition (8.3 iii) really does produce the $\delta$ behavior for $-\nabla^2 E_n$. Of course we can't try to show that $-\nabla^2 E_n = \delta(x)$, since we are not allowed to refer to $\delta(x)$. Instead, we will show, equivalently, that $-\nabla^2 u(x) = f(x)$ for $u$ given by (8.2). Here, we suppose that $f(x)$ is continuous, together with all its derivatives of order less than or equal to 2, and we suppose further that $f(x)$ has compact support; i.e., for some positive $K$, $f(x)$ vanishes for $|x| > K$. The notation for this class of functions is $C_c^2(R^n)$.
Theorem 8.1: Let $E_n(r)$ denote a fundamental solution for $-\nabla^2$ on $R^n$. Then, for any $f \in C_c^2(R^n)$,

$$u(x) = \int_{R^n} E_n(x - y)f(y)\,dy$$

satisfies $u \in C^2(R^n)$ and $-\nabla^2 u(x) = f(x)$ for every $x \in R^n$.

Proof: The smoothness of $f$ implies the smoothness of $u$; i.e., for $i = 1, 2, \dots, n$,

$$\frac{\partial u}{\partial x_i} = \lim_{h\to 0}\frac{u(x + h\vec e_i) - u(x)}{h} = \lim_{h\to 0}\int_{R^n} E(z)\,\frac{f(x + h\vec e_i - z) - f(x - z)}{h}\,dz.$$

Now $\dfrac{f(x + h\vec e_i - z) - f(x - z)}{h}$ converges uniformly to $\partial f/\partial x_i$, and it follows that for each $i$,

$$\frac{\partial u}{\partial x_i} = \int_{R^n} E(z)\,\frac{\partial f}{\partial x_i}(x - z)\,dz.$$

Similarly, $\partial^2 u(x)/\partial x_i\partial x_j$ exists for each $i$ and $j$, since the corresponding derivatives of $f$ all exist. To show the second assertion, write

$$-\nabla_x^2 u(x) = -\int_{R^n} E_n(z)\,\nabla_x^2 f(x - z)\,dz = -\int_{R^n} E_n(z)\,\nabla_z^2 f(x - z)\,dz.$$
Since $E_n(z)$ tends to infinity as $|z|$ tends to zero, we treat this as an improper integral:

$$\int_{R^n} E_n(z)\nabla_z^2 f(x - z)\,dz = \int_{B_\varepsilon(0)} E_n(z)\nabla_z^2 f(x - z)\,dz + \int_{R^n\setminus B_\varepsilon(0)} E_n(z)\nabla_z^2 f(x - z)\,dz.$$

First, note that

$$\left|\int_{B_\varepsilon(0)} E_n(z)\nabla_z^2 f(x - z)\,dz\right| \le \max_{B_\varepsilon(0)}\left|\nabla_z^2 f(x - z)\right|\int_{B_\varepsilon(0)}|E_n(z)|\,dz.$$

But

$$\int_{B_\varepsilon(0)}|E_n(z)|\,dz = \begin{cases}\displaystyle\frac{1}{2\pi}\int_0^{2\pi}\!\!\int_0^\varepsilon|\log r|\,r\,dr\,d\theta = C\,\varepsilon^2|\log\varepsilon| & \text{if } n = 2,\\[8pt] \displaystyle C_n\int_0^\varepsilon\!\!\int_\omega r^{2-n}\,r^{n-1}\,dr\,d\omega = C\,\varepsilon^2 & \text{if } n > 2,\end{cases}$$

hence

$$\lim_{\varepsilon\to 0}\int_{B_\varepsilon(0)} E_n(z)\nabla_z^2 f(x - z)\,dz = 0.$$
Next,

$$\int_{R^n\setminus B_\varepsilon(0)} E_n(z)\nabla_z^2 f(x - z)\,dz = \int_{\partial(R^n\setminus B_\varepsilon(0))} E_n(z)\,\partial_N f(x - z)\,dS(z) - \int_{R^n\setminus B_\varepsilon(0)}\nabla E_n(z)\cdot\nabla_z f(x - z)\,dz,$$

and

$$\left|\int_{-\partial B_\varepsilon(0)} E_n(z)\,\partial_N f(x - z)\,dS(z)\right| \le \max_{z\in\partial B_\varepsilon(0)}|\partial_N f(x - z)|\int_{\partial B_\varepsilon(0)}|E_n(z)|\,dS(z) \le \begin{cases} C\,\varepsilon|\log\varepsilon| & \text{if } n = 2,\\ C\,\varepsilon & \text{if } n > 2,\end{cases}$$

which tends to zero with $\varepsilon$. We used the fact that $\partial(R^n\setminus B_\varepsilon(0)) = -\partial B_\varepsilon(0)$. Finally, since $E_n(z)$ is harmonic in $R^n\setminus B_\varepsilon(0)$,

$$\int_{R^n\setminus B_\varepsilon(0)}\nabla E_n(z)\cdot\nabla_z f(x - z)\,dz = \int_{-\partial B_\varepsilon(0)}\partial_N E_n(z)\,f(x - z)\,dS(z) - \int_{R^n\setminus B_\varepsilon(0)}\nabla^2 E_n(z)\,f(x - z)\,dz$$

$$= \int_{-\partial B_\varepsilon(0)}\partial_N E_n(z)\,f(x - z)\,dS(z).$$

Now we can write

$$\int_{-\partial B_\varepsilon(0)}\partial_N E_n(z)f(x - z)\,dS(z) = \int_{-\partial B_\varepsilon(0)}\partial_N E_n(z)\left(f(x - z) - f(x)\right)dS(z) + f(x)\int_{-\partial B_\varepsilon(0)}\partial_N E_n(z)\,dS(z),$$

and note that because $f(x)$ is continuous,

$$\left|\int_{-\partial B_\varepsilon(0)}\partial_N E_n(z)\left(f(x - z) - f(x)\right)dS(z)\right| \le C\max_{z\in\partial B_\varepsilon(0)}|f(x - z) - f(x)| \to 0 \quad\text{as } \varepsilon\to 0.$$

In addition,

$$\int_{-\partial B_\varepsilon(0)}\partial_N E_n(z)\,dS(z) = -\int_{\partial B_\varepsilon(0)}\partial_N E_n(z)\,dS(z) \to 1 \quad\text{as } \varepsilon\to 0$$

because of (8.3 iii), and then it follows that

$$-\nabla^2 u(x) = \lim_{\varepsilon\to 0}\int_{-\partial B_\varepsilon(0)}\partial_N E_n(z)f(x - z)\,dS(z) = f(x) \quad\text{for all } x \in R^n.\ \blacksquare$$
We remark again that since no side conditions have been imposed on $u(x)$, this solution is not unique. Any harmonic function could be added to $u(x)$ and the sum would also satisfy $-\nabla^2 u(x) = f(x)$.
9. Green's Functions for the Laplacian

Throughout this section, $U$ is assumed to be a bounded, open, connected set in $R^n$ whose boundary $\partial U$ is sufficiently smooth that the divergence theorem holds. Consider the Dirichlet boundary value problem for Poisson's equation:

$$-\nabla^2 u(x) = F(x) \text{ for } x \in U, \qquad u(x) = g(x) \text{ for } x \in \partial U. \tag{9.1}$$

We know that

$$u(x) = \int_{R^n} E_n(x - y)F(y)\,dy$$

satisfies the partial differential equation, but this function does not, in general, satisfy the Dirichlet boundary condition. In order to find a function which satisfies both the equation and the boundary condition, recall that for smooth functions $u(x)$ and $v(x)$,
2 2 = ∫ vy∇y uy − uy∇y vy dy ∫ vy∂N uy − uy∂N vy dS y 9.2 U ∂U
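In one dimension, with U = (0,1), identity (9.2) reduces to ∫₀¹ (v u″ − u v″) dx = (v u′ − u v′)|₀¹. The following sketch checks this numerically for one concrete pair u, v (the particular functions and the midpoint quadrature are arbitrary illustrative choices, not from the notes):

```python
import math

# 1-D check of Green's identity (9.2) on U = (0,1) with u = sin(pi x), v = x^3.
u   = lambda x: math.sin(math.pi * x)
du  = lambda x: math.pi * math.cos(math.pi * x)
d2u = lambda x: -math.pi**2 * math.sin(math.pi * x)
v   = lambda x: x**3
dv  = lambda x: 3 * x**2
d2v = lambda x: 6 * x

# Midpoint-rule approximation of the volume integral
n = 20000
h = 1.0 / n
lhs = sum((v(x) * d2u(x) - u(x) * d2v(x)) * h for x in ((i + 0.5) * h for i in range(n)))

# Boundary terms: (v u' - u v') evaluated at 1 minus at 0
rhs = (v(1) * du(1) - u(1) * dv(1)) - (v(0) * du(0) - u(0) * dv(0))
print(abs(lhs - rhs))  # small: the two sides agree
```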
For x in U fixed but arbitrary, let v(y) = E_n(x−y) − φ(y) in (9.2), where φ denotes a yet to be specified function that is harmonic in U. Then since E_n(x−y) is a fundamental solution and φ is harmonic in U,

∫_U u(y)∇²_y v(y) dy = ∫_U u(y)[∇²_y E_n(x−y) − 0] dy = −u(x).

Since u(x) solves the Dirichlet problem, (9.2) now becomes
u(x) = −∫_U v(y)∇²_y u(y) dy + ∫_{∂U} [v(y)∂_N u(y) − u(y)∂_N v(y)] dS(y)

= ∫_U v(y)F(y) dy − ∫_{∂U} g(y)∂_N v(y) dS(y) + ∫_{∂U} v(y)∂_N u(y) dS(y).
If the values of ∂_N u(y) were known on ∂U, then this would be an expression for the solution u(x) in terms of the data in the problem. Since ∂_N u(y) on the boundary is not given, we instead choose the harmonic function φ in such a way as to make the integral containing this term vanish. Let φ be the solution of the following Dirichlet problem,
∇²_y φ(y) = 0 for y ∈ U, φ(y) = E_n(x−y) for y ∈ ∂U,

where we recall that x denotes some fixed but arbitrary point in U. Then v(y) = E_n(x−y) − φ(y) = 0 on the boundary, and the previous expression for u(x) reduces to

u(x) = ∫_U G(x,y)F(y) dy − ∫_{∂U} ∂_N G(x,y)g(y) dS(y), (9.3)

where G(x,y) = E_n(x−y) − φ(y). Formally, G(x,y) solves

−∇²_y G(x,y) = −∇²_y E_n(x−y) − 0 = δ(x−y) for x,y ∈ U, (9.4)
G(x,y) = 0 for x ∈ U, y ∈ ∂U,

and G(x,y) is known as the Green's function for the Dirichlet problem for the Laplacian, or, alternatively, as the Green's function of the first kind. Note that if there are two Green's functions, then their difference satisfies a completely homogeneous Dirichlet problem. This would seem to imply uniqueness for the Green's function, except for the fact that the uniqueness proofs were for the class of functions C²(U) ∩ C(Ū), and it is not known that G(x,y) is in this class. This point will be cleared up later.
It can be shown rigorously that G(x,y) = G(y,x) for all x,y ∈ U. However, a formal demonstration based on (9.4) proceeds as follows. For x,z ∈ U (be careful to note that x and z are fixed points in U while y is the variable of integration), apply (9.2) with u(y) = G(y,z) and v(y) = G(y,x):

∫_U [u(y)∇²_y v(y) − v(y)∇²_y u(y)] dy = −∫_U [G(y,z)δ(y−x) − G(y,x)δ(y−z)] dy,

∫_{∂U} [u(y)∂_N v(y) − v(y)∂_N u(y)] dS(y) = 0.

The boundary integral vanishes because G(y,z) = G(y,x) = 0 for y ∈ ∂U. Then (9.2) implies

0 = ∫_U [G(y,z)δ(y−x) − G(y,x)δ(y−z)] dy = G(x,z) − G(z,x) for all x,z ∈ U.

This proof will become rigorous when we have developed the generalized function framework in which this argument has meaning.
Example 9.1 Let U = {(x1,x2) ∈ R² : x2 > 0}. The half space is the simplest example of a set having a boundary (i.e., the boundary of the half space is the x1-axis, x2 = 0), and we will be able to construct the Green's function of the first kind for this simple set. Note that the half space is not a bounded set (having a boundary is not the same as being bounded!). Since n = 2, we write
E(x−y) = −(1/2π) log |x−y| = −(1/4π) log [(x1−y1)² + (x2−y2)²].

For x = (x1,x2) ∈ U, let x* = (x1,−x2) and let

v(y) = −(1/2π) log |x*−y| = −(1/4π) log [(x1−y1)² + (x2+y2)²].

Then v = v(y) is a harmonic function of y for y ∈ U. Moreover, v reduces to v(y) = E(x−y) for y ∈ ∂U; i.e., v(y1,0) = E((x1,x2)−(y1,0)). Then

G(x,y) = −(1/2π) [log |x−y| − log |x*−y|] = −(1/2π) log (|x−y|/|x*−y|)

= −(1/4π) log [((x1−y1)² + (x2−y2)²)/((x1−y1)² + (x2+y2)²)]. (9.5)

Note that G(x,y) = 0 for y ∈ ∂U; i.e., G((x1,x2),(y1,0)) = 0. It is clear from the construction that for each fixed x = (x1,x2) ∈ U, G(x,y) is a harmonic function of y for y ∈ U, y ≠ x.
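Two properties claimed above, the symmetry G(x,y) = G(y,x) and the vanishing of G on the boundary x2 = 0, can be checked directly from (9.5). A small numerical sketch (the sample points are arbitrary):

```python
import math

def G(x, y):
    # Half-plane Green's function (9.5), built by the method of images with x* = (x1, -x2)
    x1, x2 = x
    y1, y2 = y
    num = (x1 - y1)**2 + (x2 - y2)**2
    den = (x1 - y1)**2 + (x2 + y2)**2
    return -math.log(num / den) / (4 * math.pi)

x, y = (0.5, 1.0), (-1.2, 2.5)
print(abs(G(x, y) - G(y, x)))   # symmetry: G(x,y) = G(y,x)
print(abs(G(x, (3.0, 0.0))))    # vanishes for y on the boundary x2 = 0
```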
Problem 14 Show that for the half-space U = {(x1,x2) ∈ R² : x2 > 0},

∂_N G(x,y)|_{y∈∂U} = −(1/π) x2/[(x1−y1)² + x2²],

so that

u(x1,x2) = (1/π) ∫_{−∞}^{∞} x2/[(x1−y1)² + x2²] g(y1) dy1

solves ∇²u(x1,x2) = 0 in U, and u(x1,0) = g(x1), x1 ∈ R.
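The kernel in Problem 14 can be tested numerically. With g(y) = cos y, the half-plane Poisson integral reproduces the bounded harmonic function e^{−x2} cos x1 (a standard identity, easily checked by contour integration). The sketch below truncates the line integral to [−L, L] and uses the trapezoid rule; the truncation length and step size are our own arbitrary choices:

```python
import math

def poisson_half_plane(g, x1, x2, L=100.0, n=200001):
    # Trapezoid-rule approximation of (1/pi) * int_{-L}^{L} x2 g(y) / ((x1-y)^2 + x2^2) dy
    h = 2 * L / (n - 1)
    total = 0.0
    for i in range(n):
        y = -L + i * h
        w = 0.5 if i in (0, n - 1) else 1.0   # trapezoid endpoint weights
        total += w * x2 * g(y) / ((x1 - y)**2 + x2**2)
    return total * h / math.pi

# With g(y) = cos y, the Poisson integral equals e^{-x2} cos(x1)
x1, x2 = 0.7, 1.5
approx = poisson_half_plane(math.cos, x1, x2)
exact = math.exp(-x2) * math.cos(x1)
print(abs(approx - exact))  # small (truncation + quadrature error only)
```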
Example 9.2 Let U = {(r,θ) : 0 < r < R, |θ| < π} = D_R(0). Suppose u = u(r,θ) satisfies

−∇²u(r,θ) = 0 in U, and u(R,θ) = g(θ) on ∂U = {(r,θ) : r = R, |θ| < π}.

In an elementary course on PDEs we would show that, for all choices of the constants a_n, b_n,

u(r,θ) = (1/2)a₀ + Σ_{n=1}^∞ rⁿ(a_n cos nθ + b_n sin nθ)

solves Laplace's equation in the disc U. Moreover, the boundary condition is satisfied if

u(R,θ) = (1/2)a₀ + Σ_{n=1}^∞ Rⁿ(a_n cos nθ + b_n sin nθ) = g(θ). (9.6)

Then we would appeal to the theory of Fourier series, which asserts that any continuous g can be expressed as

g(θ) = (1/2)A₀ + Σ_{n=1}^∞ (A_n cos nθ + B_n sin nθ), (9.7)

where

A_n = (1/π) ∫_{−π}^{π} g(s) cos ns ds, B_n = (1/π) ∫_{−π}^{π} g(s) sin ns ds.
Then, comparing (9.6) with (9.7), it follows that Rⁿa_n = A_n, Rⁿb_n = B_n, and so

u(r,θ) = (1/2)A₀ + Σ_{n=1}^∞ (r/R)ⁿ(A_n cos nθ + B_n sin nθ)

satisfies both the PDE and the boundary condition. By uniqueness, this must be the solution of the boundary value problem. If we write

A_n cos nθ + B_n sin nθ = (1/π) ∫_{−π}^{π} g(s) cos ns ds cos nθ + (1/π) ∫_{−π}^{π} g(s) sin ns ds sin nθ

= (1/π) ∫_{−π}^{π} g(s)[cos ns cos nθ + sin ns sin nθ] ds

= (1/π) ∫_{−π}^{π} g(s) cos n(θ−s) ds,

then u(r,θ) can be written as

u(r,θ) = (1/π) ∫_{−π}^{π} [1/2 + Σ_{n=1}^∞ (r/R)ⁿ cos n(θ−s)] g(s) ds

= (1/2π) ∫_{−π}^{π} (R² − r²)/(R² − 2Rr cos(θ−s) + r²) g(s) ds.

Here the series in n was summed by writing cos n(θ−s) in terms of exp(±in(θ−s)) and recognizing that the series is geometric. Then

u(r,θ) = (1/2π) ∫_{−π}^{π} (R² − r²)/(R² − 2Rr cos(θ−s) + r²) g(s) ds = −∫_{−π}^{π} ∂_N G(r,θ;R,s) g(s) ds,

where G(r,θ;R,s) denotes the Green's function for this problem, as in (9.3). This representation is often called the Poisson integral formula.
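A numerical check of the Poisson integral formula just derived: taking g(θ) = cos θ, the series solution gives u(r,θ) = (r/R) cos θ, so the kernel integral should match it. Since the integrand is 2π-periodic, a plain equally-spaced Riemann sum is already highly accurate; the radius R and the sample point below are arbitrary choices of ours:

```python
import math

def poisson_disc(g, r, theta, R=2.0, n=2000):
    # Equally spaced sum on [-pi, pi] approximating
    # (1/2pi) * int (R^2 - r^2) g(s) / (R^2 - 2 R r cos(theta - s) + r^2) ds
    h = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        s = -math.pi + i * h   # periodic integrand: trapezoid = Riemann sum here
        total += (R**2 - r**2) * g(s) / (R**2 - 2 * R * r * math.cos(theta - s) + r**2)
    return total * h / (2 * math.pi)

# With g(theta) = cos(theta), the series solution is u(r,theta) = (r/R) cos(theta)
r, theta, R = 1.2, 0.8, 2.0
approx = poisson_disc(math.cos, r, theta, R=R)
exact = (r / R) * math.cos(theta)
print(abs(approx - exact))  # essentially machine precision for periodic quadrature
```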
10. The Inverse Laplace Operator
We are all familiar with problems of the form Ax = f, where A denotes an n by n matrix and x, f denote vectors in the linear space Rⁿ. In this situation, A can be viewed as a linear operator from the linear space Rⁿ into Rⁿ. If the only solution of Ax = 0 is x = 0, then Ax = f has a unique solution x for every data vector f. This solution can be expressed as x = A⁻¹f, where A⁻¹ denotes the inverse of the matrix A. There are strong analogies between the problem Ax = f on Rⁿ and the problem (9.1). Consider problem (9.1) in the special case g = 0; i.e.,
−∇²u(x) = f(x), x ∈ U, u(x) = 0, x ∈ ∂U. (10.1)

Recall that we showed that the only solution of (9.1) when g = f = 0 is u = 0, so the solution to (10.1) is unique. In fact, the unique solution u = u(x) can be expressed in terms of the Green's function by

u(x) = ∫_U G(x,y)f(y) dy. (10.2)
If we define

(Kf)(x) = ∫_U G(x,y)f(y) dy for any f ∈ C(Ū),

then it is clear that K(C₁f₁ + C₂f₂) = C₁Kf₁ + C₂Kf₂ for all f₁,f₂ ∈ C(Ū), C₁,C₂ ∈ R. We say that K is a linear operator on the linear space C(Ū). We recall that to say that C(Ū) is a linear space is to say that for all f₁,f₂ ∈ C(Ū) and for all C₁,C₂ ∈ R, the linear combination C₁f₁ + C₂f₂ is also in C(Ū). The problem (10.1) can be expressed in operator notation. Define an operator L by

(Lu)(x) = −∇²u(x) for any u ∈ D = {u ∈ C²(Ū) : u(x) = 0 for x ∈ ∂U}.

Then for any u ∈ D it follows that (Lu)(x) ∈ C(Ū), so L can be viewed as a function defined on D with values in C(Ū). Since D is a subspace of C(Ū), we can even say that L is a function from C(Ū) into C(Ū), but we should note that L is not defined on all of C(Ū). It is also easy to check that L is a linear operator from D into C(Ū), and (10.1) can be expressed in terms of this linear operator as follows: find u ∈ D such that Lu = f ∈ C(Ū).
The uniqueness for (10.1), stated in operator terminology, becomes: Lu = 0 if and only if u = 0. Evidently, the operators K and L are related by

(a) (Kf)(x) ∈ D for any f ∈ C(Ū), and L(Kf)(x) = f(x);

(b) for any u ∈ D, (Lu)(x) ∈ C(Ū), and K(Lu)(x) = u(x).

These two statements together assert that K = L⁻¹; i.e., K is the operator inverse to L.
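The relation K = L⁻¹ has a finite-dimensional analogue that can be computed directly. A sketch under our own discretization choices: on U = (0,1) with zero boundary values, L becomes the standard second-difference matrix (1/h²)·tridiag(−1, 2, −1), and its inverse has entries h·G(x_i,x_j), where G(x,y) = min(x,y)(1 − max(x,y)) is the Green's function of −u″ = f, u(0) = u(1) = 0:

```python
import numpy as np

# Discrete analogue of L = -d^2/dx^2 on (0,1) with zero Dirichlet boundary values.
n = 50
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
L = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

# K plays the role of the integral operator with kernel G(x,y):
K = np.linalg.inv(L)

# Green's function of -u'' = f, u(0) = u(1) = 0: G(x,y) = min(x,y)(1 - max(x,y))
G = np.minimum.outer(x, x) * (1.0 - np.maximum.outer(x, x))

print(np.max(np.abs(K - h * G)))          # the discrete inverse reproduces h*G(x_i, x_j)
print(np.max(np.abs(K @ L - np.eye(n))))  # K L = I: K inverts L
```

Note that the factor h is exactly the quadrature weight in (Kf)(x_i) ≈ Σ_j G(x_i,x_j) f(x_j) h, so the matrix inverse is the discrete version of integration against the Green's function.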
If we use the notation 〈x,z〉 to denote the usual inner product between two vectors x,z, then 〈Ax,z〉 = 〈x,Aᵀz〉 for all x,z ∈ Rⁿ. Here Aᵀ denotes the matrix transpose of A. It is a fact from linear algebra that the dimension of the null space of the matrix A is equal to the dimension of the null space of the transpose matrix Aᵀ. If the null space of A has positive dimension, then the solution of Ax = f is not unique. What is more, if z denotes any vector in the null space of Aᵀ, then

〈f,z〉 = 〈Ax,z〉 = 〈x,Aᵀz〉 = 0,

and it is then evident that a necessary condition for the existence of a solution for Ax = f is that 〈f,z〉 = 0 for all z in the null space of Aᵀ. The matrix A is said to be symmetric if either of the following equivalent conditions applies: A = Aᵀ, or 〈Ax,z〉 = 〈x,Az〉 for all x,z. When A is symmetric, the null space of Aᵀ not only has the same dimension as that of A, the two null spaces are actually the same. In this case, Ax = f has no solution unless 〈f,z〉 = 0 for all z in the null space of A. If this condition is satisfied, then any two solutions of Ax = f
differ by an element from the null space of A. We will now consider the analogue of these last results for the case of a boundary value problem for Laplace's equation. First, we have to have an inner product on the function space C(Ū). The essential properties of the inner product are

(i) 〈x,z〉 = 〈z,x〉 for all x,z;
(ii) 〈Cx,z〉 = C〈x,z〉 for all x,z and all C ∈ R;
(iii) 〈x + y,z〉 = 〈x,z〉 + 〈y,z〉 for all x,y,z;
(iv) 〈x,x〉 ≥ 0 for all x, and 〈x,x〉 = 0 if and only if x = 0;

and any mapping from Rⁿ × Rⁿ to R having these four properties is called an inner product on the linear space Rⁿ. We can define an inner product on the function space C(Ū) by letting
〈f₁,f₂〉 = ∫_U f₁(x)f₂(x) dx for all f₁,f₂ ∈ C(Ū).
This is just a generalization of the inner product for vectors in Rⁿ, and it is easy to check that the four properties given above are all satisfied for this product. We observe now that

〈Kf₁,f₂〉 = ∫_U (Kf₁)(x)f₂(x) dx = ∫_U (∫_U G(x,y)f₁(y) dy) f₂(x) dx for all f₁,f₂ ∈ C(Ū).
Note further that
∫_U (∫_U G(x,y)f₁(y) dy) f₂(x) dx = ∫_U (∫_U G(x,y)f₂(x) dx) f₁(y) dy = 〈f₁,K*f₂〉,

where K*f₂ is defined by

(K*f)(y) = ∫_U G(x,y)f(x) dx for any f ∈ C(Ū).

Clearly, K* defines another linear operator on C(Ū). When 〈Kf₁,f₂〉 = 〈f₁,K*f₂〉 for all f₁,f₂ ∈ C(Ū), we say that K* is the adjoint of the operator K. Since we know that G(x,y) = G(y,x) for all x,y ∈ U, it follows that K*f = Kf for any f ∈ C(Ū).
We say that the operator K is symmetric. Since

〈Kf₁,f₂〉 = 〈f₁,Kf₂〉 for all f₁,f₂ ∈ C(Ū),

and K = L⁻¹, it seems reasonable to expect that 〈Lu,v〉 = 〈u,Lv〉 for all u,v ∈ D. That this is, in fact, the case follows from (3.4). That is,

〈Lu,v〉 = ∫_U −v∇²u dx = ∫_U −u∇²v dx − ∫_{∂U} (v∂_N u − u∂_N v) dS

= ∫_U −u∇²v dx = 〈u,Lv〉 for all u,v ∈ D,

where the boundary integral vanishes because u and v both vanish on ∂U.
Now consider the Neumann problem
−∇²u(x) = f(x), x ∈ U, ∂_N u(x) = 0, x ∈ ∂U. (10.3)

Problem (10.3) can be expressed in terms of the following operator,

(L_N u)(x) = −∇²u(x) for any u ∈ D_N = {u ∈ C²(Ū) : ∂_N u(x) = 0 for x ∈ ∂U},

as: find u ∈ D_N such that L_N u = f ∈ C(Ū).
Although the action of this operator, L_N u, is the same as that of the previously defined operator L, it is not the same operator since they have different domains. In particular, D_N contains all constant functions, and these functions belong to the null space of L_N. Then L_N is not invertible. However, the same argument used above shows that L_N is symmetric. Then L_N u = f has no solution unless f satisfies 〈f,v〉 = 0 for all constant functions v. If this condition is satisfied, then any two solutions differ by a constant. This fact was already mentioned at the beginning of section 7, but now we see it in a new setting. It is just the analogue of the linear algebra result for singular symmetric matrices A.
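The finite-dimensional analogue again makes this concrete. A sketch using a standard one-dimensional discrete Neumann Laplacian (the discretization choices are ours): the matrix is symmetric, constants span its null space, and the linear system is solvable exactly when the data is orthogonal to the constants.

```python
import numpy as np

# Discrete Neumann Laplacian on a 1-D grid: second differences with zero-slope ends.
n = 60
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A[0, 0] = A[-1, -1] = 1.0            # no-flux (Neumann) end conditions

print(np.max(np.abs(A - A.T)))       # A is symmetric, like L_N
ones = np.ones(n)
print(np.max(np.abs(A @ ones)))      # constants lie in the null space: A 1 = 0

f = np.sin(np.linspace(0.0, np.pi, n))
f_ok = f - f.mean()                  # enforce the compatibility condition <f, 1> = 0
u, *_ = np.linalg.lstsq(A, f_ok, rcond=None)
print(np.max(np.abs(A @ u - f_ok)))  # compatible data: residual ~ 0

u_bad, *_ = np.linalg.lstsq(A, f, rcond=None)
print(np.max(np.abs(A @ u_bad - f))) # incompatible data: residual stays O(mean f)
```

Any solution of the compatible system remains a solution after adding a multiple of `ones`, mirroring the fact that solutions of the Neumann problem differ by a constant.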