
Topics from Tensor Calculus∗

Jay R. Walton Fall 2013

1 Preliminaries

These notes are concerned with topics from tensor calculus, i.e. generalizations of calculus to functions defined between two tensor spaces. To make the discussion more concrete, the tensor spaces are defined over ordinary Euclidean space, R^N, with its usual inner product structure. Thus, the tensor spaces inherit a natural inner product, the tensor dot-product, from the underlying Euclidean space, R^N.

2 The Calculus of Tensor Valued Functions

Let
$$F : D \subset T^r \longrightarrow T^s \tag{1}$$
be a function with domain D a subset of T^r, taking values in the tensor space T^s, where D is assumed to be an open subset of T^r.¹

Continuity. The function F is said to be Continuous at A ∈ D provided for every ε > 0 there exists a δ > 0 such that F(B) ∈ B(F(A), ε) whenever B ∈ B(A, δ), i.e. F maps the ball of radius δ centered at A into the ball of radius ε centered at F(A). The function F is said to be continuous on all of D provided it is continuous at each A ∈ D. There are two useful alternative characterizations of continuity. The first is that F is continuous on D provided it maps convergent sequences onto convergent sequences. That is, if A_n ∈ D is a sequence converging to A ∈ D (lim_{n→∞} A_n = A), then lim_{n→∞} F(A_n) = F(A). The second alternative characterization is that the inverse image F^{−1} maps open subsets of T^s to open subsets of D.

Derivative. The function F is said to be Differentiable at A ∈ D provided there exists a tensor L ∈ T^{r+s} such that

$$F(A + H) = F(A) + L[H] + o(H) \quad \text{as } |H| \to 0 \tag{2}$$

∗Copyright © 2011 by J. R. Walton. All rights reserved.
¹A subset D ⊂ T^r is said to be open provided for every element A ∈ D there exists an open ball centered at A that is wholly contained in D.

where |H| denotes the norm of the tensor H ∈ T^r. If such a tensor L exists satisfying (2), it is called the Derivative of F at A and denoted DF(A). Thus, (2) can be rewritten
$$F(A + H) = F(A) + DF(A)[H] + o(H). \tag{3}$$
Recall that o(|H|) is the Landau “little oh” symbol, which is used to denote a function depending upon H that tends to zero faster than |H|, i.e.
$$\lim_{|H| \to 0} \frac{o(H)}{|H|} = 0.$$
If the derivative DF(A) exists at each point in D, then it defines a function DF(·) : D ⊂ T^r −→ T^{r+s}. Moreover, if the function DF(·) is differentiable at A ∈ T^r, then its derivative is a tensor in T^{2r+s}, denoted by D²F(A), called the Second Derivative of F at A ∈ D. Continuing in this manner, derivatives of F of all orders can be defined.

Example. Let
$$\varphi(\cdot) : T^1 \cong \mathbb{R}^N \longrightarrow T^0 \cong \mathbb{R}. \tag{4}$$
Thus, φ(·) is a real-valued function of N real variables and its graph is a surface in R^{N+1}. In the definition of the derivative (3), it is more customary in this context to let H = hu where u is a unit vector in R^N. Defining equation (3) then becomes
$$\varphi(a + hu) = \varphi(a) + D\varphi(a)[hu] + o(hu). \tag{5}$$
From the linearity of the tensor Dφ(a)[hu] in hu, one concludes that
$$D\varphi(a)[u] = \lim_{h \to 0} \frac{\varphi(a + hu) - \varphi(a)}{h}, \tag{6}$$
which is the familiar directional derivative of φ(·) at the point a in the direction u. Thus, being differentiable at a point implies the existence of directional derivatives, and hence partial derivatives, in all directions. However, the converse is not true. That is, there exist functions with directional derivatives existing at a point in all possible directions but which are not differentiable at the point. For such an example, consider the function φ(·) : R² −→ R given by
$$\varphi(x, y) = \begin{cases} \dfrac{x^3 - y^3}{x^2 + y^2} & \text{when } (x, y) \neq (0, 0), \\[1ex] 0 & \text{when } (x, y) = (0, 0). \end{cases} \tag{7}$$
One then shows easily that if u = (cos(θ), sin(θ))^T, the directional derivative of φ(·) at the origin (0, 0) in the direction u equals cos(θ)³ − sin(θ)³. However, φ(·) is not differentiable at the origin in the sense of (3). (Why?) Consider further the function φ(·) in (4).
If φ(·) is differentiable in the sense of (3), then its derivative Dφ(a)[·] ∈ T^1 is a linear transformation from R^N to R, and as such is representable by dot-product with a vector in R^N. Specifically, there exists a unique vector, denoted by ∇φ(a) ∈ R^N, such that
$$D\varphi(a)[u] = \nabla\varphi(a) \cdot u \quad \text{for all } u \in \mathbb{R}^N. \tag{8}$$
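The failure of differentiability for the function in (7) can be checked numerically: the directional derivatives at the origin exist for every θ, but they are not a linear function of the direction u, so no single gradient vector can represent them as in (8). A minimal sketch (the function names are illustrative, not from the notes):

```python
import math

def phi(x, y):
    # The function of equation (7)
    return 0.0 if (x, y) == (0.0, 0.0) else (x**3 - y**3) / (x**2 + y**2)

def dir_deriv(theta, h=1e-7):
    # Difference quotient (6) at the origin in direction (cos t, sin t)
    u = (math.cos(theta), math.sin(theta))
    return (phi(h * u[0], h * u[1]) - phi(0.0, 0.0)) / h

# The directional derivative exists for every theta and equals cos^3 - sin^3 ...
theta = math.pi / 3
exact = math.cos(theta)**3 - math.sin(theta)**3
assert abs(dir_deriv(theta) - exact) < 1e-6

# ... but if phi were differentiable at (0,0), (8) would force
# Dphi(0)[u] = grad . u with grad = (Dphi[e1], Dphi[e2]) = (1, -1),
# and that linear prediction disagrees with the actual directional derivative.
linear_prediction = math.cos(theta) * 1.0 + math.sin(theta) * (-1.0)
mismatch = abs(exact - linear_prediction)
print(mismatch)  # strictly positive: directional derivatives are not linear in u
```

This answers the "(Why?)" above: every directional derivative exists, yet no vector ∇φ(0) can reproduce all of them through (8).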

The vector ∇φ(a) is called the Gradient of φ(·) at a. The component forms for the derivative Dφ and the gradient ∇φ are easily constructed. In particular, let B = {e_1, …, e_N} be the natural orthonormal basis for R^N. Then the 1×N matrix representation for Dφ and the N-tuple vector representation for the gradient ∇φ are given by
$$[D\varphi(a)]_B = [\partial_{x_1}\varphi(a), \ldots, \partial_{x_N}\varphi(a)] \quad \text{and} \quad [\nabla\varphi(a)]_B = \begin{pmatrix} \partial_{x_1}\varphi(a) \\ \vdots \\ \partial_{x_N}\varphi(a) \end{pmatrix} \tag{9}$$
where ∂_{x_i}φ(a) denotes the partial derivative of φ with respect to x_i at the point a,
$$\partial_{x_i}\varphi(a) = \lim_{h \to 0} \frac{\varphi(a + h e_i) - \varphi(a)}{h}.$$

Example. Another important example is provided by functions F(·) : T^0 −→ T^s, i.e. s-th order tensor valued functions of a single scalar variable. Since in mechanics the scalar independent variable is usually time, that variable is given the special symbol t and the derivative of such functions is represented by

$$DF(t)[\tau] = \dot{F}(t)\,\tau \in T^s \tag{10}$$
where
$$\dot{F}(t) = \lim_{h \to 0} \frac{F(t + h) - F(t)}{h}.$$
In component form, if the tensor valued function F(·) has the component representation [F_{i_1,…,i_s}] with respect to the natural basis for T^s, then the component representation for the tensor F˙(t) is
$$[\dot{F}(t)] = [\dot{F}_{i_1,\ldots,i_s}]. \tag{11}$$

Example. A Vector Field is a function a(·) : D ⊂ T^1 ≅ R^N −→ T^1. Its derivative defines a second order tensor Da(x)[·] ∈ T^2. Its component form, with respect to the natural basis on R^N for example, is
$$[Da(x)] = [\partial_{x_j} a_i(x)], \quad i, j = 1, \ldots, N, \tag{12}$$
where [a_i(x)], i = 1, …, N, gives the component representation of a(x). The right hand side of (12) is the familiar Jacobian matrix.

Product Rules. Various types of “products” of tensor functions occur naturally in applications. Rather than proving a separate product rule formula for every product that arises, it is much more expedient and much cleaner to prove one product rule formula for a general, abstract notion of product. To that end, the appropriate general notion of product is provided by a general bi-linear function. More specifically, suppose that F(·) : D ⊂ T^r −→ T^p and G(·) : D ⊂ T^r −→ T^q are two differentiable functions with the same domain set D in T^r but different range spaces. Let π̂(·, ·) : T^p × T^q −→ T^s denote a bi-linear function (i.e. π̂(·, ·) is linear in each of its variables separately) with values in T^s. One then defines the product function E(·) : D ⊂ T^r −→ T^s by

$$E(A) := \hat{\pi}(F(A), G(A)) \quad \text{for } A \in D.$$

Since F and G are assumed to be differentiable at A ∈ D, it is not difficult to show that E is also differentiable at A with

$$DE(A)[H] = \hat{\pi}(DF(A)[H], G(A)) + \hat{\pi}(F(A), DG(A)[H]) \quad \text{for all } H \in T^r. \tag{13}$$
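Formula (13) can be spot-checked numerically for one concrete bilinear product, π̂(S, w) = Sw (matrix times vector), applied to a matrix valued curve F(t) and a vector valued curve G(t). The curves below are made-up example data, not taken from the notes:

```python
import numpy as np

def F(t):  # a 2x2 tensor valued curve (illustrative example)
    return np.array([[np.cos(t), t], [t**2, np.exp(t)]])

def G(t):  # a vector valued curve (illustrative example)
    return np.array([np.sin(t), 1.0 + t])

def dF(t):  # hand-computed derivative of F
    return np.array([[-np.sin(t), 1.0], [2*t, np.exp(t)]])

def dG(t):  # hand-computed derivative of G
    return np.array([np.cos(t), 1.0])

t, h = 0.7, 1e-6
E = lambda s: F(s) @ G(s)             # the product E(t) = pi(F(t), G(t))
lhs = (E(t + h) - E(t - h)) / (2*h)   # central difference approximating DE(t)
rhs = dF(t) @ G(t) + F(t) @ dG(t)     # the two terms of the product rule (13)
assert np.allclose(lhs, rhs, atol=1e-5)
```

The same check works verbatim for any other bilinear π̂ (outer product, dot product, trace of a product) by swapping the `@` for the corresponding operation.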

Notice that (13) has the familiar form (fg)′ = f′g + fg′ from single variable calculus.

Example. Let A(·) : T^0 ≅ R −→ T^r and B(·) : T^0 −→ T^s be differentiable tensor valued functions of the single scalar variable t. Then their tensor product E(t) := A(t) ⊗ B(t) is differentiable with
$$\dot{E}(t) = \dot{A}(t) \otimes B(t) + A(t) \otimes \dot{B}(t).$$
The familiar chain rule from single variable calculus has a straightforward generalization to the tensor setting. Specifically, suppose F(·) : D ⊂ T^r −→ T^q is differentiable at A ∈ D and G(·) : G ⊂ T^q −→ T^s (with G being an open set on which G(·) is defined) is differentiable at F(A) ∈ G ∩ F(D). Then the composite function E(·) := G ◦ F(·) is also differentiable at A ∈ D with

$$DE(A)[H] = DG(F(A))[DF(A)[H]] \quad \text{for all } H \in T^r. \tag{14}$$

The right hand side of (14) is the composition of the tensor DG(F(A))[·] ∈ T^{q+s} with the tensor DF(A)[·] ∈ T^{r+q}, producing the tensor DG(F(A)) ◦ DF(A)[·] ∈ T^{r+s}. This generalizes the familiar chain rule formula (g(f(x)))′ = g′(f(x)) f′(x) from single variable calculus.

Example. An important application of the chain rule is to composite functions of the form E(t) = G ◦ F(t) = G(F(t)), i.e. functions for which the inner function is a function of the single scalar variable t. The chain rule then yields the result

$$\dot{E}(t) = DG(F(t))[\dot{F}(t)].$$

For example, let A(t) be a differentiable function taking values in T^2, i.e. A(·) : R −→ T^2. Then the composite real valued function φ(t) = det(A(t)) is differentiable provided det(A(t)) ≠ 0, with
$$\dot{\varphi}(t) = \det(A(t))\,\mathrm{tr}(\dot{A}(t) A(t)^{-1}).$$
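The determinant formula above is easy to sanity-check with finite differences. The matrix curve below is an arbitrary invertible example chosen for illustration, not one from the notes:

```python
import numpy as np

def A(t):
    # An invertible 2x2 tensor valued curve (illustrative choice)
    return np.array([[1.0 + t, t], [0.0, 2.0 + t**2]])

def dA(t):
    # Hand-computed entrywise derivative of A(t)
    return np.array([[1.0, 1.0], [0.0, 2.0*t]])

t, h = 0.3, 1e-6
phi = lambda s: np.linalg.det(A(s))

numeric = (phi(t + h) - phi(t - h)) / (2.0*h)            # central difference
formula = phi(t) * np.trace(dA(t) @ np.linalg.inv(A(t)))  # det(A) tr(A' A^{-1})
assert abs(numeric - formula) < 1e-6
```

Here det(A(t)) = (1 + t)(2 + t²), so at t = 0.3 both sides evaluate to (2 + t²) + (1 + t)(2t) = 2.87, in agreement with the ordinary product rule for the explicit determinant.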

3 Div, Grad, Curl

Classical vector calculus concerns special cases of (1) with r = 0, 1 and s = 0, 1, 2. It is in that setting that the operators gradient, divergence and curl are usually defined. In elementary calculus, these operators are most often defined through component representations with respect to the natural orthonormal basis for R^N. Here they are given intrinsic definitions irrespective of any chosen basis for R^N.

Gradient. The definition of the gradient of a scalar valued function defined on T^1 ≅ R^N has been given previously and won’t be repeated here. It is also customary to define the gradient of a vector field a(·) : T^1 −→ T^1 as

Grad(a(x)) = ∇a(x) := Da(x).

Thus, for vector fields on R^N, the gradient is just another name for the previously defined derivative.

Divergence. For a vector field, a(x), defined on D ⊂ R^N, the Divergence is defined by
$$\mathrm{Div}(a(x)) := \mathrm{tr}(\nabla a(x)). \tag{15}$$

Thus, the divergence of a vector field is a scalar valued function. The reader should verify that the component form of Div(a(x)) with respect to the natural basis on RN is

$$\mathrm{Div}(a(x)) = \partial_{x_1} a_1(x) + \cdots + \partial_{x_N} a_N(x) = a_{i,i}(x).$$
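The coordinate formula a_{i,i} can be compared against a finite-difference evaluation of tr(∇a) from definition (15). The field below is an arbitrary smooth example on R³, chosen only for illustration:

```python
import numpy as np

def a(x):
    # Example vector field on R^3 (illustrative, not from the notes)
    x1, x2, x3 = x
    return np.array([x1**2 * x2, np.sin(x2) * x3, x3**3])

def div_exact(x):
    # a_{i,i} = d a1/dx1 + d a2/dx2 + d a3/dx3, computed by hand
    x1, x2, x3 = x
    return 2*x1*x2 + np.cos(x2)*x3 + 3*x3**2

def div_fd(x, h=1e-6):
    # tr(grad a): sum of central differences d a_i / d x_i
    total = 0.0
    for i in range(3):
        e = np.zeros(3); e[i] = h
        total += (a(x + e)[i] - a(x - e)[i]) / (2*h)
    return total

p = np.array([0.4, -1.1, 0.7])
assert abs(div_fd(p) - div_exact(p)) < 1e-6
```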

It is also useful to define the divergence for second order tensor valued functions defined on D ⊂ R^N. To that end, if A(·) : D ⊂ R^N −→ T^2, then its divergence, denoted Div(A(x)), is that unique vector field satisfying

$$\mathrm{Div}(A(x)) \cdot a = \mathrm{Div}(A(x)^T a) \tag{16}$$

for all constant vectors a ∈ R^N. In component form, if [A(x)] = [a_{ij}(x)], then

$$[\mathrm{Div}(A(x))] = [a_{ij,j}(x)]^T.$$
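Definition (16) and the component formula a_{ij,j} can be cross-checked numerically: for any constant vector a, the scalar divergence of the vector field Aᵀa should reproduce Div(A) · a. A sketch with an arbitrary smooth tensor field (the field and point are illustrative choices):

```python
import numpy as np

def A(x):
    # Example second order tensor field on R^3 (illustrative)
    x1, x2, x3 = x
    return np.array([[x1*x2, x3,    x1**2],
                     [x2**2, x1*x3, x2],
                     [x3**2, x1,    x1*x2*x3]])

def div_A(x, h=1e-6):
    # Component formula: Div(A)_i = a_{ij,j}, by central differences
    out = np.zeros(3)
    for j in range(3):
        e = np.zeros(3); e[j] = h
        out += (A(x + e)[:, j] - A(x - e)[:, j]) / (2*h)
    return out

def div_vec(v, x, h=1e-6):
    # Scalar divergence of a vector field v, by central differences
    s = 0.0
    for j in range(3):
        e = np.zeros(3); e[j] = h
        s += (v(x + e)[j] - v(x - e)[j]) / (2*h)
    return s

p = np.array([0.3, 0.8, -0.5])
rng = np.random.default_rng(0)
for _ in range(3):
    c = rng.normal(size=3)                   # a constant vector a
    lhs = div_A(p) @ c                       # Div(A) . a
    rhs = div_vec(lambda x: A(x).T @ c, p)   # Div(A^T a), definition (16)
    assert abs(lhs - rhs) < 1e-6
```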

Curl. The Curl is defined for vector fields on R3. To give it an implicit, component independent definition, one proceeds as follows. For a vector field v(·): D ⊂ R3 −→ R3, Curl(v(x)) is defined to be the unique vector field satisfying

$$(\nabla v(x) - (\nabla v(x))^T)\,a = \mathrm{Curl}(v(x)) \times a \tag{17}$$
for all constant vectors a ∈ R³. The right hand side of (17) is the vector cross product of the two vectors Curl(v(x)) and a. An important observation from (17) is that Curl(v(x)) = 0 if and only if ∇v(x) is a symmetric second order tensor.

Useful Formulas. In the following formulas, φ(x) denotes a scalar valued field, u(x) and v(x) denote vector fields and A(x) denotes a second order tensor valued field. One can then readily show from the general product rule (13) that

$$\begin{aligned}
\nabla(\varphi v) &= \varphi \nabla v + v \otimes \nabla\varphi & \mathrm{Div}(\varphi v) &= \varphi\,\mathrm{Div}(v) + v \cdot \nabla\varphi \\
\nabla(u \cdot v) &= (\nabla v)^T u + (\nabla u)^T v & \mathrm{Div}(u \otimes v) &= u\,\mathrm{Div}(v) + (\nabla u)v \\
\mathrm{Div}(A^T v) &= A \cdot \nabla v + v \cdot \mathrm{Div}(A) & \mathrm{Div}(\varphi A) &= \varphi\,\mathrm{Div}(A) + A\nabla\varphi.
\end{aligned} \tag{18}$$

Another useful formula shows that the operators Div and ∇ do not commute. Specifically, the reader should verify that

$$\nabla(\mathrm{Div}(v)) = \mathrm{Div}((\nabla v)^T). \tag{19}$$
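Identity (19) can be verified by finite differences for a polynomial field, for which ∇(Div v) is known in closed form. The field below is an arbitrary example, not one from the notes; the index convention (∇v)_{ij} = ∂_{x_j} v_i follows (12):

```python
import numpy as np

def v(x):
    # Example vector field (illustrative): v = (x1^2 x2, x2 x3, x3^2)
    x1, x2, x3 = x
    return np.array([x1**2 * x2, x2 * x3, x3**2])

def grad_div_exact(x):
    # Div(v) = 2 x1 x2 + x3 + 2 x3, so grad(Div v) = (2 x2, 2 x1, 3)
    x1, x2, x3 = x
    return np.array([2*x2, 2*x1, 3.0])

def div_grad_vT(x, h=1e-4):
    # i-th component of Div((grad v)^T) is sum_j d/dx_j (d v_j / d x_i),
    # computed with nested central differences
    out = np.zeros(3)
    for i in range(3):
        for j in range(3):
            def dvj_dxi(y, i=i, j=j):
                e = np.zeros(3); e[i] = h
                return (v(y + e)[j] - v(y - e)[j]) / (2*h)
            e = np.zeros(3); e[j] = h
            out[i] += (dvj_dxi(x + e) - dvj_dxi(x - e)) / (2*h)
    return out

p = np.array([0.6, -0.2, 1.1])
assert np.allclose(div_grad_vT(p), grad_div_exact(p), atol=1e-5)
```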

4 Integral Theorems

In this section are cataloged a selection of results concerning integrals of various differential operators acting on tensor fields. They all may be thought of as multidimensional generalizations of the Fundamental Theorem of Single Variable Calculus

$$\int_a^b f(x)\,dx = F(b) - F(a)$$
where F(x) is any anti-derivative of f(x), i.e. F′(x) = f(x). All of the theorems presented have versions valid in R^N. However, attention is restricted here to R³ to keep the mathematical complications to a minimum. The reader is assumed to be familiar with the definition of line, surface and volume integrals in R³ as well as with techniques for actually computing them. In what follows, R ⊂ R³ is assumed to be a bounded open set with piecewise smooth boundary ∂R, and n denotes the piecewise smooth field on ∂R of outward pointing unit normal vectors. Moreover, the natural basis for R³ is denoted by {e_1, e_2, e_3}. Also, volume integrals over a region R ⊂ R³ are denoted
$$\int_R f\,dV$$
with dV denoting volume measure, and surface integrals over a surface S in R³ are denoted
$$\int_S f\,dA$$
with dA denoting surface area measure.

4.1 The Divergence Theorem

The Divergence Theorem has four fundamental versions in R³, given in the following

Divergence Theorem. Let φ(·) : R −→ R, v(·) : R −→ T^1 ≅ R³ and S(·) : R −→ T^2 denote smooth scalar, vector and 2nd-order tensor fields on R. Then
$$\int_R \nabla\varphi\,dV = \int_{\partial R} \varphi\,n\,dA \tag{20}$$
$$\int_R \mathrm{Div}(v)\,dV = \int_{\partial R} v \cdot n\,dA \tag{21}$$
$$\int_R \nabla v\,dV = \int_{\partial R} v \otimes n\,dA \tag{22}$$
$$\int_R \mathrm{Div}(S)\,dV = \int_{\partial R} S n\,dA. \tag{23}$$
Proof: The reader has undoubtedly seen (20) and (21) proved in a vector calculus text. (22) and (23) are simple consequences of (20) and (21). In particular, for (23), let a be a constant vector in R³. Then,
$$\int_{\partial R} a \cdot S n\,dA = \int_{\partial R} (S^T a) \cdot n\,dA = \int_R \mathrm{Div}(S^T a)\,dV = \int_R (\mathrm{Div}(S)) \cdot a\,dV = a \cdot \int_R \mathrm{Div}(S)\,dV.$$
Since the vector a is arbitrary, (23) now follows easily. (22) can be proved in similar fashion.

The following important application of the Divergence Theorem provides a useful interpretation of the divergence operator. Specifically, one can easily show

Theorem. Let B[a, r] ⊂ R denote the ball centered at a ∈ R of radius r > 0. Then
$$\mathrm{Div}(v(a)) = \lim_{r \to 0^+} \frac{1}{\mathrm{vol}(B[a, r])} \int_{\partial B[a, r]} v \cdot n\,dA \tag{24}$$
$$\mathrm{Div}(S(a)) = \lim_{r \to 0^+} \frac{1}{\mathrm{vol}(B[a, r])} \int_{\partial B[a, r]} S n\,dA. \tag{25}$$
Proof: Supplied by the reader.

Thus, one sees from (24) that Div(v(a)) equals the (outward) flux per unit volume of v through an infinitesimally small sphere centered at a. Whereas, if the second order tensor S is thought of as a stress tensor, for example, one sees from (25) that Div(S(a)) gives the total contact force per unit volume acting on the boundary of an infinitesimally small ball centered at a.
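Version (21) of the Divergence Theorem is easy to test numerically on the unit cube [0, 1]³. For the illustrative field v = (x₁x₂, x₂x₃, x₃x₁), with Div(v) = x₁ + x₂ + x₃, both sides equal 3/2. A simple midpoint-rule sketch (resolution and field are arbitrary choices):

```python
import numpy as np

def v(x1, x2, x3):
    # Example field (illustrative): v = (x1 x2, x2 x3, x3 x1)
    return np.array([x1*x2, x2*x3, x3*x1])

n = 50
pts = (np.arange(n) + 0.5) / n          # midpoint rule nodes in [0, 1]
A_, B_ = np.meshgrid(pts, pts, indexing="ij")

# Left side of (21): volume integral of Div(v) = x1 + x2 + x3 over the cube
X1, X2, X3 = np.meshgrid(pts, pts, pts, indexing="ij")
vol = np.sum(X1 + X2 + X3) / n**3

# Right side of (21): flux v . n through the six faces (outward normals)
flux = 0.0
for i in range(3):
    for side, sign in ((1.0, 1.0), (0.0, -1.0)):
        coords = [A_, B_]
        coords.insert(i, np.full_like(A_, side))   # fix x_i on this face
        flux += sign * np.sum(v(*coords)[i]) / n**2

print(vol, flux)  # both equal 1.5 up to rounding
assert abs(vol - flux) < 1e-9
```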

4.2 Green Identities

The Green Identities are important applications of the Divergence Theorem. They are proved from the following multidimensional generalization of the single variable calculus integration by parts formula
$$\int_a^b f(x) g'(x)\,dx = f(b)g(b) - f(a)g(a) - \int_a^b f'(x) g(x)\,dx.$$
If φ is a smooth scalar field and v is a smooth vector field defined on R ⊂ R³, then it follows from the Divergence Theorem and the second product rule formula in (18) that
$$\int_R (\varphi\,\mathrm{Div}(v) + v \cdot \nabla\varphi)\,dV = \int_R \mathrm{Div}(\varphi\,v)\,dV = \int_{\partial R} \varphi\,v \cdot n\,dA.$$
Letting v = ∇ψ, where ψ is another smooth scalar field defined on R, one obtains the

First Green Identity. Let φ and ψ denote smooth scalar fields defined on the region R ⊂ R³. Then
$$\int_R \varphi \Delta\psi\,dV = \int_{\partial R} \varphi \frac{d\psi}{dn}\,dA - \int_R \nabla\varphi \cdot \nabla\psi\,dV \tag{26}$$

where
$$\frac{d\psi}{dn} := \nabla\psi \cdot n$$
denotes the (outward) normal derivative of ψ on the boundary ∂R of R. From the First Green Identity one easily derives the

Second Green Identity. Again let φ and ψ denote smooth scalar fields defined on R ⊂ R³. Then
$$\int_R (\varphi \Delta\psi - \psi \Delta\varphi)\,dV = \int_{\partial R} \left(\varphi \frac{d\psi}{dn} - \psi \frac{d\varphi}{dn}\right) dA. \tag{27}$$

Application to Poisson Equation. The Green Identities are work-horse tools in applied mathematics. As an illustration, consider the classical mixed boundary value problem for the Poisson equation. Specifically, let R ⊂ R³ be a bounded open region with piecewise smooth boundary ∂R. Suppose further that the boundary is the union of two surfaces with non-empty (2-dimensional) interiors and with a piecewise smooth common boundary Γ. More specifically, ∂R = S₁ ∪ S₂ with S̊₁ ∩ S̊₂ = ∅, S₁ ∩ S₂ = Γ and S̊₁, S̊₂ ≠ ∅, where Γ, the common boundary of the surfaces S₁ and S₂, is a piecewise smooth closed curve.

Mixed Boundary Value Problem. The classical mixed boundary value problem for the Poisson equation requires finding a smooth function u(x) satisfying the Poisson equation

$$-\Delta u = f, \quad \text{for } x \in R \tag{28}$$

$$u = g \quad \text{for } x \in S_1 \tag{29}$$
$$\frac{du}{dn} = h \quad \text{for } x \in S_2. \tag{30}$$
Boundary condition (29) is called a Dirichlet Boundary Condition while (30) is called a Neumann Boundary Condition.

Data Compatibility. Letting φ(x) ≡ 1 and ψ(x) = u(x) in (26) yields
$$\int_R f\,dV = \int_R -\Delta u\,dV = -\int_{\partial R} \frac{du}{dn}\,dA. \tag{31}$$

In particular, for the pure Neumann problem, for which S₁ = ∅, one has the equilibrium requirement on the data h and f
$$\int_{\partial R} h\,dA + \int_R f\,dV = 0.$$
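The compatibility condition can be checked for a manufactured solution. Take u = x₁² + x₂² + x₃² on the unit cube (an illustrative choice), so f = −Δu = −6 and h = du/dn = ∇u · n on each face; the two integrals then cancel:

```python
import numpy as np

# Manufactured Neumann data on the unit cube for u = x1^2 + x2^2 + x3^2:
# f = -Laplacian(u) = -6, and h = du/dn = grad(u) . n on each face.
f_integral = -6.0                        # integral of f over the cube (volume 1)
grad_u = lambda x: 2.0 * np.asarray(x)   # grad(u) = 2x

n = 40
pts = (np.arange(n) + 0.5) / n           # midpoint rule nodes in [0, 1]
A_, B_ = np.meshgrid(pts, pts, indexing="ij")

h_integral = 0.0
for i in range(3):
    for side, sign in ((1.0, 1.0), (0.0, -1.0)):
        coords = [A_, B_]
        coords.insert(i, np.full_like(A_, side))     # fix x_i on this face
        # outward normal is sign * e_i, so du/dn = sign * (grad u)_i
        h_integral += np.sum(sign * grad_u(coords)[i]) / n**2

print(h_integral + f_integral)  # zero up to rounding
assert abs(h_integral + f_integral) < 1e-9
```

Each face x_i = 1 contributes 2 to the boundary integral and each face x_i = 0 contributes nothing, so ∫h dA = 6 balances ∫f dV = −6 exactly.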

Uniqueness of Smooth Solutions. Suppose there are two smooth solutions, u₁(x) and u₂(x), of (28)–(30). Define u(x) := u₁(x) − u₂(x). Then u(x) satisfies
$$-\Delta u = 0, \quad \text{for } x \in R \tag{32}$$

subject to the mixed boundary conditions
$$u = 0 \ \text{for } x \in S_1 \quad \text{and} \quad \frac{du}{dn} = 0 \ \text{for } x \in S_2. \tag{33}$$

Multiplying (32) by u(x), integrating the resulting equation over R and then integrating by parts (i.e. using the First Green Identity (26)) making use of (33), one obtains the identity
$$\int_R |\nabla u|^2\,dV = 0. \tag{34}$$
It follows from (34) that |∇u(x)| ≡ 0 on R, and hence that u(x) is a constant function. Since the boundary conditions require u(x) ≡ 0 on S₁, u(x) ≡ 0 on all of R. Thus, u₁(x) = u₂(x) on R, as was required to be shown.

4.3 Potential Theory

This section presents a few results from classical potential theory. Specifically, the issue studied is conservative vector fields on R^N.

Conservative Field. A vector field v(x) defined on a region R ⊂ R^N is called Conservative provided that work is path independent. That is, given any two piecewise smooth paths Γ₁[a, b], Γ₂[a, b] ⊂ R connecting the two points a, b ∈ R, one has
$$\oint_{\Gamma_1[a,b]} v \cdot dx = \oint_{\Gamma_2[a,b]} v \cdot dx \tag{35}$$
where $\oint_{\Gamma[a,b]} v \cdot dx$ denotes the line integral of v(x) along the path Γ[a, b]. Alternatively, the field v(x) is conservative provided its line integral around any piecewise smooth, closed curve in R is zero. The main result of the subject asserts that a vector field v(x) is conservative if and only if it is the gradient of some scalar field φ(x). If such a scalar field exists, it (or possibly its negation in some applications) is called a Potential for the field v(x). If the region R is connected, i.e. if every two points a, b ∈ R can be connected by a path Γ[a, b] ⊂ R, then any two potential functions differ by at most an additive constant. More formally, this result is called the

Potential Theorem. A vector field v(x) defined on a connected region R ⊂ R^N is conservative if and only if there exists a scalar field φ(x) such that v(x) = ∇φ(x). Any two such scalar fields differ by a constant.

Proof: Suppose that v(x) = ∇φ(x) and let Γ[a, b] be a path in R joining the two points a and b. Suppose c(r), 0 ≤ r ≤ 1, is a parametric representation of Γ[a, b] with c(0) = a and c(1) = b. Then

$$\oint_{\Gamma[a,b]} v(x) \cdot dx = \int_0^1 v(c(r)) \cdot c'(r)\,dr = \int_0^1 \nabla\varphi(c(r)) \cdot c'(r)\,dr = \int_0^1 \frac{d}{dr}\varphi(c(r))\,dr = \varphi(c(1)) - \varphi(c(0)) = \varphi(b) - \varphi(a).$$

Thus, the line integral depends only upon a and b, not the particular path joining them. It follows that v(x) is a conservative vector field on R. Conversely, assume v(x) is conservative. Let a ∈ R be fixed. Then for any x ∈ R, define φ(x; a) by
$$\varphi(x; a) := \oint_{\Gamma[a,x]} v(z) \cdot dz \tag{36}$$

where Γ[a, x] is any path in R joining a to x. By path independence, φ(x; a) is unambiguously defined. To show that v(x) = ∇φ(x; a), it suffices to demonstrate that
$$\frac{d}{de}\varphi(x; a) = \left.\frac{d}{ds}\varphi(x + s\,e; a)\right|_{s=0} = v(x) \cdot e \tag{37}$$
where e is any unit vector. Since the left most term in (37) is the directional derivative of φ(x; a) at x in the direction e, which must also be given by
$$\frac{d}{de}\varphi(x; a) = \nabla\varphi(x; a) \cdot e,$$
one concludes that ∇φ(x; a) = v(x). To verify (37), let δ > 0 be chosen small enough so that x + s e ∈ R for all |s| < δ. Let Γ[a, x − δe] denote any path in R joining a and x − δe and let Γ[x − δe, x + s e] denote the straight line path connecting x − δe to x + s e. Then
$$\varphi(x + s\,e; a) = \oint_{\Gamma[a,\,x - \delta e]} v(z) \cdot dz + \oint_{\Gamma[x - \delta e,\,x + s e]} v(z) \cdot dz. \tag{38}$$

The first integral on the right-hand-side of (38) is constant with respect to s, whereas for the second,
$$\oint_{\Gamma[x - \delta e,\,x + s e]} v(z) \cdot dz = \int_{-\delta}^s v(x + r\,e) \cdot e\,dr.$$
It now follows that
$$\left.\frac{d}{ds}\varphi(x + s\,e; a)\right|_{s=0} = \left. v(x + s\,e) \cdot e \right|_{s=0} = v(x) \cdot e$$

as required. Finally, let φ₁(x) and φ₂(x) be two scalar fields whose gradients equal v(x). Define φ(x) := φ₁(x) − φ₂(x). Then ∇φ(x) ≡ 0 on R. Since R is assumed to be connected, a standard theorem in calculus shows that φ(x) is identically constant on R.

This result is somewhat unsatisfying as a test for whether or not a given vector field v(x) is conservative, since it requires either showing that a suitable potential exists or showing that there is a closed curve on which the line integral of v(x) does not vanish. However, for simply connected regions R ⊂ R³, there is a convenient test for conservative fields involving the Curl operator. More specifically, a region R ⊂ R³ is called Simply Connected provided every closed path is Homotopic to a Point. Roughly speaking, homotopic to a point means that a closed path can be continuously deformed to a point without leaving the region R. To be more precise, a closed curve c(s), 0 ≤ s ≤ 1, is said to be homotopic to a point if there exists a continuous function h(s, r) defined for 0 ≤ s, r ≤ 1 satisfying
$$h(s, 0) = c(s) \ \text{for } 0 \le s \le 1, \qquad h(0, r) = h(1, r) \ \text{for all } 0 \le r \le 1, \qquad h(s_1, 1) = h(s_2, 1) \ \text{for all } 0 \le s_1, s_2 \le 1.$$

The function h(·, ·), which is called a Homotopy, is a one parameter family of closed paths, h(·, r), 0 ≤ r ≤ 1, that continuously deforms the given closed curve, c(s), to the one point (constant) curve h(s, 1). The reader should show that the region between two co-axial cylinders is not simply connected in R³ whereas the region between two concentric spheres is. A simple test for conservative vector fields on simply connected regions in R³ is given by the following

Theorem. Let R ⊂ R³ be simply connected. Then a smooth vector field v(x) on R is conservative if and only if Curl(v(x)) ≡ 0 on R.

Proof: The theorem can be proved with the aid of the classical

Stokes Theorem. Let v(x) be a smooth vector field on a region R ⊂ R³ and let S ⊂ R be a smooth, orientable surface whose boundary ∂S = Γ is a piecewise smooth, simple closed curve (no self-intersections). Then
$$\int_S \mathrm{Curl}(v) \cdot n\,dA = \oint_\Gamma v(x) \cdot dx, \tag{39}$$
where n(x) is a continuous normal vector field on S and the direction for the line integral on the right-hand-side of (39) is chosen by a parametric representation c(s), 0 ≤ s ≤ 1, for Γ satisfying (ċ(0) × ċ(s)) · n(c(0)) > 0 for 0 < s < δ, for some δ > 0. (Proof: Omitted since the reader is assumed to have seen the proof in a previous calculus course.)

The main theorem is proved by showing that the line integral of v(x) along any simple closed curve in R vanishes. To that end, let c(s), 0 ≤ s ≤ 1, be a parametric representation for a smooth, simple closed curve Γ in R. Since R is assumed to be simply connected, there exists a homotopy h(s, r), 0 ≤ s, r ≤ 1, smoothly deforming Γ to a point in R. The homotopy h(s, r), 0 ≤ s, r ≤ 1, may be thought of as giving a parametric representation for a smooth orientable surface S ⊂ R having Γ as boundary and with a smooth unit normal vector field defined by n(x) := m(x)/|m(x)| where

m(x) := ∂sh(s, r) × ∂rh(s, r).

Applying Stokes Theorem one concludes that
$$\oint_\Gamma v(x) \cdot dx = \int_S \mathrm{Curl}(v(x)) \cdot n(x)\,dA = 0.$$
The integral over the surface S vanishes because Curl(v(x)) is assumed to be identically zero in R. Conversely, if v(x) is conservative, then v(x) = ∇φ(x) for some scalar field φ(x). It follows that ∇v(x) = ∇∇φ(x) is a symmetric second order tensor, and hence that Curl(v(x)) ≡ 0 on R. This completes the proof of the theorem.
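Both directions of the theorem can be illustrated numerically: a gradient field has vanishing curl and path-independent line integrals, while a rotational field fails both tests. The fields, paths and quadrature below are standard illustrative choices, not taken from the notes:

```python
import numpy as np

def curl(v, x, h=1e-6):
    # Standard component formula for Curl via central differences
    def d(j, k):  # d v_k / d x_j at x
        e = np.zeros(3); e[j] = h
        return (v(x + e)[k] - v(x - e)[k]) / (2*h)
    return np.array([d(1, 2) - d(2, 1), d(2, 0) - d(0, 2), d(0, 1) - d(1, 0)])

def line_integral(v, c, dc, n=4000):
    # Composite midpoint rule for the line integral of v along the path c(r)
    r = (np.arange(n) + 0.5) / n
    return sum(v(c(ri)) @ dc(ri) for ri in r) / n

grad_field = lambda x: np.array([2*x[0]*x[1], x[0]**2, 1.0])  # = grad(x1^2 x2 + x3)
rot_field  = lambda x: np.array([-x[1], x[0], 0.0])           # not conservative

p = np.array([0.5, -0.7, 1.2])
assert np.allclose(curl(grad_field, p), 0.0, atol=1e-5)     # Curl of a gradient vanishes
assert not np.allclose(curl(rot_field, p), 0.0, atol=1e-5)  # here Curl = (0, 0, 2)

# Two different paths from (0,0,0) to (1,1,1) give the same work for grad_field
c1, dc1 = lambda r: np.array([r, r, r]),       lambda r: np.array([1.0, 1.0, 1.0])
c2, dc2 = lambda r: np.array([r, r**2, r**3]), lambda r: np.array([1.0, 2*r, 3*r**2])
w1 = line_integral(grad_field, c1, dc1)
w2 = line_integral(grad_field, c2, dc2)
assert abs(w1 - w2) < 1e-4  # both equal phi(1,1,1) - phi(0,0,0) = 2
```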
