Differentiation under the integral in Geometric Algebra

Derek Elkins

September 13, 2015

Abstract

\[
  \frac{d}{dt}\int_{D(t)} L_t(x;\, d^m x)
    = \int_{D(t)} \acute{L}_t\!\left(x;\, \Bigl(d^m x \wedge \tfrac{\partial x}{\partial t}\Bigr) \cdot \acute{\nabla}_x\right)
    + \oint_{\partial D(t)} L_t\!\left(x;\, d^{m-1} x \wedge \tfrac{\partial x}{\partial t}\right)
    + \int_{D(t)} \frac{\partial L_t(x;\, d^m x)}{\partial t}
\]

where $L_t$ is an arbitrary, time-indexed family of suitably differentiable, multivector-valued differential forms, $n$ the dimension of the containing manifold, $m \le n$ the dimension of $D(t)$, and $\nabla_x$ is the vector derivative with respect to $x$.

1 Introduction

Usually the bounds of an integral are fixed, with the main exception being instances of the fundamental theorem of calculus. When the bounds are fixed, differentiation and integration commute.\footnote{Given enough differentiability.} In some areas, such as fluid dynamics, varying domains of integration are more common. Reynolds' transport theorem is an instance of differentiation under the integral.

When differentiating an integral, let's say with respect to "time" for concreteness, there are three ways the value of the integral can vary. First, the integrand can itself simply be time-varying. Next, the boundary of the domain of integration can be changing with respect to time: imagine a disc growing in the plane. Finally, the whole domain of integration can be moving: imagine that disc floating out of the plane. Each of these scenarios corresponds to a term in the equation above.

Ultimately, this paper merely states that the result from [Fla73] is also true for multivector-valued forms and not just scalar forms. The proof uses a similar sort of approach as well, but the notation of the result and of the proof is quite different. So beyond situating the result in the context of geometric calculus, which subsumes exterior calculus, my hope and opinion is that the result and the proof are much less abstruse than, at least, a typical presentation in exterior calculus. By using geometric calculus, most operations of exterior calculus have formulas so simple they don't bear naming. For example, Flanders uses the interior product, which he describes as "not widely known" [Fla73, p. 623]. It's a simple exercise to compare his formula to the one above to find out what has become of the interior product. Similarly, geometric calculus has no need for chains, for defining vectors as differential operators, and so forth.

The proof below attempts to be fairly detailed, albeit informal about some aspects. It is also linked to an intuitive picture, and I attempt to explain why certain definitions are made and why they are correct.

2 Preliminaries

The main driver of the proof, unsurprisingly, will be the Fundamental Theorem of Calculus, both in its general form [HS87, p. 256] and in the basic high school form. I'm using a fairly explicit notation (compare to [Fla73]), with the main omission being the parameter dependence of $d^m x$ $(= d^m x(x))$. Following Hestenes, the accents mark which expression the vector derivative, $\nabla$, is differentiating. So in the following formula for the Fundamental Theorem of Calculus, $\nabla$ is not differentiating $d^m x$.

\[
  \int_M \acute{F}(x;\, d^m x \cdot \acute{\nabla}) = \oint_{\partial M} F(x;\, d^{m-1} x) \tag{1}
\]

Here $F$ is simply a field of linear functions of $(m-1)$-vectors (parameterized by $x$), i.e. linear in $d^{m-1} x$. That is all a differential $(m-1)$-(multi-)form is. We also have the high school form of the fundamental theorem of calculus.

\[
  \frac{d}{dt}\int_{t_0}^{t} F(\tau)\, d\tau = F(t) \tag{2}
\]
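As a quick, informal sanity check on this notation (an added illustration, not a quotation from [HS87]), take $m = 1$, $M = [a, b] \subset \mathbb{R}$, and $F(x;\, \lambda) \equiv \lambda\, f(x)$ for a scalar field $f$ and scalar argument $\lambda$. With $d^1 x = e_1\, dx$, the integrand on the left of (1) is $\acute{F}(x;\, d^1 x \cdot \acute{\nabla}) = f'(x)\, dx$, while the directed boundary measure $d^0 x$ is the scalar $+1$ at $b$ and $-1$ at $a$ (the outward-orientation convention revisited near the end of the proof below). Equation (1) then reduces to

\[
  \int_a^b f'(x)\, dx = F(b;\, +1) + F(a;\, -1) = f(b) - f(a),
\]

which is just the familiar scalar fundamental theorem of calculus.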
We'll simplify the proof by separating out the time-varying integrand case. With suitably differentiable functions and a constant domain of integration, it's trivial to show that the derivative commutes with the integral: writing $F_t = F_{t_0} + \int_{t_0}^{t} \frac{\partial F_\tau}{\partial \tau}\, d\tau$ and noting that the constant $F_{t_0}$ term contributes nothing to the $t$-derivative,

\[
  \frac{d}{dt}\int_{D} F_t(x;\, d^m x)
    = \frac{d}{dt}\int_{D} \int_{t_0}^{t} \frac{\partial F_\tau(x;\, d^m x)}{\partial \tau}\, d\tau
    = \frac{d}{dt}\int_{t_0}^{t} \int_{D} \frac{\partial F_\tau(x;\, d^m x)}{\partial \tau}\, d\tau
    = \int_{D} \frac{\partial F_t(x;\, d^m x)}{\partial t} \tag{3}
\]

To handle the case with a non-constant domain, we define $g(t) \equiv f(t, t)$, where $f$ is defined as

\[
  f(t_1, t_2) \equiv \int_{D(t_1)} F_{t_2}(x;\, d^m x),
\]

and we differentiate $g$ with respect to $t$, getting\footnote{Exercise: Derive the right hand side of (4). As a hint, what does the chain rule look like for $\frac{d h(v(\lambda))}{d\lambda}$ when $v$ is vector-valued?}

\[
  \frac{d g(t)}{dt} = \frac{d f(t, t)}{dt}
    = \frac{\partial f}{\partial t_1}(t, t) + \frac{\partial f}{\partial t_2}(t, t) \tag{4}
\]

Expanding definitions gives:

\[
  \frac{d}{dt}\int_{D(t)} F_t(x;\, d^m x)
    = \left.\frac{d}{dt}\int_{D(t)} F_\tau(x;\, d^m x)\right|_{\tau = t}
    + \int_{D(t)} \frac{\partial F_t(x;\, d^m x)}{\partial t} \tag{5}
\]

where we've used (3), and the middle term is what we still need to work out.
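Before setting up the proof proper, it may help to see, informally, what the formula from the abstract reduces to in the most familiar special case (an added aside, with $f$, $a$, and $b$ chosen purely for illustration). Take $n = m = 1$, $D(t) = [a(t), b(t)] \subset \mathbb{R}$, and $L_t(x;\, A) \equiv (A \cdot e_1)\, f(x, t)$ for a scalar field $f$. Since the ambient space is one-dimensional, $d^1 x \wedge \frac{\partial x}{\partial t} = 0$ and the first term on the right vanishes, while at the boundary points $d^0 x \wedge \frac{\partial x}{\partial t}$ is $+\,b'(t)\, e_1$ at $b(t)$ and $-\,a'(t)\, e_1$ at $a(t)$. The formula then collapses to the classical Leibniz integral rule,

\[
  \frac{d}{dt}\int_{a(t)}^{b(t)} f(x, t)\, dx
    = f(b(t), t)\, b'(t) - f(a(t), t)\, a'(t)
    + \int_{a(t)}^{b(t)} \frac{\partial f(x, t)}{\partial t}\, dx,
\]

with the boundary integral supplying the moving-endpoint terms and the last integral supplying the time-varying-integrand term, just as described in the introduction.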
3 Proof

The goal of the proof is to convince you that the formula is correct, not to validate analytic details. Functions are assumed sufficiently smooth, and generally we assume no "bad" things happen; e.g. I'm not really sure what happens if the domain of integration becomes degenerate. For ease of speaking, I'll refer to the $x$ variables and the manifold supporting them as "spatial" and "space", to contrast with the $t$, $\tau$ variables, which will be referred to as "time". The outline of the proof is to extrude the "space" along "time", producing a "space-time" manifold, apply the general fundamental theorem of calculus to this manifold, and then differentiate the result using the high school fundamental theorem of calculus to get back to just the "spatial" manifold. No physical significance is attached to these terms; in particular, no connection to special relativity is intended.

We define our "space-time" manifold as $E(t) \equiv \Sigma\, \tau : [t_0, t].\ D(\tau)$, borrowing the dependent sum notation from type theory, where $t_0$ is an arbitrary point. In technical terms, this is the total space of the fiber bundle over the interval $[t_0, t]$. In non-technical terms, we're just stacking $D(\tau)$-shaped slices on top of each other for each $\tau$ in $[t_0, t]$, so that $E(t)$ is an undulating, waving "cylinder". We'll write the points of $E(t)$ as $\omega \equiv (x, \tau)$, and we have an additional tangent vector defined as usual: $e_\tau \equiv \frac{\partial \omega}{\partial \tau}$.

On to step two: applying the general fundamental theorem of calculus to our new manifold. First we make a change of variables [HS87, pp. 267-269]:

\[
  B(\omega;\, d^m \omega) \equiv L(\pi_x(\omega);\, \underline{\pi_x}(d^m \omega)) \tag{6}
\]

Note that the $d^m \omega$ is just an elaborately named parameter; it could just as well be $K$. Here $\pi_x : [t_0, t] \times N \to N$, where $N$ is the containing manifold, and in particular $\pi_x(\omega) = x$. This specializes to a function $E(t) \to D(t)$, but its differential (the $\underline{\pi_x}$) is defined on the entire tangent space, not just $D(t)$'s. Also, this broader function avoids concerns about the codomain depending on values of the domain, i.e. by analogy\footnote{This is more than an analogy...} to dependent types, the "tighter" projection operator would have a type like $\Pi\, \omega : E(t).\ D(\pi_\tau(\omega))$. We'll talk more about $\pi_x$ and co. later, but we won't be concerned with $L$ again for a while.

Applying the fundamental theorem (1) to $B$, we get:

\[
  \int_{E(t)} \acute{B}(x, \tau;\, d^{m+1} \omega \cdot \acute{\nabla}) = \oint_{\partial E(t)} B(x, \tau;\, d^m \omega) \tag{7}
\]

$\partial E(t)$ splits into three parts, namely the "caps", $D(t_0)$ and $D(t)$, and the "side of the cylinder", $S(t) \equiv \Sigma\, \tau : [t_0, t].\ \partial D(\tau)$. So we have

\[
  \oint_{\partial E(t)} B(x, \tau;\, d^m \omega)
    = \int_{D(t)} B(x, t;\, d^m x)
    - \int_{D(t_0)} B(x, t_0;\, d^m x)
    - \int_{S(t)} B(x, \tau;\, d^{m-1} x \wedge d\tau) \tag{8}
\]

A little bit of magic happened here. For the integrals over the caps $D(t)$ and $D(t_0)$, the tangent volume element $d^m \omega$ exactly corresponds to the tangent volume element of $D(t)$, i.e. $d^m x$. For example, if we had a disc in the x-y plane and we extruded it along the z axis to get a cylinder, the vectors tangent to that cylinder on the top or bottom would still be vectors in the x-y plane. However, the tangent vectors along the side of that cylinder would necessarily have a z component, and this is exactly what we are seeing in the integral over $S(t)$, where we make the $d\tau$ factor explicit.

But wait. Where did those two negative signs come from? While it's easy to see the one for $D(t_0)$ intuitively, both can be explained by taking a closer look at the general Fundamental Theorem of Calculus. In the Fundamental Theorem of Calculus (1), we weren't too precise about how $d^m x$ and $d^{m-1} x$, on the left and right sides respectively, were related to each other. A little bit of thought reveals that if $d^{m-1} x$ represents the tangent space of $\partial M$, i.e. every vector in it goes "along" the surface (i.e. boundary) of $M$, then in general the only remaining direction to go is normal to the surface, i.e. out of (or into) $M$. Already we can see where the negative sign for the $D(t_0)$ integral came from: its normal vector points in the opposite direction of the normal for the $D(t)$ integral (and we arbitrarily, but consistently, choose $d^m x$ so that the $D(t)$ integral comes out positive). Still, the sign of the $S(t)$ integral may not yet be clear, so let's choose $d^m x = d^{m-1} x \wedge n$, where $n$ coincides with the normal vector at the boundary in (1).
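For a concrete (and again informal, added) picture of this setup, return to the one-dimensional toy case $D(t) = [a(t), b(t)] \subset \mathbb{R}$. The extruded manifold is then the plane region

\[
  E(t) = \{\, (x, \tau) : \tau \in [t_0, t],\ a(\tau) \le x \le b(\tau) \,\},
\]

whose boundary consists of the top edge $D(t)$, with outward normal $+e_\tau$; the bottom edge $D(t_0)$, with outward normal $-e_\tau$ (hence the relative minus sign between the first two integrals of (8)); and the two side curves $\tau \mapsto (a(\tau), \tau)$ and $\tau \mapsto (b(\tau), \tau)$ traced out by the moving endpoints, whose tangent vectors $a'(\tau)\, e_1 + e_\tau$ and $b'(\tau)\, e_1 + e_\tau$ necessarily carry an $e_\tau$ component, which is the explicit $d\tau$ factor in the $S(t)$ integral.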