J.M. Sullivan, TU Berlin · B: Differential Forms · Diff Geom II, WS 2015/16
B. DIFFERENTIAL FORMS

We have already seen one-forms (covector fields) on a manifold. In general, a k-form is a field of alternating k-linear forms on the tangent spaces of a manifold. Forms are the natural objects for integration: a k-form can be integrated over an oriented k-submanifold. We start with tensor products and the exterior algebra of multivectors.

B1. Tensor products

Recall that, if V, W and X are vector spaces, then a map b: V × W → X is called bilinear if

    b(v + v′, w) = b(v, w) + b(v′, w),
    b(v, w + w′) = b(v, w) + b(v, w′),
    b(av, w) = ab(v, w) = b(v, aw).

The function b is defined on the set V × W. This Cartesian product of two vector spaces can be given the structure of a vector space V ⊕ W, the direct sum. But a bilinear map b: V × W → X is completely different from a linear map V ⊕ W → X.

The tensor product space V ⊗ W is a vector space designed exactly so that a bilinear map b: V × W → X becomes a linear map V ⊗ W → X. More precisely, it can be characterized abstractly by the following "universal property".

Definition B1.1. The tensor product of vector spaces V and W is a vector space V ⊗ W with a natural bilinear map V × W → V ⊗ W, written (v, w) ↦ v ⊗ w, with the property that any bilinear map b: V × W → X factors uniquely through V ⊗ W. That means there exists a unique linear map L: V ⊗ W → X such that b(v, w) = L(v ⊗ w).

This does not yet show that the tensor product exists, but uniqueness is clear: if X and Y were both tensor products, then each defining bilinear map would factor through the other – we get inverse linear maps between X and Y, showing they are isomorphic.

Note that the elements of the form v ⊗ w must span V ⊗ W, since otherwise L would not be unique. If {e_i} is a basis for V and {f_j} a basis for W, then bilinearity gives

    (Σ_i v^i e_i) ⊗ (Σ_j w^j f_j) = Σ_{i,j} v^i w^j (e_i ⊗ f_j).

Clearly then {e_i ⊗ f_j} spans V ⊗ W – indeed one can check that it is a basis. This is a valid construction for the space V ⊗ W – as the span of the e_i ⊗ f_j – but it does depend on the chosen bases. If dim V = m and dim W = n, then we note dim V ⊗ W = mn.

A much more abstract construction of V ⊗ W goes through a huge infinite-dimensional space. Given any set S, the free vector space on S is the set of all formal finite linear combinations Σ a_i s_i with a_i ∈ R and s_i ∈ S. (This can equally well be thought of as the set of all real-valued functions on S which vanish outside some finite subset.) For instance, if S has k elements this gives a k-dimensional vector space with S as basis.

Given vector spaces V and W, let F be the free vector space over the set V × W. (This consists of formal sums Σ a_i (v_i, w_i) but ignores all the structure we have on the set V × W.) Now let R ⊂ F be the linear subspace spanned by all elements of the form:

    (v + v′, w) − (v, w) − (v′, w),
    (v, w + w′) − (v, w) − (v, w′),
    (av, w) − a(v, w),
    (v, aw) − a(v, w).

These correspond of course to the bilinearity conditions we started with. The quotient vector space F/R will be the tensor product V ⊗ W. We have started with all possible v ⊗ w as generators and thrown in just enough relations to make the map (v, w) ↦ v ⊗ w be bilinear.

The tensor product is commutative: there is a natural linear isomorphism V ⊗ W → W ⊗ V such that v ⊗ w ↦ w ⊗ v. (This is easiest to verify using the universal property – simply factor the bilinear map (v, w) ↦ w ⊗ v through V ⊗ W to give the desired isomorphism.) Similarly, the tensor product is associative: there is a natural linear isomorphism V ⊗ (W ⊗ X) → (V ⊗ W) ⊗ X. Note that any trilinear map from V × W × X factors through this triple tensor product V ⊗ W ⊗ X.

Of special interest are the tensor powers of a single vector space V. We write V^⊗k := V ⊗ ··· ⊗ V. If {e_i} is a basis for V, then {e_{i_1} ⊗ ··· ⊗ e_{i_k}} is a basis for V^⊗k. In particular, if V has dimension m, then V^⊗k has dimension m^k. There is a natural k-linear map V^k → V^⊗k, and any k-linear map V^k → W factors uniquely through V^⊗k.

One can check that the dual of a tensor product is the tensor product of duals: (V ⊗ W)* = V* ⊗ W*. In particular, we have (V*)^⊗k = (V^⊗k)*. The latter is of course the set of linear functionals V^⊗k → R, which as we have seen is exactly the set of k-linear maps V^k → R.

Definition B1.2. A graded algebra is a vector space A decomposed as A = ⊕_{k=0}^∞ A_k, together with an associative bilinear multiplication operation A × A → A that respects the grading in the sense that the product ω · η of elements ω ∈ A_k and η ∈ A_ℓ is an element of A_{k+ℓ}. Often we consider graded algebras that are either commutative or anticommutative. Here anticommutative has a special meaning: for ω ∈ A_k and η ∈ A_ℓ as above, we have ω · η = (−1)^{kℓ} η · ω.

Example B1.3. The tensor algebra of a vector space V is

    ⊗*V := ⊕_{k=0}^∞ V^⊗k.

Here of course V^⊗1 ≅ V and V^⊗0 ≅ R. Note that the tensor product is graded, but is neither commutative nor anticommutative.
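In coordinates, a simple tensor v ⊗ w is just the outer product: its coefficient on e_i ⊗ f_j is v^i w^j. A minimal Python sketch (the helper names `tensor` and `add` are ours, not from the notes) checking the bilinearity relations numerically:

```python
# A simple tensor v ⊗ w in V ⊗ W, represented by its coordinate matrix
# (v^i w^j) with respect to the basis {e_i ⊗ f_j}: the outer product.

def tensor(v, w):
    """Coordinates of v ⊗ w: an m×n matrix for v in R^m, w in R^n."""
    return [[vi * wj for wj in w] for vi in v]

def add(a, b):
    """Entrywise sum of two coordinate matrices."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

v, vp = [1, 2], [3, -1]              # vectors in V = R^2
w = [0, 5, 2]                        # a vector in W = R^3

# The defining relations of the quotient F/R hold in coordinates:
# (v + v') ⊗ w = v ⊗ w + v' ⊗ w
assert tensor([x + y for x, y in zip(v, vp)], w) == add(tensor(v, w), tensor(vp, w))
# (av) ⊗ w = v ⊗ (aw)
assert tensor([3 * x for x in v], w) == tensor(v, [3 * x for x in w])
# dim(V ⊗ W) = mn: the coordinate matrix has 2 · 3 = 6 entries
assert sum(len(row) for row in tensor(v, w)) == 6
```

A general element of V ⊗ W is a sum of such matrices, i.e. an arbitrary m×n matrix; the simple tensors v ⊗ w are exactly the matrices of rank at most one, a first hint that not every tensor is simple.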
B2. Exterior algebra

We now want to focus on antisymmetric tensors, to develop the so-called exterior algebra or Grassmann algebra of the vector space V.

Just as we constructed V ⊗ V = V^⊗2 as a quotient of a huge vector space, adding relators corresponding to the rules for bilinearity, we construct the exterior power V ∧ V = Λ_2 V as a further quotient. In particular, letting S ⊂ V ⊗ V denote the span of the elements v ⊗ v for all v ∈ V, we set V ∧ V := (V ⊗ V)/S. We write v ∧ w for the image of v ⊗ w under the quotient map. Thus v ∧ v = 0 for any v. From

    (v + w) ∧ (v + w) = 0

it then follows that v ∧ w = −w ∧ v. If {e_i : 1 ≤ i ≤ m} is a basis for V, then

    {e_i ∧ e_j : 1 ≤ i < j ≤ m}

is a basis for V ∧ V.

Higher exterior powers of V can be constructed in the same way, but formally it is easiest to construct the whole exterior algebra Λ*V = ⊕_k Λ_k V at once, as a quotient of the tensor algebra ⊗*V, this time by the two-sided ideal generated by the same set S = {v ⊗ v} ⊂ V ⊗ V ⊂ ⊗*V. This means the span not just of the elements of S but also of their products (on the left and right) by arbitrary other tensors. Elements of Λ*V are called multivectors and elements of Λ_k V are more specifically k-vectors.

End of Lecture 30 Nov 2015

Again we use ∧ to denote the product on the resulting (still graded) quotient algebra. This product is called the wedge product or more formally the exterior product. We again get v ∧ w = −w ∧ v for v, w ∈ V. More generally, for any v_1, …, v_k ∈ V and any permutation σ ∈ Σ_k of {1, …, k}, this implies

    v_{σ(1)} ∧ ··· ∧ v_{σ(k)} = (sgn σ) v_1 ∧ ··· ∧ v_k.

A special case is the product of a k-vector α with an ℓ-vector β, where we use a cyclic permutation to get the anticommutative law α ∧ β = (−1)^{kℓ} β ∧ α.

If {e_i : 1 ≤ i ≤ m} is a basis for V, then

    {e_{i_1···i_k} := e_{i_1} ∧ ··· ∧ e_{i_k} : 1 ≤ i_1 < ··· < i_k ≤ m}

is a basis for Λ_k V. In particular, dim Λ_k V = \binom{m}{k}; we have Λ_0 V = R but also Λ_m V ≅ R, spanned by e_{12···m}. For k > m there are no antisymmetric tensors: Λ_k V = 0. The exterior algebra has

    dim Λ*V = Σ_{k=0}^m \binom{m}{k} = 2^m.

The determinant has a natural definition in terms of the exterior algebra: if we have m vectors v_j ∈ V given in terms of the basis {e_i} as v_j = Σ_i v^i_j e_i, then

    v_1 ∧ ··· ∧ v_m = det(v^i_j) e_{12···m}.

(The components of the wedge product of k vectors v_j are given by the various k×k minor determinants of the matrix (v^i_j).)

The exterior powers of V with the natural k-linear maps V^k → Λ_k V are also characterized by the following universal property. Given any alternating k-linear map V^k → X to any vector space X, it factors uniquely through Λ_k V. That is, alternating k-linear maps from V^k correspond to linear maps from Λ_k V. (One can also phrase the universality for all k together in terms of homomorphisms of anticommutative graded algebras.)

So far we have developed everything abstractly and algebraically. But there is a natural geometric picture of how k-vectors in Λ_k V correspond to k-planes (k-dimensional linear subspaces) in V. More precisely, we should talk about simple k-vectors here: those that can be written in the form v_1 ∧ ··· ∧ v_k. We will see that, for instance, e_{12} + e_{34} ∈ Λ_2 R^4 is not simple.

A nonzero vector v ∈ V lies in a unique oriented 1-plane (line) in V; two vectors represent the same oriented line if and only if they are positive multiples of each other. Now suppose we have vectors v_1, …, v_k ∈ V. They are linearly independent if and only if 0 ≠ v_1 ∧ ··· ∧ v_k ∈ Λ_k V. Two linearly independent k-tuples (v_1, …, v_k) and (w_1, …, w_k) represent the same oriented k-plane if and only if the wedge products v_1 ∧ ··· ∧ v_k and w_1 ∧ ··· ∧ w_k are positive multiples of each other, that is, if they lie in the same ray in Λ_k V. (Indeed, the multiple here is the ratio of k-areas of the parallelepipeds spanned by the two k-tuples, given as the determinant of the change-of-basis matrix for the k-plane.)

We let G_k(V) denote the set of oriented k-planes in V, called the (oriented) Grassmannian. Then the set of simple k-vectors in Λ_k V can be viewed as the cone over G_k(V). (If we pick a norm on Λ_k V, say induced by an inner product on V, then we can think of G_k(V) as the set of "unit" simple k-vectors, say those arising from an orthonormal basis for some k-plane.)

(Often, especially in algebraic geometry, one prefers to work with the unoriented Grassmannian G_k(V)/±. It is most naturally viewed as lying in the projective space P(V) := (V ∖ {0})/(R ∖ {0}). In algebraic geometry one typically also replaces R by C throughout.)

If we give V an inner product, then any k-plane has a unique orthogonal (m−k)-plane. This induces an isomorphism between G_k V and G_{m−k} V. It extends to a linear, norm-preserving isomorphism

    ⋆ : Λ_k V → Λ_{m−k} V

called the Hodge star operator. (Recall that both these spaces have the same dimension \binom{m}{k}.) If v is a simple k-vector, then ⋆v is a simple (m−k)-vector representing the orthogonal complement. In particular, if {e_i} is an oriented orthonormal basis for V, then

    ⋆(e_1 ∧ ··· ∧ e_k) = e_{k+1} ∧ ··· ∧ e_m,

and similarly each other vector in our standard basis for Λ_k V maps to a basis vector for Λ_{m−k} V, possibly with a minus sign.

Classical vector calculus in three dimensions uses the Hodge star implicitly: instead of talking about bivectors and trivectors, we introduce the cross product and the triple product:

    v × w := ⋆(v ∧ w),    [u, v, w] := ⟨u, v × w⟩ = ⋆(u ∧ v ∧ w).
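The minor-determinant description of wedge coordinates can be made concrete. A small Python sketch (helper names are ours, under the conventions above): the e_{i_1···i_k}-coefficient of v_1 ∧ ··· ∧ v_k is the k×k minor of the matrix (v^i_j) using rows i_1 < ··· < i_k.

```python
# Coordinates of v_1 ∧ ... ∧ v_k in Λ_k R^m via k×k minors of (v^i_j).
from itertools import combinations, permutations

def det(M):
    """Determinant via the Leibniz sum over permutations (fine for small k)."""
    total = 0
    for perm in permutations(range(len(M))):
        sign = 1
        for a in range(len(perm)):
            for b in range(a + 1, len(perm)):
                if perm[a] > perm[b]:
                    sign = -sign
        prod = 1
        for i, j in enumerate(perm):
            prod *= M[i][j]
        total += sign * prod
    return total

def wedge_coords(vectors, m):
    """Map (v_1, ..., v_k) in R^m to the dict {I: coefficient of e_I}."""
    k = len(vectors)
    return {I: det([[vectors[j][i] for j in range(k)] for i in I])
            for I in combinations(range(m), k)}

v1, v2 = [1, 0, 2], [0, 1, -1]
# (e_1 + 2e_3) ∧ (e_2 − e_3) = e_12 − e_13 − 2 e_23:
assert wedge_coords([v1, v2], 3) == {(0, 1): 1, (0, 2): -1, (1, 2): -2}
# v ∧ v = 0 and antisymmetry:
assert all(c == 0 for c in wedge_coords([v1, v1], 3).values())
assert wedge_coords([v2, v1], 3) == {I: -c for I, c in wedge_coords([v1, v2], 3).items()}
# For k = m there is a single coefficient, recovering the determinant:
assert wedge_coords([[2, 0], [1, 3]], 2) == {(0, 1): 6}
```

For k = m this recovers v_1 ∧ ··· ∧ v_m = det(v^i_j) e_{12···m}, as in the text.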
But even physicists noticed that such vectors and scalars (like v × w and [u, v, w]) transform differently (say under reflection) than ordinary vectors and scalars, and thus refer to them as pseudovectors and pseudoscalars.

For dim V = m, we can use these terms as follows:

• scalars are elements of R = Λ_0 V,
• vectors are elements of V = Λ_1 V,
• pseudovectors are elements of ⋆V = Λ_{m−1} V, and
• pseudoscalars are elements of ⋆R = Λ_m V.

Of course, these are in a sense the easy cases. For these k, any k-vector is simple. We can identify both G_1 V and G_{m−1} V as the unit sphere in V = Λ_1 V ≅ Λ_{m−1} V. For 2 ≤ k ≤ m−2, on the other hand, not all k-vectors are simple, and G_k V has lower dimension than the unit sphere in Λ_k V. Indeed, it can be shown that the set of simple k-vectors (the cone over G_k V) is given as the solution set of a certain system of quadratic equations called the Grassmann–Plücker relations. For instance, Σ a^{ij} e_{ij} ∈ Λ_2 R^4 is a simple 2-vector if and only if

    a^{12} a^{34} − a^{13} a^{24} + a^{14} a^{23} = 0.

This shows that G_2 R^4 is a smooth 4-submanifold of the unit sphere S^5 ⊂ Λ_2 R^4.

If we choose an inner product on V, then thinking about how oriented orthonormal bases for a k-plane and its orthogonal complement fit together, we see that we can identify G_k V = SO(m)/(SO(k) × SO(m−k)). In particular, it is a smooth manifold of dimension k(m−k).

B3. Differential forms

Many textbooks omit discussion of multivectors and consider only the dual spaces. (This is presumably because the abstract definition of tensor powers and then exterior powers as quotient spaces seems difficult.) Recall that vector subspaces and quotient spaces are dual operations, in the sense that if Y ⊂ X is a subspace, then the dual (X/Y)* of the quotient can be naturally identified with a subspace of X*, namely with the annihilator Y° of Y, consisting of those linear functionals on X that vanish on Y:

    (X/Y)* ≅ Y° ⊂ X*.

Using this, we find that

    Λ^k V := (Λ_k V)* ⊂ (V*)^⊗k

is the subspace of those k-linear maps V^k → R that are alternating. (Of course this vanishes if k > m.)

While it is easy to construct the wedge product on multivectors as the image of the tensor product under the quotient map, the dual wedge product on Λ*V requires constructing a map to the alternating subspace. For ω, η ∈ Λ^1 V = V* we set

    ω ∧ η := ω ⊗ η − η ⊗ ω.

More generally, for ω ∈ Λ^k V and η ∈ Λ^ℓ V we use an alternating sum over all permutations σ ∈ Σ_{k+ℓ}:

    (ω ∧ η)(v_1, …, v_{k+ℓ}) := (1/(k! ℓ!)) Σ_σ (sgn σ) ω(v_{σ(1)}, …, v_{σ(k)}) η(v_{σ(k+1)}, …, v_{σ(k+ℓ)}).

The factor is chosen so that if {e_i} is a basis for V and {ω^i} is the dual basis for Λ^1 V = V*, then

    ω^{i_1···i_k} := ω^{i_1} ∧ ··· ∧ ω^{i_k}

is the basis of Λ^k V dual to the basis {e_{i_1···i_k}} for Λ_k V.

Putting these spaces together, we get an anticommutative graded algebra

    Λ*V := ⊕_{k=0}^m Λ^k V.

Again the dimension of each summand is \binom{m}{k}, so the whole algebra has dimension 2^m.

If L: V → W is a linear map, then for each k we get an induced map L*: Λ^k W → Λ^k V defined naturally by

    (L*ω)(v_1, …, v_k) = ω(Lv_1, …, Lv_k).

Of course, we have introduced these ideas in order to apply them to the tangent spaces T_p M of a manifold M^m. We get dual bundles Λ^k TM and Λ_k TM of rank \binom{m}{k}.

Definition B3.1. A (differential) k-form on a manifold M^m is a (smooth) section of the bundle Λ^k TM. We write Ω^k M = Γ(Λ^k TM) for the space of all k-forms, which is a module over C^∞ M = Ω^0 M. Similarly, we write Ω*M = Γ(Λ*TM) = ⊕ Ω^k M for the exterior algebra of M.

If ω ∈ Ω^k M is a k-form, then at each point p ∈ M the value ω_p ∈ Λ^k T_p M is an alternating k-linear form on T_p M, or equivalently a linear functional on Λ_k T_p M. That is, for any k vectors X_1, …, X_k ∈ T_p M we can evaluate

    ω_p(X_1, …, X_k) = ω_p(X_1 ∧ ··· ∧ X_k) ∈ R.

In particular, ω_p naturally takes values on (weighted) k-planes in T_p M; as we have mentioned, k-forms are the natural objects to integrate over k-dimensional submanifolds in M.

If f: M^m → N^n is a smooth map and ω ∈ Ω^k N is a k-form, then we can pull back ω to get a k-form f*ω on M defined by

    (f*ω)_p(X_1, …, X_k) = ω_{f(p)}((D_p f)X_1, …, (D_p f)X_k).

As a special case, if f: M → N is the embedding of a submanifold, then f*ω = ω|_M is the restriction of ω to the submanifold M, in the sense that we consider only the values of ω_p(X_1, …, X_k) for p ∈ M ⊂ N and X_i ∈ T_p M ⊂ T_p N.
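The alternating-sum formula defining ω ∧ η on forms can be tested directly. A sketch under conventions of our own choosing (forms as Python callables taking k vectors; `Fraction` keeps the 1/(k! ℓ!) factor exact):

```python
# Wedge of alternating forms via the sum over S_{k+l}, normalized by 1/(k! l!).
from fractions import Fraction
from itertools import permutations
from math import factorial

def sgn(perm):
    """Sign of a permutation, by counting inversions."""
    s = 1
    for a in range(len(perm)):
        for b in range(a + 1, len(perm)):
            if perm[a] > perm[b]:
                s = -s
    return s

def wedge(omega, k, eta, l):
    """(ω∧η)(v_1..v_{k+l}) = (1/k!l!) Σ_σ sgn(σ) ω(v_σ(1)..v_σ(k)) η(v_σ(k+1)..v_σ(k+l))."""
    def result(*v):                      # expects k + l vectors
        total = Fraction(0)
        for perm in permutations(range(k + l)):
            total += (sgn(perm)
                      * omega(*(v[i] for i in perm[:k]))
                      * eta(*(v[i] for i in perm[k:])))
        return total / (factorial(k) * factorial(l))
    return result

# Dual basis covectors on R^3: ω^i(v) = v^i.
w1 = lambda v: v[0]
w2 = lambda v: v[1]
w3 = lambda v: v[2]
e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)

w12 = wedge(w1, 1, w2, 1)
assert w12(e1, e2) == 1                              # duality: ω^{12}(e_1, e_2) = 1
assert w12(e2, e1) == -1                             # alternating
assert w12((2, 3, 0), (4, 5, 0)) == 2 * 5 - 3 * 4    # a 2×2 minor determinant

# The normalization also gives the expected value on a triple basis product:
w123 = wedge(w12, 2, w3, 1)
assert w123(e1, e2, e3) == 1                         # ω^{123}(e_1, e_2, e_3) = 1
```

The final assertion checks exactly the duality property the text uses to justify the 1/(k! ℓ!) factor.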
Exercise B3.2. Pullback commutes with wedge product, in the sense that

    f*(ω ∧ η) = (f*ω) ∧ (f*η)

for f: M → N and ω, η ∈ Ω*N.

In a coordinate chart (U, φ) we have discussed the coordinate bases {∂_i} and {dx^i} for T_p M and T*_p M, respectively, the pullbacks under φ of the standard bases on R^m. Similarly,

    {dx^{i_1} ∧ ··· ∧ dx^{i_k} : 1 ≤ i_1 < ··· < i_k ≤ m}

forms the standard coordinate basis for k-forms; any ω ∈ Ω^k(M) (or more properly its restriction to U) can be expressed uniquely as

    ω|_U = Σ_{i_1<···<i_k} ω_{i_1···i_k} dx^{i_1} ∧ ··· ∧ dx^{i_k} = Σ_I ω_I dx^I,

where I = (i_1 < ··· < i_k) ranges over increasing multi-indices and dx^I := dx^{i_1} ∧ ··· ∧ dx^{i_k}.

Theorem B4.3. For any manifold M^m, the differential map d: Ω^0 M → Ω^1 M has a unique R-linear extension to an antiderivation d: Ω*M → Ω*M satisfying d^2 = d ∘ d = 0. This antiderivation has degree 1 in the sense that it sends Ω^k M to Ω^{k+1} M; it is called the exterior derivative.

Proof. First suppose g, f^i ∈ C^∞ M, so that g df^1 ∧ ··· ∧ df^k ∈ Ω^k M. The two conditions on d together automatically imply that

    d(g df^1 ∧ ··· ∧ df^k) = dg ∧ df^1 ∧ ··· ∧ df^k ∈ Ω^{k+1} M.

In a coordinate chart (U, φ), of course, every k-form ω can be expressed as a sum of terms of this form. The proposition above shows we can work locally in such a chart. Thus we know the exterior derivative (if it exists) must be given in coordinates by

    d(Σ_I ω_I dx^I) = Σ_I dω_I ∧ dx^I = Σ_{I,i} ∂_i ω_I dx^i ∧ dx^I.
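The coordinate formula for d, and the identity d ∘ d = 0 (which follows from the symmetry of second partial derivatives), can be sanity-checked on forms with polynomial coefficients. A toy sketch with a representation of our own devising: a polynomial on R^m is a dict {exponent tuple: coefficient}, and a k-form is a dict {increasing index tuple I: polynomial ω_I}:

```python
# Toy model of the coordinate formula d(ω_I dx^I) = Σ_i ∂_i ω_I dx^i ∧ dx^I.
# Polynomials on R^m: {exponent tuple: coefficient}; k-forms: {I: polynomial}.

def d_i(poly, i):
    """Partial derivative ∂_i of a polynomial dict (exact, no rounding)."""
    out = {}
    for exp, c in poly.items():
        if exp[i] > 0:
            new = list(exp)
            new[i] -= 1
            key = tuple(new)
            out[key] = out.get(key, 0) + c * exp[i]
    return out

def d(form, m):
    """Exterior derivative of a form dict in coordinates on R^m."""
    out = {}
    for I, poly in form.items():
        for i in range(m):
            if i in I:
                continue                  # dx^i ∧ dx^I = 0 when i ∈ I
            deriv = d_i(poly, i)
            if not deriv:
                continue
            sign = (-1) ** sum(1 for j in I if j < i)   # sort dx^i into dx^I
            J = tuple(sorted(I + (i,)))
            target = out.setdefault(J, {})
            for exp, c in deriv.items():
                target[exp] = target.get(exp, 0) + sign * c
    return out

def is_zero(form):
    return all(c == 0 for poly in form.values() for c in poly.values())

# d of the 0-form f = xy on R^3 is y dx + x dy:
f = {(): {(1, 1, 0): 1}}
assert d(f, 3) == {(0,): {(0, 1, 0): 1}, (1,): {(1, 0, 0): 1}}

# For ω = x²y dx + yz³ dy we get dω = −x² dx∧dy − 3yz² dy∧dz, and d(dω) = 0:
omega = {(0,): {(2, 1, 0): 1}, (1,): {(0, 1, 3): 1}}
assert d(omega, 3) == {(0, 1): {(2, 0, 0): -1}, (1, 2): {(0, 1, 2): -3}}
assert is_zero(d(d(omega, 3), 3))
```

The sign bookkeeping in `d` is exactly the reordering v_{σ(1)} ∧ ··· ∧ v_{σ(k)} = (sgn σ) v_1 ∧ ··· ∧ v_k from Section B2, applied to sort dx^i into the increasing index tuple I.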