
Vector Spaces and Dual Spaces

Let $V$ be a finite-dimensional vector space over $\mathbb{R}$. Suppose $v_1, \dots, v_n$ is a basis for $V$; then the dual vector space to $V$ is denoted $V^* = \{f : V \to \mathbb{R} \mid f \text{ is } \mathbb{R}\text{-linear}\}$. The next question might be: what is a basis for $V^*$? Suppose $\alpha^i \in V^*$ satisfies $\alpha^i(v_j) = \delta^i_j$. We claim that $\alpha^1, \dots, \alpha^n$ is a basis for $V^*$. As $V^*$ is also a vector space over $\mathbb{R}$, suppose $\sum_i c_i \alpha^i = 0$; it then follows that:

$$\Big(\sum_i c_i \alpha^i\Big)(v_j) = c_j = 0 \qquad \forall j$$

Now let $f \in V^*$; then we have by direct computation:

$$f(v) = f\Big(\sum_j c_j v_j\Big) = \sum_j c_j\, f(v_j) = \sum_j \alpha^j(v)\, f(v_j),$$

i.e. $\alpha^1, \dots, \alpha^n$ form a basis for $V^*$, and so $\dim(V) = \dim(V^*) = n$.

Example: Let $V = \mathbb{R}^2$ over the field $\mathbb{R}$, with $e_1 = (1, 0)^T$, $e_2 = (0, 1)^T$, and $\alpha^i : \mathbb{R}^2 \to \mathbb{R}$ defined by $\alpha^i(e_j) = \delta^i_j$. Suppose $v \in \mathbb{R}^2$; then $v = a e_1 + b e_2 \Rightarrow \alpha^1(v) = a$ and $\alpha^2(v) = b$. And so the dual basis elements are precisely the projections onto the $x$- and $y$-axes respectively.
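To make this computable, here is a small numerical sketch (my own illustration, not from the notes): if a basis $v_1, \dots, v_n$ of $\mathbb{R}^n$ is given as the columns of a matrix $B$, then the dual basis functionals $\alpha^i$ are the rows of $B^{-1}$, since $B^{-1}B = I$ encodes exactly $\alpha^i(v_j) = \delta^i_j$.

```python
import numpy as np

# A (hypothetical) basis of R^2 as the columns of B.
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# Rows of B^{-1} are the dual basis functionals alpha^i,
# since (B^{-1} B)[i, j] = alpha^i(v_j) = delta^i_j.
alpha = np.linalg.inv(B)
assert np.allclose(alpha @ B, np.eye(2))

# The coordinates of any v with respect to v_1, v_2 are alpha^i(v).
v = np.array([3.0, 4.0])
print(alpha @ v)  # coefficients c_j with v = sum_j c_j v_j
```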

Multilinear Functions

Let $V$ be a vector space and consider $V \times \cdots \times V$ ($k$ copies). We say $f$ is multilinear if $f$ is linear in each component, i.e. $f(\dots, av + bw, \dots) = a f(\dots, v, \dots) + b f(\dots, w, \dots)$. We say $f$ is $k$-linear, that $f$ is a $k$-tensor, or that $f$ is a $k$-form, and denote by $L_k(V)$ the set of all $k$-linear functions on $V$.

Example: Let $v, w \in \mathbb{R}^n$ and define $\langle \cdot, \cdot \rangle : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ by $\langle v, w \rangle = \sum_j v^j w^j$.

Let $f \in L_k(V)$. We say $f$ is symmetric if for any $\tau \in S_k$: $f(v_{\tau(1)}, \dots, v_{\tau(k)}) = f(v_1, \dots, v_k)$. We say $f$ is alternating if $f(v_{\tau(1)}, \dots, v_{\tau(k)}) = (\operatorname{sgn}\tau)\, f(v_1, \dots, v_k)$.

Example: For an example of a symmetric $2$-linear function, consider the inner product given above.

Example: Let $g : \mathbb{R}^3 \times \mathbb{R}^3 \to \mathbb{R}^3$ be defined by $g(v, w) = v \times w$, the cross product from multivariable calculus; it is bilinear and alternating, since $v \times w = -\, w \times v$.

Example: Let $f = \det : \operatorname{Mat}(n, \mathbb{R}) \to \mathbb{R}$ be defined by $f(A) = \sum_{\tau \in S_n} (\operatorname{sgn}\tau)\, a_{1,\tau(1)} a_{2,\tau(2)} \cdots a_{n,\tau(n)}$. If we let $A = [v_1\ v_2\ \cdots\ v_n]$ then $f \in A_n(\mathbb{R}^n)$.
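As a sanity check (my own sketch, not from the notes), the permutation formula for the determinant can be implemented directly and compared against numpy:

```python
import itertools, math
import numpy as np

def sgn(perm):
    """Sign of a permutation (a tuple of 0-indexed images) via inversion count."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def perm_det(A):
    """det A = sum over tau in S_n of sgn(tau) * a_{1,tau(1)} ... a_{n,tau(n)}."""
    n = len(A)
    return sum(sgn(tau) * math.prod(A[i][tau[i]] for i in range(n))
               for tau in itertools.permutations(range(n)))

A = np.random.rand(4, 4)
assert np.isclose(perm_det(A), np.linalg.det(A))
```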

Actions of Permutations on Functions

Let $f \in L_k(V)$ and $\sigma \in S_k$; then we define $(\sigma f)(v_1, \dots, v_k) = f(v_{\sigma(1)}, \dots, v_{\sigma(k)})$. With this action we have: $f$ is symmetric $\iff \sigma f = f$ for all $\sigma$, and $f$ is alternating $\iff \sigma f = (\operatorname{sgn}\sigma) f$ for all $\sigma$. We now introduce a very useful lemma.

Lemma: If $f \in L_k(V)$ then $Sf = \sum_{\sigma \in S_k} \sigma f$ is symmetric and $Af = \sum_{\sigma \in S_k} (\operatorname{sgn}\sigma)\, \sigma f$ is alternating.

Proof: Let $\tau \in S_k$; then:

$$\tau(Sf) = \tau\Big(\sum_{\sigma \in S_k} \sigma f\Big) = \sum_{\sigma \in S_k} \tau(\sigma f) = \sum_{\sigma \in S_k} (\tau\sigma) f = \sum_{\sigma \in S_k} \sigma f = Sf$$

The last equality comes from the fact that for any $\sigma \in S_k$ we have $\sigma f = \tau(\tau^{-1}\sigma) f$, so as $\sigma$ runs over $S_k$, $\tau\sigma$ runs over $S_k$ as well. For the next result, notice that $\operatorname{sgn}(\tau\sigma) = (\operatorname{sgn}\tau)(\operatorname{sgn}\sigma)$, and hence $\operatorname{sgn}\sigma = (\operatorname{sgn}\tau)(\operatorname{sgn}\tau\sigma)$:

$$\begin{aligned}
\tau(Af) = \tau\Big(\sum_{\sigma \in S_k} (\operatorname{sgn}\sigma)\,\sigma f\Big) = \sum_{\sigma \in S_k} (\operatorname{sgn}\sigma)\,\tau\sigma f &= \sum_{\sigma \in S_k} (\operatorname{sgn}\tau\sigma)(\operatorname{sgn}\tau)\,\tau\sigma f \\
&= (\operatorname{sgn}\tau) \sum_{\sigma \in S_k} (\operatorname{sgn}\tau\sigma)\,\tau\sigma f \\
&= (\operatorname{sgn}\tau) \sum_{\sigma \in S_k} (\operatorname{sgn}\sigma)\,\sigma f \\
&= (\operatorname{sgn}\tau)\, Af
\end{aligned}$$

Example: Let $f \in L_3(V)$ and suppose $v_1, v_2, v_3 \in V$. Using cycle notation for permutations, we have $S_3 = \{(1), (12), (13), (23), (123), (132)\}$. All of the transpositions, i.e. cycles of length two, have sign $-1$ by definition. To determine the others, note that $(123) = (13)(12)$ and $(132) = (12)(13)$, i.e. each is a product of two transpositions and so has sign $+1$.
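The operators $S$ and $A$ translate directly into code; a hedged Python sketch (my own illustration) applies them to a generic, non-symmetric bilinear function and checks the conclusions of the lemma:

```python
import itertools
import numpy as np

def sgn(perm):
    # Sign of a permutation (0-indexed tuple) via inversion count.
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def S(f, k):
    """(Sf)(v_1,...,v_k) = sum over sigma of f(v_sigma(1),...,v_sigma(k))."""
    return lambda *v: sum(f(*(v[s] for s in sigma))
                          for sigma in itertools.permutations(range(k)))

def A(f, k):
    """(Af)(v_1,...,v_k) = sum over sigma of sgn(sigma) f(v_sigma(1),...)."""
    return lambda *v: sum(sgn(sigma) * f(*(v[s] for s in sigma))
                          for sigma in itertools.permutations(range(k)))

# A generic (non-symmetric) bilinear function on R^2 as a test case.
M = np.array([[1.0, 2.0], [3.0, 4.0]])
f = lambda v, w: v @ M @ w

v, w = np.random.rand(2), np.random.rand(2)
Af, Sf = A(f, 2), S(f, 2)
assert np.isclose(Af(v, w), -Af(w, v))  # Af is alternating
assert np.isclose(Sf(v, w), Sf(w, v))   # Sf is symmetric
```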


Let $f \in L_k(V)$ and $g \in L_j(V)$; then we define their tensor product:

$$f \otimes g : (v_1, \dots, v_k, v_{k+1}, \dots, v_{k+j}) \mapsto f(v_1, \dots, v_k)\, g(v_{k+1}, \dots, v_{k+j})$$

The juxtaposition above denotes the standard multiplication in $\mathbb{R}$, which makes sense due to the fact that $f, g$ are functionals. In this case, we can think of the tensor product $\otimes$ as a bilinear map:

$$\otimes : L_k(V) \times L_j(V) \to L_{k+j}(V), \qquad \otimes(f, g) = f \otimes g$$

The operation above can be shown to be associative. The next example will show that we can express the inner product on a vector space as a linear combination of tensor products.

Example: We will take the special case $\langle \cdot, \cdot \rangle : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$. If we take $e_1, \dots, e_n$ to be the standard basis and set $\langle e_i, e_j \rangle = g_{ij}$, then we have:

$$\langle v, w \rangle = \Big\langle \sum_i v^i e_i,\ \sum_j w^j e_j \Big\rangle = \sum_{i,j} v^i w^j \langle e_i, e_j \rangle = \sum_{i,j} v^i w^j g_{ij} = \sum_{i,j} g_{ij}\, \alpha^i(v)\alpha^j(w) = \sum_{i,j} g_{ij}\, (\alpha^i \otimes \alpha^j)(v, w)$$

In the above we have $\alpha^i(v) = v^i$ and $\alpha^j(w) = w^j$ by our earlier comment that the maps in the dual basis are the coordinate functions with respect to the basis.
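The tensor product definition above also translates directly into code; a minimal Python sketch (my own, using $1$-tensors on $\mathbb{R}^3$ as the test case):

```python
import numpy as np

def tensor(f, k, g, j):
    """(f ⊗ g)(v_1,...,v_{k+j}) = f(v_1,...,v_k) * g(v_{k+1},...,v_{k+j})."""
    return lambda *v: f(*v[:k]) * g(*v[k:])

# Two 1-tensors (covectors) on R^3, represented by fixed coefficient vectors.
a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, -1.0])
f = lambda v: a @ v          # f in L_1(R^3)
g = lambda w: b @ w          # g in L_1(R^3)

fg = tensor(f, 1, g, 1)      # f ⊗ g in L_2(R^3)
v, w = np.random.rand(3), np.random.rand(3)
assert np.isclose(fg(v, w), (a @ v) * (b @ w))
```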

The Wedge Product

Let $f \in A_k(V)$ and $g \in A_l(V)$; then we define their exterior product or wedge product:

$$f \wedge g = \frac{1}{k!\, l!}\, A(f \otimes g)$$

In coordinates, if we let $v_1, \dots, v_{k+l} \in V$, then:

$$(f \wedge g)(v_1, \dots, v_{k+l}) = \frac{1}{k!\, l!} \sum_{\sigma \in S_{k+l}} (\operatorname{sgn}\sigma)\, f(v_{\sigma(1)}, \dots, v_{\sigma(k)}) \cdot g(v_{\sigma(k+1)}, \dots, v_{\sigma(k+l)})$$

The division by $k!\, l!$ comes from the fact that in $S_{k+l}$ there are permutations $\sigma$ with $\sigma(j) = j$ for all $j > k$, i.e. that only shuffle the first $k$ arguments. Since $f$ is alternating, each such $\sigma$ contributes $(\operatorname{sgn}\sigma)\, f(v_{\sigma(1)}, \dots, v_{\sigma(k)}) \cdot g(v_{k+1}, \dots, v_{k+l}) = f(v_1, \dots, v_k) \cdot g(v_{k+1}, \dots, v_{k+l})$, and since $|S_k| = k!$, each value is repeated $k!$ times. Similarly, the permutations $\tau \in S_{k+l}$ with $\tau(j) = j$ for all $j \leq k$ repeat each value $l!$ times. Our last note is that, by construction, $f \wedge g$ is alternating.
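Here is a hedged Python sketch (my own) of $f \wedge g = \frac{1}{k!\,l!} A(f \otimes g)$; for two $1$-covectors $\alpha, \beta$ it should reproduce $(\alpha \wedge \beta)(v, w) = \alpha(v)\beta(w) - \alpha(w)\beta(v)$, the $2 \times 2$ determinant from the property proved below:

```python
import itertools, math
import numpy as np

def sgn(perm):
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def wedge(f, k, g, l):
    """(f ∧ g) = (1 / k! l!) * A(f ⊗ g) on k+l vector arguments."""
    def fg(*v):  # the tensor product f ⊗ g
        return f(*v[:k]) * g(*v[k:])
    def w(*v):
        total = sum(sgn(s) * fg(*(v[i] for i in s))
                    for s in itertools.permutations(range(k + l)))
        return total / (math.factorial(k) * math.factorial(l))
    return w

# Two 1-covectors on R^2; their wedge is a 2x2 determinant.
a, b = np.array([1.0, 2.0]), np.array([3.0, -1.0])
alpha = lambda v: a @ v
beta = lambda w: b @ w

ab = wedge(alpha, 1, beta, 1)
v, w = np.random.rand(2), np.random.rand(2)
assert np.isclose(ab(v, w), alpha(v) * beta(w) - alpha(w) * beta(v))
```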

Properties of Wedge Product

We will prove a few of these properties; however, some are a bit messy and will be reserved for $k$-forms on tangent spaces.

• If $f \in A_k(V)$, $g \in A_l(V)$ then $f \wedge g = (-1)^{kl}\, g \wedge f$

• If $f \in A_{2k+1}(V)$ (odd degree) then $f \wedge f = -f \wedge f$ (by the above), and so $f \wedge f = 0$.
• If $f \in A_k(V)$, $g \in A_l(V)$, $h \in A_m(V)$ then $(f \wedge g) \wedge h = f \wedge (g \wedge h) = \frac{1}{k!\, l!\, m!}\, A(f \otimes g \otimes h)$
• If $\alpha^1, \dots, \alpha^k \in L_1(V) = V^*$ and $v_1, \dots, v_k \in V$ then:

$$(\alpha^1 \wedge \cdots \wedge \alpha^k)(v_1, \dots, v_k) = \sum_{\sigma \in S_k} (\operatorname{sgn}\sigma)\, \alpha^1(v_{\sigma(1)})\, \alpha^2(v_{\sigma(2)}) \cdots \alpha^k(v_{\sigma(k)}) = \det[\alpha^i(v_j)]$$

Multi-Index Notation

Let $e_1, \dots, e_n$ be a basis for $V$ and consider $f \in L_2(V)$. Then $f(v, w) = \sum_{i,j} v^i w^j f(e_i, e_j)$, so $f$ is determined by its values on the basis vectors. Since we will only be considering forms in $A_k(V)$, we make another observation:

$$\sum_{i,j} v^i w^j f(e_i, e_j) = \sum_{j=1}^{n} \sum_{i=1}^{n} v^i w^j f(e_i, e_j)$$

Whenever $i > j$ we have $f(e_i, e_j) = -f(e_j, e_i)$, and $f(e_i, e_i) = 0$. Therefore we can get the same sum if we sum over $1 \leq i < j \leq n$: the only thing we are doing is using the alternating property of $f$ to pair up summands, $v^i w^j f(e_i, e_j) + v^j w^i f(e_j, e_i) = (v^i w^j - v^j w^i)\, f(e_i, e_j)$, still keeping all of them. Explicitly, for $f \in A_2(V)$:

$$f(v, w) = \sum_{1 \leq i < j \leq n} f(e_i, e_j)\, (\alpha^i \wedge \alpha^j)(v, w)$$

We introduce the multi-index notation $I = (i_1, \dots, i_k)$, writing $e_I = (e_{i_1}, \dots, e_{i_k})$ and $\alpha^I = \alpha^{i_1} \wedge \cdots \wedge \alpha^{i_k}$. By the above, since $f \in A_k(V)$, it is enough to consider strictly increasing multi-indices $i_1 < \cdots < i_k$.

Remark: If we define $A_*(V) = \bigoplus_{k=0}^{n} A_k(V)$ then $(A_*(V), \wedge)$ is an anti-commutative graded algebra, called the exterior algebra or the Grassmann algebra of multicovectors.

Basis for k-covectors

Let $e_1, \dots, e_n$ be a basis for $V$ and $\alpha^1, \dots, \alpha^n$ be its dual basis in $V^*$. Consider strictly increasing multi-indices $I = (1 \leq i_1 < \cdots < i_k \leq n)$ and $J = (1 \leq j_1 < \cdots < j_k \leq n)$; then:

$$\alpha^I(e_J) = (\alpha^{i_1} \wedge \cdots \wedge \alpha^{i_k})(e_{j_1}, \dots, e_{j_k}) = \det[\alpha^{i_r}(e_{j_s})]$$

If $I = J$ then $[\alpha^{i_r}(e_{j_s})]$ is the identity matrix, and so the determinant is $1$. If $I \neq J$, we compare the pairs $(i_s, j_s)$ until the first $r$ with $i_r \neq j_r$. Without loss of generality suppose $i_r < j_r$; then $i_r \neq j_r, j_{r+1}, \dots, j_k$ (these are all larger), and we also have $i_r \neq j_1, \dots, j_{r-1}$ since we have equality of pairs until $r$ and the indices are strictly increasing. Hence $\det[\alpha^{i_r}(e_{j_s})] = 0$, since the $r$-th row is all zeros.

We will now show that the alternating $k$-linear functions $\alpha^I$, for strictly increasing $I$, form a basis for $A_k(V)$. Since $A_k(V)$ is also a vector space over $\mathbb{R}$, suppose we have:

$$\sum_I a_I \alpha^I = 0 \implies \Big(\sum_I a_I \alpha^I\Big)(e_J) = a_J = 0 \quad \forall J$$

The above shows that the $\alpha^I$ are linearly independent. To show that they span $A_k(V)$, let $f \in A_k(V)$. We claim $f = \sum_I f(e_I)\, \alpha^I$; this consideration comes from when we showed the one-forms spanned $V^*$. Define $g = \sum_I f(e_I)\, \alpha^I$; then $g(e_J) = \sum_I f(e_I)\, \alpha^I(e_J) = f(e_J)$ for every strictly increasing $J$, and since $f$ and $g$ are alternating and agree on all such $e_J$, we get $f = g$.

Tangent Vectors I

The tangent space to $\mathbb{R}^n$ at $p$ is defined as $T_p\mathbb{R}^n = \{[p, \vec{v}] : \vec{v} \in \mathbb{R}^n\}$. We will denote $v_p = [p, \vec{v}]$, and we say $v_p = w_q \iff p = q$ and $\vec{v} = \vec{w}$, i.e. we do make a distinction between $T_p\mathbb{R}^n$ and $T_q\mathbb{R}^n$ for $p \neq q$. Naturally $T_p\mathbb{R}^n$ becomes an $\mathbb{R}$-vector space if we define $\lambda[p, \vec{v}] = [p, \lambda\vec{v}]$ and $[p, \vec{w}] + [p, \vec{v}] = [p, \vec{w} + \vec{v}]$. Lastly, we get a nice $\mathbb{R}$-linear isomorphism between $T_p\mathbb{R}^n$ and $\mathbb{R}^n$ defined by $g : [p, \vec{v}] \mapsto \vec{v}$.

Directional Derivatives

Let $f : \mathbb{R}^n \to \mathbb{R}$ be a smooth map and $p, \vec{v} \in \mathbb{R}^n$. Then $D_{\vec{v}} f = \frac{d}{dt}\big|_{t=0} f(p + t\vec{v})$ is the derivative of $f$ in the direction of $\vec{v}$ at $p$. If we let $x^1, \dots, x^n$ denote the coordinate functions on $\mathbb{R}^n$, then by the multivariable chain rule we have:

$$D_{\vec{v}} f = \nabla f|_p \cdot \vec{v} = \sum_j v^j \frac{\partial f}{\partial x^j}\Big|_p \qquad \text{where } \vec{v} = \langle v^1, \dots, v^n \rangle$$

From this definition, we can think of $D_{\vec{v}}$ as an operator itself, i.e. $D_{\vec{v}} : C^\infty_p(U) \to \mathbb{R}$ defined by:

$$f \mapsto \sum_j v^j \frac{\partial}{\partial x^j}\Big|_p (f) := \sum_j v^j \frac{\partial f}{\partial x^j}\Big|_p$$

The operator $D_{\vec{v}}$ is linear, since if we let $f, g \in C^\infty_p(U)$ then:

$$\begin{aligned}
D_{\vec{v}}(f + g) = \frac{d}{dt}\Big|_{t=0} (f + g)(p + t\vec{v}) &= \frac{d}{dt}\Big|_{t=0} \big[f(p + t\vec{v}) + g(p + t\vec{v})\big] \\
&= \frac{d}{dt}\Big|_{t=0} f(p + t\vec{v}) + \frac{d}{dt}\Big|_{t=0} g(p + t\vec{v}) \\
&= D_{\vec{v}} f + D_{\vec{v}} g
\end{aligned}$$

We also have that $D_{\vec{v}}$ obeys the Leibniz rule ("product rule"), since given $f, g$ as above:

$$\begin{aligned}
D_{\vec{v}}(fg) = \frac{d}{dt}\Big|_{t=0} (fg)(p + t\vec{v}) &= \frac{d}{dt}\Big|_{t=0} \big[f(p + t\vec{v}) \cdot g(p + t\vec{v})\big] \\
&= g(p)\, \frac{d}{dt}\Big|_{t=0} f(p + t\vec{v}) + f(p)\, \frac{d}{dt}\Big|_{t=0} g(p + t\vec{v}) \\
&= g(p)\, D_{\vec{v}} f + f(p)\, D_{\vec{v}} g
\end{aligned}$$
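Numerically, the limit definition of $D_{\vec{v}}$ agrees with the chain-rule formula $\nabla f|_p \cdot \vec{v}$; a quick sketch (my own, with a hypothetical test function $f$):

```python
import numpy as np

def directional_derivative(f, p, v, h=1e-6):
    """D_v f at p via a central difference of t -> f(p + t v) at t = 0."""
    return (f(p + h * v) - f(p - h * v)) / (2 * h)

# Hypothetical test function f(x, y) = x^2 y + sin(y), with known gradient.
f = lambda q: q[0] ** 2 * q[1] + np.sin(q[1])
grad_f = lambda q: np.array([2 * q[0] * q[1], q[0] ** 2 + np.cos(q[1])])

p = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])

# D_v f = grad(f)|_p . v, matching the chain-rule formula above.
assert np.isclose(directional_derivative(f, p, v), grad_f(p) @ v)
```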

Let $\mathcal{D}_p = \{X : X \in \operatorname{Hom}_{\mathbb{R}}(C^\infty_p(U), \mathbb{R}) \text{ and } X \text{ obeys the Leibniz rule}\}$, the derivations at $p$. We have a nice map $\mu : T_p\mathbb{R}^n \to \mathcal{D}_p$ defined by:

$$\mu(\vec{v}_p) = D_{\vec{v}_p} = \sum_j v^j \frac{\partial}{\partial x^j}\Big|_p$$

We claim that $\mu$ is a vector-space isomorphism. Since $\mu$ is linear, to show it is injective we show that its kernel is trivial: if $\mu(\vec{v}_p) = 0$, then applying it to the coordinate functions gives $0 = D_{\vec{v}_p}(x^i) = v^i$ for each $i$, so $\vec{v}_p = 0$. (Surjectivity needs a Taylor-expansion argument; come back to this.)

Note that if we let $e_1, \dots, e_n$ be the standard basis for $\mathbb{R}^n$, then:

$$\mu(e_i(p)) = D_{e_i(p)} = \frac{\partial}{\partial x^i}\Big|_p =: \partial^i|_p$$

Therefore, from now on we will refer to tangent vectors as derivations and make the following identification:

$$\vec{v}_p \equiv \sum_j v^j \frac{\partial}{\partial x^j}\Big|_p$$

We claim that $\{\partial^j|_p : j = 1, \dots, n\}$ forms a basis for $T_p\mathbb{R}^n$. Suppose that we have:

$$\sum_j \lambda_j\, \partial^j|_p = 0 \iff \sum_j \lambda_j \frac{\partial}{\partial x^j}\Big|_p (x^i) = 0 \iff \sum_j \lambda_j\, \delta^i_j = 0 \iff \lambda_i = 0 \quad \forall i$$

From now on we will write $\vec{v}_p = v$. To show that they span, let $v \in T_p\mathbb{R}^n$; then by definition we have:

$$v = \sum_j v^j \frac{\partial}{\partial x^j}\Big|_p$$

Abstract Manifolds

The typical definition of a topological manifold $M$ requires only that it is second-countable, Hausdorff, and locally homeomorphic to $\mathbb{R}^d$ for some $d \in \mathbb{Z}_{\geq 0}$. However, we will require even more structure, a differentiable structure. We say $(M^d, \mathcal{A})$ is a differentiable manifold if:

• $\exists \mathcal{A} = \{(U_i, \phi_i) : U_i \subset M^d,\ i \in I\}$ where each $\phi_i : U_i \to \mathbb{R}^d$ is a $C^k$-diffeomorphism onto its image and $\bigcup_i U_i = M^d$

• If $(U_i, \phi_i), (U_j, \phi_j) \in \mathcal{A}$ then the overlap maps $\phi_i \circ \phi_j^{-1}$ and $\phi_j \circ \phi_i^{-1}$ are $C^k$-diffeomorphisms
• If $\mathcal{A}'$ is any other such atlas for $M^d$ then $\mathcal{A}' \subset \mathcal{A}$ (i.e. $\mathcal{A}$ is maximal)

Tangent Vectors II

Let $p \in M^d$ and let $(U, \psi) = (U, x^1, \dots, x^d)$ be a chart about $p$. If we take $r^1, \dots, r^d$ to be the coordinate functions on $\mathbb{R}^d$, then $x^i = r^i \circ \psi$. Suppose $f \in C^\infty_p(U)$ is a germ at $p$, and define:

$$\frac{\partial}{\partial x^j}\Big|_p (f) := \frac{\partial}{\partial r^j}\Big|_{\psi(p)} (f \circ \psi^{-1})$$

We will show that $\partial^j|_p$ is a derivation at $p$, i.e. a tangent vector. From there we will show that $\{\partial^j|_p : j = 1, \dots, d\}$ is a basis for $T_pM$. Linearity is fine, so we will just show that $\partial^j|_p$ obeys the Leibniz rule. Let $f, g$ be germs at $p \in U$; then we have:

$$\begin{aligned}
\frac{\partial}{\partial x^j}\Big|_p (fg) = \frac{\partial}{\partial r^j}\Big|_{\psi(p)} (fg \circ \psi^{-1}) &= \frac{\partial}{\partial r^j}\Big|_{\psi(p)} \big[(f \circ \psi^{-1}) \cdot (g \circ \psi^{-1})\big] \\
&= g(p)\, \frac{\partial}{\partial r^j}\Big|_{\psi(p)} (f \circ \psi^{-1}) + f(p)\, \frac{\partial}{\partial r^j}\Big|_{\psi(p)} (g \circ \psi^{-1}) \\
&= g(p)\, \frac{\partial}{\partial x^j}\Big|_p (f) + f(p)\, \frac{\partial}{\partial x^j}\Big|_p (g)
\end{aligned}$$
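To make the chart definition concrete, here is a hedged Python sketch (my own) using polar coordinates $\psi = (r, \theta)$ on a patch of $\mathbb{R}^2 \setminus \{0\}$ as the chart: the coordinate derivative $\frac{\partial}{\partial x^j}\big|_p(f)$ is computed by differentiating $f \circ \psi^{-1}$ in the $j$-th slot at $\psi(p)$.

```python
import numpy as np

# Chart: psi(x, y) = (r, theta); inverse: psi_inv(r, theta) = (r cos t, r sin t).
psi = lambda q: np.array([np.hypot(q[0], q[1]), np.arctan2(q[1], q[0])])
psi_inv = lambda c: np.array([c[0] * np.cos(c[1]), c[0] * np.sin(c[1])])

def coord_partial(f, psi, psi_inv, p, j, h=1e-6):
    """(d/dx^j)|_p f := (d/dr^j)|_psi(p) (f o psi^{-1}), via central differences."""
    c = psi(p)
    e = np.zeros_like(c); e[j] = h
    return (f(psi_inv(c + e)) - f(psi_inv(c - e))) / (2 * h)

# Hypothetical germ f(x, y) = x^2 + y^2 = r^2, so df/dr = 2r and df/dtheta = 0.
f = lambda q: q[0] ** 2 + q[1] ** 2
p = np.array([1.0, 1.0])
r = np.hypot(p[0], p[1])
assert np.isclose(coord_partial(f, psi, psi_inv, p, 0), 2 * r)           # d/dr
assert np.isclose(coord_partial(f, psi, psi_inv, p, 1), 0.0, atol=1e-5)  # d/dtheta
```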

Push-Forward Map

Let $X_p \in T_pM$ and $F : M \to N$ a map between smooth manifolds $M, N$. Here smooth manifold means that the overlap maps $\phi_i \circ \phi_j^{-1}$ are $C^\infty$ instead of just $k$-times differentiable. We define the push-forward map (or differential) $F_{*,p} : T_pM \to T_{F(p)}N$ by:

$$(F_{*,p} X_p)(f) = X_p(f \circ F) \qquad \forall f \in C^\infty_{F(p)}(N)$$

To show that $F_{*,p}$ actually maps tangent vectors to tangent vectors, we need to show $F_{*,p} X_p \in T_{F(p)}N$, i.e. that it is a derivation at $F(p)$. We will only show the Leibniz property, since linearity is easy. Let $g, h \in C^\infty_{F(p)}(N)$; then we have:

$$\begin{aligned}
[F_{*,p}(X_p)](gh) = X_p(gh \circ F) &= X_p\big((g \circ F) \cdot (h \circ F)\big) \\
&= X_p(g \circ F) \cdot h(F(p)) + g(F(p)) \cdot X_p(h \circ F) \\
&= [F_{*,p}(X_p)](g) \cdot h(F(p)) + g(F(p)) \cdot [F_{*,p}(X_p)](h)
\end{aligned}$$
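In coordinates, for a smooth map $F : \mathbb{R}^m \to \mathbb{R}^n$ the push-forward of $v$ at $p$ is the Jacobian of $F$ acting on $v$, since $X_p(f \circ F) = \nabla f|_{F(p)} \cdot J_F(p)\, v$. A hedged numerical sketch (my own, with a hypothetical map $F$) of the defining property:

```python
import numpy as np

def ddir(f, p, v, h=1e-6):
    """Directional derivative of f at p along v (central difference)."""
    return (f(p + h * v) - f(p - h * v)) / (2 * h)

# Hypothetical smooth map F : R^2 -> R^2 and germ f at F(p).
F = lambda q: np.array([q[0] * q[1], q[0] + np.sin(q[1])])
f = lambda q: q[0] ** 2 - 3.0 * q[1]

p = np.array([1.0, 2.0])
v = np.array([0.5, -1.0])

# Push-forward in coordinates: F_{*,p} v = J_F(p) v, computed here as the
# derivative of t -> F(p + t v) at t = 0, one output slot at a time.
Fv = ddir(F, p, v)

# Defining property: (F_{*,p} X_p)(f) = X_p(f o F).
lhs = ddir(f, F(p), Fv)              # pushed-forward vector applied to f
rhs = ddir(lambda q: f(F(q)), p, v)  # X_p applied to f o F
assert np.isclose(lhs, rhs)
```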

Basis for Tangent Space

We now transition back to $T_pM^d$. Let $p \in M^d$ and $(U, \phi) = (U, x^1, \dots, x^d)$ be a chart. We claim that $\{\partial^i|_p : i = 1, \dots, d\}$ is a basis for $T_pM^d$, and for that we have the following lemmas.

Lemma: If $\iota : M \to M$ is the identity then $\iota_{*,p} : T_pM \to T_{\iota(p)=p}M$ is also the identity.

Proof. Suppose $\iota : M \to M$ is the identity map. Let $X_p \in T_pM$ and let $f \in C^\infty_p$ be a germ at $p$; then $f \circ \iota = f$, and so we have:

$$(\iota_{*,p} X_p)(f) = X_p(f \circ \iota) = X_p(f)$$

Lemma: Let $F : M \to N$ and $G : N \to P$ be smooth maps between manifolds. Then $G \circ F : M \to P$ is smooth, and if $p \in M$ then $(G \circ F)_{*,p} = G_{*,F(p)} \circ F_{*,p}$.

Proof. Let $f \in C^\infty_{(G \circ F)(p)}(P)$ be a germ at $(G \circ F)(p)$ and $X_p \in T_pM$ where $p \in M$. Then by direct computation we have:

$$((G \circ F)_{*,p} X_p)(f) = X_p(f \circ G \circ F)$$

$$\begin{aligned}
((G_{*,F(p)} \circ F_{*,p}) X_p)(f) &= [G_{*,F(p)}(F_{*,p} X_p)](f) \\
&= (F_{*,p} X_p)(f \circ G) \\
&= X_p(f \circ G \circ F)
\end{aligned}$$

Lemma: If $F : M \to N$ is a diffeomorphism then $F_{*,p} : T_pM \to T_{F(p)}N$ is an isomorphism.

Proof. Suppose $F : M \to N$ is a diffeomorphism; then $F \circ F^{-1} = \operatorname{id}_N$ and $F^{-1} \circ F = \operatorname{id}_M$. By the chain rule we have $(\operatorname{id}_N)_{*,F(p)} = (F \circ F^{-1})_{*,F(p)} = F_{*,p} \circ (F^{-1})_{*,F(p)}$ and $(\operatorname{id}_M)_{*,p} = (F^{-1} \circ F)_{*,p} = (F^{-1})_{*,F(p)} \circ F_{*,p}$. Hence both composites are the identity, on $T_{F(p)}N$ and $T_pM$ respectively, i.e. $F_{*,p}$ is an isomorphism with inverse $(F^{-1})_{*,F(p)}$.

Now since $\phi^{-1} : \phi(U) \to U$ is a diffeomorphism, $(\phi^{-1})_{*,\phi(p)} : T_{\phi(p)}\phi(U) \to T_pU$ is an isomorphism of vector spaces, i.e. it maps basis vectors to basis vectors. If we let $r^1, \dots, r^d$ be the standard coordinate functions on $\mathbb{R}^d$, then $\{\partial/\partial r^j|_{\phi(p)} : j = 1, \dots, d\}$ is a basis for $T_{\phi(p)}\phi(U) = T_{\phi(p)}\mathbb{R}^d$. It follows that $\{\partial^j|_p : j = 1, \dots, d\}$ forms a basis for $T_pU$, since:

$$\Big[(\phi^{-1})_{*,\phi(p)} \frac{\partial}{\partial r^j}\Big|_{\phi(p)}\Big](f) = \frac{\partial}{\partial r^j}\Big|_{\phi(p)} (f \circ \phi^{-1}) := \frac{\partial}{\partial x^j}\Big|_p (f) \qquad \forall f \in C^\infty_p(U)$$

Differential 1-forms and the Differential

Definition: A differential $1$-form $\omega$ is, at each point $p \in M$, an $\mathbb{R}$-valued linear map $\omega_p$ on $T_pM$. That is, for all $v, w \in T_pM$ and $a, b \in \mathbb{R}$ we have $\omega_p(av + bw) = a\,\omega_p(v) + b\,\omega_p(w)$.

Let $f$ be any smooth $\mathbb{R}$-valued function on a manifold $M$. Define the differential $1$-form $df$ by the property $(df)_p(X_p) = X_p(f)$ for $X_p \in T_pM$. Suppose $(U, \psi) = (U, x^1, \dots, x^d)$ is a chart about $p$; then the $\partial^j|_p$'s form a basis, but even more, we have:

$$(dx^i)_p\Big(\frac{\partial}{\partial x^j}\Big|_p\Big) = \frac{\partial x^i}{\partial x^j}\Big|_p = \delta^i_j$$

Thus $(dx^i)_p : T_pM \to \mathbb{R}$ is linear, i.e. it is an element of the dual vector space to $T_pM$, which we will denote $T_p^*M$. From this we have that if $X_p \in T_pM$ then:

$$X_p = \sum_j b^j \frac{\partial}{\partial x^j}\Big|_p \implies (dx^i)_p(X_p) = \sum_j b^j (dx^i)_p\Big(\frac{\partial}{\partial x^j}\Big|_p\Big) = b^i$$

This is all too familiar! The $(dx^i)_p$'s are acting exactly as the $\alpha^i$'s that were the dual basis corresponding to a basis $e_1, \dots, e_n$ of a vector space $V$. Thus if we write $\partial^j|_p = e_j$, $(dx^i)_p = \alpha^i$, and $V = T_pM$, we get all of the results from previous weeks. In particular, $\{(dx^i)_p : i = 1, \dots, d\}$ is a basis for $T_p^*M$! We can now classify all differential $1$-forms.

1 P j j j Definition: A differential 1-form ω ∈ Ω (M) s.t ω = j a dx where each a is smooth i.e we require that ∗ the scalar field of Tp M be smooth function and we define (fω)(p) = f(p)ωp.

Lemma: Let $f$ be a smooth $\mathbb{R}$-valued function on a manifold $M$. Then $(df)_p = \sum_j \partial^j|_p(f)\, (dx^j)_p$.

Proof. Since $df \in \Omega^1(M)$, we can write $(df)_p = \sum_j a^j (dx^j)_p$. Now we apply a basis vector of $T_pM$ to both sides:

$$\frac{\partial f}{\partial x^i}\Big|_p = (df)_p\Big(\frac{\partial}{\partial x^i}\Big|_p\Big) = \sum_j a^j (dx^j)_p\Big(\frac{\partial}{\partial x^i}\Big|_p\Big) = a^i$$
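As a symbolic check of the lemma (my own sketch using sympy), the components of $df$ on $\mathbb{R}^2$ are exactly the partial derivatives, and pairing $df$ with a tangent vector $X$ returns $X(f)$:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Hypothetical smooth function f on R^2.
f = x**2 * sp.sin(y) + y**3

# Components of df = (df/dx) dx + (df/dy) dy.
df_components = [sp.diff(f, x), sp.diff(f, y)]
print(df_components)  # [2*x*sin(y), x**2*cos(y) + 3*y**2]

# Pairing with a tangent vector X = b1 d/dx + b2 d/dy gives X(f).
b1, b2 = sp.symbols('b1 b2')
Xf = b1 * sp.diff(f, x) + b2 * sp.diff(f, y)
pairing = df_components[0] * b1 + df_components[1] * b2
assert sp.simplify(Xf - pairing) == 0
```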

Differential k-forms

Now that we’ve observed the identification then it comes without surprise that if (U, x1, ..., xd) is a chart about p ∈ M d then given ω ∈ Ωk(M) we have:

$$\omega = \sum_{1 \leq i_1 < \cdots < i_k \leq d} a_{i_1 \cdots i_k}\, dx^{i_1} \wedge \cdots \wedge dx^{i_k} := \sum_I a_I\, dx^I$$

where each coefficient function $a_I$ is smooth.

The Wedge Product

Given $\omega \in \Omega^k(M)$ and $\tau \in \Omega^l(M)$ we define:

$$\omega \wedge \tau = \Big(\sum_I a_I\, dx^I\Big) \wedge \Big(\sum_J b_J\, dx^J\Big) := \sum_{I,J} a_I b_J\, dx^I \wedge dx^J \in \Omega^{k+l}(M)$$

The Exterior Derivative

Let $\omega = \sum_J a_J\, dx^J \in \Omega^k(M)$; then we define $d\omega = \sum_J d(a_J) \wedge dx^J \in \Omega^{k+1}(M)$.
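For instance (a hedged sympy sketch, my own), for a $1$-form $\omega = a\,dx + b\,dy$ on $\mathbb{R}^2$ the definition gives $d\omega = da \wedge dx + db \wedge dy = (\partial b/\partial x - \partial a/\partial y)\, dx \wedge dy$, using $dx \wedge dx = dy \wedge dy = 0$ and $dy \wedge dx = -dx \wedge dy$:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Hypothetical coefficients of omega = a dx + b dy on R^2.
a = x * y**2
b = sp.exp(x) + y

# d(omega) = da ^ dx + db ^ dy
#          = (a_x dx + a_y dy) ^ dx + (b_x dx + b_y dy) ^ dy
#          = (b_x - a_y) dx ^ dy,  since dx^dx = dy^dy = 0 and dy^dx = -dx^dy.
d_omega_coeff = sp.diff(b, x) - sp.diff(a, y)
print(d_omega_coeff)  # exp(x) - 2*x*y  (the coefficient of dx ^ dy)
```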

Pull-Back Map

Let $F : M \to N$ be a smooth map between manifolds. Recall that $F_{*,p} : T_pM \to T_{F(p)}N$ was the linear map which we called the differential, characterized by the property that it pushes tangent vectors forward to tangent vectors. We now wish to define an operation that pulls back forms. The map that does the trick is the one given by:

$$(F_{*,p})^* := F^* : T^*_{F(p)}N \to T_p^*M, \qquad \omega \mapsto F^*\omega$$

where $(F^*\omega)(v_1, \dots, v_k) = \omega(F_* v_1, \dots, F_* v_k) \in \mathbb{R}$ for $\omega \in \Omega^k(N)$ and $v_i \in T_pM$.
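As a worked sympy sketch (my own, taking the polar-to-Cartesian map as a hypothetical $F$), the pull-back of a coordinate $1$-form is computed from $F^*(dx^i) = d(x^i \circ F)$; for $F(r, \theta) = (r\cos\theta, r\sin\theta)$ this gives $F^*(dx) = \cos\theta\, dr - r\sin\theta\, d\theta$:

```python
import sympy as sp

r, t = sp.symbols('r theta')

# Hypothetical map F(r, theta) = (r cos(theta), r sin(theta)) : R^2 -> R^2.
Fx = r * sp.cos(t)
Fy = r * sp.sin(t)

# F*(dx) = d(x o F) = (dFx/dr) dr + (dFx/dtheta) dtheta; similarly F*(dy).
pullback_dx = (sp.diff(Fx, r), sp.diff(Fx, t))  # coefficients of (dr, dtheta)
pullback_dy = (sp.diff(Fy, r), sp.diff(Fy, t))

print(pullback_dx)  # (cos(theta), -r*sin(theta))
print(pullback_dy)  # (sin(theta),  r*cos(theta))
```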

Remark: Each of these operations has tons of properties, but keeping the goal in mind, we won't need them, and so I will not add them here.
