NOTES FOR MATH 230A, DIFFERENTIAL GEOMETRY

AARON LANDESMAN

CONTENTS
1. Introduction
2. 9/3/15
2.1. Logistics
2.2. Lecture begins
3. 9/8/15
3.1. Curvature of curves
3.2. Manifolds
3.3. Partitions of Unity
3.4. A is compact
3.5. A = ∪iAi with Ai compact and Ai ⊂ int(Ai+1)
3.6. A is open
3.7. A general
4. 9/10/15
4.1. Partitions of Unity, Hiro's version
4.2. Submersions
5. 9/15/15
5.1. Tangent Spaces
5.2. Return to the submersion theorem
6. 9/17/15
6.1. Completing the submersion theorem
6.2. Lie Brackets
6.3. Constructing the Tangent bundle
7. 9/22/15
7.1. Constructing vector bundles
8. 9/24/15
8.1. Logistics
8.2. Structure groups
8.3. Fiber bundles in general
8.4. Algebraic Prelude to differential forms
8.5. Differential Forms
9. 9/29/15
9.1. Integration
10. 10/1/15
10.1. Review
10.2. Flows and Lie Groups
10.3. Lie Derivatives
11. 10/6/15
11.1. Key theorems to remember from this class, not proven until later today
11.2. Class as usual
12. 10/8/15
12.1. Overview
12.2. Today's class
12.3. Riemannian Geometry on vector bundles
12.4. Connections
13. 10/20/15
13.1. Key theorems for today
13.2. Class time
13.3. Connections
14. 10/22/15
14.1. Class time
14.2. Connections and Riemannian Geometry
15. 10/27/15
15.1. Overview
15.2. Parallel Transport
16. 10/29/15
16.1. Overview
16.2. Connections
16.3. The Fundamental Theorem of Riemannian Geometry
16.4. Geodesics
17. 11/2/15
17.1. Geodesics and coming attractions
17.2. Properties of the exponential map
18. 11/5/15
18.1. Review
18.2. Geodesics and length
19. 11/10/15
19.1. Preliminary questions
19.2. Hopf-Rinow
19.3. Curvature
19.4. Towards some properties and intuition on curvature tensors
20. 11/12/15
20.1. Types of curvatures
20.2. Review of Linear Algebra
20.3. Traces in Riemannian Geometry
20.4. Back to linear algebra
21. 11/17/15
21.1. Plan and Review
21.2. Scalar curvature
21.3. Normal Coordinates
21.4. Hodge Theory
22. 11/19/15
22.1. Questions and Overview
22.2. Gauss' Theorema Egregium
22.3. Sectional Curvature and the Exp map
22.4. Hodge Theory
23. 11/24/15
23.1. Good covers, and finite dimensional cohomology
23.2. Return to Hodge Theory
23.3. Harmonic Forms and Poincare Duality
24. 12/1/15
24.1. Overview, with a twist on the lecturer
24.2. Special Relativity
24.3. The Differential Geometry Set Up
24.4. Toward Maxwell's equations
25. 12/3/15
25.1. Overview
25.2. Principal G-bundles
25.3. Connections and curvature on principal G-bundles
25.4. An Algebraic characterization of connections on principal G-bundles
25.5. Curvature as Integrability

1. INTRODUCTION
Hiro Tanaka taught a course (Math 230a) on Differential Geometry at Harvard in Fall 2015. These are my "live-TeXed" notes from the course. Conventions are as follows: each lecture gets its own "chapter," and appears in the table of contents with the date. Of course, these notes are not a faithful representation of the course, either in the mathematics itself or in the quotes, jokes, and philosophical musings; in particular, the errors are my fault. By the same token, any virtues in the notes are to be credited to the lecturer and not the scribe.¹ Please email corrections to [email protected]

¹This introduction has been adapted from Akhil Mathew's introduction to his notes, with his permission.

2. 9/3/15
2.1. Logistics.
(1) Phil Tynan is the TF, who isn't here.
(2) email: [email protected]
(3) Hiro's office is 341; office hours are Tuesday 1:30-2:30pm and Wednesday 2-3pm.
(4) Phil will have office hours 2-3pm on Thursdays, in offices 536 and 532.
(5) There will be homeworks, once a week; the first homework is due Sept 17.
(6) When homework is graded, we will get a remark from Phil to see Hiro and Phil during office hours. You will not be numerically graded from week to week, but you have to come to them in person, so that we know what is going on.
(7) There will be no midterm, but one take-home final.
Remark 2.1. There are two words in the title of the course: Differential and Geometry. This is not Riemannian geometry, and we'll discuss the difference later. "Differential" connotes calculus. You can ask how to do calculus on shapes like triangles and cubes. To understand calculus, we will learn about manifolds and calculus on manifolds. To understand geometry, we will think of a space together with some structure (possibly some type of metric).
Example 2.2.
(1) Riemannian geometry.
(2) Symplectic geometry - use things like the Hamiltonian to describe how vector spaces evolve.
(3) Complex geometry - generalize complex analysis to shapes you can build with Cn or CW complexes.
(4) Kahler geometry.
(5) Calabi-Yau geometry - study supersymmetric string theory.
2.2. Lecture begins. Consider a curve γ : R → Rn, t ↦ γ(t).
Definition 2.3. For γ a curve, we define the length of γ to be
∫_R |γ′(t)| dt, where |γ′(t)| = √(Σi γi′(t)²) = √⟨γ′, γ′⟩.
Remark 2.4. The inner product from Definition 2.3 should really be thought of as an inner product on Tγ(t)Rn and not on Rn. Even though these objects are isomorphic, they should not be thought of as "the same."
Definition 2.5. Let U ⊂ Rn be an open set. A function f : U → Rm is called
• C0 if it is continuous,
• C1 if it has partial derivatives ∂f/∂xi for i = 1, ..., n which are all C0,
• Cr if it has all partial derivatives of order at most r, which are all C0,
• C∞, or smooth, if f is Cr for all r.
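The length integral in Definition 2.3 can be sanity-checked numerically. The following is a minimal sketch (the circle of radius R and the grid size are my own choices, not from the lecture); a circle of radius R traversed once should have length 2πR.

```python
import numpy as np

# Circle of radius R traversed once: gamma(t) = (R cos t, R sin t), t in [0, 2pi].
# Definition 2.3 says its length is the integral of |gamma'(t)| dt, here 2*pi*R.
R = 3.0
t = np.linspace(0.0, 2.0 * np.pi, 100_001)
gamma = np.stack([R * np.cos(t), R * np.sin(t)], axis=1)

# Approximate the speed |gamma'(t)| by finite differences, then integrate
# with the trapezoidal rule.
speed = np.linalg.norm(np.gradient(gamma, t, axis=0), axis=1)
length = np.sum(0.5 * (speed[1:] + speed[:-1]) * np.diff(t))

print(abs(length - 2.0 * np.pi * R) < 1e-6)  # True
```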

Definition 2.6. Let U ⊂ Rn be an open set. Then, a Riemannian metric on U is a C∞ function g : U → Mn×n(R) (where the matrix at each point represents an inner product on the tangent space there) such that
• g(x) is a symmetric nondegenerate matrix,
• g(x) is positive definite.
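As a quick illustration of Definition 2.6, here is a sketch checking the symmetric and positive definite conditions pointwise; the metric g(x, y) = y⁻²·I on the upper half-plane is my own choice of example, not one from the lecture.

```python
import numpy as np

# A candidate Riemannian metric on U = {(x, y) : y > 0}: g(x, y) = y^(-2) * I.
def g(x, y):
    return np.array([[1.0 / y**2, 0.0],
                     [0.0, 1.0 / y**2]])

# Definition 2.6 asks that g(p) be symmetric and positive definite at each p.
def is_riemannian_at(x, y):
    G = g(x, y)
    symmetric = np.allclose(G, G.T)
    pos_def = np.all(np.linalg.eigvalsh(G) > 0)
    return symmetric and pos_def

print(all(is_riemannian_at(x, y)
          for x in [-1.0, 0.0, 2.0] for y in [0.5, 1.0, 3.0]))  # True
```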

Example 2.7. (1) Set g(x) := In×n for all x. This is the standard Riemannian metric on Rn.
(2) Fix a smooth map f : U → Rm. Since f is C1, it induces a map dfx : TxU ≅ TxRn → Tf(x)Rm. In the "standard basis" for TxRn, we can write dfx := (∂fj/∂xi), with j = 1, ..., m and i = 1, ..., n. If there is a Riemannian metric h on Rm, this induces a bilinear pairing on U: given u, v ∈ TxU, we send them to ⟨u, v⟩ := ⟨dfx(u), dfx(v)⟩. This defines a Riemannian metric on U precisely when dfx is an injection.
Definition 2.8. A C∞ map f : U → Rm is an immersion if dfx is injective for all x ∈ U. The induced Riemannian metric is denoted f∗h and is given by
(f∗h)x(u, v) := h(dfx(u), dfx(v)).
Remark 2.9. Caution: immersions need not be injective. For example, one can send two points to the same point. Alternatively, one can take the universal cover R → S1.
Definition 2.10. Let g be a Riemannian metric on U ⊂ Rn. The volume of (U, g) is
Vol(U, g) := ∫_U √(det g) dx1 ··· dxn.
Next, we describe when we should be able to think of two open sets with a Riemannian metric as equivalent.
Definition 2.11. A map f is a diffeomorphism if f is a bijection, f is C∞, and f−1 is C∞.
Definition 2.12. Fix (U, g) and (V, h), two open sets each with a Riemannian metric. An isometry from (U, g) to (V, h) is a smooth diffeomorphism f : U → V such that f∗h = g.
Remark 2.13. Why is there a square root in the volume function? When one tries to evaluate the volume function, we get two contributions from dfx, so we have to take a square root.
Remark 2.14. What is the connection between giving a matrix and giving an inner product? The function g, viewed as a matrix, defines an inner product via gij := ⟨ei, ej⟩, where ei is the ith standard basis vector; then g(u, v) := uᵗ · g · v.
So far, there is an obvious constraint: we've only been dealing with open sets in Rn. We would like the notion of manifolds, which are more general spaces in which one can do differential geometry. A manifold is a topological manifold with a smooth structure.
Definition 2.15. A topological space X is locally Euclidean if for all x ∈ X, there exist d ≥ 0, d ∈ Z, an open set U ⊂ Rd, and a homeomorphism from U onto an open neighborhood of x in X.

Remark 2.16. Caution: locally Euclidean does not imply Hausdorff. As a counterexample, consider the affine line with a doubled origin.
Definition 2.17. A topological space X is second countable if X admits a countable basis of open sets.

Definition 2.18. A basis for a topological space X is a collection of subsets Vα so that
(1) X = ∪αVα,
(2) for every α, β, one can cover Vα ∩ Vβ by elements of the collection: Vα ∩ Vβ = ∪γVγ.
Warning 2.19. The above Definition 2.18 determines a topology, where the open sets are given by arbitrary unions of elements in the basis. However, if we are given a topology on X to start with, we will also need to require that every open set U ⊂ X can be written as a union of basis elements.
Example 2.20. Euclidean space (Rn) is second countable. To see this, take a countable basis given by balls around all rational points with rational radii. Any subspace of a second countable space is also second countable, by restricting the basis.
Remark 2.21. If X is a topological manifold, every connected component of X will be a locally Euclidean, Hausdorff, second countable space. So, one can define a topological manifold to be something satisfying these three properties.

Definition 2.22. An open cover {Uα} is locally finite if for every x ∈ X, there exists an open subset W ⊂ X containing x such that W ∩ Uα ≠ ∅ for only finitely many α.
Definition 2.23. A space X is paracompact if every open cover admits a locally finite refinement.
Definition 2.24. A topological manifold is a space X so that X is
(1) locally Euclidean,
(2) Hausdorff,
(3) paracompact.
Paracompactness allows you to turn local functions into global ones.

3. 9/8/15
Exercise 3.1. Let γ : R → Rn be an immersion. Show there exists a diffeomorphism φ : R → R such that γ ◦ φ is parameterized by arc length, i.e., |d(γ ◦ φ)/dt| = 1.
Remark 3.2. If you're given a smooth curve in Rn, we have an intuitive idea of what it means, but we can choose various parameterizations. We can choose a parameterization by arc length so that the amount of time traveled is the amount of distance traveled. This exercise looks a lot like a differential equation, which can be solved by the fundamental theorem of calculus.
Solution to exercise: take φ(s) = ∫₀ˢ |dγ/dt|⁻¹ dt. By the chain rule,
(3.1) (d/ds)(γ ◦ φ) = (dγ/dt)(dφ/ds) = (dγ/dt)|dγ/dt|⁻¹,
where the fundamental theorem of calculus is used to calculate the derivative of φ.
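Exercise 3.1 can be checked symbolically in a trivial case. This sketch uses a straight-line curve of my own choosing (constant speed 5), for which the reparameterization is simply φ(s) = s/5, and verifies unit speed.

```python
import sympy as sp

s, t = sp.symbols('s t', real=True)

# gamma(t) = (3t, 4t) is an immersion with constant speed |gamma'(t)| = 5,
# so phi(s) = s/5 should make gamma o phi parameterized by arc length.
gamma = sp.Matrix([3 * t, 4 * t])
phi = s / 5

reparam = gamma.subs(t, phi)
speed = sp.sqrt(sum(c.diff(s) ** 2 for c in reparam))

print(sp.simplify(speed) == 1)  # True: |d(gamma o phi)/ds| = 1
```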

3.1. Curvature of curves.
Definition 3.3. Let γ : R → Rn be an immersion. Define
(3.2) ~T : R → Rn, t ↦ γ̇/|γ̇|.
The curvature vector at γ(t) is defined to be
(3.3) ~κ := d~T/ds = (d~T/dt)/(ds/dt).
Exercise 3.4. (1) Show ~κ ⊥ ~T.
(2) If γ : R → R2 has image a circle of radius R, show |~κ| = 1/R.
(3) If φ : R → R is a diffeomorphism, then ~κ(γ(t)) = ~κ((γ ◦ φ)(s)) at corresponding points.
Solution to exercise:
(1) Consider the function t ↦ ⟨~T(t), ~T(t)⟩. This is a constant function, so its derivative vanishes:
0 = (d/dt)⟨~T(t), ~T(t)⟩ = ⟨(d/dt)~T(t), ~T(t)⟩ + ⟨~T(t), (d/dt)~T(t)⟩ = 2⟨(d/dt)~T(t), ~T(t)⟩.
(2) Choose γ : R → R2, t ↦ R · (cos t, sin t). So ~T = (− sin t, cos t). Then
(3.4) |d~T/dt|/(ds/dt) = 1/(ds/dt) = 1/R,
because the circle is parameterized by t between 0 and 2π while the length of the circle is 2πR.
(3) We use the chain rule. We write the circle in two ways.
Consider a hyperboloid in R3. Say we want to know the curvature of the surface at x. We can define a normal vector to a tangent plane at a point. Given two vectors — the normal vector and a tangent vector at the point — we get a plane; intersecting this plane with the surface, we obtain a curve. Given this curve, we know how to compute the curvature. Then, there are two principal directions in the tangent space, one with minimal curvature and one with maximal curvature. The Gaussian curvature is then the product of the maximal and minimal curvatures. This turns out to be independent of the embedding of the surface.
Remark 3.5. The curvature |~κ(γ(t))| is the inverse radius of the best approximating circle at γ(t).
3.2. Manifolds. Recall the following definitions from the previous class:

Definition 3.6. An open cover {Uα} is locally finite if for every x ∈ X, there exists an open subset W ⊂ X containing x such that W ∩ Uα ≠ ∅ for only finitely many α.

Example 3.7. Say Uα = Bα(0), α ∈ Q, where Bα(0) is the ball about the origin of radius α. Then, {Uα}α∈Q is not a locally finite cover about 0. Similarly, if we only index over the integers, it is still not locally finite.
Definition 3.8. A space X is paracompact if every open cover admits a locally finite refinement, where a refinement is another cover so that each element of the new cover is contained in some element of the original cover.
Definition 3.9. A topological manifold is a space X so that X is
(1) locally Euclidean,
(2) Hausdorff,

(3) paracompact.
Remark 3.10. We still can't do calculus. On the overlap of two open sets, we will need a compatibility condition. We have to check that the derivatives agree on the overlaps. If φV ◦ φU−1, the composition of two chart functions, isn't smooth, there's no way to compare calculus on φU(U) and φV(V).
Definition 3.11. Let X be a topological manifold. Then, a chart on X is a pair (U, φU) where U is open and φU is a homeomorphism onto some open set in Rn, for some n possibly depending on U.
Definition 3.12. A Cr atlas is a collection of charts {(Uα, φα)} so that
(1) {Uα} form a cover,
(2) for all α, β, the function φβ ◦ φα−1, where defined, is Cr.
Definition 3.13. A Cr manifold is a pair (X, A) where X is a topological manifold and A is a Cr atlas on X.
Definition 3.14. Let (X, AX), (Y, AY) be two Ct manifolds. A continuous function f : X → Y is Cr, for r < t, if it is locally Cr. That is, if for all x ∈ X, there are (U, φ) ∈ AX, (V, ψ) ∈ AY so that x ∈ U, f(x) ∈ V, and the function ψ ◦ f ◦ φ−1 is Cr.
Remark 3.15. The existence of such (U, φ), (V, ψ) implies that ψβ ◦ f ◦ φα−1 is Cr for all charts in AX, AY.
Definition 3.16. A function f : (X, AX) → (Y, AY) is called a Cr diffeomorphism if
(1) f is a bijection,
(2) f is Cr,
(3) f−1 is Cr.
Theorem 3.17 (Whitehead). Not every topological manifold admits a C∞ atlas.
Theorem 3.18 (Milnor). If X = S7, then X admits non-diffeomorphic smooth structures.
Theorem 3.19 (Donaldson-Freedman). Say X = R4. Then, X admits uncountably many non-diffeomorphic smooth structures.
Remark 3.20. Define an equivalence relation on the set of possible C∞ atlases on X: say A ∼ A′ if A ∪ A′ is also a C∞ atlas. It's not hard to check this is equivalent to the existence of a diffeomorphism (given by the identity map) between these two structures. Note that given an equivalence class of atlases, there exists a maximal representative, given by taking the union over all atlases in the equivalence class of A. For this reason, one can also define a C∞ manifold to be a topological manifold together with a maximal atlas A.
3.3. Partitions of Unity. Partitions of unity are devices that let us piece together functions on a manifold.

Definition 3.21. A partition of unity of A subordinate to a cover {Uα} is a collection of functions Φ, with U some open set containing A and each φ ∈ Φ a function φ : U → [0, 1], so that
(1) for each x ∈ A there exists an open set V with x ∈ V so that only finitely many φ ∈ Φ are nonzero on V;
(2) we have Σφ∈Φ φ(x) = 1, which makes sense as the sum is a finite sum, by the previous point;
(3) for each φ ∈ Φ, there exists α so that Supp(φ) ⊂ Uα.
Theorem 3.22. Given any set A ⊂ Rn and any open cover {Uα}, a partition of unity on A subordinate to {Uα} exists.
Proof. We prove this by successively tackling more and more complicated types of sets A.

3.4. A is compact. Lemma 3.23. For any open ball B(x, r) there exists a smaller open ball B(x, s) ⊂ B(x, r) and a smooth φ with φ|B(x,s) = 1 and φ|Rn\B(x,r) = 0.

Proof. We can replace B(x, r) and B(x, s) by cubes S = ∏i(ai, bi) ⊂ R = ∏i(ci, di), by choosing s so that B(x, s) ⊂ S ⊂ R ⊂ B(x, r). So, it suffices to prove the lemma for cubes. We have already shown this on problem set 5, problem 4c in the case n = 1: let fi : R → R be a function which is 1 on (ai, bi) and 0 outside of (ci, di). Then, f(x1, ..., xn) = ∏i fi(xi) is the desired function.
In this case (A compact), for each x ∈ A, choose Bx to be an open ball so that there is some Uα with Bx ⊂ Uα, and choose Cx to be a smaller open ball with x ∈ Cx ⊂ Bx, so that there exists a function which takes the value 1 on Cx and 0 outside of Bx. Then, take a finite cover of A by such balls Cx; call the associated functions ψi, with 1 ≤ i ≤ n. Define
φk = ψk / Σi=1..n ψi.
The denominator is positive near A, since every point of A lies in some Cx on which the corresponding ψ equals 1. Observe that Σi=1..n φi = 1 wherever the denominator is nonzero, so the φi sum to 1 there. Additionally, each φi has support contained in the same Uα that ψi does.
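The one-dimensional building block used above (a smooth function equal to 1 on an inner interval and 0 outside an outer one) can be written down explicitly. This is the standard exp(−1/x) construction — a sketch of the kind of function the homework produces, not the exact homework function; the interval choices are mine.

```python
import math

# Smooth bump construction behind Lemma 3.23 (n = 1 case).
def h(x):
    # Smooth on R, zero for x <= 0, positive for x > 0.
    return math.exp(-1.0 / x) if x > 0 else 0.0

def g(x):
    # Smooth, equal to 0 for x <= 0 and to 1 for x >= 1.
    return h(x) / (h(x) + h(1.0 - x))

def bump(x):
    # Smooth, equal to 1 on [-1, 1] and to 0 outside (-2, 2).
    # (abs is fine here: g(2 - |x|) is locally constant near the kink at 0.)
    return g(2.0 - abs(x))

print(bump(0.5) == 1.0 and bump(1.0) == 1.0)  # True: 1 on the inner interval
print(bump(2.0) == 0.0 and bump(3.0) == 0.0)  # True: 0 outside the outer interval
print(0.0 < bump(1.5) < 1.0)                  # True: strictly between on the collar
```

For the n-dimensional cube case, one takes the product of such functions in each coordinate, exactly as in the proof.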

3.5. A = ∪iAi with Ai compact and Ai ⊂ int(Ai+1). Take our given open cover {Uα} of A. Construct {Uαⁱ}, an open cover of Bi = int(Ai+1) \ Ai−2, by defining Uαⁱ = Uα ∩ Bi. Define Ci = Ai \ int(Ai−1). Then, Ci ⊂ Bi is compact. Therefore, we can construct a partition of unity of Ci subordinate to {Uαⁱ}; let it be denoted Φi. Define
σ(x) = Σi∈N Σψ∈Φi ψ(x).
For each ψ, define φ(x) = ψ(x)/σ(x). Note that σ ≠ 0 on some open set containing A, since at each x ∈ A some ψ are strictly positive. Say x ∈ Ai, x ∉ Ai−1. Then, on the domain where σ ≠ 0, there are only finitely many ψ with ψ(x) ≠ 0, since we must have ψ ∈ Φk for k ≤ i + 2, and there are only finitely many such functions in each Φk. Additionally, the φ sum to 1 by construction, because we divided by their sum σ.

3.6. A is open. Construct
Ai = {x ∈ A : d(x, ∂A) ≥ 1/i, |x| ≤ i}.
Observe this gives a cover of A by sets as in the previous case.
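For a concrete instance of this exhaustion (A = (0, 1) ⊂ R, an example of my own), the sets Ai are the closed intervals [1/i, 1 − 1/i] for i ≥ 2; the sketch below checks that they are nested into each other's interiors and that every point of A eventually lies in some Ai.

```python
import math

# For A = (0, 1) in R, A_i = {x in A : d(x, boundary A) >= 1/i, |x| <= i}
# is the closed interval [1/i, 1 - 1/i] for i >= 2.
def A(i):
    return (1.0 / i, 1.0 - 1.0 / i)  # endpoints of the closed interval A_i

def contained_in_interior(i):
    a, b = A(i)
    a_next, b_next = A(i + 1)
    return a_next < a and b < b_next  # A_i lies in int(A_{i+1})

print(all(contained_in_interior(i) for i in range(2, 100)))  # True

# Every x in (0, 1) lies in A_i once 1/i <= min(x, 1 - x).
def first_index(x):
    return max(2, math.ceil(1.0 / min(x, 1.0 - x)))

print(all(A(first_index(x))[0] <= x <= A(first_index(x))[1]
          for x in [0.001, 0.3, 0.999]))  # True
```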

3.7. A general. Say our open cover of A is Uα. Then, choose B = ∪αUα. Note that there is a partition of unity for B, which is also a partition of unity for A.

4. 9/10/15
4.1. Partitions of Unity, Hiro's version.
Exercise 4.1. (1) Consider j : R2 → R3, (x, y) ↦ (x, cos y, sin y). Compute j∗gstd.
(2) The arc length parameterization proof from last lecture is incorrect (something about the chain rule being incorrectly applied). Why?
Solution:
(1) Recall j∗gstd : R2 → M2×2(R). Note the image j(R2) is a cylinder. To compute the pullback of the inner product: dj(x,y) is the 3 × 2 matrix with rows (1, 0), (0, − sin y), (0, cos y). Then, we compute g11 = 1, g12 = 0, g22 = 1, so it is the standard metric. We can see this also by computing j∗gstd = djᵗ · dj.
(2) Look at the errata. To correctly parameterize curves, given γ : R → Rn, consider the map ℓ : R → R, t ↦ ∫₀ᵗ |γ̇| dt. Since γ is an immersion, ℓ is strictly increasing and so has an inverse, and we can find the derivative of the reparameterization γ ◦ ℓ−1.
Remark 4.2. From now on, we write X for a smooth manifold, but remember this also comes with the datum of an atlas A.
Remark 4.3. In the previous day's notes, I added some notes I had written for a previous class on partitions of unity. Here, we repeat the same thing, but with Hiro's notation.
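The cylinder computation in Exercise 4.1(1) can be verified symbolically; a minimal sketch:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Exercise 4.1(1): j(x, y) = (x, cos y, sin y); pull back the standard metric on R^3.
j = sp.Matrix([x, sp.cos(y), sp.sin(y)])
J = j.jacobian([x, y])           # the 3x2 matrix dj_(x,y)
pullback = sp.simplify(J.T * J)  # (j* g_std)_(x,y) = dj^T dj

print(pullback == sp.eye(2))  # True: the pullback is the standard metric on R^2
```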

Definition 4.4. Let X be a smooth manifold. Fix an open cover U = {Uβ}β∈B. A partition of unity subordinate to U is a collection of smooth functions fβ : X → R≥0 so that
(1) Σβ∈B fβ(x) = 1;
(2) for all β, Supp(fβ) := closure of {x : fβ(x) ≠ 0} ⊂ Uβ;
(3) {Supp(fβ)} is locally finite. That is, for every x there is an open W ∋ x so that W ∩ Supp(fβ) ≠ ∅ for only finitely many β ∈ B.
Theorem 4.5 (Existence of partitions of unity). Let X be a C∞ manifold. Then for all open covers U = {Uβ}, there exists a C∞ partition of unity subordinate to U.
Remark 4.6. This is the way we'll prove that any manifold admits a Riemannian metric, and many other foundational results. It will let us patch things on Rn together.

Remark 4.7. Replace the words C∞ by Cr, and the theorem still holds. To prove this, we only need to show an analog of Lemma 4.8, and the rest goes through automatically.
Proof.
Lemma 4.8. Let U ⊂ Rn be open and K ⊂ U compact. Then, there exists a smooth function f : U → R≥0 so that
(1) f(int(K)) ⊂ R>0,
(2) Supp(f) ⊂ U.

Proof. Follows from homework. Cover K by open balls {Wx : x ∈ K} with Wx ⊂ U. By compactness, choose a finite such collection. We can find W′x ⊂ Wx ⊂ U and, by the homework, a function fx : U → R with fx > 0 on W′x and fx ≥ 0 on Wx; summing the finitely many fx gives the desired f.
Lemma 4.9. Let {Cγ} be a collection of closed subsets of X. If {Cγ} is locally finite, then ∪γCγ is closed.
Proof. This is an easy topological lemma. By local finiteness, for all x ∈ X, there is some Wx so that Cγ ∩ Wx ≠ ∅ for only finitely many γ. So, (∪γCγ) ∩ Wx is closed in Wx. This implies ∪γCγ is locally closed. Because X is locally Euclidean and Hausdorff, ∪γCγ is closed.
Using these lemmas, we now prove the theorem.

4.1.1. Step A. Let W = {Wα} be a refinement of {Uβ}. If there exists a partition of unity subordinate to W, then there exists a partition of unity subordinate to {Uβ}. To see this, fix k : {α} → {β} so that Wα ⊂ Uk(α). Then, if {fα} is a partition of unity subordinate to W, define fβ = Σα∈k−1(β) fα. The first two properties of a partition of unity hold because they hold for the fα. To verify Supp(fβ) ⊂ Uβ: we have
Supp(fβ) ⊂ ∪α∈k−1(β) Supp fα ⊂ ∪α∈k−1(β) Wα ⊂ Uβ,
where the first containment uses Lemma 4.9 (a locally finite union of closed sets is closed).

4.1.2. Step B. We can always choose a refinement W of {Uβ} so that each element of W has compact closure. Proof: homework.

4.1.3. Step C. Fixing such a W as in Step B, we can find a locally finite refinement Y of W so that the closure of each Yα is contained in Wα (with the same indexing set).
Proof: for each W ∈ W, write it as a union of open balls in Rn, and then choose a refinement of W by very small open balls Zδ so that the closure of each Zδ is contained in W; we can assume by paracompactness that this refinement is locally finite. Then, take the union of the Zδ in a given W to be Y. This is implicitly using Lemma 4.9.
4.1.4. Step D. We're done! Let's see why: by Lemma 4.8, we have smooth functions fα : Wα → R so that
(1) fα(Yα) ⊂ R>0,
(2) Supp(fα) ⊂ Wα.
Then, set
(4.1) gα = fα / Σα′ fα′.
This normalization enforces that the gα sum to 1.

Remark 4.10. It is possible Hiro came up with this proof, but the inspiration came from Collins' textbook, which mentioned Lemma 4.9 as crucial.
4.2. Submersions. We will treat the submersion theorem just inside Rn. The principle behind why we can do this is that anything you can do in Rn, you can do for manifolds in general by piecing together open sets.
Definition 4.11. Let f : U → V be a smooth map, with U ⊂ Rn, V ⊂ Rm open. Then, f is called a submersion at x ∈ U if dfx : TxU → Tf(x)V is a surjection.
Remark 4.12.
(1) For f to be a submersion, we need n ≥ m.
(2) If U → V is an inclusion of open sets with m = n, then f is a submersion.
(3) f : (x1, ..., xn) ↦ (x1, ..., xm) is a submersion, because dfx is (Im×m 0).
Definition 4.13. f is a submersion if f is a submersion at all x ∈ U.
Theorem 4.14 (Submersion Theorem). Let f : U → V be a submersion. Then, for all y ∈ V, f−1(y) ⊂ U is a smooth submanifold.
Remark 4.15. This theorem will readily generalize to arbitrary manifolds, once we define the relevant terms. The following definition was stated in class, but isn't relevant to the submersion theorem.
Definition 4.16. A continuous map f : X → Y between topological spaces is proper if for all K ⊂ Y compact, f−1(K) is compact.
Remark 4.17. The dimension of f−1(y) will be n − m if n = dim U, m = dim V.
Definition 4.18. A subset X ⊂ U ⊂ Rn with U open is a smooth submanifold of U if for all x ∈ X there exists an open W ⊂ U and a smooth diffeomorphism φ : Rn → W so that φ(Ri) = X ∩ W, with Ri ⊂ Rn some sub-vector space.
Remark 4.19. A smooth submanifold of U is a smooth manifold.
Example 4.20. Take f : Rn → R, ~x ↦ |x|². Then every y > 0 is a regular value, and f−1(y) is the sphere of radius √y.
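The map f(x) = |x|² of Example 4.20 can be checked directly: away from the origin its differential is surjective onto R. The sketch below instantiates this in R³ (my own choice of dimension and of sample point).

```python
import sympy as sp

# f(x) = |x|^2 on R^3. df_x = (2*x1, 2*x2, 2*x3) is surjective onto R exactly
# when x != 0, so every y > 0 is a regular value and f^{-1}(y) is a sphere.
x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
f = x1**2 + x2**2 + x3**2
df = sp.Matrix([[f.diff(v) for v in (x1, x2, x3)]])

print(df)  # Matrix([[2*x1, 2*x2, 2*x3]])

# At a point of f^{-1}(9), the differential has rank 1, i.e., it surjects onto R.
p = {x1: 1, x2: 2, x3: 2}  # f(p) = 1 + 4 + 4 = 9
print(df.subs(p).rank() == 1)  # True
```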

5. 9/15/15
The course website is now on Piazza.
5.1. Tangent Spaces.
Remark 5.1. A tangent vector gives me a way to take derivatives. Say we have U ⊂ Rn and f : U → R. The derivative is a row vector with n entries. More geometrically, we can discuss the derivative as follows: given X ∈ TxU, we know how to compute the directional derivative of f at x in the direction of X; the notations X(f), Xx(f), X(x)(f), X(f)(x) are all used when X is a vector field.

Question 5.2. What algebraic properties does Xx : C∞(U) → R satisfy?
Definition 5.3. Given a manifold X, we let C∞(X) = C∞(X; R) denote the set of smooth functions X → R.
What properties do tangent vectors satisfy?

(1) Xx(af + g) = aXx(f) + Xx(g) for a ∈ R, f, g ∈ C∞(M).
(2) The Leibniz rule: Xx(fg) = Xx(f) · e(g) + e(f) · Xx(g), where e(f) = f(x).
Definition 5.4. Let A, B be commutative algebras over R. Fix an R-algebra homomorphism e : A → B. A derivation (with respect to e) is a function D : A → B satisfying linearity and the Leibniz rule.

Example 5.5.
(1) Take A = C∞(M), B = R, and e = evx : C∞(M) → R, f ↦ f(x).
(2) A = C∞(M), B = A, e = id.
(3) A = C∞(M), B = C∞(N), for j : N → M, with e : A → B, f ↦ f ◦ j.
Remark 5.6. In algebraic geometry, given a map of manifolds, we get a map of rings, and this operation similarly encodes the relative geometry of the rings.
Definition 5.7. Let M be a smooth manifold. Then, the tangent space of M at x ∈ M is
(5.1) TxM := {D : C∞(M) → R a derivation with respect to evx}.
We should verify things like
(1) T0Rn ≅ Rn as vector spaces,
(2) the chain rule.

Proposition 5.8. Let x ∈ U ⊂ M with U open. Then, if f|U = g|U with f, g ∈ C∞(M), any Xx ∈ TxM satisfies Xx(f) = Xx(g).
Proof. Choose some compact ball B with x ∈ int(B) and B ⊂ U. Fix h : M → R so that h|B = 1 and Supp h ⊂ U. Then h · (f − g) is the zero function, so for a derivation Xx we have Xx(h · (f − g)) = 0. By the Leibniz rule, we see
0 = Xx(h · (f − g))
= Xx(h) · (f − g)(x) + h(x) · Xx(f − g)
= h(x) · Xx(f − g)
= Xx(f) − Xx(g),
using (f − g)(x) = 0 and h(x) = 1.
Proposition 5.9. Let j : N → M be smooth. Then, there exists an R-linear map, denoted by any of dj|x, djx, or dj(x), for x ∈ N:
djx : TxN → Tj(x)M, defined by Xx ↦ (f ↦ Xx(f ◦ j)).
Proof. Exercise.
Proposition 5.10. Let N → M → L (via j, then h) be C∞ functions. Then the chain rule holds. That is,
d(h ◦ j)x = dhj(x) ◦ djx.
Proof. Given f ∈ C∞(L), we have
d(h ◦ j)x(Xx)(f) = Xx(f ◦ (h ◦ j))
= Xx((f ◦ h) ◦ j)
= djx(Xx)(f ◦ h)
= dhj(x)(djx(Xx))(f).

Corollary 5.11. The natural map TxU → TxM, induced by the restriction map C∞(M) → C∞(U) (compatibly with evaluation at x), is an isomorphism. This is supposed to be an algebraic incarnation of your intuition that tangent vectors depend only on germs around a point.
Proof. Immediate from Proposition 5.8.

Exercise 5.12. Show that TxM is an R vector space.
Solution: We have the 0 derivation, and derivations add and scale.
Remark 5.13. Why the Leibniz rule? This pops out of doing computations over Spec k[ε]/(ε²), and maps of this into the manifold are the same as tangent vectors.
Proposition 5.14. T0Rn ≅ Rn as vector spaces, but not canonically.
Proof. Note that the assignment
∂/∂xi|0 : C∞(Rn) → R, f ↦ (∂f/∂xi)(0)
is a derivation. We claim ∂/∂x1|0, ..., ∂/∂xn|0 form a basis for T0Rn. By Taylor's theorem, any C∞ function f : Rn → R can be written as f(x) = f(0) + Σi xi gi(x), where gi : Rn → R is C∞ and gi(0) = (∂f/∂xi)(0). Given a derivation X~0, because the derivation of a constant function is 0, we have
X~0(f) = X~0(f(0)) + Σi X~0(xi gi(x))
= 0 + Σi (X~0(xi) · gi(0) + xi(0) · X~0(gi(x)))
= Σi X~0(xi) (∂f/∂xi)(0).
That is, we have shown X~0(f) = Σi ai (∂/∂xi|0)(f), with ai = X~0(xi) independent of f. This shows the ∂/∂xi|0 span. They are also linearly independent, since (∂/∂xi)(xj) = δij.
Corollary 5.15. If M is n-dimensional at x, then TxM ≅ Rn.
Proof. Follows from Proposition 5.14 and Corollary 5.11, which says tangent spaces can be computed locally.
Remark 5.16. Let j : Rm → Rn be smooth. Then,
dj0(∂/∂xi|0) = Σj (dj0)ji ∂/∂xj|0
is the connection between the derivation definition and the matrix of partial derivatives.
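The defining Leibniz property of a derivation (Definition 5.4) can be checked symbolically for a directional derivative at 0; the test functions and coefficients below are my own choices.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
a, b = sp.symbols('a b', real=True)  # arbitrary coefficients of the tangent vector

# A derivation at 0 with respect to ev_0: X_0 = a d/dx1|_0 + b d/dx2|_0.
def X0(f):
    return (a * f.diff(x1) + b * f.diff(x2)).subs({x1: 0, x2: 0})

f = sp.exp(x1) * sp.cos(x2)
g = x1**2 + x2 + 1

# Leibniz rule with respect to ev_0: X_0(fg) = X_0(f) g(0) + f(0) X_0(g).
lhs = X0(f * g)
rhs = X0(f) * g.subs({x1: 0, x2: 0}) + f.subs({x1: 0, x2: 0}) * X0(g)
print(sp.simplify(lhs - rhs) == 0)  # True
```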

Remark 5.17. For all y ∈ Rn, there is a smooth diffeomorphism Ty : Rn → Rn, x ↦ x + y. Then,
∂/∂xi|y = dTy(∂/∂xi|0).
Exercise 5.18. By the chain rule, any diffeomorphism j : M → N induces a linear isomorphism dxj : TxM ≅ Tj(x)N.
5.2. Return to the submersion theorem. Recall:
Definition 5.19. Let f : M → N be smooth. A point y ∈ N is a regular value of f if for all x ∈ f−1(y), dfx is a surjection.
Example 5.20. For f : R → R, t ↦ t², every nonzero y ∈ R is a regular value.
Definition 5.21. A subset Z ⊂ M is called a smooth submanifold if for all z ∈ Z there is an open U ⊂ M with z ∈ U and a smooth diffeomorphism h : V → U, with V ⊂ Rn open, so that h(V ∩ Rm) = U ∩ Z, where Rm ⊂ Rn.
Theorem 5.22. Let M, N be smooth manifolds and f : M → N be smooth. Then, for all regular values y ∈ N, we have f−1(y) ⊂ M is a C∞ submanifold.

Proof. Go to local charts: choose a chart (U, φ) on M around a point x ∈ f−1(y) and a chart (V, ψ) on N with y ∈ V, so that we have a square

f : U → V
φ ↓       ↓ ψ
φ(U) → ψ(V)

We now ask what f looks like in these coordinate charts. By definition of smoothness, ψ ◦ f ◦ φ−1 : φ(U) → ψ(V) is smooth. Since y is a regular value, d(ψ ◦ f ◦ φ−1)φ(x), with x ∈ f−1(y), is a surjection. So, Tφ(x)φ(U) → Tψ(y)ψ(V) is a surjection. Without loss of generality, assume φ(x) = 0 ∈ Rm and ψ(y) = 0 ∈ Rn. By linear algebra, there is an invertible matrix A : Rn → Rn so that A ◦ d(ψ ◦ f ◦ φ−1)0 = (In 0). So, the C∞ function A ◦ ψ ◦ f ◦ φ−1 : Rm → Rn has the derivative (In 0) at 0. Now, extend this to a function whose derivative matrix at 0 is the identity; by the inverse function theorem, this produces coordinates in which the map is the standard projection, carrying f−1(y) locally to a coordinate hyperplane.

6. 9/17/15
6.1. Completing the submersion theorem. Hiro was up late last night, so he might be a little less active and a little more sarcastic or dismissive, but he said he'll try not to be. The homework is due, emailed to Phil by 11:59pm tonight.
Recall: last time we defined tangent spaces TxM, and started proving the submersion theorem:
Theorem 6.1. If f : X → Y is smooth and y ∈ Y is a regular value, then f−1(y) ⊂ X is a smooth submanifold.

Proof. As in Guillemin and Pollack, find coordinate charts U ⊂ X, V ⊂ Y so that y ∈ V and x ∈ f−1(y), x ∈ U, giving a square

f : U → V
φ ↓       ↓ ψ
φ(U) → ψ(V)

with φ(U) ⊂ Rn, ψ(V) ⊂ Rm. We can arrange that the composite is ψ ◦ f ◦ φ−1 : (x1, ..., xn) ↦ (x1, ..., xm), where we are viewing Rm ⊂ Rn. Assuming without loss of generality that f̃ := ψ ◦ f ◦ φ−1 satisfies f̃(0) = 0, then f̃−1(0) = {(0, ..., 0, xm+1, ..., xn)}. This finishes the proof, because f−1(y) ∩ U = φ−1(Rn−m).
6.2. Lie Brackets.
Exercise 6.2. If f : Rn → R, x ↦ |x|², then f−1(1) = Sn−1 is a smooth submanifold of Rn, hence a C∞ manifold.

Recall that Xx : C∞(M) → R is a derivation with respect to evx : C∞(M) → R. Let's examine:
Definition 6.3. Define
Γ(TM) := {R-linear derivations from C∞(M) to itself with respect to e = id}
= {X : C∞(M) → C∞(M) : X(af + g) = aX(f) + X(g), X(f · g) = X(f) · g + f · X(g)}.
Definition 6.4. An element X ∈ Γ(TM) is a vector field on M.
Remark 6.5. For every x ∈ M, we have a function Γ(TM) → TxM, X ↦ Xx, where Xx : C∞(M) → R is Xx := evx ◦ X. Then, Xx is a derivation because evx is a ring homomorphism.
Remark 6.6. Geometrically, any vector field X in the sense of multivariable calculus gives a derivation C∞(M) → C∞(M) as follows: for all x ∈ M, consider the directional derivative of f in the direction of Xx. This gives a new function X(f)(x) = Xx(f), the directional derivative.
Remark 6.7. Since any X ∈ Γ(TM) is a map C∞(M) → C∞(M), we can try composing vector fields.
Proposition 6.8. Let X, Y ∈ Γ(TM) be vector fields. Define [X, Y] := X ◦ Y − Y ◦ X. Then,
(0) [•, •] : Γ(TM) × Γ(TM) → Γ(TM).
(1) [•, •] is R-bilinear.
(2) [X, Y] = −[Y, X].
(3) [•, •] satisfies the Jacobi identity: [X, [Y, Z]] = [[X, Y], Z] + [Y, [X, Z]].
That is, for every X ∈ Γ(TM), the operation DX = [X, •] is a derivation with respect to [•, •]: DX[Y, Z] = [DXY, Z] + [Y, DXZ].
Definition 6.9. Let V be an R vector space. Any bilinear map V × V → V, (X, Y) ↦ [X, Y] is called a Lie bracket if it satisfies (2) and (3) from Proposition 6.8. The pair (V, [•, •]) is called a Lie algebra.

Remark 6.10. Proposition 6.8 says exactly that Γ(TM) is a Lie algebra.

Proof of (0). We need to show X ◦ Y − Y ◦ X is a derivation. Pick f, g ∈ C∞(M). We want to show this satisfies the Leibniz rule:
X(Y(fg)) − Y(X(fg)) = X(Y(f)g + fY(g)) − Y(X(f)g + fX(g))
= X(Y(f))g + Y(f)X(g) + X(f)Y(g) + fX(Y(g)) − Y(X(f))g − X(f)Y(g) − Y(f)X(g) − fY(X(g))
= X(Y(f))g + fX(Y(g)) − Y(X(f))g − fY(X(g))
= (X(Y(f)) − Y(X(f)))g + f(X(Y(g)) − Y(X(g))).

Remark 6.11. For all commutative rings A, we have Der(A, A) is a Lie algebra under [X, Y] = X ◦ Y − Y ◦ X, as follows from the proof of Proposition 6.8.

Exercise 6.12. If M = Rn, any vector ﬁeld X can be written as

X = Σ_{i=1}^n X^i ∂/∂x_i

where the above derivation at x ∈ R^n satisfies (∂/∂x_i)(x) = ∂/∂x_i |_x. Then,

[X, Y] = [Σ_i X^i ∂/∂x_i, Σ_j Y^j ∂/∂x_j]
= Σ_{i,j} ( X^i (∂Y^j/∂x_i) ∂/∂x_j − Y^j (∂X^i/∂x_j) ∂/∂x_i )

So X(Y) is "take the naive derivative of Y in the direction of X."
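The coordinate formula above can be checked by machine. Below is a quick sympy sketch (my own illustration, not from the lecture; the fields X, Y are arbitrary choices) recovering [X, Y] both as the operator X∘Y − Y∘X and from the coordinate formula of Exercise 6.12:

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = [x, y]

# A vector field X = X^1 ∂/∂x + X^2 ∂/∂y acts on functions as a derivation.
def vf(X):
    return lambda f: sum(X[i] * sp.diff(f, coords[i]) for i in range(2))

# [X, Y] = X∘Y − Y∘X is again a first-order operator; applying it to the
# coordinate functions recovers its components.
def bracket(X, Y):
    op = lambda f: vf(X)(vf(Y)(f)) - vf(Y)(vf(X)(f))
    return [sp.expand(op(c)) for c in coords]

X = [x*y, sp.sin(x)]
Y = [x**2, y]

# Coordinate formula: [X,Y]^j = Σ_i (X^i ∂Y^j/∂x_i − Y^i ∂X^j/∂x_i)
formula = [sp.expand(sum(X[i]*sp.diff(Y[j], coords[i])
                         - Y[i]*sp.diff(X[j], coords[i]) for i in range(2)))
           for j in range(2)]

print(all(sp.simplify(a - b) == 0 for a, b in zip(bracket(X, Y), formula)))  # True
```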

Remark 6.13. A more geometric interpretation can be given as follows. Each vector field X gives rise to a flow: if Φ^X : M × R → M is the flow, then Φ^X_t : M → M is a diffeomorphism for all t. Given X and Y, we can compare Φ^Y_s ∘ Φ^X_t and Φ^X_t ∘ Φ^Y_s. Then, [X, Y] measures the noncommutativity of these flows near t = s = 0.

6.3. Constructing the Tangent bundle. We now embark on constructing the tangent bundle.

Deﬁnition 6.14. Given a smooth manifold M, deﬁne

TM := ⊔_{x∈M} T_x M

We now want to topologize this tangent bundle and give it a smooth atlas. If we manage to do this, we end up with the following structure:
(1) A smooth manifold TM together with

π : TM → M, (x, v) ↦ x

(2) for all x, π^{-1}(x) has the structure of a vector space over R

(3) and, by the way we deﬁne the smooth atlas, we will have local trivializa- tions. That is, we will have U ⊂ M open, x ∈ U and

U × Rk Φ TU TM

U M

where Φ is a diffeomorphism making

U × Rk TU pr π U

commutes, where pr(y, v) = y for y ∈ U, v ∈ R^k, and Φ(y, •) : {y} × R^k → T_y U is a linear isomorphism for all y ∈ U.

Definition 6.15. Let E be a smooth manifold together with a smooth map π : E → M and the structure of an R-vector space on each π^{-1}(x), so that for all x ∈ M there is an open U ∋ x and a diffeomorphism Φ : E|_U ≅ U × R^k as in the above enumeration. Then, (E, π) is called a rank k vector bundle over M, where k can be any nonnegative integer.

Remark 6.16. E is like a bundle of vector spaces, one vector space for each x ∈ M. The condition of E and π being smooth means these vector spaces vary smoothly and piece together. Local triviality is mimicking the convenience of local charts.

Now, we'll topologize the tangent bundle.

Remark 6.17. Vector bundles are here to stay. We'll construct the tangent bundle as follows:
(1) Take a sufficient open cover U = {U_α}
(2) identify TU_α ≅ U_α × R^k, so that TU_α inherits a C^∞ structure
(3) set ⊔_α TU_α / ∼ =: TM, where ∼ says when v ∈ TU_α and v′ ∈ TU_β come from the same tangent vector on M.

Here is the construction:

Construction 6.18. Let A = {(U_α, φ_α)} be a smooth atlas for M. Consider the map φ_α : U_α → φ_α(U_α) ⊂ R^n, which is smooth by definition. So, for all x ∈ U_α I get a map T_x U_α → T_{φ_α(x)} R^n. As sets, we obtain a map

⊔_{x∈U_α} T_x U_α → ⊔_{x∈U_α} T_{φ_α(x)} R^n

For all x, this is an isomorphism of vector spaces. But, we know

T_{φ_α(x)} R^n = span⟨ ∂/∂x_1 |_{φ_α(x)}, ..., ∂/∂x_n |_{φ_α(x)} ⟩

So, we have an isomorphism T_{φ_α(x)} R^n ≅ {φ_α(x)} × R^n, X ↦ (a_1, ..., a_n)

where X = Σ_i a_i ∂/∂x_i |_{φ_α(x)}. So, we obtain a map

⊔_{x∈U_α} T_x U_α → φ_α(U_α) × R^n

Let TU_α = ⊔_{x∈U_α} T_x U_α be given the unique smooth structure making this a diffeomorphism.

What is the equivalence relation? We have

two trivializations of ⊔_{x∈U_α∩U_β} T_x M: one by Φ_α into φ_α(U_α ∩ U_β) × R^k and one by Φ_β into φ_β(U_α ∩ U_β) × R^k. Applying φ_α^{-1} × id and φ_β^{-1} × id, both land in (U_α ∩ U_β) × R^k, and the two are related by

(x, v) ↦ (x, d(φ_β ∘ φ_α^{-1})(v))

that is, for all α, β we have a function γ_{βα} : U_α ∩ U_β → GL_k(R), x ↦ d(φ_β ∘ φ_α^{-1})|_{φ_α(x)}. You can check γ_{αα}(x) = id and γ_{δβ} ∘ γ_{βα} = γ_{δα} by the chain rule. The equivalence relation is: (x, v) ∈ U_α × R^k ∼ (y, w) ∈ U_β × R^k if and only if x = y and γ_{βα}(x)(v) = w. Then, we can check that

⊔_α TU_α / ∼ =: TM

is a smooth manifold.

7. 9/22/15 Recall, last time we deﬁned

TM := ⊔_{x∈M} T_x M := ( ⊔_{α∈A} U_α × R^k ) / ∼

The key to the relation ∼ is

(7.1) the pair of trivializations Φ_α : ⊔_{x∈U_α∩U_β} T_x M → (U_α ∩ U_β) × R^k and Φ_β : ⊔_{x∈U_α∩U_β} T_x M → (U_β ∩ U_α) × R^k, related by Γ_{βα}.

Recall, Γ_{βα} was defined by d(φ_β ∘ φ_α^{-1}) and satisfies the cocycle condition

Γ_{γβ} ∘ Γ_{βα} = Γ_{γα} with Γ_{βα} : U_β ∩ U_α → GL_{dim M}(R).

Definition 7.1. Let M be a smooth manifold. A GL_n cocycle for M is a choice of
(1) An open cover U = {U_α}
(2) for all pairs (β, α) a smooth function Γ_{βα} : U_β ∩ U_α → GL_n(R) ⊂ R^{n^2}
satisfying the cocycle condition

Γγβ ◦ Γβα = Γγα →

Remark 7.2. Since GL_n(R) is a group, the cocycle condition implies
(1) Γ_{αα}(x) = id
(2) Γ_{αβ}(x) = (Γ_{βα}(x))^{-1}.
Thus, we have an equivalence relation on the set

⊔_{α∈A} U_α × R^n

by U_α × R^n ∋ (x, v) ∼ (x′, v′) ∈ U_β × R^n if and only if x = x′ and v′ = Γ_{βα}(x)(v).

Proposition 7.3. Given a GL_n cocycle for M,

E := ( ⊔_α U_α × R^n ) / ∼

is a smooth vector bundle with the obvious projection map E → M; the zero section identifies M ≅ {[x, 0]}, and each U_α × R^n → E is an open embedding.

Proof. The cocycle condition is exactly what we need to construct a vector bundle, as follows directly from the definition.
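The chain-rule identities behind the cocycle condition can be checked symbolically in a two-chart example. The sketch below (my own illustration, not from the lecture) uses the transition map x ↦ x/|x|² between the two stereographic charts on S², which is an involution, and verifies the two-chart consequence Γ_{αβ}(φ_{βα}(x)) · Γ_{βα}(x) = id:

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

# Transition map between the two stereographic charts on S^2 minus the
# poles: inversion x ↦ x/|x|^2 on R^2 \ {0} (it is its own inverse).
trans = sp.Matrix([u, v]) / (u**2 + v**2)

# Γβα(x) = d(φβ ∘ φα^{-1})|_x, the GL_2(R)-valued transition function.
J = trans.jacobian([u, v])

# Since the transition map is an involution, the chain rule forces
# Γαβ(φβα(x)) · Γβα(x) = id — the two-chart case of the cocycle condition.
J_back = J.subs({u: trans[0], v: trans[1]}, simultaneous=True)
print((J_back * J).applyfunc(sp.simplify) == sp.eye(2))  # True
```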

Definition 7.4. Let Γ = ({U_α}, Γ_{βα}) be a GL_n cocycle. Let G be a subgroup of GL_n. A reduction of structure group to G is a choice of cocycle Γ′ so that for all α′, β′ ∈ A′, Γ′_{α′β′}(x) ∈ G, and so that Γ, Γ′ admit a common refinement. In particular, the vector bundles constructed from Γ, Γ′ are isomorphic.

By default, the structure group of a vector bundle is GL_n.

Definition 7.5. Let E → M, F → N be two vector bundles. A map of vector bundles is a pair (f̃, f) so that
(1) f̃ : E → F is smooth
(2) f : M → N is smooth

(3)

E F (7.2)

M N

(4) For all x ∈ M the map f̃|_x : E_x → F_{f(x)} is an R-linear map of vector spaces.

Definition 7.6. An isomorphism of vector bundles is a bundle map (f̃, f) so that f̃ (and hence f) are diffeomorphisms.

Definition 7.7. Let E → M be a smooth vector bundle. Then, a section of E is a smooth function s : M → E so that π ∘ s = id_M (7.3); that is, for all x ∈ M, s(x) ∈ E_x.

Definition 7.8. We let Γ(E) denote the set of all sections of E. Note that the notation Γ has nothing to do with cocycles; it is just notation for global sections.

Example 7.9. An element X ∈ Γ(TM) is a vector field on M.

Proposition 7.10. Der_R(C^∞(M), C^∞(M)) ≅ Γ(TM).

Proof. Exercise.

Remark 7.11. Looking for sections is the first strategy for studying vector bundles, hence manifolds.

Example 7.12. Say a section s ∈ Γ(E) is nowhere vanishing if s(x) ≠ 0 for all x ∈ M. The first question one might ask about a vector bundle is whether you can find a nowhere vanishing section (for E = TM, a nowhere vanishing vector field).

Theorem 7.13. (Poincaré–Hopf) TS^2 does not admit a nowhere vanishing section.

Proof. Not given.

Corollary 7.14. S^2 ≇ S^1 × S^1

Proof. This follows from Theorem 7.13, though there are much easier ways to prove this.

Definition 7.15. A bundle E is orientable if it admits a reduction of structure group to

G = GL_n^+(R) = {A ∈ GL_n(R) : det A > 0}
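By contrast with Theorem 7.13, TS¹ does admit a nowhere vanishing section. A small symbolic check (an illustration, not part of the lecture): the rotation field (−y, x) is everywhere tangent to the circle and has unit length on it.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# On S^1 = {x^2 + y^2 = 1}, the field X(x, y) = (−y, x) is tangent
# (perpendicular to the radius) and nowhere zero, so TS^1 has a
# nowhere vanishing section — unlike TS^2.
X = sp.Matrix([-y, x])
p = sp.Matrix([x, y])

tangent = sp.simplify(p.dot(X)) == 0                     # ⟂ to the radius
norm_defect = sp.simplify(X.dot(X) - (x**2 + y**2))      # |X|^2 = 1 on S^1
print(tangent, norm_defect)  # True 0
```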

Intuitively, when we choose our transition functions, we want some way to enforce that they preserve orientation, that is, that their determinants are positive.

Remark 7.16. Studying whether TM admits a G-reduction yields information about M.

Definition 7.17. M is called orientable if TM is orientable.

Remark 7.18. This means you can choose coordinate charts {(U_α, φ_α)} so that d(φ_β ∘ φ_α^{-1}) always has positive determinant.

Definition 7.19. A bundle E is called trivial if E ≅ M × R^n as bundles, and M is parallelizable if TM is trivial.

Remark 7.20. Poincaré–Hopf shows in particular that TS^2 is not trivial. A fancier form of the Poincaré–Hopf theorem says there is always a nonvanishing vector field on an odd dimensional (compact) manifold.

Example 7.21. The number of linearly independent (nowhere vanishing) sections is a difficult, interesting invariant of a vector bundle.

Proposition 7.22. E → M is trivial if and only if
(1) E admits n linearly independent sections, with n = dim E_x, if and only if
(2) E admits a reduction of structure group to {id}.

Proof. Omitted.

7.1. Constructing vector bundles.

7.1.1. Pullbacks. First, we can pull back vector bundles.

Construction 7.23. Suppose π : F → N is a vector bundle. Fix a smooth map f : M → N. Define

f^*F = {(x, v) : x ∈ M, v ∈ F_{f(x)}}

It is not hard to check local triviality.

Remark 7.24. One way to see smoothness of f^*F is as follows:

f∗F F

(7.4) π

M →^f N; then π is automatically transverse to f (meaning that the images of the derivatives together span the tangent space). Now, the fiber product of these two smooth maps is a smooth manifold: since the maps are transverse, their fiber product is smooth, as follows from the homework.

Example 7.25. Let π_1 : E → M, π_2 : F → M. Then,

• E → → (7.5)

F M

we have π_1^* F ≅ π_2^* E, which admits a projection map to M, and

π_1^* F = {(x, v, y, w) | (x, v) ∈ E, (y, w) ∈ F, x = y}

In particular, (π_1^* F)_x = E_x ⊕ F_x. This is called the Whitney sum or direct sum of E and F and is denoted E ⊕ F → M.

Example 7.26. Consider j : S^n → R^{n+1}. We know TR^{n+1} is trivial, so j^*(TR^{n+1}) is trivial, and TS^n admits a fiberwise injective map of vector bundles

→ dj TSn j∗TRn+1 (7.6)

Sn Sn

Moreover, we can check that TS^n ⊕ R ≅ j^* TR^{n+1}, where by R we mean the trivial line bundle; the complementary line bundle is the bundle of vectors perpendicular to S^n.

7.1.2. Functorial Methods. We often have ways of producing new vector spaces from old ones, such as dualizing and tensoring.

Definition 7.27. Given V we can send V ↦ ⊕_{n≥0} V^{⊗n} =: T^•(V), the tensor algebra or free associative algebra on V, with T^k(V) = V^{⊗k}.

Note that T^•(V) is an associative algebra generated by simple tensors v_1 ⊗ ··· ⊗ v_k, with multiplication given by the tensor product and unit 1 ∈ T^0(V) ≅ R.

Remark 7.28. This is a super useful algebra, it's super fun!

Consider the two-sided ideal I ⊂ T^•(V) generated by the elements v ⊗ v ∈ V^{⊗2}.

Definition 7.29. The exterior algebra is ∧^•(V) := T^•(V)/I.

Remark 7.30. Given V we can also construct the exterior algebra as ∧^•V := ⊕_n ∧^n V.

Example 7.31. Given T^k(V) ⊂ T^•(V) → ∧^•(V), we set ∧^k(V) to be the image of T^k(V) and write [v_1 ⊗ ··· ⊗ v_k] =: v_1 ∧ ··· ∧ v_k. Note that ∧^0(V) ≅ T^0(V) ≅ R and ∧^1(V) ≅ T^1(V) ≅ V.

Next, we want to understand ∧^2(V). We demand that v ∧ v = 0, and so x ∧ y = −y ∧ x: we obtain anticommutativity by expanding (x + y) ∧ (x + y) = 0.

Going further, the product on T^•(V) induces a product on ∧^•(V):

∧^k(V) × ∧^l(V) → ∧^{k+l}(V), (α, β) ↦ α ∧ β

satisfying α ∧ β = (−1)^{kl} β ∧ α.

Exercise 7.32. If dim V = 1 then T^•(V) ≅ R[x].

All of these methods of making new vector spaces respect isomorphisms smoothly and respect composition of isomorphisms. That is, they determine functors on the groupoid of the category of vector spaces. Then, given a cocycle, analogous constructions yield new vector bundles on M.
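The sign rule α ∧ β = (−1)^{kl} β ∧ α can be checked mechanically on basis monomials: wedge two index tuples by concatenating and counting the swaps needed to re-sort. A small sketch (my own illustration, not from the lecture):

```python
# Wedge of basis monomials e_{i1}∧…∧e_{ik}: concatenate the index tuples,
# then sort, tracking the sign of the permutation; a repeated index gives 0.
def wedge(I, J):
    idx = list(I) + list(J)
    if len(set(idx)) < len(idx):
        return 0, ()
    sign = 1
    for i in range(len(idx)):            # bubble sort, counting swaps
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

# α = e1∧e3 has degree k = 2, β = e2 has degree l = 1:
s_ab, m_ab = wedge((1, 3), (2,))
s_ba, m_ba = wedge((2,), (1, 3))
# graded commutativity α∧β = (−1)^{k·l} β∧α, and e∧e = 0:
print(m_ab == m_ba and s_ab == (-1)**(2*1) * s_ba)   # True
print(wedge((1,), (1,)))                             # (0, ())
```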

Example 7.33. (Dual Vector Bundles) Given E → M with cocycle Γ, we define a new cocycle as follows. We start with Γ_{βα}. Taking the dual construction, we consider the maps

(Γ_{βα})^∨ : (U_β ∩ U_α) × (R^k)^∨ → (U_α ∩ U_β) × (R^k)^∨

and note that Γ^∨ determines a cocycle as well, hence a vector bundle. The vector bundle constructed from Γ^∨ is called the dual vector bundle to E.

Example 7.34. (Tensor Product) Let E, F be vector bundles. Assume we have cocycles Γ^E, Γ^F over the same open cover U, possibly after taking refinements. Then, define Γ^E_{αβ} ⊗ Γ^F_{αβ} : U_α ∩ U_β → GL_{n_E · n_F}(R), where n_E = dim E_x, n_F = dim F_x. This is a cocycle for the bundle E ⊗ F, called the tensor product of E and F.

Definition 7.35. (TM)^∨ =: T^*M is the cotangent bundle of M.

8. 9/24/15

8.1. Logistics. Email Phil the homework by 11:59 tonight.
Last time, we discussed:
(1) Reducing structure groups
(2) E ⊕ F, E ⊗ F, ∧^•(E).
Today, we'll discuss
(1) Structure groups
(2) Fiber bundles in general
(3) Differential forms

8.2. structure groups.

Definition 8.1. For G a subgroup of automorphisms of the fibers, a G cocycle on M is the data of
(1) A set A
(2) A function A → Open(M), α ↦ U_α
(3) For all (α, β) ∈ A × A a smooth function Γ_{αβ} : U_α ∩ U_β → G
satisfying
(1) {U_α} is an open cover
(2) the cocycle condition

Definition 8.2. We'll say a cocycle

Γ = (A, {U_α}, Γ_{αβ}) is contained in another cocycle Γ′ = (A′, {U′_{α′}}, Γ′_{α′β′}) if there is an injection j : A → A′ so that
(1) U′_{j(α)} = U_α
(2) Γ′_{j(α)j(β)} = Γ_{αβ}
Then, two cocycles have a common refinement if they are contained in a common cocycle.

8.3. Fiber bundles in general. We have now defined vector bundles, but it is natural to ask if we can construct objects whose fibers are manifolds. These are called fiber bundles.

Remark 8.3. More generally, consider a mathematical object F, like a Lie group, a smooth manifold, or a vector space with inner product; then there is a group Aut(F) = {smooth automorphisms of F}. Then, we can define an Aut(F) cocycle analogously.

Remark 8.4. We say Γ_{αβ} is smooth if the map (U_α ∩ U_β) × F → F is smooth (assuming F has some smooth structure).

Question 8.5. We can ask whether all bundles over the circle with fiber equal to the circle are smooth.

8.4. Algebraic Prelude to differential forms. Fix a field k. Recall:

Definition 8.6. A commutative algebra over k is the data of
(1) A vector space V/k
(2) A map k → V called the unit
(3) and a map m : V ⊗ V → V which is k-linear, satisfying
(a) associativity
(b) commutativity, meaning m precomposed with the swap map V ⊗ V → V ⊗ V agrees with m (8.1)

(c) unit: the composite k ⊗ V → V ⊗ V →^m V agrees with the canonical isomorphism k ⊗ V ≅ V (8.2)

Now, replace the vector space V by a cochain complex A^•. Recall:

Definition 8.7. A cochain complex A^• = (A^•, d) is the data of
(1) A k-vector space or k-module A^i for all integers i,
(2) A k-linear map d^i : A^i → A^{i+1}, called the differential, satisfying d^{i+1} ∘ d^i = 0, often written as d^2 = 0.

Definition 8.8. If (A, d_A), (B, d_B) are cochain complexes, we define a new cochain complex called A ⊗ B by

(A ⊗ B)^i := ⊕_{j+k=i} A^j ⊗ B^k
d(a ⊗ b) = da ⊗ b + (−1)^{|a|} a ⊗ db

where a ∈ A^j has |a| = j.

Definition 8.9. A map of cochain complexes or chain map is the data of

(1) maps f^i : A^i → B^i satisfying
(2) d_B ∘ f^i − f^{i+1} ∘ d_A = 0. Pictorially,

Ai Ai+1 (8.3)

B^i → B^{i+1}.

Remark 8.10. There exists a natural swap isomorphism

σ : A ⊗ B → B ⊗ A, a ⊗ b ↦ (−1)^{|b||a|} b ⊗ a

Next, we'll introduce the structure of differential forms as cochain complexes. Now, fix a ring k, where we'll usually take k = R. The grading of the cochain complexes in this class will represent dimension.

Definition 8.11. A cdga (commutative differential graded algebra), or commutative algebra in the category of cochain complexes over k, is the data of
(1) A cochain complex V = (V^•, d)
(2) A map of cochain complexes k → V called the unit (where k is concentrated in degree 0, and the differential sends the image of k to 0)
(3) and a map of cochain complexes m : V ⊗ V → V, meaning

d(m(v_1 ⊗ v_2)) = m(dv_1 ⊗ v_2) + (−1)^{|v_1|} m(v_1 ⊗ dv_2)

satisfying
(a) associativity
(b) commutativity, meaning

V ⊗ V V ⊗ V (8.4)

V

meaning m(v_1, v_2) = (−1)^{|v_1||v_2|} m(v_2, v_1).

Remark 8.12. Writing multiplication as a product instead of m, we have d(v_1 · v_2) = dv_1 · v_2 ± v_1 · dv_2, which looks like the Leibniz rule. We'll often notate m(v_1, v_2) = v_1 · v_2.

8.5. Differential Forms. Recall:

Definition 8.13. Let M be a smooth manifold. Then, the cotangent bundle of M is the dual of the tangent bundle.

We often denote (T^∨M)_x := T_x^∨M, which is equal to hom_R(T_x M, R). The cotangent bundle T^∨M has transition matrices which are the inverse transposes of the matrices for TM. If we want to explicitly map to GL_n, we can fix an isomorphism ι : (R^n)^∨ ≅ R^n and conjugate the dual cocycle by ι, i.e. take the new cocycle to be the composite R^n →^{ι^{-1}} (R^n)^∨ →^{Γ^∨} (R^n)^∨ →^{ι} R^n.

Definition 8.14. A differential k-form is a section of ∧^k(T^∨M).

Example 8.15. A differential 0-form is a section of R × M, i.e., a smooth function on M. A differential one form is a section of T^∨M. A k-form is a smooth choice of α(x) ∈ ∧^k(T_x^∨M). Recall

∧^k(V) = { Σ [v_1 ⊗ ··· ⊗ v_k] : v_i ∈ V, v_1 ⊗ ··· ⊗ v_k ∈ V^{⊗k} }, with v_1 ∧ v_2 = −v_2 ∧ v_1

Remark 8.16. If you think of v_i ∈ V as being an element of degree 1, we obtain graded commutativity.

Lemma 8.17. If {e_i} is a basis for V then ∧^k V has a basis e_{i_1} ∧ ··· ∧ e_{i_k}, for i_1 < ··· < i_k.

Proof. Spanning is clear. Independence can be seen by relating it to independence in tensor products, I think.

The goal for the remainder is to prove that the collection of differential forms, notated

Ω^•_{deR}(M) := Ω^•(M) := A^•(M)

is a cdga over R. That is, we'll consider a cochain complex with the ith piece defined to be Γ(∧^i(T^∨M)), where multiplication comes from concatenating wedge products. The work is in defining a differential which is a derivation,

d = d_{deR}, the de Rham differential.

Definition 8.18. We define the 0th differential, d^0 : Ω^0(M) → Ω^1(M), sending C^∞(M) → Γ(T^∨M); we want an assignment sending a function to a map sending a point x to some dual vector to T_x M.

We define d^0 to be

f ↦ (T_x M → R, X_x ↦ X_x(f))

This map is indeed linear over R, meaning for af + g with a ∈ R, f, g ∈ C^∞(M), we have

X_x(af + g) = aX_x(f) + X_x(g)

and it is linear over T_x M because

(Xx + Yx)(f) = Xx(f) + Yx(f).

We still need to check d^0(f) is a smooth section. We notate d^0(f) as df, which is slightly abusive: Df : TM → TR, while df : M → T^∨M, so the notation is overloaded. However, the composite

(8.5)  TM →^{Df} TR →^{∂_t ↦ 1} R

is, fiberwise, equal to df. Now, we'll write df in local coordinates.

(1) Choose a consistent basis for T_x^∨U with U ⊂ R^n open. Consider the function x^i : U → R, the ith coordinate function.

What does dx^i do at a point x? We have dx^i|_x : T_x U → R as an element of T_x^∨U. Let

v = Σ_{i=1}^{dim U} v^i ∂/∂x_i |_x ∈ T_x U

Then

dx^i|_x(v) = v(x^i) = Σ_{j=1}^{dim U} v^j ∂x^i/∂x_j = v^i

Proposition 8.19. Let f : U → R be smooth. Then,

df|_x = Σ_{i=1}^{dim U} ∂f/∂x_i |_x dx^i|_x

so

df = Σ_{i=1}^n ∂f/∂x_i dx^i

where n = dim U, ∂f/∂x_i ∈ C^∞(U), and dx^i ∈ Γ(T^∨U).

Proof. Not hard to prove.

Proposition 8.20. Let g : U → V, U ⊂ R^n, V ⊂ R^m be smooth. Define g^* : C^∞(V) → C^∞(U), f ↦ f ∘ g. Then, d ∘ g^* = g^* ∘ d

(8.6)  Ω^1(V) → Ω^1(U)
         ↑ d         ↑ d
       C^∞(V) → C^∞(U)

Proof. Easy.

Definition 8.21. Let g^* : Ω^1(V) → Ω^1(U), α ↦ α ∘ Dg.

Exercise 8.22. (g^*α)(X_x) = α|_{g(x)}(Dg(X_x)), where X_x ∈ T_x U and Dg(X_x) ∈ T_{g(x)}V.
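Proposition 8.20 is the chain rule in disguise. A sympy sketch (with an arbitrary illustrative map g and function h, my own choices) comparing d(g*h) with g*(dh) component by component:

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')

# Illustrative smooth map g : R^2 → R^2 and function h on the target.
g = {y1: x1**2 + x2, y2: sp.sin(x1*x2)}
h = y1*y2 + y2**3

# Left side: d^0(g*h), i.e. the partials of h ∘ g.
pull_h = h.subs(g)
d_pull = [sp.diff(pull_h, v) for v in (x1, x2)]

# Right side: g*(d^0 h) = Σ_i (∂h/∂y_i ∘ g) · dg_i, in components.
pull_d = [sum(sp.diff(h, yi).subs(g) * sp.diff(gi, v)
              for yi, gi in g.items())
          for v in (x1, x2)]

print([sp.simplify(a - b) for a, b in zip(d_pull, pull_d)])  # [0, 0]
```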

9. 9/29/15

The goal for today is the following:
(1) Prove (Ω^•_{deR}(M), d_{deR}) is a cdga, by class and homework.
(2) For all f : M → N smooth, there is an induced contravariant map f^* : Ω^•(N) → Ω^•(M), a map of cdga's and cochain complexes.
(3) Defining H^i(M) := ker d^i / im d^{i−1}, we obtain an induced map on cohomology f^* : H^•(N) → H^•(M). (This will be one of the easiest ways to prove
(a) M ≇ N
(b) f ≄ g.)
(4) This defines a functor Mfld^{op} → grCommAlg/k sending M ↦ H^•_{deR}(M).


Last time, we defined d^0 : C^∞(M) → Ω^1(M). In coordinates,

df = Σ_{i=1}^{dim M} ∂f/∂x_i dx^i

Remark 9.1. For any vector bundle E, Γ(E) is a module over C^∞(M). Addition is given by (s + t)(x) = s(x) + t(x) ∈ E_x, and scaling is given by (f · s)(x) = f(x) · s(x).

Proposition 9.2. Let f : U → V be smooth. Then, f^* d^0 = d^0 f^*.

Proof. We will compute both sides and see they end up the same way. By definition

f^* : C^∞(V) → C^∞(U)

h ↦ h ∘ f

f^* : Ω^1_{deR}(V) → Ω^1_{deR}(U), α ↦ (f^*α : v ↦ α(Df(v)))

Now,

f^* d^0 h = f^*( Σ_{i=1}^n ∂h/∂y_i dy^i )
= Σ_{i=1}^n (∂h/∂y_i ∘ f) f^* dy^i
= Σ_{i=1}^n (∂h/∂y_i ∘ f) dy^i ∘ Df
= Σ_{i,j} (∂h/∂y_i ∘ f) (∂f^i/∂x_j) dx^j
= d(h ∘ f)

by the chain rule.

Definition 9.3. Let U ⊂ R^n be an open subset. Then,

d^1_{deR} : Ω^1(U) → Ω^2(U), α = Σ_i α_i dx^i ↦ Σ_{i,j} ∂α_i/∂x_j dx^j ∧ dx^i

Then, define d^i_{deR} : Ω^i(U) → Ω^{i+1}(U) by

d(α_1 ∧ ··· ∧ α_i) = Σ_{j=1}^i (−1)^{j+1} α_1 ∧ ··· ∧ (dα_j) ∧ ··· ∧ α_i

Remark 9.4. We usually use lower subscripts for contravariant things and upper indices for covariant things. We would usually write things the other direction, but physicists think of things the opposite way as mathematicians, and we are following the physicist notation.

Definition 9.5. For all f : U → V smooth, define

f^* : Ω^•_{deR}(V) → Ω^•_{deR}(U), α_1 ∧ ··· ∧ α_j ↦ f^*(α_1) ∧ ··· ∧ f^*(α_j)

and this makes f^* an algebra map.

Remark 9.6. Ω^•_{deR}(U) is a free graded commutative algebra on Ω^1_{deR}(U), meaning that once we define things on Ω^1_{deR}(U), there's a unique extension to Ω^•_{deR}(U).

Proposition 9.7. f^* d^1 = d^1 f^*

Proof. Let α = Σ α_i dy^i ∈ Ω^1(V). Now, let's compute both sides. First,

f^*(d^1 α) = f^*( Σ ∂α_i/∂y_j dy^j ∧ dy^i )
= Σ (∂α_i/∂y_j ∘ f) (dy^j ∘ Df) ∧ (dy^i ∘ Df)
= Σ_{i,j,k,l} (∂α_i/∂y_j ∘ f) (∂f^j/∂x_k)(∂f^i/∂x_l) dx^k ∧ dx^l

where we view ∂α_i/∂y_j as a function on U by postcomposing with f. Next, we compute the other side:

d^1(f^* α) = d^1( Σ (α_i ∘ f) dy^i ∘ Df )
= d^1( Σ_{i,k} (α_i ∘ f) (∂f^i/∂x_k) dx^k )
= Σ_{i,j,k} ( ∂(α_i ∘ f)/∂x_j · ∂f^i/∂x_k + (α_i ∘ f) ∂²f^i/∂x_j ∂x_k ) dx^j ∧ dx^k

By the chain rule, ∂(α_i ∘ f)/∂x_j = Σ_l (∂α_i/∂y_l ∘ f) ∂f^l/∂x_j, so the first terms agree with f^*(d^1 α). Now, to complete the proof, we have to show

Σ_{i,j,k} (α_i ∘ f) ∂²f^i/∂x_j ∂x_k dx^j ∧ dx^k = 0

The reason for this is that if we fix values of j, k, we have

∂²f^i/∂x_j ∂x_k dx^j ∧ dx^k + ∂²f^i/∂x_k ∂x_j dx^k ∧ dx^j = 0

and so these partials pair up and cancel out. This is the key heart of the interplay of geometry and algebra: we need that mixed partials commute.
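This cancellation of symmetric second partials against antisymmetric wedges is the same mechanism behind d² = 0 below, and can be checked directly. An illustrative sympy sketch (the function f is an arbitrary choice of mine):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

f = sp.exp(x*y) + z*sp.sin(y)          # an arbitrary 0-form on R^3

# d^0 f = Σ ∂f/∂x_i dx^i — store a 1-form as its coefficient list.
df = [sp.diff(f, v) for v in coords]

# d^1(Σ α_i dx^i) has dx^i∧dx^j coefficient ∂α_j/∂x_i − ∂α_i/∂x_j
# for i < j; every coefficient of d^1(d^0 f) is a commutator of partials.
d2 = [sp.simplify(sp.diff(df[j], coords[i]) - sp.diff(df[i], coords[j]))
      for i in range(3) for j in range(i + 1, 3)]
print(d2)  # [0, 0, 0] — mixed partials commute
```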

Corollary 9.8. So, d^1 defines a global assignment d^1 : Ω^1(M) → Ω^2(M)

Proof. We have defined this map locally, and so if we write E = ⊔ U_α × R^k / ∼, to give a section of E, it is equivalent to give maps s_α : U_α → R^k so that

Γ_{βα} ∘ s_α = s_β

This is what the proposition verifies. To complete the proof, we should really write down what the induced overlap maps are, and check this is compatible with them, but this essentially follows because a two form transforms by the derivatives of the two coordinates.

Proposition 9.9. d^2_{deR} = 0

Proof. It suffices to check this on an open set in R^n. We can further reduce to checking d^1 ∘ d^0 = 0 (because d(α ∧ β) = dα ∧ β + (−1)^{|α|} α ∧ dβ), and then

d^1 ∘ d^0(f) = d^1( Σ ∂f/∂x_j dx^j ) = Σ_{i,j} ∂²f/∂x_i ∂x_j dx^i ∧ dx^j = 0

Corollary 9.10. The local definition of d_{deR} glues to a global map d_{deR} : Ω^•_{deR}(M) → Ω^•_{deR}(M).

Proof. To show (Ω^•_{deR}(M), d_{deR}) is a cdga, it remains to show d^2 = 0. This follows from Proposition 9.9.

Remark 9.11. For any smooth f : U → V, we showed

d^0 f^* = f^* d^0
d^1 f^* = f^* d^1

and so the analogous statement holds for 0, 1 replaced by i, because we defined f^* on i-forms so as to define an algebra map Ω^•_{deR}(V) → Ω^•_{deR}(U).

Question 9.12. Here's a slogan: k-forms are things you can integrate over oriented k-manifolds. How do we integrate k-forms?

We can answer the above question in two steps.
(1) First, define an isomorphism ∧^k(V^∨) ≅ ∧^k(V)^∨ for V = T_x M. Infinitesimally speaking, a k-form should produce a number from a collection of k tangent vectors, and so we can think of an element of ∧^k(T_x M) as an oriented collection of k tangent vectors. Getting a number from such an element is exactly an element of the dual vector space. We then use the isomorphism ∧^k(V^∨) ≅ ∧^k(V)^∨.
(2) We then use partitions of unity.

Definition 9.13. (Definition of the isomorphism)
(1) For k = 0, we want a map ∧^0(V^∨) ≅ R → ∧^0(V)^∨ ≅ R^∨; since we have a distinguished 1 ∈ R, we send 1 ↦ (α : R → R, 1 ↦ 1).
(2) When k = 1, we need a map V^∨ → V^∨, so take the identity map.

(3) When k ≥ 2, we want

φ : ∧^k(V^∨) → ∧^k(V)^∨, α_1 ∧ ··· ∧ α_k ↦ ( v_1 ∧ ··· ∧ v_k ↦ det(α_i(v_j))_{ij} )

Question 9.14. This induces a multiplication on ⊕_{k=0}^{dim V} ∧^k(V)^∨ because ⊕_{k=0}^{dim V} ∧^k(V^∨) has a multiplication, and we can transfer the multiplication from the latter algebra to the former. But what is the product? Answer: given φ(α) ∈ ∧^k(V)^∨ and φ(β) ∈ ∧^l(V)^∨, the product φ(α) ∧ φ(β) ∈ ∧^{k+l}(V)^∨ is

(φ(α) ∧ φ(β))(v_1 ∧ ··· ∧ v_{k+l}) = Σ_{π ∈ Shuff_{k,l}} sign(π) φα(v_{π(1)}, ..., v_{π(k)}) · φβ(v_{π(k+1)}, ..., v_{π(k+l)})

where Shuff_{k,l} ⊂ S_{k+l} is the set of (k, l)-shuffles: those π with π(1) < ··· < π(k) and π(k+1) < ··· < π(k+l).

9.1. Integration. Let U ⊂ M be open and φ : U → R^n a chart. Let's fix ω ∈ Ω^n_{deR}(M) so that Supp(ω) ⊂ U.

Then φ^{-1} is a smooth map from φ(U) to M, so pulling back ω we get an n-form on φ(U) ⊂ R^n. But any n-form on R^n is of the form f · dx_1 ∧ ··· ∧ dx_n, and we know how to integrate a smooth function on R^n. We could try to define ∫_M ω := ∫_{R^n} f.

However, there is a problem with orientation: the integral ∫_M ω is then only well defined up to sign. Consider an orientation reversing diffeomorphism j : φ(U) → φ(U). This negates the value of the integral, by change of variables: if we denote by f̃ the function obtained by pulling back ω along φ^{-1} ∘ j, we get

∫_M ω = ∫_{R^n} f̃ = −∫_{R^n} f

because the change of variables formula for functions has an absolute value around the determinant of the Jacobian, while the form transforms without it. So, to make this well defined, we should demand that φ satisfy a compatibility condition with an orientation on M. By definition of orientation, if φ is compatible with an orientation on M, then j^{-1} ∘ φ is not.

Definition 9.15. Let M be an oriented n-manifold. Let U be a Euclidean open set, meaning that there exists a chart φ : U → R^n. Then, for any n-form ω with Supp ω ⊂ U,

∫_M ω := ∫_{R^n} f

where f is obtained by pulling back ω along a chart φ compatible with the orientation.

Definition 9.16. Let M be oriented and ω any n-form on M. Then, fix an atlas A = {U_α, φ_α} for M compatible with the orientation on M, fix a partition of unity {h_α} subordinate to A, and define

∫_M ω := Σ_{α∈A} ∫_M h_α ω

Remark 9.17. Depending on the behavior of ω, this could be ∞, −∞, or undefined.

Example 9.18. When a function is unbounded on R, you can define its integral by taking some limit over an exhaustion by open sets of R. Unless you choose such a convention for all manifolds, this integral might be undefined.

10. 10/1/15

10.1. Review. Last time, we showed the existence of a differential d_{deR} on Ω^•_{deR}(M). Locally,

Ω^k(R^n) = { Σ f_I dx_I }

where I = (i_1, ..., i_k), i_1 < ··· < i_k and dx_I ∈ Γ(∧^k T^∨R^n). Then, d glues to a global map Ω^k(M) → Ω^{k+1}(M).

Exercise 10.1. We have
(1) Ω^•_{deR}(M) is a cdga over R for any smooth M
(2) For all f : M → N we have a map of cdga's f^* : Ω^•_{deR}(N) → Ω^•_{deR}(M).
(3) By the chain rule, (f ∘ g)^* = g^* ∘ f^*.

Remark 10.2. Using the isomorphism

∧^k(V^∨) ≅ ∧^k(V)^∨, α_1 ∧ ··· ∧ α_k ↦ ( v_1 ∧ ··· ∧ v_k ↦ det(α_i(v_j)) )

we can also write f^* : Ω^k(N) → Ω^k(M) as follows. Given α ∈ Ω^k(N), we have

(f^*α)(x) ∈ ∧^k(T_x^∨M) ≅ ∧^k(T_x M)^∨

defined by

(f^*α)(x) : v_1 ∧ ··· ∧ v_k ↦ α(f(x))(Df(v_1) ∧ ··· ∧ Df(v_k))

10.2. Flows and Lie Groups.

Remark 10.3. Here is some motivation. Fix a vector field X on M. Does it make sense to flow along X? That is, if we give our manifold some sort of fluid, does it make sense for the fluid to move in the direction of the vector field?

Theorem 10.4. (Existence, uniqueness, and smooth dependence of solutions to first order ODEs) Let U ⊂ R^n be open, and I ⊂ R open. Fix a smooth function

Y : I × U → R^n, (t, x) ↦ Y(t, x)

Then, for every x ∈ U, there exist
(1) t_min < 0 < t_max in R
(2) a smooth function γ : (t_min, t_max) → U
so that
(1) γ(0) = x and
(2) ∂γ_i/∂t = Y_i(t, γ_1(t), ..., γ_n(t))

In other words, γ̇ = Y(t, γ(t)). (Existence)

Furthermore, if γ̃ : (t′_min, t′_max) → U also satisfies the above two conditions, then γ̃ = γ on the intersection of their domains of definition. (Uniqueness)

Further,
(1) There exists an open W with x ∈ W ⊂ U
(2) and ε > 0, so that the function W × (−ε, ε) → U

(x, t) ↦ γ_x(t)

is well defined and smooth (C^∞ dependence).

Proof. Idea: look at a suitable space of maps from an interval into U, create a contraction operator on it whose fixed point is the solution, and apply the contraction lemma. One has to show that the limiting curve is smooth. We won't give a proof in class, though.

Remark 10.5. Theorem 10.4 is a consequence of the Picard–Lindelöf theorem.

Corollary 10.6. Fix X a vector field on M. Locally, this defines a function Y : U → R^n, where n = dim M and U ⊂ M is a chart. For every x ∈ M there are W ⊂ M with x ∈ W, ε > 0, and a smooth map

Φ^X : W × (−ε, ε) → M, (x, t) ↦ Φ^X(x, t)

so that for all x, t, we have

DΦ^X|_{(x,t)}(∂/∂t) = X(Φ^X(x, t))

and Φ^X(x, 0) = x. That is,

DΦx T(W × (−ε, ε) TM

(10.1) the diagram sending (0, ∂/∂t) through DΦ^X to X, over Φ^X : W × (−ε, ε) → M, commutes.

Proof. Immediate from Theorem 10.4.

Corollary 10.7. By uniqueness we have

Φ^X_{t′} ∘ Φ^X_t = Φ^X_{t′+t}

where defined. And, for all t, Φ^X_t is a diffeomorphism onto its image.

Proof. Use that Φ^X_t ∘ Φ^X_{−t} = Φ^X_0 = id, and use uniqueness plus the fact that everything in sight is smooth.

Definition 10.8. A vector field X on M is complete if for all x ∈ M, the interval I_x ⊂ R on which the flow Φ^X : W × I_x → M is defined can be taken to be all of R.
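For a concrete instance of Corollary 10.7: the flow of X = x ∂/∂x on R is Φ_t(x) = x e^t. A sympy sketch (my own illustration) checking both the flow equation and the group law:

```python
import sympy as sp

x, s, t = sp.symbols('x s t')

# Claimed flow of the vector field X = x ∂/∂x on R.
Phi = x * sp.exp(t)

# Flow equation dΦ/dt = X(Φ) = Φ, with initial condition Φ|_{t=0} = x.
ok_ode = sp.simplify(sp.diff(Phi, t) - Phi) == 0 and Phi.subs(t, 0) == x

# Group law of Corollary 10.7: Φ_s ∘ Φ_t = Φ_{s+t}.
comp = Phi.subs(t, s).subs(x, Phi)            # Φ_s(Φ_t(x)) = (x e^t) e^s
ok_group = sp.simplify(comp - Phi.subs(t, s + t)) == 0

print(ok_ode, ok_group)  # True True
```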


Remark 10.9. Intuitively, completeness means that the flow exists for all time, for all x.

Definition 10.10. A manifold M is called complete if every X ∈ Γ(TM) is complete.

Example 10.11. Here are some examples of why we need to be careful regarding completeness.
(1) Let M = R^n \ {0}. Take X = ∂/∂x_i for some i, a constant vector field. This is not complete at points on the x_i axis.
(2) R^n is not complete, because we can choose a diffeomorphism R^n ≅ B(0, 1) with an open ball, take X = ∂/∂x_i on the ball, and pull back along this diffeomorphism; the resulting field on R^n is not complete.

Proposition 10.12. If M is compact, any vector field on it is complete.

Proof. For every point there are some ε and W; choose a finite collection of the W which covers. By uniqueness, we can patch together the flows, with a uniform ε. Then the flow extends as far as we want: we can flow for as long as we want.

Corollary 10.13. Any vector field X on a compact M defines a family of diffeomorphisms Φ^X_t : M → M, called flowing for time t.

Proof. Immediate; note that the image is all of M because we have a two sided inverse, flowing by −t.

10.3. Lie Derivatives. Fix a vector field X and a section α of TM, T^∨M, or ∧^k T^∨M. How might we compute a derivative that measures how α changes along X?

Definition 10.14. A smooth curve γ : (−ε, ε) → M is called a flow line or an integral curve for X if γ̇(t) = X(γ(t)), where γ̇(t) ∈ T_{γ(t)}(M) arises from the derivative (10.2)

Dγ : T(−ε, ε) → TM, γ̇(t) := Dγ(∂/∂t |_t)

Locally, Φ^X_t defines a diffeomorphism from W_x to Φ^X_t(W_x), so DΦ^X_t admits an inverse, as does (Φ^X_t)^*. Call the resulting isomorphism

(Φ_t)^* : E_{Φ_t(x)} → E_x

(for E = TM this is (Φ_{−t})_*). Then,

(Φ_t)^*(α(Φ_t(x))) ∈ E_x

for all t small enough, t ∈ (−ε, ε). Then, we can take

lim_{t→0} [ (Φ_t)^*(α(Φ_t(x))) − α(x) ] / t ∈ E_x ≅ R^{rk(E)}


Definition 10.15. The Lie derivative of α along X is

(L_X α)(x) := lim_{t→0} [ (Φ_t)^*(α(Φ_t(x))) − α(x) ] / t ∈ E_x ≅ R^{rk(E)}

Proposition 10.16.
(1) If α is a section of the trivial bundle R × M ≅ ∧^0(T^∨M), then L_X(α) = X(α).
(2) If α is a section of TM, then L_X(α) = [X, α].

Proof. (1) Let f = α : M → R be a smooth function and γ : (−ε, ε) → M be the integral curve for X at x (so γ(0) = x and γ̇(t) = X(γ(t))). Then the difference quotient in the definition is precisely

[ f(γ(t)) − f(γ(0)) ] / t

so

lim_{t→0} [ f(γ(t)) − f(x) ] / t = γ̇(0)(f) = X_x(f)

(2) To prove the second point, we first need a lemma.

Lemma 10.17. For all f : M → R and X ∈ Γ(TM), there is a function g : (−ε, ε) × M → R so that
(a) f ∘ Φ_t^{-1} = f − t g_t, writing g_t := g(t, •)
(b) g(0, x) = X_x(f).

Note that

(10.3)  W →^{Φ_t^{-1}} M →^f R

will satisfy that at time 0 it is the directional derivative of f in the direction of X. It is like a ﬂowy version of Taylor’s theorem.

Proof.

Now, using the above lemma, we complete the proof. Let α = Y. We will be done if we show

L_X(Y)(x)(f) = [X, Y](x)(f)

for all x and all f : M → R. We have, by Lemma 10.17,

L_X(Y)(x)(f) = lim_{t→0} [ (Φ_t)^*(Y(Φ_t(x))) − Y(x) ] / t (f)
= lim_{t→0} [ (Φ_{−t})_*(Y(Φ_t(x)))(f) − Y(x)(f) ] / t
= lim_{t→0} [ Y(Φ_t(x))(f ∘ Φ_{−t}) − Y(x)(f) ] / t
= lim_{t→0} [ Y(Φ_t(x))(f − t g_t) − Y(x)(f) ] / t
= lim_{t→0} [ Y(Φ_t(x))(f) − Y(x)(f) ] / t − lim_{t→0} Y(Φ_t(x))(g_t)
= lim_{t→0} [ Y(Φ_t(x))(f) − Y(x)(f) ] / t − Y(Φ_0(x))(g_0)
= X(Y(f))(x) − Y(x)(g_0)
= X(Y(f))(x) − Y(x)(X(f))
= X(Y(f))(x) − Y(X(f))(x)

We still have to justify the step lim_{t→0} [ Y(Φ_t(x))(f) − Y(x)(f) ] / t = X(Y(f))(x), which follows from the fact that Y(Φ_t(x))(f) = Y(f)(Φ_t(x)), so

lim_{t→0} [ Y(f)(Φ_t(x)) − Y(f)(x) ] / t = Φ̇_0(x)(Y(f)) = X(Y(f))(x)

Remark 10.18. Note if E = R × M, pulling back a section of E is just precomposing. That is, given a diffeomorphism g : M′ → M, we have g^*(f) = f ∘ g.

Remark 10.19. How do we compute (∂/∂t)(f ∘ γ)? Note ∂/∂t ∈ Γ(T(−ε, ε)), and we have maps γ : (−ε, ε) → M and Dγ : T(−ε, ε) → TM. By the derivation definition of Dγ, we have

X(γ(t))(f) = Dγ(∂/∂t |_{t=0})(f) = ∂/∂t |_{t=0} (f ∘ γ)

where the first equality is the definition of an integral curve.

Remark 10.20. Note (Φ_{−t})_* = DΦ_{−t}. The above remarks are key to understanding integral curves.
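The function case of Proposition 10.16 can be sanity-checked numerically: the difference quotient of f along the flow should converge to X(f). A minimal Python sketch (the rotation field X = (−y, x) on R² is chosen because its flow, rotation by angle t, is exact; all names here are ad hoc):

```python
import math

def flow(t, p):
    # Exact flow of X = (-y, x): rotation by angle t.
    x, y = p
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

def f(p):
    return p[0]  # f(x, y) = x, so X(f) = -y

def lie_derivative_f(p, t=1e-6):
    # Central difference quotient of f along the flow:
    # (f(Phi_t(p)) - f(Phi_{-t}(p))) / (2t) -> X_p(f) as t -> 0.
    return (f(flow(t, p)) - f(flow(-t, p))) / (2 * t)

p = (1.0, 2.0)
print(lie_derivative_f(p))   # ≈ X_p(f) = -y = -2.0
```

Replacing `flow` by a numerical ODE solver extends this check to fields whose flow has no closed form.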

11. 10/6/15

11.1. Key theorems to remember from this class, not proven until later today.

Lemma 11.1. Let D, D′ be derivations of degree k, k′. Then

D ∘ D′ − (−1)^{k·k′} D′ ∘ D

is a derivation of degree k + k′.

Proof. Generalization of a lemma below. □

Lemma 11.2. Let R^• = ∧^•(V). If two derivations D, D′ agree on R^0 and R^1, then D = D′.

Proof. Proven below. □

For any X ∈ Γ(TM) we have derivations

ι_X : Ω^i(M) → Ω^{i−1}(M),
d : Ω^i(M) → Ω^{i+1}(M),
L_X : Ω^i(M) → Ω^i(M),

where the last commutes with d.

Theorem 11.3 (Cartan's magic formula). L_X = ι_X ∘ d + d ∘ ι_X.

Proof. Proved below. □

Proposition 11.4. For α ∈ Ω^k(M) and v_0, ..., v_k ∈ Γ(TM), we have

L_{v_0}(α(v_1, ..., v_k)) = (L_{v_0}α)(v_1, ..., v_k) + Σ_{i=1}^{k} α(v_1, ..., L_{v_0}v_i, ..., v_k)

and

dα(v_0, ..., v_k) = Σ_{i=0}^{k} (−1)^i v_i(α(v_0, ..., v̂_i, ..., v_k)) + Σ_{0≤i<j≤k} (−1)^{i+j} α([v_i, v_j], v_0, ..., v̂_i, ..., v̂_j, ..., v_k).

Recall the Lie derivative acts by

L_X : Γ(TM) → Γ(TM), Y ↦ [X, Y],
L_X : Γ(R) = C^∞(M) → C^∞(M), f ↦ X(f).

Today, we'll look at the induced map on Ω^i. There's another operator we can associate to any vector field.

Definition 11.5. Interior multiplication by X is the linear map

ι_X : Ω^i(M) → Ω^{i−1}(M), α ↦ α(X, •, ..., •).

Remark 11.6. Interior multiplication can be defined pointwise as follows. For α(x) ∈ ∧^i(T^∨M_x) ≅ (∧^i T_x M)^∨, the form ι_X α ∈ Ω^{i−1}(M) assigns to x ∈ M the functional

(ι_X α)(x) : v_1 ∧ ··· ∧ v_{i−1} ↦ α(x)(X_x, v_1, ..., v_{i−1}), with v_k ∈ T_x M.

Definition 11.7. Let R = ⊕_{i∈Z} R^i be a graded algebra, meaning R is a ring with graded multiplication. A derivation of degree d on R is a collection of linear maps D_i : R^i → R^{i+d} for all i so that

D(a · b) = Da · b + (−1)^{|a|·d} a · Db.

Example 11.8. The de Rham differential is a derivation of degree 1.

Proposition 11.9. For any vector field X,
(1) (ι_X)^2 = 0;
(2) ι_X is a derivation of degree −1.

Proof. (1) First,

(ι_X ∘ ι_X)(α)(v_1, ..., v_{i−2}) = α(X, X, v_1, ..., v_{i−2}) = 0

because X, X are linearly dependent.

(2) Note that

∧^•(T^∨M_x) = R ⊕ T^∨M_x ⊕ ∧^2(T^∨M_x) ⊕ ···,

so D is a derivation of degree −1 if and only if for all α_1, ..., α_k ∈ T^∨M_x,

D(α_1 ∧ ··· ∧ α_k) = Σ_{i=1}^{k} (−1)^{i−1} α_1 ∧ ··· ∧ Dα_i ∧ ··· ∧ α_k.

So we claim

(ι_X(α_1 ∧ ··· ∧ α_k))(v_1, ..., v_{k−1}) = Σ_{i=1}^{k} (−1)^{i−1} (α_1 ∧ ··· ∧ ι_X(α_i) ∧ ··· ∧ α_k)(v_1, ..., v_{k−1})

for all v_1, ..., v_{k−1} ∈ T_x M and α_1, ..., α_k ∈ T_x M^∨. First, we evaluate the left hand side. This is

(α_1 ∧ ··· ∧ α_k)(X, v_1, ..., v_{k−1}) = det(α_i(v_j)),

by definition, where v_0 = X. The right hand side is

Σ_{i=1}^{k} (−1)^{i−1} α_i(X) · (α_1 ∧ ··· ∧ α̂_i ∧ ··· ∧ α_k)(v_1, ..., v_{k−1}) = Σ_{i=1}^{k} (−1)^{i−1} α_i(X) det(M_{i1}),

where det(M_{i1}) is the (i, 1) minor of the matrix (α_i(v_j)). This is the cofactor expansion of det(α_i(v_j)) along its first column, so the two sides agree. □
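The determinant bookkeeping above can be checked concretely. A small Python sketch (all names ad hoc): represent covectors on R³ as tuples acting by dot product, evaluate a wedge α_1 ∧ α_2 on vectors (v, w) as det(α_i(v_j)), and verify both ι_X ∘ ι_X = 0 and the degree −1 Leibniz rule in the case k = 2.

```python
def pair(alpha, v):
    # A covector acting on a vector, via the dot product.
    return sum(a * b for a, b in zip(alpha, v))

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def wedge2(a1, a2, v, w):
    # (a1 ^ a2)(v, w) = det of the 2x2 matrix (a_i applied to v, w).
    return det2([[pair(a1, v), pair(a1, w)],
                 [pair(a2, v), pair(a2, w)]])

a1, a2 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)   # dx, dy on R^3
X, v = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)

# iota_X iota_X (a1 ^ a2): plugging X in twice gives a zero determinant.
assert wedge2(a1, a2, X, X) == 0.0

# Degree -1 Leibniz rule on a 2-form:
# (iota_X (a1 ^ a2))(v) = a1(X) a2(v) - a2(X) a1(v),
# i.e. the cofactor expansion of the determinant along its first column.
lhs = wedge2(a1, a2, X, v)
rhs = pair(a1, X) * pair(a2, v) - pair(a2, X) * pair(a1, v)
assert abs(lhs - rhs) < 1e-12
```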

Remark 11.10. Recall that the space of vector fields Γ(TM) forms a Lie algebra, by noticing that X, Y ∈ Der(C^∞(M)) implies

X ∘ Y − Y ∘ X ∈ Der(C^∞(M)).

Remark 11.11. Lemma 11.1 is just a generalization of the above proposition.

We now have two derivations on Ω^•(M). What derivation does ι_X ∘ d + d ∘ ι_X correspond to? Theorem 11.3 says this is precisely L_X. The strategy of proof will be to show they agree on generating elements; once we show they are derivations, this implies they agree everywhere.

Proposition 11.12. For all X ∈ Γ(TM), L_X : Ω^i(M) → Ω^i(M) is a derivation.

Proof. The idea is to use the same proof as that of the product rule from one variable calculus. Fix α ∈ Ω^k(M), β ∈ Ω^l(M). Recall

L_X(α ∧ β)(x) = lim_{t→0} [(Φ_t^X)^*((α ∧ β)(Φ_t^X(x))) − (α ∧ β)(x)] / t.

Expanding, we have

L_X(α ∧ β)(x)
= lim_{t→0} [(Φ_t^X)^*(α(Φ_t^X(x))) ∧ (Φ_t^X)^*(β(Φ_t^X(x))) − α(x) ∧ β(x)] / t
= lim_{t→0} [(Φ_t^X)^*(α(Φ_t^X(x))) ∧ (Φ_t^X)^*(β(Φ_t^X(x))) − (Φ_t^X)^*(α(Φ_t^X(x))) ∧ β(x) + (Φ_t^X)^*(α(Φ_t^X(x))) ∧ β(x) − α(x) ∧ β(x)] / t
= lim_{t→0} (Φ_t^X)^*(α(Φ_t^X(x))) ∧ (1/t)[(Φ_t^X)^*(β(Φ_t^X(x))) − β(x)] + (1/t)[(Φ_t^X)^*(α(Φ_t^X(x))) − α(x)] ∧ β(x)
= α(x) ∧ (L_X β)(x) + (L_X α)(x) ∧ β(x),

showing L_X is a derivation. □

Proposition 11.13. For all X ∈ Γ(TM), L_X commutes with d.

Proof. We'll come back to this in a later class. The idea is that ∂/∂t and derivatives in the M component commute. □

Remark 11.14. The name "magic formula" might have come from Raoul Bott, who found this formula very useful, and the name caught on.

Proof of Lemma 11.2. Any element of ∧^i(V) can be written as a sum of terms a · v_1 ∧ ··· ∧ v_k with a ∈ R, v_i ∈ V = ∧^1(V). By the definition of derivation,

D(a · v_1 ∧ ··· ∧ v_k) = Da · v_1 ∧ ··· ∧ v_k + a Σ_{i=1}^{k} (−1)^{(i−1)|D|} v_1 ∧ ··· ∧ Dv_i ∧ ··· ∧ v_k,

so if D′(a) = D(a) for all a and D′(v) = D(v) for all v, then D′ = D. □

Proof of Theorem 11.3. Since L_X and ι_X ∘ d + d ∘ ι_X are both derivations, by Lemma 11.2 we only need check that both sides agree on C^∞(M) and Ω^1(M). For functions, L_X(f) = X(f) = df(X), from last time and the definition of the de Rham differential. Also from last time, (ι_X ∘ d + d ∘ ι_X)(f) = ι_X(df) + d(0) = df(X), and so the derivations agree on functions.

We next check they agree on 1-forms. In local coordinates on U, any α ∈ Ω^1(U) is α = Σ_{i=1}^{dim U} f_i dx^i, so it suffices to check on the dx^i. Writing X = Σ X^i ∂/∂x^i, we have

L_X(dx^i) = d(L_X(x^i)) = d(X(x^i)) = d(X^i).

On the other hand, the right hand side is

(ι_X ∘ d + d ∘ ι_X)(dx^i) = d ∘ ι_X(dx^i) = d(dx^i(X)) = d(X^i). □

Remark 11.15. It's only recently that we started paying attention to the geometry of TM ⊕ T^∨M, but studying this is a very useful tool in mirror symmetry. This is called generalized geometry, pioneered by Hitchin and Gualtieri. It is an interesting example of something that is quite obvious to study but wasn't thought about until recently.

Proposition 11.16. For α ∈ Ω^k(M) and v_0, ..., v_k ∈ Γ(TM), we have

(11.1) L_{v_0}(α(v_1, ..., v_k)) = (L_{v_0}α)(v_1, ..., v_k) + Σ_{i=1}^{k} α(v_1, ..., L_{v_0}v_i, ..., v_k)

and

(11.2) dα(v_0, ..., v_k) = Σ_{i=0}^{k} (−1)^i v_i(α(v_0, ..., v̂_i, ..., v_k)) + Σ_{0≤i<j≤k} (−1)^{i+j} α([v_i, v_j], v_0, ..., v̂_i, ..., v̂_j, ..., v_k).

Proof. First take k = 1. By (11.1), Cartan's magic formula, and L_{v_0}v_1 = [v_0, v_1], the identity

L_{v_0}(α(v_1)) = (L_{v_0}α)(v_1) + α(L_{v_0}v_1)

becomes

v_0(α(v_1)) = ((d ∘ ι_{v_0} + ι_{v_0} ∘ d)(α))(v_1) + α([v_0, v_1]).

Expanding (d ∘ ι_{v_0}(α))(v_1) = d(α(v_0))(v_1) = v_1(α(v_0)), this reads

v_0(α(v_1)) = v_1(α(v_0)) + dα(v_0, v_1) + α([v_0, v_1]),

implying

dα(v_0, v_1) = v_0(α(v_1)) − v_1(α(v_0)) − α([v_0, v_1]).

More generally, for higher k we run the same argument. By (11.1),

L_{v_0}(α(v_1, ..., v_k)) = (L_{v_0}α)(v_1, ..., v_k) + Σ_i α(v_1, ..., L_{v_0}v_i, ..., v_k),

that is,

v_0(α(v_1, ..., v_k)) = ((d ∘ ι_{v_0} + ι_{v_0} ∘ d)(α))(v_1, ..., v_k) + Σ_i α(v_1, ..., [v_0, v_i], ..., v_k)
= d(ι_{v_0}α)(v_1, ..., v_k) + (ι_{v_0}(dα))(v_1, ..., v_k) + Σ_i α(v_1, ..., [v_0, v_i], ..., v_k)
= d(ι_{v_0}α)(v_1, ..., v_k) + dα(v_0, v_1, ..., v_k) + Σ_i α(v_1, ..., [v_0, v_i], ..., v_k).

By induction applied to the (k−1)-form ι_{v_0}α,

d(ι_{v_0}α)(v_1, ..., v_k) = Σ_{i=1}^{k} (−1)^{i−1} v_i((ι_{v_0}α)(v_1, ..., v̂_i, ..., v_k)) + Σ_{1≤i<j≤k} (−1)^{i+j} (ι_{v_0}α)([v_i, v_j], v_1, ..., v̂_i, ..., v̂_j, ..., v_k).

Substituting (ι_{v_0}α)(w_1, ..., w_{k−1}) = α(v_0, w_1, ..., w_{k−1}) into the last display and solving the combined equation for dα(v_0, ..., v_k) yields exactly (11.2). □
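Cartan's formula L_X = d ∘ ι_X + ι_X ∘ d, the engine of the proof above, can be spot-checked numerically. A hedged Python sketch (all names ad hoc): take X = (−y, x) on R², whose flow Φ_t is rotation by t, and the 1-form α = x dy. The algebraic side works out to d(ι_X α) + ι_X(dα) = d(x²) + ι_X(dx ∧ dy) = x dx − y dy, which we compare with the flow-pullback difference quotient defining L_X α.

```python
import math

def rot(t, p):
    # Flow of X = (-y, x): rotation by angle t (linear, so D Phi_t = rot too).
    x, y = p
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

def alpha(p, v):
    # alpha = x dy, evaluated at the point p on the tangent vector v.
    return p[0] * v[1]

def pullback_alpha(t, p, v):
    # (Phi_t^* alpha)_p(v) = alpha_{Phi_t(p)}(D Phi_t v).
    return alpha(rot(t, p), rot(t, v))

def lie_alpha(p, v, t=1e-6):
    # Central difference quotient of t -> (Phi_t^* alpha)_p(v) at t = 0.
    return (pullback_alpha(t, p, v) - pullback_alpha(-t, p, v)) / (2 * t)

def cartan_rhs(p, v):
    # d(iota_X alpha) + iota_X(d alpha) = x dx - y dy for this X and alpha.
    return p[0] * v[0] - p[1] * v[1]

p = (1.0, 2.0)
for v in [(1.0, 0.0), (0.0, 1.0), (3.0, -1.0)]:
    assert abs(lie_alpha(p, v) - cartan_rhs(p, v)) < 1e-5
```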

12. 10/8/15

12.1. Overview. First, LX commutes with d.

Definition 12.1. (1) f-related vector fields (2) Riemannian metrics on E (3) Connections on E

Results, without proof:

Theorem 12.2. [X, Y] = 0 if and only if

Φ_s^X ∘ Φ_t^Y = Φ_t^Y ∘ Φ_s^X.

Theorem 12.3 (Frobenius). E ⊂ TM is involutive if and only if it is integrable.

Proposition 12.4. Any E admits a Riemannian metric and connection.

Theorem 12.5 (Colloquially: Γ takes ⊗ to ⊗_{C^∞(M)}). More precisely, if E, E′ are vector bundles over M, there is a natural isomorphism

Γ(E ⊗ E′) ≅ Γ(E) ⊗_{C^∞(M)} Γ(E′).

12.2. Today's class. We didn't prove formula "star" from last time, which is left as an exercise. We'll prove that L_X commutes with d today.

Proposition 12.6. For all X ∈ Γ(TM), we have

LX ◦ d = d ◦ LX

Proof. Note that L_X ∘ d − d ∘ L_X is a derivation of degree 1, so we just need to check that it vanishes on Ω^0(M) and Ω^1(M). First, we check it on Ω^0(M) = C^∞(M). Let f ∈ C^∞(M), x ∈ M, Y_x ∈ T_x M. Recall that pulling back a 1-form via Φ : M → N is defined by Φ^*(α)(v) = α(DΦ(v)). Then

(L_X ∘ d(f))(x)(Y_x) = lim_{t→0} [Φ_t^*(df(Φ_t(x))) − df(x)] / t (Y_x)
= lim_{t→0} [df|_{Φ_t(x)}(DΦ_t(Y_x)) − df_x(Y_x)] / t
= ∂/∂t|_{t=0} df|_{Φ_t(x)}(DΦ_t(Y_x))
= ∂/∂t|_{t=0} (Y_x(f ∘ Φ_t)),

while

(d ∘ L_X(f))(x)(Y_x) = d(X(f))(x)(Y_x) = Y_x(X(f)) = Y_x(∂/∂t|_{t=0} f ∘ Φ_t).

The proof for smooth functions is now essentially complete: the two terms above are equal because mixed partials commute. More precisely, f ∘ Φ_t defines a function on W × (−ε, ε) sending (x, t) ↦ f(Φ_t(x)). Extend Y_x to a vector field on W × (−ε, ε) equal to zero on the T(−ε, ε) component; likewise ∂/∂t defines a vector field equal to 0 on the TW component. In local coordinates, ∂/∂t and Y_x involve pairwise distinct coordinates, and hence the derivatives commute because mixed partials commute.

To complete the proof, we check in local coordinates that

(L_X ∘ d − d ∘ L_X)(dx^i) = 0.

Indeed, L_X(d(dx^i)) = L_X(0) = 0, while, using the function case just proven,

d(L_X(dx^i)) = d(L_X(d x^i)) = d(d(X(x^i))) = 0. □

Here is an important philosophical comment.

Remark 12.7. We'll use these formulas a lot to prove useful results about geometry, even though these formulas have proofs which are largely algebraic. What tool did we really need for them? Recall the formula for d(α(v_1, ..., v_n)): we proved it by induction using Cartan's magic formula L_X = d ∘ ι_X + ι_X ∘ d, and L_X ultimately depended on a solution to a differential equation.

Here is the theme: we defined easy things where the de Rham derivative came for free, and ι_X is quite natural as well. But we got the algebraic output from an algebraic input by going through differential equations. Much algebraic progress is due to Gromov, who delved into differential equations to prove something algebraic.

Definition 12.8. Let f : M → N be a smooth map and let X ∈ Γ(TM), X̃ ∈ Γ(TN). Say (X, X̃) are f-related if the diagram

(12.1)
M --f--> N
| X        | X̃
v          v
TM --Df--> TN

commutes.

That is, for all x ∈ M, we have Df_x(X_x) = X̃_{f(x)}.

Remark 12.9. Note that vector fields don't push forward, essentially because they can come into conflict with themselves where the map is not injective, even though individual tangent vectors do push forward.

Proposition 12.10. Fix f : M → N. Suppose (X, X̃) and (Y, Ỹ) are f-related. Then ([X, Y], [X̃, Ỹ]) are f-related.

Proof. We need to show that

Df_p([X, Y]_p) = [X̃, Ỹ]_{f(p)}.

We just need to show these are the same derivations. Let φ : N → R be a smooth function; we show both sides evaluate to the same value on φ:

[X̃, Ỹ]_{f(p)}(φ) = (X̃ ∘ Ỹ − Ỹ ∘ X̃)_{f(p)}(φ)
= X̃_{f(p)}(Ỹ(φ)) − Ỹ_{f(p)}(X̃(φ))
= Df_p(X_p)(Ỹ(φ)) − Df_p(Y_p)(X̃(φ))
= X_p(Ỹ(φ) ∘ f) − Y_p(X̃(φ) ∘ f)
= X_p(Y(φ ∘ f)) − Y_p(X(φ ∘ f))
= [X, Y]_p(φ ∘ f)
= Df_p([X, Y]_p)(φ). □

Note that above we used the following: the function Ỹ(φ) : N → R, y ↦ Ỹ_y(φ), pulls back along f as

Ỹ(φ) ∘ f : M → R, x ↦ Ỹ_{f(x)}(φ) = Df_x(Y_x)(φ) = Y_x(φ ∘ f).

Here is an interesting question:

Question 12.11. Fix a subbundle E ⊂ TM, i.e. a bundle map

(12.2)
E --> TM
|       |
v       v
M  ==  M

which is an injection on the level of fibers (so the image is a subbundle). To what extent does E look like the tangent bundle to a bunch of submanifolds?

To elaborate: for every x ∈ M, we're asking if there exists an immersion j : U → M through x so that Dj(TU) = E|_{j(U)}. We have

(12.3)
TU --> TM
|        |
v        v
U  --> M

and we're asking if we can rig it so that the vector subbundle is the tangent bundle of the image of some immersion. In fact, E is rarely such a subbundle.

Question 12.12. What property does E have if it does come locally as the images of some immersions or embeddings?

Definition 12.13. Suppose E satisfies the property that for all x ∈ M there exist a manifold N_x and an embedding j_x : N_x → M through x so that

Dj_x(TN_x) = E|_{j_x(N_x)}.

We then say that E is integrable.

Remark 12.14. You can imagine that if E is integrable, then sections of E define flows, and we can find N_x along all of these flows. It is called integrable because Frobenius asked this question in terms of finding solutions to differential equations, and finding such a solution is the same as "integrating" the differential equations.

If E is integrable, consider X, Y ∈ Γ(E) ⊂ Γ(TM). Then [X, Y] ∈ Γ(E), because locally X, Y are related to vector fields on N_x; now use Proposition 12.10. So we see immediately that if E is integrable then Γ(E) must be a Lie subalgebra of Γ(TM). You can ask if this is enough.

Definition 12.15. A subbundle E ⊂ TM is involutive if X, Y ∈ Γ(E) implies [X, Y] ∈ Γ(E).

Theorem 12.16. E is involutive if and only if E is integrable.

In our homework, we'll prove the Koszul dual version, which deals with differential forms instead of subbundles.

Proof. Given in two weeks, when Hiro gets back. □
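The necessity of involutivity can be seen numerically on the standard non-example, the contact distribution on R³ spanned by X = ∂/∂x + y ∂/∂z and Y = ∂/∂y. A sketch (all names ad hoc; the bracket is computed via finite-difference directional derivatives, using [X, Y] = DY·X − DX·Y in coordinates):

```python
def X(p):
    x, y, z = p
    return (1.0, 0.0, y)      # X = d/dx + y d/dz

def Y(p):
    return (0.0, 1.0, 0.0)    # Y = d/dy

def dirderiv(F, p, v, h=1e-6):
    # Central-difference directional derivative of the vector field F at p along v.
    plus = F(tuple(pi + h * vi for pi, vi in zip(p, v)))
    minus = F(tuple(pi - h * vi for pi, vi in zip(p, v)))
    return tuple((a - b) / (2 * h) for a, b in zip(plus, minus))

def bracket(F, G, p):
    # [F, G](p) = (DG)(p) F(p) - (DF)(p) G(p).
    dG_F = dirderiv(G, p, F(p))
    dF_G = dirderiv(F, p, G(p))
    return tuple(a - b for a, b in zip(dG_F, dF_G))

p = (0.0, 0.0, 0.0)
br = bracket(X, Y, p)
# At p, E_p = span{(1,0,0), (0,1,0)}, but [X, Y](p) = (0, 0, -1) leaves E_p:
# the distribution is not involutive, hence (by Frobenius) not integrable.
assert abs(br[2] + 1.0) < 1e-6
```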

Theorem 12.17. Let X, Y ∈ Γ(TM). Then [X, Y] = 0 if and only if

Φ_s^X ∘ Φ_t^Y = Φ_t^Y ∘ Φ_s^X

where the flows are defined.

Proof. Given in two weeks, when Hiro gets back. □

12.3. Riemannian Geometry on vector bundles.

Theorem 12.18 (Colloquially: Γ takes ⊗ to ⊗_{C^∞(M)}). If E, E′ are vector bundles over M, there is a natural isomorphism

Γ(E ⊗ E′) ≅ Γ(E) ⊗_{C^∞(M)} Γ(E′).

Remark 12.19. The proof is easy when M is compact; try doing it at home. The hard part is doing it for paracompact M. In this case, you need (Lebesgue) dimension theory.

Proof. Omitted; exercise in the compact case. □

So, any s ∈ Γ(E ⊗ E′) can be written as a finite linear combination

Σ f_{ij} t_i ⊗ t′_j,

where f_{ij} ∈ C^∞(M), t_i ∈ Γ(E), t′_j ∈ Γ(E′).

Definition 12.20. Let E be a smooth vector bundle over M. A Riemannian metric on E is a section g ∈ Γ((E ⊗ E)^∨) so that
(1) for all x ∈ M and v, w ∈ E_x we have g(x)(v ⊗ w) = g(x)(w ⊗ v);
(2) g(x)(v ⊗ v) ≥ 0, with equality if and only if v = 0.

Remark 12.21. A Riemannian metric is a smooth choice of positive definite inner product on each fiber.

Definition 12.22. A Riemannian metric on M is a Riemannian metric on TM.

Remark 12.23. All the definitions and results for M ⊂ R^n carry over to this more general setting. For example:
(1) If j : M → N is an immersion and h is a Riemannian metric on N, then

(j^*h)(v, w) := h(Dj_x(v), Dj_x(w)),

for all v, w ∈ T_x M, defines a Riemannian metric j^*h on M.

Example 12.24. The inclusion j : S^n → R^{n+1} induces the pullback of the standard Riemannian metric.

(2) We say that f : (M, g) → (N, h) is an isometry if f is a diffeomorphism and f^*h = g.

Question 12.25. Does every manifold admit a Riemannian metric?

One answer is given by the Whitney embedding theorem, since every manifold embeds into R^n for some n. However, we have something even stronger:

Proposition 12.26. Any vector bundle E on M admits a Riemannian metric.

Proof. The idea is to use partitions of unity. Let {(U_α, Φ_α)} be a trivializing cover of E, so {U_α} is an open cover of M and

Φ_α : E|_{U_α} ≅ U_α × R^k.

Now U_α × R^k admits a Riemannian metric g_α; take for example g_{ij} = δ_{ij}, i.e. g = I, the identity matrix. Hence we obtain an induced metric on E|_{U_α}. Let {φ_α} be a partition of unity subordinate to {U_α}, and define

g = Σ_α φ_α g_α.

Explicitly, for all v, w ∈ E_x, we have

g(v, w) = Σ_α φ_α(x) g_α(v, w).

Now g(v, w) is symmetric. Further, it is positive definite because the φ_α are nonnegative and add up to 1, and the g_α are positive definite. □

12.4. Connections. A connection will be a way to take directional derivatives of sections of E.

Question 12.27. What could such a thing be?

Fix v ∈ T_x M and s ∈ Γ(E). Intuitively, a directional derivative ∇ should produce

∇_v s ∈ T_{s(x)}E_x.

Remark 12.28. Intuitively, if we move along a tangent direction in M, this should induce a movement along the tangent bundle to E.

Proposition 12.29. Let V be a smooth vector space over R, meaning the maps

R × V → V,  V × V → V

(scalar multiplication and addition) are smooth. Then there exists a natural isomorphism T_0 V ≅ V.

Proof. Omitted. □

Corollary 12.30. Since V is a Lie group under addition, we have an isomorphism T_x V ≅ T_0 V for any x ∈ V.

Proof. Omitted. □

Corollary 12.31. We have an isomorphism T_{s(x)}E_x ≅ E_x.

Proof. Combine the above proposition and corollary. □

So, for each x ∈ M, a connection should give a map

T_x M × Γ(E) → E_x,  (v, s) ↦ ∇_v s,

and algebraically the directional derivative should be linear in the v component. Taking all x at once, we get a map

Γ(TM) × Γ(E) → Γ(E),  (X, s) ↦ ∇_X s.

Remark 12.32. A connection is a way to take derivatives along a vector bundle. There are a lot of them, but only a few of them are special.

13. 10/20/2015

13.1. Key theorems for today.

Theorem 13.1.

[X, Y] = 0  ⟺  Φ_s^X ∘ Φ_t^Y = Φ_t^Y ∘ Φ_s^X

whenever Φ_s^X, Φ_t^Y are defined.

Proof. Below. □

We'll also learn about connections: (1) locally (2) existence (3) convexity (4) curvature.

13.2. Class time. Today, we'll give a geometric interpretation of the Lie bracket via Theorem 13.1. Let's recall what these symbols mean: given X ∈ Γ(TM) and x ∈ M, there are an open W ⊂ M containing x and an interval (−ε, ε) ⊂ R so that we have a map Φ^X : (−ε, ε) × W → M which, for each fixed time, is a diffeomorphism onto its image, satisfying a derivative condition.

Theorem 13.2.

[X, Y] = 0  ⟺  Φ_s^X ∘ Φ_t^Y = Φ_t^Y ∘ Φ_s^X

whenever Φ_s^X, Φ_t^Y are defined.

Proof. First, we prove the reverse direction. This is mostly formal. We need to show that for all f : M → R, we have

X(Y(f)) = Y(X(f)).

We have

X(Y(f))(x) = lim_{s→0} [Y(f)(Φ_s^X(x)) − Y(f)(x)] / s
= lim_{s→0} lim_{t→0} [ (f(Φ_t^Y(Φ_s^X(x))) − f(Φ_s^X(x))) − (f(Φ_t^Y(x)) − f(x)) ] / (st)
= lim_{s→0} lim_{t→0} [ (f(Φ_s^X(Φ_t^Y(x))) − f(Φ_t^Y(x))) − (f(Φ_s^X(x)) − f(x)) ] / (st)
= Y(X(f))(x),

where the third equality uses Φ_s^X ∘ Φ_t^Y = Φ_t^Y ∘ Φ_s^X, and the last expression is the iterated difference quotient computing Y(X(f))(x) by the same manipulation run in reverse.

For the forward direction, we'll need a clever little trick. Define a curve

~v : (−ε, ε) → T_{Φ_t^Y(x)}M,  s ↦ (DΦ_s^X)^{−1}(Y_{Φ_s^X ∘ Φ_t^Y(x)}).

We fix x ∈ M and t ∈ R a priori. We'll show this curve is constant. We claim:

Lemma 13.3. ∂~v/∂s = 0.

We now show why the theorem follows from this lemma, and then come back to the lemma. Set C : (−ε, ε) → M, t ↦ Φ_s^X ∘ Φ_t^Y(x), where s and x are fixed. Observe

Ċ(t) = ∂/∂t (Φ_s^X(Φ_t^Y(x))) = DΦ_s^X(Y_{Φ_t^Y(x)}) = DΦ_s^X(~v(0)) = DΦ_s^X(~v(s)) = Y_{Φ_s^X ∘ Φ_t^Y(x)},

where the fourth equality follows from Lemma 13.3 (~v is constant, so ~v(0) = ~v(s)) and the last from the definition of ~v(s). So C is a curve in M satisfying

C(0) = Φ_s^X(x),  Ċ(t) = Y_{C(t)}.

However, t ↦ Φ_t^Y ∘ Φ_s^X(x) also satisfies these conditions, and so by uniqueness of integral curves,

Φ_t^Y ∘ Φ_s^X(x) = C(t) = Φ_s^X ∘ Φ_t^Y(x).

Proof of Lemma 13.3. Using Φ_{s+h}^X = Φ_h^X ∘ Φ_s^X, we have

∂~v/∂s|_s = lim_{h→0} (1/h)[ (DΦ_{s+h}^X)^{−1}(Y_{Φ_{s+h}^X Φ_t^Y(x)}) − (DΦ_s^X)^{−1}(Y_{Φ_s^X Φ_t^Y(x)}) ]
= lim_{h→0} (DΦ_s^X)^{−1} (1/h)[ (DΦ_h^X)^{−1}(Y_{Φ_h^X Φ_s^X Φ_t^Y(x)}) − Y_{Φ_s^X Φ_t^Y(x)} ]
= (DΦ_s^X)^{−1} lim_{h→0} (1/h)[ (DΦ_h^X)^{−1}(Y_{Φ_h^X Φ_s^X Φ_t^Y(x)}) − Y_{Φ_s^X Φ_t^Y(x)} ]
= (DΦ_s^X)^{−1} (L_X Y)_{Φ_s^X Φ_t^Y(x)}
= (DΦ_s^X)^{−1} [X, Y]_{Φ_s^X Φ_t^Y(x)}
= 0. □
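Theorem 13.2 is easy to see in coordinates for fields with explicit flows. A sketch (names ad hoc) comparing X = ∂/∂x, whose flow translates x, with Y = x ∂/∂y, whose flow is the shear Φ_t^Y(x, y) = (x, y + tx); their bracket is [X, Y] = ∂/∂y ≠ 0, and the flows fail to commute by exactly st:

```python
def flow_X(s, p):
    # Flow of X = d/dx: translation in the x direction.
    x, y = p
    return (x + s, y)

def flow_Y(t, p):
    # Flow of Y = x d/dy: the shear (x, y) -> (x, y + t x).
    x, y = p
    return (x, y + t * x)

s, t, p = 0.5, 0.25, (1.0, 2.0)
a = flow_X(s, flow_Y(t, p))   # (x + s, y + t*x)
b = flow_Y(t, flow_X(s, p))   # (x + s, y + t*(x + s))
# The y-coordinates differ by s*t, reflecting [X, Y] = d/dy != 0.
print(b[1] - a[1])            # s * t = 0.125
```

For two commuting fields (say ∂/∂x and ∂/∂y) the analogous difference is identically zero, matching the theorem.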

Definition 13.4. Let Y be a vector field on M. An integral curve of Y at a point x ∈ M is a smooth curve C : (−ε, ε) → M so that
(1) C(0) = x,
(2) Ċ(t) = Y_{C(t)}.

13.3. Connections. A connection on a vector bundle is a way to take derivatives along bundles. If we had a notion of D_v, a directional derivative in v, it should be an element of T_{s(x)}(E_x) ≅ E_x. That is, we should have a map Γ(TM) × Γ(E) → Γ(E) obeying the Leibniz rule:

D_X(fs) = X(f)s + f D_X(s).

Definition 13.5. Let E be a smooth vector bundle on M. A connection on E is an R-linear map

Γ(TM) × Γ(E) → Γ(E),  (X, s) ↦ ∇_X s,

so that
(1) ∇_X(fs) = X(f)s + f∇_X s for all smooth functions f : M → R,
(2) ∇_{fX}(s) = f∇_X s.

Remark 13.6. "R-linear" in the definition of connection means, for example, that

∇_{X+Y}s = ∇_X s + ∇_Y s,
∇_X(s + s′) = ∇_X s + ∇_X s′.

For t ∈ R, that is, a constant function M → R, we have

∇_{tX}s = t∇_X s = ∇_X(ts).

The second property of connections gives us a dual interpretation: a map

∇ : Γ(TM) × Γ(E) → Γ(E)

is equivalent to the data of a map

∇ : Γ(E) → Γ(T^∨M ⊗ E),  s ↦ ∇s,

where ∇s is waiting for a vector field X to be plugged in as a subscript.

Example 13.7. Let E = R := M × R. Then a connection is a map Γ(TM) × C^∞(M) → C^∞(M), or equivalently a map

C^∞(M) → Γ(T^∨M ⊗ R) ≅ Γ(T^∨M) =: Ω^1_{deR}(M),

and the de Rham derivative is a connection on R = E. What is the de Rham derivative as a map Γ(TM) × C^∞(M) → C^∞(M)? It should satisfy X(s) =: ds(X) = ∇_X s; that is,

∇ : (X, s) ↦ X(s).

Here X(s) is the function whose value at a point p is X_p(s), the derivative of s in the direction of X_p.

Proposition 13.8. Let E be a trivial vector bundle. Fix k = dim E_x linearly independent sections s_1, ..., s_k and an assignment

∇s_i ∈ Γ(T^∨M ⊗ E).

Then there exists a unique connection on E so that the ∇s_i are the prescribed sections.

Remark 13.9. A section of T^∨M ⊗ E is the same thing as an E-valued 1-form. That is, any α ∈ Γ(T^∨M ⊗ E) is something that eats X ∈ TM and outputs a section of E.

Proof. Any section of E can be written as s = Σ_{i=1}^{k} f^i s_i with f^i ∈ C^∞(M). We can write ∇s_i = Σ_{j=1}^{k} α_i^j ⊗ s_j where the α_i^j are 1-forms. Then set

∇(Σ_i f^i s_i) := Σ_i (df^i ⊗ s_i + f^i ∇s_i). □

Example 13.10. Set E = R. Fix s a nowhere vanishing section. Declare ∇s = 0 ∈ Ω^1_{deR}(M). Then ∇(fs) = df ⊗ s + f∇s = df ⊗ s. Locally df = Σ_i (∂f/∂x^i) dx^i and df ⊗ s = s · Σ_i (∂f/∂x^i) dx^i. If s were chosen to be the constant function 1, then ∇ corresponds to the usual de Rham derivative.

Question 13.12. In the dual picture, where ∇ : Γ(E) → Γ(T^∨M ⊗ E), what does the Leibniz rule become? It becomes

∇(fs) = df ⊗ s + f∇s,

where df ∈ Γ(T^∨M) = Ω^1_{deR}(M) and s ∈ Γ(E), so the tensor product df ⊗ s lies in Γ(T^∨M ⊗ E).
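Proposition 13.8 can be made concrete on a trivial rank-2 bundle over R²: a connection is a 2×2 matrix of 1-forms (α_i^j), and on a section s = Σ f^i s_i it acts by (∇s)^j = df^j + Σ_i f^i α_i^j. A hedged Python sketch (all the connection forms and sections below are made up purely for illustration) checking the Leibniz rule ∇(hs) = dh ⊗ s + h∇s numerically:

```python
def d(f, p, h=1e-6):
    # df at p by central differences, returned as the covector (f_x, f_y).
    x, y = p
    return ((f((x + h, y)) - f((x - h, y))) / (2 * h),
            (f((x, y + h)) - f((x, y - h))) / (2 * h))

# Connection 1-forms alpha[i][j] on a trivial rank-2 bundle over R^2,
# each a covector field p -> (dx-coefficient, dy-coefficient).  Made up.
alpha = [[lambda p: (p[1], 0.0), lambda p: (0.0, p[0])],
         [lambda p: (1.0, 1.0),  lambda p: (p[0] * p[1], 0.0)]]

def nabla(f, p):
    # (nabla s)^j = df^j + sum_i f^i alpha_i^j, for s = sum_i f^i s_i
    # with coefficient functions f = (f^1, f^2).  Returns two covectors.
    out = []
    for j in range(2):
        df = d(lambda q, j=j: f(q)[j], p)
        corr = [sum(f(p)[i] * alpha[i][j](p)[c] for i in range(2))
                for c in range(2)]
        out.append((df[0] + corr[0], df[1] + corr[1]))
    return out

f = lambda p: (p[0] ** 2, p[0] + p[1])    # coefficients of a test section
h_fun = lambda p: p[0] * p[1] + 1.0       # scaling function
p = (0.7, -0.3)

# Leibniz check: nabla(h s) = dh (x) s + h nabla(s), componentwise.
lhs = nabla(lambda q: tuple(h_fun(q) * c for c in f(q)), p)
dh, nf = d(h_fun, p), nabla(f, p)
for j in range(2):
    for c in range(2):
        assert abs(lhs[j][c] - (dh[c] * f(p)[j] + h_fun(p) * nf[j][c])) < 1e-4
```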

Corollary 13.13. Trivial vector bundles admit connections.

Proof. We have given many in Proposition 13.8. □

Proposition 13.14. Let E be a smooth vector bundle.
(1) E admits a connection ∇.
(2) If ∇_1, ∇_2 are two connections, then

t∇_1 + (1 − t)∇_2

is also a connection.
(3) ∇_1 − ∇_2 is C^∞(M)-linear, meaning

(∇_1 − ∇_2)(fs) = f((∇_1 − ∇_2)(s)).

Remark 13.15. In math, Hom(X, Y) almost always inherits the properties of Y. Connections sit inside the R-linear maps Γ(E) → Γ(T^∨M ⊗ E). However, the set of all connections is definitely not a vector space; it sits inside in a very "curved way." Part (3) of the above proposition shows the difference of two connections is not in general a connection, because it is C^∞(M)-linear. Also:

Example 13.16. The zero map Γ(E) → Γ(T^∨M ⊗ E) is not a connection, because df ⊗ s will not be 0 in the required equality

∇(fs) = df ⊗ s + f∇s.

Proof. (1) Let {(U_α, Φ_α)} be a trivializing cover of E and let {f_α} be a partition of unity subordinate to {U_α}. Writing j : U_α → M for the inclusion, we have E|_{U_α} := j^*E = π^{−1}(U_α), fitting into

(13.1)
j^*E --> E
|          |
v          v
U_α --> M

Since E|_{U_α} is trivial, it admits a connection ∇_α. Set

∇(s) := Σ_α f_α · ∇_α(s|_{U_α}).

Here s|_{U_α} ∈ Γ(E|_{U_α}), so ∇_α(s|_{U_α}) ∈ Γ(T^∨U_α ⊗ E|_{U_α}), and the sum lies in Γ(T^∨M ⊗ E). We claim this ∇ is a connection. We have

∇(hs) = Σ_α f_α ∇_α(hs|_{U_α})
= Σ_α f_α (dh|_{U_α} ⊗ s|_{U_α} + h|_{U_α} ∇_α(s|_{U_α}))
= Σ_α f_α dh|_{U_α} ⊗ s|_{U_α} + Σ_α f_α h|_{U_α} ∇_α(s|_{U_α})
= dh ⊗ s + h Σ_α f_α ∇_α(s|_{U_α})
= dh ⊗ s + h∇(s).

(2) We need to show the Leibniz rule is satisfied, that is,

(t∇_1 + (1 − t)∇_2)(fs) = df ⊗ s + f(t∇_1 + (1 − t)∇_2)(s).

We have

(t∇_1 + (1 − t)∇_2)(fs) = t(df ⊗ s + f∇_1 s) + (1 − t)(df ⊗ s + f∇_2 s)
= df ⊗ s + tf∇_1 s + (1 − t)f∇_2 s
= df ⊗ s + f(t∇_1 + (1 − t)∇_2)(s).

(3) Here we have

(∇_1 − ∇_2)(fs) = df ⊗ s + f∇_1 s − df ⊗ s − f∇_2 s = f(∇_1 − ∇_2)(s). □

Remark 13.17. You can study the manifold of connections, but that's not very interesting. More interesting are flat connections, or Yang-Mills theory.

Proposition 13.18. Given

∇ : Γ(E) → Γ(T^∨M ⊗ E),

there exists a unique operator

D : Γ(T^∨M ⊗ E) → Γ(∧^2 T^∨M ⊗ E)

so that D(α ⊗ s) = dα ⊗ s + (−1)^{|α|} α ∧ ∇s.

Proof. The proof is the same as before, where we extend by the Leibniz rule. In local coordinates, the above is a definition of what D ought to be. □

Remark 13.19. Given the de Rham derivative, we had a map Γ(R) → Γ(T^∨M) extending to a complex with d^2 = 0.

Definition 13.20. A connection ∇ is called flat if D ∘ ∇ = 0.

14. 10/22/15

Theorem 14.1. Let E ⊂ TM be a subbundle. Then E is integrable if and only if E is involutive.

Preview of Riemannian geometry:
(1) fundamental theorem of Riemannian geometry
(2) parallel transport
(3) geodesics
(4) principal G-bundles

We will see a correspondence between
(1) flat connections ∇ on E,
(2) involutive horizontal distributions H on P,
(3) group homomorphisms π_1(M) → G.

14.1. Class time.

Theorem 14.2 (Frobenius). Fix E ⊂ TM a subbundle. Then E is integrable if and only if it is involutive.

Recall that E being integrable is a geometric condition: for all x ∈ M there is an immersion j : R^k → M, 0 ↦ x, so that im(Dj) = E|_{j(R^k)}. And E being involutive is an algebraic condition: Γ(E) ⊂ Γ(TM) is a Lie subalgebra. We'll start with a lemma.

Lemma 14.3. If E is involutive, then for all x ∈ M there exist local sections X_1, ..., X_k ∈ Γ(E|_U), where x ∈ U open and k = rk E, so that [X_i, X_j] = 0.

Proof. Fix k linearly independent vector fields Y_1, ..., Y_k ∈ Γ(E|_U) near x. In local coordinates, write

Y_i = Σ_{j=1}^{n} f_i^j ∂/∂x^j.

By reordering the coordinates x^j if necessary, we can assume det(f_i^j)_{i,j∈{1,...,k}} is nonzero. (The reason for reordering is that we are dealing with a k × n matrix, and we have to find a k × k minor with nonvanishing determinant.) Let g = (g_i^j) be the inverse of this k × k block and set

X_i = Σ_j g_i^j Y_j = ∂/∂x^i + Σ_{j>k} c_i^j ∂/∂x^j.

We claim that [X_i, X_j] = 0. Since E is involutive, we must have

[X_i, X_j] = Σ_{h=1}^{k} a^h X_h = Σ_{h≤k} a^h ∂/∂x^h + Σ_{l>k} b^l ∂/∂x^l.

On the other hand, since each X_i has constant coefficients on ∂/∂x^1, ..., ∂/∂x^k, the bracket [X_i, X_j] has no ∂/∂x^h component for h ≤ k. So each a^h = 0, and hence [X_i, X_j] = Σ_h a^h X_h = 0. □

The key insight is that involutivity lets us write the bracket in terms of the X_h with the first sum running only up to k, while the explicit form of the X_i forces those coefficients a^h to vanish.

Proof of Theorem 14.2. Note first that integrable implies involutive: if X, X′ ∈ Γ(E) are locally j-related to vector fields X̃, X̃′ ∈ Γ(TR^k) via the charts j from the definition of integrability, then [X, X′] is j-related to [X̃, X̃′]; in other words,

[X, X′]_p = Dj([X̃, X̃′])

for p ∈ j(R^k), so [X, X′] is again a section of E.

Now we prove the converse. Fix x ∈ M and vector fields X_i as in Lemma 14.3. For U ⊂ R^k a small neighborhood of 0, define a smooth map

j : U → M,  (t_1, ..., t_k) ↦ Φ_{t_k}^{X_k} ∘ ··· ∘ Φ_{t_1}^{X_1}(x).

Note that since [X_i, X_j] = 0, the order of the Φ_{t_i}^{X_i} is irrelevant, as we showed in the previous class. We only need to show for all ~t ∈ U that

Dj(T_{~t}U) = E_{j(~t)}.

Once we show this equality, it will follow that j is an immersion because the two vector spaces have the same dimension. Since X_1, ..., X_k evaluated at j(~t) form a basis for E_{j(~t)}, it suffices to show each

(X_i)_{j(~t)} ∈ Dj(T_{~t}(U)).

But, moving Φ^{X_i} to the front using the commutativity of the flows,

Dj(∂/∂t_i) = ∂/∂t_i ( Φ_{t_i}^{X_i} ∘ Φ_{t_k}^{X_k} ∘ ··· ∘ Φ̂_{t_i}^{X_i} ∘ ··· ∘ Φ_{t_1}^{X_1}(x) ) = (X_i)_{j(~t)}. □

14.2. Connections and Riemannian Geometry.

Remark 14.4. Recall that a connection is a map

∇ : Γ(E) → Γ(T^∨M ⊗ E),  s ↦ ∇s,

so that ∇(fs) = df ⊗ s + f∇s, with f a smooth function and s ∈ Γ(E).

Proposition 14.5. Fix a vector bundle E → M and a smooth map j : M̃ → M. Fix a connection ∇ on E. Then there exists a unique connection ∇̃ on j^*E such that the diagram

(14.2)
Γ(E) ----∇----> Γ(T^∨M ⊗ E)
|                    |
v                    v
Γ(j^*E) --∇̃--> Γ(T^∨M̃ ⊗ j^*E)

commutes. (Here a section s of E induces a section of the pullback: the square

(14.1)
j^*E --> E
|          |
v          v
M̃ --j--> M

shows that s ∘ j determines a map M̃ → M̃ ×_M E = j^*E.)

Proof. Let's parse the maps in the above diagram in local coordinates. Fix a local trivialization E|_U ≅ U × R^k. This defines linearly independent sections s_1, ..., s_k of E|_U. Define

s̃_i := s_i ∘ j.

These are linearly independent sections of j^*E on j^{−1}(U). Recall

j^*E = {(x, v) : x ∈ M̃, v ∈ E, j(x) = π(v)},

and s_i ∘ j : M̃ → j^*E sends x ↦ (x, s_i(j(x))).

Remark 14.6. Given E, F vector bundles on M, we have

Γ(E ⊗ F) ≅ Γ(E) ⊗_{C^∞(M)} Γ(F),

and under this isomorphism,

(14.3)
Γ(T^∨M ⊗ E) ≅ Γ(T^∨M) ⊗_{C^∞(M)} Γ(E)
|
v
Γ(T^∨M̃ ⊗ j^*E) ≅ Γ(T^∨M̃) ⊗_{C^∞(M̃)} Γ(j^*E)

We know that

∇s_i = Σ_{j=1}^{k} α_i^j ⊗ s_j

for some α_i^j ∈ Ω^1_{deR}(M), so define

∇̃s̃_i := Σ_{j=1}^{k} j^*(α_i^j) ⊗ s̃_j,

and then the diagram commutes. Further, by the Leibniz rule, ∇̃ is determined on all sections. There were no choices in the matter, so ∇̃ is unique. □

Last time, we saw that the de Rham differential is a connection on R over M. Can we find a ∇ on TR^n?

Proposition 14.7. Define ∇ : Γ(TR^n) → Γ(T^∨R^n ⊗ TR^n) by ∂/∂x^j ↦ 0,

so that

∇(Σ_i X^i ∂/∂x^i) = Σ_i dX^i ⊗ ∂/∂x^i.

Then
(1) ∇_X∇_Y − ∇_Y∇_X = ∇_{[X,Y]},
(2) d⟨X, Y⟩ = ⟨∇X, Y⟩ + ⟨X, ∇Y⟩ ∈ Ω^1_{deR}(R^n).

Proof. Fix

Z = Σ_i Z^i ∂/∂x^i,  X = Σ_i X^i ∂/∂x^i,  Y = Σ_i Y^i ∂/∂x^i.

We have

∇_X∇_Y Z = ∇_X (Σ_i dZ^i(Y) ⊗ ∂/∂x^i)
= ∇_X (Σ_{i,j} Y^j (∂Z^i/∂x^j) ∂/∂x^i)
= Σ_{i,j,k} ( X^k (∂Y^j/∂x^k)(∂Z^i/∂x^j) + X^k Y^j (∂²Z^i/∂x^j∂x^k) ) ∂/∂x^i,

and analogously

−∇_Y∇_X Z = −Σ_{i,j,k} ( Y^k (∂X^j/∂x^k)(∂Z^i/∂x^j) + Y^k X^j (∂²Z^i/∂x^j∂x^k) ) ∂/∂x^i.

The second-order terms cancel, so

(∇_X∇_Y − ∇_Y∇_X) Z = Σ_{i,j,k} ( X^k (∂Y^j/∂x^k) − Y^k (∂X^j/∂x^k) ) (∂Z^i/∂x^j) ∂/∂x^i = ∇_{[X,Y]}Z,

because

[X, Y] = Σ_{j,k} ( X^k (∂Y^j/∂x^k) − Y^k (∂X^j/∂x^k) ) ∂/∂x^j.

For the second part,

⟨X, Y⟩ = Σ_{i=1}^{n} X^i Y^i,

so

d⟨X, Y⟩ = Σ_{i,k} ( (∂X^i/∂x^k) Y^i + X^i (∂Y^i/∂x^k) ) dx^k,

while

⟨∇_Z X, Y⟩ = Σ_{i,k} Z^k (∂X^i/∂x^k) Y^i  and  ⟨X, ∇_Z Y⟩ = Σ_{i,k} X^i Z^k (∂Y^i/∂x^k),

and so the claimed equality holds, both sides being evaluated on Z. □

Definition 14.8. Let ∇ be a connection on TM. Then ∇ is called symmetric or torsion free if ∇ satisfies

∇_X Y − ∇_Y X = [X, Y].

Definition 14.9. Fix a Riemannian metric on M. Then ∇ is compatible with the metric if it satisfies

d⟨X, Y⟩ = ⟨∇X, Y⟩ + ⟨X, ∇Y⟩ ∈ Ω^1_{deR}(M).

Now, we will ignore metrics for a moment. A connection ∇ is a way to take derivatives along tangent vectors, so there is a notion of a constant section along a curve, where by curve we mean a map from R; but we shouldn't be able to define constancy along higher dimensional objects, or along the manifold itself.

Question 14.10. How do we make this notion of a section being constant along a curve concrete?

Definition 14.11. Fix a smooth map γ : (−ε, ε) → M. By the previous proposition, if E is a vector bundle on M with connection ∇, there is a unique connection ∇̃ on γ^*E which makes the appropriate diagram commute. And given s ∈ Γ(E), s̃ := s ∘ γ is a section of γ^*E. On (−ε, ε) we have the vector field ∂/∂t. We say the section s is parallel along γ if

∇̃_{∂/∂t} s̃ = 0.

Exercise 14.12. We have

(∇̃_{∂/∂t} s̃)(t) = (∇_{γ̇} s)(γ(t)).

You can show this immediately from the properties of the definition.

Definition 14.13. Fix γ : [a, b] → M and v_a ∈ E_{γ(a)}. Then the parallel transport of v_a along γ is the point v_b ∈ E_{γ(b)} obtained by evaluating at γ(b) the parallel section of γ^*E starting with v_a.

We'll prove the following theorem later:

Theorem 14.14 (Fundamental theorem of Riemannian geometry). Fix a Riemannian manifold (M, g). Then there exists a unique connection on TM which is symmetric and compatible with the metric.

Proof. Later. □

Remark 14.15. There are a large number of connections which are either symmetric or compatible with the metric, but once you require both, there's a unique connection.

Corollary 14.16. The connection ∇(∂/∂x^i) = 0 is the unique connection on R^n compatible with g_std and symmetric.

Proof. Immediate from Proposition 14.7 and Theorem 14.14. □

15. 10/27/15

15.1. Overview. Today we'll talk about
(1) parallel transport
(2) Christoffel symbols
(3) connections and metrics
(a) torsion free
(b) a theorem on Levi-Civita connections

15.2. Parallel Transport.

Definition 15.1. Fix ∇ a connection on E. Also fix s ∈ Γ(E) and X ∈ Γ(TM). Then ∇_X s ∈ Γ(E) is called the covariant derivative of s in the direction X.

Fix γ : R → M, so we get the pullback square

(15.1)
γ^*E --> E
|          |
v          v
R --γ--> M

Then, recall:

Definition 15.2. A section s̃ of γ^*E is called parallel along γ if ∇̃_{∂/∂t} s̃ = 0.

Question 15.3. Fix γ : R → M and v_0 ∈ E_{γ(0)}. Can we find a parallel section s̃ ∈ Γ(γ^*E) so that

∇̃_{∂/∂t} s̃ = 0 and s̃(0) = v_0?

If the answer is yes, then for all γ : R → M, we have a way of transporting elements of E_{γ(0)} to elements of E_{γ(t)} for all t ∈ R.

Remark 15.4. Caution! Looking flat depends on your choice of ∇.

Proposition 15.5. For all v_0 there exists a unique section s̃ of γ^*E so that

∇̃_{∂/∂t} s̃ = 0 and s̃(0) = v_0.

Moreover,
(1) for every γ the resulting map

E_{γ(0)} → Γ(γ^*E)

is R-linear;
(2) the composition

E_{γ(0)} → Γ(γ^*E) --ev_t--> E_{γ(t)}

is a linear isomorphism.

Definition 15.6. Fix ∇, E, γ. Then the linear isomorphism

Eγ(0) Eγ(t) given by Proposition ??, called parallel transport along γ at time t. → Proof. Fix local linear independent sections s1, ... , sk and let k j ∇si := αi ⊗ sj j=1 X j • for αi ∈ ΩdeR(M). Shrinking the trivializing neighborhood if necessary, we can j write αi in local coordinates. We can write k n j q ∇si = αiqdx ⊗ sj j=1 q=1 X X Writing out the equation

k j ∇si := αi ⊗ sj j=1 X we seek k i s˜ = f si i=1 X so that i j q i ˙ f αiqdx (γ˙ ) ⊗ sj + df (∇) ⊗ si = 0 where above we appliedX ∇˜ tos ˜ and got 0, sinceX

i i ∇s˜ = ∇ fisi = df ⊗ si + f ∇si X X 62 AARON LANDESMAN

That is, we’re looking for curve

~f : R Rk t 7 f1(t), ... , fk(t) → satisfying the ordinary differential→ equation ∂fi ∂γi ∂γj = −fiαj ∂t ∂t iq ∂t i q X, and so for t ∈ (−ε, ε) there is a unique solution to this differential equation. Fur- ther, this ODE is linear. That is, it is of the form f˙ = Af where A is some matrix of functions. Such ODE’s have solutions for all times t ∈ R. So, by existence and uniqueness, the ﬁrst part of the proposition is complete. Now, if f, g are two solutions to the ODE then so is f + g. This proves part a because if f(0) = v0, g(0) = w0 then f + g is the unique solution so that (f + g(0) = v0 + w0. Additionally, part b follows from uniqueness as well. We can translate the ODE from time 0 to time t, and by uniqueness of solutions to ODE’s, the two solutions agree, and the parallel transport backwards along the second curve is inverse to the parallel transport forward along the ﬁrst.
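The linear ODE just derived can be integrated numerically. Here is a small sketch of my own (the polar-coordinate example and all helper names are mine, not from the notes): on flat R² \ {0} in polar coordinates (r, θ), the connection forms give the transport equations df^r/dt = r f^θ, df^θ/dt = −f^r/r along the unit circle, and the transported vector is just the constant Cartesian vector, as it should be for the flat connection.

```python
import math

# Sketch (mine, not from the notes): parallel transport on flat R^2 \ {0}
# in polar coordinates (r, theta).  The Levi-Civita Christoffel symbols of
# g = dr^2 + r^2 dtheta^2 are
#   Gamma^r_{theta theta} = -r,  Gamma^theta_{r theta} = Gamma^theta_{theta r} = 1/r,
# so along gamma(t) = (r = 1, theta = t) the transport ODE
#   df^k/dt + Gamma^k_{ij} (dgamma^i/dt) f^j = 0
# reduces to df^r/dt = f^theta, df^theta/dt = -f^r.

def rhs(f):
    fr, fth = f
    return (fth, -fr)

def rk4_step(f, h):
    def shift(g, k, c):
        return tuple(g[i] + c * k[i] for i in range(2))
    k1 = rhs(f)
    k2 = rhs(shift(f, k1, h / 2))
    k3 = rhs(shift(f, k2, h / 2))
    k4 = rhs(shift(f, k3, h))
    return tuple(f[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(2))

def transport(t, n=2000):
    """Transport the radial unit vector (f^r, f^theta) = (1, 0) from
    theta = 0 to theta = t along the unit circle."""
    f, h = (1.0, 0.0), t / n
    for _ in range(n):
        f = rk4_step(f, h)
    return f

def cartesian(f, theta):
    # frame at (r=1, theta): d/dr = (cos, sin), d/dtheta = (-sin, cos)
    fr, fth = f
    return (fr * math.cos(theta) - fth * math.sin(theta),
            fr * math.sin(theta) + fth * math.cos(theta))

for t in (0.5, math.pi, 2 * math.pi):
    x, y = cartesian(transport(t), t)
    # the transported vector is the constant Cartesian vector (1, 0)
    assert abs(x - 1.0) < 1e-6 and abs(y) < 1e-6
```

Note how linearity of the ODE is visible in the code: `rhs` is linear in `f`, which is exactly why sums of solutions are solutions in part (1) of the proposition.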

Remark 15.7. Here is a preview of coming attractions. Let E = TM. Then, any curve γ : R → M defines a canonical section of γ*TM called γ̇. So, given γ, we can ask: does
    ∇̃_{∂/∂t} γ̇ = 0?
Intuitively, this means γ has no acceleration. This will later be the definition of being a geodesic, when we choose ∇ Levi-Civita with respect to some metric g.

Remark 15.8. When E = TM, then ∇ can be written in local coordinates as
    ∇(∂/∂x^i) = Σ_{j=1}^n α_i^j ⊗ ∂/∂x^j = Σ_{j,k} Γ^j_{ik} dx^k ⊗ ∂/∂x^j.
These Γ^k_{ij} are a collection of n³ smooth functions. They are called the Christoffel symbols.

Example 15.9. Caution: ∇̃_{∂/∂t} γ̇ ≠ γ̈ in general. For example, take M = R² \ {0}. Let γ(t) = (cos t, sin t). We then have γ̈ = −(cos t, sin t). But reparameterizing R² \ {0} by (r, θ), we have γ(t) = (1, t), and by choosing the connection we can make ∇̃_{∂/∂t} γ̇ pretty much anything.

Remark 15.10. When is γ : R → M a geodesic in this ∇-dependent sense? This is when γ satisfies the differential equation
    ∂²γ^k/∂t² + Γ^k_{ij} (∂γ^i/∂t)(∂γ^j/∂t) = 0,
which holds if and only if ∇̃_{∂/∂t} γ̇ = 0. We're sloppily calling γ the bottom map of

(15.2)
    (−ε, ε) --γ--> U ⊂ M --φ--> R^n.

Example 15.11. With the connection on M = R^n given by ∇(∂/∂x^i) = 0, we have Γ^k_{ij} = 0, so
    γ is a geodesic ⟺ ∂²γ^k/∂t² = 0.

Remark 15.12. Some pros of the coordinate definition of geodesic are that
(1) it makes sense for any ∇;
(2) it has an interpretation as "no acceleration";
(3) it is obviously preserved by diffeomorphisms respecting ∇;
while it has some cons, in that
(1) we don't have an interpretation as distance minimizing.

Now it's time to relate ∇ to g.

Remark 15.13. Fix a Riemannian metric on M. Then, g induces an isomorphism
    TM → T∨M, v ↦ g(v, •).
Fiber by fiber, a matrix is nondegenerate if and only if it defines an isomorphism between a vector space and its dual.

Recall from last time:

Definition 15.14. ∇ is compatible with g if and only if
    d g(X, Y) = ⟨∇X, Y⟩ + ⟨X, ∇Y⟩
for all X, Y ∈ Γ(TM).

Definition 15.15. A connection ∇ is torsion free or symmetric if

    ∇_X Y − ∇_Y X = [X, Y].

Next, to lead up to our next lemma, note the square

(15.3)
    Γ(TM) --∇--> Γ(T∨M ⊗ TM)
      ↓                ↓
    Γ(T∨M) --∇--> Γ(T∨M ⊗ T∨M)

(vertical maps induced by g), where the bottom ∇ is defined to make the above diagram commute. So, ∇ induces a connection ∇ on T∨M.

Lemma 15.16. Fix ∇ compatible with a metric g. Then, the following are equivalent:
(1) ∇ is torsion free; that is, for all X, Y ∈ Γ(TM),
    ∇_X Y − ∇_Y X = [X, Y].
(This comes purely from derivatives, and was stated incorrectly last week.)
(2) The composition

(15.4)
    Γ(T∨M) --∇--> Γ(T∨M ⊗ T∨M) --∧--> Ω²(M)

equals d_{deR}.

Fact 15.17. Here are some facts:
(1) Fix a Riemannian metric g on E. Then, locally, there exist linearly independent sections s_1, ..., s_k so that {s_i} are orthonormal. That is,
    g(s_i, s_j) = δ_ij.
This can be proven by the Gram-Schmidt process. Namely, start with a vector so that g(s_1, s_1) > 0, set s_1 := s_1/√(g(s_1, s_1)), then set s_2 := (s_2 − ⟨s_2, s_1⟩s_1)/|s_2 − ⟨s_2, s_1⟩s_1|, and proceed similarly. Caution: if E = TM, then the s_i are almost never the ∂/∂x^i. This will only happen if the metric g is locally isometric to R^n.
(2) Fixing such a local basis, ∇ is compatible with g if and only if α_i^j = −α_j^i, where
    ∇s_i = Σ_j α_i^j ⊗ s_j.
This is also not hard to check.

Proof (of Lemma 15.16). Start by reducing to the local case, since all formulas are local. Here is a useful observation: if
    ∇s_i = Σ_j α_i^j ⊗ s_j
with s_i orthonormal local sections of TM, then
    ∇g(s_i, •) = Σ_j α_i^j ⊗ g(s_j, •),
which holds by the commutative diagram

(15.5)
    Γ(TM) --∇--> Γ(T∨M ⊗ TM)
      ↓                ↓
    Γ(T∨M) --∇--> Γ(T∨M ⊗ T∨M).

Set θ^i := g(s_i, •). If the second property holds, then dθ^i = Σ_j α_i^j ∧ θ^j. But
    dθ^i(s_k, s_l) = −θ^i([s_k, s_l]) + s_k θ^i(s_l) − s_l θ^i(s_k)
                   = −θ^i([s_k, s_l])
by orthonormality. So, remember, dθ^i(s_k, s_l) = −θ^i([s_k, s_l]). Meanwhile,
    Σ_j α_i^j ∧ θ^j(s_k, s_l) = Σ_j α_i^j(s_k) θ^j(s_l) − θ^j(s_k) α_i^j(s_l)
                              = α_i^l(s_k) − α_i^k(s_l).
Let's study
    ∇_{s_k} s_l − ∇_{s_l} s_k − [s_k, s_l].
Well, we have ∇_{s_k} s_l = Σ_i α_l^i(s_k) s_i and −∇_{s_l} s_k = −Σ_i α_k^i(s_l) s_i, so the s_i-th component of ∇_{s_k} s_l − ∇_{s_l} s_k − [s_k, s_l] is, by compatibility,
    α_l^i(s_k) − α_k^i(s_l) − θ^i([s_k, s_l]) = −α_i^l(s_k) + α_i^k(s_l) − θ^i([s_k, s_l])
                                             = −Σ_j α_i^j ∧ θ^j(s_k, s_l) + dθ^i(s_k, s_l),
and so the first and second statements are equivalent. □

16. 10/29/15

16.1. Overview.
(1) More on compatibility of ∇ with g
(2) Example 16.1. Parallel transport is an isometry.
(3) Theorem 16.2. There exists a unique Levi-Civita connection.
(4) Geodesics
(5) Exponential map

16.2. Connections.

Proposition 16.3.
(1) Fix ∇, ∇′ on E, E′. Then, there exists a natural connection on E ⊗ E′ given by
    Γ(E ⊗ E′) → Γ(T∨M ⊗ E ⊗ E′),
    s ⊗ s′ ↦ ∇s ⊗ s′ + s ⊗ ∇′s′.
(2) Given ∇ on E, there exists a connection ∇̃ on E∨ defined by, for s̃ ∈ Γ(E∨) and s ∈ Γ(E),
    d(s̃(s)) = (∇̃s̃)(s) + s̃(∇s).

Proof. The proof is straightforward.
(1) We send
    fs ⊗ s′ ↦ (∇fs) ⊗ s′ + fs ⊗ ∇′s′
            = df ⊗ s ⊗ s′ + f(∇s) ⊗ s′ + fs ⊗ ∇′s′
            = df ⊗ (s ⊗ s′) + f(∇s ⊗ s′ + s ⊗ ∇′s′).
(2) Here, we define
    (∇̃s̃)(s) = d(s̃(s)) − s̃(∇s)
and check two things:
(a) (∇̃(fs̃))(s) = ((df ⊗ s̃) + f∇̃s̃)(s), to check ∇̃s̃ ∈ Γ(T∨M ⊗ E∨);
(b) the Leibniz rule in s.
Checking these, we have
(b) (∇̃s̃)(fs) = d(s̃(fs)) − s̃(∇fs)
             = d(f s̃(s)) − s̃(df ⊗ s + f∇s)
             = df s̃(s) + f d(s̃(s)) − df s̃(s) − f s̃(∇s)
             = f (∇̃s̃)(s);
(a) (∇̃(f s̃))(s) = d(f s̃(s)) − f s̃(∇s)
              = df s̃(s) + f d(s̃(s)) − f s̃(∇s)
              = (df ⊗ s̃)(s) + f(d(s̃(s)) − s̃(∇s)). □

Remark 16.4. Last time, we defined another connection on E∨ using a metric:

(16.1)
    Γ(E) → Γ(T∨M ⊗ E)
     ↓           ↓
    Γ(E∨) → Γ(T∨M ⊗ E∨)

Exercise 16.5. The two connections on E∨ (this one from last time, and the one just defined) agree when ∇ is compatible with g, and don't agree in some cases when ∇ is not compatible with g.

Definition 16.6. We define differential forms with values in E:
    Ω•_{deR}(E) := Γ(∧•T∨M ⊗ E).

Definition 16.7. Fix a connection ∇ on E and γ : R → M. Then, for every section s ∈ Γ(γ*E), define
    D_t s := ∇̃_{∂/∂t} s,
where ∇̃ is the pullback connection.

Here is some more intuition for when ∇ is compatible with g.

Proposition 16.8. Fix g on E. The following are equivalent:
(1) ∇ is compatible with the metric: d⟨X, Y⟩ = ⟨∇X, Y⟩ + ⟨X, ∇Y⟩.
(2) Observe g ∈ Γ((E ⊗ E)∨). Then, ∇g = 0. (The metric is constant from the perspective of ∇.)
(3) For all γ : R → M and all v, w ∈ Γ(γ*E), we have
    ∂/∂t ⟨v, w⟩ = ⟨D_t v, w⟩ + ⟨v, D_t w⟩.
(4) If D_t v = D_t w = 0, then ⟨v, w⟩ is constant.
(5) Parallel translation E_{γ(0)} → E_{γ(t)} defines an isometry of vector spaces with an inner product.

Proof. For (1) and (2), just write out the definitions and they cancel out. (1) implies (3) from the definition of the pullback connection: you have to write out the formula for D_t. (3) implies (4) clearly. Then, (4) implies (5), because the composition E_{γ(0)} → Γ(γ*E) → E_{γ(t)} sends v_0, w_0 to v_t, w_t, and
    ∂/∂t ⟨v_t, w_t⟩ = 0
if we chose D_t v = D_t w = 0. For (5) implies (3), use (∇̃s̃)(fs) = f(∇̃s̃)(s): choosing a basis at γ(0), parallel transport gives a basis at γ(t), and we obtain (3). Finally, (2) and (3) are equivalent by a similar argument. □

16.3. The Fundamental Theorem of Riemannian Geometry.

Remark 16.9. Whenever two adjacent multiplied symbols have repeated indices, there is an implied summation. For example,
    A^i_{jk} θ^j ∧ θ^k = Σ_{j,k} A^i_{jk} θ^j ∧ θ^k.

Theorem 16.10 (Fundamental Theorem (or Lemma) of Riemannian Geometry). Fix g on TM. Then, there exists a unique connection ∇ on TM so that

(1) ∇ is torsion free, meaning

∇XY − ∇YX = [X, Y]

(2) ∇ is compatible with g.

Remark 16.11. This says the moduli space of connections which are torsion free and compatible with g is simply a point.

Remark 16.12. We can work on the tangent bundle or cotangent bundle. We’ll work on the cotangent bundle because we’ll have the wedge product there, and don’t have to keep track of signs.

Proof. Fix orthonormal sections s_1, ..., s_k. We'll check this just locally: by uniqueness, the locally constructed connections must agree on overlaps, and so patch together. Fix U ⊂ M and set θ^i := g(s_i, •), a basis of sections for T∨U. From last time, if ∇ is torsion free and compatible with the metric, we have the commuting diagram

(16.2)
    θ^i --∇--> α^i_k ⊗ θ^k
       \d           ↓∧
         dθ^i =: A^i_{jk} θ^j ∧ θ^k

(Einstein summation notation has crept in, and is here to stay.) Since this diagram commutes, we know
    α^i_k ∧ θ^k = A^i_{jk} θ^j ∧ θ^k.

Lemma 16.13. For all A^i_{jk} there are unique B^i_{jk}, C^i_{jk} so that
(1) A^i_{jk} = B^i_{jk} + C^i_{jk};
(2) B^i_{jk} is symmetric in j, k, meaning B^i_{jk} = B^i_{kj};
(3) C^i_{jk} is skew in i, k, meaning C^i_{jk} = −C^k_{ji}.

Proof. Set
    B^i_{jk} = ½ (A^i_{jk} + A^i_{kj} + A^k_{ji} − A^k_{ij} + A^j_{ki} − A^j_{ik}),
    C^i_{jk} = ½ (A^i_{jk} − A^i_{kj} − A^k_{ji} + A^k_{ij} − A^j_{ki} + A^j_{ik}).
This proves existence. To prove uniqueness, suppose B′ + C′ = B + C = A with both decompositions satisfying the constraints of the lemma. Then (B − B′) + (C − C′) = 0; the first term is symmetric in j, k and the second is skew in i, k. So, it suffices to show that if D^i_{jk} is both symmetric in j, k and skew in i, k, then D = 0. Indeed,
    D^a_{bc} = D^a_{cb} = −D^b_{ca} = −D^b_{ac} = D^c_{ab} = D^c_{ba} = −D^a_{bc},
so D = 0. □
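The decomposition in Lemma 16.13 is easy to check numerically; here is a sketch of my own (random tensor, not from the notes) verifying the explicit formulas for B and C.

```python
import random

# Check (mine, not from the notes) of Lemma 16.13: any A^i_{jk} splits as
# A = B + C with B symmetric in (j, k) and C skew under exchanging the
# first and third index, C^i_{jk} = -C^k_{ji}.  Indexing: A[i][j][k] = A^i_{jk}.

random.seed(0)
n = 4
A = [[[random.random() for _ in range(n)] for _ in range(n)] for _ in range(n)]

B = [[[0.5 * (A[i][j][k] + A[i][k][j] + A[k][j][i] - A[k][i][j]
              + A[j][k][i] - A[j][i][k])
       for k in range(n)] for j in range(n)] for i in range(n)]
C = [[[A[i][j][k] - B[i][j][k] for k in range(n)] for j in range(n)]
     for i in range(n)]

for i in range(n):
    for j in range(n):
        for k in range(n):
            assert abs(B[i][j][k] - B[i][k][j]) < 1e-12   # symmetric in j, k
            assert abs(C[i][j][k] + C[k][j][i]) < 1e-12   # skew in i, k
            assert abs(B[i][j][k] + C[i][j][k] - A[i][j][k]) < 1e-12
```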

Assuming this lemma, we have
    dθ^i = A^i_{jk} θ^j ∧ θ^k
         = (B^i_{jk} + C^i_{jk}) θ^j ∧ θ^k
         = C^i_{jk} θ^j ∧ θ^k,
since the part symmetric in j, k wedges to zero. This shows ∇θ^i must equal C^i_{jk} θ^j ⊗ θ^k; that is, α^i_k = C^i_{jk} θ^j. This proves uniqueness, because the A^i_{jk} always determine the C^i_{jk}.

Remark 16.14. Given this theorem, how would we compute the Levi-Civita connection? We take a dual basis, compute the A's, and then the C's. This proof is constructive, and nice if you have good candidates for the s_i and can compute dθ^i.

Definition 16.15. This unique ∇ from Theorem 16.10 is called the Levi-Civita connection or Riemannian connection of (M, g).

Theorem 16.16. In local coordinates, the Levi-Civita connection can be written
    Γ^k_{ij} = ½ g^{kl} (∂g_{il}/∂x^j + ∂g_{jl}/∂x^i − ∂g_{ij}/∂x^l),
where g^{ab} is the inverse of g_{ab}, with
    g_{ij} = ⟨∂/∂x^i, ∂/∂x^j⟩.

Proof. Not too hard, but very computational, and omitted. See any standard Riemannian geometry textbook. □

Remark 16.17. In local coordinates φ : M ⊃ U → R^n, we have a basis ∂/∂x^i for TU ≅ TR^n ≅ R^n × R^n. Then
    g_{ij} = ⟨∂/∂x^i, ∂/∂x^j⟩,
and g^{ab} is the inverse matrix, so g^{ab} g_{bj} = δ^a_j.

Recall Γ^k_{ij} was defined by

    ∇_{∂/∂x^i} (∂/∂x^j) = Γ^k_{ij} ∂/∂x^k.

Remark 16.18. In the local coordinate version, g_{ij} is hard to deal with but the brackets [•, •] are easy: in the frame ∂/∂x^i, the Lie brackets are 0. In the orthonormal frame s_i version, g_{ij} is easy to deal with, but the Lie brackets are hard.

Example 16.19. Take M = R^n and g = g_std. Then
    Γ^k_{ij} = ½ g^{kl} (∂g_{il}/∂x^j + ∂g_{jl}/∂x^i − ∂g_{ij}/∂x^l) = 0,
as these are derivatives of constants. This means

    ∇_{∂/∂x^i} (∂/∂x^j) = Γ^k_{ij} ∂/∂x^k = 0.
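As a sanity check on the coordinate formula of Theorem 16.16, one can compute Γ^k_{ij} numerically from a metric. The sketch below is my own (the helper `christoffel` and the finite-difference scheme are assumptions of this illustration, not from the notes); it recovers the polar-coordinate symbols Γ^r_{θθ} = −r and Γ^θ_{rθ} = 1/r of flat R².

```python
import numpy as np

# Sketch (mine) of Theorem 16.16:
#   Gamma^k_{ij} = (1/2) g^{kl} (d g_{il}/dx^j + d g_{jl}/dx^i - d g_{ij}/dx^l),
# with metric derivatives computed by central finite differences.

def christoffel(g, x, h=1e-5):
    x = np.asarray(x, dtype=float)
    n = len(x)
    dg = np.empty((n, n, n))          # dg[l, i, j] = d g_{ij} / dx^l
    for l in range(n):
        e = np.zeros(n); e[l] = h
        dg[l] = (g(x + e) - g(x - e)) / (2 * h)
    ginv = np.linalg.inv(g(x))
    Gamma = np.empty((n, n, n))       # Gamma[k, i, j] = Gamma^k_{ij}
    for k in range(n):
        for i in range(n):
            for j in range(n):
                Gamma[k, i, j] = 0.5 * sum(
                    ginv[k, l] * (dg[j, i, l] + dg[i, j, l] - dg[l, i, j])
                    for l in range(n))
    return Gamma

# Flat R^2 in polar coordinates (r, theta): g = diag(1, r^2).
polar = lambda x: np.diag([1.0, x[0] ** 2])
G = christoffel(polar, [2.0, 0.3])
assert abs(G[0, 1, 1] + 2.0) < 1e-6       # Gamma^r_{theta theta} = -r
assert abs(G[1, 0, 1] - 0.5) < 1e-6       # Gamma^theta_{r theta} = 1/r

# Flat metric in Cartesian coordinates: all symbols vanish.
flat = lambda x: np.eye(2)
assert np.abs(christoffel(flat, [0.3, 0.7])).max() < 1e-6
```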

Example 16.20. Take M = {(x⃗, y) : y > 0} ⊂ R^{n+1}, with x⃗ ∈ R^n and y ∈ R_{>0}. Define
    g = (R²/y²) g_std
for some R > 0. Let's compute the Christoffel symbols. First,
    ∂g_{ij}/∂x^l = 0 if l ≤ n,   ∂g_{ij}/∂x^l = −(2R²/y³) δ_ij if l = n + 1.
So
    Γ^k_{ij} = (y²/2R²) (∂g_{ik}/∂x^j + ∂g_{jk}/∂x^i − ∂g_{ij}/∂x^k).
Then
    Γ^{n+1}_{ii} = 1/y (for i ≤ n),   Γ^j_{(n+1)j} = Γ^j_{j(n+1)} = −1/y,
and all other values are 0. So if ∇ is the Levi-Civita connection, then
    ∇_{∂/∂x^i} (∂/∂x^j) = (1/y) ∂/∂y     if i = j ≤ n,
                        = −(1/y) ∂/∂x^i   if j = n + 1,
                        = −(1/y) ∂/∂x^j   if i = n + 1,
and 0 otherwise. Now, we can't really interpret these things very well, since we might not expect vertical vector fields to change when we move horizontally, but they do.
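A numeric cross-check of Example 16.20 in the lowest-dimensional case n = 1 (the code is mine; the claim that the vertical line is a geodesic with y(t) = e^t is my own derivation from the symbols above, not stated in the notes): for a vertical curve the geodesic equation reduces to y'' − (y')²/y = 0, whose solution with y(0) = 1, y'(0) = 1 is y(t) = e^t.

```python
import math

# Sketch (mine): geodesic flow on the upper half-plane {(x, y) : y > 0}
# with g = (R^2/y^2) g_std.  The nonzero Christoffel symbols
#   Gamma^y_{xx} = 1/y,  Gamma^x_{xy} = Gamma^x_{yx} = Gamma^y_{yy} = -1/y
# are independent of R, so the acceleration a^k = -Gamma^k_{ij} v^i v^j is
#   a^x = 2 vx vy / y,   a^y = (vy^2 - vx^2) / y.

def rhs(s):
    x, y, vx, vy = s
    return (vx, vy, 2 * vx * vy / y, (vy * vy - vx * vx) / y)

def rk4_step(s, h):
    def shift(a, k, c):
        return tuple(a[i] + c * k[i] for i in range(4))
    k1 = rhs(s)
    k2 = rhs(shift(s, k1, h / 2))
    k3 = rhs(shift(s, k2, h / 2))
    k4 = rhs(shift(s, k3, h))
    return tuple(s[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(4))

s = (0.0, 1.0, 0.0, 1.0)           # start at (0, 1), moving straight up
for _ in range(1000):
    s = rk4_step(s, 0.001)         # integrate to t = 1
assert abs(s[0]) < 1e-9            # the geodesic stays on the vertical line
assert abs(s[1] - math.e) < 1e-6   # y(1) = e
```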

16.4. Geodesics. Let’s recall the deﬁnition.

Definition 16.21. Fix ∇ on TM. Then, a curve γ : (−ε, ε) → M is called a geodesic if
    ∇̃_{∂/∂t} γ̇ = 0.

That is, Dtγ˙ = 0.

In local coordinates, we have

(16.3)
    (−ε, ε) --γ--> U ⊂ M --φ--> R^n,

with c := φ ∘ γ. Let ċ^i = v^i, so v : R → R^n. Then
    ∇̃_{∂/∂t} γ̇ = ∇̃_{∂/∂t} (v^i ∂̃/∂x^i)
               = dv^i(∂/∂t) ∂̃/∂x^i + v^i ∇̃_{∂/∂t} (∂̃/∂x^i)
               = (∂v^k/∂t) ∂̃/∂x^k + v^i (γ*α_i^k)(∂/∂t) ∂̃/∂x^k
               = (∂v^k/∂t) ∂̃/∂x^k + v^i Γ^k_{ij} dx^j(Dγ(∂/∂t)) ∂̃/∂x^k
               = (∂v^k/∂t + v^i v^j Γ^k_{ij}) ∂̃/∂x^k.
We can solve for when this equals 0.

17. 11/2/15

17.1. Geodesics and coming attractions. Today, we'll show geodesics exist, show the exp map is a diffeomorphism near the identity, define completeness, and give examples of the exp map. We'll also give a preview of Hadamard's theorem and the Hopf-Rinow theorem, and show that geodesics locally minimize length.

Recall from last time: in local coordinates, a curve γ : (−ε, ε) → M is a geodesic if, setting v = γ̇, we have
    ∂v^k/∂t + Γ^k_{ij} v^i v^j = 0.
Recall
    ∇_{∂/∂x^i} (∂/∂x^j) = Γ^k_{ij} ∂/∂x^k.

Remark 17.1. We are thinking of this equation as lying in the tangent bundle as opposed to the trivial bundle, because we're using the coordinate ∂/∂t written in terms of the ∂/∂x^j, which are thought of as a basis for the tangent bundle.

Question 17.2. Given x ∈ M and v ∈ T_x M, is there a geodesic γ with γ(0) = x, γ̇(0) = v?

This is now an ODE if we think of the geodesic equations as living in TM rather than M. In local coordinates, we look for a function t ↦ (x(t), v(t)) ∈ R^n × R^n so that
    ẋ = v,
    ∂v^k/∂t + Γ^k_{ij} v^i v^j = 0.
So, by uniqueness of solutions to ODEs, there is a unique (x(t), v(t)) : (−ε, ε) → TM so that x(t) is a geodesic.

In fact, for all (x_0, v_0) ∈ TM, there is an ε > 0 and an open W ⊂ TM with (x_0, v_0) ∈ W so that the flow Φ : (−ε, ε) × W → TM is defined as usual.

Example 17.3. Take M = (R, g_std) as our manifold. In equations, ẋ = v and v̇ = −Γ^k_{ij} v^i v^j = 0 define a vector field on TM, and Φ is its flow. This vector field only has a horizontal component, and its value at a point v is v.

Remark 17.4. If you're worried about transitioning from chart to chart, do homework 9. Intuitively, if γ : (−ε, ε) → M is a geodesic with initial vector γ̇(0) = v, then the geodesic γ̃ with initial vector γ̃̇(0) = av, a ∈ R, should just be a reparameterization of γ. Let's now make this precise.

Definition 17.5. Given v ∈ T_x M, let
    γ_v : (−ε, ε) → M, t ↦ γ_v(t)
be the geodesic with initial conditions
    γ_v(0) = x,   γ̇_v(0) = v.

Proposition 17.6. γ satisfies the rescaling property
    γ_v(at) = γ_{av}(t).

Proof. Go into local coordinates and compute. We only need to show these two functions satisfy the same ODEs. □

Proposition 17.7. Let γ : (−ε, ε) → M be a geodesic. Then,

    (−ε, ε) → R_{≥0}, t ↦ ⟨γ̇(t), γ̇(t)⟩
is constant.

Proof. Recall ∇̃_{∂/∂t} γ̇ = 0. Then, by compatibility of the metric,
    ∂/∂t ⟨γ̇(t), γ̇(t)⟩ = 2 ⟨∇̃_{∂/∂t} γ̇(t), γ̇(t)⟩ = 0. □

So, at each x ∈ M, we have an interesting map. Let

    W_x ⊂ T_x M
be
    W_x = {v ∈ T_x M : γ_v is defined at least for time t = 1}.

Definition 17.8. The exponential map at x is
    exp_x : W_x → M, v ↦ γ_v(1).

Remark 17.9. This is highly dependent on the choice of metric g. If we have two different metrics, we can have two different exponential maps. We use uniqueness of the Levi-Civita connection to define the exponential map as above.

Now, let's see some examples.

Example 17.10. Let M = (R^n, g_std). Then
    exp_x : W_x → R^n, v ↦ γ_v(1).
If we have a point x ∈ R^n and a tangent vector v ∈ T_x M, recall γ is a geodesic in (R^n, g_std) if and only if γ̈ = 0. So, γ_v(t) = x + tv. Note exp_x is smooth, a surjection, and an injection. Though, the injection is quite special to this case.

Example 17.11. Let M = (S¹, g), where g = j* g_std and j : S¹ → R² is the usual inclusion. Of course, in this case, the exponential map can't be a diffeomorphism, because R is not compact and S¹ is. So, exp_x is not an injection, but it is a covering map. Choosing T_x S¹ ≅ R, one can show that exp(t) = e^{it} ∈ S¹.

Example 17.12. Let M = (S^n, g), with g = j* g_std and j : S^n → R^{n+1}. In our homework, we'll show that the geodesics of (S^n, g) in this metric are great circles with constant speed. So, exp_x is an injection on the open ball in T_x S^n of radius at most πR. It is a surjection but not a covering map.

Remark 17.13. Interesting theorems in differential geometry are often about things where you ignore the differentiable structure, and just show something about topology, like the Gauss-Bonnet theorem and Hadamard's theorem.

Definition 17.14. (M, g) is complete if geodesics exist for all time. By rescaling, this is equivalent to exp_x being defined on all of T_x M for all x.

Theorem 17.15 (Hadamard). Let (M, g) be a connected complete Riemannian manifold so that the sectional curvature is ≤ 0 everywhere. Then, for all points x ∈ M, the exponential map
    exp_x : T_x M → M
is a covering map.

Proof. Not given now. □

Question 17.16. How severe is the completeness condition?

Answer: it always works for compact manifolds. However, even R^n isn't necessarily complete: we can take a diffeomorphism R^n ≅ U ⊊ R^n, and if we give the first R^n the metric induced from the second, the metric won't be complete.
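The sphere computation in Example 17.12 can be illustrated numerically. The closed form exp_x(v) = cos|v| x + sin|v| v/|v| for the unit sphere (radius R = 1) is standard; the code below is my own sketch checking that exp_x lands on the sphere and that every unit vector of length π is sent to the antipode, which is the failure of injectivity mentioned above.

```python
import math

# Sketch (mine, unit sphere): on S^n ⊂ R^{n+1} with the induced metric,
# geodesics are great circles, gamma_v(t) = cos(|v| t) x + sin(|v| t) v/|v|,
# so exp_x(v) = cos|v| x + sin|v| v/|v| for v in T_x S^n (i.e. <x, v> = 0).

def exp_x(x, v):
    nv = math.sqrt(sum(c * c for c in v))
    if nv == 0.0:
        return x
    return tuple(math.cos(nv) * a + math.sin(nv) * b / nv
                 for a, b in zip(x, v))

x = (1.0, 0.0, 0.0)
v = (0.0, 0.3, 0.4)                  # tangent vector: <x, v> = 0, |v| = 0.5
p = exp_x(x, v)
assert abs(sum(c * c for c in p) - 1.0) < 1e-12    # lands on the sphere

# Non-injectivity at radius pi: every unit direction reaches the antipode.
for w in ((0.0, 1.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.6, 0.8)):
    pw = exp_x(x, tuple(math.pi * c for c in w))
    assert all(abs(pc + xc) < 1e-12 for pc, xc in zip(pw, x))
```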

Corollary 17.17. If M is a smooth connected manifold and admits a complete metric of nonpositive sectional curvature, then

πi(M) = 0, i ≥ 2 where πi are the homotopy groups.

Proof. By Hadamard's theorem, the universal cover of M is T_x M ≅ R^n, and covering maps induce an isomorphism on π_i for i > 1. □

Example 17.18. Among compact orientable surfaces, there are ∅, S², the torus T, and surfaces of higher genus. The sphere cannot admit a g of nonpositive curvature: S² is its own universal cover, while if it admitted a metric of nonpositive curvature, its universal cover would be R². The torus is R²/Z², and it admits a g of curvature 0, as it is locally isometric to R².

Warning 17.19. Caution: this flat metric does not equal the metric inherited from R³. Indeed, for the inherited metric, a region on the inner side of the torus looks like a saddle, which shows that metric is not flat; the outer side of the torus has positive curvature.

All higher genus surfaces admit metrics of constant negative curvature.

Theorem 17.20 (Hopf-Rinow). A Riemannian manifold (M, g) is complete (as a Riemannian manifold) if and only if it is complete as a metric space; that is, Cauchy sequences converge.

Proof. Not given now. □

17.2. Properties of the exponential map. Now, we move on to proving things about the exponential map.

Proposition 17.21. There is a neighborhood 0 ∈ U′ ⊂ W_x ⊂ T_x M so that

    exp_x |_{U′} : U′ → M
is a diffeomorphism onto its image.

Proof. We'll use the inverse function theorem, and show that the derivative at 0 is invertible. Let's examine
    D exp_x |_0 : T_0(T_x M) → T_x M.
We want to show this is invertible. So, we have the diagram

(17.1)
    T_0(T_x M) --D exp_x|_0--> T_x M
         ↖ ≅                ↗ id
               T_x M

We claim this diagram commutes. What is the isomorphism T_x M → T_0 T_x M? Given v ∈ T_x M, consider the curve
    c_v : R → T_x M, t ↦ tv.
Then ċ_v(0) ∈ T_0(T_x M), and
    T_x M → T_0 T_x M, v ↦ ċ_v(0)
is the isomorphism.

Question 17.22. What is the composite
    T_x M → T_0 T_x M --D exp_x|_0--> T_x M?

We send
    v ↦ D(exp_x ∘ c_v)(∂/∂t)|_{t=0}
by the chain rule. Then,
    exp_x ∘ c_v : t ↦ exp_x(tv) = γ_{tv}(1) = γ_v(t).
So, D(exp_x ∘ c_v)(∂/∂t)|_{t=0} = γ̇_v(0), and by definition of γ_v, we have γ̇_v(0) = v. □

Remark 17.23. By the rescaling property, W_x is a star-shaped neighborhood of 0 ∈ T_x M, meaning that for every v ∈ W_x, the segment from 0 to v is contained in W_x.

Definition 17.24. An open neighborhood x ∈ U ⊂ M is called a normal neighborhood if it is the image of
    {v : |v| < ε} ⊂ T_x M, for some ε > 0,
under exp_x.

We can do even better than the proposition.

Proposition 17.25. Let W ⊂ TM be
    {(x, v) ∈ TM : γ_v is defined for at least t = 1}.
Consider the map
    exp : W → M × M, (x, v) ↦ (x, exp_x(v)).
For all x ∈ M, there exists a neighborhood (x, 0) ∈ U ⊂ W so that exp|_U is a diffeomorphism onto its image.

Proof. Use the inverse function theorem. Locally, we have U × R^k ≅ TU ⊂ TM, so exp locally defines a map U × R^k → M × M. Its derivative D exp|_{(x,0)} is a map
    T_{(x,0)}(U × R^k) = T_x U ⊕ T_0 R^k → T_{(x,x)}(M × M) = T_x M ⊕ T_x M,
so we can write it as a block matrix. We claim this matrix is of the form
    ( I  0 )
    ( *  I ),
which would imply it is invertible. The top row is the derivative of (x, v) ↦ x, which is the identity in x and independent of v; the bottom-right block is D exp_x|_0 = id, as we showed in the previous proposition. □

Here are some corollaries.

Corollary 17.26. We can choose some U_x ⊂ M open and some ε > 0 so that
    U_ε := {(y, v) ∈ TM : y ∈ U_x, |v| < ε} ⊂ U,
where U is the neighborhood on which exp is a diffeomorphism from the proposition, and |v| = √(g_y(v, v)). (Note that U_ε is open because the Riemannian metric is continuous.) Then, we can choose W_x ⊂ M with x ∈ W_x so that W_x × W_x ⊂ exp(U_ε). That is: for all x, there exist ε > 0 and an open W_x ∋ x so that for any two points y_0, y_1 ∈ W_x, there exists a unique geodesic
    γ : [0, 1] → M, γ(0) = y_0, γ(1) = y_1,
of length < ε from y_0 to y_1.

Proof. Given (y_0, y_1) ∈ W_x × W_x, we have a unique preimage (y_0, v) in exp^{-1}(W_x × W_x). We know |v| < ε by construction, so the length of the geodesic t ↦ exp_{y_0}(tv) is |v| < ε. It is unique because any geodesic of length < ε between the two points is given by some vector in U_ε, and exp is a diffeomorphism of U onto its image. □

Remark 17.27. This is better than exp_x being a diffeomorphism near 0: the earlier statement doesn't say how points near x are related to each other, but this stronger version does.

Remark 17.28. The geodesic γ need not be a map γ : [0, 1] → W_x. It might escape W_x and pass through other parts of M.

Remark 17.29. With work, you can choose W_x so that γ is contained in W_x.

18. 11/5/15

18.1. Review. Last time, we fixed g, ∇, M and defined the exponential map by
    exp : W → M, (x, v) ↦ γ_v(1).
Less generally, we defined
    exp_x : W_x → M
by restriction of exp to x. We proved that exp_x is a diffeomorphism near the origin 0 ∈ T_x M, and also that
    W → M × M, (x, v) ↦ (x, exp_x(v))
is a diffeomorphism near (x, 0). We obtained the corollary:

Corollary 18.1. For all x ∈ M, there are ε > 0 and an open x ∈ W_x ⊂ M so that for all y_0, y_1 ∈ W_x there is a unique geodesic γ : [0, 1] → M from y_0 to y_1 of length < ε.

Proof. Done last time. □

Today, we’ll show geodesics are locally length minimizing.

18.2. Geodesics and length.

Example 18.2. Geodesics are not globally length minimizing. For example, from the homework, we know the geodesics on the sphere are the great circles. Between two non-antipodal points, we can go the short way or the long way around a great circle, and one of these is longer than the other.

Our first goal for today is to prove:

Proposition 18.3. For all x ∈ M there is ε > 0 so that if
(1) y ∈ exp_x(B_ε(0)), and
(2) w : [0, 1] → M is a piecewise smooth curve from x to y,
then len(w) ≥ len(γ), where γ is the unique geodesic of length < ε from x to y. Moreover, equality holds if and only if im w = im γ and w is an immersion.

Remark 18.4. We will use U to distinguish sets in the tangent space from their corresponding images U under the exponential map.

Proof. In order to prove the proposition, we first give some lemmas.

Lemma 18.5 (Geodesics are orthogonal to distance level sets). Let U_x ⊂ M be a normal neighborhood; recall this means U_x = exp_x(U_x). Let S_δ ⊂ U_x ⊂ T_x M be a sphere of radius δ. Then, every geodesic γ_v from x meets S_δ := exp_x(S_δ) orthogonally.

Proof. It suffices to show this perpendicularity against each curve c on the sphere S_δ. Fix a curve c : [a, b] → S_1 ⊂ T_x M of constant length |c| = 1. Consider the surface
    R × [a, b] → T_x M, (t, φ) ↦ t c(φ),
and call the composition
    f : R × [a, b] → T_x M --exp_x--> M.
Note that for φ_0 fixed, f(•, φ_0) is a geodesic, and for t_0 fixed, f(t_0, •) is a curve on S_{t_0}. We claim these two families of curves are orthogonal. It suffices to show
    ⟨∂f/∂t, ∂f/∂φ⟩ = 0,
by which we mean ⟨D exp(∂/∂t), D exp(∂/∂φ)⟩.

Pulling back ∇ on TM to ∇̃ on f*TM, we can look at how the inner product changes:
    ∂/∂t ⟨∂f/∂t, ∂f/∂φ⟩ = ⟨∇̃_{∂/∂t} ∂f/∂t, ∂f/∂φ⟩ + ⟨∂f/∂t, ∇̃_{∂/∂t} ∂f/∂φ⟩
                        = ⟨∂f/∂t, ∇̃_{∂/∂t} ∂f/∂φ⟩,
since each f(•, φ) is a geodesic. Since ∇ is torsion free, we have
    ∇_X Y = ∇_Y X + [X, Y].
But
    [∂/∂t, ∂/∂φ] = 0
in R², so
    ∇̃_{∂/∂t} ∂f/∂φ = ∇̃_{∂/∂φ} ∂f/∂t.
(We can swap derivatives like this precisely because the coordinate vector fields commute, even though they aren't orthonormal.) But now,
    ⟨∂f/∂t, ∇̃_{∂/∂φ} ∂f/∂t⟩ = ½ ∂/∂φ ⟨∂f/∂t, ∂f/∂t⟩ = 0,
because we chose c to have constant length 1. Then, at t = 0,
    ⟨∂f/∂t, ∂f/∂φ⟩ = ⟨c(φ), 0⟩ = 0,
and since its t-derivative vanishes, the pairing is 0 for all t. □

Remark 18.6. This is one reason U_x is called a normal neighborhood.

Definition 18.7.
(1) A function γ : [a, b] → M is smooth if γ extends to a smooth function on [a − ε, b + ε].
(2) A function γ : [a_0, a_n] → M is called piecewise smooth if there exist a_0 < a_1 < ··· < a_{n−1} < a_n so that each γ|_{[a_i, a_{i+1}]} is smooth.

Lemma 18.8. Let
    w : [a, b] → U_x \ {x}
be piecewise smooth. Note that w can be written as
    w(φ) = exp_x(r(φ) c(φ)),
where 0 < r(φ) < ε and |c(φ)| = 1. Then, len(w) ≥ |r(b) − r(a)|.

Further, there is equality if and only if r is monotone (meaning the derivative of r always has the same sign) and c is constant.

Proof. Again, set
    f : R × [a, b] → M, (t, φ) ↦ exp_x(t c(φ)),
so w(φ) = f(r(φ), φ), and
    ẇ = Df(∂/∂t) ∂r/∂φ + Df(∂c/∂φ).
Now, because the two terms in the last equality are orthogonal by Lemma 18.5, we have
    ⟨ẇ, ẇ⟩ = |∂r/∂φ|² + ⟨Df(∂c/∂φ), Df(∂c/∂φ)⟩ ≥ |∂r/∂φ|².
So, equality holds if and only if ∂c/∂φ = 0, i.e., c is constant. Then, integrating square roots, we have
    ∫_a^b |ẇ| dφ ≥ ∫_a^b |∂r/∂φ| dφ ≥ |r(b) − r(a)|,
with equality in the second inequality if and only if r is monotone. □

Finally, we are ready to prove the main proposition, which isn't too difficult given our lemmas. Recall we're trying to prove that geodesics locally give the paths of shortest distance. Suppose
    w : [a, b] → M, a ↦ x, b ↦ y
is piecewise smooth and y ∈ W_x. Let y = exp_x(r · c), where r ∈ R and c is a unit vector. Let S_r, S_δ be the images under the exponential map of the spheres of radii r, δ. The curve w goes from x to y, so for every real number δ > 0, some segment of w goes from S_δ to S_r. By Lemma 18.8, the length of this segment is at least r − δ. Now, len(w) is at least the length of this segment, and since the inequality holds for all δ > 0, we have len(w) ≥ r.

Now, suppose
    im w ≠ im γ.
Then w fails to be radial in some shell of spheres S_δ, δ < ε, so there the inequality of Lemma 18.8 is strict, and len(w) > r = len(γ). □
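The inequality in Lemma 18.8 is easy to see numerically in the flat plane, where exp_x is the identity. A sketch of my own (the spiral example and helper names are mine, not from the notes):

```python
import math

# Flat-plane check (mine) of Lemma 18.8: for w(phi) = r(phi)(cos phi, sin phi)
# in R^2, the speed is |w'| = sqrt(r'(phi)^2 + r(phi)^2), so
# len(w) >= |r(b) - r(a)|, the angular motion only ever adding length.

def length(r, dr, a, b, n=20000):
    total, h = 0.0, (b - a) / n
    for i in range(n):                    # midpoint rule
        phi = a + (i + 0.5) * h
        total += math.sqrt(dr(phi) ** 2 + r(phi) ** 2) * h
    return total

r = lambda phi: 1.0 + phi / (2 * math.pi)   # spiral from radius 1 to radius 2
dr = lambda phi: 1.0 / (2 * math.pi)
L = length(r, dr, 0.0, 2 * math.pi)
assert L > abs(r(2 * math.pi) - r(0.0))     # strictly longer: c is not constant
```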

We now give some consequences of the above proposition.

Deﬁnition 18.9. Let

    d(x, y) := inf { len(γ) : γ a piecewise smooth curve from x to y }.

Lemma 18.10. This d(x, y) makes M into a metric space.

Proof. The only difﬁcult part is that this is nondegenerate. This follows from the above Proposition 18.3.

Remark 18.11. In fact, the identity map (M, d) → M is a homeomorphism.

Here’s a fun side theorem.

Theorem 18.12 (Nash Embedding Theorem). For every second countable Riemannian manifold (M, g), there is an embedding M → R^N so that g is pulled back from R^N.

Proof. Very difficult; we definitely won't do this in class. □

Corollary 18.13. Let w : [a, b] → M be piecewise smooth and assume
    len(w) = d(w(a), w(b)).
Then, w is a geodesic, and can be reparameterized so that w is smooth.

Proof. If we knew w were a geodesic, it would certainly be smooth after reparameterization. If w minimizes length, then it locally minimizes length: given two nearby points on the curve, the portion of w between them must be a shortest path between them, or else we could swap in a shorter path and concatenate, getting a shorter path from w(a) to w(b). So, by Proposition 18.3, each such portion is a geodesic. □

Remark 18.14. Whenever the exponential map at x is a surjection, every point of M is joined to x by a geodesic.

Our next goal is the following lemma. Recall a metric is complete if all geodesics exist for all time.

Lemma 18.15. Suppose (M, g) is a complete Riemannian manifold. Then, for all x ∈ M,

    exp_x : T_x M → M
is a surjection.

18.2.1. Idea of proof. The idea is the following. Pretend we are in kindergarten. We know the exponential map is locally a diffeomorphism. We take a small sphere around x; since the sphere is compact, there's a point on it of minimum distance to y. We can then flow along the geodesic through that point, keep repeating, and eventually end up at y.

Proof. Fix y ∈ M so that d(x, y) = r. We need to show y = exp_x(r · v) for some v. Fix δ small and consider S_δ = exp_x(S_δ). Since d(•, y) : M → R is continuous, its restriction
    d(•, y) : S_δ → R
is continuous and attains a minimum at some x_0 ∈ S_δ, since S_δ is compact. Then, write
    x_0 = exp_x(δv),
where |v| = 1, v ∈ T_x M. The main claim is the following:

Lemma 18.16. For all t ∈ [δ, r], we have
    d(γ_v(t), y) = r − t.

Proof. Note, this is true for t = δ, because
    d(x, y) = inf_{z ∈ S_δ} (d(x, z) + d(z, y))
            = inf_{z ∈ S_δ} (δ + d(z, y))
            = δ + inf_{z ∈ S_δ} d(z, y)
            = δ + d(x_0, y),
so d(x_0, y) = d(x, y) − δ = r − δ. So, the claim is proven for t = δ. Now, let t_max be the supremum over all t for which the claim holds. Then, the claim holds for t_max by continuity. We need to show that t_max < r leads to a contradiction. Let
    x′ = γ_v(t_max),
choose a small sphere S_{δ′}(x′), and again minimize d(•, y) over it, at a point x_1. Then,
    d(x′, y) = inf_{z ∈ S_{δ′}} (d(x′, z) + d(z, y)) = δ′ + d(x_1, y),
and so
    d(x_1, y) = r − t_max − δ′.
We are then done by the triangle inequality, because we get a further extension of the path. □

This implies Lemma 18.15 by taking t = r. □

19. 11/10/15

19.1. Preliminary questions.

Question 19.1. What is the definition of an integrable submanifold? If E ⊂ TM and x ∈ M, then an integrable submanifold is an immersion j : U → M with x ∈ j(U) and Dj(TU) = E|_{j(U)}.

Today, we'll finish the proof of Hopf-Rinow and discuss curvature.

19.2. Hopf-Rinow. Recall, we were proving the following proposition last time:

Proposition 19.2. If (M, g) is complete, then for all x ∈ M, the map exp_x : T_x M → M is a surjection. In fact, for all y ∈ M there is a geodesic of length d(x, y) from x to y.

Proof. Recall where we left off in the proof last Thursday: we fixed a small δ and looked at the image of S_δ under the exponential map. We found an x_0 which minimizes d(•, y) : S_δ → R. Let r = d(x, y). We considered the set
    {t ∈ [δ, r] : d(γ_v(t), y) = r − t}.
We claimed that we were almost done by the triangle inequality. We now pick up where we left off. Consider t_max = sup of the above set, and suppose t_max < r; let's find a contradiction. We take a point x′ := γ_v(t_max). Then we repeat the above argument: minimize d(•, y) : S_{δ′}(x′) → R and obtain a point x_1 minimizing this function. Concatenating the geodesic from x to x′ with the radial geodesic from x′ to x_1, we get a piecewise smooth curve from x to x_1. We claim:

Lemma 19.3. We have that the length of this piecewise smooth curve from x to x1 is d(x, x1). Proof. Here, we’ll use the triangle inequality twice. We have 0 0 (19.1) d(x, x1) ≤ d(x, x ) + d(x , x)

(19.2) d(x, y) ≤ d(x, x1) + d(x1, y) So, we have

d(x, y) − d(x1, y) ≤ d(x, x1) ≤ d(x′, x1) + d(x, x′), since

d(x′, y) = δ′ + d(x1, y).

So,

r + δ′ − d(x′, y) ≤ d(x, x1) ≤ δ′ + d(x, x′)
r − d(x′, y) ≤ d(x, x1) − δ′ ≤ d(x, x′)
tmax ≤ d(x, x1) − δ′ ≤ tmax.

Finally, the claim holds because

d(x, x1) = tmax + δ′ = d(x, x′) + d(x′, x1).

By our lemma from last class, any piecewise smooth curve minimizing distance is a geodesic, and, in particular, is smooth. Therefore, x1 = γv(tmax + δ′). This finishes the proof because

d(x1, y) + d(x′, x1) = d(x′, y), that is, d(γv(tmax + δ′), y) + δ′ = r − tmax.

But this concludes the proof because

d(γv(tmax + δ′), y) = r − (tmax + δ′),

and so tmax was not maximal.

Now, let's complete the proof of Hopf–Rinow.

Theorem 19.4. The following are equivalent:
(a) The Riemannian manifold (M, g) is (geodesically) complete.
(b) Any bounded subset of M has compact closure.
(c) M is complete as a metric space.

Proof. First, we show (a) ⇒ (b). Let A be bounded. Fix x ∈ M and let

d := sup_{y∈A} d(x, y) < ∞.

Then, A lies in expx(Bd(0)), by Proposition 19.2. Now, since the closed ball Bd(0) ⊂ TxM is compact, its image under expx is also compact, and hence it contains the closure of A. Therefore, the closure of A is also compact.

Next, we show (b) ⇒ (c). Let (xi) be Cauchy. Then {xi} is bounded, so its closure is compact, and so (xi) has a convergent subsequence; a Cauchy sequence with a convergent subsequence converges.

Finally, to show (c) ⇒ (a), we only need to show any geodesic extends indefinitely. So, fix x and v ∈ TxM. Let tmax be the supremum over all t for which γv(t) is defined. The set of such t is open by the existence theorem for ODEs. So, it suffices to show γv(t) is defined at tmax. Now, fix ti → tmax with ti < tmax. Then, (γv(ti)) is Cauchy.

So, lim_{ti→tmax} γv(ti) exists.

19.3. Curvature. Curvature is one of the most confusing concepts because there are many types, but they are often all called curvature. Further, in many ways, the study of differential geometry is the study of curvature. Today, we'll define some basic types of curvature, and maybe talk about what flatness implies. Recall the following:

Remark 19.5. Given a connection ∇ on E, there exists a unique map D : Ω1(E) → Ω2(E) respecting the Leibniz rule

D(α ⊗ s) = dα ⊗ s − α ∧ ∇s,

where the minus sign comes from the sign rule (−1)^|α|. Here, α ⊗ s ∈ Ω1(E) = Γ(T∨M ⊗ E). In this case, D ◦ ∇ ≠ 0 in general. So, recall ∇ is flat when D ◦ ∇ = 0. If you're algebraically minded: if we had a flat connection, we would get an invariant of the cochain complex given by taking cohomology.

Proposition 19.6. The map D ◦ ∇ : Γ(E) → Ω2(E) is C∞(M)-linear.

Proof. We check, working in local coordinates with ∇s = α^j ⊗ s_j. Then,

(D ◦ ∇)(fs) = D(df ⊗ s + f∇s)
= D(df ⊗ s + fα^j ⊗ s_j)
= d²f ⊗ s − df ∧ ∇s + d(fα^j) ⊗ s_j − fα^j ∧ ∇s_j
= 0 − df ∧ α^j ⊗ s_j + (df ∧ α^j + f dα^j) ⊗ s_j − fα^j ∧ ∇s_j
= f (dα^j ⊗ s_j − α^j ∧ ∇s_j)
= f (D ◦ ∇(s)),

completing the proof.

Corollary 19.7. We can think of D ◦ ∇ as an End(E)-valued 2-form.

Proof. Immediate from the above proposition.

Definition 19.8. By the above, we mean that given X, Y ∈ Γ(TM), we can take

(D ◦ ∇)_{X,Y} : Γ(E) → Γ(E),

by evaluating the map D ◦ ∇ : Γ(E) → Ω2(E) at X, Y, that is, composing it with the map Ω2(E) → Γ(E) given by evaluation at X, Y. Further, the map (D ◦ ∇)_{X,Y} is C∞(M)-linear. Now, in general, we know

hom_{C∞(M)}(Γ(E), Γ(F)) ≅ hom(E, F) ≅ Γ(Hom(E, F)).

Then, taking E = F, we know hom_{C∞(M)}(Γ(E), Γ(E)) ≅ Γ(End(E)). Call this 2-form F ∈ Ω²(End(E)) by F∇. We call F∇ the curvature tensor of ∇.

Proposition 19.9. For X, Y ∈ Γ(TM) we have

F∇(X, Y) = ∇_X∇_Y − ∇_Y∇_X − ∇_{[X,Y]}.

Proof. Locally, choose a basis s1, ..., sk of Γ(E|U). Set ∇s_i = α_i^j ⊗ s_j. Then,

F∇(X, Y)(s_i) = (D ◦ ∇ s_i)(X, Y)
= (dα_i^j ⊗ s_j − α_i^j ∧ α_j^k ⊗ s_k)(X, Y)
= (X(α_i^j(Y)) − Y(α_i^j(X)) − α_i^j([X, Y])) s_j − (α_i^j(X)α_j^k(Y) − α_i^j(Y)α_j^k(X)) s_k.

Next, note

∇_X∇_Y s_i = ∇_X(α_i^j(Y) s_j) = X(α_i^j(Y)) s_j + α_i^j(Y) ∇s_j(X) = X(α_i^j(Y)) s_j + α_i^j(Y) α_j^k(X) s_k.

This gives two terms of the left hand side. Similarly, switching X and Y, we get two other terms of the left hand side. So we obtain

(X(α_i^j(Y)) − Y(α_i^j(X)) − α_i^j([X, Y])) s_j − (α_i^j(X)α_j^k(Y) − α_i^j(Y)α_j^k(X)) s_k = (∇_X∇_Y − ∇_Y∇_X − ∇_{[X,Y]}) s_i.

Now, this identity is linear over R, and so it holds for all sections.

Remark 19.10. We thus have a map Γ(E) → Γ(E) given by

∇_X∇_Y − ∇_Y∇_X − ∇_{[X,Y]},

which is C∞(M)-linear.
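As a sanity check of the local formula above (my addition, not from the lecture), one can verify with sympy on a trivial rank-2 bundle over R² that ∇_X∇_Y − ∇_Y∇_X agrees with the matrix 2-form built from the connection matrices. The matrices A = α(∂/∂x) and B = α(∂/∂y) below are arbitrary choices; with the convention that they act on component columns, the curvature evaluated on (∂/∂x, ∂/∂y) is ∂xB − ∂yA + [A, B] (the sign of the commutator depends on whether the matrix acts on the frame or on the components):

```python
import sympy as sp

x, y = sp.symbols('x y')

# connection matrices of 1-forms on a trivial rank-2 bundle over R^2:
# alpha = A dx + B dy, with A, B arbitrary matrices of functions
A = sp.Matrix([[x*y, 1], [0, sp.sin(x)]])
B = sp.Matrix([[y, x**2], [x + y, 0]])

def nabla_x(s):  # covariant derivative along d/dx: ds/dx + A s
    return s.diff(x) + A * s

def nabla_y(s):  # covariant derivative along d/dy: ds/dy + B s
    return s.diff(y) + B * s

# a section with arbitrary smooth components
f, g = sp.Function('f')(x, y), sp.Function('g')(x, y)
s = sp.Matrix([f, g])

# curvature applied to s, with X = d/dx, Y = d/dy (so [X, Y] = 0)
lhs = nabla_x(nabla_y(s)) - nabla_y(nabla_x(s))

# matrix 2-form evaluated on (d/dx, d/dy): dB/dx - dA/dy + [A, B]
F = B.diff(x) - A.diff(y) + (A*B - B*A)
rhs = F * s

assert sp.expand(lhs - rhs) == sp.zeros(2, 1)
```

Note that all second derivatives of the section cancel in the difference of the two covariant derivatives, which is exactly why the curvature is tensorial.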

Definition 19.11. If a map is C∞(M)-linear it is often called a tensor, or tensorial. This terminology is most commonly used when E = TM.

Remark 19.12. Being C∞(M)-linear is super useful! This is because the value of the map at a point depends only on the values of its arguments at that point, and not on their values in a neighborhood of that point. More precisely, if Xx = X̃x, Yx = Ỹx, and s(x) = s̃(x), then F(X, Y)(s)(x) = F(X̃, Ỹ)(s̃)(x).

Proposition 19.13. Let j : (M̃, g̃) → (M, g) be an isometry. Then:
(1) The map Dj : Γ(TM̃) → Γ(TM) satisfies Dj(∇̃_X̃ Ỹ) = ∇_{Dj(X̃)} Dj(Ỹ).
(2) We have Dj(F̃(X̃, Ỹ)Z̃) = F(DjX̃, DjỸ) DjZ̃,
where ∇, ∇̃ are the Levi-Civita connections and F̃, F are their curvature tensors.

Proof. The first part follows from our homework problem, where we showed that the pullback of a Levi-Civita connection is the Levi-Civita connection. The second part follows from plugging the first part into

∇X∇Y − ∇Y ∇X − ∇[X,Y]

Definition 19.14. In this setting, where E is a vector bundle with a Riemannian metric gE on it, we have a function

Γ(TM) × Γ(TM) × Γ(E) × Γ(E) → Γ(R)

(X, Y, Z, W) ↦ ⟨F∇(X, Y)Z, W⟩.

Note that ∇ need not be related to gE. However, because all parts of this expression are C∞(M)-linear, the map as a whole is. And, when E = TM and ∇ is the Levi-Civita connection, the above map is called the Riemann curvature tensor.

Remark 19.15. At the end of the day, we have R : Γ(TM) × Γ(TM) × Γ(TM) × Γ(TM) → C∞(M).

19.4. Towards some properties and intuition on curvature tensors.

Example 19.16. Consider a chart φ : M ⊃ U → Rn. Let ∂/∂xi be the coordinate vector fields. We know

[∂/∂xi, ∂/∂xj] = 0.

So,

F(∂/∂xi, ∂/∂xj) = ∇_{∂/∂xi}∇_{∂/∂xj} − ∇_{∂/∂xj}∇_{∂/∂xi}.

That is, F measures the failure of ∇ to commute. Recall, given a connection ∇ : Γ(E) → Ω1(E), we obtain a covariant derivative along X

∇X : Γ(E) → Γ(E).

Definition 19.17. We introduce the notation F_{X,Y}Z := F(X, Y)Z.

Proposition 19.18. For E = TM and ∇ the Levi-Civita connection, we have:
(1) ⟨F_{X,Y}Z, W⟩ = −⟨F_{X,Y}W, Z⟩
(2) ⟨F_{X,Y}Z, W⟩ = −⟨F_{Y,X}Z, W⟩
(3) F_{X,Y}Z + F_{Y,Z}X + F_{Z,X}Y = 0
(4) ⟨F_{X,Y}Z, W⟩ = ⟨F_{Z,W}X, Y⟩

Proof. First, we prove (1). Note

0 = (X ◦ Y − Y ◦ X − [X, Y])⟨Z, W⟩

by the definition of the Lie bracket. Let's now write out the first term.

X ◦ Y(⟨W, Z⟩) = X(⟨∇_Y W, Z⟩ + ⟨W, ∇_Y Z⟩)
= ⟨∇_X(∇_Y W), Z⟩ + ⟨∇_X W, ∇_Y Z⟩ + ⟨∇_Y W, ∇_X Z⟩ + ⟨W, ∇_X∇_Y Z⟩.

By symmetry, we have

−Y ◦ X⟨W, Z⟩ = −⟨∇_Y∇_X W, Z⟩ − ⟨W, ∇_Y∇_X Z⟩ − ⟨∇_X W, ∇_Y Z⟩ − ⟨∇_Y W, ∇_X Z⟩.

Then, substituting, we find

0 = (X ◦ Y − Y ◦ X − [X, Y])⟨Z, W⟩
= ⟨F_{X,Y}W, Z⟩ + ⟨W, F_{X,Y}Z⟩.

The proof of (2) is obvious, since F is a 2-form. To prove (3), note that since F is C∞(M)-linear, it suffices to check this on local linearly independent sections. Let's choose X, Y, Z to be local vector fields so that their Lie brackets are 0. Then, (3) becomes

F_{X,Y}Z = ∇_X∇_Y Z − ∇_Y∇_X Z − 0
F_{Y,Z}X = ∇_Y∇_Z X − ∇_Z∇_Y X − 0
F_{Z,X}Y = ∇_Z∇_X Y − ∇_X∇_Z Y − 0.

Then, we have

∇_X Y − ∇_Y X = [X, Y] = 0,

because the connection is Levi-Civita, hence torsion-free, and we chose the brackets to vanish. Using this, the above six terms all cancel out. The proof of (4) is omitted, because it is tedious.
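As a concrete sanity check (my addition, not from the lecture), the symmetries above can be verified symbolically for the round 2-sphere, computing R_{ijkl} = ⟨F(∂i, ∂j)∂k, ∂l⟩ from the Christoffel symbols with sympy:

```python
import sympy as sp

# round unit 2-sphere, coordinates (theta, phi), metric g = diag(1, sin(theta)^2)
th, ph = sp.symbols('theta phi')
x = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])
ginv = g.inv()
n = 2

# Christoffel symbols Gamma[k][i][j] = Gamma^k_{ij}
Gamma = [[[sum(ginv[k, l]*(g[j, l].diff(x[i]) + g[i, l].diff(x[j]) - g[i, j].diff(x[l]))
               for l in range(n)) / 2 for j in range(n)] for i in range(n)] for k in range(n)]

def Rup(l, k, i, j):
    # F(d_i, d_j) d_k = Rup(l, k, i, j) d_l
    val = Gamma[l][j][k].diff(x[i]) - Gamma[l][i][k].diff(x[j])
    val += sum(Gamma[l][i][m]*Gamma[m][j][k] - Gamma[l][j][m]*Gamma[m][i][k] for m in range(n))
    return sp.simplify(val)

def R(i, j, k, l):
    # R_{ijkl} = <F(d_i, d_j) d_k, d_l>
    return sp.simplify(sum(g[l, m]*Rup(m, k, i, j) for m in range(n)))

idx = range(n)
for i in idx:
    for j in idx:
        for k in idx:
            for l in idx:
                assert sp.simplify(R(i, j, k, l) + R(j, i, k, l)) == 0   # skew in X, Y
                assert sp.simplify(R(i, j, k, l) + R(i, j, l, k)) == 0   # skew in Z, W
                assert sp.simplify(R(i, j, k, l) - R(k, l, i, j)) == 0   # pair symmetry
                assert sp.simplify(R(i, j, k, l) + R(j, k, i, l) + R(k, i, j, l)) == 0  # Bianchi

# the sectional curvature of the unit sphere is 1
K = sp.simplify(R(0, 1, 1, 0) / (g[0, 0]*g[1, 1] - g[0, 1]**2))
assert K == 1
```

The last line anticipates the definition of sectional curvature given in the next lecture.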

Remark 19.19. Here is some geometric intuition. We have an algebraic definition of ∇, but we can view connections more geometrically. Suppose we have a connection on E, a vector bundle on a manifold M, and a point v ∈ Ex. Given a curve γ through x, we can find a parallel section γ̃ of E along γ through v. Looking at lifts of γ̇ ∈ TxM to γ̃̇ ∈ TvE, at every such v we get a subspace Hv ⊂ TvE. This is a distribution H ⊂ TE. Moreover, the projection Dπ : Hv → TxM is an isomorphism. So, every connection gives a distribution. We can ask whether this distribution is integrable, or involutive. If it is, we can find a submanifold passing through each point tangent to H. It turns out that ∇ being flat corresponds to H being integrable. This is called a connection because it connects two different fibers.

20. 11/12/15

20.1. Types of curvatures. Today, we'll discuss every type of curvature you might see in an introductory differential geometry textbook, which is a lot of types! Recall from last time:

Definition 20.1. We are given the map

R : Γ(TM⊗4) → C∞(M)
X ⊗ Y ⊗ Z ⊗ W ↦ ⟨F_{X,Y}Z, W⟩,

where F ∈ Ω2(End(TM)) is the curvature 2-form and

F_{X,Y} = ∇_X∇_Y − ∇_Y∇_X − ∇_{[X,Y]}.

Recall at the end of last class, we stated some properties of this tensor. Here is a corollary of those.

Corollary 20.2. We have

(20.1) The map Γ(TM) ⊗ Γ(TM) ⊗ Γ(TM) ⊗ Γ(TM) → Γ(R) factors through Γ(∧2TM) ⊗ Γ(∧2TM), and in turn through Sym2(∧2TM).

That is, R defines a symmetric bilinear map ∧2TM ⊗ ∧2TM → C∞(M). In particular, for each x, a bilinear map ∧2TxM ⊗ ∧2TxM → R.

Proof. Each of the two factorizations follows from different parts of the proposition from last time.

Here is a summary of the types of curvature we'll encounter:
(1) Riemann curvature tensor: a map Γ(TM)⊗4 → C∞(M), or equivalently Sym2(Γ(∧2TM)) → C∞(M).
(2) Ricci curvature tensor: Ric : Γ(TM) ⊗ Γ(TM) → C∞(M).
(3) Scalar curvature: S : M → R, which is given by taking the trace of the Ricci curvature.
(4) Sectional curvature: a map Gr2(TM) → R. In fact, the Riemann curvature tensor is equivalent to the sectional curvature.

20.2. Review of Linear Algebra.

Question 20.3. What is a trace?

Perhaps you usually think of trace as a function from matrices to R, computed by summing the elements along the diagonal. However, defined this way, the trace is apparently basis dependent. It would be nice to have a basis-free expression for the trace. Here it is:

Definition 20.4. Let

φ : V ⊗ V∨ → End(V)
v ⊗ ξ ↦ (w ↦ v · ξ(w)).

In the case V is finite dimensional, φ is invertible, by the following proposition:

Proposition 20.5. The following are equivalent:
(1) The map φ : V ⊗ V∨ → End(V) is an isomorphism.
(2) idV is in the image of the map φ.
(3) V is finite dimensional.

Proof. First, (1) ⇒ (2) is immediate, because isomorphisms are surjections. Second, (2) ⇒ (3): if A is in the image of φ, then dim im A < ∞ because tensors are finite sums. So, if A = idV then dim V < ∞. Third, for (3) ⇒ (1), choose a basis and do a computation.

Also, define the evaluation map

ev : V ⊗ V∨ → k
v ⊗ ξ ↦ ξ(v).

Then, we have the diagram (20.2), with φ : V ⊗ V∨ → End(V) and ev : V ⊗ V∨ →

k, and we may define the trace map ev ◦ φ−1 : End(V) → k.

Proposition 20.6. The trace as defined above agrees with the usual definition of trace.

Proof. This follows from a standard computation after choosing a basis. Essentially, writing A = φ(Σ_i v_i ⊗ ξ^i) with v_i = Ae_i and ξ^i = e^i, ev sends this to Σ_i ξ^i(v_i) = Σ_i A_ii.

Remark 20.7. This is one of Hiro's favorite propositions because it has to do with 1-dimensional field theory.

20.3. Traces in Riemannian Geometry. If we're given an isomorphism g : V → V∨, then any element of V ⊗ V can be mapped to an element of V ⊗ V∨ via id ⊗ g. Similarly, we can define a map

id ⊗ g−1 : V∨ ⊗ V∨ → V∨ ⊗ V.

So, we can take the trace of any element of V ⊗ V or V∨ ⊗ V∨.

Warning 20.8. Caution: this depends on the choice of g. We have a map

(20.3) V ⊗ V → V ⊗ V∨ → R, the first map id ⊗ g and the second ev.
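A small numerical illustration (a hypothetical example, not from the lecture) of the basis-free trace ev ◦ φ−1: decomposing A ∈ End(V) as Σ_i v_i ⊗ ξ^i and evaluating recovers the usual trace, which is basis independent:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# write A as an element of V ⊗ V∨: A = Σ_i v_i ⊗ ξ^i,
# with v_i = A e_i (the columns) and ξ^i = e^i (the dual basis)
A = rng.standard_normal((n, n))
vs = [A[:, i] for i in range(n)]
xis = [np.eye(n)[i] for i in range(n)]

# φ(Σ v_i ⊗ ξ^i) recovers A
assert np.allclose(sum(np.outer(v, xi) for v, xi in zip(vs, xis)), A)

# ev(Σ v_i ⊗ ξ^i) = Σ ξ^i(v_i) is the basis-free trace
tr = sum(xi @ v for v, xi in zip(vs, xis))
assert np.isclose(tr, np.trace(A))

# basis independence: conjugating by any invertible P preserves the trace
P = rng.standard_normal((n, n)) + n * np.eye(n)  # comfortably invertible
assert np.isclose(np.trace(P @ A @ np.linalg.inv(P)), np.trace(A))
```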

Given (M, g) we have an isomorphism

gx : TxM → Tx∨M
v ↦ g(v, •).

In local coordinates, fix A ∈ Γ(T∨M ⊗ T∨M), A = Σ A_ij dx^i ⊗ dx^j. Then,

(id ⊗ g−1)(A) = Σ A_ij g^{jk} dx^i ⊗ ∂/∂x^k.

Proposition 20.9. Here g^{jk} is a smooth function so that (g^{ik}) and (g_{ab}) are inverse matrices.

Proof. We have

gx : TxM → Tx∨M
∂/∂x^i ↦ g(∂/∂x^i, •),

where the latter sends ∂/∂x^j ↦ g_ij. Therefore,

g(∂/∂x^i, •) = Σ_j g_ij dx^j.

Therefore the inverse map is gx−1 : dx^j ↦ Σ_i g^{ji} ∂/∂x^i.

Remark 20.10. Note Σ_j A_ij g^{jk} defines a new matrix of functions which we abbreviate as A_i^k.

Definition 20.11. Given B ∈ Γ(TM) ⊗ Γ(TM) or A ∈ Γ(T∨M) ⊗ Γ(T∨M), the trace is the smooth function obtained by composing

(20.4) Γ(TM)⊗2 → Γ(TM) ⊗ Γ(T∨M) → C∞(M),

the first map id ⊗ g, the second evaluation. More generally, given B ∈ Γ(TM) ⊗ Γ(TM) and A ∈ Γ(T∨M) ⊗ Γ(T∨M), we locally obtain

tr(A) = Σ A_ij g^{ji},  tr(B) = Σ B^{ij} g_ij.

Since g is symmetric, the choice of id ⊗ g or g ⊗ id yields the same result.

Definition 20.12. More generally yet, given A ∈ Γ(T∨M⊗n), we can choose the ith and jth factors for i ≠ j and take the trace along these. That is, we can define a map

id ⊗ ··· ⊗ g−1 ⊗ ··· ⊗ id : Γ(T∨M)⊗n → Γ(T∨M)⊗(n−1) ⊗ Γ(TM),

and then compose with the evaluation map

id ⊗ ··· ⊗ id ⊗ ev : Γ(T∨M)⊗(n−1) ⊗ Γ(TM) → Γ(T∨M⊗(n−2)).

The result is a section of T∨M⊗(n−2) called the contraction (or trace) of A along the i, j factors (or components) of A.

We can do this in the special case of the Riemann curvature R : Γ(TM)⊗4 → Γ(R). That is, R ∈ Γ(T∨M⊗4).

Question 20.13. Which factors can we take the trace along? Locally,

R = Σ R_ijkl dx^i ⊗ dx^j ⊗ dx^k ⊗ dx^l.

Since R_ijkl is skew in i, j, the trace along i, j is 0. Additionally, it is skew in k, l, so the trace along k, l is 0. So, it only remains to analyze the trace along (1) i, k (2) i, l (3) j, k (4) j, l. However, the trace along any one of these determines the other three, since

R_ijkl = R_klij.

Definition 20.14. The Ricci curvature, denoted Ric, is the trace of the Riemann curvature tensor R along i and l (that is, along X and W).

Remark 20.15. This convention is chosen so that the sphere has positive curvature.

Question 20.16. What does this trace encode? Grigori Perelman allows us to motivate it.

Remark 20.17. There are two Hamiltons Hiro knows of. One is older and invented the quaternions. The other is younger and studies geometry. Aaron Slipper remarks that there is yet another Hamilton who died in a duel! The following is due to the second Hamilton. Consider the space of all possible Riemannian metrics on M. This set lies inside the set of all sections Γ(T∨M ⊗ T∨M). In fact, it lies in Γ(Sym2(T∨M)), the symmetric bilinear forms. Further, Riemannian metrics are an open subset of this vector space, since being nondegenerate is an open condition.

Question 20.18. What is Tg(Γ(Sym2(T∨M)))?

As a vector space, abstractly, it is Γ(Sym2(T∨M)). So, in other words, deformations of a metric g are given by flowing along a tangent vector: we only need to give an element of Γ(Sym2(T∨M)). But note Ric is a section of Sym2(T∨M), and this Ricci curvature depended at the beginning of time on g. So, the assignment g ↦ Ric_g is a section of the tangent bundle of the (open) space of all metrics g.

Definition 20.19. The Ricci flow of a Riemannian metric g is a path γ : (−ε, ε) → {g} so that

∂γ/∂t = −2 Ric_{γ(t)}.
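To make the contractions of Definition 20.14 concrete, here is a hedged numerical sketch using the unit round 2-sphere at a sample point, with the convention R_ijkl = ⟨F(∂i, ∂j)∂k, ∂l⟩; the component values below come from the standard computation for the round metric:

```python
import numpy as np

# Riemann tensor of the unit round 2-sphere at a point with theta = 1.0,
# coordinates (theta, phi), convention R_ijkl = <F(d_i, d_j) d_k, d_l>
theta = 1.0
s2 = np.sin(theta) ** 2
g = np.diag([1.0, s2])
ginv = np.linalg.inv(g)

R = np.zeros((2, 2, 2, 2))
R[0, 1, 1, 0] = R[1, 0, 0, 1] = s2    # R(theta, phi, phi, theta) = sin^2(theta)
R[0, 1, 0, 1] = R[1, 0, 1, 0] = -s2   # forced by the skew symmetries

# Ricci curvature: trace along the 1st and 4th factors, Ric_jk = R_ijkl g^{il}
Ric = np.einsum('ijkl,il->jk', R, ginv)
assert np.allclose(Ric, g)            # for the unit sphere, Ric = (n-1) g = g

# scalar curvature: the trace of Ric, S = Ric_jk g^{jk}
S = np.einsum('jk,jk->', Ric, ginv)
assert np.isclose(S, 2.0)             # S = n(n-1) = 2 for the unit 2-sphere

# tracing along a skew pair gives zero, e.g. along i, j
assert np.allclose(np.einsum('ijkl,ij->kl', R, ginv), 0)
```

Note Ric comes out positive, matching the sign convention of Remark 20.15.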

In local coordinates, we're looking for functions

gij(t) so that

∂gij/∂t = −2 Ric_{g(t)}.

Hamilton proved that these flows exist for short times. This flow was central in Perelman's work on the Poincaré conjecture, part of which was about understanding when the Ricci flow exists.

Definition 20.20. The scalar curvature is the trace of Ric, the Ricci curvature, where Ric ∈ Γ(Sym2(T∨M)).

Remark 20.21. Hopefully the scalar curvature is memorable because it is a scalar. The only way to get a scalar from a 4-fold tensor is by taking traces twice. Next, we will discuss sectional curvature.

Remark 20.22. The most pedagogically sound way to describe the sectional curvature is to give a definition, and then say what it amounts to.

Definition 20.23. Let X, Y ∈ Γ(TM). The sectional curvature along the 2-plane spanned by X and Y is defined to be

R(X, Y, Y, X) / (|X|²|Y|² − ⟨X, Y⟩²),

where g(•, •) = ⟨•, •⟩.

20.4. Back to linear algebra. At the end of the day, all these curvatures come about from a very good understanding of linear algebra. We will say things for general vector spaces V, but for the application to differential geometry, we should keep in mind V = ∧2Tx∨M. Consider a symmetric bilinear map A : V ⊗ V → R. Keep in mind the case that A = R at a point x ∈ M.

Lemma 20.24. If V also has a separate nondegenerate inner product, we can recover A completely from the data of (1) an orthonormal basis {vi} for V and (2) A(vi, vj) for all i, j.

Proof. This is a standard linear algebraic fact, purportedly, but Hiro didn't know how to prove it in class.

Question 20.25. Can we put an inner product on V = ∧2TxM and understand the Riemann curvature R in an orthonormal basis?

First, let's deal with an inner product on ∧kTx∨M.

Definition 20.26. Fix gx on V, thought of as TxM. Then, gx induces an inner product on V∨, via the isomorphism gx : V → V∨.
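The inner product on ∧kV∨ defined in the next proposition, via Gram determinants, can be sanity-checked numerically; this is a sketch of my own, with the standard basis of R^4 standing in for an orthonormal basis:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, k = 4, 2

def wedge_ip(us, ws):
    # <u1 ^ ... ^ uk, w1 ^ ... ^ wk> := det(<u_i, w_j>)
    return np.linalg.det(np.array([[u @ w for w in ws] for u in us]))

e = np.eye(n)
# e_{i1} ^ ... ^ e_{ik} for i1 < ... < ik form an orthonormal basis of wedge^k V
basis = list(combinations(range(n), k))
for I in basis:
    for J in basis:
        ip = wedge_ip([e[i] for i in I], [e[j] for j in J])
        assert np.isclose(ip, 1.0 if I == J else 0.0)

# for k = 2, the norm squared is |u|^2 |w|^2 - <u, w>^2, the denominator
# appearing in the sectional curvature
u, w = rng.standard_normal(n), rng.standard_normal(n)
assert np.isclose(wedge_ip([u, w], [u, w]), (u @ u) * (w @ w) - (u @ w) ** 2)
```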

Proposition 20.27. Let u1 ∧ ··· ∧ uk, w1 ∧ ··· ∧ wk ∈ ∧kV∨. Define

⟨u1 ∧ ··· ∧ uk, w1 ∧ ··· ∧ wk⟩ := det(⟨ui, wj⟩).

This is symmetric and nondegenerate.

Proof. This is symmetric because ⟨•, •⟩ is symmetric. We wish to see it is nondegenerate. We can reduce to the case that the ui, wi are orthonormal basis elements. If e1, ..., en is an orthonormal basis for V∨, then

{e_{i1} ∧ ··· ∧ e_{ik} : i1 < ··· < ik}

is a basis for ∧kV∨. In fact, it is an orthonormal basis for this inner product. Each basis vector has norm 1 because for u = w a basis vector the matrix (⟨ui, wj⟩) is the identity. Distinct basis vectors are orthogonal because for u ≠ w the matrix (⟨ui, wj⟩) has a column of all 0's.

21. 11/17/15

21.1. Plan and Review. Today, we'll cover:
(1) Sectional curvature determines the Riemann curvature R
(2) Grassmannians
(3) Normal coordinates
(4) Toward Hodge Theory
(5) Poincaré Duality

Let's recall where we were. We defined the Riemann curvature

R : Γ(TM⊗4) → C∞(M)
X ⊗ Y ⊗ Z ⊗ W ↦ ⟨(∇_X∇_Y − ∇_Y∇_X − ∇_{[X,Y]})Z, W⟩,

and we defined the Ricci curvature

Ric := g^{il} R_ijkl : Γ(TM⊗2) → C∞(M).

Today, we'll talk more about the scalar curvature.

21.2. Scalar curvature.

Definition 21.1. Fix (M, g) and let Xx, Yx ∈ TxM. Then, the sectional curvature of the plane σ spanned by Xx and Yx is defined as

K(σ) := K(Xx, Yx) := R(Xx, Yx, Yx, Xx) / (|Xx|²|Yx|² − ⟨Xx, Yx⟩²).

We'll prove the following three propositions today.

Proposition 21.2. K really does depend only on σ, not on the choice of basis Xx, Yx for σ.

Proposition 21.3. The sectional curvature K determines R.

Proposition 21.4. K(σ) is the Gauss curvature of the surface expx(σ) at x.

Here is some linear algebra, which we'll review from last time. Recall:

Proposition 21.5. If V has a nondegenerate inner product ⟨•, •⟩, then

∧k(V) × ∧k(V) → R
(v1 ∧ ··· ∧ vk, w1 ∧ ··· ∧ wk) ↦ det(⟨vi, wj⟩)

is a nondegenerate inner product on ∧k(V).

Example 21.6. If k = 2, what is

⟨v1 ∧ v2, v1 ∧ v2⟩? It is

det ( ⟨v1, v1⟩ ⟨v1, v2⟩ ; ⟨v2, v1⟩ ⟨v2, v2⟩ ) = |v1|²|v2|² − ⟨v1, v2⟩²,

which is the denominator of the sectional curvature K where X = v1, Y = v2. Using notation from last time, let E := ∧2TxM and A := R : E ⊗ E → R. Then we have a well-defined function

(21.1) E \ {0} → E ⊗ E → R,

v ↦ v ⊗ v,

where the first map is not linear. Further, if E is equipped with an inner product, we obtain a map

(E \ {0})/R → R
[v] ↦ A(v, v)/|v|².

Hence, by definition, K(σ) is the function [v] ↦ A(v, v)/|v|² applied to very particular elements v ∈ E = ∧2TxM. Namely, it is applied to indecomposable elements v = X ∧ Y.

Definition 21.7. Define Indecomp(∧kV) := {v1 ∧ ··· ∧ vk} ⊂ ∧kV.

Let's understand the function

K : (Indecomp(∧2TxM) \ {0})/R → R.

Proof of Proposition 21.3. Recall the symmetries of the Riemann curvature R. It is

(1) Skew in X, Y.
(2) Skew in W, Z.
(3) 0 = R(X, Y, Z, •) + R(Y, Z, X, •) + R(Z, X, Y, •).
(4) R(X, Y, W, Z) = R(W, Z, X, Y).

Assume that R, R′ are tensors satisfying the above four conditions and K = K′ for all X, Y. Then, R(X, Y, Y, X) = R′(X, Y, Y, X) for all X, Y. Then,

R(X + W, Y, Y, X + W) = R(X, Y, Y, X) + R(W, Y, Y, W) + 2R(X, Y, Y, W).

Now, because R(W, Y, Y, X) = R(Y, X, W, Y) = R(X, Y, Y, W), we obtain

R(X, Y, Y, W) = R′(X, Y, Y, W).

Now,

R(X, Y + Z, Y + Z, W) = R′(X, Y + Z, Y + Z, W),

implying

R(X, Y, Z, W) + R(X, Z, Y, W) = R′(X, Y, Z, W) + R′(X, Z, Y, W),

and so

R(X, Y, Z, W) − R′(X, Y, Z, W) = R(Z, X, Y, W) − R′(Z, X, Y, W).

Now, think of this as an operator: the only difference between the two sides is a cyclic permutation, so the operator R − R′ is invariant under cyclic permutations of X, Y, Z. So, the third property from the beginning of the proof tells us

3(R(X, Y, Z, W) − R′(X, Y, Z, W)) = 0.

Now, a fundamental definition:

Definition 21.8. The Grassmannian Gr(k, V) is the manifold whose points are the k-planes in a vector space V. If dim V = n, we also write Gr(k, V) =: Gr(k, n).

If you really dislike exterior powers, hopefully this proposition will give you some motivation to study them.

Proposition 21.9. Fix k ∈ Z>0 and V a vector space. Then, there exists a bijection

(Indecomp(∧kV) \ {0})/R ≅ Gr(k, V).

Proof. Note that v1 ∧ ··· ∧ vk ≠ 0 if and only if the vi are linearly independent. To see this, if vi = Σ_{j≠i} ajvj, then when we expand v1 ∧ ··· ∧ vk each term is 0, since the kernel of V⊗k → ∧kV contains all tensors with a repeated factor. So, we have a function

v1 ∧ ··· ∧ vk ↦ Span{v1, ..., vk}.


If Span{vi} = Span{wi}, then wi = Σ_j Aijvj, where (Aij) is an invertible k × k matrix, and

w1 ∧ ··· ∧ wk = (Σ_j A1jvj) ∧ ··· ∧ (Σ_j Akjvj) = det(A) v1 ∧ ··· ∧ vk.

Hence, the map above is well defined, and determines a bijection.

It's worthwhile to study:

Definition 21.10. The Grassmannian Grk(Rn) := Gr(k, n). Let's give it a topology. Consider

Inj(Rk, Rn) = {f : Rk → Rn : f is a linear injection} ⊂ Mn×k(R),
f ↦ (f(Rk), (f(ei))).

We have

Inj ≅ {(W, (v1, ..., vk)) : W is a k-dimensional linear subspace of Rn and {vi} is a basis for W}.
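The det(A)-scaling just computed is easy to check numerically via Plücker coordinates (the k × k minors of the matrix of basis vectors); the helper below is my own illustration, not notation from the lecture:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n, k = 5, 2

def wedge_coords(vs):
    # Pluecker coordinates of v1 ^ ... ^ vk: the k x k minors of [v1 ... vk]
    M = np.column_stack(vs)
    return np.array([np.linalg.det(M[list(I), :]) for I in combinations(range(n), k)])

V = rng.standard_normal((n, k))                  # a basis v1, ..., vk of a k-plane
A = rng.standard_normal((k, k)) + 3 * np.eye(k)  # invertible change of basis
W = V @ A.T                                      # w_i = sum_j A_ij v_j

# w1 ^ ... ^ wk = det(A) * v1 ^ ... ^ vk
assert np.allclose(wedge_coords([W[:, i] for i in range(k)]),
                   np.linalg.det(A) * wedge_coords([V[:, i] for i in range(k)]))
```

So the wedge of a basis determines the plane up to a nonzero scalar, which is exactly the bijection of Proposition 21.9.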

So, we have to quotient by the right action of GLk(R)

GLk(R) × Mn×k(R) → Mn×k(R).

We obtain Grk(Rn) ≅ Inj/GLk(R). Give Grk(Rn) the induced topology. We have the following facts:
(1) Grk(Rn) is compact.
(2) Grk(Rn) can be given the structure of a smooth manifold.
(3) The smooth action GLn × Rn → Rn induces a smooth action GLn × Grk(Rn) → Grk(Rn).

Example 21.11. Let k = 1. Then, an injective linear map

R → Rn, 1 ↦ v ≠ 0,

is determined by v, and GL1(R) ≅ R×. We then have

Gr1(Rn) ≅ (Rn \ {0})/R× ≅ RPn−1.

Remark 21.12. By the third fact in the definition of the Grassmannian, we have a group homomorphism

GLn → Diff(Grk(Rn)).

In particular, if M is a manifold with a GLn cocycle, one can construct the space

(⊔α Uα × Grk(Rn))/gαβ → M.

This is a fiber bundle with fibers diffeomorphic to Grk(Rn). This gives a construction

{E → M a vector bundle} → {Grk(E) → M a fiber bundle}.

Here 0 ≤ k ≤ n := rk E. Here is a fascinating observation.

Remark 21.13. There exists a natural vector bundle γ on Grk(Rn), diagram (21.2): the fiber of γ over a point W ∈ Grk(Rn) is the subspace W ⊂ Rn itself.

This is called the tautological vector bundle on Grk(Rn).

Theorem 21.14. Let M be paracompact. Then, for all smooth vector bundles E on M of rank k, there is a smooth map

fE : M → Grk(R^{NE}),

where NE is large enough so that E ≅ fE∗γ.

Proof. We'll see this on our final.

Remark 21.15. There is an even stronger version of the theorem. If we have

N N+1 Grk(R ) Grk(R ) and this forms a sequence. Then, → fE N M Grk(R ) f (21.3) E

N+1 Grk(R

commutes. That is, (f′E)∗γ ≅ fE∗γ. Moreover, if two maps into the Grassmannian are homotopic, then the pulled-back bundles are isomorphic. Being able to compute the cohomology of these Grassmannian spaces helps us understand vector bundles, and the classes we get are called the characteristic classes of the vector bundle.

21.3. Normal Coordinates. Here is another digression. We'll set up some technology to prove Proposition 21.4.

Lemma 21.16. Fix (M, g) and x ∈ M. Then, there exists a chart φ : U → Rn, x ↦ 0, so that the induced metric on φ(U) satisfies the following at the origin:
(1) g = In×n at 0.
(2) ∂g/∂xk = 0 at 0 for all k.
(3) Γ^k_ij = 0 at 0 for all i, j, k.
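As an illustration of the three properties of the lemma (a sketch of my own): conformal coordinates for the round sphere centered at a point are not literally the normal coordinates constructed in the proof below, but they satisfy the same three properties at the origin, which we can check with sympy:

```python
import sympy as sp

# the round sphere in conformal coordinates centered at a point:
# g = lam * I with conformal factor lam = (1 + |x|^2/4)^(-2)
x1, x2 = sp.symbols('x1 x2')
xs = [x1, x2]
lam = (1 + (x1**2 + x2**2) / 4) ** -2
g = sp.Matrix([[lam, 0], [0, lam]])
ginv = g.inv()

# (1) g = identity at the origin
assert g.subs({x1: 0, x2: 0}) == sp.eye(2)

# (2) all first derivatives of g vanish at the origin
for k in range(2):
    assert g.diff(xs[k]).subs({x1: 0, x2: 0}) == sp.zeros(2, 2)

# (3) hence all Christoffel symbols vanish at the origin
for k in range(2):
    for i in range(2):
        for j in range(2):
            Gam = sum(ginv[k, l]*(g[j, l].diff(xs[i]) + g[i, l].diff(xs[j]) - g[i, j].diff(xs[l]))
                      for l in range(2)) / 2
            assert sp.simplify(Gam.subs({x1: 0, x2: 0})) == 0
```

Property (3) falling out of property (2) is exactly the first step of the proof below.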

Proof. If we show property (2), we obtain property (3), because Γ^k_ij is defined in terms of the partial derivatives of g. So, it suffices to prove the first two parts. Let Ũ ⊂ TxM be an open neighborhood of 0 on which expx is a diffeomorphism.

Fix an orthonormal basis v1, ..., vn for TxM. Then, let U = expx(Ũ). We obtain a composite

(21.4) U → Ũ → Rn, Σi aivi ↦ (a1, ..., an).

Call the composite φ. Note φ−1 : Rn → U sends

(a1, ... , an) 7 expx( aivi) → i X −1 n →D(φ ) T0R 3 e~i −−−−− vi ∈ TxM In the past, we saw → D exp |0 = idTxM ∼ under TxM = T0(TxM). While ∂ Dφ−1 | −−−− v ∂xi 0 i so ∂ → ∂ g (0) = gM Dφ−1 , Dφ−1 ij x ∂xi ∂xj M = gx (vi, vj)

= (In×n)ij.

This completes the first part. To prove the second part, fix ~u = Σi u^i e~i ∈ Rn. Then, the map

γ : R → Rn, t ↦ t~u,

is a geodesic. So, if ∇ is the Levi-Civita connection for (Rn, g), then setting X~u to be the constant vector field on Rn with value ~u, we have

∇_{X~u} X~u(0) = ∇_{∂/∂t} γ̇(0) = 0.

In particular, if ~u = e~i, then X~u = ∂/∂xi and

∇_{∂/∂xi} ∂/∂xi = 0

at 0. Since

∇_{∂/∂xi + ∂/∂xj} (∂/∂xi + ∂/∂xj)(0) = 0,

where ~u = e~i + e~j, we have

(∇_{∂/∂xi} ∂/∂xj + ∇_{∂/∂xj} ∂/∂xi)(0) = 0.

Because ∇ is torsion free and coordinate fields have vanishing brackets, we have

∇_{∂/∂xi} ∂/∂xj = ∇_{∂/∂xj} ∂/∂xi.

So,

2 ∇_{∂/∂xi} ∂/∂xj (0) = 0.

Then,

(∂/∂xk) gij := (∂/∂xk) g(∂/∂xi, ∂/∂xj)
= g(∇_{∂/∂xk} ∂/∂xi, ∂/∂xj) + g(∂/∂xi, ∇_{∂/∂xk} ∂/∂xj)
= 0

at 0.

21.4. Hodge Theory. Why are we doing some Hodge theory? Because it will lead to a slick proof of Poincaré duality.

Theorem 21.17 (Poincaré Duality). Let M be a compact oriented smooth manifold of dimension n. Then, there exists a nondegenerate pairing

H^k_dR(M) × H^{n−k}_dR(M) → R
([α], [β]) ↦ ∫_M α ∧ β.

There are at least two ingredients that go into this:
(1) Is the above pairing well defined? This is Stokes' theorem.
(2) The harder part is nondegeneracy. This is where we will use Hodge theory. The two ingredients for Hodge theory are an orientation and a Riemannian metric.

Let's now review some basic linear algebra. Suppose we have V and a nondegenerate pairing ⟨•, •⟩. This gives a nondegenerate pairing on V∨ and hence also on ∧kV∨. Fix also an orientation on V. This induces an isomorphism ∧kV∨ ≅ ∧n−kV∨, where dim V = n. This will induce an isomorphism

Ωk(M) → Ωn−k(M).


22. 11/19/15

22.1. Questions and Overview.

Question 22.1. Did we prove the proposition that sectional curvature is independent of the choice of basis?

Yes: we saw the Riemann curvature tensor R : E ⊗ E → R determines a map from the primitives of E,

(Prim(E) \ {0})/R → R,

where Prim(E) means the pure tensors, i.e., the image of the Plücker embedding.

For today, fix an immersion j : M → (M̃, g̃). Compare R(X, Y, Z, W) to R̃(X̃, Ỹ, Z̃, W̃). If V is a vector field on M, we write Ṽ for an arbitrary local extension of V to M̃. We'll see:

Definition 22.2. Recall from homework that the second fundamental form is

II(X, Y) := (∇̃_X̃ Ỹ)⊥,

a vector field in Γ(N_{M/M̃}).

Remark 22.3. From homework, we saw

II(X, Y) = ∇̃_X̃ Ỹ − ∇_X Y.

22.2. Gauss' Theorema Egregium. We'll now develop a proposition which will yield a slick proof of Gauss' Theorema Egregium.

Proposition 22.4 (Gauss Equation). For all X, Y, Z, W ∈ Γ(TM) we have

R(X, Y, Z, W) = R̃(X̃, Ỹ, Z̃, W̃) + ⟨II(X, W), II(Y, Z)⟩ − ⟨II(X, Z), II(W, Y)⟩.

Proof. We have

⟨∇_X∇_Y Z, W⟩ = ⟨∇̃_X̃(∇_Y Z) − II(X, ∇_Y Z), W⟩ = ⟨∇̃_X̃(∇_Y Z), W⟩
= ⟨∇̃_X̃(∇̃_Ỹ Z̃ − II(Y, Z)), W⟩
= ⟨∇̃_X̃∇̃_Ỹ Z̃, W⟩ − ⟨∇̃_X̃(II(Y, Z)), W⟩,

where the first simplification holds because II(X, ∇_Y Z) is normal and W is tangent. To compute the last term, we need a lemma:

Lemma 22.5 (Weingarten Equation). For all X, W ∈ Γ(TM) and N ∈ Γ(N_{M/M̃}), we have

⟨∇̃_X̃ N, W⟩ = −⟨N, II(X, W)⟩.

Proof. The key idea is: the derivative of a constant function is 0. Observe

0 = X̃⟨N, W̃⟩ = ⟨∇̃_X̃ N, W̃⟩ + ⟨N, ∇̃_X̃ W̃⟩ = ⟨∇̃_X̃ N, W̃⟩ + ⟨N, II(X, W)⟩,

since ⟨N, W̃⟩ = 0 along M and the tangential part of ∇̃_X̃ W̃ pairs to zero with N.

Now, we see, by using Lemma 22.5,

⟨∇_X∇_Y Z, W⟩ = ⟨∇̃_X̃∇̃_Ỹ Z̃, W̃⟩ − ⟨∇̃_X̃(II(Y, Z)), W̃⟩ = ⟨∇̃_X̃∇̃_Ỹ Z̃, W̃⟩ + ⟨II(Y, Z), II(X, W)⟩.

Likewise, we have

−⟨∇_Y∇_X Z, W⟩ = −⟨∇̃_Ỹ∇̃_X̃ Z̃, W̃⟩ − ⟨II(X, Z), II(Y, W)⟩

and

⟨∇_{[X,Y]} Z, W⟩ = ⟨∇̃_{[X̃,Ỹ]} Z̃, W̃⟩ − ⟨II([X, Y], Z), W̃⟩ = ⟨∇̃_{[X̃,Ỹ]} Z̃, W̃⟩.

Then, adding the terms gives the proposition.

Remark 22.6. Recall, if dim M = dim M̃ − 1, we can locally choose a normal vector field N to M so that II(X, Y) = h(X, Y)N for some h, as we saw in the homework. Here h is bilinear in X, Y. This induces the shape operator

S : TxM → TxM,

where

hx : TxM ⊗ TxM → R, X ⊗ Y ↦ h(X, Y),

and so we have the composition

(22.1) TxM → (TxM)∨ → TxM,

the first map hx (viewed as TxM → (TxM)∨) and the second g−1; this composition is the shape operator. More explicitly, we can choose an orthonormal basis v1, ..., vn for TxM. If we set hij = h(vi, vj), then S = (hij), since in this basis the matrix of g is the identity.

Remark 22.7. In homework, we defined the Gauss curvature at x ∈ M to be det(S). Some types of curvature are
(1) Gauss curvature
(2) Ricci curvature
(3) scalar curvature
(4) sectional curvature
(5) Riemann curvature

Warning 22.8. In general, the Gaussian curvature depends on j and not just on j∗g̃.

Theorem 22.9 (Theorema Egregium). Suppose dim M = 2 and (M̃, g̃) = (R³, gstd). Then, GaussCurv(x) = K(x), where K(x) is the sectional curvature at x.
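A hedged numerical check of the theorem for the radius-r sphere in R³ (the parametrization and helpers are my own, not from the lecture): the extrinsic quantity det(S) = det(h)/det(g) equals 1/r², which is the sphere's sectional curvature:

```python
import sympy as sp

# the radius-r sphere in R^3, parametrized by (theta, phi)
th, ph, r = sp.symbols('theta phi r', positive=True)
X = sp.Matrix([r*sp.sin(th)*sp.cos(ph), r*sp.sin(th)*sp.sin(ph), r*sp.cos(th)])

Xu, Xv = X.diff(th), X.diff(ph)
# first fundamental form (the induced metric)
g = sp.Matrix([[Xu.dot(Xu), Xu.dot(Xv)], [Xv.dot(Xu), Xv.dot(Xv)]])

cross = Xu.cross(Xv)
nvec = cross / sp.sqrt(cross.dot(cross))   # unit normal
# second fundamental form h_ij = <d^2 X / dx^i dx^j, n>
h = sp.Matrix([[X.diff(a).diff(b).dot(nvec) for b in (th, ph)] for a in (th, ph)])

# Gauss curvature det(S) = det(h)/det(g)
K = h.det() / g.det()

# evaluate at a sample point: the Theorema Egregium says this equals the
# intrinsic sectional curvature, which is 1/r^2 for the round sphere
val = K.subs({th: 1.2, ph: 0.7, r: 2.0})
assert abs(float(val) - 0.25) < 1e-9
```

Note the answer is independent of the sign choice of the normal, since det(h) picks up the sign squared.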

Remark 22.10. Here, since we’re in dimension 2, there is only a single two plane, so the sectional curvature doesn’t depend on any choice of vector ﬁelds.

Proof. First, S is a self-adjoint (symmetric) operator because (hij) is symmetric. So, there is an orthonormal basis in which S is diagonal. We can choose coordinates around x so that

gij = In×n at x. So, this basis is orthonormal with respect to g. This uses a lemma from last time: by taking normal coordinates, we can arrange that gij is the identity at the origin. Now, using dim M = 2, fix an orthonormal basis v1, v2 of TxM. Let's first compute

K(v1, v2) = R(v1, v2, v2, v1) / (|v1|²|v2|² − ⟨v1, v2⟩²) = R(v1, v2, v2, v1).

Now, when (M̃, g̃) = (Rn, gstd), we have R̃ = 0. Also, observe

II(vi, vj) = hij · n~ where n~ is our choice of normal vector to the surface, so

⟨II(vi, vj), II(va, vb)⟩ = hij · hab.

Therefore, using Proposition 22.4, we obtain

R(v1, v2, v2, v1) = 0 + h11h22 − h12² = det h = det S.

Remark 22.11. Mean curvature measures whether an embedding is of minimal area.

Remark 22.12. In higher dimensions, the Theorema Egregium is false.

22.3. Sectional Curvature and the Exp map.

Proposition 22.13. Let σ ⊂ TxM and

expx : σ → M. Then,

K_M(σ) = K_{expx(σ)}(x).

Proof. As usual, ﬁx some small open neighborhood U ⊂ σ on which expx is an immersion. Then, U inherits a metric from M. Let R be the Riemann curvature for U and let R˜ be the Riemann curvature for M. Fix v1, v2 orthonormal vectors spanning σ. We need to show

R(v1, v2, v2, v1) = R̃(ṽ1, ṽ2, ṽ2, ṽ1),

because the left hand side is K_{expx(σ)}(x) and the right hand side is K_M(σ). Note, we have an immersion j : U → M, also known as expx.


In light of the Gauss Equation, Proposition 22.4, it suffices to show II(v, w) = 0 for all v, w at the point x. However, from the homework it suffices to show II(v, v) = 0 for all v ∈ T0σ. This follows from the easy lemma:

Lemma 22.14. If T : V ⊗ V → R is symmetric, then T is determined by the values T(v, v).

This will again follow from our homework. Here expx sends t ↦ tv to a geodesic in M. In homework, we showed

∇̃_γ̇ γ̇ = ∇_γ̇ γ̇ + II(γ̇, γ̇),

where ∇̃ is the connection on M and ∇ the induced connection on U. But γ = expx(tv) is a geodesic in M, so the left hand side is 0. On the right hand side, the two terms are perpendicular: one is in TU and one is in N_{U/M}. Therefore, both terms are 0.

Remark 22.15. In a coordinate chart given by the exponential map, the second fundamental form always vanishes at the origin.

22.4. Hodge Theory. Here is a theme: getting our heads around linear algebra has really helped with geometry. Here's some more linear algebra!

Here is the setup. Let V be an n-dimensional vector space over R equipped with the data of
(1) a nondegenerate inner product ⟨•, •⟩, and
(2) an orientation.

Remark 22.16. These two choices induce an isomorphism ∧nV ≅ R as follows: given an orthonormal basis v1, ..., vn, we declare

v1 ∧ ··· ∧ vn ↦ 1 if v1, ..., vn is positive with respect to the orientation, and −1 else.

This is independent of the choice of orthonormal basis: two such bases differ by an element of O(n), and by an element of SO(n) if they induce the same orientation. Now, by definition, there exists a map given by wedging and then composing with the above isomorphism:

∧kV ⊗ ∧n−kV → ∧nV ≅ R.

So, by adjunction, we obtain a map

F : ∧kV → (∧n−kV)∨ ≅_g ∧n−kV.

Explicitly, given an orthonormal basis θ1, ..., θn for V, we have

F(θ1 ∧ ··· ∧ θk) := θ^{k+1} ∧ ··· ∧ θn

and

F(θ^{i1} ∧ ··· ∧ θ^{ik}) = θ^{j1} ∧ ··· ∧ θ^{j_{n−k}},

where θ^{i1} ∧ ··· ∧ θ^{ik} ∧ θ^{j1} ∧ ··· ∧ θ^{j_{n−k}} is a positive n-form under the orientation.
Taking V = T_x^∨ M and fixing
(1) a metric g on M, and
(2) an orientation on M,
we obtain a map
F : ∧^k T_x^∨ M → ∧^{n−k} T_x^∨ M,
where n = dim M. Given α ∈ Ω^k(M), define
(Fα)(x) := F(α(x)).

Lemma 22.17. Fα is a smooth form.

Proof. Omitted due to ease. □

Definition 22.18. The Hodge star operator is the map
Ω^k(M) → Ω^{n−k}(M), α ↦ Fα.

Exercise 22.19. We have F1 =: Vol_M ∈ Ω^n(M). Locally, we can write
Vol_M = √(det g) dx^1 ∧ ⋯ ∧ dx^n = θ^1 ∧ ⋯ ∧ θ^n,
where θ^1, ..., θ^n is a positively oriented orthonormal local frame for T^∨M.

Proposition 22.20. Fix f ∈ C^∞(M) and α, β ∈ Ω^k(M). Then,

(1) F(fα + β) = fFα + Fβ,
(2) FFα = (−1)^{k(n−k)} α,
(3) α ∧ Fβ = β ∧ Fα = ⟨α, β⟩ Vol_M. This lets us define an inner product, not on M, but on the space of forms on M: the expression is a top dimensional form, and the top dimensional forms yield a line bundle which is trivial since M is orientable. Here ⟨α, β⟩(x) := ⟨α(x), β(x)⟩ is the inner product induced by g on ∧^k T_x^∨ M, given on decomposables by det(⟨•, •⟩) of the pairings of factors.
(4) F(α ∧ Fβ) = ⟨α, β⟩.

(5) F is an isometry. That is, ⟨Fα, Fβ⟩ = ⟨α, β⟩.

Proof. Assume α = θ^1 ∧ ⋯ ∧ θ^k, where θ^1, ..., θ^n is an orthonormal basis, and β = θ^{j_1} ∧ ⋯ ∧ θ^{j_k}. Then
α ∧ Fβ = θ^1 ∧ ⋯ ∧ θ^k ∧ θ^{i_1} ∧ ⋯ ∧ θ^{i_{n−k}},
which is 0 if some i_l ∈ {1, ..., k}, and ±θ^1 ∧ ⋯ ∧ θ^n otherwise. In both cases this is ⟨α, β⟩ Vol_M, since the basis forms satisfy ⟨α, β⟩ = 0 unless {j_1, ..., j_k} = {1, ..., k}. □
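The explicit formula for F on basis forms lends itself to a quick machine check. Below is a minimal numerical sketch (my own illustration, not from the notes): a basis k-form θ^{i_1} ∧ ⋯ ∧ θ^{i_k} is encoded by its index tuple, F is computed from the sign of the permutation completing the tuple to (1, ..., n), and the identity FF = (−1)^{k(n−k)} from Proposition 22.20 is verified for n = 4.

```python
from itertools import combinations

def perm_sign(p):
    # Sign of a permutation given as a tuple of distinct integers,
    # computed by counting inversions.
    sign, p = 1, list(p)
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                sign = -sign
    return sign

def hodge_star(idx, n):
    # F(theta_{i1} ^ ... ^ theta_{ik}) = s * theta_{j1} ^ ... ^ theta_{j(n-k)},
    # where s makes theta_idx ^ theta_comp a positive multiple of theta_1..theta_n.
    comp = tuple(sorted(set(range(1, n + 1)) - set(idx)))
    return perm_sign(idx + comp), comp

# Check FF = (-1)^{k(n-k)} id on every basis k-form for n = 4.
n = 4
for k in range(n + 1):
    for idx in combinations(range(1, n + 1), k):
        s1, comp = hodge_star(idx, n)
        s2, back = hodge_star(comp, n)
        assert back == idx and s1 * s2 == (-1) ** (k * (n - k))
```

The sign bookkeeping here is exactly the "pairing up" of a k-form with its complementary (n − k)-form in the proof above.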

23. 11/24/15

23.1. Good covers, and finite dimensional cohomology.

Definition 23.1. Let X be a topological space. An open cover {U_i}_{i∈I} is called a good cover if for every finite subset {i_1, ..., i_k} ⊂ I, the intersection
U_{i_1,...,i_k} := U_{i_1} ∩ ⋯ ∩ U_{i_k}
is either empty or contractible.

Example 23.2. Take X = S^1 and cover it with two connected open sets which are contractible. Then the intersection is a union of two disjoint open intervals, which is not contractible as it is not connected, so this cover is not good.

Lemma 23.3. Any smooth manifold M admits a good cover.

Proof. We will use Riemannian geometry to exhibit a good cover. Fix a Riemannian metric g on M. By (a souped-up version of) a lemma from before, for all x ∈ M there is an open subset W_x ⊂ M so that for all pairs of points y, y′ ∈ W_x there is a unique geodesic passing through y and y′ contained in W_x.

Remark 23.4. Technically we didn't prove that this path is contained in W_x. Essentially, we proved this by looking at the map
TM → M × M, (x, v) ↦ (x, exp_x(v)),
and we found an open subset of TM mapping diffeomorphically onto its image.

Now, take {U} = {W_x}_{x∈M}. Consider any intersection W := W_{i_1,...,i_k}. Choose y′ ∈ W and contract each point y of W to y′ along the unique geodesic γ_y from y′ to y. Here, the map is
W × [0, 1] → W, (y, t) ↦ γ_y(1 − t).
This gives a strong deformation retraction of W onto y′. □

Remark 23.5. Some authors require that a good cover satisfy that every intersection is diffeomorphic either to the empty manifold or to R^n. Being diffeomorphic to R^n is quite a bit stronger than being contractible. However, such a good cover also exists.
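Good covers make cohomology computable from combinatorics alone: for a good cover, de Rham cohomology agrees with the Čech cohomology of the nerve of the cover (a comparison not proven in the notes). As an illustrative sketch, take a good cover of S^1 by three arcs, whose pairwise intersections are contractible and whose triple intersection is empty, and compute the Čech complex numerically:

```python
import numpy as np

# Nerve of the cover {U0, U1, U2} of the circle by three arcs:
# three vertices, three edges, and no 2-simplex (the triple overlap is empty).
vertices = [0, 1, 2]
edges = [(0, 1), (1, 2), (0, 2)]

# Cech differential d0: functions on vertices -> functions on edges,
# (d0 f)(i, j) = f(j) - f(i).
d0 = np.zeros((len(edges), len(vertices)))
for row, (i, j) in enumerate(edges):
    d0[row, i] = -1.0
    d0[row, j] = 1.0

rank_d0 = np.linalg.matrix_rank(d0)
h0 = len(vertices) - rank_d0   # dim ker d0
h1 = len(edges) - rank_d0      # dim coker d0 (no 2-cells, so d1 = 0)

assert (h0, h1) == (1, 1)      # matches H^0(S^1) = R, H^1(S^1) = R
```

Note that two arcs would not suffice, exactly because of Example 23.2: with a non-good cover the nerve computes the wrong answer.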

We might care about the R^n version because we might want to look at compactly supported cohomology instead of de Rham cohomology, and then we would like to know the diffeomorphism type of the pieces of the cover, and not just the homotopy type. Our aim is to show cohomology is finite dimensional for a compact manifold.

Corollary 23.6. If M is smooth and compact, M admits a finite good cover.

Proof. Just take a finite subcover of the good cover constructed above; any subcover of a good cover is good. □

Corollary 23.7. Let M be a smooth manifold which admits a finite good cover. Then ⊕_{k≥0} H^k(M) is finite dimensional.

Proof. We prove this by induction on N, the minimal number of open sets needed to form a good cover. The base cases are N = 0, 1, and clearly hold: when N = 0 the manifold is empty, and when N = 1, M is contractible, so the cohomology groups are concentrated in degree 0 with H^0(M) ≅ R. Now we perform the induction, assuming the result for N and showing it for N + 1. Choose a good cover U_1, ..., U_{N+1}. We use Mayer–Vietoris. Let

V_0 = U_1 ∪ ⋯ ∪ U_N, V_1 = U_2 ∪ ⋯ ∪ U_{N+1}.
Next, note that
V_0 ∩ V_1 = (U_1 ∩ U_{N+1}) ∪ U_2 ∪ ⋯ ∪ U_N.
This intersection admits the good open cover
(U_1 ∩ U_{N+1}), U_2, ..., U_N.
So all three of V_0, V_1, and V_0 ∩ V_1 admit good covers with at most N open sets, and the inductive hypothesis applies to them. Now, by Mayer–Vietoris, we obtain a long exact sequence associated to the short exact sequence
(23.1) 0 → Ω^•(M) → Ω^•(V_0) ⊕ Ω^•(V_1) → Ω^•(V_0 ∩ V_1) → 0.

Then, taking cohomology, we obtain
dim H^•(M) ≤ dim H^•(V_0 ∩ V_1) + dim H^•(V_0) + dim H^•(V_1),
which is finite by induction. □

Remark 23.8. This same technique can be used to prove the Künneth Theorem. If there's time at the end of class, we'll prove it:

Theorem 23.9 (Künneth Theorem). Let M and N be smooth manifolds, and suppose M admits a finite good cover. We have the two projection maps
(23.2) p_1 : M × N → M, p_2 : M × N → N,
and hence a map
H^•(M) ⊗ H^•(N) → H^•(M × N), [α] ⊗ [β] ↦ p_1^*[α] ∧ p_2^*[β],
which is an isomorphism.

Proof. This is easy for the base cases, and then we perform the same induction trick, inducting on the size of a good cover of M. The only thing you need to worry about is that everything is an algebra map. □

Corollary 23.10. In particular,
⊕_{i+j=k} H^i(M) ⊗_R H^j(N) ≅ H^k(M × N).

Proof. Use the isomorphism from the Künneth Theorem. □

23.2. Return to Hodge Theory. Now, fix (M, g). We defined a map
F : Ω^k(M) → Ω^{n−k}(M),
where n = dim M. If θ^1, ..., θ^n is an orthonormal basis for T^∨M, then
F(fθ^1 ∧ ⋯ ∧ θ^k) = fθ^{k+1} ∧ ⋯ ∧ θ^n,
where f ∈ C^∞(M).

Proposition 23.11. The following properties hold:
(1) FF : Ω^k(M) → Ω^k(M) is (−1)^{k(n−k)} · id.
(2) Suppose α, β ∈ Ω^k(M). We have
α ∧ Fβ = β ∧ Fα = ⟨α, β⟩ Vol_M,

where Vol_M is the volume form.

Proof. (1) It suffices to prove this for
θ^{i_1} ∧ ⋯ ∧ θ^{i_k}.
Assume we've ordered the i_j so that
θ^{i_1} ∧ ⋯ ∧ θ^{i_n}
is positive. Then
F(θ^{i_1} ∧ ⋯ ∧ θ^{i_k}) = θ^{i_{k+1}} ∧ ⋯ ∧ θ^{i_n}.
So,
FF(θ^{i_1} ∧ ⋯ ∧ θ^{i_k}) = F(θ^{i_{k+1}} ∧ ⋯ ∧ θ^{i_n}) = (−1)^σ θ^{i_1} ∧ ⋯ ∧ θ^{i_k},
where σ is defined by
θ^{i_{k+1}} ∧ ⋯ ∧ θ^{i_n} ∧ θ^{i_1} ∧ ⋯ ∧ θ^{i_k} = (−1)^σ θ^{i_1} ∧ ⋯ ∧ θ^{i_n},
and commuting a k-form past an (n − k)-form picks up a sign of (−1)^{k(n−k)}.

(2) Set
α = Σ_I f_I θ^I, β = Σ_J g_J θ^J,
where I, J range over increasing multi-indices of length k. Then, note
⟨α, β⟩ = ⟨Σ_I f_I θ^I, Σ_J g_J θ^J⟩ = Σ_{I,J} f_I g_J ⟨θ^I, θ^J⟩ = Σ_I f_I g_I.
Therefore,
⟨α, β⟩ Vol_M = (Σ_I f_I g_I) Vol_M.
On the other hand, defining J^c to be the complement of J, we have Fθ^J = ±θ^{J^c}, with the sign making θ^J ∧ θ^{J^c} positive, so
α ∧ Fβ = Σ_{I,J} f_I g_J θ^I ∧ (±θ^{J^c}) = Σ_I f_I g_I θ^1 ∧ ⋯ ∧ θ^n = (Σ_I f_I g_I) Vol_M.
Therefore the two are equal. The equality α ∧ Fβ = β ∧ Fα holds because the final expression is symmetric in α and β. □

Corollary 23.12. The map
Ω^•(M) ⊗_R Ω^•(M) → R, α ⊗ β ↦ ∫_M α ∧ Fβ
is an inner product. Here, if |α| ≠ |β|, we define
∫_M α ∧ Fβ = 0.

Proof. Symmetry is clear by part (2) of the previous proposition. We only need to check positive definiteness. Note that
∫_M α ∧ Fα = ∫_M ⟨α, α⟩ Vol_M,
and that
⟨α, α⟩(x) = ⟨α_x, α_x⟩ ≥ 0,
so the form is positive semidefinite. To see it is positive definite, note that if
∫_M α ∧ Fα = 0,
then ⟨α, α⟩ = 0, meaning α_x = 0 for all x, implying α = 0. □

Remark 23.13. The weirdest thing may be that we not only needed an inner product on the tangent spaces, but also an orientation, to get this pairing on forms of the manifold.

Remark 23.14. Note that Ω^0 ⊕ Ω^1 ⊕ ⋯ is an orthogonal decomposition of Ω^•, because the inner product of two forms of different degrees is 0.

Definition 23.15. We will write
⟨α, β⟩ := ∫_M α ∧ Fβ.

Warning 23.16. This should not be confused with the inner product from the Riemannian metric, which spits out a function. This one spits out a number.

Proposition 23.17. We define the adjoint of d_{deR} to be δ : Ω^k → Ω^{k−1} with
⟨dα, β⟩ = ⟨α, δβ⟩
for all α, β. Explicitly,
δ = (−1)^{n(k+1)+1} F ◦ d ◦ F.

Proof. This follows from Stokes' theorem. Note that F is an invertible operation. We have
∫ dα ∧ Fβ = ∫ d(α ∧ Fβ) − (−1)^{|α|} ∫ α ∧ dFβ = 0 + (−1)^{|α|+1} ∫ α ∧ dFβ = (−1)^{|α|+1} ∫ α ∧ F(F^{−1} dFβ),
where the first term vanishes by Stokes' theorem. Therefore, up to the claimed sign,
δβ = (−1)^{|β|} F^{−1} ◦ d ◦ F β,
and F^{−1} = ±F. □

Exercise 23.18. Check the signs work out as claimed above.

Remark 23.19. I hope you get the feeling that there is so much structure and you don't know what to do with it. Somehow, all we have is F, and everything falls out from F. Once you have operators, you should try to commute them past each other and see what happens.

Definition 23.20. Define
∆ := [d, δ] := dδ − (−1)^{(1)(−1)} δd = dδ + δd,
the graded commutator; this is a map
∆ : Ω^k → Ω^k, α ↦ dδα + δdα,
also called the Laplacian of (M, g).

Exercise 23.21. If (M, g) = (R^n, g_std), then for all f ∈ Ω^0(M),
∆f = ± Σ_{i=1}^n ∂²f/∂(x^i)².

Remark 23.22. In terms of linear algebra, you should think of this as looking at the commutator of two matrices. It makes sense to look at their simultaneous eigenvalues.

23.3. Harmonic Forms and Poincare Duality.

Definition 23.23. A k-form ε is called harmonic if ∆ε = 0. Let Harm^k be the set of harmonic k-forms.

Remark 23.24. We'll see that a harmonic k-form lives in the mutual null space of d and δ. Our next big goal is to show there is a natural map Harm^k → H^k(M) which is an isomorphism, also known as the Hodge Theorem. This map comes from the second part of the following proposition. Hodge theory is all about harmonic forms.

Proposition 23.25. We have
(1) ∆F = F∆,
(2) ∆α = 0 ⟺ dα = 0 and δα = 0,
(3) ∆ is self adjoint, meaning ⟨∆α, β⟩ = ⟨α, ∆β⟩.

Proof. We will now ignore issues of signs, using δ = ±F ◦ d ◦ F and FF = ±id.
(1) We compute
∆Fα = (dδ + δd)Fα = ±dFdFFα ± FdFdFα.
For the first term, dFdFFα = ±dFdα = ±FFdFdα = ±F(FdF)dα = ±Fδdα; for the second, FdFdFα = ±Fd(FdF)α = ±Fdδα. Hence, up to signs,
∆Fα = ±F(δd + dδ)α = F∆α.

(2) If ∆α = 0, then
0 = ⟨∆α, α⟩ = ⟨dδα + δdα, α⟩ = ⟨dδα, α⟩ + ⟨δdα, α⟩ = ⟨δα, δα⟩ + ⟨dα, dα⟩ = |δα|² + |dα|²,
so δα = dα = 0. Conversely, if dα = δα = 0, then clearly ∆α = 0.
(3) This is an easy exercise. □

Corollary 23.26. α is harmonic if and only if Fα is harmonic.

Remark 23.27. Consider the vector spaces
im d := dΩ^{k−1} ⊂ Ω^k, im δ := δΩ^{k+1} ⊂ Ω^k.
These two spaces and Harm^k are mutually orthogonal. Therefore, we have an injection
dΩ^{k−1} ⊕ δΩ^{k+1} ⊕ Harm^k → Ω^k.
The Hodge decomposition theorem will state that this map is a surjection, hence an isomorphism.

Lemma 23.28. The spaces dΩ^{k−1}, δΩ^{k+1}, Harm^k are mutually orthogonal.

Proof. We have
⟨dα, δβ⟩ = ⟨d²α, β⟩ = ⟨0, β⟩ = 0.
Similarly, if ∆ε = 0, then dε = δε = 0, so
⟨ε, δβ⟩ = ⟨dε, β⟩ = ⟨0, β⟩ = 0 and ⟨ε, dα⟩ = ⟨δε, α⟩ = ⟨0, α⟩ = 0. □

Theorem 23.29 (Smooth solutions to elliptic PDEs). Let α₀ ∈ (Harm^k)^⊥. Then there exists α̃₀ ∈ Ω^k so that ∆α̃₀ = α₀.

Proof. We will omit the proof. □
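Returning to Exercise 23.21, the flat case can be checked symbolically. The sketch below (my own illustration, assuming the flat metric on R² and ignoring the global sign of δ) implements d and F on forms on the plane and verifies that ∆ = δd on functions is ±(∂²/∂x² + ∂²/∂y²):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Represent forms on flat R^2 by coefficients:
# 0-form: f; 1-form: (a, b) meaning a dx + b dy; 2-form: g meaning g dx^dy.

def d0(f):                 # d on 0-forms
    return (sp.diff(f, x), sp.diff(f, y))

def d1(w):                 # d(a dx + b dy) = (b_x - a_y) dx^dy
    a, b = w
    return sp.diff(b, x) - sp.diff(a, y)

def star1(w):              # F(dx) = dy, F(dy) = -dx
    a, b = w
    return (-b, a)

def star2(g):              # F(g dx^dy) = g
    return g

def delta1(w):             # delta on 1-forms, up to sign: -F d F
    return -star2(d1(star1(w)))

def laplacian(f):          # Delta f = delta d f (d delta f = 0 on 0-forms)
    return sp.simplify(delta1(d0(f)))

f = x**3 * y + sp.sin(x * y)
# Delta f agrees with the Euclidean Laplacian up to an overall sign.
assert sp.simplify(laplacian(f) + sp.diff(f, x, 2) + sp.diff(f, y, 2)) == 0 \
    or sp.simplify(laplacian(f) - sp.diff(f, x, 2) - sp.diff(f, y, 2)) == 0
```

With these conventions ∆f comes out as −(f_xx + f_yy), consistent with the ± in the exercise; the sign depends on the (−1)^{n(k+1)+1} convention for δ.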

Lemma 23.30. The map Harm^k → H^k(M) is an injection. In particular, Harm^k is finite dimensional because cohomology is.

Proof. We want to show that if ε, ε′ ∈ Harm^k and [ε] = [ε′], so that ε − ε′ = dα for some α, then ε = ε′. We show the norm of ε − ε′ is 0:
⟨ε − ε′, ε − ε′⟩ = ⟨ε − ε′, dα⟩ = ⟨δε − δε′, α⟩ = ⟨0, α⟩ = 0. □

Corollary 23.31. The map
dΩ^{k−1} ⊕ δΩ^{k+1} ⊕ Harm^k → Ω^k
is an isomorphism.

Proof. This is injective because it's an inclusion of mutually orthogonal subspaces. For surjectivity, define
Pα := Σ_i ⟨α, ε_i⟩ ε_i,
where {ε_i} is an orthonormal basis for Harm^k (finite by Lemma 23.30). By Theorem 23.29, choose α̃₀ so that
α − Pα = ∆α̃₀.
Then
α = Pα + dδα̃₀ + δdα̃₀
exhibits α as the sum of a harmonic form, an element of dΩ^{k−1}, and an element of δΩ^{k+1}. □

Corollary 23.32. The map Harm^k → H^k is a surjection.

Proof. Fix [α] ∈ H^k, so dα = 0. By the previous corollary, we may write
α = dα₀ + δβ + ε
with ε harmonic. But
⟨δβ, δβ⟩ = ⟨α − dα₀ − ε, δβ⟩ = ⟨dα − d²α₀ − dε, β⟩ = ⟨0, β⟩ = 0,
so δβ = 0. Hence α − ε = dα₀, so α and ε are in the same cohomology class. □

Corollary 23.33 (Poincare Duality). The map
H^k ⊗ H^{n−k} → R, [α] ⊗ [β] ↦ ∫_M α ∧ β
is nondegenerate.

Proof. Given a nonzero class in H^k, we may choose its representative α to be harmonic. Take β = Fα; then β is also harmonic (Corollary 23.26), so it represents a class in H^{n−k}. Therefore
∫_M α ∧ β = ∫_M α ∧ Fα = ⟨α, α⟩ > 0
if α ≠ 0, so the pairing is nondegenerate. □
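A finite-dimensional toy model makes this chain of corollaries concrete: on a graph, with d the incidence matrix and δ = dᵀ its adjoint, the harmonic 1-forms (the kernel of the edge Laplacian dδ) compute H¹. The following sketch (a standard combinatorial analogue, not from the notes) checks that a cycle graph, a discrete circle, has a 1-dimensional space of harmonic 1-forms, matching H¹(S¹) ≅ R:

```python
import numpy as np

def harmonic_dim(n):
    # Cycle graph C_n: n vertices, n edges (i, i+1 mod n).
    # d: 0-cochains -> 1-cochains is the incidence matrix; delta = d^T.
    d = np.zeros((n, n))
    for i in range(n):
        d[i, i] = -1.0
        d[i, (i + 1) % n] = 1.0
    # There are no 2-cells, so the Laplacian on 1-cochains is just d d^T.
    L1 = d @ d.T
    eigvals = np.linalg.eigvalsh(L1)
    # Harmonic 1-cochains = null space of L1; count near-zero eigenvalues.
    return int(np.sum(np.abs(eigvals) < 1e-9))

# dim Harm^1(C_n) = dim H^1(S^1) = 1, independent of n.
assert harmonic_dim(5) == 1
assert harmonic_dim(12) == 1
```

As in Lemma 23.30 and Corollary 23.32, the kernel of the Laplacian here is exactly ker δ ∩ ker d (there is no d on 1-cochains), and it picks out one representative per cohomology class.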

24. 12/1/15

24.1. Overview, with a twist on the lecturer. Today we have a special guest lecturer: Tristan Collins. Today, we'll discuss how to recover Maxwell's equations from Yang–Mills theory. In particular, we'll take a detour into physics!

24.2. Special Relativity. Einstein postulated two rules:
(1) The laws of physics are the same in all inertial frames.
(2) The speed of light, which we'll denote c, is finite and the same in all inertial frames.

Question 24.1. What does this mean mathematically?

If we have some metric which describes the laws of geometry, then that metric has to be invariant under the group of Lorentzian isometries, where the Lorentz group is O(1, 3). Consider M = R³ × R with coordinates (x, y, z, t). Then axioms (1) and (2) above hold for
ds² = c²dt² − (dx² + dy² + dz²),
that is, for the metric
g = diag(c², −1, −1, −1)
in the coordinates (t, x, y, z). Observe that
(1) g is not positive definite, but
(2) g is nondegenerate.

Remark 24.2. We can still do Riemannian geometry when g is only nondegenerate rather than positive definite. However, there will be some new strange features coming up. The curve
γ(s) = (a·s, b·s, d·s, s) + p⃗
is a geodesic, and
|γ′(s)| = √(c² − (a² + b² + d²)).

Remark 24.3. The statement that nothing can move faster than light implies
a² + b² + d² ≤ c².
Light itself is characterized by equality.

Lemma 24.4. If γ(s) is a geodesic with |γ′(s)| = 0, that is, with spatial speed a² + b² + d² = c², then the length
∫₀^τ |γ′(s)| ds = 0.

Proof. Immediate because |γ′(s)| = 0. □

Remark 24.5. Spacetime has an interesting causal structure. We can draw a picture of a cone: up is the future, down is the past, and the boundary of the cone is called the null cone, the cone generated by light-like geodesics. The only way to influence an event at the origin of the cone is to be inside the light cone in the past.

Now, consider the complex line bundle which is the trivial 1-complex-dimensional bundle over
R^{3,1} := (R³ × R, c²dt² − g_{R³}).
Put a connection on this bundle. In this case,
∇ = d + A_α dx^α.

Definition 24.6. Here is our convention for Einstein summation notation on spacetime: Greek indices run over 0, ..., 3, with x⁰ = t; Roman indices run over 1, ..., 3.

Example 24.7.
A_α = (A₀, A₁, A₂, A₃).

24.3. The Differential Geometry Set Up.

Definition 24.8. In general, if E → M is a vector bundle with a connection
∇ = d + A_j dx^j,
meaning
∇_{∂/∂x^j} σ = ∂σ/∂x^j + A_j σ,
we define the curvature by
F_{jk} = [∇_j, ∇_k],
where F ∈ Γ(M, Ω²(End(E))). That is, if we plug in two vector fields, we get an endomorphism of E. In terms of coordinates, we have

F_{jk}σ = ∇_j∇_kσ − ∇_k∇_jσ
= (∂_j + A_j)(∂_kσ + A_kσ) − (∂_k + A_k)(∂_jσ + A_jσ)
= ∂_j∂_kσ + (∂_j A_k)σ + A_k∂_jσ + A_j∂_kσ + A_jA_kσ − ∂_k∂_jσ − (∂_k A_j)σ − A_j∂_kσ − A_k∂_jσ − A_kA_jσ
= (∂_j A_k)σ + A_jA_kσ − (∂_k A_j)σ − A_kA_jσ,
writing ∂_j := ∂/∂x^j.

So, invariantly, we can write
F = dA + A ∧ A,
using
A = A_k dx^k, dA = Σ_{j,k} ∂_j A_k dx^j ∧ dx^k.

Lemma 24.9 (Bianchi Identity). ∇_A F = 0.

Proof. We have
∇F = dF + A ∧ F − F ∧ A,
essentially by definition, because F ∈ Ω²(End(E)). This is
∇F = d(dA + A ∧ A) + A ∧ (dA + A ∧ A) − (dA + A ∧ A) ∧ A
= dA ∧ A − A ∧ dA + A ∧ dA + A ∧ A ∧ A − dA ∧ A − A ∧ A ∧ A
= 0.
Here, writing A = A_j dx^j, we have
A ∧ A = Σ_{j,k} A_j A_k dx^j ∧ dx^k. □
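For a rank 1 bundle the matrices A_j are scalars, so A ∧ A = 0, F = dA, and the Bianchi identity reduces to dF = 0. This can be checked symbolically; in the sketch below the potential A is an arbitrary sample of my choosing, and we verify the antisymmetry of F_{μν} = ∂_μ A_ν − ∂_ν A_μ together with the cyclic identity ∂_ρF_{μν} + ∂_μF_{νρ} + ∂_νF_{ρμ} = 0:

```python
import sympy as sp

t, x, y, z = coords = sp.symbols('t x y z')

# A sample scalar potential A_alpha on R^{3,1} (an arbitrary illustrative choice).
A = [x * y * t, sp.sin(z) * t, x**2 * z, sp.exp(x) * y]

# Components of F = dA: F_{mu nu} = d_mu A_nu - d_nu A_mu.
F = [[sp.diff(A[nu], coords[mu]) - sp.diff(A[mu], coords[nu])
      for nu in range(4)] for mu in range(4)]

for mu in range(4):
    for nu in range(4):
        # Antisymmetry of the curvature 2-form.
        assert sp.simplify(F[mu][nu] + F[nu][mu]) == 0
        for rho in range(4):
            # Bianchi identity dF = 0 in components (cyclic sum).
            bianchi = (sp.diff(F[mu][nu], coords[rho])
                       + sp.diff(F[nu][rho], coords[mu])
                       + sp.diff(F[rho][mu], coords[nu]))
            assert sp.simplify(bianchi) == 0
```

The cyclic sum vanishes by equality of mixed partials, which is exactly d² = 0 in components.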

Example 24.10. In the rank 1 case, we have F = dA, because F = dA + A ∧ A = dA, as 1 × 1 matrices commute with each other, so A ∧ A = 0. Then the curvature is
F = Σ_{μ,α} ∂_μ A_α dx^μ ∧ dx^α = (1/2) Σ_{μ,α} (∂_μ A_α − ∂_α A_μ) dx^μ ∧ dx^α.

24.4. Toward Maxwell's equations.

Goal 24.11. Our goal for today is to recover Maxwell's equations from geometry.

Recall, in R^{3,1}, we have
E⃗(x, t) = (E₁(x, t), E₂(x, t), E₃(x, t)),
B⃗(x, t) = (B₁(x, t), B₂(x, t), B₃(x, t)),
where E is an electric field and B is a magnetic field. Then Maxwell's equations (in vacuum) are:

Theorem 24.12 (Maxwell's Equations). Letting E be an electric field and B a magnetic field, we have
∇ · E = 0,
∂E/∂t + ∇ × B = 0,
∇ · B = 0,
∂B/∂t − ∇ × E = 0,
with signs depending on orientation and metric conventions.

Proof. Define E_a = F_{0a} and B_a = F_{bc}, where abc is a cyclic permutation of 123. We then have the matrix of components
F = ( 0, E₁, E₂, E₃; −E₁, 0, B₃, −B₂; −E₂, −B₃, 0, B₁; −E₃, B₂, −B₁, 0 ).
We want to check that this curvature will produce a solution of Maxwell's equations.

Question 24.13. How does one derive physical laws?

One writes down some action on the relevant space of fields and studies the critical points of that action. Here, we have a natural action on the space of connections. Consider
A ↦ I(A) = −(1/4) ∫_{R^{3,1}} g^{μβ} g^{να} F_{βα} F_{μν} dx.
This is just the L² norm of the curvature, where F = dA + A ∧ A; it is a map from the space of connections to R. We have F(A) = dA because the bundle has rank 1.

Remark 24.14. A naive analogy is that we can consider the map
C^∞(M) → R, φ ↦ ∫ |∇φ|²,
whose critical points are harmonic functions.

Question 24.15. What are the critical points of I(A)?

That is, I can be thought of as a function from the infinite dimensional manifold of connections to the reals, and we can ask where the derivative of this function is 0. Given A, consider A_t = A + tτ, where τ ∈ Ω¹(End(E)). Computing
(∂/∂t) I(A_t)|_{t=0},
we find that the critical points satisfy
∂_ν F^{μν} = 0, dF = 0,
where the latter identity is the Bianchi identity, and where
F^{μν} = g^{μβ} g^{να} F_{αβ}.
Let's write out what dF = 0 says. We have
0 = dF = d(F_{μν} dx^μ ∧ dx^ν) = ∂_ρ F_{μν} dx^ρ ∧ dx^μ ∧ dx^ν.
There are now several cases:
(1) First, take the coefficient of dx¹ ∧ dx² ∧ dx³. This is
0 = ∂₁F₂₃ + ∂₂F₃₁ + ∂₃F₁₂ = ∇ · B,
using the definition of divergence, and so we recover Maxwell's third equation.
(2) Second, let's look at the coefficient of dx⁰ ∧ dx¹ ∧ dx². This gives the equation
0 = ∂₀F₁₂ + ∂₁F₂₀ + ∂₂F₀₁ = ∂B₃/∂t − (∇ × E)₃.
Continuing in this way, we see that a critical point of the action I(A) gives us a solution of Maxwell's equations. □

Warning 24.16. The following remark may use some terminology we have not yet seen.

Remark 24.17. It is an extremely interesting problem to study the critical points of the Yang–Mills action. Somehow, in the simplest possible case, we already recovered Maxwell's equations. Other cases are also quite interesting. Another interesting case is the study of critical points for connections on a holomorphic vector bundle over a compact Kähler manifold. In general, there turn out to be no critical points; in fact, the existence of critical points is equivalent to an algebro-geometric condition, Mumford–Takemoto stability.

∞ φ 7 |∇φ|2 → Z and the critical points are Harmonic Functions.→ Question 24.15. What are the critical points of I(A)? That is, I can be thought of as a function from the inﬁnite dimensional manifold of connections to the reals, and we can ask where the derivative of this function is 0. 1 Given A, consider At = A + tτ where τ ∈ Ω (End(E)). Compute ∂ I(A )| ∂dt t t=0 and we ﬁnd that the critical points satisfy ∂ Fµν = 0 ∂xν dF = 0 NOTES FOR MATH 230A, DIFFERENTIAL GEOMETRY 117 where the latter identity is the Bianchi identity. Let’s write out what this is. First, deﬁne µν µβ να F = g g Fαβ We have 0 = dF µ ν = dFµ,νdx ∧ dx ∂ = F dxρ ∧ dxµ ∧ dxν ∂ρ µν There are now several cases: (1) First, take the coefﬁcient of dx1 ∧ dx2 ∧ dx3. This is ∂ ∂ ∂ 0 = F + F + F = ∇ · ∇B ∂x1 23 ∂x2 31 ∂x3 12 using the deﬁnition of divergence, and so we recover Maxwell’s third equa- tion. (2) Second, let’s look at the coefﬁcient of dx0 ∧ dx1 ∧ dx2. This gives the equa- tion ∂ ∂ ∂ ∂ 0 = F + F + F = B − (∇ × E) ∂x0 12 ∂x1 20 ∂x2 01 ∂t 3 3 Continuing in this way, we see that a critical point of the action of I(A) gives us a solution of Maxwell’s equations. Warning 24.16. The following remark may use some terminology we have not yet seen. Remark 24.17. It is an extremely interesting problem to study the critical points of Maxwell’s equations. Somehow, in the simplest possible case, we already recov- ered Maxwell’s equations. Other cases are also quite interesting. Another interesting case is when we study critical points connections of a holomorphic vector bundle on a compact Kahler manifold. In general, there turn out to be no critical points. In fact, the existence of critical points is equivalent to some algebro geometric condition. The algebro geometric conditions are Mumford-Takamoto stability. 
That is, if E is a holomorphic vector bundle over a compact Kähler manifold (X, ω), then critical points of I over a suitable space are equivalent to Hermitian metrics H on E whose curvature satisfies
Λ_ω F_H = c · id,
called the Hermitian–Yang–Mills equation, where c is a topological constant. The left hand side is the contraction of the 2-form F_H with the Kähler form.

Theorem 24.18 (Donaldson–Uhlenbeck–Yau). Suppose E is irreducible (not a direct sum of other vector bundles). Then there exists a solution to the Hermitian–Yang–Mills equation if and only if for all coherent torsion-free subsheaves S ⊂ E, we have
deg S / rk S < deg E / rk E,
known as Mumford–Takemoto stability.

In fact, every force other than gravity can be obtained from some such critical point. This yields a beautiful relationship between algebraic geometry, differential geometry, and physics.
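The component computation in the proof of Theorem 24.12 can be verified symbolically. In the sketch below (my own check, with generic function entries for E and B), F is assembled exactly as in the proof, and the coefficients of dF on dx¹∧dx²∧dx³ and dx⁰∧dx¹∧dx² are confirmed to be ∇·B and ∂B₃/∂t − (∇×E)₃:

```python
import sympy as sp

t, x, y, z = coords = sp.symbols('t x y z')
E = [sp.Function(f'E{i}')(t, x, y, z) for i in (1, 2, 3)]
B = [sp.Function(f'B{i}')(t, x, y, z) for i in (1, 2, 3)]

# F_{mu nu} as in the proof: E_a = F_{0a}, B_a = F_{bc} for abc cyclic in 123.
F = sp.Matrix([[0,     E[0],  E[1],  E[2]],
               [-E[0], 0,     B[2], -B[1]],
               [-E[1], -B[2], 0,     B[0]],
               [-E[2], B[1], -B[0],  0]])

def dF_coeff(r, m, n):
    # Coefficient of dx^r ^ dx^m ^ dx^n (r < m < n) in dF.
    return (sp.diff(F[m, n], coords[r])
            - sp.diff(F[r, n], coords[m])
            + sp.diff(F[r, m], coords[n]))

# Coefficient of dx^1 ^ dx^2 ^ dx^3: the divergence of B.
div_B = sum(sp.diff(B[i], coords[i + 1]) for i in range(3))
assert sp.simplify(dF_coeff(1, 2, 3) - div_B) == 0

# Coefficient of dx^0 ^ dx^1 ^ dx^2: d/dt B_3 - (curl E)_3.
faraday3 = sp.diff(B[2], t) - (sp.diff(E[1], x) - sp.diff(E[0], y))
assert sp.simplify(dF_coeff(0, 1, 2) - faraday3) == 0
```

This is purely the identity dF = 0 in components; the other Maxwell pair would come from the Euler–Lagrange equation ∂_ν F^{μν} = 0.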

25. 12/3/15

25.1. Overview. Today, we'll talk about the "really smooth" version of Riemann–Hilbert. Last week we finished the discussion of Hodge theory and Poincare duality. Today, we'll build up some intuition for principal G-bundles.

25.2. Principal G-bundles. Recall, we can build bundles out of cocycles. If we want to build a bundle E → M whose fiber over a point is F, we can choose a cocycle:
(1) maps
g_{αβ} : U_{αβ} → Diff(F)
(2) satisfying g_{αβ} ◦ g_{βγ} = g_{αγ}.
Then we can construct ∐_α U_α × F / ∼, and we used this construction to construct the tangent bundle early in the class. In particular, if F = G is a Lie group, there is a natural map G → Diff(G) given by sending g to left multiplication by g. Consider a cocycle with values in G,
g_{αβ} : U_{αβ} → G.

Given {g_{αβ}}, construct
P := ∐_α U_α × G / ∼.

Remark 25.1. P has a right action of G: in a local chart,
(25.1) (U_α × G) × G → U_α × G, ((x, h), g) ↦ (x, hg),
compatibly with the projections U_α × G → P and U_α → M. This is well defined because the transition functions act by left multiplication, which commutes with right multiplication.

Definition 25.2. A bundle P with fibers G and cocycle valued in G (acting by left multiplication) is a principal G-bundle.

Example 25.3.
(1) Take P = G × M. This is the trivial principal G-bundle.
(2) Take P = S³ →^π S², the Hopf fibration. Here is how this is constructed: recall S² ≅ S³/S¹, where
S³ ⊂ {(z₀, z₁) ∈ C × C}
and S¹ acts by e^{iθ} · (z₀, z₁) = (e^{iθ}z₀, e^{iθ}z₁). We saw in homework that this quotient is diffeomorphic to S².
(3) Fix a vector bundle E → M. By definition, E is constructed from a cocycle

g_{αβ} : U_{αβ} → GL_n(R).
In particular, this is enough to construct a GL_n(R) principal bundle as follows. Over each U_α we have a copy of GL_n(R) × U_α, and the transition function is given by left multiplication by the g_{αβ} maps for E. Essentially, the relationship between this principal bundle and the original vector bundle is that we can interpret an element of a fiber of the GL_n(R) bundle as a choice of basis for a fiber of E. In other words, elements of (P_E)_x ≅ GL_n are identified with choices of basis for E_x.

25.3. Connections and curvature on principal G-bundles.

Goal 25.4. Today, we want to
(1) define connections and curvature for principal G-bundles, and
(2) prove that if ω is a flat connection on P, then ω defines a group homomorphism π₁M → G.

Question 25.5. What kind of geometry does a principal G-bundle P have?

Since P has a right action P × G → P, we have an induced map g → Γ(TP), which corresponds to a "rotation" by the Lie group G on the fibers. Moreover, we have the smooth projection π : P → M.

Definition 25.6. For p ∈ P, we have a subspace
V_p ⊂ T_p P,
which we define to be the vertical tangent space, given by
V_p := ker Dπ_p.

Remark 25.7. We have an isomorphism
V_p ≅ g.
Why is this? Fixing p ∈ P, we have a smooth map
j_p : G → P, g ↦ pg,
and j_p(G) = π^{−1}(π(p)): because right multiplication is transitive on the fiber, we obtain the whole fiber. Then D(j_p)_e : g → V_p is an isomorphism.

Definition 25.8. Let
R_g : P → P
denote the right action of g ∈ G.

Definition 25.9. A connection on P is a choice of subbundle H ⊂ TP, called a horizontal subbundle, so that
(1) H ⊕ V ≅ TP, and
(2) DR_g(H) = H for all g ∈ G.
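Returning to the Hopf fibration of Example 25.3(2): the quotient map S³ → S² and the S¹-action on the fibers can be checked numerically. The formula used below for the Hopf map is the standard one (it is not derived in the notes):

```python
import numpy as np

def hopf(z0, z1):
    # Hopf map S^3 -> S^2 in complex coordinates, landing in R^3:
    # (z0, z1) -> (|z0|^2 - |z1|^2, 2 Re(z0 conj(z1)), 2 Im(z0 conj(z1))).
    return np.array([abs(z0)**2 - abs(z1)**2,
                     2 * (z0 * np.conj(z1)).real,
                     2 * (z0 * np.conj(z1)).imag])

rng = np.random.default_rng(0)
for _ in range(100):
    v = rng.normal(size=4)
    v /= np.linalg.norm(v)                      # a random point of S^3
    z0, z1 = complex(v[0], v[1]), complex(v[2], v[3])
    p = hopf(z0, z1)
    assert abs(np.linalg.norm(p) - 1) < 1e-12   # the image lies on S^2
    w = np.exp(1j * rng.uniform(0, 2 * np.pi))  # the S^1 right action
    assert np.allclose(hopf(w * z0, w * z1), p) # fibers are S^1-orbits
```

The invariance check is the statement that the fibers of π are exactly the S¹-orbits, which is what makes π a principal S¹-bundle.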

25.4. An Algebraic characterization of connections on principal G-bundles. One of the first theorems we want to prove is an algebraic characterization, since algebraic characterizations usually let us do things with geometric objects.

Definition 25.10. Fix a Lie algebra V. Define
Ω^•(M; V) := Ω^•(M) ⊗_R V.

Remark 25.11. Here is a general principle: if R is a commutative ring and A is an algebra, then R ⊗ A is an algebra. This comes up because we should think of de Rham forms as a commutative algebra. When we tensor with a Lie algebra V, the resulting object should also carry a bracket: Ω^•(M) ⊗ V is a graded Lie algebra.

Definition 25.12. Define the bracket on Ω^•(M; V) by
Ω^•(M; V) × Ω^•(M; V) → Ω^•(M; V), (α ⊗ u, β ⊗ w) ↦ (α ∧ β) ⊗ [u, w],
where α, β ∈ Ω^•(M) and u, w ∈ V. For the remainder of this lecture, we'll denote elements of Ω^•(M; V) by w, u, v.

Lemma 25.13. The bracket [•, •] satisfies:
(1) Defining d(α ⊗ u) := (dα) ⊗ u, we have
d[w, u] = [dw, u] + (−1)^{|w|}[w, du].
(2) [u, [v, w]] = [[u, v], w] + (−1)^{|u||v|}[v, [u, w]].

Proof. Omitted. □

Example 25.14. Take M = G. There exists a g-valued 1-form w so that if X ∈ g is left invariant, we have
w_x(X_x) = X.
For shorthand, we'll write this as w(X) = X. Note that here
w : Γ(TM) → C^∞(M) ⊗ V,
so if M = G and V = g, then w defines a map
Γ(TG) → C^∞(G) ⊗ g ≅ Γ(TG),
because the tangent bundle of a Lie group is trivialized by left invariant vector fields; this 1-form is characterized by inducing the identity map.

Definition 25.15. The 1-form satisfying w(X) = X for left invariant X is called the Maurer–Cartan form of G. From now on, w denotes the Maurer–Cartan form.

Proposition 25.16 (Maurer–Cartan Formula). We have
dw = −(1/2)[w, w].
Further, w, the Maurer–Cartan form, is a globally defined 1-form.

Proof. Choose a basis v₁, ..., v_n ∈ g. Once we choose such a basis there is a canonical dual basis w¹, ..., w^n ∈ g^∨. Note that g^∨ is, as a set, the left invariant smooth 1-forms. Then
w = Σ_{i=1}^n w^i ⊗ v_i.
In particular, this shows that w, defined fiber by fiber previously, is a globally defined 1-form.

Then, there exist unique structure constants c^i_{jk} so that
[v_j, v_k] = Σ_i c^i_{jk} v_i.
Let us now calculate. Since left invariant vector fields span Γ(TG), it suffices to compute on left invariant vector fields X, Y. Write
X = Σ_j X^j v_j, Y = Σ_k Y^k v_k.
Then
dw(X, Y) = Σ_i dw^i(X, Y) ⊗ v_i
= Σ_i (X w^i(Y) − Y w^i(X) − w^i([X, Y])) ⊗ v_i
= −Σ_i w^i([X, Y]) ⊗ v_i
= −Σ_i w^i(Σ_a c^a_{jk} X^j Y^k v_a) ⊗ v_i
= −Σ_{i,j,k} c^i_{jk} X^j Y^k ⊗ v_i,
because w^i(X) and w^i(Y) are constant functions, so their directional derivatives are 0; here X^j = w^j(X). On the other hand,
−(1/2) Σ_{i,j,k} c^i_{jk} (w^j ∧ w^k)(X, Y) ⊗ v_i = −(1/2) Σ_{i,j,k} c^i_{jk} (X^j Y^k − Y^j X^k) ⊗ v_i = −Σ_{i,j,k} c^i_{jk} X^j Y^k ⊗ v_i,
the factor of −1/2 coming from pairing up the j, k terms, which contribute symmetrically. So
dw = −(1/2) Σ_{i,j,k} c^i_{jk} w^j ∧ w^k ⊗ v_i.

Now, computing the right hand side, we have
[w, w](X, Y) = Σ_{i,j} (w^i ∧ w^j ⊗ [v_i, v_j])(X, Y)
= Σ_{i,j} (w^i ∧ w^j)(X, Y) ⊗ [v_i, v_j]
= Σ_{i,j,k} (X^i Y^j − X^j Y^i) ⊗ c^k_{ij} v_k,
and multiplying this by −1/2 gives us precisely the term above. □

Remark 25.17. In Ω^•(M; V), we have
[w, u] = (−1)^{1+|w||u|}[u, w].
So if |w| and |u| are both odd, then [w, u] = [u, w], which is a bit strange to us, since it's the opposite of the commutative world, where α ∧ β = −β ∧ α.

Here is one more tool:

Question 25.18. We have L_g^* w = w. Can we compute R_g^* w?

Proposition 25.19. We have
R_g^* w = Ad(g^{−1})w,
where Ad(g^{−1}) : g → g is the map
(25.2)

D(C_{g^{−1}}) : T_e G → T_e G,
where C_g is conjugation by g.

Proof. We need to compute
(R_g^* w)(•) = w(DR_g(•)).
But note that if we plug in a left invariant vector field for •, both sides are determined by their values at e ∈ G, so it suffices to check there. Computing, for X left invariant,
R_g^* w(X) = w(DR_g(X)) = w(DR_g ◦ DL_{g^{−1}}(X)) = w(DC_{g^{−1}}(X)) = DC_{g^{−1}}(X) = DC_{g^{−1}}(w(X)). □

We want to next state the algebraic characterization of connections. Here is the payoff:

Theorem 25.20. Fix a principal G-bundle P. Then there is a bijection
{H ⊂ TP : H ⊕ V ≅ TP, DR_g(H) = H} ≅ {w ∈ Ω¹(P; g) : w_p(X^♯_p) = X for all X ∈ g, R_g^* w = Ad(g^{−1})w},
writing X^♯ for the vertical vector field generated by X ∈ g; here w_p : T_p P → g, and in general w : Γ(TP) → C^∞(P) ⊗ g.

Proof. Let's construct the map from right to left. Map
w ↦ H := ker w.
First, property (1) on the right hand side implies property (1) on the left hand side because w, by definition, is nonsingular along V: on each fiber,
w_p : V_p → g
defines an isomorphism, so ker w_p is a complement to V_p in T_p P. To show the second property, fix v ∈ ker w_p. Then
w_{pg}(DR_g(v)) = (R_g^* w)(v) = Ad(g^{−1})(w_p(v)) = 0.
This means that if v ∈ ker w_p, then DR_g(v) ∈ ker w_{pg}. Since R_g is invertible, with inverse R_{g^{−1}}, we have property (2).

For the inverse map from left to right, given H, for each p ∈ P write a tangent vector as v_h + u ∈ H_p ⊕ V_p and define
w_p(v_h + u) := u ∈ g,
using the identification V_p ≅ g = T_e G. This inverse map is essentially "project everything to the vertical tangent space." The rest of the proof is left as an exercise. □

Definition 25.21. Either H or w is called a connection on P.

Proposition 25.22. Any P admits a connection.

Proof. If P is trivial, P = G × M, then this is pretty obvious: consider the projection pr : P → G and set
w = pr^* w_{MC},
where w_{MC} is the Maurer–Cartan form. If P is not trivial, choose a trivializing cover {U_α} and set
w_α = pr_α^* w_{MC}, with pr_α : U_α × G → G.
Then, with π : P → M the projection, define
w = Σ_α (f_α ◦ π) w_α,
where {f_α} is a partition of unity subordinate to {U_α}. □
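The Maurer–Cartan formula dw = −(1/2)[w, w] can be verified concretely for a matrix group, where w = g⁻¹dg and −(1/2)[w, w] agrees with −w ∧ w under matrix multiplication. The sketch below (my own illustration for the two-dimensional group of matrices [[a, b], [0, 1]]) checks dw + w ∧ w = 0 entry by entry:

```python
import sympy as sp

a, b = sp.symbols('a b', positive=True)

# The ax+b group: g = [[a, b], [0, 1]]; Maurer-Cartan form w = g^{-1} dg.
# A 1-form f da + h db is represented by the pair (f, h); a 2-form by its
# coefficient of da ^ db.

g = sp.Matrix([[a, b], [0, 1]])
ginv = g.inv()
# Entries of dg as (coeff of da, coeff of db).
dg = [[(sp.diff(g[i, j], a), sp.diff(g[i, j], b)) for j in range(2)]
      for i in range(2)]
# w = g^{-1} dg, entrywise pairs of coefficient functions.
w = [[(sum(ginv[i, k] * dg[k][j][0] for k in range(2)),
       sum(ginv[i, k] * dg[k][j][1] for k in range(2))) for j in range(2)]
     for i in range(2)]

def d_oneform(fh):              # d(f da + h db) = (h_a - f_b) da ^ db
    f, h = fh
    return sp.diff(h, a) - sp.diff(f, b)

def wedge(u, v):                # (f1 da + h1 db) ^ (f2 da + h2 db)
    return u[0] * v[1] - u[1] * v[0]

# Matrix Maurer-Cartan equation dw + w ^ w = 0, equivalent for matrix
# groups to dw = -(1/2)[w, w].
for i in range(2):
    for j in range(2):
        ww = sum(wedge(w[i][k], w[k][j]) for k in range(2))
        assert sp.simplify(d_oneform(w[i][j]) + ww) == 0
```

Here w works out to da/a in the (0,0) slot and db/a in the (0,1) slot, and the single nontrivial cancellation is d(db/a) = −(1/a²) da ∧ db against (da/a) ∧ (db/a).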

Remark 25.23. So, essentially, horizontal sections are sections whose tangent vectors lie in the horizontal subbundle H.

25.5. Curvature as Integrability.

Definition 25.24. We saw
dw = −(1/2)[w, w]
on G = P → pt. In general, given a connection w on P, we have dw ≠ −(1/2)[w, w]; rather, we write
dw = −(1/2)[w, w] + Ω,
where Ω is some 2-form. The 2-form Ω is the curvature of w. We say w is flat if Ω = 0.

Proposition 25.25. We have
(1) R_g^* Ω = Ad(g^{−1}) ◦ Ω,
(2) for all X, Y ∈ T_p P, Ω(X, Y) = dw(X_{horiz}, Y_{horiz}),
(3) if X, Y ∈ H are horizontal, then Ω(X, Y) = w([X, Y]) up to sign,
(4) dΩ = [Ω, w], which is called the Bianchi identity.

Proof. Omitted. □

Remark 25.26. We will just prove (3). Observe that (2) tells us
Ω(X, Y) = 0 for all X, Y ∈ Γ(H) ⟺ Ω(X, Y) = 0 for all X, Y.
However, by (3), the former is equivalent to
[X, Y] ∈ Γ(H) for all X, Y ∈ Γ(H).
This is an instance of a generalization of Frobenius' theorem: involutivity is the same as integrability.

Assuming w is flat, let M̃ be an integral submanifold for H. We have a commuting diagram
(25.3) M̃ → P, with η : M̃ → M the composite with the projection.
Under the projection map, the derivative of η is an isomorphism; moreover, η is a covering map. So, given a covering map, we can lift paths: if we have a loop γ : [0, 1] → M, we get a lift γ̃ : [0, 1] → M̃. Say γ̃(0) = p; then there is a unique element g ∈ G with γ̃(1) = pg. This is an assignment
π₁(M) → G, γ ↦ g.

Question 25.27. Is this a group homomorphism?

Not quite. By translating, we realize this doesn't satisfy the homomorphism property, but rather the opposite one. That is, we get a map
π₁(M)^{op} → G, γ ↦ g.
In this way, we get a map
{(P, flat w)} → Rep(π₁(M)^{op}, G).

Theorem 25.28. If we mod out both sides of the above map by isomorphism, this map is a bijection.

Proof. Omitted. □

Remark 25.29. If M is compact and has a finitely generated fundamental group, we can describe the space of representations in terms of polynomial equations. It's very surprising that flat connections can be expressed in terms of polynomial expressions.