M382D NOTES: DIFFERENTIAL TOPOLOGY

ARUN DEBRAY
MAY 16, 2016

These notes were taken in UT Austin's Math 382D (Differential Topology) class in Spring 2016, taught by Lorenzo Sadun. I live-TeXed them using vim, and as such there may be typos; please send questions, comments, complaints, and corrections to [email protected]. Thanks to Adrian Clough, Parker Hund, Max Reistenberg, and Thérèse Wu for finding and correcting a few mistakes.

CONTENTS

1. The Inverse and Implicit Function Theorems: 1/20/16
2. The Contraction Mapping Theorem: 1/22/16
3. Manifolds: 1/25/16
4. Abstract Manifolds: 1/27/16
5. Examples of Manifolds and Tangent Vectors: 1/29/16
6. Smooth Maps Between Manifolds: 2/1/16
7. Immersions and Submersions: 2/3/16
8. Transversality: 2/5/16
9. Properties Stable Under Homotopy: 2/8/16
10. May the Morse Be With You: 2/10/16
11. Partitions of Unity and the Whitney Embedding Theorem: 2/12/16
12. Manifolds-With-Boundary: 2/15/16
13. Retracts and Other Consequences of Boundaries: 2/17/16
14. The Thom Transversality Theorem: 2/19/16
15. The Normal Bundle and Tubular Neighborhoods: 2/22/16
16. The Extension Theorem: 2/24/16
17. Intersection Theory: 2/26/16
18. The Jordan Curve Theorem and the Borsuk-Ulam Theorem: 2/29/16
19. Getting Oriented: 3/2/16
20. Orientations on Manifolds: 3/4/16
21. Orientations and Preimages: 3/7/16
22. The Oriented Intersection Number: 3/9/16
23. Exam Debriefing: 3/21/16
24. (Minus) Signs of the Times: 3/23/16
25. The Lefschetz Number: 3/25/16
26. Multiple Roots and the Lefschetz Number: 3/28/16
27. Vector Fields and the Euler Characteristic: 3/30/16
28. : 4/1/16
29. The Hopf Degree Theorem: 4/4/16
30. The Exterior Derivative: 4/6/16
31. Pullback of Differential Forms: 4/8/16
32. Differential Forms on Manifolds: 4/11/16
33. Stokes’ Theorem: 4/13/16
34. Tensors: 4/15/16
35. Exterior Algebra: 4/18/16
36. The Intrinsic Definition of Pullback: 4/20/16
37. de Rham Cohomology: 4/22/16
38. Homotopy Invariance of Cohomology: 4/25/16
39. Chain Complexes and the Snake Lemma: 4/27/16
40. The Snake Lemma: 4/29/16
41. Good Covers and Compactly Supported Cohomology: 5/2/16
42. The Degree in Cohomology: 5/4/16

Lecture 1. The Inverse and Implicit Function Theorems: 1/20/16

“The most important lesson of the start of this class is the proper pronunciation of my name [Sadun]: it rhymes with ‘balloon.’ ”

We’re basically going to march through the textbook (Guillemin and Pollack), with a little more in the beginning and a little more in the end; however, we’re going to be a bit more abstract, talking about manifolds abstractly rather than just embedding them in $\mathbb{R}^n$, though the theorems are mostly the same. At the beginning, we’ll discuss the analytic underpinnings to differential topology in more detail, and at the end, we’ll hopefully have time to discuss de Rham cohomology.

Suppose $f\colon \mathbb{R}^n \to \mathbb{R}^m$. Its derivative is $df$; what exactly is this? There are several possible answers.
• It’s the best linear approximation to $f$ at a given point.
• It’s the matrix of partial derivatives.
What we need to do is make good, rigorous sense of this, more so than in multivariable calculus, and relate the two notions.

Definition 1.1. A function $f\colon \mathbb{R}^n \to \mathbb{R}^m$ is differentiable at a point $a \in \mathbb{R}^n$ if there exists a linear map $L\colon \mathbb{R}^n \to \mathbb{R}^m$ such that
\[
\lim_{h \to 0} \frac{|f(a+h) - f(a) - L(h)|}{|h|} = 0. \tag{1.2}
\]
In this case, $L$ is called the differential of $f$ at $a$, written $df|_a$.
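As a quick one-variable sanity check of (1.2), take $f(x) = x^2$: at any $a$, the candidate $L(h) = 2ah$ gives
\[
\frac{|f(a+h) - f(a) - L(h)|}{|h|} = \frac{|h^2|}{|h|} = |h| \longrightarrow 0
\]
as $h \to 0$, so $f$ is differentiable at $a$ with $df|_a(h) = 2ah$, recovering the usual single-variable derivative.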
Since $h \in \mathbb{R}^n$ but the vector in the numerator is in $\mathbb{R}^m$, it’s quite important to have the magnitudes there, or else (1.2) would make no sense. Another way to rewrite this is that $f(a+h) = f(a) + L(h) + o(\text{small})$, i.e. along with some small error (whatever that means). This makes sense of the first notion: $L$ is a linear approximation to $f$ near $a$. Now, let’s make sense of the second notion.

Theorem 1.3. If $f$ is differentiable at $a$, then $df|_a$ is given by the matrix $\left(\frac{\partial f^i}{\partial x^j}\right)$.

Proof. The idea: if $f$ is differentiable at $a$, then (1.2) holds for $h \to 0$ along any path! So let $e_j$ be a unit vector and take $h = te_j$ as $t \to 0$ in $\mathbb{R}$. Then, (1.2) reduces to
\[
L(e_j) = \lim_{t \to 0} \frac{f(a_1, a_2, \dots, a_j + t, a_{j+1}, \dots, a_n) - f(a)}{t},
\]
and taking the $i^{\text{th}}$ component, this shows $L(e_j)^i = \frac{\partial f^i}{\partial x^j}$.

In particular, if $f$ is differentiable, then all partial derivatives exist. The converse is false: there exist functions whose partial derivatives exist at a point $a$ but which are not differentiable there. In fact, one can construct a function whose directional derivatives all exist, but which is not differentiable! There will be an example on the first homework. The idea is that directional derivatives record linear paths, but differentiability requires all paths, and so making things fail along, say, a quadratic, will produce these strange counterexamples. Nonetheless, if all partial derivatives exist, then we’re almost there.

Theorem 1.4. Suppose all partial derivatives of $f$ exist at $a$ and are continuous on a neighborhood of $a$; then, $f$ is differentiable at $a$.

In calculus, one can formulate several “guiding” ideas, e.g. the whole change is the sum of the individual changes, the whole is the (possibly infinite) sum of the parts, and so forth. One particular one is: one variable at a time. This principle will guide the proof of this theorem.

Proof. The proof will be given for $n = 2$ and $m = 1$, but you can figure out the small details needed to generalize it; for larger $m$, just repeat the argument for each component. We want to compute
\[
f(a_1 + h_1, a_2 + h_2) - f(a_1, a_2) = \bigl(f(a_1 + h_1, a_2 + h_2) - f(a_1 + h_1, a_2)\bigr) + \bigl(f(a_1 + h_1, a_2) - f(a_1, a_2)\bigr).
\]
Regrouping, this is two single-variable questions. In particular, we can apply the mean value theorem: there exist $c_1, c_2 \in \mathbb{R}$ such that this equals
\[
h_2 \left.\frac{\partial f}{\partial x^2}\right|_{(a_1+h_1,\,a_2+c_2)} + h_1 \left.\frac{\partial f}{\partial x^1}\right|_{(a_1+c_1,\,a_2)}
= \left( \left.\frac{\partial f}{\partial x^1}\right|_{(a_1+c_1,\,a_2)} - \left.\frac{\partial f}{\partial x^1}\right|_a,\ \left.\frac{\partial f}{\partial x^2}\right|_{(a_1+h_1,\,a_2+c_2)} - \left.\frac{\partial f}{\partial x^2}\right|_a \right)\!\begin{pmatrix} h_1 \\ h_2 \end{pmatrix} + \left( \left.\frac{\partial f}{\partial x^1}\right|_a,\ \left.\frac{\partial f}{\partial x^2}\right|_a \right)\!\begin{pmatrix} h_1 \\ h_2 \end{pmatrix},
\]
but since the partials are continuous, the left two terms go to 0, and since the last term is linear in $h$, the quotient in (1.2) goes to 0 as $h \to 0$.

We’ll often talk about smooth functions in this class, which are functions for which all higher-order derivatives exist and are continuous. Thus, they don’t have the problems that one counterexample had.

Since we’re going to be making linear approximations to maps, we should discuss what happens when you perturb linear maps a little bit. Recall that if $L\colon \mathbb{R}^n \to \mathbb{R}^m$ is linear, then its image $\operatorname{Im}(L)$ is a subspace of $\mathbb{R}^m$ and its kernel $\ker(L)$ is a subspace of $\mathbb{R}^n$.

Suppose $n \le m$; then, $L$ is said to have full rank if $\operatorname{rank} L = n$. This is an open condition: every full-rank linear map can be perturbed a little bit and remain full rank. This will be very useful: if a (possibly nonlinear) function’s differential has full rank, then one can say some interesting things about it.
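For instance (a standard linear-algebra illustration), the inclusion $L\colon \mathbb{R} \to \mathbb{R}^2$, $L(x) = (x, 0)$, has full rank $1$; any nearby linear map, with matrix $\begin{pmatrix} 1 + \varepsilon_1 \\ \varepsilon_2 \end{pmatrix}$ for $\varepsilon_1, \varepsilon_2$ small, still has rank $1$, since its single column is nonzero. This is the sense in which full rank is open.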
If $n \ge m$, then full rank means rank $m$. This is once again stable (an open condition): one can write such a linear map as $L = (A \mid B)$, where $A$ is an invertible $m \times m$ matrix, and invertibility is an open condition (since it’s given by the nonvanishing of the determinant, which is a continuous function).

To actually figure out whether a linear map has full rank, write down its matrix and row-reduce it using Gaussian elimination. Then, you can read off a basis for the kernel, determining the free variables and the relations determining the other variables. In general, for a $k$-dimensional subspace of $\mathbb{R}^n$, you can pick $k$ variables arbitrarily, and these force the remaining $n - k$ variables. The point is: the subspace is the graph of a function. Now, we can apply this to more general smooth functions.

Theorem 1.5. Suppose $f\colon \mathbb{R}^n \to \mathbb{R}^m$ is smooth, $a \in \mathbb{R}^n$, and $df|_a$ has full rank.
(1) (Inverse function theorem) If $n = m$, then there is a neighborhood $U$ of $a$ such that $f|_U$ is invertible, with a smooth inverse.
(2) (Implicit function theorem) If $n \ge m$, there is a neighborhood $U$ of $a$ such that $U \cap f^{-1}(f(a))$ is the graph of some smooth function $g\colon \mathbb{R}^{n-m} \to \mathbb{R}^m$ (up to permutation of indices).
(3) (Immersion theorem) If $n \le m$, there is a neighborhood $U$ of $a$ such that $f(U)$ is the graph of a smooth $g\colon \mathbb{R}^n \to \mathbb{R}^{m-n}$.
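As a standard illustration of part (2), take $f\colon \mathbb{R}^2 \to \mathbb{R}$, $f(x, y) = x^2 + y^2$, and $a = (0, 1)$, so $f(a) = 1$. Then $df|_a = (0 \;\; 2)$ has full rank, and near $a$ the level set $f^{-1}(1)$ (the unit circle) is the graph of the smooth function $g(x) = \sqrt{1 - x^2}$, a map $\mathbb{R}^{n-m} = \mathbb{R} \to \mathbb{R}^m = \mathbb{R}$ defined near $0$. Near the point $(1, 0)$, where $df = (2 \;\; 0)$, one instead solves for $x$ in terms of $y$; this is what “up to permutation of indices” allows.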