Notes from Math 538: Ricci flow and the Poincaré conjecture
David Glickenstein
Spring 2009

Contents
1 Introduction

2 Three-manifolds and the Poincaré conjecture
 2.1 Introduction
 2.2 Examples of 3-manifolds
  2.2.1 Spherical manifolds
  2.2.2 Sphere bundles over $S^1$
  2.2.3 Connected sum
 2.3 Idea of the Hamilton-Perelman proof
  2.3.1 Hamilton's first result
  2.3.2 2D case
  2.3.3 General case

3 Background in differential geometry
 3.1 Introduction
 3.2 Basics of tangent bundles and tensor bundles
 3.3 Connections and covariant derivatives
  3.3.1 What is a connection?
  3.3.2 Torsion, compatibility with the metric, and Levi-Civita connection
  3.3.3 Higher derivatives of functions and tensors
 3.4 Curvature

4 Basics of geometric evolutions
 4.1 Introduction
 4.2 Ricci flow
 4.3 Existence/Uniqueness

5 Basics of PDE techniques
 5.1 Introduction
 5.2 The maximum principle
 5.3 Maximum principle on tensors

6 Singularities of Ricci Flow
 6.1 Introduction
 6.2 Finite time singularities
 6.3 Blow ups
 6.4 Convergence and collapsing
 6.5 $\kappa$-noncollapse

7 Ricci flow from energies
 7.1 Gradient flow
 7.2 Ricci flow as a gradient flow
 7.3 Perelman entropy
 7.4 Log-Sobolev inequalities
 7.5 Noncollapsing

8 Ricci flow for attacking geometrization
 8.1 Introduction
 8.2 Finding canonical neighborhoods
 8.3 Canonical neighborhoods
 8.4 How surgery works
 8.5 Some things to prove

9 Reduced distance
 9.1 Introduction
 9.2 Short discussion of $\mathcal{W}$ vs reduced distance
 9.3 $\mathcal{L}$-length and reduced distance
 9.4 Variations of $\mathcal{L}$-length and the distance function
 9.5 Variations of the reduced distance

10 Problems

Preface
These are notes from a topics course on Ricci flow and the Poincaré Conjecture from Spring 2008.
Chapter 1
Introduction
These are notes from a topics course on Ricci flow and the Poincaré Conjecture from Spring 2008.
Chapter 2
Three-manifolds and the Poincaré conjecture
2.1 Introduction
This lecture is mostly taken from Tao's lecture 2. In this lecture we are going to introduce the Poincaré conjecture and the idea of its proof. The rest of the class will go into different parts of this proof in varying amounts of detail. All manifolds will be assumed to be without boundary unless otherwise specified. The Poincaré conjecture is this:
Theorem 1 (Poincaré conjecture) Let $M$ be a compact 3-manifold which is connected and simply connected. Then $M$ is homeomorphic to the 3-sphere $S^3$.

Remark 2 A simply connected manifold is necessarily orientable.
In fact, one can prove a stronger statement, called Thurston's geometrization conjecture, which is quite a bit more complicated but is roughly the following:
Theorem 3 (Thurston's geometrization conjecture) Every 3-manifold $M$ can be cut along spheres and $\pi_1$-essential tori such that each piece can be given one of 8 geometries ($E^3$, $S^3$, $H^3$, $S^2 \times \mathbb{R}$, $H^2 \times \mathbb{R}$, $\widetilde{SL}(2,\mathbb{R})$, $Nil$, $Sol$).

We may go into this conjecture a little more if we have time, but certainly elements of it will come up in the course of these lectures. We will first take a quick look at some examples.
2.2 Examples of 3-manifolds
Here we work in the topological category. It turns out that in three dimensions, the smooth category, the piecewise linear category, and the topological category are the same (e.g., any homeomorphism can be approximated by a diffeomorphism, any topological manifold can be given a smooth structure, etc.).
2.2.1 Spherical manifolds

The most basic 3-manifold is the 3-sphere, $S^3$, which can be constructed in several ways, such as:
- The set of unit vectors in $\mathbb{R}^4$.
- The one-point compactification of $\mathbb{R}^3$.

Using the second definition, it is clear that any loop can be contracted to a point, and so $S^3$ is simply connected. One can also look at the first definition and see that the rotations $SO(4)$ of $\mathbb{R}^4$ act transitively on $S^3$, with stabilizer $SO(3)$, and so $S^3$ is a homogeneous space described by the quotient $SO(4)/SO(3)$. One can also notice that the unit vectors in $\mathbb{R}^4$ can be given the group structure of the unit quaternions, and it is not too hard to see that this group is isomorphic to $SU(2)$, which is the double cover of $SO(3)$.

As more examples of 3-manifolds, it is possible to find finite groups $\Gamma$ acting freely on $S^3$, and to consider the quotients $S^3/\Gamma$. Note that if the action is nontrivial, then these new spaces are not simply connected. One can see this in several ways. The direct method: since the action is nontrivial, there must be $g \in \Gamma$ and $x \in S^3$ such that $gx \neq x$. A path $\gamma$ from $x$ to $gx$ in $S^3$ descends to a loop in $S^3/\Gamma$. If there were a homotopy of that loop to a point, then one could lift it to a homotopy $H : [0,1] \times [0,1] \to S^3$ such that $H(t,0) = \gamma(t)$. But then, looking at $H(1,s)$, we see that $H(1,0) = gx$, $H(1,1) = x$, and $H(1,s) = g'x$ for some $g' \in \Gamma$, for all $s$. Since $\Gamma$ is discrete, this is impossible. A more high-level approach shows that the map $S^3 \to S^3/\Gamma$ is a covering map, and in fact the universal covering map, so $\pi_1(S^3/\Gamma) = \Gamma$ if $\Gamma$ acts effectively. Hence each of these spaces $S^3/\Gamma$ is a different manifold from $S^3$. They are called spherical manifolds. The elliptization conjecture states that spherical manifolds are the only 3-manifolds with finite fundamental group.
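As a concrete illustration of freeness (a small Python aside, not part of the lectures), one can model $S^3$ as the unit quaternions and check that left multiplication by a non-identity element of the quaternion group $Q_8 = \{\pm 1, \pm i, \pm j, \pm k\}$ moves sample points; the quotient $S^3/Q_8$ is one of the spherical manifolds just described:

```python
# An aside (not from the lectures): the unit quaternions give S^3 a group
# structure, and left multiplication by a fixed unit quaternion g satisfies
# g * x = x only when g = 1, so any finite subgroup acts freely. Here we
# check this on sample elements of Q8 = {+-1, +-i, +-j, +-k}.

def qmul(p, q):
    """Hamilton product of quaternions p = a + bi + cj + dk, as 4-tuples."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,
            a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,
            a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,
            a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2)

one = (1.0, 0.0, 0.0, 0.0)
q8 = [(s * a, s * b, s * c, s * d)
      for (a, b, c, d) in [(1.0, 0.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0),
                           (0.0, 0.0, 1.0, 0.0), (0.0, 0.0, 0.0, 1.0)]
      for s in (1.0, -1.0)]

x = (0.5, 0.5, 0.5, 0.5)  # a sample point of S^3
acts_freely = all(qmul(g, x) != x for g in q8 if g != one)
```

Here `qmul` is just the standard Hamilton product; the check `acts_freely` verifies that no non-identity element of $Q_8$ fixes the sample point, in line with the direct argument above.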
2.2.2 Sphere bundles over $S^1$

The next example is to consider $S^2$ bundles over $S^1$, which are the same as $S^2 \times [0,1]$ with $S^2 \times \{0\}$ identified with $S^2 \times \{1\}$ by a homeomorphism. Recall that a homeomorphism
$$\phi : S^2 \to S^2$$
induces an isomorphism on homology (or cohomology),
$$\phi_* : H_2(S^2) \to H_2(S^2),$$
and since $H_2(S^2) = \mathbb{Z}$, there are only two possibilities for $\phi_*$. In fact, these classify the possible maps up to continuous deformation, and so there is an orientation-preserving and an orientation-reversing homeomorphism, and that is all (using $\pi_2$ instead of $H_2$). (See, for instance, Bredon, Cor. 16.4 in Chapter 2.) Notice that these manifolds have a map $\pi : M \to S^1$. It is clear that this induces a surjective homomorphism on fundamental groups $\pi_* : \pi_1(M) \to \pi_1(S^1) = \mathbb{Z}$. The kernel of this map consists of loops which can be deformed to maps only on the $S^2$ factor (constant on the other component), and since $S^2$ is simply connected, this map is an isomorphism. Note that these manifolds are thus not simply connected.
2.2.3 Connected sum

One can also form new manifolds via the connected sum operation. Given two 3-manifolds $M$ and $M'$, one forms the connected sum by removing a disk from each manifold and then identifying the boundaries of the removed disks. We denote this as $M \# M'$. Recall that in 2D, all manifolds can be formed from the sphere and the torus in this way. Now we may consider the class of all compact, connected 3-manifolds (up to homeomorphism) with the connected sum operation. These form a monoid (essentially a group without inverses), with identity $S^3$. Any nontrivial (i.e., non-identity) manifold can be decomposed into pieces by connected sums; i.e., given any $M$, we can write
$$M \cong M_1 \# M_2 \# \cdots \# M_k,$$
where the $M_j$ cannot be written as connected sums any further (this is a theorem of Kneser). We call such a decomposition a prime decomposition and such manifolds $M_j$ prime manifolds. The proof is very similar to that of the fundamental theorem of arithmetic, which gives the prime decomposition of positive integers.
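The analogy with the fundamental theorem of arithmetic can be made vivid with a short sketch (an illustrative aside, not part of the notes): the positive integers under multiplication also form a monoid with identity, and every nontrivial element decomposes into primes.

```python
# By analogy with the connected-sum monoid: the positive integers under
# multiplication form a monoid with identity 1, and every nontrivial element
# factors into primes (elements admitting no nontrivial factorization).

def prime_factors(n):
    """Return the prime decomposition of n as a sorted list, with multiplicity."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors
```

Here the factorization $60 = 2 \cdot 2 \cdot 3 \cdot 5$ plays the role of a prime decomposition $M \cong M_1 \# \cdots \# M_k$, with $1$ playing the role of $S^3$.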
Proposition 4 Suppose $M$ and $M'$ are connected manifolds of the same dimension. Then
1. $M \# M'$ is compact if and only if both $M$ and $M'$ are compact.
2. $M \# M'$ is orientable if and only if both $M$ and $M'$ are orientable.
3. $M \# M'$ is simply connected if and only if both $M$ and $M'$ are simply connected.
We leave the proof as an exercise, but it is not too difficult.
2.3 Idea of the Hamilton-Perelman proof
In order to give the idea, we will introduce a few concepts which will be defined more precisely in later lectures.
Any smooth manifold can be given a Riemannian metric, denoted $g_{ij}$ or $g(\cdot,\cdot)$, which is essentially an inner product (i.e., a symmetric, positive-definite bilinear form) on each tangent space, varying smoothly as the basepoint of the tangent space changes. The Riemannian metric allows one to define angles between two curves and also to measure lengths of (piecewise $C^1$) curves by integrating the tangent vectors of the curve. That is, if $\gamma : [0,a] \to M$ is a curve, we can calculate its length as
$$\ell(\gamma) = \int_0^a g\left(\dot\gamma(t), \dot\gamma(t)\right)^{1/2} dt,$$
where $\dot\gamma(t)$ is the tangent to the curve at $t$. A Riemannian metric induces a metric space structure on $M$, as the distance between two points is given by the infimum of the lengths of all piecewise smooth curves from one point to the other. It is a fact that the metric topology induces the original topology of the manifold.

The main idea is to deform any Riemannian metric to a standard one. This is the idea of "geometrizing." How does one choose the deformation? R. Hamilton first proposed to deform by an equation called the Ricci flow, which is a partial differential equation defined by
$$\frac{\partial}{\partial t} g_{ij} = -2 R_{ij} = -2 \operatorname{Rc}(g_{ij}),$$
where $t$ is an extra parameter (not related to the original coordinates, so $g_{ij} = g_{ij}(t,x)$, where $x$ are the coordinates) and $R_{ij} = \operatorname{Rc}(g_{ij})$ is the Ricci curvature, a second-order differential operator on the Riemannian metric. That means that the Ricci flow equation is a partial differential equation on the Riemannian metric. In coordinates, it roughly looks like
$$\frac{\partial}{\partial t} g_{ij} = 2 g^{k\ell} \frac{\partial^2}{\partial x^k \partial x^\ell} g_{ij} + F\left(g_{ij}, \partial g_{ij}\right),$$
where $g^{k\ell}$ is the inverse matrix of $g_{ij}$ and $F$ is a function depending only on the metric and first derivatives of the metric.
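To get a feel for the Ricci flow equation, it is worth recording the one solution computable by hand: on the round $n$-sphere of radius $r$ one has $\operatorname{Rc} = \frac{n-1}{r^2}\, g$, so for $g(t) = r(t)^2\, g_{\mathrm{round}}$ the flow reduces to the ODE $\frac{d}{dt} r^2 = -2(n-1)$. A minimal sketch (a numerical aside, not part of the notes):

```python
# The simplest exact solution of the Ricci flow (an aside, not from the notes):
# on the round n-sphere of radius r one has Rc = ((n - 1) / r^2) g, so writing
# g(t) = r(t)^2 g_round, the flow dg/dt = -2 Rc reduces to the ODE
#   d(r^2)/dt = -2 (n - 1),  i.e.  r(t)^2 = r0^2 - 2 (n - 1) t,
# and the sphere shrinks homothetically to a point at T = r0^2 / (2 (n - 1)).

def radius_squared(r0, n, t):
    """Squared radius of the round n-sphere at time t under Ricci flow."""
    return r0 ** 2 - 2 * (n - 1) * t

def extinction_time(r0, n):
    """Time at which the round n-sphere of initial radius r0 disappears."""
    return r0 ** 2 / (2 * (n - 1))

T = extinction_time(1.0, 3)  # a unit round 3-sphere vanishes at t = 1/4
```

So a unit round $S^3$ shrinks homothetically and disappears at time $1/4$; this is the simplest instance of the finite-time behavior that will reappear later in the discussion of singularities and extinction.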
2.3.1 Hamilton's first result

The idea is that as the metric evolves, its curvature becomes more and more uniform. It was shown in Hamilton's landmark 1982 paper that
Theorem 5 (Hamilton) Given a Riemannian 3-manifold $(M, g_0)$ with positive Ricci curvature, the Ricci flow with $g(0) = g_0$ exists on a maximal time interval $[0, T)$. Furthermore, the Ricci curvature of the metrics $g(t)$ becomes increasingly uniform as $t \to T$. More precisely,
$$R_{ij}(t) - \tfrac{1}{3}\, \bar{R}(t)\, g_{ij}(t),$$
where $\bar{R}$ is the average scalar curvature, converges uniformly to zero as $t \to T$.

From this, one can easily show that a rescaling of the metric converges to the round sphere.
2.3.2 2D case

In the 2D case it can be shown that any compact, orientable Riemannian manifold converges under a renormalized Ricci flow (renormalized by rescaling the metric and rescaling time) to a constant curvature metric. This is primarily due to Hamilton, with one case finished by B. Chow. It is possible to use this method to prove the uniformization theorem, which states that any compact, orientable Riemannian manifold can be conformally deformed to a metric with constant curvature. (The original proofs of Hamilton and Chow use the uniformization theorem, but a recent article by Chen, Lu, and Tian shows how to avoid that.)
2.3.3 General case

Hamilton introduced a program to study all 3-manifolds using the Ricci flow. It was discovered quite early that the Ricci flow may develop singularities, even in the case of a sphere, if the Ricci curvature is not positive. An example is the so-called neck pinch singularity. Hamilton's idea was to do surgery at these singularities, then continue the flow, and keep doing this until no more surgeries are necessary. Perelman's work describes what happens to the Ricci flow near a singularity and also how to perform the surgery. The new flow is called Ricci flow with surgery. The main result of Perelman is the following.
Theorem 6 (Existence of Ricci flow with surgery) Let $(M, g)$ be a compact, orientable Riemannian 3-manifold. Then there exists a Ricci flow with surgery $t \mapsto (M(t), g(t))$ for all $t \in [0, \infty)$ and a closed set $T \subset [0, \infty)$ of surgery times such that:
1. (Initial data) $M(0) = M$, $g(0) = g$.
2. (Ricci flow) If $I$ is any connected component of $[0,\infty) \setminus T$ (and thus an interval), then $t \mapsto (M(t), g(t))$ is the Ricci flow on $I$ (one can close this interval at the left endpoint if one wishes).
3. (Topological compatibility) If $t \in T$ and $\varepsilon > 0$ is sufficiently small, then we know the topological relationship between $M(t - \varepsilon)$ and $M(t)$.

4. (Geometric compatibility) For each $t \in T$, the metric $g(t)$ on $M(t)$ is related to a certain limit of the metrics $g(t - \varepsilon)$ on $M(t - \varepsilon)$ by a certain surgery procedure.
We can express the topological compatibility more precisely. We have that $M(t-\varepsilon)$ is homeomorphic to the connected sum of finitely many connected components of $M(t)$, together with a finite number of spherical space forms (spherical manifolds), copies of $\mathbb{RP}^3 \# \mathbb{RP}^3$, and copies of $S^2 \times S^1$. Furthermore, each connected component of $M(t)$ is used in the connected sum decomposition of exactly one component of $M(t-\varepsilon)$.
Remark 7 The case of $\mathbb{RP}^3 \# \mathbb{RP}^3$ is interesting in that it is apparently the only nonprime 3-manifold which admits a geometric structure (i.e., is covered by a model geometry; it is a quotient of $S^2 \times \mathbb{R}$; this does not contradict our argument above because it is not a sphere bundle over $S^1$). I have seen this mentioned in several places, but I do not have a reference.
Remark 8 Morgan-Tian and Tao treat a more general situation where nonorientable manifolds are allowed. This adds some extra technicalities which we will avoid in this class.
Existence alone is not enough to prove the Poincaré conjecture. One also needs that the surgery times are discrete and that the flow shrinks everything away in finite time.
Theorem 9 (Discrete surgery times) Let $t \mapsto (M(t), g(t))$ be a Ricci flow with surgery starting with an orientable manifold $M(0)$. Then the set $T$ of surgery times is discrete. In particular, any compact time interval contains a finite number of surgeries.
Theorem 10 (Finite time extinction) Let $(M, g)$ be a compact 3-manifold which is simply connected and let $t \mapsto (M(t), g(t))$ be an associated Ricci flow with surgery. Then $M(t)$ is empty for sufficiently large $t$.
With these theorems, one can conclude the Poincaré conjecture in the following way. Given $M$ a simply connected, connected, compact Riemannian manifold, associate to it a Ricci flow with surgery. It has finite extinction time, and hence finitely many surgery times. Now one can use the topological compatibility to work backwards and rebuild the manifold, which says that $M$ is the connected sum of finitely many spherical space forms, copies of $\mathbb{RP}^3 \# \mathbb{RP}^3$, and copies of $S^2 \times S^1$. But since $M$ is simply connected, every piece of the connected sum must be simply connected, and so $M$ must be a sphere.

Chapter 3
Background in differential geometry
3.1 Introduction
We will try to get as quickly as possible to a point where we can do some geometric analysis on Riemannian spaces. One should look at Tao’s lecture 0, though I will not follow it too closely.
3.2 Basics of tangent bundles and tensor bundles
Recall that for a smooth manifold $M$, the tangent bundle can be defined in essentially three different ways (here the $(U_i, \phi_i)$ are coordinate charts):
- $TM = \bigsqcup_i \left(U_i \times \mathbb{R}^n\right)/\sim$, where for $(x,v) \in U_i \times \mathbb{R}^n$ and $(y,w) \in U_j \times \mathbb{R}^n$ we have $(x,v) \sim (y,w)$ if and only if $y = \phi_j \circ \phi_i^{-1}(x)$ and $w = d\left(\phi_j \circ \phi_i^{-1}\right)_x(v)$.
- $T_pM = \left\{\text{paths } \gamma : (-\varepsilon, \varepsilon) \to M \text{ such that } \gamma(0) = p\right\}/\sim$, where $\gamma \sim \sigma$ if $(\phi_i \circ \gamma)'(0) = (\phi_i \circ \sigma)'(0)$ for every $i$ such that $p \in U_i$; then $TM = \bigsqcup_{p \in M} T_pM$.

- $T_pM$ is the set of derivations of germs at $p$, i.e., the set of linear functionals $X$ on the germs at $p$ such that $X(fg) = X(f)\, g(p) + f(p)\, X(g)$ for germs $f, g$ at $p$; then $TM = \bigsqcup_{p \in M} T_pM$.

One can define the cotangent bundle by essentially taking the dual of $T_pM$ at each point, which we call $T_p^*M$, and taking the disjoint union of these to get the cotangent bundle $T^*M$. One could also use an analogue of the first definition, where the only difference is that instead of using the vector space $\mathbb{R}^n$, one uses
its dual, and the equivalence takes into account that the dual space pulls back rather than pushes forward. Both of these bundles are vector bundles. One can also form the tensor bundle of two vector bundles by replacing the fiber over a point by the tensor product of the fibers over the same point, e.g.,
$$TM \otimes T^*M = \bigsqcup_{p \in M} T_pM \otimes T_p^*M.$$
Note that there are canonical isomorphisms of tensor products of vector spaces, such as: $V \otimes V^*$ is isomorphic to the endomorphisms of $V$. Note the difference between bilinear forms ($V^* \otimes V^*$), endomorphisms ($V \otimes V^*$), and bivectors ($V \otimes V$). It is important to understand that these bundles are global objects, but they will often be considered in coordinates. Given coordinates $x = \left(x^i\right)$ and a point $p$ in the coordinate patch, there is a basis $\frac{\partial}{\partial x^1}\big|_p, \ldots, \frac{\partial}{\partial x^n}\big|_p$ for $T_pM$ and a dual basis $dx^1\big|_p, \ldots, dx^n\big|_p$ for $T_p^*M$. The generalization of the first definition above gives the idea of how one considers the trivializations of the bundle in a coordinate patch, and how the patches are linked together. Specifically, if $x$ and $y$ give different coordinates, then for a point on the tensor bundle one has
$$T^{ij\cdots k}_{ab\cdots c}(x)\, \frac{\partial}{\partial x^i} \otimes \frac{\partial}{\partial x^j} \otimes \cdots \otimes \frac{\partial}{\partial x^k} \otimes dx^a \otimes dx^b \otimes \cdots \otimes dx^c$$
$$= T^{ij\cdots k}_{ab\cdots c}(x(y))\, \frac{\partial y^{\alpha}}{\partial x^i} \frac{\partial y^{\beta}}{\partial x^j} \cdots \frac{\partial y^{\gamma}}{\partial x^k}\, \frac{\partial x^a}{\partial y^{\delta}} \frac{\partial x^b}{\partial y^{\epsilon}} \cdots \frac{\partial x^c}{\partial y^{\zeta}}\, \frac{\partial}{\partial y^{\alpha}} \otimes \frac{\partial}{\partial y^{\beta}} \otimes \cdots \otimes \frac{\partial}{\partial y^{\gamma}} \otimes dy^{\delta} \otimes dy^{\epsilon} \otimes \cdots \otimes dy^{\zeta},$$
where technically everything should be evaluated at $p$ (but, as we shall see, one can consider this for all points in a neighborhood, in which case it is an equation of sections). Recall that a section of a bundle $\pi : E \to B$ is a function $f : B \to E$ such that $\pi \circ f$ is the identity on the base manifold $B$. A local section may only be defined on an open set in $B$. On the tangent bundle, sections are called vector fields, and on the cotangent bundle, sections are called forms (or 1-forms). On a tensor bundle, sections are called tensors. Note that the $\frac{\partial}{\partial x^i}$ form a basis for the vector fields in the coordinate patch, and the $dx^i$ form a basis for the local 1-forms. The set of sections of $E$ is often written as $\Gamma(E)$ or as $C^{\infty}(E)$ (if we are considering smooth sections). Now the equation above makes sense as an equation of tensors (sections of a tensor bundle). Often, a tensor will be denoted simply as
$$T^{ij\cdots k}_{ab\cdots c}.$$
Note that if we change coordinates, we have a different representation $\tilde{T}$ of the same tensor. The two are related by
$$\tilde{T}^{\alpha\beta\cdots\gamma}_{\delta\epsilon\cdots\zeta} = T^{ij\cdots k}_{ab\cdots c}(x(y))\, \frac{\partial y^{\alpha}}{\partial x^i} \cdots \frac{\partial y^{\gamma}}{\partial x^k}\, \frac{\partial x^a}{\partial y^{\delta}} \cdots \frac{\partial x^c}{\partial y^{\zeta}}.$$
One can also take subbundles or quotients of a tensor bundle. In particular, we may consider the bundle of symmetric 2-tensors or of anti-symmetric tensors (sections of the latter are called differential forms). In particular, we have the Riemannian metric tensor.
Definition 11 A Riemannian metric $g$ is a 2-tensor (i.e., a section of $T^*M \otimes T^*M$) which is

- symmetric, i.e., $g(X,Y) = g(Y,X)$ for all $X, Y \in T_pM$, and

- positive definite, i.e., $g(X,X) \geq 0$ for all $X \in T_pM$, and $g(X,X) = 0$ if and only if $X = 0$.
Often, we will denote the metric as $g_{ij}$, which is shorthand for $g_{ij}\, dx^i\, dx^j$, where $dx^i\, dx^j = \frac{1}{2}\left(dx^i \otimes dx^j + dx^j \otimes dx^i\right)$. Note that if $g_{ij} = \delta_{ij}$ (the Kronecker delta) then
$$\delta_{ij}\, dx^i\, dx^j = \left(dx^1\right)^2 + \cdots + \left(dx^n\right)^2.$$
One can invariantly define the trace of an endomorphism (the trace of a matrix), which is independent of the coordinate change, since
$$\sum_{\alpha} \tilde{T}^{\alpha}_{\alpha} = \sum_{\alpha} \sum_{a,b} T^{a}_{b}\, \frac{\partial y^{\alpha}}{\partial x^a} \frac{\partial x^b}{\partial y^{\alpha}} = \sum_{a,b} T^{a}_{b}\, \delta^{b}_{a} = \sum_{a} T^{a}_{a}.$$
In fact, for any complicated tensor, one can take the trace in one up index and one down index. This is called contraction. Usually, when there is a repeated index, one up and one down, we do not write the sum. This is called the Einstein summation convention. The above sum would be written
$$T^{a}_{a} = \operatorname{tr} T.$$
It is understood that this is an equation of functions. We cannot contract two up indices or two down indices, since that is not independent of coordinate changes (try it!). However, now that we have the Riemannian metric, we can use it to "lower an index" and then trace, so we get
$$T^{ab}\, g_{ba} = T^{a}{}_{a}.$$
In order to raise an index, we need the dual of the Riemannian metric, which is $g^{ab}$, defined so that $g^{ab} g_{bc} = \delta^{a}_{c}$ (so $g^{ab}$ is the inverse matrix of $g_{ab}$). Then we can use $g^{ab}$ to raise indices and contract if necessary. Occasionally, an extended Einstein convention is used, where all repeated indices are summed, with the understanding that the Riemannian metric is used to raise or lower indices when necessary, e.g.,
$$T_{aa} = T_{ab}\, g^{ab}.$$
Since we will often be changing the Riemannian metric, it becomes important to remember that the metric is there when the extended Einstein convention is used.
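The coordinate invariance of the trace can be checked on a toy example (a Python aside, not part of the notes): a (1,1)-tensor transforms by conjugation with the Jacobian matrix of the coordinate change, which leaves the trace unchanged.

```python
# A toy check (not from the notes) that the trace of a (1,1)-tensor is
# coordinate invariant: under a change of coordinates the component matrix
# transforms by conjugation with the Jacobian, T -> P^{-1} T P, which leaves
# the trace unchanged. (The specific matrices are hypothetical examples.)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(P):
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    return [[P[1][1] / det, -P[0][1] / det],
            [-P[1][0] / det, P[0][0] / det]]

def trace(A):
    return A[0][0] + A[1][1]

T = [[1.0, 2.0], [3.0, 4.0]]   # components T^a_b in coordinates x
P = [[2.0, 1.0], [1.0, 1.0]]   # Jacobian of a hypothetical coordinate change
T_new = matmul(inv2(P), matmul(T, P))  # components in the new coordinates
```

Both `trace(T)` and `trace(T_new)` come out to 5; trying the same experiment with two lower indices (a bilinear form) shows that such a "trace" is not invariant, which is the point of the "try it!" above.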
3.3 Connections and covariant derivatives

3.3.1 What is a connection?

A covariant derivative is a particular way of differentiating vector fields. Why do we need a new way to differentiate vector fields? Here is the idea. Suppose we want to give a notion of parallel vectors. In $\mathbb{R}^n$, we know that if we take vector fields with constant coefficients, those vectors are parallel at different points. That is, the vectors $\frac{\partial}{\partial x^1}\big|_{(0,0)} + 2\,\frac{\partial}{\partial x^2}\big|_{(0,0)}$ and $\frac{\partial}{\partial x^1}\big|_{(1,1)} + 2\,\frac{\partial}{\partial x^2}\big|_{(1,1)}$ are parallel. In fact, we could say that the vector field $\frac{\partial}{\partial x^1} + 2\,\frac{\partial}{\partial x^2}$ is parallel, since its vectors at any two points are parallel. One might say this is because the coefficients of the vector field are constant (not functions of $x^1$ and $x^2$). However, this notion is not invariant under a change of coordinates. Suppose we consider the new coordinates $\left(y^1, y^2\right) = \left(x^1, \left(x^2\right)^2\right)$ away from $x^2 = 0$ (where this is not a diffeomorphism). Then the vector field in the new coordinates is
$$\frac{\partial y^i}{\partial x^1}\,\frac{\partial}{\partial y^i} + 2\,\frac{\partial y^j}{\partial x^2}\,\frac{\partial}{\partial y^j} = \frac{\partial}{\partial y^1} + 4x^2\,\frac{\partial}{\partial y^2} = \frac{\partial}{\partial y^1} + 4\sqrt{y^2}\,\frac{\partial}{\partial y^2}.$$
The coefficients are not constant, but the vector field should still be parallel (we have only changed coordinates, so it is the same vector field)! So we need a notion of parallel vector field that is independent of coordinate changes (i.e., covariant).

Remember that we want to generalize the notion that a vector field has constant coefficients. Let $X = X^i\,\frac{\partial}{\partial x^i}$ be a vector field in a coordinate patch. Roughly speaking, we want to generalize the notion that $\frac{\partial X^i}{\partial x^j} = 0$ for all $i$ and $j$. The problem occurred because $\nabla_{\frac{\partial}{\partial x^1}} \frac{\partial}{\partial x^1}$ is different in different coordinates. Thus we need to specify what this is. Certainly, since the $\frac{\partial}{\partial x^i}$ form a basis, we must get a linear combination of these, so we take
$$\nabla_{\frac{\partial}{\partial x^i}} \frac{\partial}{\partial x^j} = \Gamma^{k}_{ij}\, \frac{\partial}{\partial x^k}$$
for some functions $\Gamma^{k}_{ij}$. These symbols are called Christoffel symbols. To make sense of this on a vector field, we must have
$$\nabla_i(X) = \nabla_{\frac{\partial}{\partial x^i}}\left(X^j\,\frac{\partial}{\partial x^j}\right) = \frac{\partial X^j}{\partial x^i}\,\frac{\partial}{\partial x^j} + X^j\,\Gamma^{k}_{ij}\,\frac{\partial}{\partial x^k} = \left(\frac{\partial X^k}{\partial x^i} + X^j\,\Gamma^{k}_{ij}\right)\frac{\partial}{\partial x^k}.$$
Notice the Leibniz rule (product rule). One can now define, for any vector field $Y = Y^i\,\frac{\partial}{\partial x^i}$,
$$\nabla_Y X = Y^i\,\nabla_{\frac{\partial}{\partial x^i}} X = Y^i\left(\nabla_i X\right).$$
This operation is called the covariant derivative. One now defines the $\Gamma^{k}_{ij}$ in such a way that the covariant derivative transforms appropriately under changes of coordinates. This gives a global object called a connection. The connection can be defined axiomatically as follows.
Definition 12 A connection on a vector bundle $E \to B$ is a map
$$\nabla : \Gamma(TB) \times \Gamma(E) \to \Gamma(E),$$
$$(X, \sigma) \mapsto \nabla_X \sigma,$$
satisfying:

- tensoriality (i.e., $C^{\infty}(B)$-linearity) in the first component, i.e., $\nabla_{fX+Y}\,\sigma = f\,\nabla_X \sigma + \nabla_Y \sigma$ for any function $f$ and vector fields $X, Y$;

- the derivation property in the second component, i.e., $\nabla_X(f\sigma) = X(f)\,\sigma + f\,\nabla_X \sigma$;

- $\mathbb{R}$-linearity in the second component, i.e., $\nabla_X(a\sigma + \tau) = a\,\nabla_X(\sigma) + \nabla_X(\tau)$ for $a \in \mathbb{R}$.

We will consider connections primarily on the tangent bundle and tensor bundles. Note that a connection $\nabla$ on $TM$ induces connections on all tensor bundles (also denoted $\nabla$) in the following way:
- For a function $f$ and vector field $X$, we define $\nabla_X f = Xf$.

- For vector fields $X, Y$ and a 1-form $\omega$, we use the product rule to derive
$$\nabla_X(\omega(Y)) = X(\omega(Y)) = (\nabla_X \omega)(Y) + \omega(\nabla_X Y)$$
and thus
$$(\nabla_X \omega)(Y) = X(\omega(Y)) - \omega(\nabla_X Y).$$
In particular, the Christoffel symbols for the induced connection on $T^*M$ are the negatives of the Christoffel symbols of $TM$, i.e.,
$$\nabla_{\frac{\partial}{\partial x^i}}\, dx^j = -\Gamma^{j}_{ik}\, dx^k,$$
where the $\Gamma^{k}_{ij}$ are the Christoffel symbols for the connection $\nabla$ on $TM$.

- For a tensor product, one defines the connection using the product rule, e.g.,
$$\nabla_X(Y \otimes \omega) = (\nabla_X Y) \otimes \omega + Y \otimes \nabla_X \omega$$
for vector fields $X, Y$ and a 1-form $\omega$.
Remark 13 The Christoffel symbols are not the components of a tensor. Note that if we change coordinates from $x$ to $\tilde{x}$, we have
$$\nabla_{\frac{\partial}{\partial x^i}} \frac{\partial}{\partial x^j} = \nabla_{\frac{\partial \tilde{x}^k}{\partial x^i}\frac{\partial}{\partial \tilde{x}^k}}\left(\frac{\partial \tilde{x}^{\ell}}{\partial x^j}\,\frac{\partial}{\partial \tilde{x}^{\ell}}\right) = \frac{\partial^2 \tilde{x}^{\ell}}{\partial x^i \partial x^j}\,\frac{\partial}{\partial \tilde{x}^{\ell}} + \frac{\partial \tilde{x}^k}{\partial x^i}\frac{\partial \tilde{x}^{\ell}}{\partial x^j}\,\nabla_{\frac{\partial}{\partial \tilde{x}^k}} \frac{\partial}{\partial \tilde{x}^{\ell}},$$
which means that
$$\Gamma^{k}_{ij} = \tilde{\Gamma}^{m}_{p\ell}\,\frac{\partial \tilde{x}^p}{\partial x^i}\frac{\partial \tilde{x}^{\ell}}{\partial x^j}\frac{\partial x^k}{\partial \tilde{x}^m} + \frac{\partial^2 \tilde{x}^{\ell}}{\partial x^i \partial x^j}\,\frac{\partial x^k}{\partial \tilde{x}^{\ell}}.$$

One final comment. Recall that we motivated the connection by considering parallel vector fields. The connection gives us a way of taking a vector at a point and translating it along a curve $\gamma$ so that the induced vector field $X$ along the curve is parallel (i.e., $\nabla_{\dot{\gamma}} X = 0$ along $\gamma$). This is called parallel translation. Parallel vector fields allow one to rewrite derivatives in coordinates; that is, if $X = X^i\,\frac{\partial}{\partial x^i}$ is parallel, then
$$\frac{\partial X^i}{\partial x^j} = -X^k\,\Gamma^{i}_{jk}.$$

3.3.2 Torsion, compatibility with the metric, and the Levi-Civita connection

There is a unique connection naturally associated with the Riemannian metric, called the Riemannian connection or Levi-Civita connection. It satisfies two properties:
- It is torsion-free (also called symmetric).

- It is compatible with the metric.

Compatibility with the metric is the easier one to understand. We want the connection to behave well with respect to differentiating orthogonal vector fields. Being compatible with the metric is the same as
$$X(g(Y,Z)) = g(\nabla_X Y, Z) + g(Y, \nabla_X Z).$$
Note that normally there would be an extra term, $(\nabla_X g)(Y,Z)$, so compatibility with the metric means that this term is zero, i.e., $\nabla g = 0$, where $g$ is considered as a 2-tensor.

Torsion-free means that the torsion tensor $\tau$, given by
$$\tau(X,Y) = \nabla_X Y - \nabla_Y X - [X,Y],$$
vanishes. (One can check that this is a tensor by verifying that $\tau(fX, Y) = \tau(X, fY) = f\,\tau(X,Y)$ for any function $f$.) It is easy to see that in coordinates the torsion tensor is given by
$$\tau^{k}_{ij} = \Gamma^{k}_{ij} - \Gamma^{k}_{ji},$$
which indicates why torsion-free is also called symmetric.

Tao gives a short motivation for the concept of torsion-free. Consider an infinitesimal parallelogram in the plane consisting of: a point $x$; the flow of $x$ along a vector field $V$ to a point we will call $x + tV$; the flow of $x$ along a vector field $W$ to a point we will call $x + tW$; and a fourth point, which we will reach in two ways: (1) go to $x + tV$ and then flow along the parallel translation of $W$ for a distance $t$, and (2) go to $x + tW$ and then flow along the parallel translation of $V$ for a distance $t$. Note that using method (1) (and the parallel translation equation above), we get that the point is
$$x + tV + tW - t^2\, V^i W^j\, \Gamma^{k}_{ji}\, \frac{\partial}{\partial x^k} + O\!\left(t^3\right).$$
Using method (2), we get instead
$$x + tV + tW - t^2\, W^i V^j\, \Gamma^{k}_{ji}\, \frac{\partial}{\partial x^k} + O\!\left(t^3\right).$$
Thus the two methods agree up to $O\!\left(t^3\right)$ only if $\Gamma^{k}_{ji} = \Gamma^{k}_{ij}$. Doing this around every infinitesimal parallelogram gives the equivalence of these two viewpoints. Here is another:
Proposition 14 A connection is torsion-free if and only if for any point $p \in M$ there are coordinates $x$ around $p$ such that $\Gamma^{k}_{ij}(p) = 0$.
Proof. Suppose one can always find coordinates such that $\Gamma^{k}_{ij}(p) = 0$. Then clearly the torsion vanishes at that point. But since the torsion is a tensor, we can calculate it in any coordinates, so at each point we find that the torsion vanishes. Now suppose the torsion tensor vanishes and let $x$ be a coordinate system around $p$. Consider the new coordinates
$$\tilde{x}^i(q) = x^i(q) - x^i(p) + \frac{1}{2}\,\Gamma^{i}_{jk}(p)\left(x^j(q) - x^j(p)\right)\left(x^k(q) - x^k(p)\right).$$
Then notice that
$$\frac{\partial \tilde{x}^i}{\partial x^j} = \delta^{i}_{j} + \frac{1}{2}\,\Gamma^{i}_{j\ell}(p)\left(x^{\ell} - x^{\ell}(p)\right) + \frac{1}{2}\,\Gamma^{i}_{kj}(p)\left(x^k - x^k(p)\right)$$
and so
$$\frac{\partial \tilde{x}^i}{\partial x^j}(p) = \delta^{i}_{j}.$$
Thus $\tilde{x}$ gives a coordinate patch in some neighborhood of $p$. Moreover, we have that
$$\frac{\partial^2 \tilde{x}^i}{\partial x^j \partial x^k} = \Gamma^{i}_{jk}(p).$$
One can now verify that at $p$,
$$\Gamma^{k}_{ij}(p) = \tilde{\Gamma}^{m}_{p\ell}\,\frac{\partial \tilde{x}^p}{\partial x^i}\frac{\partial \tilde{x}^{\ell}}{\partial x^j}\frac{\partial x^k}{\partial \tilde{x}^m} + \frac{\partial^2 \tilde{x}^{\ell}}{\partial x^i \partial x^j}\,\frac{\partial x^k}{\partial \tilde{x}^{\ell}} = \tilde{\Gamma}^{k}_{ij}(p) + \Gamma^{k}_{ij}(p),$$
and hence $\tilde{\Gamma}^{k}_{ij}(p) = 0$.
The Riemannian connection is the unique connection which is both torsion-free and compatible with the metric. One can use these two properties to derive a formula for it. In coordinates, one finds that the Riemannian connection has the Christoffel symbols
$$\Gamma^{k}_{ij} = \frac{1}{2}\,g^{k\ell}\left(\frac{\partial}{\partial x^i} g_{j\ell} + \frac{\partial}{\partial x^j} g_{i\ell} - \frac{\partial}{\partial x^{\ell}} g_{ij}\right).$$
One can easily verify that this connection has the properties expressed. Note that the $g_{j\ell}$, etc., in the formula are not tensors but functions. This is not a tensor equation, since $\Gamma^{k}_{ij}$ is not a tensor. Also note that it is very important that this is an expression in coordinates (i.e., that $\left[\frac{\partial}{\partial x^i}, \frac{\partial}{\partial x^j}\right] = 0$).

3.3.3 Higher derivatives of functions and tensors

One of the important reasons for having a connection is that it allows us to take higher derivatives. Note that one can take the first derivative of a function without a connection; it is defined as
$$df = \nabla f, \qquad df(X) = \nabla_X f = X(f), \qquad df = \frac{\partial f}{\partial x^i}\, dx^i.$$
One can also raise the index to get the gradient, which is
$$\operatorname{grad}(f) = \nabla^i f\,\frac{\partial}{\partial x^i} = g^{ij}\,\frac{\partial f}{\partial x^j}\,\frac{\partial}{\partial x^i}.$$
However, to take the next derivative, one needs a connection. The second derivative, or Hessian, of a function is
$$\operatorname{Hess}(f) = \nabla^2 f = \nabla\, df,$$
so that
$$\nabla^2 f = \left(\nabla_i\, df\right) \otimes dx^i = \nabla_i\!\left(\frac{\partial f}{\partial x^j}\, dx^j\right) \otimes dx^i = \left(\frac{\partial^2 f}{\partial x^i \partial x^j}\, dx^j - \frac{\partial f}{\partial x^j}\,\Gamma^{j}_{ik}\, dx^k\right) \otimes dx^i = \left(\frac{\partial^2 f}{\partial x^i \partial x^j} - \Gamma^{k}_{ij}\,\frac{\partial f}{\partial x^k}\right) dx^j \otimes dx^i.$$
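The coordinate formula just derived can be sanity-checked in a concrete chart (an aside, not part of the notes). In polar coordinates $(r, \theta)$ on the plane, $g = dr^2 + r^2\, d\theta^2$, and the nonzero Christoffel symbols of this flat metric are $\Gamma^{r}_{\theta\theta} = -r$ and $\Gamma^{\theta}_{r\theta} = \Gamma^{\theta}_{\theta r} = 1/r$. For $f = x^2 + y^2 = r^2$, the Cartesian Hessian is $2\,\mathrm{Id}$, so in polar coordinates we should find $\operatorname{Hess} f = 2g$:

```python
# A quick check (not from the notes) of the coordinate Hessian formula,
#   (Hess f)_{ij} = d_i d_j f - Gamma^k_{ij} d_k f,
# in polar coordinates (r, theta) on the plane, where g = diag(1, r^2), with
# nonzero Christoffel symbols Gamma^r_{theta theta} = -r and
# Gamma^theta_{r theta} = Gamma^theta_{theta r} = 1/r. For f = r^2 the
# Cartesian Hessian is 2*Id, so in polar coordinates we expect Hess f = 2g.

def hessian_of_r_squared(r):
    df = [2.0 * r, 0.0]                  # (d_r f, d_theta f) for f = r^2
    d2f = [[2.0, 0.0], [0.0, 0.0]]       # coordinate second derivatives
    # Christoffel symbols G[k][i][j] = Gamma^k_{ij} of the flat metric
    G = [[[0.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 0.0]]]
    G[0][1][1] = -r                      # Gamma^r_{theta theta}
    G[1][0][1] = G[1][1][0] = 1.0 / r    # Gamma^theta_{r theta}
    return [[d2f[i][j] - sum(G[k][i][j] * df[k] for k in range(2))
             for j in range(2)] for i in range(2)]

H = hessian_of_r_squared(2.0)  # equals [[2, 0], [0, 8]] = 2 g at r = 2
```

Tracing with $g^{ij} = \operatorname{diag}(1, 1/r^2)$ gives $2 + \frac{1}{r^2}(2r^2) = 4$, the Euclidean Laplacian of $x^2 + y^2$, as it must.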
Often one will write the Hessian as
$$\nabla^2_{ij} f = \nabla_i \nabla_j f = \frac{\partial^2 f}{\partial x^i \partial x^j} - \Gamma^{k}_{ij}\,\frac{\partial f}{\partial x^k}.$$
Note that if the connection is symmetric, then the Hessian of a function is symmetric in the usual sense. The trace of the Hessian, $\triangle f = g^{ij}\,\nabla^2_{ij} f$, is called the Laplacian, and we will use it quite a bit.

We also may use the connection to compute the acceleration of a curve. The velocity of a curve $\gamma$ is $\dot{\gamma}$, which does not need a connection, but to compute the acceleration, $\nabla_{\dot{\gamma}}\,\dot{\gamma}$, we need the connection (one also sometimes sees the equivalent notation $D\dot{\gamma}/dt$). A curve with zero acceleration is called a geodesic.

Finally, given any tensor $T$, one can use the connection to form a new tensor $\nabla T$, which has one extra down index.

3.4 Curvature
One can define the curvature of any connection on a bundle $E \to B$ in the following way:
$$R : \Gamma(TM) \times \Gamma(TM) \times \Gamma(E) \to \Gamma(E),$$
$$R(X,Y)\,\sigma = \nabla_X \nabla_Y\, \sigma - \nabla_Y \nabla_X\, \sigma - \nabla_{[X,Y]}\, \sigma.$$
We will consider the curvature of the Riemannian connection on the tangent bundle. One can easily see that in coordinates the curvature is a tensor, denoted by
$$\left(\nabla_i \nabla_j - \nabla_j \nabla_i\right)\frac{\partial}{\partial x^k} = R^{\,\ell}_{ijk}\,\frac{\partial}{\partial x^{\ell}},$$
which gives us that
$$\nabla_i \nabla_j\,\frac{\partial}{\partial x^k} - \nabla_j \nabla_i\,\frac{\partial}{\partial x^k} = \nabla_i\!\left(\Gamma^{\ell}_{jk}\,\frac{\partial}{\partial x^{\ell}}\right) - \nabla_j\!\left(\Gamma^{\ell}_{ik}\,\frac{\partial}{\partial x^{\ell}}\right)$$
$$= \frac{\partial \Gamma^{\ell}_{jk}}{\partial x^i}\,\frac{\partial}{\partial x^{\ell}} + \Gamma^{m}_{jk}\,\Gamma^{\ell}_{im}\,\frac{\partial}{\partial x^{\ell}} - \frac{\partial \Gamma^{\ell}_{ik}}{\partial x^j}\,\frac{\partial}{\partial x^{\ell}} - \Gamma^{m}_{ik}\,\Gamma^{\ell}_{jm}\,\frac{\partial}{\partial x^{\ell}}$$
$$= \left(\frac{\partial}{\partial x^i}\Gamma^{\ell}_{jk} - \frac{\partial}{\partial x^j}\Gamma^{\ell}_{ik} + \Gamma^{m}_{jk}\,\Gamma^{\ell}_{im} - \Gamma^{m}_{ik}\,\Gamma^{\ell}_{jm}\right)\frac{\partial}{\partial x^{\ell}}.$$
So the curvature tensor is
$$R^{\,\ell}_{ijk} = \frac{\partial}{\partial x^i}\Gamma^{\ell}_{jk} - \frac{\partial}{\partial x^j}\Gamma^{\ell}_{ik} + \Gamma^{m}_{jk}\,\Gamma^{\ell}_{im} - \Gamma^{m}_{ik}\,\Gamma^{\ell}_{jm}.$$
Often we will lower the index and consider instead the curvature tensor
$$R_{ijk\ell} = R^{\,m}_{ijk}\, g_{m\ell}.$$
The Riemannian curvature tensor has the following symmetries:
$$R_{ijk\ell} = -R_{jik\ell} = -R_{ij\ell k} = R_{k\ell ij}.$$
(These imply that $R$ can be viewed as a self-adjoint (symmetric) operator mapping 2-forms to 2-forms if one raises the first two or the last two indices.)
(Algebraic Bianchi identity)
$$R_{ijk\ell} + R_{jki\ell} + R_{kij\ell} = 0.$$
(Differential Bianchi identity)
$$\nabla_i R_{jk\ell m} + \nabla_j R_{ki\ell m} + \nabla_k R_{ij\ell m} = 0.$$
Remark 15 The tensor $R_{ijk\ell}$ can also be written as a tensor $R(X,Y,Z,W)$, which is a function when vector fields $X, Y, Z, W$ are plugged in. We will sometimes refer to this tensor as $\operatorname{Rm}$. The tensor $R^{\,\ell}_{ijk}$ is usually denoted by $R(X,Y)Z$, which is a vector field when vector fields $X, Y, Z$ are plugged in.
Remark 16 Sometimes the up index is lowered into the 3rd slot instead of the 4th. This will change the definitions of the Ricci and sectional curvatures below, but the sectional curvature of the sphere should always be positive and the Ricci curvature of the sphere should be positive definite.
Remark 17 Note that $\Gamma^{k}_{ij}$ involves first derivatives of the metric, so the Riemannian curvature tensor involves first and second derivatives of the metric.
From these one can derive all the curvatures we will need:
Definition 18 The Ricci curvature tensor $R_{ij}$ is defined as
$$R_{ij} = R^{\,\ell}_{\ell ij} = R_{\ell ijm}\, g^{\ell m}.$$
Note that Rij = Rji by the symmetries of the curvature tensor. Ricci will sometimes be denoted Rc (g) ; or Rc (X;Y ) :
De…nition 19 The scalar curvature R is the function
ij R = g Rij
De…nition 20 The sectional curvature of a plane spanned by vectors X and Y is given by R (X;Y;Y;X) K (X;Y ) = : g (X;X) g (Y;Y ) g (X;Y )2 Here are some facts about the curvatures:
Proposition 21
1. The sectional curvatures determine the entire curvature tensor, i.e., if one can calculate all sectional curvatures, then one can calculate the entire tensor.
2. The sectional curvature $K(X,Y)$ is the Gaussian curvature of the surface generated by geodesics in the plane spanned by $X, Y$.
3. The Ricci curvature can be written as an average of sectional curvatures.
4. The scalar curvature can be written as an average of Ricci curvatures.
5. The scalar curvature essentially gives the difference between the volumes of small metric balls and the volumes of Euclidean balls of the same radius.
6. In 2 dimensions, each curvature determines the others.
7. In 3 dimensions, scalar curvature does not determine Ricci, but Ricci does determine the curvature tensor.
8. In dimensions larger than 3, Ricci does not determine the curvature tensor; there is an additional piece called the Weyl tensor.
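The coordinate formulas above can be checked on a concrete example. The following sketch (an illustrative computation, not from the notes) uses sympy to build the Christoffel symbols, the curvature $R^\ell_{ijk}$, the Ricci tensor $R_{jk} = R^\ell_{\ell jk}$, and the scalar curvature for the unit round 2-sphere, where one expects $\mathrm{Rc} = g$ and $R = 2$.

```python
import sympy as sp

# Illustrative check (not from the notes) of the coordinate formulas:
# Christoffel symbols, R^l_{ijk}, the Ricci tensor R_{jk} = R^l_{ljk}, and the
# scalar curvature, computed for the unit round 2-sphere with coordinates
# (theta, phi) and metric g = diag(1, sin(theta)^2).
th, ph = sp.symbols('theta phi')
x, n = [th, ph], 2
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])
ginv = g.inv()

def christoffel(k, i, j):
    # Gamma^k_{ij} = (1/2) g^{kl} (d_i g_{jl} + d_j g_{il} - d_l g_{ij})
    return sum(ginv[k, l]*(sp.diff(g[j, l], x[i]) + sp.diff(g[i, l], x[j])
                           - sp.diff(g[i, j], x[l])) for l in range(n))/2

Gam = [[[sp.simplify(christoffel(k, i, j)) for j in range(n)]
        for i in range(n)] for k in range(n)]

def riemann(l, i, j, k):
    # R^l_{ijk} = d_i Gam^l_{jk} - d_j Gam^l_{ik}
    #             + Gam^m_{jk} Gam^l_{im} - Gam^m_{ik} Gam^l_{jm}
    val = sp.diff(Gam[l][j][k], x[i]) - sp.diff(Gam[l][i][k], x[j])
    val += sum(Gam[m][j][k]*Gam[l][i][m] - Gam[m][i][k]*Gam[l][j][m]
               for m in range(n))
    return sp.simplify(val)

Ric = sp.Matrix(n, n, lambda j, k: sum(riemann(l, l, j, k) for l in range(n)))
scal = sp.simplify(sum(ginv[j, k]*Ric[j, k] for j in range(n) for k in range(n)))
print(sp.simplify(Ric - g))  # zero matrix: Rc = (n-1) g = g on the unit sphere
print(scal)                  # 2
```

The output illustrates item 6 in the simplest case: on the unit 2-sphere all the curvatures are constants determined by one another ($K = 1$, $\mathrm{Rc} = g$, $R = 2$).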
With this in mind, we can talk about several different kinds of nonnegative curvature.
Definition 22 Let $x$ be a point on a Riemannian manifold $(M,g)$. Then $x$ has
1. nonnegative scalar curvature if $R(x) \ge 0$;
2. nonnegative Ricci curvature at $x$ if $\mathrm{Rc}(X,X) = R_{ij} X^i X^j \ge 0$ for every vector $X \in T_x M$;
3. nonnegative sectional curvature if $R(X,Y,Y,X) = g(R(X,Y)Y, X) \ge 0$ for all vectors $X, Y \in T_x M$;
4. nonnegative Riemann curvature (or nonnegative curvature operator) if $\mathrm{Rm} \ge 0$ as a quadratic form on $\Lambda^2(M)$, i.e., if $R_{ijk\ell}\,\omega^{ij}\omega^{k\ell} \ge 0$ for all 2-forms $\omega = \omega_{ij}\, dx^i \wedge dx^j$ (where the indices are raised using the metric $g$).

It is not too hard to see that 4 implies 3 implies 2 implies 1. Also, in 3 dimensions, 3 and 4 are equivalent. In dimension 4 and higher, these are all distinct.

Chapter 4
Basics of geometric evolutions
4.1 Introduction
This lecture roughly follows Tao's Lecture 1. We will talk in general about flows of Riemannian metrics and Ricci flow. We will consider a flow of Riemannian metrics to be a one-parameter family of Riemannian metrics, usually denoted $g(t)$ or $g_{ij}(t)$ or $g_{ij}(x,t)$, on a fixed Riemannian manifold $M$. There are more ingenious ways to define such a flow using spacetimes (called generalized Ricci flows). However, at present I do not think that they give a significant savings over the more classical idea, since one still needs to consider singular spacetimes. For more on generalized Ricci flows, consult the book by Morgan-Tian.

The family $g(t)$ is a one-parameter family of sections of a vector bundle, and one can take its derivative as
$$\frac{\partial}{\partial t} g(t) = \lim_{dt \to 0}\frac{g(t+dt) - g(t)}{dt}$$
since $g(t)$ and $g(t+dt)$ are both sections of the same vector bundle, so the difference makes sense. In fact, we can differentiate any tensor in this way. Similarly, we can try to solve differential equations of the form
$$\frac{\partial}{\partial t} g_{ij} = \dot g_{ij}$$
for some prescribed $\dot g_{ij}$. The evolution of the metric induces an evolution of the metric on the cotangent bundle, using
$$g^{ij} g_{jk} = \delta^i_k,$$
so that
$$\frac{\partial}{\partial t} g^{ij} = -g^{ik}\, \dot g_{k\ell}\, g^{\ell j}.$$
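The formula for the evolution of the inverse metric can be sanity-checked numerically. The matrices below are a toy example I am assuming for illustration; nothing here comes from the notes.

```python
import numpy as np

# Toy numerical check (assumed example) of the induced evolution of the
# inverse metric: if dg/dt = gdot, then
# d(g^{-1})/dt = -g^{-1} gdot g^{-1},  i.e.  d/dt g^{ij} = -g^{ik} gdot_{kl} g^{lj}.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
g0 = A @ A.T + 3*np.eye(3)            # a positive definite "metric" at t = 0
gdot = rng.standard_normal((3, 3))
gdot = gdot + gdot.T                  # a symmetric variation

g = lambda t: g0 + t*gdot             # a linear path of metrics
h = 1e-6
numeric = (np.linalg.inv(g(h)) - np.linalg.inv(g(-h)))/(2*h)
formula = -np.linalg.inv(g0) @ gdot @ np.linalg.inv(g0)
print(np.max(np.abs(numeric - formula)))  # tiny (finite-difference error only)
```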
The Riemannian connection is also changing if the metric is changing. Thus for a fixed vector field $X$, we have
$$\frac{\partial}{\partial t}\nabla_i X = \frac{\partial}{\partial t}\left[\left(\frac{\partial X^j}{\partial x^i} + \Gamma^j_{ik} X^k\right)\frac{\partial}{\partial x^j}\right] = \dot\Gamma^k_{ij} X^j \frac{\partial}{\partial x^k}.$$
We can use the fact that the connection is torsion-free and compatible with the metric to derive the formula for $\dot\Gamma^k_{ij}$:
$$0 = \frac{\partial}{\partial t}\left(\nabla_i g_{jk}\right) = \frac{\partial}{\partial t}\left(\frac{\partial}{\partial x^i} g_{jk} - \Gamma^\ell_{ij} g_{\ell k} - \Gamma^\ell_{ik} g_{j\ell}\right) = \nabla_i \dot g_{jk} - \dot\Gamma^\ell_{ij} g_{\ell k} - \dot\Gamma^\ell_{ik} g_{j\ell},$$
and
$$0 = \dot\Gamma^k_{ij} - \dot\Gamma^k_{ji},$$
so we can solve for $\dot\Gamma^k_{ij}$ from
$$\nabla_i \dot g_{jk} = \dot\Gamma^\ell_{ij} g_{\ell k} + \dot\Gamma^\ell_{ik} g_{j\ell}$$
$$\nabla_j \dot g_{ki} = \dot\Gamma^\ell_{jk} g_{\ell i} + \dot\Gamma^\ell_{ji} g_{k\ell}$$
$$\nabla_k \dot g_{ij} = \dot\Gamma^\ell_{ki} g_{\ell j} + \dot\Gamma^\ell_{kj} g_{i\ell}$$
to get
$$\dot\Gamma^k_{ij} = \frac{1}{2} g^{k\ell}\left(\nabla_i \dot g_{j\ell} + \nabla_j \dot g_{i\ell} - \nabla_\ell \dot g_{ij}\right). \tag{4.1}$$
Remark 23 This mimics the proof of the formula for the Riemannian connection given that it is torsion-free and compatible with the metric. There are other ways to derive this formula, for instance by computing in normal coordinates and using the fact that although $\Gamma^k_{ij}$ is not a tensor, $\frac{\partial}{\partial t}\Gamma^k_{ij}$ comes from the difference of two connections and is thus a tensor. We will use this method below.
We may now look at the induced formula for the evolution of the Riemannian curvature tensor. Recall that, in coordinates,
$$R^\ell_{ijk}\frac{\partial}{\partial x^\ell} = \nabla_i\left(\Gamma^\ell_{jk}\frac{\partial}{\partial x^\ell}\right) - \nabla_j\left(\Gamma^\ell_{ik}\frac{\partial}{\partial x^\ell}\right).$$
Since we are interested in the derivative of a tensor, $\frac{\partial}{\partial t} R^\ell_{ijk} = \dot R^\ell_{ijk}$, we can compute this in any coordinate system we want. Recall that there is a coordinate system around $p$ such that all Christoffel symbols vanish at $p$. Doing this reduces the equation to
$$\dot R^\ell_{ijk}\frac{\partial}{\partial x^\ell} = \frac{\partial}{\partial t}\left[\nabla_i\left(\Gamma^\ell_{jk}\frac{\partial}{\partial x^\ell}\right) - \nabla_j\left(\Gamma^\ell_{ik}\frac{\partial}{\partial x^\ell}\right)\right] = \frac{\partial}{\partial x^i}\dot\Gamma^\ell_{jk}\frac{\partial}{\partial x^\ell} - \frac{\partial}{\partial x^j}\dot\Gamma^\ell_{ik}\frac{\partial}{\partial x^\ell} = \nabla_i\dot\Gamma^\ell_{jk}\frac{\partial}{\partial x^\ell} - \nabla_j\dot\Gamma^\ell_{ik}\frac{\partial}{\partial x^\ell}.$$
This last piece is tensorial (recall that $\dot\Gamma^k_{ij}$ is a tensor), and thus only depends on the point, not the coordinate patch, so we must have that
$$\dot R^\ell_{ijk} = \nabla_i\dot\Gamma^\ell_{jk} - \nabla_j\dot\Gamma^\ell_{ik}.$$
We can now use (4.1) to get
$$\dot R^\ell_{ijk} = \nabla_i\left[\frac{1}{2} g^{\ell m}\left(\nabla_j\dot g_{km} + \nabla_k\dot g_{jm} - \nabla_m\dot g_{jk}\right)\right] - \nabla_j\left[\frac{1}{2} g^{\ell m}\left(\nabla_i\dot g_{km} + \nabla_k\dot g_{im} - \nabla_m\dot g_{ik}\right)\right]$$
$$= \frac{1}{2} g^{\ell m}\left(\nabla_i\nabla_j\dot g_{km} - \nabla_j\nabla_i\dot g_{km} + \nabla_i\nabla_k\dot g_{jm} - \nabla_i\nabla_m\dot g_{jk} - \nabla_j\nabla_k\dot g_{im} + \nabla_j\nabla_m\dot g_{ik}\right)$$
$$= \frac{1}{2} g^{\ell m}\left(-R^p_{ijk}\dot g_{pm} - R^p_{ijm}\dot g_{kp} + \nabla_i\nabla_k\dot g_{jm} - \nabla_i\nabla_m\dot g_{jk} - \nabla_j\nabla_k\dot g_{im} + \nabla_j\nabla_m\dot g_{ik}\right).$$
We can take the trace $\dot R_{jk} = \dot R^i_{ijk}$ to get
$$\dot R_{jk} = \frac{1}{2} g^{im}\left(-R^p_{ijk}\dot g_{pm} - R^p_{ijm}\dot g_{kp} + \nabla_i\nabla_k\dot g_{jm} - \nabla_i\nabla_m\dot g_{jk} - \nabla_j\nabla_k\dot g_{im} + \nabla_j\nabla_m\dot g_{ik}\right).$$
Commuting $\nabla_i\nabla_k\dot g_{jm}$ to $\nabla_k\nabla_i\dot g_{jm}$ (at the cost of further curvature terms) and collecting, this becomes
$$\dot R_{jk} = -\frac{1}{2}\triangle_L\dot g_{jk} - \frac{1}{2} g^{im}\nabla_j\nabla_k\dot g_{im} + \frac{1}{2} g^{im}\left(\nabla_k\nabla_i\dot g_{jm} + \nabla_j\nabla_m\dot g_{ik}\right),$$
where
$$\triangle_L\dot g_{jk} = g^{im}\nabla_i\nabla_m\dot g_{jk} + 2\, g^{im} R^p_{ijk}\,\dot g_{mp} - g^{pq} R_{jp}\,\dot g_{kq} - g^{pq} R_{kp}\,\dot g_{jq}$$
is the Lichnerowicz Laplacian (notice only the first term has two derivatives of $\dot g_{jk}$). (Note: I think that T. Tao has an error in this formula with the sign of the last term of $\dot R_{jk}$.)
Finally, we may take a trace to get
$$\dot R = g^{jk}\dot R_{jk} - g^{pj}\dot g_{pq}\, g^{qk} R_{jk} = -\langle\mathrm{Rc},\dot g\rangle - g^{im}\nabla_i\nabla_m\left(g^{jk}\dot g_{jk}\right) + g^{im} g^{jk}\nabla_k\nabla_i\dot g_{jm} = -\langle\mathrm{Rc},\dot g\rangle - \triangle\,\mathrm{tr}_g(\dot g) + \mathrm{div}\,\mathrm{div}\,\dot g,$$
where
$$(\mathrm{div}\, h)_j = g^{k\ell}\nabla_k h_{\ell j}.$$

4.2 Ricci flow
Note that in the evolution of the Ricci curvature, if one considers $\dot g = -2\,\mathrm{Rc}$, one gets
$$-\frac{1}{2} g^{im}\nabla_j\nabla_k\dot g_{im} + \frac{1}{2} g^{im}\left(\nabla_k\nabla_i\dot g_{jm} + \nabla_j\nabla_m\dot g_{ik}\right)$$
$$= g^{im}\nabla_j\nabla_k R_{im} - g^{im}\nabla_k\nabla_i R_{jm} - g^{im}\nabla_j\nabla_m R_{ik}$$
$$= \nabla_j\nabla_k R - g^{im}\nabla_j\nabla_m R_{ik} - g^{im}\nabla_k\nabla_i R_{jm}.$$
Note that the differential Bianchi identity implies
$$0 = g^{jm} g^{k\ell}\left(\nabla_i R_{jk\ell m} + \nabla_j R_{ki\ell m} + \nabla_k R_{ij\ell m}\right) = \nabla_i R - g^{jm}\nabla_j R_{im} - g^{k\ell}\nabla_k R_{i\ell},$$
so
$$\nabla_i R = 2 g^{jm}\nabla_j R_{im},$$
and hence
$$-\frac{1}{2} g^{im}\nabla_j\nabla_k\dot g_{im} + \frac{1}{2} g^{im}\left(\nabla_k\nabla_i\dot g_{jm} + \nabla_j\nabla_m\dot g_{ik}\right) = \nabla_j\nabla_k R - \frac{1}{2}\nabla_j\nabla_k R - \frac{1}{2}\nabla_k\nabla_j R = 0.$$
Thus under Ricci flow,
$$\frac{\partial}{\partial t}\,\mathrm{Rc} = \triangle_L\,\mathrm{Rc}.$$
Furthermore, we see that
$$\frac{\partial R}{\partial t} = 2\langle\mathrm{Rc},\mathrm{Rc}\rangle + 2\triangle R - 2 g^{i\ell} g^{jk}\nabla_i\nabla_j R_{k\ell} = 2\langle\mathrm{Rc},\mathrm{Rc}\rangle + 2\triangle R - g^{i\ell}\nabla_i\nabla_\ell R = \triangle R + 2\,|\mathrm{Rc}|^2.$$
The important notion to get right now is that this looks very much like a heat equation with a reaction term. We will see how to make use of this in the near future.
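On a round sphere the reaction term dominates in a way one can compute by hand: $R$ is spatially constant and $\mathrm{Rc} = (R/n)g$, so $|\mathrm{Rc}|^2 = R^2/n$ and the evolution reduces to the ODE $dR/dt = 2R^2/n$. The sketch below (an illustrative example I am assuming, not from the notes) checks the exact solution $R(t) = R_0/(1 - 2R_0 t/n)$ against a forward-Euler integration.

```python
import numpy as np

# On a round sphere the scalar curvature is spatially constant and
# Rc = (R/n) g, so |Rc|^2 = R^2/n and dR/dt = Delta R + 2|Rc|^2 reduces to
# the ODE dR/dt = 2 R^2/n, with exact solution R(t) = R0/(1 - 2 R0 t/n),
# which blows up at t = n/(2 R0).  Forward-Euler check (assumed example):
n, R0 = 3, 6.0                 # the unit S^3 has R = n(n-1) = 6
T = 0.9*n/(2*R0)               # stop at 90% of the blow-up time
steps = 20000
dt = T/steps
R = R0
for _ in range(steps):
    R += dt*2*R**2/n
exact = R0/(1 - 2*R0*T/n)
print(R, exact)                # the two values agree to a fraction of a percent
```

The blow-up of $R$ in finite time is the simplest instance of the finite-time singularities discussed in Chapter 6.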
4.3 Existence/Uniqueness
Note that the Ricci flow equation,
$$\frac{\partial}{\partial t} g = -2\,\mathrm{Rc},$$
is a second order partial differential equation, since the Ricci curvature comes from second derivatives of the metric. To truly look at existence/uniqueness, one must write this as an equation in coordinates. We will look at the linearization of this operator in order to find the principal symbol (which is basically the coefficients of the linearization of the highest derivatives). Analysis of the principal symbol will often allow us to determine that a solution exists for a short time. Here is the meta-theorem for existence of parabolic PDE:

Meta-Theorem (imprecise): A semi-linear PDE of the form
$$\frac{\partial u}{\partial t} - a^{ij}(x,t)\frac{\partial^2 u}{\partial x^i\partial x^j} + F(x,t,u,\partial u) = 0$$
on a compact manifold has a solution with initial condition $u(x,0) = f(x)$ if there exists $\lambda > 0$ such that $a^{ij}\xi_i\xi_j \ge \lambda\,|\xi|^2$ (this condition is called strict parabolicity) for $t$ close to $0$. Similarly, if we allow $a^{ij}$ to depend on $u$ (making the equation quasilinear), the same is true if we look at the linearization (which is then semilinear).
Remark 24 $a^{ij}$ is called the principal symbol of the parabolic differential operator. If one takes out the $\frac{\partial}{\partial t}$, the differential operator is said to be elliptic if it satisfies the inequality.

Remark 25 We can replace $\frac{\partial}{\partial x^i}$ with $\nabla_i$ since the difference has fewer derivatives.

Remark 26 One should be able to prove a coordinate independent version, but this is not usually done. All the theory is based on the theory of differential equations on domains in the plane.
Remark 27 For an arbitrary, nonlinear second order PDE of the form
$$G\left(x,t,u,\partial u,\partial^2 u\right) = 0,$$
one can consider the linearization of $G$ with respect to $u$. This will look roughly like
$$\left[\partial_{H_{ij}} G\,\frac{\partial^2}{\partial x^i\partial x^j} + \partial_{V_i} G\,\frac{\partial}{\partial x^i} + \partial_u G\right] v,$$
where $G = G(x,t,u,V,H)$ and the operator is evaluated at some $u$ (which is where it has been linearized). Notice this now gives a semilinear PDE. The principal symbol is $\partial_{H_{ij}} G\,\xi_i\xi_j$.

Example 28 Note that the equation
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{(\partial x^1)^2} + \frac{\partial^2 u}{(\partial x^2)^2} = \delta^{ij}\frac{\partial^2 u}{\partial x^i\partial x^j}$$
is already linear. For an equation like
$$\frac{\partial u}{\partial t} = u^2\frac{\partial^2 u}{\partial x^2},$$
the linearization is
$$\frac{\partial v}{\partial t} = u^2\frac{\partial^2 v}{\partial x^2} + 2u\,\frac{\partial^2 u}{\partial x^2}\, v,$$
thus the principal symbol is $u^2$, which is positive if $u > 0$.

Now, the Ricci operator is an operator on sections, not just functions, so how do we make sense of the kind of result given above? We can make a similar definition in terms of the linearization, but now the principal symbol is a map from sections of the symmetric 2-tensor bundle to itself. What we need is that for any $\xi \ne 0$, the principal symbol is a linear isomorphism.
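The linearization in Example 28 can be verified symbolically by differentiating $G(u + sv)$ in $s$ at $s = 0$; the sketch below (illustrative, not from the notes) does exactly that with sympy.

```python
import sympy as sp

# Symbolic check of Example 28 (illustrative): linearize G(u) = -u_t + u^2 u_xx
# by differentiating G(u + s v) with respect to s at s = 0.
x, t, s = sp.symbols('x t s')
u = sp.Function('u')(x, t)
v = sp.Function('v')(x, t)

G = lambda w: -sp.diff(w, t) + w**2*sp.diff(w, x, 2)
lin = sp.expand(sp.diff(G(u + s*v), s).subs(s, 0))
print(lin)  # equals -v_t + u^2 v_xx + 2 u u_xx v, up to term ordering
print(lin.coeff(sp.diff(v, x, 2)))  # the principal symbol coefficient, u^2
```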
Recall that the linearization of $R_{jk}$ is
$$\dot R_{jk} = -\frac{1}{2} g^{im}\nabla_i\nabla_m\dot g_{jk} - \frac{1}{2} g^{im}\nabla_j\nabla_k\dot g_{im} + \frac{1}{2} g^{im}\left(\nabla_k\nabla_i\dot g_{jm} + \nabla_j\nabla_m\dot g_{ik}\right)$$
(up to lower order terms), so the principal symbol of $-2R_{jk}$ is
$$\sigma[D(-2\,\mathrm{Rc})](\xi)(h)_{jk} = g^{im}\xi_i\xi_m h_{jk} + g^{im}\xi_j\xi_k h_{im} - g^{im}\left(\xi_k\xi_i h_{jm} + \xi_j\xi_m h_{ik}\right).$$
In order to see if this is an isomorphism, we can rotate so that $\xi_1 > 0$ and $\xi_2 = \cdots = \xi_n = 0$, and by scaling we can assume $\xi_1 = 1$. We can also assume that at a point $g_{ij} = \delta_{ij}$. Then we see that
$$\sigma[D(-2\,\mathrm{Rc})](\xi)(h)_{jk} = h_{jk} + \delta_{1j}\delta_{1k}\left(h_{11} + \cdots + h_{nn}\right) - \delta_{1k}\, h_{j1} - \delta_{1j}\, h_{1k}.$$
And so the matrix for the symbol gives
$$\begin{pmatrix} h_{22} + \cdots + h_{nn} & 0 & \cdots & 0 \\ 0 & & & \\ \vdots & & h_{\alpha\beta} & \\ 0 & & & \end{pmatrix}$$
where $2 \le \alpha,\beta \le n$. We see immediately that there is an $n$-dimensional kernel (we can let $h_{1k}$ equal anything we want and if everything else is zero, we are in the kernel). Now we will see how to overcome this issue. Rewrite the linearization as
$$\dot R_{jk} = -\frac{1}{2} g^{im}\nabla_i\nabla_m\dot g_{jk} - \frac{1}{2} g^{im}\nabla_j\nabla_k\dot g_{im} + \frac{1}{2} g^{im}\left(\nabla_k\nabla_i\dot g_{jm} + \nabla_j\nabla_m\dot g_{ik}\right) = -\frac{1}{2} g^{im}\nabla_i\nabla_m\dot g_{jk} + \frac{1}{2}\nabla_k V_j + \frac{1}{2}\nabla_j V_k$$
if
$$V_j = g^{im}\left(\nabla_i\dot g_{jm} - \frac{1}{2}\nabla_j\dot g_{im}\right).$$
The last two terms are equal to $\frac{1}{2} L_V g_{jk}$ (where $V^i = g^{ij} V_j$, often denoted $V^\sharp$), and so we get that the linearization of $-2R_{jk}$ is
$$g^{im}\nabla_i\nabla_m\dot g_{jk} - L_V g_{jk}.$$
Lie derivatives arise from changing by diffeomorphisms, i.e., if $\phi_t$ are diffeomorphisms such that
$$\frac{d}{dt}\phi_t(x) = X(\phi_t(x))$$
and $\phi_0$ is the identity (i.e., $\phi_t$ is the flow of $X$), then
$$\left.\frac{d}{dt}\,\phi_t^* g_0\right|_{t=0} = L_X g_0.$$
One can pretty easily see that if we take the vector field $V$ as above, we can look at the flow $\phi_t$ and $\phi_t^* g(t)$, and we will see that $\tilde g(t) = \phi_t^* g(t)$ evolves by
$$\frac{\partial}{\partial t}\tilde g = -2\,\mathrm{Rc}(\tilde g) + L_V\tilde g,$$
and the linearization is
$$g^{im}\nabla_i\nabla_m\dot g_{jk}.$$
This is like looking at an equation roughly like
$$\frac{\partial}{\partial t} h_{jk} = g^{im}\nabla_i\nabla_m h_{jk},$$
which is a heat equation with a unique solution. This has principal symbol $g^{ij}\xi_i\xi_j$, which is strictly positive definite. It can be shown that this implies that the modified Ricci flow (the equation above on $\tilde g$) has a unique solution. One can then show that this implies the Ricci flow has a unique solution too. I.e.,
Theorem 29 Given an initial closed Riemannian manifold $(M, g_0)$, there is a time $T > 0$ and Riemannian metrics $g(t)$ on $M$ for each $t \in [0,T)$ which satisfy the initial value problem
$$\frac{\partial}{\partial t} g = -2\,\mathrm{Rc}(g), \qquad g(0) = g_0.$$
Moreover, given the initial condition, the metrics $g(t)$ are uniquely determined, and there is a maximal such $T$.
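The standard explicit example behind Theorem 29 is the shrinking round sphere. Since $\mathrm{Rc}$ is invariant under scaling the metric, the ansatz $g(t) = c(t)\,g_0$ reduces the PDE to an ODE for $c$; the sketch below (an assumed illustration) lets sympy confirm the ODE and solve for the maximal time.

```python
import sympy as sp

# The shrinking round sphere (standard example, sketched here): on the unit
# S^n one has Rc(g0) = (n-1) g0, and Ricci curvature is scale invariant,
# Rc(c g0) = Rc(g0) for c > 0.  So g(t) = c(t) g0 solves dg/dt = -2 Rc(g)
# exactly when c'(t) = -2(n-1), i.e. c(t) = 1 - 2(n-1) t, and the maximal
# existence time in Theorem 29 is T = 1/(2(n-1)).
n, t = sp.symbols('n t', positive=True)
c = 1 - 2*(n - 1)*t
print(sp.simplify(sp.diff(c, t) + 2*(n - 1)))  # 0: the ODE c' = -2(n-1) holds
print(sp.solve(sp.Eq(c, 0), t))                # the maximal time, 1/(2(n-1))
```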
Remark 30 One can also show that this is true for complete manifolds with bounded curvature $|\mathrm{Rm}|$, which was done by Shi. However, the proof is much more difficult on noncompact manifolds.

Chapter 5
Basics of PDE techniques
5.1 Introduction
This section will roughly follow Tao's lecture 3. We will look at some basic PDE techniques and apply them to the Ricci flow to obtain some important results about preservation and pinching of curvature quantities. The important fact is that the curvatures satisfy certain reaction-diffusion equations which can be studied with the maximum principle.
5.2 The maximum principle
Recall that if a smooth function $u : U \to \mathbb{R}$, where $U \subset \mathbb{R}^n$, has a local minimum at $x_0$ in the interior of $U$, then
$$\frac{\partial u}{\partial x^i}(x_0) = 0, \qquad \frac{\partial^2 u}{\partial x^i\partial x^j}(x_0) \ge 0,$$
where the second statement is that the Hessian is nonnegative definite (has all nonnegative eigenvalues). The same is true on a Riemannian manifold, replacing regular derivatives with covariant derivatives.

Lemma 31 Let $(M,g)$ be a Riemannian manifold and $u : M \to \mathbb{R}$ be a smooth (or at least $C^2$) function that has a local minimum at $x_0 \in M$. Then
$$\nabla_i u(x_0) = 0,$$
$$\nabla_i\nabla_j u(x_0) \ge 0,$$
$$\triangle u(x_0) = g^{ij}(x_0)\,\nabla_i\nabla_j u(x_0) \ge 0.$$
Proof. In a coordinate patch, the first statement is clear since $\nabla_i u = \frac{\partial u}{\partial x^i}$. The second statement is that the Hessian is nonnegative definite. Recall that in coordinates, the Hessian is
$$\nabla_i\nabla_j u = \frac{\partial^2 u}{\partial x^i\partial x^j} - \Gamma^k_{ij}\frac{\partial u}{\partial x^k},$$
but at a minimum the second term is zero, and the nonnegative definiteness follows from the case in $\mathbb{R}^n$. The last statement is true since $g$ is positive definite and the Hessian is nonnegative definite.
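The Euclidean case underlying Lemma 31 is easy to see numerically; the function below is a toy example I am assuming for illustration.

```python
import numpy as np

# Toy numerical illustration (assumed example) of Lemma 31 in R^2: at an
# interior local minimum the gradient vanishes and the Hessian is nonnegative
# definite, so its trace (the Laplacian) is nonnegative.
f = lambda p: (p[0] - 1)**2 + 2*(p[0] - 1)*p[1] + 3*p[1]**2   # minimum at (1, 0)
x0 = np.array([1.0, 0.0])
h = 1e-5
e = np.eye(2)

# central finite differences for the gradient and the Hessian at x0
grad = np.array([(f(x0 + h*e[i]) - f(x0 - h*e[i]))/(2*h) for i in range(2)])
hess = np.array([[(f(x0 + h*e[i] + h*e[j]) - f(x0 + h*e[i] - h*e[j])
                   - f(x0 - h*e[i] + h*e[j]) + f(x0 - h*e[i] - h*e[j]))/(4*h*h)
                  for j in range(2)] for i in range(2)])

print(np.allclose(grad, 0, atol=1e-8))        # True: gradient vanishes
print(np.all(np.linalg.eigvalsh(hess) > 0))   # True: Hessian is positive here
print(np.trace(hess) >= 0)                    # True: Laplacian >= 0 at the min
```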
Remark 32 There is a similar statement for maxima.
The following lemma is true in the generality of a smooth family of metrics, though it is also of use for a fixed metric.

Lemma 33 Let $(M, g(t))$ be a smooth family of compact Riemannian manifolds for $t \in [0,T]$. Let $u : [0,T] \times M \to \mathbb{R}$ be a $C^2$ function such that $u(0,x) \ge 0$ for all $x \in M$. Then exactly one of the following is true:
1. $u(t,x) \ge 0$ for all $(t,x) \in [0,T] \times M$; or
2. There exists $(t_0, x_0) \in (0,T] \times M$ such that all of the following are true:
$$u(t_0,x_0) < 0,$$
$$\nabla_i u(t_0,x_0) = 0,$$
$$\triangle_{g(t_0)} u(t_0,x_0) \ge 0,$$
$$\frac{\partial u}{\partial t}(t_0,x_0) < 0.$$
Proof. Consider the function
$$u(t,x) + \varepsilon t.$$
If $u(t,x) + \varepsilon t > 0$ for all $\varepsilon > 0$ (small), then $u(t,x) \ge 0$. Otherwise there is an $\varepsilon > 0$ and a first $t_0$ such that there is an $x_0 \in M$ such that $u(t_0,x_0) + \varepsilon t_0 = 0$. At the first such time, we must have that $x_0$ is a spatial minimum for this function, and thus
$$u(t_0,x_0) = -\varepsilon t_0 < 0,$$
$$\nabla u(t_0,x_0) = 0,$$
$$\triangle u(t_0,x_0) \ge 0,$$
$$\frac{\partial u}{\partial t}(t_0,x_0) \le -\varepsilon < 0.$$
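The mechanism in the proof has a simple discrete analogue: for an explicit finite-difference heat step, each new value is a convex combination of neighboring values, so the spatial minimum cannot decrease. The sketch below (a toy discretization I am assuming, not from the notes) demonstrates this on a circle.

```python
import numpy as np

# A discrete analogue (toy, assumed example) of the argument in Lemma 33: for
# u_t = u_xx on a circle, an explicit Euler step with lam = dt/dx^2 <= 1/2
# replaces each value by a convex combination of itself and its neighbors,
# so the spatial minimum can never decrease -- just as, at a first interior
# minimum in the lemma, Delta u >= 0 forces u to move upward.
N = 200
lam = 0.4                                   # lam <= 1/2: the stability condition
x = np.linspace(0, 2*np.pi, N, endpoint=False)
u = 1.0 + np.sin(3*x)*np.cos(x)             # arbitrary smooth initial data
mins = [u.min()]
for _ in range(2000):
    u = u + lam*(np.roll(u, 1) - 2*u + np.roll(u, -1))
    mins.append(u.min())
print(np.all(np.diff(mins) >= -1e-9))       # True: the minimum is nondecreasing
```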
Corollary 34 Let $(M, g(t))$ be a smooth family of compact Riemannian manifolds for $t \in [0,T]$. Let $u, v : [0,T] \times M \to \mathbb{R}$ be $C^2$ functions such that $u(0,x) \ge v(0,x)$ for all $x \in M$. Also let $A \in \mathbb{R}$. Then exactly one of the following is true:
1. $u(t,x) \ge v(t,x)$ for all $(t,x) \in [0,T] \times M$; or
2. There exists $(t_0, x_0) \in (0,T] \times M$ such that all of the following are true:
$$u(t_0,x_0) < v(t_0,x_0),$$
$$\nabla_i u(t_0,x_0) = \nabla_i v(t_0,x_0),$$
$$\triangle_{g(t_0)} u(t_0,x_0) \ge \triangle_{g(t_0)} v(t_0,x_0),$$
$$\frac{\partial u}{\partial t}(t_0,x_0) < \frac{\partial v}{\partial t}(t_0,x_0) + A\left[u(t_0,x_0) - v(t_0,x_0)\right].$$
Proof. Replace $u$ in Lemma 33 with $e^{-At}(u - v)$.

This will allow us to estimate subsolutions of a heat equation by supersolutions of the same heat equation.
Corollary 35 Let the assumptions be the same as in Corollary 34, including $u(0,x) \ge v(0,x)$. Suppose $u$ is a supersolution of a reaction-diffusion equation, i.e.,
$$\frac{\partial u}{\partial t} \ge \triangle_{g(t)} u + \nabla_{X(t)} u + F(t,u),$$
and $v$ is a subsolution of the same equation, i.e.,
$$\frac{\partial v}{\partial t} \le \triangle_{g(t)} v + \nabla_{X(t)} v + F(t,v),$$
for all $(t,x) \in [0,T] \times M$, where $X(t)$ is a vector field for each $t$ and $F(t,w)$ is Lipschitz in $w$, i.e., there is $A > 0$ such that
$$|F(t,w) - F(t,w')| \le A\,|w - w'|.$$
Then $u(t,x) \ge v(t,x)$ for all $t \in [0,T]$.

Proof. Consider
$$\frac{\partial}{\partial t}(u - v) \ge \triangle_{g(t)}(u - v) + \nabla_{X(t)}(u - v) + F(t,u) - F(t,v) \ge \triangle_{g(t)}(u - v) + \nabla_{X(t)}(u - v) - A\,|u - v|.$$
The dichotomy in Corollary 34 says that either $u - v \ge 0$ for all $(t,x)$, or else there is a point $(t_0,x_0)$ such that at that point
$$u - v < 0,$$
$$\nabla(u - v) = 0,$$
$$\triangle(u - v) \ge 0,$$
$$\frac{\partial}{\partial t}(u - v) < A'(u - v) = -A'\,|u - v|$$
for any $A'$. But the inequality above says that at that same point
$$\frac{\partial}{\partial t}(u - v) \ge -A\,|u - v|,$$
which is a contradiction if $A' \ge A$.

Usually, instead of making $v$ a subsolution of the PDE, we will just make $v$ a subsolution of the ODE
$$\frac{dv}{dt} \le F(t,v),$$
where $v = v(t)$ is independent of $x$, and so this is also a subsolution of the PDE. Here is an easy application:
Proposition 36 Nonnegative scalar curvature is preserved by the Ricci flow, i.e., if $R(0,x) \ge 0$ for all $x \in M$ and the metric $g$ satisfies the Ricci flow for $t \in [0,T)$, then $R(t,x) \ge 0$ for all $x \in M$ and $t \in [0,T)$.

Proof. Recall that $R$ satisfies the evolution equation
$$\frac{\partial R}{\partial t} = \triangle_{g(t)} R + 2\,|\mathrm{Rc}|^2,$$
thus it is a supersolution to the heat equation (with changing metric), i.e.,
$$\frac{\partial R}{\partial t} \ge \triangle R.$$
By Corollary 35, we must have that $R \ge 0$ for all $t$.

We can actually do better. Notice that if $T_{ij}$ is a 2-tensor on an $n$-dimensional Riemannian manifold $(M,g)$, then
$$|T|^2 \ge \frac{1}{n}\left(g^{ij} T_{ij}\right)^2,$$
since
$$\left|T_{ij} - \frac{1}{n}\left(g^{k\ell} T_{k\ell}\right) g_{ij}\right|^2 \ge 0$$