<<

Math 224 - Representations of Reductive Lie Groups

Taught by Dennis Gaitsgory Notes by Dongryul Kim Spring 2018

This was a course taught by Dennis Gaitsgory. The class met on Tuesdays and Thursdays from 11:30am to 1pm, and the main textbook was Springer’s Linear Algebraic Groups. There were a few assignments near the beginning of the course, and a final paper.

Contents

1 January 23, 2018 4 1.1 Linear algebraic groups ...... 4 1.2 Representations of linear algebraic groups ...... 6

2 January 25, 2018 7 2.1 Embedding of linear algebraic groups ...... 7 2.2 The ...... 8

3 February 1, 2018 10 3.1 Examples of groups and classification of representations . . . . . 10 3.2 Tannaka ...... 11

4 February 6, 2018 14 4.1 Connected component ...... 14 4.2 and surjectivity ...... 15

5 February 8, 2018 17 5.1 Fun with Frobenius ...... 17 5.2 More on orbits ...... 17 5.3 Jordan decomposition ...... 18

6 February 13, 2018 20 6.1 Representations of groups ...... 20 6.2 Groups of 1 ...... 22

1 Last Update: August 27, 2018 7 February 15, 2018 23 7.1 Commutative groups with semi-simple points ...... 23

8 February 20, 2018 26 8.1 Continuous family of ...... 26 8.2 Automorphisms of projective ...... 28

9 February 22, 2018 29 9.1 Ind-schemes ...... 29

10 February 27, 2018 33 10.1 Comparing reduced parts of ...... 33 10.2 Lie ...... 34

11 March 1, 2018 36 11.1 Equivariant quasicoherent ...... 36 11.2 Distributions ...... 37

12 March 6, 2018 40 12.1 Lie over p ...... 40 12.2 Quotients ...... 42

13 March 8, 2018 44 13.1 Parabolic and solvable groups ...... 44

14 March 20, 2018 48 14.1 Borel ...... 48 14.2 Structure of connected solvable groups ...... 49 14.3 Maximal tori ...... 50

15 March 27, 2018 51 15.1 Reductive groups ...... 52

16 March 29, 2018 54 16.1 Cartan subgroup ...... 55

17 April 3, 2018 57 17.1 Nonsolvable groups with maximal torus of dimension 1 ...... 57

18 April 5, 2018 60 18.1 Groups of semi-simple rank 1 ...... 60 18.2 Roots ...... 61

19 April 10, 2018 64 19.1 Examples of root systems ...... 64

20 April 12, 2018 66 20.1 Structure of reductive groups ...... 66

2 21 April 17, 2018 68 21.1 Structure of reductive groups II ...... 68

22 April 19, 2018 72 22.1 ...... 72 22.2 Simple roots ...... 73

23 April 24, 2018 75 23.1 Classification of parabolics ...... 75 23.2 Length ...... 76

3 Math 224 Notes 4

1 January 23, 2018

This is a course on linear algebraic groups. We are going to try and follow Springer’s Linear algebraic groups. We will be working over a field k, but we are not going to restrict to varieties.

1.1 Linear algebraic groups

Definition 1.1. An algebraic is a in the Schft of schemes of finite type. Then for every S, Hom(S, G) is a group, and is contravariant in S. Definition 1.2. A linear is a algebraic group that is affine.

Example 1.3. For V a , we define GL(V ) in the following way:

Hom(S, GL(V )) = group of automorphisms of V ⊗ OS as an OS-.

This is something you have actually seen before. Let W be a finite-dimensional vector space, and suppose we want to find W such that

Hom(S, W) = W ⊗ OS.

Then W is actually Spec Sym(W ∗). This is because

∗ ∗ ∗ Hom(S, Spec Sym(W )) = HomComAlg(Sym(W ),OS) = HomVect(W ,OS) = OS⊗W.

Let us generalize this and consider

Hom(S, M(V )) = of all OS-linear of V ⊗ oS.

Then M(V ) is obtained from the vector space End(V ) in the previous construc- tion.

Example 1.4. For V = k, we have GL(k) = Gm. Here, we have

Hom(S, Gm) = OS.

Example 1.5. The group Ga is going to be Hom(S, Ga) = OS under addition. A map between algebraic groups is a of functors. Now let me try to explain what a is. For each V , we can define det V = Vtop(V ). We want to construct a map M(V ) → M(det(V )). To construct this, we need to find a map between . If α acts on V ⊗ OS, Vtop its determinant det(α) acts on O (V ⊗ OS). This is the determinant map. If ∼S V is of dimension 1, we have OS = EndOS (V ). So we have the map

1 det : M(V ) → M(det(V )) =∼ M(k) =∼ A . Math 224 Notes 5

So we have a map GL(V ) → Gm, and SL(V ) is the of this map. This is a scheme because can take the fiber product. Now fix a flag V = Vn ⊇ Vn−1 ⊇ · · · ⊇ V0 = 0. Now we want to define a subgroup P ⊆ GL(V ) by

Hom(S, P ) ⊆ Hom(S, GL(V )) consisting of endomorphisms that send each Vi ⊗ OS to itself. This is going to be the intersection of some MP and GL(V ) in M(V ). We want to show that P is a closed subscheme of M(V ). To do this, it suffices to show that MP is closed in M(V ). Each Vi ⊗ OS preserved by α means that

α Vi ⊗ Os → V ⊗ Os −→ V ⊗ OS → (V/Vi) ⊗ OS is zero. This is really Hom(Vi, V/Vi). So this MP fits in the fiber product

MP M(V )

Q ∗ i Hom(Vi, V/Vi).

Now we have a map

Y 1 → U → P → GL(Vi/Vi−1) → 1. i Q Here, the map P → i GL(Vi/Vi−1) is given in the following way. For each α : V ⊗ Os → V ⊗ OS, we have Vi ⊗ OS → Vi ⊗ OS so (Vi/Vi−1) ⊗ OS → (Vi/Vi−1) ⊗ OS. So we can consider its kernel. Take a V ⊗ V → k. We can also define a H ⊆ GL(V ) given by Hom(S, H) ⊆ Hom(S, GL(V )) consisting of α such that

(V ⊗ OS) ⊗OS (V ⊗ OS) OS

α⊗α

(V ⊗ OS) ⊗OS (V ⊗ OS) OS commute. Now let OG be the algebra of regular functions on G. This OG is going to be a commutative , with maps ∆ : OG → OG ⊗ OG and γ : OG → OG and  : OG → k. There is also a notion of an algebraic group acting on a scheme. This is going to be a G × X → X. This is going to be an action of Hom(S, G) on Hom(S, X) for each S that are compatible with maps on S. Example 1.6. Any G acts on itself on the left and on the right. Also GL(V ) acts on V for a finite-dimensional V . This is tautological, because Hom(S, GL(V )) is the automorphisms of V ⊗ OS and Hom(S, V) is V ⊗ OS. Math 224 Notes 6

The is defined as

Hom(S, P(V )) = {line bundles L on S with 0 → L → V ⊗OS with locally free cokernel}.

Example 1.7. GL(V ) acts on P(V ) because α is an automorphism of V ⊗ OS.

More generally, consider n = nk ≥ nk−1 ≥ · · · ≥ n1 ≥ n0 = 0 where dim V = n. We can define the flag variety as the following: Hom(S, Fl(V )nk,...,n0 ) is the collection of locally free sheaves 0 = M0 ⊆ M1 ⊆ · · · ⊆ Mk such that ∼ rk(Mi) = ni and an identification Mk = V ⊗ OS such that Mi/Mi−1 are locally free.

1.2 Representations of linear algebraic groups Definition 1.8. Let V be a vector space. A representation of G on V is, for every S, an action of Hom(S, G) on V ⊗ OS by automorphisms of OS-modules. Lemma 1.9. To define on V a structure of G-representations is equivalent to define on V a structure of an OG-comodule.

The idea is that V ⊗ OG is the functions on G with values in V . So a comodule structure V → V ⊗ OG is something we want to send v to g 7→ gv. Proof. Take S to be G, and the identity on Hom(G, G). Then there is a canonical automorphism α on V ⊗ OG. Then we can compose V → V ⊗ α OG −→ V ⊗ OG. For the other direction, we start with a comodule structure V → V ⊗ OG to a OG- α : V ⊗ OG → V ⊗ OG. This turns out to be an automorphism. Then for any test scheme S with S → G, restrict V ⊗ OG → V ⊗ OG to S. Corollary 1.10. If V is a G-representation, then for every finite-dimensional W ⊆ V , there exists a W ⊆ V 0 such that V 0 is finite-dimensional and preserved by the G-action. Proof. Given W , let V 0 be the minimal subspace in V such that im(W ) ⊆ 0 V ⊗ OG. We’ll continue next time. Math 224 Notes 7

2 January 25, 2018

We’ll continue talking about representations. Let G be an algebraic group. A representation of G by V is an action of Hom(S, G) on V ⊗ OS. For g ∈ Hom(S, G), the map V ⊗ OS → V ⊗ OS is just a map V → V ⊗ OS of vector spaces. So this should be send an element v ∈ V to a S → V . So we ∼ had an equivalence Rep(G) = OG−coMod.

2.1 Embedding of linear algebraic groups Proposition 2.1. Let V ∈ Rep(G), then for any finite-dimensional W ⊆ V there exists a W ⊆ V 0 ⊆ V where V 0 is finite-dimensional and a subrepresenta- tion.

0 Proof. We have a coaction V → V ⊗OG and let V be the minimal subspace in V 0 0 such that the co-action of W is contained in V ⊗ OG. So given a W ⊆ V ⊗ OG, 0 0 0 what is the minimal V such that W ⊆ V ⊗ OG? This can be computed by the span of (id ⊗ξ)(W ) for ξ ∈ U ∗. Now our claim is that V 0 is a subrepresentation. We need to show that 0 0 the coaction of V is contained in V ⊗ OG. By definition, we need to show 0 that W → V ⊗ OG → V ⊗ OG ⊗ OG → V always lies in V . This is clear by interpreting the map V ⊗ OG → V ⊗ OG ⊗ OG as the comultiplication on OG. Corollary 2.2. Any algebraic group G has a closed embedding to GL(V ) for some finite-dimensional V .

Proof. We know that OG is a representation. Because G is finite type, there exists a W ⊆ OG such that Sym(W ) → OG, and so we can replace W with a V ⊆ OG that is a representation. From this we get a map G → GL(V ). We would like to show that this is a closed embedding, and it suffices to show that

G,→ GL(V ) ,→ M(V ) is a closed embedding, where GL(V ) ,→ M(V ) is an open embedding. Now M(V ) = M(V ) is the variety associated to the vector space V ⊗ V ∗. ∗ I’m trying to show that Sym((V ⊗V )) → OG is surjective. But we started with ∗ a surjection Sym(V )  OG. But how is V → OG related to V ⊗ V → OG? There is a natural covector V → OG → k by evaluation on 1. It turns out that the original V → OG is

id ⊗ ev1 ∗ V −−−−−→ V ⊗ V → OG.

∗ So Sym(V ⊗ V ) → OG is surjective. Corollary 2.3. If G acts on X and X is affine, there exists a closed embedding of X,→ V for V a finite-dimensional representation on which G acts. Math 224 Notes 8

Proof. We have G × X → X which becomes OX → OX ⊗ OG. Let W ⊆ OX be a subspace that generates it as an algebra. We can replace W by V that is a subrepresentation. We have a surjection Sym(V )  OX , or a closed embedding X,→ V. ∗ Proposition 2.4. HomRep(G)(V,OG) = V .

ev1 Proof. The map in one direction is V → OG −−→ k. For the other direction, a ∗ ∗ representation is given by V ⊗ V → OG. So any element of V gives a map V → OG.

∗ Actually, the map V ⊗ V → OG is a map of G × G-representations.

2.2 The regular representation We would now like to construct the regular representations from the represen- tations. A map between representations ϕ : V → W will give a ∗ V ⊗ V OG

id ⊗ϕ∗ ϕ⊗id V ⊗ W ∗ W ⊗ W ∗ For a category C, we define the twisted arrow category TwArr(C) consist- ing of objects C0 → C1 and

C0 C1

0 0 C0 C1. If we have two functors F,G : C → D, then we can look at the natural trans- formations from F to G. These are collections of αC ∈ Hom(F (C),G(C)) satisfying some compatibility conditions. Now my claim is that Nat(F,G) = lim H ←− TwArr(C) where H is H(ϕ : C0 → C1) = HomD(F (C1),G(C0)). This is because a is just a family satisfying some compatibility conditions. Definition 2.5. Let F : C → and G : Cop → Set be the functors. We can define a H : TwArr(C) → Set defined by

H(α : C0 → C1) = F (C0) × G(C1). Then we define the coend as coend(F,G) = lim H. −→ TwArr(C) Math 224 Notes 9

So let C = Rep(G)fd and D = Rep(G × G). Then we define

H(α : V → W ) = V ⊗ W ∗.

Then the claim we want to show is that

lim H = O . −→ G TwArr(Rep(G))fd Math 224 Notes 10

3 February 1, 2018

Last time I sold the product of thinking of representations as comodules.

3.1 Examples of groups and classification of representa- tions

Let’s consider the group Ga. This is A1 with addition, so it is given by OG = k[t] with k[t] → k[t] ⊗ k[t]; t 7→ t ⊗ 1 + 1 ⊗ t. I would like to give a complete description of representations. These are going to be V → V ⊗ k[t], and so we can dualize it to k[s] ⊗ V → V.

How are we doing this? In general, if we have V1 → V2 ⊗ W then we can dualize ∗ it to W ⊗ V1 → V2. This will have the condition that any v1 ∈ V1 is killed by a sufficiently large quotient of W ∗. So this k[s] ⊗ V → V should be a map such that any v ∈ V is killed by a sufficiently large power of s. Let Γ be a finite group. This is an given by M OΓ = kδγ , γ∈Γ with comultiplication X ∆(δγ ) = δγ1 ⊗ δγ2 .

γ1γ2=γ

So we can truly dualize V → V ⊗ OΓ, and get k[Γ] ⊗ V → V . There are no more representations other than the normal ones. −1 Now let G = Gm. Here, OG = k[t, t ] and ∆(t) = t ⊗ t. There are n representations Gm → Gm given by x 7→ x , which we can think of as nth characters.

∼ (n) Proposition 3.1. Every representation of Gm is a V = ⊕nV where V (n) is a direct sum of copies of the nth characters. Proof. Look at the map V → V ⊗ k[t, t−1] and write it as

X n v 7→ vn ⊗ t . n Now let us apply todo Math 224 Notes 11

Let us look at Λ =∼ Zn a . We have ∼ −1 −1 k[Λ] = k[t1, t1 , . . . , tn, tn ], and then the torus ∼ T = Spec k[Λ] = Gm × · · · × Gm. It can be checked that ∆(λ) = λ ⊗ λ. Proposition 3.2. Rep(T ) is equivalent to VectΛ. You can do the exact same thing. Note that this discussion is contravariant in Λ. ∼ Proposition 3.3. There is an Hom(Λ1, Λ2) = HomAlgGrp(T2,T1).

Proof. By definition, Hom(Λ1, Λ2) isomorphic to Hom(k[Λ1], k[Λ2]) where Hom is as Hopf algebras. Given any λ1 ∈ Λ1, we want to show that phi(λ1) is primitive. This is because primitive is equivalent to the statement that ∆(λ) = λ ⊗ λ. So we get map Λ1 → Λ2.

3.2 Tannaka duality Representations is a category. That is, you can tensor representations. For a test scheme S, there is an action of Hom(S, G) on V ⊗ OS. Then if I have V1 and V2, we can take

V1 ⊗ V2 ⊗ OS = (V1 ⊗ OS) ⊗OS (V2 ⊗ OS).

At the level of comodules, we can describe this in the following way. If A is a Hopf algebra, we can define the of V1 → V1 ⊗A and V2 → V2 ⊗A then

V1 ⊗ V2 → V1 ⊗ A ⊗ V2 ⊗ A → V1 ⊗ V2 ⊗ A ⊗ A → V1 ⊗ V2 ⊗ A.

Here, we’re using that A as an algebra is commutative, when we’re saying that ∼ V1 ⊗ V2 = V2 ⊗ V1. ∼ Corollary 3.4. (i) We have an isomorphism Λ = Hom(T, Gm). ∨ (ii) We have an isomorphism Λ = Hom(Λ, Z) = Hom(Gm,T ). Now here is Tannaka’s theorem. There is a forgetful functor

F : Rep(G) → Vect which is a tensor functor, which means that it sends tensor products to tensor products. Sometimes I will write F (V ) = V . Tannaka’s theorem says that we can recover G by this functor. Math 224 Notes 12

Given any element g ∈ G, for every V the element g acts on F (V ) as αV : F (V ) → F (V ). So Tannaka’s theorem is going to say that the auto- morphism group of this functor is going to be G. But it is going to be a tensor automorphism. In particular, we have

αV1⊗V2 F (V1 ⊗ V2) F (V1 ⊗ V2)

∼= ∼=

αV1 ⊗αV2 F (V1) ⊗ F (V2) F (V1) ⊗ F (V2).

I would like to recover S-points of G. So we can consider the functor

fd fd FS : Rep (G) → Coh(S); V 7→ V ⊗ OS. There is an obvious map

Hom(S, G) → Auttensor(FS) with the same imposed condition.

Theorem 3.5 (Tannaka). This map Hom(S, G) → Auttensor(FS) is an isomor- phism. Proof. We are going to construct the map in the other direction. Every tensor fd automorphism of FS canonically extends to a tensor automorphism of FS : Rep(G) → QCoh(S). Now we can consider maps A ⊗ A → A for A ∈ Rep(G). So it makes sense to talk about commutative algebras in Rep(G). If F is a tensor category, every Auttensor(FS) should be respect the algebra structure. Now let us take A = OG. Then FS(A) = OS ⊗ OG, and we obtain an automorphism of OS ⊗ OG as an OS-algebra. So we get an automorphism α : S × G → S × G over S. We need to check that this is given by left multiplication, and it suffices to show that the diagram

S × G × G α⊗idG S × G × G

id ×m id ×m S × G α S × G commutes. To show this, we need to go back to the functor. This comes from the commutative diagram

αOG⊗OG F (OG ⊗ OG) F (OG ⊗ OG)

∆ ∆

αOG F (OG) F (OG) and taking spectra. Math 224 Notes 13

What this also tells us is that if g1, ∈ Auttensor(FS) acts in the same way on OG, then it acts in the same way on each V . Here is the reason. Every V can be embedded in V ⊗ OG, which is a bunch of copies of OG. Here is why. ∗ We saw from last time that we have a map V ⊗ V → OG as G × G- ∗ representation. Then we can move V to the other side and get V → OG ⊗ V and an OG-linear map V ⊗ OG → V ⊗ OG. ˜ Considering OG ⊗ V as functions on G with values in V , this is f 7→ f(g) = −1 g f(g). Now we consider a G × G-action on the domain V ⊗ OG, given by the first G acting on V ⊗ OG and the second G acting on OG. Consider the G × G- action on the target V ⊗ OG, given by the first G acting on the second OG and the second G acting on V ⊗ G. Then the map V ⊗ OG → V ⊗ OG intertwines the G × G-action. This is a wonderful property of the regular representation. For the first G, you lose the representation, and for the second G, you gain this information. Now if we ignore the second G-action, we get an intertwining

V ⊗ OG → V ⊗ OG that is an isomorphism of G-representations, where G acts on the left side diagonally. This will prove that the two construction of the Tannaka duality are inverses. Math 224 Notes 14

4 February 6, 2018

Let G be an algebraic group.

4.1 Connected component Theorem 4.1. There exists an G0 ⊆ G such that

(a) it is a subgroup, (b) it is a connected component, (c) it is normal with finite index. Proof. Suppose I have to components X,Y containing e. Then we can look at the closure XY . Then we get X ⊆ XY and Y ⊆ XY and so X = XY = Y . Likewise, if e ∈ X then e ∈ X−1 and so X = X−1. This shows (a). Now consider the decomposition a gG0 = G g∈G as a scheme for g ∈ G(k) (if k is algebraically closed). The claim is that all 0 0 0 g ∈ G are connected. This can be seen by checking g1G ∩ g2G 6= 0 implies 0 0 g1G = g2G . To see that it is normal, we look at the conjugation on G. It sends connected components to connected components, and the identity to the identity. So it sends G0 to G0. It being finite index means that Hom(S, G0) is of finite index in Hom(S, G), which is because G is quasi-compact.

The groups Ga, Gm, GLn, Bn, Un, SLn are all connected. SLn is because

−1 −1 GLn × GLn → SLn;(g1, g2) 7→ g1g2g1 g2 is surjective at the level of k-points. The source is connected, and so the target should be connected. But On is not connected for char k 6= 2 because

On ±1

det GLn Gm.

We also have that SOn and Sp2n are connected. To show this, we use the fact that these groups are generated by connected subgroups.

Proposition 4.2. Let Yi by irreducible with maps φi : Yi → G. Assume that e ∈ im(φi). Let H be the (closed) subgroup of G generated by them. Then H is irreducible. Math 224 Notes 15

Proof. Let I = {1, . . . , n}, and consider the multiplication (with inversions) map

Y1 × · · · × Yn × Y1 × · · · × Yn × · · · × Yn → G.

Let Zm be the closure of the image. Then Zm ⊆ Zm+1 ⊆ G are irreducible varieties, and because G has finite , it stabilizes. Then Zm·Zm ⊆ −1 Zm+m and also Zm ⊂ Zm, so Zm is an irreducible subgroup.

4.2 Image and surjectivity If Φ : G0 → G is a homomorphism, we can look at its image G0 → im(Φ). This is a subgroup, as you can see by the functor of points. Proposition 4.3. The map G0 → im(Φ) is surjective on k-points. Proof. In general, if we have a dominant morphism X → Y of reduced varieties, there exists some open dense Y 0 ⊆ Y such that Y 0(k) ⊆ im(X(k)). (This is some kind of Chevalley.) Let’s use the following lemma on U = Y 0 and V = im(G0(k)). Then we get

G(k) = U(k) · V ⊆ im(G0(k)) · im(G0(k)) ⊆ im(G0(k)).

This finishes the proof.

Lemma 4.4. If U ⊆ G is open and V ⊆ G(k) is dense, then U · V = G. Proof. For g ∈ G(k), we have

g−1V ∩ U(k) 6= ∅ by definition of density. This means that g ∈ V · U(k).

Let G act on a variety X. Consider the action map G → X and look at the closure Gx. Definition 4.5. An action of G on X is transitive if G(k) acts transitively on X(k).

This is not good if X is not reduced, but is reasonable if X is reduced. Proposition 4.6. Let G be reduced. Then Gx contains a unique open G-stable subvariety with a transitive action that contains the point x. Proof. The map G → Gx is dominant, and therefore there exists an open Y ◦ ⊆ Gx such that Y ◦(k) ⊆ im(G(k)). Define the desired open subvariety by Z = S ◦ g gY . This is transitive and G-stable by construction. Suppose x∈ / Z(k), so that x∈ / Gx−Z. But the right hand side is closed and G-stable, so Gx ⊆ Gx−Z, which is a contradiction. Math 224 Notes 16

Proposition 4.7. Let X be reduced and G(k) act on X(k) act transitively. Then for all x ∈ X(k), G → X; g 7→ gx is faithfully flat.

Proof. In general, if I have a dominant map ϕ : Z1 → Z2 with Z2 reduced, ◦ there exists a nonempty open Z2 ⊆ Z2 over which ϕ is faithfully flat. Applying this, you see that there exists a nonempty open X◦ ⊆ X over which G → X is faithfully flat. Then the same is true for gX◦ for all g ∈ G(k), and then S ◦ X = g gX .

Using a similar idea, you can show that if the kernel ker(G1 → G2) is finite, then the morphism G1 → G2 is finite. Quasi-finite does not imply finite, but you can take a open on which it is finite and then translate it around.

Example 4.8. Let X be a point and G be any non-reduced algebra group. Then G → X is not smooth. But if G is any non-reduced algebraic group then the kernel of Frob : G → G is never reduced. For instance, if G = Ga then ker(Frob) = Spec k[t]/tp. Math 224 Notes 17

5 February 8, 2018

We were talking about the Frobenius X → X.

5.1 Fun with Frobenius

If X = An, the preimage of the Frobenius would be q q k[t1, . . . , tn]/t1 ··· tn. −1 n So if X is smooth of dimension n then Frobq (x) is a scheme of length q . So the inverse image is non-reduced unless the point is isolated. p Consider X = Ga. If we look at ker(Frobp, then it looks like Spec(k[t]/t ). ∗ p Let us look at its representations. Its dual is OG = k[x]/x , and so these are endomorphisms whose p-th power is zero. Also consider X = Gm, and this is −1 p ker(Frobp) = µp = Spec(k[t, t ]/t = 1). ∗ Again, let us look at representations of G. These are modules for the dual OG. ∗ i There is an element called ξ = t∂t ∈ OG that maps t 7→ i. (The motivation is taking the Lie t∂t.) The claims is that ξp = ξ. This is becasue ξp(ti) = hξ ⊗ · · · ⊗ ξ, ti ⊗ · · · ⊗ tii = ip = i ∗ p for any i. Then you can show that OG = k[ξ]/ξ − ξ. Now representations Rep(G) are vector spaces graded by Z/pZ. You can see this in the following way: we have a short exact sequence

1 → G → Gm → Gm → 1, where Gm → Gm is given by taking the pth power. Representations of Gm are given by VectZ, so representations pull back by a factor-p scaling of indices.

5.2 More on orbits We defined a transitive action of a reduced G on X to be an action such that X is reduced and G(k) acts transitively on X(k). Proposition 5.1. If G acts on X transitively, then for all x ∈ X, the action map G → X is faithfully flat. So we can deduce the strongest notion from the weakest notion. Theorem 5.2. Let G reduced act on X. The for all x ∈ X(k), there exists a unique locally closed subset Y ⊆ X such that x ∈ Y and the action of G on Y is transitive. This Y is called the orbit of x in X. Proposition 5.3. Let G act on X. There exist closed orbits. Proof. Take an orbit of smallest possible dimension Y . Call it Y . If Y is not Y¯ , there is going to a point in Y¯ outside Y , and then consider the orbit of anything in Z. Math 224 Notes 18

5.3 Jordan decomposition Let V be a vector space and T be a . Theorem 5.4 (Jordan decomposition).

(1) There exists a decomposition T = Tn + Tss such that they commute.

(2) If T is an automorphism, there exists a decomposition T = Tu · Tss such that TuTss = TssTu. Moreover,

(a) there exist pu, pss ∈ k[t] such that Tu = pu(T ) and Tss = pss(T ),

(b) [S, T ] = 0 implies [S, Tu] = [S, Ts] = 0, 0 0 0 0 0 0 (c) if T = Tu + Tss and [Tu,Tss] = 0 then Tu = Tu and Tss = Tss,

(d) if S : V1 → V2 intertwines T1 and T2 so that S ◦ T1 = T2 ◦ S, then ST1,u = T2,uS and ST1,ss = T1,ssS,

(e) If T1 acts on V1 and T2 acts on V2 so that T1 ⊗ T2 acts on V1 ⊗ V2, then (T1 ⊗ T2)u = T1,u ⊗ T2,u and (T1 ⊗ T2)ss = T1,ss ⊗ T2,ss. L (λ) Proof. By Jordan decomposition, we know that V = λ V so that (T − λ id)|V (λ) are . Then we can set Tss|V (λ) = λ id. Similarly, we can define Tu and Tss. (λ) (a) Let πλ be the projector of V onto V . By the Chinese remainder P theorem, there exists a pλ such that πλ = pλ(T ). Then we can set λ λpλ(T ). Then the others can be deduced. The theorem is false if the field is perfect. Suppose char(k) = p and not perfect, so that there exists an element a ∈ k that does not admit a pth root. Consider the vector space

V = k0 = k[t]/(tp − a), with T multiplication by t. If we tensor with the ,

p p V ⊗k k = k[t]/(t − a) = k[t]/(t − b) .

In this case, Tn = t − b and Tss = b, and this cannot be done in k. Let G be reduced and g ∈ G(k). Definition 5.5. g is semi-simple/unipotent if the following equivalent con- ditions hold:

(1) g acts on OG unipotently on the left (this is on finite-dimensional sub- spaces)

(2) g acts on OG unipotently on the right (3) g acts unipotently on any finite-dimensional representation. Lemma 5.6. If g ∈ G is unipotent/semi-simple and γ : G → G0 is a homomor- phism, then ϕ(g) is unipotent/semi-simple. Math 224 Notes 19

Proof. Given any G0-action on V , we can consider it as a G-action.

Theorem 5.7. For every g there exists a g = gu · gss with gssgu = gugss with gu unipotent and gss semi-simple. Furthermore, (a) the decomposition survives homomorphisms G → G0, (b) the decomposition is unique. Proof. This can be done using Tanaka, but we can do this in an elementary way. Consider the decomposition of right multiplication r(g) = r(g)ur(g)ss in End(OG). You can do this by the locally finite action of G on OG. If we look at OG, we have OG ⊗ OG → Og, and the action on the left with rg ⊗ rg is the action on the right with rg. That is, we have (r(g) ⊗ r(g))u = r(g)u ⊗ r(g)u. But these are also algebra homomorphisms. But to show that these actually come from right multiplication, it suffices to show that they commute with left multiplication. This can be checked. Definition 5.8. A group is called unipotent if every element in it is unipotent.

Example 5.9. Ga is unipotent. Take an element a ∈ Ga, and let’s see how it acts on the regular representation k[t]. It acts as t 7→ t − a, and on the span of {1, . . . , tn}, it is an upper triangular .

Theorem 5.10. Take G = GLn. A point g is semi-simple/unipotent in this algebraic group sense if and only if it is semi-simple/unipotent as an automor- phism of kn.

We’ll prove this later. Lemma 5.11. If G is unipotent, and G0 is a subgroup of G, then G0 is unipotent.

0 Proof. For g ∈ G ⊆ G, we get OG  OG0 and it preserves ideals by actions of g.

Theorem 5.12. Let g ∈ G(k) be of finite . Then it is finite order. (As long as the order is coprime to the characteristic of k.) Proof. Assume T m = id. Then tm − 1 has distinct roots.

For example, the subgroup Z/pZ ⊆ Ga is semi-simple if and only if charac- teristic of k is not p. Math 224 Notes 20

6 February 13, 2018

Let G be an algebraic group. Definition 6.1. A g ∈ G(k) is semi-simple/unipotent if its action on any finite-dimensional representation of G is semi-simple/unipotent. This is equiv- alent to its action on OG being semi-simple/unipotent.

We showed that any homomorphism φ : G1 → G2 sends semi-simple/unipotent elements to semi-simple/unipotent elements.

Example 6.2. Any element in Gm is semi-simple. Any element in Ga is unipo- tent.

Theorem 6.3. An element g ∈ GLn(k) is semi-simple/unipotent if and only if its action on kn is semi-simple/unipotent. Proof. Assume that g acts semi-simple/unipotently on kn = V . If g is semi- simple, we can write M i V = V , g|V i = λi. i Q Then we can consider g ∈ T (k) where T,→ GLn is the torus Gm. Now assume that g is nilpotent in GLn(k). We want to define ϕ : Ga → GLn with n g = ϕ(1). But defining ϕ is the same as defining a representation of Ga on k , and this is the same as the datum of a unipotent on kn. There was a fire alarm. But we moved to the next .

6.1 Representations of unipotent groups Proposition 6.4. (i) If G is a finite group and (|G|, p) = 1, then any element of G(k) is semi-simple. (ii) If |G| = pk, then any element of G(k) is unipotent. Theorem 6.5 (Jordan decomposition). For every g ∈ G(k), there exists a unique decomposition g = gsgu where gs, gu are semi-simple and unipotent, and gsgu = gugs. Theorem 6.6. Let G be unipotent and let V be an irreducible representation. Then V is trivial.

If G acts on V , then G(k) acts on V . If V1 is a subrepresentation in the first sense, then V1 is a subrepresentation in the second sense. But the other direction is true in the reduced case as well.

Lemma 6.7. If V1 ⊆ V is preserved by G(k), then it is a subrepresentation. Proof. We know that G(k) is Zariski dense in G. So

G × V1 → G × V → G × V/V1 todo Math 224 Notes 21

Let V be irreducible as a representation of G(k).

Theorem 6.8 (Burnside). Let Tα be a collection of endomorphisms of V such that there does not exist V1, 0 ( V1 ( V such that Tα|V1 ⊆ V1. Then the span of Tα and their compositions is End(V ). So for instance, G(k) span End(V ).

Corollary 6.9. Let G be unipotent and let V be a finite-dimensional represen- tation. Then there exists a flag

0 = V0 ⊆ V1 ⊆ · · · ⊆ Vk = V such that Vi are subrepresentations such that Vi/Vi−1 is trivial. Proof. By induction.

Corollary 6.10. Let ϕ : G → GLn be a homomorphism. Then there exists an element g ∈ GLn(k) such that conjugation by g gives

G → U(n).

Proof. g is what takes the standard flag to the flag of the previous corollary. Corollary 6.11. G has a filtration by normal group subschemes

1 = G0 ⊆ G1 ⊆ · · · ⊆ Gn = G such that Gi/Gi−1 is a subgroup of Ga. This is not very informative in characteristic p. Let G0 be the Zariski closure −1 −1 of the group generated by G × G → G with (g1, g2) 7→ g1g2g1 g2 .

Corollary 6.12. If G is unipotent, then G0 ( G. Proof. This follows from the corresponding property of U(n). Suppose unipotent G acts on an affine variety X. We have a notion of the orbit Gx ⊆ Gx ⊆ X.

Proposition 6.13. Let G be unipotent. Then Gx is closed.

1 1 Example 6.14. Consider Ga acting on P . Then the orbit of 0 is A , which is not closed.

Proof. Let I ⊆ OGx be the ideal of Gx − Gx. Then I is a representation of G, G and every representation has a nonzero 0 6= f ∈ I . Then f|Gx is constant, and because Gx is dense in Gx, we have f|Gx is constant. But f|Gx−Gx = 0, so f = 0. Math 224 Notes 22

6.2 Groups of dimension 1 Theorem 6.15. Let G be a connected algebraic group of dimension 1. Then either G = Ga or G = Gm. Proof. Because it is smooth, there exists a canonical curve G such that G is an open subset. The claim I want to make is that the action of G on itself by left translations extends to an action of G on G. For now let’s just use it to prove the theorem. Consider S = G − G. This consists of finitely many points and nonempty because G is not projective. G(k) acts on G because it is functorial in the na¨ıve sense, and it preserves S because G is connected. Now I claim that if a proper curve X admits an infinite group of auto- morphisms that preserve a nonempty subset S ⊆ X, then X = P1. The only other case possible is an , but this is not possible because the only automorphisms are translations up to a finite group. Now we show that G is commutative. We’ll get back to this later.

Theorem 6.16. Let G be a commutative algebraic group. Then G → Gu × Gs where Gu is unipotent and all elements in Gs are semi-simple. Math 224 Notes 23

7 February 15, 2018

Theorem 7.1. If G is unipotent and V is an irreducible representation, then V =∼ k. Corollary 7.2. If G is unipotent, every representation admits a flag such that the action of G on Vi/Vi−1 is trivial. Corollary 7.3. If G is unipotent, it embeds into U(n).

Corollary 7.4. If G is unipotent, there exist 1 = G0 ⊆ G1 ⊆ · · · ⊆ Gn such 0 that Gi is reduced and Gi ⊆ Gi−1.

Proof. Take Gi = (G ∩ U(n)i)red. ∼ Theorem 7.5. If G is connected and reduced of dimension 1, then G = Gm or ∼ G = Ga.

Proof. We first prove that any G as above is commutative. Let ϕg = Adg for g ∈ G(k). We want to show that ϕg = id. We will do this later. So G is commutative. We want to show that we can write G = Gu×Gs where Gu is unipotent and all elements of Gs are semi-simple. First note that the set of unipotent elements is Zariski-closed because on every representation V , it is dim V {T ∈ End(V ):(T − id) = 0}. Also, if g1, g2 are unipotent and commute, then g1g2 is unipotent. So Gu is a closed subgroup of G whose k-points are exactly the unipotent elements. Likewise, we have G,→ GL(V ) and because G commutes, we can diagonalize all semi-simple elements of G(k) so that

∼ M V = Vi. i

Then we can take Gs = G ∩ T where T is the torus that acts on each Vi. Then every semi-simple element of G will belong to Gs. We naturally have a map Gs × Gu → G. We will construct a map ϕ : G → Gs that at the level of k-points exacts the semi-simple part of the Jordan decomposition. To construct this, we note that with the diagonalization V = L i Vi, we have G → Bn  Tn where Bn is the upper-triangular matrices and Bn → Tn is the projection. At the level of k-points, this map extracts the semi-simple part, and also it lands in Gs. So G = Gs × Gu, and so either Gu = G or Gs = G.

7.1 Commutative groups with semi-simple points Theorem 7.6. Let G be reduced. The following are equivalent: (1) G is commutative and all of points are semi-simple. Math 224 Notes 24

(2) Every representation splits as a direct sum of 1-dimensional representa- tions.

(3) There exists an embedding G,→ Tn.

(4) Characters span OG. (5) There exists a finite Λ such that (|Λtors|, char k) = 1 and ∼ OG = k[Λ].

Definition 7.7. A character is a homomorphism G → Gm, which is the −1 × same as k[t, t ] → OG, which is the same as an element f ∈ OG such that ∆(f) = f ⊗ f.

Here is an application. Corollary 7.8. If G is a commutative connected group of dimension 1 all of ∼ whose points are semi-simple, then G = Gm. Proof. Let us write G = Spec k[Λ]. Then Λ = Λtors⊕Zn and so G = Spec k[Λtors]× ×n (Gm) , and the reducedness, connectivity and dimension conditions force G = Gm. Now let us prove the theorem. Proof. (1) ⇒ (2) is just simultaneous diagonalization. (2) ⇒ (3) follows from the embedding G,→ GL(V ). For (3) ⇒ (4), We have OTn  OG, and OTn = −1 −1 k[t1, t1 , . . . , tn, tn ]. So characters span OG. For (4) ⇒ (5) we define Λ to be the group formed by the characters of G. Then we have a surjection

OG  k[Λ] of Hopf algebras. We need to show that the characters are linearly indepen- P dent in OG. Suppose i aifi = f1 with minimal cardinality. Then applying comultiplication gives

X X aifi ⊗ fi = f1 ⊗ f1 = aiajfi ⊗ fj. i i,j

2 Because fi are linearly independent, aiaj = 0 and ai = ai. For (5) ⇒ (1), we just use tors G = Spec k[Λ ] × Gm × · · · × Gm. Then it suffices to show that Spec k[Λtors] have semi-simple points, but this is something we have seen. It is going to be a finite product of µn.

Now let us show that if G is a unipotent group of dimension 1 then G = Ga. We showed that 1 G = X,→ X =∼ P . Also, X − X = S 6= ∅ is finite and nonempty. Math 224 Notes 25

∼ 1 Proposition 7.9. (1) PGL2 = Aut(P ), i.e., Hom(S, PGL2) is isomorphic to Aut(S × P1/S). (2) The action of G on X by left multiplication extends to an action of G on X. If we have the proposition, we obtain a map G → PGL(2). Then G → Perm(S), and because G is connected, G preserves elements of S. Choose one element S, and call it ∞. The subgroup of PGL2 that preserves ∞ is Gm n Ga:

1 Ga Ga n Ga Gm 1

G

But the composite G → Gm is trivial, because G is unipotent and the only unipotent element of Gm is 1. (We will prove that G → PGL2 is a closed embedding.) So we get G,→ Ga, and because both are 1-dimensional, G → Ga is an isomorphism. Now let us show that G → PGL2 is a closed embedding. We know that ϕ : G1 → G2 being a homomorphism with finite kernel implies that ϕ is finite. A finite map is a closed embedding if it is injective on S-points. So it is enough to show that G → PGL2 is injective at the level of S-points, and this is because any nontrivial automorphism S ×X/S induces a nontrivial automorphism S ×X/S. To prove this proposition, we are going to use the following general fact. For X a proper scheme and Y a quasi-projective scheme, we want to have

Hom(S, Map(X,Y )) = Hom(S × X,Y ).

Theorem 7.10. Map(X,Y ) is representable.

Proof. If Y1 ,→ Y2 is a closed embedding, Map(X,Y1) → Map(X,Y2) is a closed embedding, and likewise for open embeddings. So it suffices to show this for Y = P(V ). We have S × X → P(Y ) corresponding to a line bundle L on S × X such ∗ −1 that L ,→ V ⊗ OS×X with flat quotient. This means V ⊗ OS×X  L . Recall that if X is proper and E is a on X, then the functor that sends S to the to the set of all quotient coherent sheaves

E ⊗ OS  F on X ×S that are S-flat is representable. This representing scheme is called the ∗ of E . Now take E = V ⊗ OS. In our functor, F has to be a line bundle on X × S, and this is an open condition. So it is representable.

We are actually interested in Aut(P1), not E = Map(P1, P1). Then we have to look at the image of the inverse image of the identity of E × E → E. Math 224 Notes 26

8 February 20, 2018

We are still trying to prove that if G is connected, reduced, 1-dimensional, then G = Gm or G = Ga. Theorem 8.1. Such G is commutative. Proof. If there exists a g ∈ G(k) of infinite order, then G is commutative. This n k is because Adg = id coincides on g . Then g is in the , and also g is in the center, and this is also Zariski dense. Now let us assume that there is no element of infinite order after any field extension. Now I claim that the order is bounded, i.e., gN = 1 for all g ∈ G(k). To show this, let us consider the map G → G given by g 7→ gN , and look at the kernel GN . If we assume that the order is unbounded, all subschemes GN are proper. Let K be the field of rational functions on G. Then the η can be considered as a closed point in the base change OG ⊗k K. Now if Z ⊆ G is any proper subvariety, then η ∈ G(K) is not in Z(K). So η∈ / GN (K) for all N, and this shows that η ∈ G(K) is of infinite order. (Here, commutativity can be checked after a base change, because it’s a variety.) Now take a embedding φ : G,→ GL(n). We know that φ(g)N = id. There- fore the characteristic of φ(g) divides (1 − tN )n. Therefore there exist finitely many options for the characteristic polynomial ch(φ(g)). Now we can consider the characteristic polynomial as

φ ch n G −→ GL(n) −→ A . But we know that the image lies in finitely many points, and G is connected. So it should all map to 1. Now matrices that have the characteristic polynomial (x − 1)n is unipotent by definition. So φ(G) consists of unipotent elements, and thus G is unipotent. This shows that G0 ( G, and also G0 is connected. So G0 = {e} and G is commutative.

8.1 Continuous family of automorphisms

Then we showed that if G is a group then G = Gs × Gu where Gs have all elements semi-simple and Gu is unipotent. Then we had the following theorem. Theorem 8.2. The following are equivalent: (1) G is commutative and the elements are semi-simple. (2) Every representation splits as a direct sum of 1-dimensional representa- tions.

(3) G ⊆ Gm × · · · × Gm.

(4) OG is spanned by characters. (5) G = Spec k[Λ] where Λ is a finitely generated abelian group whose torsion is coprime to p. Math 224 Notes 27

Now it can be shown that Spec k[Λ1] → Spec k[Λ2] correspond to maps Λ2 → Λ1. (We showed this for Λ torsion-free, and the torsion case can be proved in the same way.) This is very different from the unipotent case. There is a continuous family of automorphisms Ga → Ga, for instance, given by

Gm × Ga → Ga. This is actually the universal example.

Proposition 8.3. Let φ : S × G1 → G2 be a morphism where G1 and G2 are diagonalizable. If S is connected, then φ factors through S × G1 → G1 → G2.

Proof. It suffices to show that G2 = Gm, because G2 embeds into some torus. Let us denote G1 = G. Then we have a map

−1 S × G → Gm = Spec k[t, t ] P where t maps to i fi ⊗ χi, where χi are characters on G. The condition that this is a is

X  X  X fi ⊗ χi ⊗OS fj ⊗ χj = fk ⊗ χk ⊗ χk. i i k Then X X fifj ⊗ χi ⊗ χj = fk ⊗ χk ⊗ χk. i,j k

This shows that only fi is 1 and all other fi are 0, because S connected means that there are only two idempotents 0 and 1.

Now comes the fun part. We looked at G = P1. Then P1 − G = S is a finite ∼ set. We are assuming that G is unipotent and trying to show that G = Ga. 1 Theorem 8.4. (1) There is an action PGL2 → Aut(P ). (2) There exist a homomorphism G → Aut(P1,S) such that the action re- stricts to the left multiplication on G. Here, if X and Y are algebraic varieties, then we can define a prestack

Hom(S, Map(X,Y )) = Hom(S × X,Y ).

Theorem 8.5. If Y is quasi-projective and X is proper, then Map(X,Y ) is representable.

Before proving Theorem 8.4, we finish the proof of G = Ga. We may assume that ∞ ∈ S, and then we get

1 ∼ G → Aut(P , ×) = Gm n Ga.

Because G is unipotent, G → Gm n Ga → Gm is trivial. So G maps to Ga. If we can show that this is a closed embedding, we are done. Math 224 Notes 28

Lemma 8.6. A homomorphism of algebra groups is a closed embedding if it is a categorical injection, i.e., injective S-points for all S.

Proof. We can factor G1 → φ(G2) ,→ G2. Now it suffices to show that G1 → φ(G2) is a closed embedding. But it is a general fact that a finite that is a categorical embedding is a closed embedding. So it suffices to show that G1 → φ(G1) is finite. But we know that it is quasi-finite because its kernel is a point, and then a quasi-finite morphism of algebraic group is finite.

Now let us show that G → Aut(P1,S) is a categorical injection. This is actually a tautology because if something acts trivially on Aut(P1,S), then it acts trivially on G.

8.2 Automorphisms of projective space

We have this group PGLn, and this has the universal property

1 Gm GL(V ) PGL(V ) 1

H.

So we have a map PGL(V ) → Aut(P(V )) and want to show that this is an isomorphism. But first let us look at Maps(P(V1), P(V2)). The k-points of this should be n ∼ n ∗ polynomial functions poly (V1,V2) = Sym (V1 ) ⊗ V2, but with some nonvan- ishing condition.

∼ a n ◦ Proposition 8.7. Maps(P(V1), P(V2)) = P(poly (V1,V2)) . n≥0 Math 224 Notes 29

9 February 22, 2018

We were trying to compute Maps(P(V1), P(V2)). a ∗ ◦ ∼ Proposition 9.1. P(Sym(V1 ) ⊗ V2) = Maps(P(V1), P(V2)). n≥0

Proof. We use the functor of points. A map S × P(V1) → P(V2). Recall that a map Y → P(V ) is the same as a line bundle L on Y with an embedding ∗ ⊗−1 L ,→ V ⊗ OY , with V ⊗ OY → L surjective. So we take Y = S × P(V1). Now it is a fact in algebraic that every line bundle on S × P(V ) is of the form

⊗−1 LS  O(n)

⊗−1 for some LS on S. So the data is V ∗ ⊗ → ⊗−1 (n), 2 OS×P(V1) LS  O which is the same as ∗ ∗ π (LS ⊗ V2 ) → OS  O(n), where π : S × P(V ) → P(V ). Then this is the same as

∗ n ∗ LS ⊗ V2 → OS ⊗ Sym (V1 ),

n ∗ which is just a LS → Sym (V1 ) ⊗ V2. This is the same as a map S → n ∗ P(Sym (V1 ⊗ V2)). Now surjectivity will give us the condition that the im- age is nondegenerate.

Corollary 9.2. Isom(P(V ), P(V )) = P(Hom(V,V ))◦ = PGL(V ). So let us finish this proof that we have been working on for two weeks. For S ⊆ P1 a finite set, we have

1 1 1 Aut(P ,S) = Aut(P ) ×Map(S,P ) Map(S, S). Theorem 9.3. The action of G on itself by left multiplication extends to a map G → Aut(P1,S).

9.1 Ind-schemes Definition 9.4. An ind-scheme is a contravariant functor on the category of affine schemes that can be written as

colimn Yn where Y1 ,→ Y2 ,→ Y3 ,→ · · · is a closed embedding. (Or you can also allow a S filtered indexing set.) Then Hom(S, Y ) = n Hom(S, Yn). Math 224 Notes 30

Example 9.5. Consider A∞ that can be written A1 ,→ A2 ,→ · · · . This is different from ∞,pro = Spec k[t ,...] = lim n. A 1 ←− A There are two senses in which we can take colimit. First, we can take

Hom(S, “ colim ”Yi) = colim Hom(S, Yi).

The other thing we can do is to take the colimit colim Yn in the category of affine schemes. If Yi = Spec Ai, then we have

colim Y = Spec(lim A ). i ←− i i

∗ For instance, if Ai = Sym(Vi ), then Hom(S, colim V ) = Hom(lim(Sym(V ∗)),O ) i −→ 1 S and the set Hom(S, “ colim ”Vi) are only those that are continuous. Example 9.6. Consider the sequence

Spec k[t]/t → Spec k[t]/t2 → · · · and there is a notion of

Spf k[[t]] = “ colim ” Spec k[t]/tn, which is an ind-scheme. There is also the scheme

Spec k[[t]] = colim Spec k[t]/ti = Spec k[[t]].

By definition,

Hom(S, Spec k[[t]]) = Hom(k[[t]],OS), n Hom(S, Spf k[[t]]) = colimn(k[t]/t ,OS) = ( in OS).

Definition 9.7. Let X be an affine scheme and Y ⊆ X a closed subscheme. We define

∧ Hom(S, XY ) = Hom(S, X) ×Hom(Sred,X) Hom(Sred,Y ).

1 ∧ Example 9.8. We claim that (A )0 = Spf k[[t]]. This is because

1 Hom(S, )× 1 Hom(S , Spec k[t]/(t)) = {f ∈ O : f is in the nilradical}. A Hom(Sred,A ) red S Similarly, we can prove that

ˆ n XY = “ colimn ” Spec A/I . Proposition 9.9. Map(X,Y ) is an ind-affine ind-scheme if Y is affine. (This just means that the Yi can be taken to be affine.) Math 224 Notes 31

Proof. Let us first take Y = A1. Here, we claim 1 ∼ Map(X, A ) = Γ(X, OX ). To show this, we compare functor of points. Then

1 1 Hom(S, Map(X, A )) = Hom(S × X, A ) = Γ(S × X, OS×X ) = OS ⊗ Γ(X, OX ).

Then V is a ind-scheme for a vector space V . For Y = An, we can do the same thing. Or we can use the fact that the product of ind-schemes is an ind-scheme. Also, it is clear that Map(X,Y1 × Y2) = Map(X,Y1) × Map(X,Y2). Now let us look at arbitrary finite type Y . Then Y fits in the fiber product

Y An

∗ Am. Now it suffices to show that the fiber product of ind-schemes is an ind-scheme. It can be checked that

(colim Yi) ×(colim Zj ) (colim Wk) = colimi,k Yi ×Zj Wk, where j is supposed to sufficiently large. This is because X mapping into them should come as something into Yi and into Wk and into Zj for some large enough indices. By filteredness, we can find a large enough Zj such that Yi → Zj and Wk → Zj.

So what is Aut(A1)? We have 1 1 1 Hom(Spec(k), Aut(A )) = Isom(A , A ) = (Gm n Ga)(k).

1 We have a map Gm n Ga → Aut(A ) defined by the action, but is this an isomorphism? A priori, Aut(A1) is only an ind-scheme. Let us evaluate at R. We are looking at automorphisms R[t] → R[t]. If R is a field, the only ones come from scaling and translation. But if R has nilpotents, like R = k[], then R[t] → R[t]; t 7→ t + t100 is an automorphism. ∼ 1 Proposition 9.10. Gm n Ga = (Aut(A ))red.

The reduced ind-scheme is just defined as taking “ colim ”(Yn)red. This is independent of the choice of the Yn. Given X → “ colim ”Yi, the closure of the image makes sense because all the connecting maps are closed embeddings. Moreover,

Lemma 9.11. Let G → G1 be a homomorphism, where G is a and G1 is a group ind-scheme, and ϕ is injective on S-points, and ϕ is bijective on k-points. Then ϕ induces an isomorphism G → (G1)red. Math 224 Notes 32

ϕ¯ Proof. So consider the closure G −→ G,→ G1. Thenϕ ¯ is injective on S-points, so it is an isomorphism. Now I claim that if X,→ Y where X is reduced, ∼ ∼ and X(k) = Y (k), then X = Yred. We have X,→ Yi for some Yi, and then ∼ ∼ X(k) = Yi(k) implies X = (Yi)red by Nullstellensatz. Math 224 Notes 33

10 February 27, 2018

Last time we define this notion of an ind-scheme. Then we proved that if Y is (finite-type) affine, Maps(X,Y ) is an ind-scheme. Then for X affine, Aut(X) was a group ind-scheme. We had 1 1 ∼ Aut(A ) ⊇ Aut(A )red = Gm n Ga.

10.1 Comparing reduced parts of automorphism functors We had a group G = X acting on X fixing S. Theorem 10.1. We have a homomorphism G → Aut(X,S).

We can look at Aut(X,Sset) and Aut(X,S) = Aut(X) ×Map(S,X) Aut(S). Also, given any functor, we can consider its reduced functor by a left Kan 0 0 extension. That is, we are taking Fred(S) = colimS0 rem F (S ) for S → S . Then I have

Aut(X) Aut(X,Sset) Aut(X,S)

G Aut(X)red Aut(X,Sset)red Aut(X,S)red.

We want to map G → Aut(X,S). We will do this by showing that the bottom two maps are . But let me torture you a bit first. I claim that Aut(X,Sset) ← Aut(X,S) is not an isomorphism. If we look at a T -point of Aut(X,Sset), this is a diagram

φ T × X X

T × S S0 with the set-theoretic image of T × S being S. This means that the scheme- theoretic image is a nilpotent thickening S0 of S. But if we look at a T -point of Aut(X,Sset)red, we can just test at T reduced. Then T × S is reduced, and so 0 ∼ T ×S → S factors through S. Therefore we the isomorphism Aut(X,Sset)red = Aut(X,S)red. Also, we need to show that Aut(X)red ← Aut(X,Sset)red is an isomorphism.

Lemma 10.2. Let G1 → G2 be a homomorphism where G1 a group scheme and G2 a group ind-scheme. If φ is injective on T -points and bijective on k-points, then φ is an isomorphism. todo Recall that for X ⊇ Y , we have defined

∧ Hom(T,XY ) = Hom(T,X) ×Hom(Tred,X) Hom(Tred,Y ). Math 224 Notes 34

∧ For S ⊇ H is a subgroup, then we can also define GH as a group ind-scheme. Take X = P1 and S = ∞, we have

Aut(X,Sset) ⊇ Aut(X,S) = Gm n Ga. Proposition 10.3. Aut(X,S ) = Aut(X)∧ . set Aut(X,S)

10.2 Lie algebras Definition 10.4. For G an algebraic group, we define its as g = TeG. The is defined as

2 TxX = Hom(Spec k[]/ ,X) ×Hom(Spec k,X) {x}.

This is going to be a vector space. Theorem 10.5 (Lie). g has a structure of a Lie algebra. I’ll use Tannaka’s theorem to give a different description.

Theorem 10.6. An element ξ ∈ g is the same thing as

• for every V ∈ Rep(G), an element Tξ ∈ End(V ) that is

• functorial in the sense that for φ : V1 → V2 we have φ ◦ Tξ = Tx ◦ φ and

• Tξ acts on V1 ⊗ V2 as Tξ ⊗ id + id ⊗Tξ. Proof. By Tanaka, an element of g is the same as, for every V we have auto- 2 morphisms T˜ on V ⊗ k[]/ which is the identity modulo . Then if we write ˜ ˜ ˜ ˜ Tξ = id +Tξ, then Tξ|V1⊕V2 = Tξ|V1 ⊗ Tξ|V2 gives the last condition.

Now let us prove Lie’s theorem. For ξ1 and ξ2, we can define [ξ1, ξ2] as

T[ξ1,ξ2] = [Tξ1 ,Tξ2 ].

Corollary 10.7. Every element ξ ∈ g admits a unique Jordan decomposition ξ = ξn + ξss such that ξn acts nilpotently on every representation and ξss acts semi-simply on every representation and [ξn, ξss] = 0.

Proof. You do Jordan decomposition on every Tξ.

Corollary 10.8. TeG is in with endomorphisms of OG that are • compatible with the left left action of G on itself by right translations,

• and with the map OG ⊗ OG → OG. So these are vector fields that are invariant under right translations. Theorem 10.9. If k has characteristic 0, then every algebraic group is reduced. Math 224 Notes 35

So in characteristic 0, every algebraic group is smooth. This is because every reduced scheme has a nonempty open on which the scheme is smooth, and then we can translate.

Proof. Note that OX,x is regular if and only if OˆX,x is regular. So regular is 2 L i i+1 equivalent to Sym(m/m )  i m /m being an isomorphism. We are going to construct the map OˆG,e → Oˆg,0 given by

2 M i i+1 M i i+1 Sym(m/m )  m /m → m /m . i i The composition is the identity, and this shows that all the maps are isomor- phism. So we want to construct his map. First we will define an exponential map ∧ ∧ ˆ ˆ g0 → Ge and then induce Spec Og,0 → Spec OG,e. Now we need to define

∧ ∧ Hom(Spec R, g˜0 ) → Hom(Spec R,G0 ).

The left hand side is ξ ∈ R ⊗g such that ξ = 0 in the reduced R of R. The right hand side is compatible family of automorphisms of R ⊗ V that are identity on R ⊗ V . The left hand side are endomorphims of R ⊗ V that vanishes on R ⊗ V . Then we can define T 2 T˜ = exp(T ) = 1 + T + ξ + ··· , ξ ξ ξ 2 because this is actually finite. In characteristic 0, we can define

Oˆ = lim O /mi. X,x ←− X,x Then we can define distributions

i ∗ DistX,x = colim(OX,x/m ) . ∼ We will later prove that in characteristic 0, DistG,e = U(g). For characteristic positive, U(g) will looks something more like the wrong symmetric power. Math 224 Notes 36

11 March 1, 2018

Let G be an algebraic group. We defined 2 g = TeG = Hom(Spec k[]/ ,G) ×Hom(Spec k,G) {e}. Theorem 11.1. An element ξ ∈ g is equivalent to the functor taking V ∈ Rep(G) to TV (ξ) ∈ End(V ) that is compatible with the tensor structure

TV1⊗V2 (ξ) = TV1 (ξ) ⊗ id + id ⊗TV2 (ξ).

There is the regular representation OG ⊗ OG → OG, and then we get an l l endomorphism Lieξ of OG. This commutes with right translations, and so Lieξ is a derivation of OG. We can think of this as a right-invariant vector field. Now let me do this same construction with a different perspective. For each 2 2 ∗ x ∈ X, we can take TxX = mx/mx, and TxX = (mx/mx) . We can also say ∗ ΩX,x = Tx X. But we can also define the sheaf

TX = H omX (ΩX , OX ). These are the tangent vectors, informally. Now we can take

TX,x → TxX. ∨ ∨ This is a special case of doing (F )x → (Fx) . This is not in general an isomorphism, but is an isomorphism if F is locally free. ∼ ∗ Proposition 11.2. There exists an canonical isomorphism ΩG = OG ⊗ g and TG ' OG ⊗ g such that sections of ΩG (repsectively TG) that is right-invariant ∗ ∗ corresponds to g ,→ OG ⊗ g and g ,→ OG ⊗ g. But we want to refine this.

11.1 Equivariant quasicoherent sheaf Let G act on Y , and F ∈ QCoh(Y ). Definition 11.3. A structure of G-equivariance on F is, for all S consider F  OS ∈ QCoh(Y × S) and make Hom(S, G) act on it, compatibly with pull- backs with respect to S0 → S. This has all functorial properties built in.

• If G acts on Y1 and on Y2, and if Y1 → Y2 is an equivariant morphism, then pullbacks and pushforwards of equivariant sheafs are equivariant. • If Y is a point, then QCoh(pt)G = Rep(G). Most textbooks will define this differently. A structure of an equivariance on F is an isomorphism ϕ : pr∗(F ) → act∗(F ). Here, pr, act : G × Y → Y are the projection and the action. This ϕ should satisfy some associativity condition. Math 224 Notes 37

Lemma 11.4. Let G act on Y . Then ΩY has a canonical structure of a G- equivariance. Proof. For any g ∈ Hom(S, G), this defines Y ×S → Y ×S. Then it immediately gives an automorphism of ΩY  OS = ΩY ×S/S. Let Y = G, with an action by G by left translations. Lemma 11.5. There exists an equivalence of categories QCoh(G)G ' Vect.

Proof. You take the vector space V and take this to V ⊗ OG. This is sending Vect → QCoh(pt)G → QCoh(G)G. The functor in the opposite direction is going to be F 7→ Fe. ∼ ∗ Now this immediately shows that ΩG = g ⊗ OG.

11.2 Distributions For a point x ∈ X of a scheme, we defines the distributions as

ˆ ∗ Dist(X)x = topological dual (OX,x) .

Her is another definition. Take Xi ⊆ X be points so that Xi,red = {x} by i defining Xi = Spec(Ox/mx). Then we have the ind-scheme

∧ Xx = colimi Xi.

There, we have ∗ Dist(X)x = colimi(OXi ) . This is a vector space, and it is going to be a cocommutative . If you know about differential operators, there is a sheaf D(X) such that D(X)x = Dist(X)x. For example, let us take X = V and x = 0. Then we have

∗ ≤i ∗ OX = Sym(V ),OXi = Sym (V ).

Now consider n (Symn(V ))∗ = Symg (V ∗). n This normally, Symn(V ) is a quotient of V ⊗n, and so Symg (V ) is going to be a subspace of V ⊗n. If characteristic 0, we are going to get the invariants, but 2 for other characteristic, we have to be a bit careful. Symg (V ) is going to be the vector space spanned by v ⊗ v. Then we have

2 Sym^n(V ) = (Sym^2(V ) ⊗ V ⊗(n−2)) ∩ (V ⊗ Symg (V ) ⊗ V ⊗(n−3)) ∩ · · · .

Then we can write Dist(V )0 = Sym(g V ). Math 224 Notes 38

Note that if f : X → Y is a map sending x ∈ X to y ∈ Y , then we have a map in the other direction of rings, and so on duals we get

Dist(X)x → Dist(Y )y. Also, we can check that

Dist(X1 × X2)x1×x2 = Dist(X1)x1 ⊗ Dist(X2)x2 . This shows that we have a map

Dist(G)e ⊗ Dist(G)e → Dist(G)e coming from m : G × G → G. This gives an additional algebra structure, so Dist(G)e has a structure of a cocommutative Hopf algebra on Dist(G)e. There is something called the universal enveloping algebra U(g) for a Lie algebra g. This is an algebra U(g) such that a map U(g) → A of algebras is the same as a map g → A of Lie algebras (with the Lie algebra structure given by the .) We can construct this explicitly as

U(g) = T (g)/(ξ1 ⊗ ξ2 − ξ2 ⊗ ξ1 − [ξ1, ξ2]). There is a natural filtration given on the , which descends to U(g). Theorem 11.6 (Poincar´e–Birkhoff–Witt). The map Sym(g) → gr U(g) is an isomorphism.

Note that there is a natural map g → Dist(G)e. This is because there is 2 generally a map Tx → Dist(X)x given by O/m → k. You can check that this map commutes with the bracket. So we get a map

U(g) → Dist(G)e. Consider the case G = V . Then we have

U(V ) Dist(V )0

Sym(V ) Sym(g V )

T (V )

Now this map is defined as

n n X T (V ) → Symg (V ); v1 ⊗ · · · ⊗ vn → vσ(1) ⊗ · · · ⊗ vσ(n).

σ∈Σn This is not necessarily an isomorphism if characteristic is not zero. In particular, for V = k we get k[t] → k[t]; tn → n!tn. Math 224 Notes 39

Theorem 11.7. The map U(g) → Dist(G)e is an isomorphism in characteristic 0. This will also prove the Poincar´e–Birkhoff–Witttheorem.

Proof. It is enough to show that gr U(g) → gr Dist(G)e is an isomorphism. But we have gr U(g) gr Dist(G)e

∼= Sym(g) Sym(g TeG) and so we should have and isomorphism.

∧ ∧ Note that we have an exponential map g0 → Ge , and this induces a map

Dist(g)0 → Dist(G)e of only cocommutative . We can interpret this as

exp Dist(g)0 Dist(G)e

Sym(g g) U(g)

T (g) where T (g) → U(g) is dividing by n!. This is what Baker–Campbell–Hausdorff really is. Math 224 Notes 40

12 March 6, 2018

For a Lie algebra g, we have the universal enveloping algebra U(g). We also have for each point x ∈ X the distributions Dist(X)x, and Dist(G)e becomes a cocommutative Hopf algebra by TxX → Dist(X)x. Then we had a map

U(g) Dist(G)e

g TeG.

Theorem 12.1. In characteristic 0, the map U(g) → Dist(G)e is an isomor- phism.

∗ ∧ ∗ Corollary 12.2. ObG,e = (U(g)) and Ge = Spf(U(g)) . Today I want to say a few words about what happens in characteristic p.

12.1 Lie algebra over characteristic p There is a map g → g written as x → x[p] defined in the following way. Any ξ on X is a differential operator of order 1. Then ξn is also a differential operator of order 1 because

ξp(fg) = ξp−1(ξ(f, g)) = ξp(f)g + p(··· ) + fξp(g).

Let’s look at some examples.

∂ Example 12.3. Take G = Ga. Then g = span( ∂t ). Then we have  ∂ [p] (t) = 0 ∂t

∂ [p] and so ( ∂t ) = 0. ∂ Example 12.4. Take G = Gm. Then we have g = span(t ∂t ) and  ∂ [p] t (t) = t ∂t

∂ [p] ∂ and so (t ∂t ) = t ∂t . Now we can define the restricted universal enveloping algebra as

[p] p U(g)restr = U(g)/(ξ − ξ ).

e Corollary 12.5. The map U(g) → Diff(G) factors through U(g)restr. Math 224 Notes 41

∼ p Recall that we had Sym(g) = gr U(g). If we define Sym(V )restr = Sym(V )/(v ) then we will have Sym(g)restr  gr U(g)restr.

If you choose a V = span{e1, . . . , en}, then this is going to look like

O p Sym(V )rest = k[t]/t . i Recall that there is a Frobenius map Frob : G → G, and its kernel if nilpo- tent, and the same is true for ker(Frobn). Because the Frobenius kills all tangent vectors, Spec(O/m2) ⊆ ker(Frob) and then

Spec(O/m···) ⊆ ker(Frobn).

This shows that ∧ n Ge = colimn ker(Frob ). Here ∧ ∧ ker(Frob : Ge → Ge ) = ker(Frob : G → G) but ker(Frob) are non-isomorphic for Ga for Gm. This shows that there is no way to express this purely in terms of the Lie algebra. ∗ We can consider (Oker(Frob)) ⊆ Dist(G)e.

∗ Lemma 12.6. The map U(g) → Dist(G)e factor through (Oker(Frob)) .

∗ Proof. It suffices to show that generators are mapped into (Oker(Frob)) . But g 2 ∗ ∗ is mapped to (O/m ) and this is contained in (Oker(Frob)) . We thus obtain a map

∗ U(g)rest → (Oker(Frob)) .

Theorem 12.7. This is an isomorphism. n Example 12.8. Last time we had Symg (V ) = (Symn(V ∗))∗ and then

Sym(g V )rest ⊆ Sym(g V ).

Then we get O

You can see that the map Sym(V ) → Dist(V )e factors as

∼= Sym(V ) → Sym(V )rest −→ Sym(g V )rest → Dist(V )e where the isomorphism is explicitly described as k[t]/tp → k[s]

Proof. We prove this on gr. We have

∗ gr U(g)rest gr(Oker(Frob))

∼=

∼= Sym(g)rest Sym(g g)rest where the right vertical map is seen to be an isomorphism by choosing etale coordinates.

12.2 Quotients Given H ⊆ G, we want to define the quotient. That is, we want to find G/H such that H HomG(G/H, Y ) = Y (k) . Definition 12.9. A G- is a scheme X equipped with an action of G such that for any x ∈ X(k), the map G → X given by g 7→ gx is faithfully flat.

Proposition 12.10. Suppose X is a G-homogeneous space and h = StabG(x). Then X has the universal property.

H Proof. We want to show that HomG(X,Y ) → Y (k) is an isomorphism. I want to construct the map in the other direction. There are two maps G × H → G, first projection and multiply. Then the two maps G × H → G → Y agree, and so by faithfully flat , we get

G × H

G Y

X the map X → Y .

Theorem 12.11. Let G be reduced and H ⊆ G. Then there exists a G-scheme X and x ∈ X such that StabG(x) = H. We need to construct this from somewhere. Theorem 12.12. There exists a representation V and a subrepresentation V 0 ⊆ 0 V such that the stabilizer StabG(V ) = H. Here, by definition,

0 Hom(S, StabG(V )) ⊆ Hom(S, G)

0 consists of automorphisms of V ⊗ OS that maps V ⊗ OS to itself. Math 224 Notes 43

Theorem 12.13. We can take V 0 to be 1-dimensional. If we assume this, here is how you prove it. For 0 ≤ k ≤ dim V , there is the Grk(V ) with Hom(S, Grk(V )) given by the locally free k subsheaves of OS ⊗ V of rank k. Take Y = Gr (V ). Then G → GL(V ) acts on Y , and V 0 is a k-point of Y = Grk(V ). The stabilizer of this is going to be H. So we get the first theorem from the second. Here is how you prove the third theorem from the second. We have a natural transformation k Vk Gr (V ) → P( (V )), Vk Vk given by sending E ⊆ OS ⊗ V to (E ) ⊆ OS ⊗ (V ), called the Pl¨ucker embedding.

Proof of second theorem. Let H correspond to the ideal I ⊆ OG. Then there exist a subrepresentation V ⊆ OG such that V ∩ I generates I. Then we first 0 note StabG(I) = H. Also, StabG(V ∩I) = StabG(I). So we take V = V ∩I.

Corollary 12.14. There exists a homogeneous space such that StabG(x) = H. Proof. The homogeneous space is the orbit Gx. Corollary 12.15. If G is reduced, then G/H is reduced.

Also, note that G/H is quasi-projective because we took it from the Grass- mannian. We are assuming that G is reduced. Proposition 12.16. The map G → G/H is smooth if and only if H is reduced. Proof. Because G/H is reduced and G → G/H is faithfully flat, we can check smoothness of fibers. But then H is smooth if it is reduced. Proposition 12.17. Let H ⊆ G be normal. Then G/H has a unique group structure such that G → G/H is a group homomorphism and G/H is affine. Proof. We can use faithfully flat descent to construct G/H × G/H → G/H. To prove that G/H is affine, we first take G◦/G0 ∩ H. This is a connected component, so we may assume that G is connected. Take V a G-representation such that StabG(kv) = H. H acts on v by a character χ. But characters are rigid, so the action of G and H be conjugation preserves the character. Let

V1 = {v1 ∈ V1 : h(v1) = χ(h)v1}.

This a subrepresentation, and so we can assume that H|V acts via χ. This gives a map G/H → PGL(V ). This is categorically injective, so G/H is isomorphic to a closed subscheme of PGL(V ). Now it suffices to show that PGL(V ) is affine. This sat inside P(End(V )) as an open subset, with nonzero determinant. Math 224 Notes 44

13 March 8, 2018

Last time we introduced quotients. For H ⊆ G, we characterized G/H with as the universal property. Definition 13.1. Let H acts on X. We say that X → Y is a faithfully flat quotient of X by H if (1) π is H-invariant, (2) π is faithfully flat, ∼ (3) H × X = X ×Y X. Lemma 13.2. Hom(Y,Z) = Hom(X,Z)H .

Proof. The proof is basically faithfully flat descent. The two maps X ×Y X ⇒ X → Z agree if X → Z is H-invariant, so we get Y → Z.

Observe that if X is a G-homogeneous space, and H = StabG(x), then the map G → Y is a faithfully flat quotient of G with respect to the action of H by right translations. We showed last time that for any H there exists a homogeneous space with this property. It sufficed to construct a G-space Y and y ∈ Y such that StabG(y) = H. Theorem 13.3. Let a 1-dimensional connected group scheme G act on a com- plete variety. Then it has a fixed point. Proof. Take an arbitrary point. Then we get a map G → X, and then we can compactify G,→ G. Because X is proper, we can extend G → X to G → X using the valuative criterion, and this is a map of G-varieties. Then the points of G \ G are fixed.

13.1 Parabolic subgroups and solvable groups Definition 13.4. A subgroup P ⊆ G is parabolic if G/P is complete.

Lemma 13.5. If P is parabolic in G and P1 is parabolic in P , then P1 is parabolic in G. Note that base being proper and the fiber being proper does not imply proper. Proof. Note that being proper is local in the faithfully flat . So we can take G/P 0 G × P/P 0 P/P 0

G/P G pt

todo Math 224 Notes 45

Lemma 13.6. If P ⊆ G is parabolic and P ⊆ Q then Q is parabolic. Proof. We have a surjective G/P → G/Q. Note that if X is complete and π : X → Y is surjective, then Y is complete. To show this, we need to show that Y × Z → Z is closed. But we have X × Z → Y × Z surjective, so we can take image of V ⊆ Y × Z by looking at the inverse image in X × Z and then looking at the image in Z. Definition 13.7. A group G is solvable if the following equivalent conditions hold:

(1) If we define G0 = G and Gi = [Gi−1,Gi−1] then Gn = {1} for n  0.

(2) There exists {1} = G˜n ⊆ · · · ⊆ G˜1 ⊆ G˜0 = G of normal reduced subgroups such that G˜i/G˜i−1 are abelian. (3) (2) without assumption on reducedness. It is clear that (1) implies (2) implies (3). For (3) implies (1), we note that Gi ⊆ G˜i. Lemma 13.8. The following conditions are equivalent: (1) G is solvable. (2) Any representation contains a nonzero G-fixed line. (3) Any representation V has a G-stable flag.

(4) G,→ Bn where Bn is the upper-triangular matrices. Proof. (2) implies (3) implies (4) implies (1) is clear. We have to show (1) implies (2). We induct on the dimension of the group. Then we know that G1 V 6= 0. Then G/G1 acts on this, and the quotient is abelian so we find something that is invariant.

Lemma 13.9. Let G be solvable and connected. Then it has a filtration by nor- mal subgroups such that the subquotients are either Gm and Ga. G is unipotent if and only if all these subquotients are Ga. Note that all unipotent groups are solvable. Let me postpone the proof.

Theorem 13.10 (Borel). If G is solvable and connected, acting on X complete, then it has a fixed point.

G G1 G/G1 Proof. For G1 ⊆ G, we have X = (X ) . Lemma 13.11. P ⊆ G is parabolic if and only if P 0 ⊆ G0 is parabolic.

Proof. We note that (G/P )0 = G0/G0 ∩ P . So G/P is proper if and only if (G/P )0 is proper if and only if G0/G0 ∩ P is proper. But P 0 ⊂ G0 ∩ P is finite index. Theorem 13.12. (1) If G has no proper parabolics, then G is solvable. Math 224 Notes 46

(2) If G is connected and solvable, then it has no proper parabolics. Proof. For (2), suppose that G has a proper parabolic. Then G acts on G/P , which is proper, but it does not have a fixed point. This contradicts Borel’s theorem. For (1), let V be a representation. We want to show that there exists a G-stable line. Consider the action of G on P(V ) and take a closed orbit. Either this is a single point and we get a fixed point, or this is a not a point and we get a proper parabolic. Let us now prove that lemma. First reduce to the case when G is abelian. Then we have G = Gu × Gss. We know that Gss is a product of a bunch of Gms. So we can assume that G is unipotent. By induction, it suffices to find a subgroup isomorphic to Ga. Embed G into Um. We know that there exists a filtraion 0 = G0 ⊆ G1 ⊆ · · · ⊆ Gn = Um such that dim(Gi/Gi−1) = 1. Now take i such that dim(G ∩ Gi) > 0. Then we 0 get G ∩ Gi/G ∩ Gi−1 ,→ Gi/Gi−1 so dim(G ∩ Gi) = 1. So ((G ∩ Gi)red) = Ga. Proposition 13.13. In characteristic 0, a commutative unipotent group is V . Proof. Recall that we had Gu as a Zariski closed subgroup, because we were able to phrase this as a condition on characteristic . In the same way, we similarly define gn. Now the claim is that

exp gn −−→ Gu is an isomorphism. Note that the S-points on the right are the automorphisms n of V ⊗ OS that are unipotent. On the other hand, g are endomorphisms on V ⊗ OS that are nilpotent. So we get this from

nil exp uni EndR (V ⊗ R) −−→ AutR (V ⊗ R).

Note that this is a homomorphism because exp(A + B) = exp(A) exp(B) by commutativity. Note that in characteristic p, we have

⊕··· Ga(k) = k = (Z/pZ) .

So we can take any finite subgroup and the quotient is going to be Ga because it is 1-dimensional reduced. For G a group over characteristic p, we can define

L : G → G; L(g) = Frob(g)g−1.

Theorem 13.14 (Lang). L is always ´etale. If G is connected, then it is sur- jective. Also ker(L) = G(Fp). Math 224 Notes 47

Proof. For ´etale,we need to show that the differential is an isomorphism. First reduce to the case g = e. This is because given an infinitesimal g0, we can take

−1 −1 −1 −1 Frob(gg0)g0 g = Frob(g)(Frob(g0)g0 )g . Now note that the map is the composition of

G −−−−−−→id × Frob G × G −−−−→inv×id G × G −−−→mult G.

But its differential at e is then g → g ⊕ g → g ⊕ g → g given by −1, because the derivative of the Frobenius is 0. Now let us compute its kernel. Consider g ∈ G(k) such that Frob(g) = g. Note that a point in X(Fq) satisfies Frob(x) = x if and only if x ∈ X(Fp). todo −1 For the second part, consider the G action given by G with g∗g1 = Frob(g)g1g . But 0 −1 −1 L (g) = g1 Frob(g)g1g is ´etalebecause we can use the same argument on the tangent spaces. So all orbits are opens, and because G has one connected component, there is only one orbit. Math 224 Notes 48

14 March 20, 2018

We were talking about solvable groups. A group G is solvable if there exist

1 = G0 ⊆ G1 ⊆ · · · ⊆ Gn = G such that Gi/Gi−1 are commutative. You can impose the condition that Gi are reduced, and even let Gi = [Gi−1,Gi−1]. Theorem 14.1. The following are equivalent: (1) G is solvable. (2) Every representation has an invariant line. (3) Every representation has a G-stable flag.

(4) Every G can be realized as a subgroup of Bn. Theorem 14.2. Let G be a solvable connected group. Then there exists 1 = G0 ⊆ · · · ⊆ Gn = G such that Gi/Gi−1 is either Ga or Gm. Moreover, G is unipotent if and only if all subquotients are Ga.

Corollary 14.3. If G is unipotent, there exist 1 = G0 ⊆ · · · ⊆ Gn = G with each of the normal subgroups such that the adjoint action of G on Gi/Gi−1 is trivial. Definition 14.4. P ⊆ G is parabolic if G/P is complete. Lemma 14.5. (a) If P is parabolic and P ⊆ Q then Q is parabolic. (b) If P is parabolic in G and P 0 is parabolic in P then P 0 is parabolic in G. 0 (c) If ϕ : G  G is a surjective morphism, and P is parabolic, then ϕ(P ) is parabolic. Theorem 14.6. Let G be connected. Then G contains a proper parabolic if and only if G is not solvable. To prove this, we used the Borel fixed point theorem. Theorem 14.7. If G is connected solvable, and acts on X a , then it has a fixed point.

14.1 Definition 14.8. A subgroup B ⊆ G is Borel if it is connected, solvable, and maximal with these properties. Theorem 14.9. (a) D is a parabolic if and only if it contains a Borel. (b) A Borel is parabolic. (c) Any given Borel can be conjugated into any given parabolic. (d) Any two Borels are conjugate. Math 224 Notes 49

Proof. (c) We have B acting on G/P , and this has a fixed point. If we pick something in this fixed point and then conjugate by this, B goes inside P . (b) If G is solvable, then this is trivial. If G is not solvable, it has a proper parabolic P , and we may assume that B ⊆ P ⊆ G. Clearly, B is a Borel of P . By induction on the dimension, we have that B is parabolic of P , and then parabolic is transitive. (a) If it contains a Borel, it contains a parabolic and is parabolic. If it is parabolic, some Borel can be conjugated into it. (d) is a consequence of (b) and (c). Corollary 14.10. Z(G)◦ ⊆ B.

14.2 Structure theory of connected solvable groups Unless said otherwise, we are going to assume that G is solvable and connected. Proposition 14.11. (a) There exists a

1 → U → G → T → 1

where U is connected and unipotent, and T is a torus. Here, g ∈ G is unipotent if and only if g ∈ U. (b) There exists a non-canonical splitting, so that G = T o U. Proof. (a) We first have

1 → Un → Bn → Tn → 1.

Also, we know that G embeds into Bn. So we can take U = G ∩ Un. These are precisely going to be the unipotent elements of G. Then G/(G ∩ Un) embeds into Tn, and hence is a torus. For (b), we have G  T . Assume first that k 6= Fq. Then find an element λ ∈ T such that λ generates a dense subset of T . Find a semi-stable element g ∈ GG that projects to λ. Define G0 = hgi. Then G0 surjects to T . Here, G0 0 is a semi-simple commutative group, because under G,→ GLn, G should lie 0 inside some torus Tn. But the kernel of G  Gm should be trivial because it 0 ∼ unipotent as well (as it lies in U). So G = Gm and we get a section. If k 6= Fq, we use the fact that the fiber product

Y Mapgroup(T,G)

id Mapgroup(T,T ).

We want to show that Y (k) is nonempty, but we know that Y (k0) is nonempty for other algebraically closed k0. So the ind-scheme Y should be nonempty as a scheme, and then Y (k) is nonempty by Nullstellensatz. Math 224 Notes 50

Proposition 14.12. Mapgroup(T,G) is a scheme.

Note that we have proved before that Mapgroup(G, T ) is a discrete scheme.

Proof. Because we can embed Mapgroup(T,G) → Mapgroup(T, GLn), we reduce to the case of GLn. Note that the k-points of this is going to be the ways of L Gm acting on V , which is the ways to write V = i Vi with Gm acting on Vi by the ni-th power. So we will have

a Y GL(V )/ GL(Vi) P dim V = i dim Vi i This is because the S-points are locally free sheaves E of rank dim V over S, L equipped with an action of Gm, which is equal to to splittings E = i Ei.

14.3 Maximal tori Definition 14.13. A maximal torus in G is T 0 ⊆ G that projects isomorphi- cally to T . We just showed that maximal tori exists for connected G.

Theorem 14.14. Let G be a connected solvable. (a) All maximal tori are conjugate.

(b) If s1, s2 are two semisimple elements that project to the same elements in T , then they are conjugate. (c) Centralizers of semisimple elements are connected.

Proof. (b) We first reduce to when U = Ga. Let us induct on the dimension of 0 Ga. Let us pick Ga ,→ U and take the quotient of G by Ga and let G = G/Ga 0 0 0 0 and U = G /Ga. Then 1 → U → G → T → 1. Now any two maximal tori are conjugate in G0. So given any two elements in G, we can make them have equal image in G0. That is, we may now assume that G is the inverse image of 0 T in the map G → G . That is, we can assume that G = T n Ga. But this is now completely explicit. Every action of Ga by automorphisms of Ga is the n-th power of the standard characters. Consider G = Gm n Ga for simplicity. Take two elements (s, a) and (s, 0). Assume that Gm acts on Ga by n-th powers. If n = 0, we are done. If n 6= 0, we need to solve the equation bsb−1 = bs, which is b − λnb = λn. This is solvable if λn 6= 1, which is the case because if as = sa then we get a contradiction to that (s, a) is semisimple. Math 224 Notes 51

15 March 27, 2018

We talked about connected solvable groups. We should that any solvable group fits in a short exact sequence

1 → U → G → T → 1 with a splitting T → G, and moreover U has a filtration with subquotients Ga.

Theorem 15.1. Let g1 and g2 be two semisimple elements that project to the same element in T . Then they are conjugate.

Proof. By induction, we reduce to the case when U : Ga. Now G is T n Ga, and so T Acts on Ga. This is given by some character χ. Say that our two points are (t, 0) and (t, a). If χ(t) = 1, then (t, a) = (t, 0)(1, a) and so they can’t both be semisimple unless a 6= 0. If χ(t) 6= 1, then we want to solve (1, b)(t, 0)(1, −b) = (t, a). Then this is the same as solving χ(b) = a + b, which is a = χ(t)b − b. Because χ(t) 6= 1, we can solve it. Recall that a maximal torus in G is a T 0 ⊆ G that projects isomorphically to T . Corollary 15.2. Any torus can be conjugated into any given maximal torus. So any two maximal tori are conjugate.

Proof. Consider {g : Adg(S) ⊆ T }. We want to say that this has a point, so we have to show that it has a point in some field extension. todo

Theorem 15.3. Let t ∈ T ⊆ G be semisimple. Then ZG(t) is connected.

Proof. By induction, we can take out a Ga-factor from U. By Let this by 0 0 0 → U → G → T → 1. Then we assume ZG0 (t) is connected. So we now have

0 1 → Ga → G → G → 1 where t ∈ T ⊆ G0, and moreover that the image of t in G0 is central. Now T acts on Ga, by some character χ. First suppose that χ(t) 6= 1. Then, consider the map ZG(t) → ZG0 (t), and we want to show that this is an isomorphism. This map is injective becasue the kernel should be in Ga but Ga acts non-trivially. (If characteristic is 0, then t 0 t we can conclude this by looking at Lie algebras and showing that (g)  (g ) .) Given any g0 ∈ G0, we want to it to g that is in the centralizer. Let us first lift to g ∈ G0 to g ∈ G. Then we can write

Adt(g) = ag for some a ∈ Ga. Then we find a b such that Adt(bg) = bg, by solving the same equation we did. Now consider the case when χ(t) = 1. Here, we have G = T n Ga. Let us write G = T n U in general, where U is a unipotent group. Then U has a lift Math 224 Notes 52 such that the adjoint action of t on the subquotients is trivial. This means that we have a map T → Autflag(U) flag ◦ of flag-preserving automorphisms. Here, you can show that (Aut (U))red is an algebraic group, and that the kernel

◦,flag flag Y 1 → Aut (U) → Aut (U) → Gm

◦,flag ◦ of the map given by taking the subquotients has (Aut )red unipotent. So t maps to something that is semisimple, but it lies in a unipotent group so it is the identity. This means that t acts as the identity on U. So ZG(t) = G. Corollary 15.4. Let G be solvable, and consider a maximal torus T . The normalizer is NG(T ) = ZG(T ) and is connected. Proof. After field extension, there is going to be a point that generates the torus. So ZG(T ) = ZG(t) for some t, and is connected. To show that NG(T ) = ZG(T ), we look at ntn−1t−1 for any normalizing n. Then ntn−1t−1 ∈ T so it is semi- simple. But if we project to the torus, this should be 1, so the commutator should lie in U. This means that is is unipotent, and so 1.

We also talked about Borel subgroups. For G a connected group, B ⊆ G is Borel if it is solvable and maximal with this property. Theorem 15.5. (1) If B is a Borel subgroup, then G/B is complete. (2) Any connected solvable can be conjugated into any parabolic.

Corollary 15.6. Any Borel can be conjugated into any parabolic. Any solvable can be conjugated into any Borel. Corollary 15.7. Any two Borels are conjugate. Lemma 15.8. Z(G)◦ ⊆ Z(B) ⊆ Z(G).

Proof. Take any g ∈ Z(B). Then we have gxg−1x−1 : G → G. This factors though G/B, and G/B is proper and G is affine. Lemma 15.9. If B is normal, then B = G. Proof. Consider G/B. This is proper, and affine.

15.1 Reductive groups

Definition 15.10. The unipotent radical Ru(G) is the maximal normal unipotent connected subgroup. We say that G is reductive if Ru(G) = 1.

T ◦ Lemma 15.11. Ru(G) = ( B BU ) where B runs over all Borels and BU is the unipotent part. Math 224 Notes 53

Proof. One direction is by definition, because the right hand side is a normal unipotent connected. In the other direction, Ru is in some BU and then conju- gate them around.

Consider B = T n BU .

Proposition 15.12. Suppose that the adjoint action of T on BU is trivial. Then G = B.

Proof. If B = T × BU , then there exists some Ga ⊆ Z(BU ) ⊆ Z(B). Then 0 Ga ⊆ Z(G) and then we can quotient out G = G/Ga. Then we can proceed by induction.

Corollary 15.13. B = T implies G = B = T .

◦ Corollary 15.14. B ⊆ (NG(B)) is an equality.

◦ Proof. Let H = (NG(B)) . Because B ⊆ H, we have B = H. todo Theorem 15.15. Every element of G is contained in a Borel.

Proof. We define

−1 −1 G˜ = (G × B)/(g, b1) ∼ (gb , bb1b ).

Then we can consider G˜ G × G/B π

G.

−1 This is going to be (g, b1) being sent to (gb1b , g mod B). We’ll do this next time. Math 224 Notes 54

16 March 29, 2018

Let G be a connected group. Lemma 16.1. Maximal tori are conjugate.

Theorem 16.2. ZG(T ) is connected. (Later, if G is reductive, then T ⊆ ZG(T ) is an equality.)

Theorem 16.3. Every element of G belongs to a Borel, and every semisimple element of G belongs to a maximal torus.

Theorem 16.4. B,→ NG(B) is an equality. Before proving, I will play around with these theorems.

Corollary 16.5. If P is a parabolic, then P is connected and P,→ NG(P ) is an equality.

◦ Proof. We will show that P → NG(P ) is an equality. Suppose x ∈ NG(P ). ◦ ◦ ◦ Then for B ⊆ P , we have Adx(B) ⊆ P . Because both are Borels in P , there ◦ 0 −1 exists a y ∈ P such that Adx(B) = Ady(B). Then if we write x = y x, we 0 ◦ get Adx0 (B) = B. So x ∈ B ⊆ P by the third theorem. Corollary 16.6. If B ⊆ P 1 and B ⊆ P 2 are conjugate parabolics, then P 1 = P 2. Corollary 16.7. Let T be a torus, and let S be the Borels containing T . Then NG(T ) acts on S. (i) This action is transitive.

(ii) ZG(T ) acts trivially.

(iii) NG(T )/ZG(T ) acts simply.

1 2 1 Proof. (1) Suppose T ⊆ B ,B . Then we can find g ∈ G such that Adg(B ) = B2. But inside the solvable group, any two maximal tori are conjugate. So we can assume Adg(T ) = T . (2) If T ⊆ B, it suffices to show that ZG(T ) ⊆ B. We will prove uncondi- ◦ ◦ tionally that ZG(T ) ⊆ B. Let C = ZG(T ) . Last time we proved that if a maximal torus is central in the Borel, then BC = C. So C is connected solvable, so T ⊆ C ⊆ B0 for some other Borel B0. Now we can play our usual game. (3) For T ⊆ B, assume that x ∈ NG(T ) and x ∈ NG(B). By the third theorem, we have x ∈ NB(T ). But then recall that NB(T ) = ZG(T ). (This was proved by ntn−1t−1 being both semisimple and unipotent.) Math 224 Notes 55

16.1 Cartan subgroup ◦ Definition 16.8. The definition of the Cartan is C = ZG(T ) . Proposition 16.9. There exists a dense open subset in G such that all of its elements can be conjugated into C.

Corollary 16.10. Every element of G belongs to a Borel. Proof. The theorem implies that there exists a dense open of elements that belong to a Borel. But we introduced this Grothendieck alteration G˜ ⊆ G/B×G such that the fiber over G/B is the conjugate of the Borel B. The image is dense open, and because it is closed, the image is the whole thing.

Theorem 16.11. For S ⊆ G a torus, ZG(S) is connected.

Proof. Let g ∈ ZG(S), and let B be a Borel such that g ∈ B. Then let X ⊆ G/B consisting of those x such that Adx(B) 3 g. This is nonempty, and S acts on X. This means that the action has a fixed point. Let x be the fixed point, and 0 0 0 ◦ 0 B = Adx(B). Then g ∈ B , so we have S ⊆ (NG(B )) implies S ⊆ B . Then g ∈ ZB0 (S). But we have proved that inside a solvable group, centralizers of ◦ ◦ tori are connected. So g ∈ ZB0 (S) ⊆ ZG(S) . Proof. Consider similarly the

G/C × G ⊇ G˜ −→π G.

We want to show that π is dominant. It is enough to show that there exist g ∈ G such that π has a finite nonempty fiber over G, by semi-continuity. We use our trick of enlarging the field, and take t ∈ T such that the closure of hti is T . We want to see that there are finitely many (modulo C) elements g such that t ∈ Adg(C). Here, t ∈ Adg(C) implies Adg0 (t) ⊆ C, which implies Adg0 (T ) = T . So g ∈ NG(T ), and we want to see that NG(T )/ZG(T ) is finite. This is because the action on a torus is given by a discrete character, and so ◦ (NG(T )) ⊆ ZG(T ). Now we show that the normalizer of a Borel is itself.

Proposition 16.12. Let S is a torus and B ⊇ S, then ZG(S) ∩ B is a Borel in ZG(S). Using this we can prove the theorem.

Proof. Let x ∈ NG(B). Then we can assume that Adx(T ) = T , because we can move T back. Consider the following strange map

−1 ψ : T → T ; t 7→ Adx(t)t .

◦ Let S = (ker ψ) . First consider the case S 6= 1. We know that x ∈ ZG(S) and also it normalizes B ∩ ZG(S), which is a Borel inside ZG(S). If ZG(S) ( G, Math 224 Notes 56

then by the induction hypothesis on ZG(S), we get x ∈ B ∩ ZG(S) ⊆ B. If ZG(S) = G, then we can replace G by G/S, and we are again done by induction. Now assume that S = 1. Then ψ : T → T is surjective. Let H = NG(B). Then there exists a representation V and v ∈ V such that

H = {g ∈ G : gv ⊆ span{v}}.

(This came up in our construction of the quotient.) Then we get a map

G → V ; g 7→ gv.

The claim is that this factors through G → G/B → V . To show this, we need to show that B acts trivially on v. A priori, the action of NG(B) on V is given by a character χ : NG(B) → Gm. We want χ|B to be trivial. But we know that B = Bu o T , and we need to see that γ|T is trivial. But T ⊆ [NG(H),NG(H)] because ψ was surjective. So we are done, up to that proposition. Math 224 Notes 57

17 April 3, 2018

Today we are going to do more examples. But last time, we needed something. Proposition 17.1. Let G be connected, and S ⊆ B ⊆ G where S is a torus. Then ZG(S) ∩ B ⊆ ZG(B) is a Borel subgroup.

Proof. We want to show that ZG(S)/ZG(S) ∩ B → G/B is closed. This means that Z(G) × B has close image in G. Let Y be the image, so that we want to show Y = Y . Consider the map

Y × S → B;(y, s) 7→ Ady−1 (s).

But we can map further to Y × S → B → B/BU = T . This is a family of maps −1 of tori, parametrized by Y . So Ady (s) = s modulo BU , for y ∈ Y . todo By the centralizer of S, I could have meant two things. I could have meant the scheme-theoretic centralizer, or I could have meant the reduced subscheme of the k-points.

Proposition 17.2. Scheme-theoretic ZG(S) is reduced. Proof. It is enough to prove that for s ∈ G a semi-simple element, its centralizer ZG(s) is reduced. Let s be a semi-simple automorphism of a smooth affine algebraic variety Y . (This means that it is semi-simple on the finite stables of the of functions.) Now we can take Y s as

Hom(Z,Y s) = (Hom(Z,Y ))s.

If Y = Spec A, then Y s = Spec A/(f − s(f)). The claim is that Y s is smooth. Smoothness is equivalent to every n-jet extending to an n + 1-jet.

Spec k[t]/tn Y s

ϕ Spec k[t]/tn+1 0 Y

n+1 n Here, we have ϕ : A → k[t]/t , and we want to find ϕ = ϕ0 + t l so that 0 n 0 0 ϕ is s-invariant. The claim is that if ϕ = ϕ0 + t l where l is the s-invariant projection of l, then ϕ0 is an .

17.1 Nonsolvable groups with maximal torus of dimension 1 Let G be connected and T be a maximal torus. Assume dim T = 1. (This is called a group of rank 1.) If G is solvable, it is a semi-direct product and I can’t say much about this. Math 224 Notes 58

Theorem 17.3. If G is not solvable, then

1 → K → G → PGL2 → 1, where K◦ is unipotent, so that it is the unipotent radical. Proof. Consider G/B. We’ll prove that dim(G/B) = 1 as a variety. If we have this, B acts on G/B which is proper, and fixes a point. So G/B should be 1 P . Because the automorphisms is PGL2, we get a map G → PGL2. Then G → PGL2 is surjective because it acts transitively on the flag variety. Because K◦ is a connected algebra group, B(K◦) is unipotent. Then B(K◦) = K◦. So dim(G/B) = 1 is what we want to prove. We have W = NG(T )/ZG(T ) acts on T . So |W | ≤ 2 because T = Gm. Because W acts simply transitively on the set of Borels containing T , there are at most 2 Borels. Take a representation v ∈ V such that B = {g ∈ G : gv ∈ kv}. I can restrict to subrepresentations containing v, so we can assume that G-translates of v span V . Choose a basis e1, . . . , em such that Gm acts on ei by ni-th power of the standard character. Assume that n1 ≥ n2 ≥ · · · ≥ nm. We know that T acts on G/B, and we are interested in the T -fixed points. But G/B sites inside P(V ), P and so let’s study the Gm-fixed points in P(V ). A line i aiei can be a fixed point if and only if ai, aj 6= 0 implies ni = nj. 0 P But we also wan to look at attractors. Take an arbitrary element v = aiei 0 and look at the map Gm → P(V ) given by λ 7→ λv . By the valuative criterion, this extends to P1 → P(V ). The image of 0 and ∞ will be something like chucking away all ai except for the minimal or maximal ni. Without restriction of generality, we can assume that G,→ GL(V ). Then I claim that n1 > nm. If it wasn’t, T acts on V by scalars and T ⊆ Z(G), then G would be solvable. Now I will be playing with the fact that there are only two Borels. There exists a translate g1v such that X g1v = a1e1 + aiei n≥2 with a1 6= 0. Here, we will have

lim (λgv) = line in the space of {ei} with eigenvalue n1. λ→0 Because G/B is the space of Borel, and this is a fixed point in projective space with the T -action, it is a Borel. Now by the similar logic, we take g2 such that P g2v = amem + n≤m−1 anen and then limλ→∞ has eigenvalue nm. This is another Borel. 0 0 0 Now take V = span{e1, . . . , em−1}. Then P(V ) ⊆ P(V ). We have P(V ) ∩ 0 G/B ⊆ G/B, and we will prove that P(V ) ∩ (G/B) = B1. If we show this, hitting a by a hyperplane only reduces the dimension by at P most 1, so dim(G/B) = 1. STo show this, take aiei ∈ G/B with am = 0. P i Now limλ→∞ λ aiei must be B1 because it cannot be B2. Then it actually should be B1. Math 224 Notes 59

Theorem 17.4. Let G be connected reductive with dim(T ) = 1. Then G = PGL2 or G = SL2.

0 0 Proof. Let G = PGL2. There is a map G → G and this should induce an isomorphism G/B =∼ G0/B0 because both are P1. There is

1 K G G0 1

1 K B B0 1.

0 0 0 This gives B = T n BU → T n BU = B . The claim is that BU → BU is an isomorphism. If we believe this, we have

0 n K = ker(T → T ) = ker(Gm → Gm; t 7→ t ). The claim is that either n = 1 or n = 2. We have

0 1 → µn → G → G → 1.

Even if characteristic is nonzer, the of µn is governed by characters. So G being connected implies that µn ∈ Z(G). Take n a nontrivial −1 element in W . Then Adn(t) = t . Because µn lives in the center, this is trivial. Therefore inversion is trivial, and so n = 1 or n = 2. For n = 1, this is easy. For n = 2, we want to show that G is SL2. Take

000 ◦ 00 G = (SL2 ×G0 G)red SL2 = G

G G0.

If we can show that the left vertical is an isomorphism, we have B → B00 → B0, and then T → T 00 → T 0 because it is an isomorphism on the unipotent. Then T → T 0 is covering of degree 2 and T 00 is degree 2, so T → T 00 is an isomorphism. Then B → B00 is an isomorphism, so G → G00 is an isomorphism. To see that G000 → G is an isomorphism, we can again take Borels and tori, and see that T 000 → T is an isomorphism. This again can be seen by looking at degree. todo Math 224 Notes 60

18 April 5, 2018

Last time we did the following. ∼ Theorem 18.1. Let G be a connected nonsolvable and T = Gm. Then there exists a nonsolvable map G → PGL2 that is surjective and induces an isomor- 1 phism G/B  P . The only nontrivial case is when dim G = 2. Lemma 18.2. If dim(G) = 2 then G is solvable. Proof. Consider B ⊆ G. Either G = B and G is solvable. If dim(B) ≤ 1, then B is commutative and is contained in the center of G. So we can quotient by it and G is solvable as well.

Theorem 18.3. If G is reductive and dim T = 1, then G is either SL2 or PGL2.

We looked at G → PGL2 and looked at T → Gm. Then the Borels are 0 0 0 0 going to be B = T n BU → B = T n BU . The claim is that BU → BU is an isomorphism. We know that dim(BU ) = 1 and so BU = Ga. So we have a morphism ϕ : Ga → Ga. This shows that

X i φ = ai Frob .

n But this has to be Gm-equivariant, with some Gm → Gm given by x 7→ x . This will show that there cannot be higher Frobenius.

18.1 Groups of semi-simple rank 1 Today we will tighten our control. Let G be reductive. The rank of G is defined as dim T and the semi-simple rank is the rank of G/Z(G)◦. Theorem 18.4. Let G be reductive of semi-simple rank 1.

(1) Either [G, G] is SL2 or PGL2.

(2) If [G, G] = PGL2, then G = PGL2 ×T0. If [G, G] = SL2, then G = 1 SL2 ×T0 or G = GL2 ×T0 . Theorem 18.5. Let G be reductive. (1) [G, G] ∩ Z(G) is finite. (2) [G, G] × Z(G)◦/([G, G] ∩ Z(G)◦) → G is an isomorphism. Using the short exact sequences

1 → [G, G] → G → Z(G)◦/[G, G] ∩ Z(G)0 → 1,

1 → Z(G)0 → G → [G, G]/[G, G] ∩ Z(G)0 → 1, we get the following. Math 224 Notes 61

Corollary 18.6. (1) [G, G] is reductive. (2) G = [G, G] if and only if Z(G)◦ = 1 if and only if Z(G) is finite. (3) Z([G, G]) ⊆ Z(G). (4) Z(G)0 → G → G/[G, G] is surjective with finite kernel. (It is an .)

Proof. What we really have to prove is (4). To prove that the kernel is finite, we need to map G → T so that Z(G)0 → G → T has finite kernel. To do this, find a G on V such that G,→ GL(V ). Inside there, there is a torus Z(G)◦ and it acts on V by a finite number of characters. L Therefore we can write V as V = i Vi. Then G respects this so that so that 0 Q Z(G) → G → i GL(Vi). Then we can take the determinant

0 Y Y Z(G) ,→ G → GL(Vi) → Gm. i i This has finite kernel. Now we show that it is surjective. Define the adjoint quotient Gad = G/Z(G) and take Z(G) → G → G/[G, G] → Gad/[Gad,Gad].

The claim is that [Gad,Gad] = Gad implies that the map is surjective. If we have this fact, then Z(G) surjects onto G/[G, G]. But because G/[G, G] is connected, 0 Z(G) already surjects. We will prove that [Gad,Gad] = Gad later. Now let us prove the classification theorem of semi-simple rank 1 groups. Proof. (1) We know that G = [G, G] × Z(G)0/[G, G] ∩ Z(G)0. 0 (2) First if [G, G] = PGL2, then we have [G, G] ∩ Z(G) = 1 so we are taking 0 0 the product with Z(G) . If [G, G] = SL2, then either G/Z(G) = SL2 or PGL2. 0 0 In the first case, we also have [G, G] ∩ Z(G) = 1 and we get G = SL2 ×Z(G) . 0 0 If G/Z(G) = PGL2, then [G, G] ∩ Z(G) = µ2. Then we have

◦ ◦◦ (SL2 ×T )/µ2 = (SL2 ×Gm/µ2) × T

◦ ◦◦ because µ2 → T looks like µ2 → Gm × T with the µ2 action on Gm part.

18.2 Roots We don’t have the ABC of the structure theory of reductive groups, but we will develop them with roots. Let G be a connected group and let T be a maximal torus. Definition 18.7. A fake root ζ is a nonzero eigenvalue of the adjoint action of T on g. Math 224 Notes 62

Let α be a fake root. We write

◦ ZG((ker(α)) ) = Gα.

Suppose G is reductive. If we appeal to your picture of reductive groups, this will look like Lie(Gα) = gα ⊕ t ⊕ g−α. In general, your group is the semi-direct product

G = Ru(G) o Gred. There are several scenarios for the fake root.

1. α occurs as a root on Gred. In this case, we get

Gα = Uα o Gα,red,

◦ where Uα is the part of Ru(G) centralized by ker(α) .

2. α occurs as a root on Gred, but its rational multiple appears in Ru(G). In this case, we again have Gα = Uα o Gα,red. 0 3. α occurs on Ru(G), but its rational multiple α occurs in Gred. Then Gα = Uα o Gα0,red.

4. No rational multiple of α occurs in Gred. Then Gα = Uα × T so that Gα is solvable. Definition 18.8. A true root is a fake root that occurs on the reductive part of Gα.

Lemma 18.9. For α a true root, either Gα is solvable or Gα/Ru(Gα) is of semi-simple rank 1.

◦ Proof. Consider Gα/Ru(Gα) ⊇ T such that ker(α) is contained in the center. ◦ Consider Gα/Ru(Gα) ker(α) . todo We can consider R ⊆ X∗(T ) the set of true roots. Definition 18.10. Let Λ be a lattice and Λ∨ be the dual roots. For α, we consider sα(λ) = λ − hλ, αˇiα.A finite is a pair of sets

R ⊆ Λ,R∨ ⊆ Λ∨ and a correspondence R ↔ R∨ such that •h α, αˇi = 2,

• sα preserve R ⊆ Λ, • the subgroup that they generate Aut(Λ) is finite. Math 224 Notes 63

The claim is that the set of true roots form a root system. I need to produce coroots for you. Recall that

1 → Ru(Gα) → Gα → Gα/Ru(Gα) → 1 where Gα/Ru(Gα) is a of semi-simple rank 1. Whichever case, there is a canonical map SL2 → Gα/Ru(Gα).

Then there is a corresponding map Gm → T of tori. This we define to be ∨ αˇ ∈ Λ = X∗(T ). We can check hα, αˇi = 2 inside SL2. We can now consider the

W = NG(T )/ZG(T ).

∗ The action of W on X (T ) preserves the set of true roots. Also, sα ⊆ W (Gα) ⊆ 0 1 W by pushing ( −1 0 ) in SL2.

Proposition 18.11. W is generated by the reflections sα. Proof. We do this by induction on the dimension of G. Let w ∈ W . Because T is commutative, consider the map ϕ(t) = w(t)t−1 from T to T . Then either dim(ker(ϕ)) ≥ 1 or ker(ϕ) is finite. ◦ 0 0 If dim(ker) ≥ 1, let S = ker(ϕ) . Take G = ZG(S) so that w ∈ G . If G0 < G, then we are done by induction. If G0 = G, then S is central in G and so we can take G00 = G/S. Then we are also done by induction.

If ker(ϕ) is finite, it is an isogeny ϕ : T → T . Consider ΛQ = Λ ⊗Z Q. ϕ defines an isomorphism, and is defined by v 7→ w(v) − v. Let α ∈ R and let v ∈ ΛQ be such that w(v) − v = α. Now we claim that hα,ˇ vi = −1. If we know that, sαw(v) = sα(v + α) = v − hαvˇ iα − α = v. 0 0 Then we can set w = sαW and w has eigenvalue 1. Then we can peel this off and do induction again. Now let’s prove that hα,ˇ vi = −1. I have an invariant form, and I can average over W . Then we have 2(α, x) hα,ˇ xi = (α, α) and (v + α, v + α) = (w(v), w(v)) = (v, v) implies that hα,ˇ vi = −1. Math 224 Notes 64

19 April 10, 2018

Miraculously, we are developing the theory of roots without the structure theory of reductive groups. Let G be connected and T ⊆ G be a maximal torus. Fake roots are just the eigenvalues of T on g. For γ a character, we can take

◦ Gγ = ZG(ker(γ) ).

If you look at the reductive of Gγ , we can write ◦ ◦ 1 → ker(γ) → Gγ,red → Gγ,red/ ker(γ) → 1.

By the classification from before, there are two scenarios: either Gγ,red is Gm or it is PGL2 or SL2. We say that γ is a true root if both of the following are satisfied: (i) the second scenario occurs, ◦ Look at the adjoint action of Gγ / ker(γ) on gγ , which is a 1-dimensional torus because G/ ker(γ)◦. Then we get a short exact

0 → Lie(Ru(Gγ )) → gγ → Lie(Gγ,red) → 0.

(ii) γ appears as a character of T on Lie(Gγ,red).

We consider the Weyl group W = NG(T )/ZG(T ). For α a true root, we get ∗ an element sα ∈ W (Gα) ⊆ w. Let X (T ) be the lattice of characters of T and X∗(T ) be the lattice of cocharacters of T . For all α, there exists aα ˇ ∈ X∗(T ) such that ˇ ˇ ˇ sα(λ) = λ − hα,ˇ λiα, sα(λ) = λ − hα, λiα.ˇ ∗ ˇ for λ ∈ X (T ) and λ ∈ X∗(T ).

19.1 Examples of root systems We defined root systems last class.

Theorem 19.1. sα generate W . Example 19.2. Consider Λ = Z and Λˇ = Z. Then we can have roots ±1 and coroots ±2. Or we can have roots ±2 and coroots ±1. The first corresponds to PGL2 and the second corresponds to SL2. There are no others, because we need hα, αˇi = 2.

Example 19.3. Consider (Ga × Ga) o Gm with the action given by (5, 7) times the standard character. Then 5, 7 are fake roots in X∗(T ) =∼ Z, because in both cases, Gγ is the entire group. ∗ Example 19.4. Consider (Ga × Ga) o SL2. Then X (T ) = Z and the action is given by (1, −1). Then the eigenvalues are these two and ±2. In all the cases (except for α = 0), we have Gα = G and so the true roots are only those coming from SL2, which are ±2. For α = 0, we have Gα = ZG(Gm) so this is a fake root. You can do this for other representations, like (−2, 0, 2) o SL2. Math 224 Notes 65

Example 19.5. Consider A2, which is {α1, α2, α1 + α2, −α1, −α2, −α1 − α2}. Consider the dual basisω ˇi such that hωˇi, αji = δij. Then we will haveα ˇ1 = 2ˇω1 − ωˇ2 andα ˇ2 = −ωˇ1 + 2ˇω2. This is PGL3 and the dual is the SL3.

Example 19.6. Consider B2. Here, Λ = Zω1 ⊕ Zω2 and the roots are ±ωi and ±ω1 ± ω2. Then for α = ω1 and β = −ω1 − ω2 and we haveα ˇ = 2ˇω1 ˇ ˇ and β =ω ˇ2. In this case, Λ = Span{α, β} and Span{α,ˇ β} ⊆ Λˇ has index ∼ 2. The corresponding group and its dual are SO5 and Sp4 = Spin5. (Index 2 corresponds to this being a double cover.)

Example 19.7. We consider G2. For this, both the group and its dual are generated by the roots.

Definition 19.8. A set of positive roots is a choice of R+ ⊆ R such that there exists a ξ ∈ Λˇ or ξ ∈ Λˇ such that hα, ξˇi= 6 0 for every α ∈ R and ˇ Q R+ = {α : hα, ξi > 0}. The Weyl group acts on the choices of positive roots. We will see that it does so simply transitively. Theorem 19.9. Let G be a connected group and T ⊆ B ⊆ G. Consider the action of true roots that is contained in B. Then it is a system of positive roots. Proof. There exists a representation V and a vector v ∈ V such that {g : gv ⊆ kv} = B. Then the action is given by some character tv = λ(t)v for λ ∈ X∗(T ). We will show that α ∈ B if and only if hα,ˇ λi > 0. Math 224 Notes 66

20 April 12, 2018

We talked about root systems. We said that R+ ⊆ R is a set of positive roots if there exists a λˇ ∈ Λ∨ such that α ∈ R+ if and only if hλ,ˇ αi > 0.

Theorem 20.1. Choose T ⊆ B ⊆ G. Note that we have shown that Gα ∩ B is a Borel in Gα. Then for any root α, either α or −α is in the Borel. The claim is that {α : α ∈ B} is a system of positive roots. Proof. Let V be a representation and v such that gv ⊆ kv if and only if g ∈ B. Then tv = λ(t)v for some λ ∈ Λ = X∗(T ). We will show that α ∈ B implies hα,ˇ λi > 0. We can replace G by Gα. Then we have the uniponent radical Ru(Gα) ⊆ Gα and this acts trivially on v. So we can replace V by V Ru(Gα). Then we can replace Gα with Gα/Ru(Gα). This will come from SL2 → Gα and myα ˇ will be in SL2. Now we assume that G = SL2, and V is generated by gv for g ∈ G. Let P n1 ≥ n2 ≥ · · · ≥ nm be the eigenvalues. Because i ni = 0, we have ni > 0. Then we can look at G/B ⊆ P(V ). We can look at G/B → P(V ) given by g 7→ gv, and v should be a Gm-attractor. This shows that v has eigenvalue n1, which is positive. Here is another proof. For ξ : V → k, an equivariant line bundle on a point is the same as a G-equivariant line bundle L on G/B. Now the condition that gv generates V is equivalent to the condition that the G-equivariant V,→ Γ(G/B, L ) is injective. todo

20.1 Structure of reductive groups Theorem 20.2. Fix T ⊆ G. T (1) B⊇T Ru(B) = Ru(G).

(2) Lie(Ru(G)) is the span of the fake roots. For (1), it suffices to show that the left hand side is normal.

Corollary 20.3. Let S be a torus. If G is reductive then ZG(S) is reductive.

0 Proof. Take S ⊆ T . For B Borel subgroup of ZG(S), we have

\ 0 \ \ Ru(B ) = (Ru(B) ∩ ZG(S)) ⊆ Ru(B) = {1}, B0⊇T B⊇T B⊇T

0 where B runs over Borel subgroups of ZG(S).

Corollary 20.4. For G reductive, ZG(T ) = T .

Proof. We know that ZG(T ) is reductive. So ZG(T ) = T ×Ru(ZG(T )) = T . Corollary 20.5. If α is a root, then Math 224 Notes 67

1. dim gα = 1, 2. cα is root if and only if c = ±1.

◦ Proof. Take α to be a root and take Gα = ZG(ker(α) ). We know that gα lies ◦ in the Lie algebra of Gα but we know that this is reductive. So Gα/ ker(α) is either SL2 or PGL2. It suffices to check in this case. Let’s no prove the theorem.

Proof. We’ll first prove (2). The inclusion Lie(Ru(G)) ⊆ span is obvious. Take γ γ the eigenspace g and consider the corresponding Gγ . Then g is in the Lie algebra of Ru(G) and is contained in Gγ . In the other direction, if γ is a fake γ root then g ⊆ Lie(Ru(Gγ )) and is contained in Ru or every Borel of Gγ . T For (1), we need to show that T ⊆B Ru(B) is normal in G. The claim is L γ γ that G is generated by Gγ . This is because g = g and g ⊆ Lie(Gγ ). So it T γ suffices to show that Ru(B) is preserved under conjugation by each Gγ . T ⊆B T T If Gγ is solvable, then Gγ ∩B = Gγ for some B. Then Gγ ⊆ B ⊆ T ⊆B B. Then it is uninteresting. Suppose Gγ is not reductive. Then

1 → Ru(Gα) → Gα → Gα,red → 1 and Gα,red is generated by its two Borel subgroups that contains a given maximal T torus. Then for T ⊆ Bα ⊆ Gα we want to show that Bα preserves Ru(B). T ⊆B We want to create a unipotent subgroup Rα such that Rα ⊇ R = T ⊆B Ru(B) that contains Ru(Gα) and has dim(Rα) − dim(R) = 1. If such Rα exists, then R is necessarily normal in it by the following lemma. We define \ \ R = Ru(B),Rα = Ru(B). T ⊆B T ⊆B,α∈B

Then we need to check dim(Rα) − dim(R) = 1. But Lie(Rα)/ Lie(R) is spanned by those true roots β for which the following happen: if α ∈ B then β ∈ B. In this case, we show that this actually implies α = β. The Weyl group W acts on the set of positive root systems. A combinatorial fact is that W acts simply transitivity. So every system of positive roots is of the form {α ∈ B} for some Borel B. The next proposition solves this. Lemma 20.6. If H is connected and unipotent, and H0 ⊆ H is such that dim(H) − dim(H0) = 1, then H0 is normal.

Lemma 20.7. If H is a connected unipotent with H0 ⊆ H, if dim(H0) < 0 0 dim(H) then dim(NH (H )) > dim(H ).

0 Proof. We have some Ga ⊆ ZG(H). If H does not contain Ga, we can ad it. Else, quotient out. Math 224 Notes 68

21 April 17, 2018

We proved the following structural theorem. Let G be connected.

T ◦ Theorem 21.1. ( T ⊆B Ru(B)) = Ru(G). Also, we have M Lie(Ru(G)) = gα ⊕ Lie(Ru(ZG(T ))). α6=0 fake

Here, ZG(T ) = T × Ru(ZG(T )).

Corollary 21.2. If S ⊆ G is a torus and G is reductive, then ZG(S) is reduc- tive. If G is reductive, then ZG(T ) = T . From now on, assume G is reductive. Corollary 21.3. Z(G) ⊆ T . Corollary 21.4. If Γ ⊆ G is finite normal unipotent then Γ = {1}.

Proof. G acts on Γ by conjugation, so Γ is central. So Γ is in a torus, which means that Γ is semi-simple.

Let’s look at root systems again. Note that if R1 and R2 are two root systems, we can make a stupid root system R = R1 q R2 and Λ = Λ1 ⊕ Λ2. So there actually is another root system or rank 2, namely A1 × A1. These four A1 × A1,A2,B2,G2 are the all the rank 2 root systems. Here is how you see it. Take α, β a basis. Then we can take hα,ˇ βi ≤ 0. In this case sβ ◦ sα acts on Λ and preserves orientation. So the matrix

 −1 −hα,ˇ βi  hβ,ˇ αi hα,ˇ βihβ,ˇ αi − 1 has strictly less than 2. This shows that hα,ˇ βihβ,ˇ αi ≤ 3.

21.1 Structure of reductive groups II For G a reductive group, its Lie algebra is M M g = gα ⊕ t ⊕ g−α. α∈R+ α∈R+

α 0 Then G = ZG(ker(α) and

α Lie(G ) = gα ⊕ t ⊕ g−α.

Its will have

α α Lie([G ,G ]) = gα ⊕ Span(ˇα) ⊕ g−α. Math 224 Notes 69

Lemma 21.5. t ∈ Z(G) if and only if α(t) = 1 for all α ∈ R+. Proof. Because Gα generate G, it is enough to show this for Gα. But we know what these look like. Corollary 21.6. If Z(G)0 = {1} if and only if [G, G] = G.

α α Proof. It is enough to show that Lie([G, G]) is all of g. But gα ⊆ Lie([G ,G ]). So it suffices to show that Z(G)◦ = 1 implies that t is spanned byα ˇ. This follows from Lie(Z(G)) = (span{α})⊥. Definition 21.7. G is semi-simple if it is reductive and Z(G)◦ = 1. Theorem 21.8. If G is reductive, then

Z(G)◦ → G → G/[G, G] is a isogeny, i.e., it is a surjective map of tori with finite kernel. Corollary 21.9. Z(G) ∩ [G, G] is finite. Also,

[G, G] ∩ Z(G)◦ → G [G, G] ∩ Z(G)◦ is an isomorphism. Corollary 21.10. Z([G, G]) ⊆ Z(G). Z([G, G]) finite implies that [G, G] is semi-simple. We proved that Z(G) = {t ∈ T : α(t) = 1 for all ζ ∈ R}.

0 Corollary 21.11. X∗(Z(G)) = X∗(Z(G) ) as a subset of X∗(T ) is the set ˇ ˇ {λ ∈ X∗(T ): hλ, αi = 0}. Lemma 21.12. The map T → G → [G, G] is a surjection and X∗(G/[G, G]) as a subset of X∗(T ) is {λ : hλ, αˇi = 0}. Corollary 21.13. The following are equivalent: (1) Z(G)◦ = {1} (2) [G, G] = G (3) Span {α} = Λ Q Q (4) Span {αˇ} = Λˇ . Q Q Definition 21.14. G is said to be of adjoint type if Z(G) = 1. Lemma 21.15. G is of adjoint type if Span {α} = X∗(T ). Z For G reductive, G/Z(G) is actually adjoint type.

Definition 21.16. G is said to be simply-connected if Span{αˇ} = X∗(T ). Math 224 Notes 70

Lemma 21.17. For every G reductive there exists

Gsc → [G, G] ,→ G

◦ which is the dual picture of G  G/Z(G)  G/Z(G).

Definition 21.18. For G1,G2 reductive, we say that the map G1 → G2 is an isogeny if it is surjective and has finite kernel on the torus.

Theorem 21.19. Let G be semi-simple and G1 ⊆ G be a connected . Then there exists a unique G2 ⊆ G such that G1 × G2 → G is an isogeny.

◦ Proof. Take a Cartan T and T1 = (T ∩ C1) ⊆ T . We can take G2 =

[ZG(T1),ZG(T1)]. This is the right think to look at, because ZG1×G2 (T1) = T1 × G2. Let’s now look at Borels. Take B ⊆ G and look at the unipotent part U. Then we have M Lie(U) gα. α∈R+ α For each α, we actually have SL2 → G → G, and we can look at the image of Ga and write Uα. Proposition 21.20. The multiplication map

Y Uα → U R+ is an isomorphism of varieties equipped with a T -action.

Proof. Assume that the characters of T on Lie(U) appear with multiplicity 1. Then there exists a Ga ⊆ Z(U) such that the action of T on Ga is given by α0. Then we can look at Q α Uα U

Q U U/U . α6=α0 α α0 Now we can use the following fact.

Suppose we have φ X Y πx πy Spec A, where everything is equipped with a Gm-action, and the grading on A is positive. (This means that A0 = k and An = 0 for n < 0.) If πx, πy are smooth, φ Math 224 Notes 71

−1 −1 defines an isomorphism on the fiber πx (0) → πy (0), and the action on Y is contractible, then φ is an isomorphism. 0 00 Let G be reductive. To each B, we have B/Ru(B). Note that if g and g both conjugates B1 to B2, they define the same map B1/Ru(B1) → B2/Ru(B2). This is because the normalizer of B is itself and then the action of b ∈ B on B/Ru(B) is trivial. So all B/Ru(B) are canonically isomorphic, and this is called the abstract Cartan. Likewise, you can define the abstract Weyl group. Theorem 21.21 (Bruhat decomposition). Let X = G/B be the flag variety. Then the orbits of the diagonal action of G on X × X are in bijection with W .

Given an element w ∈ W , we are going to take (B,Bw) ⊆ X × X. Math 224 Notes 72

22 April 19, 2018

We had this notion of an abstract Cartan that is canonically attached to G. If G is reductive, we look at a Borel B and take B/Ru(B). If g1, g2 conjugates B1 to B2, they give the same isomorphism B1/Ru(B1) → B2/Ru(B2). Similarly we can define the abstract Weyl group. For T , we look at NG(T )/T . If g conjugates (T1 ⊆ B1) → (T2 ⊆ B2), we will get map Adg : NG(T1)/T1 → NG(T2)/T2. But then any normalizer of (T1 ⊆ B1) is in B1, so it is T1 itself. So we can define the abstract Weyl group like this.

22.1 Bruhat decomposition Let X be the flag variety, the variety of Borels.

Theorem 22.1. G-orbits of X × X are in bijection with W .

Proof. We need to construct the maps. Given T ⊆ B and w ∈ NG(T )/T , we consider the orbit of (B, Adw(B)) ∈ X × X. This is independent of the choice of T ⊆ B, because we are going to conjugate anyways. The construction in the other way is more remarkable. Consider B1,B2 two Borels. Then we get

(B1 ∩ B2)/Ru(B1 ∩ B2) B2/Ru(B2)

∼= ∼= B1/Ru(B1) T that doesn’t necessarily commute. Then by the following lemma, B1 ∩ B2 con- tains some maximal torus, so all the maps are isomorphisms. The noncommu- tativity is going to be given by some automorphism of T , and this is going to be the w we want. Lemma 22.2. The intersection of any two Borels contains a maximal torus. Now choose T ⊆ B.

Corollary 22.3. The B-orbits on G/B are in bijection with W = NG(T )/T . Consider Xw = Bw inside G/B. Then this looks like

w X = B/B ∩ Adw(B).

These are called Schubert cells. This is a homogeneous space, but we can say Q more things. Recall that B = T nU and we proved last time that U = α∈R+ Uα w where Uα = Ga. Then we can also write X = U/U ∩ Adw(U). Q Lemma 22.4. (a) U ∩ Adw(U) ⊆ U can be described as α,w(α)∈R+ Uα ⊆ Q α∈R+ Uα. ∼ Q (b) U/U ∩ Adw(U) = α∈R+,w(α)∈R− Uα. Math 224 Notes 73

This lemma is highly believable. At least it is evident at the level of Lie algebras. Corollary 22.5. B × B-orbits on G (with left and right multiplication) are in bijection with W : w 7→ BwB = Gw. These are called Bruhat cells.

w Q We can also write this as G = ( α∈R+,w(α)∈R− Uα)wB. Let w0 ∈ W be w0 the element such that X ⊆ W is open. Then w0 satisfies

dim(X) = dim(G) − dim(B) = dim(g) − dim(b) = |R+|.

w0 + − So dim(X ) = dim(X) means that w0 satisfies w0(R ) = R . This is called the longest element of the Weyl group. In this case,

w0 w0 G = Uw0B,X = Uw0.

22.2 Simple roots Suppose we have a root system R and pick a system of positive roots R+ ⊆ R. Definition 22.6. α ∈ R+ is simple if it cannot be written as a sum of positive roots with nonnegative coefficients α 6= α1 = α2. Let I ≡ R+ be the set of simple roots. Lemma 22.7. (1) α form a basis for Span (α) = (Λ ) . i Q [G,G] Q (2) si generates W . In the semi-simple case, we can look at

Λad = Span{α} ⊆ Λ.

We already had Λˇ sc = Span{αˇ} and so we have

Λad ⊆ Λ ⊆ Λsc.

So there are only finitely many choices for Λ.

Lemma 22.8. Λad can be uniquely recovered by the (I, hαi, αˇji).

+ Proof. We have R ⊆ Λad = Span{α}. Then we can just reflect si(αj) = αj − hαj, αˇiiαi. Then you can look at the set generated by all these reflections. Theorem 22.9. Let B be a Borel. Parabolics that contain B are in bijection with subsets of I. Math 224 Notes 74

Let J ⊆ I. We will find a parabolic PJ with M Lie(PJ ) = Lie(B) ⊕ g−β β where β lies in the positive integral span of αj for j ∈ J. This is going to be given by T ◦ PJ = Ru(PJ ) o MJ ,MJ = ZG(( j∈J ker(αj)) ). L L L Then Lie(MJ ) = β gβ ⊕ t ⊕ β g−β and Lie(Ru(PJ )) = α6=β gα. Let’s now show that these are all the parabolics. If Gm acts on Z, then we can think of Z+ = Maps ( 1,Z), which is the z ∈ Z such that lim tz exists. Gm A t→0 There is a natural map Z+ ⊆ Z. We can also think of Z0 = Maps (pt,Z). Ga Then there is a natural map Z+ → Z0 by taking the attractor, and a Z0 → Z+ by looking at the constant map. If Z = Spec A, with grading, we have Z+ = Spec A+ where A+ is A quotiented out by all the negative degree things. Then A0 is Aquotiented out by all nonzero degree things.

Lemma 22.10. (a) If Z is smooth then Z+,Z0 are smooth. 0 0 (b) If z ∈ Z , then Tz(Z ) is the direct sum of TzZ on which Gm acts triv- + ially. Tz(Z ) is the direct sum of TzZ on which Gm acts with nonnegative eigenvalues. ˇ So here is the construction. For J ⊆ I, we look at λ ∈ X+(t) such that ˇ ˇ ˇ hλ, αji = 0 for j ∈ J and hλ, αii > 0 for j∈ / J. Then λ : Gm → T . Now we + 0 define PJ = G and MJ = G . Math 224 Notes 75

23 April 24, 2018

I want to classify parabolic subgroups. Fix a Borel T ⊆ B and we want to classify the parabolics containing B.

23.1 Classification of parabolics

Theorem 23.1. There is a bijection between subsets $J \subseteq I$ and parabolics $P_J$ containing $B$, such that
$$\mathrm{Lie}(P_J) = \mathrm{Lie}(B) \oplus \bigoplus_{\alpha \in R^- \cap \mathrm{Span}\{\alpha_j : j \in J\}} \mathfrak{g}_\alpha,$$
and
$$P_J = M_J \ltimes R_u(P_J), \qquad M_J = Z_G\Big(\big(\textstyle\bigcap_{j \in J} \ker(\alpha_j)\big)^{\circ}\Big),$$
$$R_u(P_J) = \prod_{\alpha \in R^+,\ \alpha \notin \mathrm{Span}\{\alpha_j\}} G_\alpha \subseteq R_u(B) = \prod_{\alpha \in R^+} G_\alpha.$$

Our method was to look at the $\mathbb{G}_m$-action on $Z$ and look at $Z^0 \to Z^+ \hookrightarrow Z$. For example, on $Z = V^{>0} \times V^{0} \times V^{<0}$ an affine space, we will have $Z^+ = V^{>0} \times V^{0}$ and $Z^0 = V^{0}$.

Lemma 23.2. If $Z$ is smooth, then $Z^+$ and $Z^0$ are smooth. If $z \in Z^0$, then $T_zZ^0 = (T_zZ)^{0}$ and $T_zZ^+ = (T_zZ)^{\geq 0}$.

Now consider a cocharacter $\check\lambda : \mathbb{G}_m \to T$ such that $\langle\check\lambda, \alpha_j\rangle = 0$ if $j \in J$ and $\langle\check\lambda, \alpha_j\rangle > 0$ if $j \notin J$. Then we define $P_J = G^+$ (for the conjugation action of $\mathbb{G}_m$ on $G$ through $\check\lambda$). This $P_J$ is going to contain $B$, because we have
$$B^+ = B \subseteq G^+ = P_J$$
and $T_eB^+ = T_eB$ by design. Now consider $M_J = G^0$. We have maps $P_J \to M_J$ and $M_J \to P_J$. Then we have $M_J = Z_G(\check\lambda(\mathbb{G}_m))$. The claim is that

$$\ker(P_J \to M_J) = R_u(P_J).$$
This can be shown again by comparing Lie algebras.

Let us now do the other direction. Let $B \subseteq P$ be a parabolic.

Lemma 23.3. There is a very ample $G$-equivariant line bundle $\mathcal{L}$ on $G/P$.

Proof. We look at a representation $V$ of $G$ and an embedding $G/P \to \mathbb{P}(V)$, and pull back $\mathcal{O}(1)$.

Let $V = \Gamma(G/P, \mathcal{L})$. Then we have a $P$-invariant functional $V \to \mathcal{L}_1$ (the fiber at the point $1 \in G/P$), and dualizing gives $(\mathcal{L}_1)^* \hookrightarrow V^*$. The stabilizer of this line/coline is exactly $P$.

So let us be given $\ell \subseteq V$ with $\mathrm{Stab}_G(\ell) = P \supseteq B$. Let $B$ act on $\ell$ via some character $\lambda$ and consider

$$J = \{j : \langle\lambda, \check\alpha_j\rangle = 0\}.$$

I then claim that $P = P_J$. Let us first do the inclusion $P_J \subseteq P$. We know that $P_J = R_u(P_J) \rtimes M_J$, where $M_J$ is generated by $G^\alpha$ for $\alpha$ in the span of $J$. Here, $G^\alpha$ acts on $V$ with $B^\alpha$ acting on $v \in \ell$ via a character. Because

$$1 \to [G^\alpha, G^\alpha] \to G^\alpha \to G^\alpha/[G^\alpha, G^\alpha] \to 1,$$

$[G^\alpha, G^\alpha]$ receives a surjection from $SL_2$, and $G^\alpha/[G^\alpha, G^\alpha]$ receives an action from a torus, it suffices to show that $SL_2$ acts trivially on $v$. This can be done.

Now because $P_J \subseteq P$, it is enough to show that $\mathrm{Lie}(P_J) = \mathrm{Lie}(P)$. For $\alpha \in R^+$ such that $\mathfrak{g}_{-\alpha} \subseteq \mathrm{Lie}(P)$, we want to show that $\mathfrak{g}_{-\alpha} \subseteq \mathrm{Lie}(P_J)$. But $G^\alpha$ stabilizes $kv$, and the same proof on $G^\alpha$ works.

We talked about the Bruhat decomposition: the $G$-orbits on $(G/B) \times (G/B)$ are in bijection with $W$. Now we can be interested in the $G$-orbits on $(G/P_1) \times (G/P_2)$. Here $P_1$ and $P_2$ correspond to subsets $J_1 \subseteq I$ and $J_2 \subseteq I$.

Proposition 23.4. The Weyl group $W_J = W(M_J) = \langle s_j : j \in J\rangle$ is naturally a subgroup of $W$. Then the $G$-orbits are in bijection with $W_{J_1}\backslash W / W_{J_2}$.
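For instance (an illustration not in the notes), take $G = GL_n$ and $P_1$, $P_2$ the maximal parabolics stabilizing a $k$-plane and an $l$-plane, so $G/P_1 = \mathrm{Gr}(k, n)$, $G/P_2 = \mathrm{Gr}(l, n)$, $W_{J_1} = S_k \times S_{n-k}$, and $W_{J_2} = S_l \times S_{n-l}$. The $G$-orbits on $\mathrm{Gr}(k, n) \times \mathrm{Gr}(l, n)$ are classified by $\dim(V_1 \cap V_2)$, and one checks this matches the count of double cosets
$$(S_k \times S_{n-k}) \backslash S_n / (S_l \times S_{n-l}).$$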

23.2 Length

Let $w \in W$. We define its length as

$$l(w) = \#\{\alpha \in R^+ : w(\alpha) \in R^-\}.$$

There is a different way to define length. For any w ∈ W , we can write

$$w = s_{i_1} \cdots s_{i_n}.$$

Definition 23.5. We say that this is a reduced decomposition if l(w) = n.
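For example (my illustration), in $W = S_3$ with simple reflections $s_1, s_2$ we have
$$w_0 = s_1 s_2 s_1 = s_2 s_1 s_2, \qquad l(w_0) = 3 = |R^+|,$$
and both expressions are reduced decompositions, whereas $s_1 s_1 s_2 = s_2$ is not reduced.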

We will talk about the interplay between length and the Bruhat decomposition. Suppose $w = w_1 w_2$ is such that $l(w_1) + l(w_2) = l(w)$. Consider $(X \times X)^w$, the corresponding $G$-orbit on $X \times X$, where $X = G/B$. Then there is a map

$$(X \times X)^{w_1} \times_X (X \times X)^{w_2} \to X \times X.$$

Theorem 23.6. This is an isomorphism onto $(X \times X)^w$.

There is something called the Bruhat order. Then for the Schubert cells, you can prove that $X^w \subseteq \overline{X^w} \subseteq X$ and
$$\overline{X^w} = \bigcup_{w' \leq w} X^{w'}.$$
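The simplest instance (not from the lecture) is $G = SL_2$, where $X = G/B = \mathbb{P}^1$ and $W = \{e, s\}$:
$$X^e = \{\mathrm{pt}\}, \qquad X^s \cong \mathbb{A}^1, \qquad \overline{X^s} = X^s \cup X^e = \mathbb{P}^1,$$
so the Bruhat order is just $e \leq s$.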

Suppose $w = s_{i_1} \cdots s_{i_n}$ is a reduced decomposition. According to the previous theorem, we have an isomorphism

$$(X \times X)^{s_{i_1}} \times_X (X \times X)^{s_{i_2}} \times_X \cdots \times_X (X \times X)^{s_{i_n}} \to (X \times X)^{w}.$$

Then you can take closures on both sides. Then we get a resolution of singularities of $\overline{(X \times X)^w}$. Here, $\overline{(X \times X)^{s_i}}$ is going to be some $\mathbb{P}^1$-bundle fibered over the flag variety.

We know how to get a root datum from a reductive group. We want to see to what extent this classifies reductive groups. Consider the data of $T \subseteq B \subseteq G$ and a pinning, i.e., a choice of a nonzero vector in $\mathfrak{g}_{\alpha_i}$ for each $i \in I$. This category is called $\mathcal{C}_1$. There is another category $\mathcal{C}_2$ of just root data $R \subseteq \Lambda$ and $R^\vee \subseteq \Lambda^\vee$. There is a functor $\mathcal{C}_1 \to \mathcal{C}_2$.

Theorem 23.7. This is an equivalence of categories.

The pinning is needed because the group has more automorphisms than the root system. $T/Z(G)$ will act nontrivially on $G$, and this can be identified with $\prod_{i \in I} \mathbb{G}_m$ via the $\{\alpha_i\}$. Because we don't want to see pinnings, which seem artificial, we can define the following category: objects are reductive groups, and the morphisms $\mathrm{Mor}(G_1, G_2)$ are isomorphisms up to inner automorphisms. You can also define a category of groups $G$ with some enhancement.

Index

abstract Cartan, 71
abstract Weyl group, 71
adjoint type, 69
algebraic group, 4
Borel subgroup, 48
Bruhat cell, 73
Bruhat decomposition, 71
Cartan subgroup, 55
character, 24
coend, 8
distribution, 35
equivariant sheaf, 36
faithfully flat quotient, 44
flag variety, 6
Grassmannian, 43
Hilbert scheme, 25
homogeneous space, 42
ind-scheme, 29
isogeny, 70
Jordan decomposition, 18
length, 76
Lie algebra, 34
linear algebraic group, 4
maximal torus, 50
orbit, 17
parabolic subgroup, 44
Poincaré–Birkhoff–Witt theorem, 38
positive roots, 65
rank, 60
reductive group, 52
representation, 6
root, 61, 62
root system, 62
Schubert cell, 72
semi-simple, 18, 69
semi-simple rank, 60
simple root, 73
simply-connected, 69
solvable group, 45
tangent space, 34
Tannaka duality, 12
torus, 11
transitive, 15
twisted arrow category, 8
unipotent, 18, 19
unipotent radical, 52
universal enveloping algebra, 38
universal, 40
Weyl group, 63
