Linear Algebraic Groups
Fall 2015

These are notes for the graduate course Math 6690 (Linear Algebraic Groups) taught by Dr. Mahdi Asgari at Oklahoma State University in Fall 2015. The notes are taken by Pan Yan ([email protected]), who is responsible for any mistakes. If you notice any mistakes or have any comments, please let me know.

Contents

1 Root Systems (08/19) 3
2 Review of Algebraic Geometry I (08/26) 14
3 Review of Algebraic Geometry II, Introduction to Linear Algebraic Groups I (09/02) 18
4 Introduction to Linear Algebraic Groups II (09/09) 24
5 Introduction to Linear Algebraic Groups III (09/16) 30
6 Jordan Decomposition (09/23) 34
7 Commutative Linear Algebraic Groups I (09/30) 40
8 Commutative Linear Algebraic Groups II (10/07) 46
9 Derivations and Differentials (10/14) 51
10 The Lie Algebra of a Linear Algebraic Group (10/21) 56
11 Homogeneous Spaces, Quotients of Linear Algebraic Groups (10/28) 61
12 Parabolic and Borel Subgroups (11/04) 66
13 Weyl Group, Roots, and Root Datum (11/11) 72
14 More on Roots, and Reductive Groups (11/18) 79
15 Bruhat Decomposition, Parabolic Subgroups, the Isomorphism Theorem, and the Existence Theorem (12/02) 86

1 Root Systems (08/19)

Root Systems

The reference for this part is Lie Groups and Lie Algebras, Chapters 4-6, by N. Bourbaki.

Let $V$ be a finite dimensional vector space over $\mathbb{R}$. An endomorphism $s : V \to V$ is called a reflection if there exists $0 \neq a \in V$ such that $s(a) = -a$ and $s$ fixes pointwise a hyperplane (i.e., a subspace of codimension 1) in $V$. Then
\[ V = \ker(s-1) \oplus \ker(s+1) \]
and $s^2 = 1$. We write $V_s^+ = \ker(s-1)$, which is a hyperplane in $V$, and $V_s^- = \ker(s+1)$, which is just $\mathbb{R}a$.

Let $D = \operatorname{im}(1-s)$; then $\dim(D) = 1$. This implies that given $0 \neq a \in D$, there exists a nonzero linear form $a^* : V \to \mathbb{R}$ such that
\[ x - s(x) = \langle x, a^* \rangle\, a, \quad \forall x \in V, \]
where $\langle x, a^* \rangle = a^*(x)$. Conversely, given some $0 \neq a \in V$ and a linear form $a^* \neq 0$ on $V$, set
\[ s_{a,a^*}(x) = x - \langle x, a^* \rangle\, a, \quad \forall x \in V. \]
This gives an endomorphism of $V$ such that $1 - s_{a,a^*}$ is of rank 1. Note that
\begin{align*}
s_{a,a^*}^2(x) &= s_{a,a^*}\big(x - \langle x, a^* \rangle\, a\big) \\
&= x - \langle x, a^* \rangle\, a - \big\langle x - \langle x, a^* \rangle\, a,\ a^* \big\rangle\, a \\
&= x - 2\langle x, a^* \rangle\, a + \langle x, a^* \rangle \langle a, a^* \rangle\, a \\
&= x + \big(\langle a, a^* \rangle - 2\big)\, \langle x, a^* \rangle\, a.
\end{align*}
So $s_{a,a^*}$ is a reflection if and only if $\langle a, a^* \rangle = 2$, i.e., $s_{a,a^*}(a) = -a$.

WARNING: $\langle x, a^* \rangle$ is only linear in the first variable, but not the second.

Remark 1.1. (i) When $V$ is equipped with a scalar product (i.e., a non-degenerate symmetric bilinear form $B$), we can consider the so-called orthogonal reflections, i.e., the reflections $s$ for which the following equivalent conditions hold:
\[ V_s^+ \text{ and } V_s^- \text{ are perpendicular w.r.t.\ } B \iff B \text{ is invariant under } s. \]
In that case,
\[ s(x) = x - \frac{2B(x,a)}{B(a,a)}\, a. \]
(ii) A reflection $s$ determines the hyperplane uniquely, but not the choice of the nonzero $a$ (but it does in a root system, which we will talk about later).
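For a concrete illustration of these formulas, consider the following small example (the choice of $V = \mathbb{R}^2$, the standard scalar product, and $a = (1,1)$ is ours, made only for concreteness). With $B(x,y) = x_1 y_1 + x_2 y_2$ we have $B(a,a) = 2$, so the orthogonal reflection of Remark 1.1 is
\[ s(x) = x - \frac{2B(x,a)}{B(a,a)}\, a = (x_1, x_2) - (x_1 + x_2)(1,1) = (-x_2, -x_1), \]
which fixes the line $V_s^+ = \{ x : x_1 + x_2 = 0 \}$ pointwise and sends $a$ to $-a$. Moreover $x - s(x) = (x_1 + x_2)\, a$, so the associated linear form is $a^*(x) = x_1 + x_2$, and indeed $\langle a, a^* \rangle = a^*(a) = 2$, consistent with the criterion above.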
Definition 1.2. Let $V$ be a finite dimensional vector space over $\mathbb{R}$, and let $R$ be a subset of $V$. Then $R$ is called a root system in $V$ if
(i) $R$ is finite, $0 \notin R$, and $R$ spans $V$;
(ii) for any $\alpha \in R$, there is an $\alpha^\vee \in V^*$, where $V^* = \{ f : V \to \mathbb{R} \text{ linear} \}$ is the dual of $V$, such that $\langle \alpha, \alpha^\vee \rangle = 2$ and the reflection $s_{\alpha,\alpha^\vee}$ maps $R$ to $R$;
(iii) for any $\alpha \in R$, $\alpha^\vee(R) \subset \mathbb{Z}$.

Lemma 1.3. Let $V$ be a vector space over $\mathbb{R}$ and let $R$ be a finite subset of $V$ generating $V$. For any $\alpha \in R$ such that $\alpha \neq 0$, there exists at most one reflection $s$ of $V$ such that $s(\alpha) = -\alpha$ and $s(R) = R$.

Proof. Suppose there are two reflections $s$, $s'$ such that $s(\alpha) = s'(\alpha) = -\alpha$ and $s(R) = s'(R) = R$. Then $s(x) = x - f(x)\alpha$ and $s'(x) = x - g(x)\alpha$ for some linear forms $f$, $g$ on $V$. Since $s(\alpha) = s'(\alpha) = -\alpha$, we have $f(\alpha) = g(\alpha) = 2$. Then
\begin{align*}
s(s'(x)) &= s\big(x - g(x)\alpha\big) = x - g(x)\alpha - f\big(x - g(x)\alpha\big)\alpha \\
&= x - g(x)\alpha - f(x)\alpha + f(\alpha) g(x)\alpha \\
&= x - g(x)\alpha - f(x)\alpha + 2 g(x)\alpha \\
&= x + \big(g(x) - f(x)\big)\alpha,
\end{align*}
so $s \circ s'$ is a linear automorphism of $V$ with $s(s'(R)) = R$. Since $R$ is finite and spans $V$, $s \circ s'$ is of finite order, i.e.,
\[ (s \circ s')^n = (s \circ s') \circ (s \circ s') \circ \cdots \circ (s \circ s') \]
is the identity for some $n \geq 1$. Moreover, since $g(\alpha) - f(\alpha) = 0$,
\begin{align*}
(s \circ s')^2(x) &= x + \big(g(x) - f(x)\big)\alpha + \Big( g\big(x + (g(x) - f(x))\alpha\big) - f\big(x + (g(x) - f(x))\alpha\big) \Big)\alpha \\
&= x + 2\big(g(x) - f(x)\big)\alpha,
\end{align*}
and by applying the composition repeatedly, we have
\[ (s \circ s')^n(x) = x + n\big(g(x) - f(x)\big)\alpha. \]
But $(s \circ s')^n(x) = x$ for all $x \in V$; therefore $g(x) = f(x)$ for all $x$, and hence $s = s'$.

Lemma 1.3 shows that given $\alpha \in R$, there is a unique reflection $s$ of $V$ such that $s(\alpha) = -\alpha$ and $s(R) = R$. That implies $\alpha$ determines $s_{\alpha,\alpha^\vee}$ and $\alpha^\vee$ uniquely, and hence (iii) in the definition makes sense. We write $s_{\alpha,\alpha^\vee} = s_\alpha$. Then
\[ s_\alpha(x) = x - \langle x, \alpha^\vee \rangle\, \alpha, \quad \forall x \in V. \]
The elements of $R$ are called roots (of this system). The rank of the root system is the dimension of $V$. We define
\[ A(R) = \text{the (finite) group of automorphisms of } V \text{ leaving } R \text{ stable}, \]
and the Weyl group of the root system $R$ to be
\[ W = W(R) = \text{the subgroup of } A(R) \text{ generated by the } s_\alpha,\ \alpha \in R. \]

Remark 1.4. Let $R$ be a root system in $V$. Let $(x|y)$ be a symmetric bilinear form on $V$, non-degenerate and invariant under $W(R)$. We can use this form to identify $V$ with $V^*$. Now if $\alpha \in R$, then $\alpha$ is non-isotropic (i.e., $(\alpha|\alpha) \neq 0$) and
\[ \alpha^\vee = \frac{2\alpha}{(\alpha|\alpha)}. \]
This is because we saw that $(x|y)$ being invariant under $s_\alpha$ implies
\[ s_\alpha(x) = x - \frac{2(x|\alpha)}{(\alpha|\alpha)}\, \alpha. \]

Proposition 1.5. $R^\vee = \{ \alpha^\vee : \alpha \in R \}$ is a root system in $V^*$, and $\alpha^{\vee\vee} = \alpha$ for all $\alpha \in R$.

Proof. (Sketch.) For (i) in Definition 1.2, $R^\vee$ is finite and does not contain 0. To see that $R^\vee$ spans $V^*$, we use the canonical bilinear form on $V \times V^*$ to identify
\[ V_{\mathbb{Q}} = \text{the } \mathbb{Q}\text{-vector subspace of } V \text{ generated by the } \alpha \]
and
\[ V_{\mathbb{Q}}^* = \text{the } \mathbb{Q}\text{-vector subspace of } V^* \text{ generated by the } \alpha^\vee, \]
each with the dual of the other. This way, the $\alpha^\vee$ generate $V^*$.
For (ii) in Definition 1.2, $s_{\alpha,\alpha^\vee}$ is an automorphism of $V$ equipped with the root system $R$, and ${}^t(s_{\alpha,\alpha^\vee})^{-1}$ leaves $R^\vee$ stable; one can check that ${}^t(s_{\alpha,\alpha^\vee})^{-1} = s_{\alpha^\vee,\alpha}$ and $\alpha^{\vee\vee} = \alpha$.
For (iii) in Definition 1.2, note that $\langle \beta, \alpha^\vee \rangle \in \mathbb{Z}$ for all $\beta \in R$ and all $\alpha^\vee \in R^\vee$, so $R^\vee$ satisfies (iii).

Remark 1.6. $R^\vee$ is called the dual root system of $R$. The map $\alpha \mapsto \alpha^\vee$ is a bijection from $R$ to $R^\vee$ and is called the canonical bijection from $R$ to $R^\vee$.
WARNING: If $\alpha, \beta \in R$ and $\alpha + \beta \in R$, then $(\alpha+\beta)^\vee \neq \alpha^\vee + \beta^\vee$ in general.

Remark 1.7. (i) The facts $s_\alpha(\alpha) = -\alpha$ and $s_\alpha(R) \subset R$ imply $R = -R$.
(ii) It is also clear that $(-\alpha)^\vee = -\alpha^\vee$. We always have $-1 \in A(R)$, but $-1$ is not always an element of $W(R)$.
(iii) The equality ${}^t(s_{\alpha,\alpha^\vee})^{-1} = s_{\alpha^\vee,\alpha}$ implies that the map $u \mapsto {}^t u^{-1}$ is an isomorphism from $W(R)$ to $W(R^\vee)$, so we can identify these two groups via this isomorphism and simply consider $W(R)$ as acting on both $V$ and $V^*$. The same applies to $A(R)$.

First Examples

Now we give a few examples of root systems.

Example 1.8 ($A_1$): $V = \mathbb{R}e$. The root system is
\[ R = \{ \alpha = e,\ -e \}. \]
The reflection is $s_\alpha(x) = -x$, with $V_s^+ = 0$ and $V_s^- = V$. Here $A(R) = W(R) = \mathbb{Z}/2\mathbb{Z}$. The usual scalar product $(x|y) = xy$ is $W(R)$-invariant. The dual space is $V^* = \mathbb{R}e^*$, where $e^* : V \to \mathbb{R}$ is defined by $e^*(e) = 1$. Then $\alpha^\vee = 2e^*$ and $\langle \alpha, \alpha^\vee \rangle = (2e^*)(e) = 2$. The set $R^\vee = \{ \alpha^\vee = 2e^*,\ -2e^* \}$ is a root system in $V^*$, which is the dual root system of $R$. Observe that if we identify $V^*$ and $V$ via $e \leftrightarrow e^*$, then $\alpha^\vee = \frac{2\alpha}{(\alpha|\alpha)}$. See Figure 1.

Figure 1: Root system for $A_1$, Example 1.8.

Example 1.9 ($A_1$, non-reduced): $V = \mathbb{R}e$. The root system is
\[ R = \{ e,\ 2e,\ -e,\ -2e \}. \]
The dual space is $V^* = \mathbb{R}e^*$, and the dual root system is $R^\vee = \{ \pm e^*,\ \pm 2e^* \}$, with $e^\vee = 2e^*$ and $(2e)^\vee = e^*$. See Figure 2.
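For instance, one can check axioms (ii) and (iii) of Definition 1.2 directly in Example 1.9 (a routine verification, included here only for concreteness). For axiom (iii),
\[ \langle e, e^\vee \rangle = 2, \quad \langle 2e, e^\vee \rangle = 4, \quad \langle e, (2e)^\vee \rangle = 1, \quad \langle 2e, (2e)^\vee \rangle = 2, \]
all of which lie in $\mathbb{Z}$. For axiom (ii), the reflections indeed preserve $R$:
\[ s_{e,e^\vee}(2e) = 2e - \langle 2e, e^\vee \rangle\, e = -2e \in R, \qquad s_{2e,(2e)^\vee}(e) = e - \langle e, (2e)^\vee \rangle\, 2e = -e \in R. \]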
Remark 1.10. Example 1.8 and Example 1.9 are the only dimension 1 root systems for $V = \mathbb{R}$ (up to isomorphism).
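One way to see this (a standard argument, sketched here rather than taken from the lecture): let $R$ be a root system in $V = \mathbb{R}e$ and let $\alpha \in R$. If $c\alpha \in R$ for some $c \neq 0$, then since $V^*$ is one-dimensional, $(c\alpha)^\vee = \frac{1}{c}\alpha^\vee$ (this is forced by $\langle c\alpha, (c\alpha)^\vee \rangle = 2$). Axiom (iii) applied twice gives
\[ \langle c\alpha, \alpha^\vee \rangle = 2c \in \mathbb{Z} \quad \text{and} \quad \langle \alpha, (c\alpha)^\vee \rangle = \frac{2}{c} \in \mathbb{Z}, \]
so $c \in \{\pm\tfrac{1}{2}, \pm 1, \pm 2\}$. Taking $e$ to be a root of smallest length, the only possibilities are $R = \{\pm e\}$ (Example 1.8) and $R = \{\pm e, \pm 2e\}$ (Example 1.9).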