
1 Groups

Definition 1.1 (Group). A group G is a set with a binary operation m : G × G → G (written m(g, h) := gh) such that

1. ∀a, b, c ∈ G, (ab)c = a(bc)
2. ∃e ∈ G such that ∀a ∈ G, ea = ae = a
3. ∀a ∈ G, ∃a^{-1} ∈ G such that aa^{-1} = a^{-1}a = e

If only 1 is satisfied, then G is a semigroup. If only 1 and 2 are satisfied, then G is a monoid. If a structure also satisfies ∀a, b ∈ G, ab = ba, then it is abelian.

Definition 1.2 (Group Homomorphism). A function f : G → H between groups is a homomorphism if f(ab) = f(a)f(b). If the homomorphism is injective, it is a monomorphism. If the homomorphism is surjective, it is an epimorphism. If the homomorphism is bijective, it is an isomorphism.

Lemma 1.1. Let ϕ : G → H be a homomorphism. Then ϕ(e_G) = e_H and ϕ(a^{-1}) = ϕ(a)^{-1}.

Proof. ϕ(a) = ϕ(a e_G) = ϕ(a)ϕ(e_G), so ϕ(a)^{-1}ϕ(a) = ϕ(a)^{-1}ϕ(a)ϕ(e_G), i.e. e_H = e_H ϕ(e_G) = ϕ(e_G). The other part is similar.

Definition 1.3 (Kernel of a Homomorphism). The kernel of a homomorphism f : G → H is the set {a ∈ G : f(a) = e_H} and is denoted ker f.

Definition 1.4 (Subgroup). If G is a group and H ⊆ G is itself a group under G's multiplication, then H is a subgroup of G, denoted H < G. Trivially, ker f < G.

Lemma 1.2. A nonempty subset H ⊆ G is a subgroup iff ∀a, b ∈ H, ab^{-1} ∈ H.

Definition 1.5 (hom(G, H)). If G, H are groups, then hom(G, H) is the set of homomorphisms from G to H.

Definition 1.6 (Cosets). Let H < G. The left coset of H containing a is aH = {ah : h ∈ H}. Right cosets are similarly defined.

Lemma 1.3. If |H| < ∞ then |aH| = |H| = |Ha|.

Proof. Let f : H → aH : h ↦ ah. Then ah = ah′ ⇒ h = h′, so f is bijective.

Lemma 1.4. Let H < G. Then for a, b ∈ G, either aH = bH or aH ∩ bH = ∅. Also, aH = bH iff a^{-1}b ∈ H.

Proof. Suppose aH ∩ bH ≠ ∅. Then ∃h, h′ ∈ H such that ah = bh′. So a = bh′h^{-1}, thus a ∈ bH and hence aH ⊆ bH. This argument is symmetric, thus aH = bH. Then, if aH = bH we have a^{-1}b = hh′^{-1} ∈ H. Conversely, if a^{-1}b = h ∈ H, then b = ah, so b ∈ aH and aH ∩ bH ≠ ∅, so aH = bH.

Definition 1.7 (Order of g). The order of g ∈ G is the smallest positive n such that g^n = e, or ∞ if no such n exists.

Theorem 1.5 (Lagrange's Theorem). If G is a finite group, then the order of any element divides |G|.

Proof. We first check that each element has finite order. Since |G| < ∞, there must be m, n > 0 with n < m such that g^n = g^m, so g^{m−n} = e. Suppose that g has order n. Then consider H = {e, g, g^2, . . . , g^{n−1}}. The left cosets of H partition G, and each has size n, so |G| = nk, where k is the number of left cosets of H.

Definition 1.8 (Direct Product). If G, H are groups, then G × H has elements {(g, h) : g ∈ G, h ∈ H} and multiplication (g, h)(g′, h′) = (gg′, hh′).

Definition 1.9 (Normal Subgroup). A subgroup H of G is said to be normal if ∀a ∈ G, aH = Ha. We write H ⊴ G.

Lemma 1.6. N ⊴ G iff aNa^{-1} = N ∀a ∈ G iff aNa^{-1} ⊆ N ∀a ∈ G.

Proof. If N is normal, then aN = Na, so aNa^{-1} ⊆ N. Also, ∀n′ ∈ N, ∃n ∈ N such that an = n′a, so ana^{-1} = n′ ∈ N, thus n′ ∈ aNa^{-1}. Thus aNa^{-1} = N. If aNa^{-1} = N, then ∀n ∈ N, ∃n′ ∈ N such that an′a^{-1} = n, so an′ = na and thus Na ⊆ aN. This argument is symmetric, so N ⊴ G.

Definition 1.10 (Index). If H < G then the index of H in G, written [G : H], is the number of left cosets of H. This may be infinite.
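As a concrete illustration of cosets, the index, and Lagrange's theorem, here is a small Python sketch (not from the notes; the names compose, G, H are ad hoc) that enumerates the left cosets of a subgroup of S_3 and checks |G| = |H|·[G : H].

```python
from itertools import permutations

def compose(s, t):
    # (s∘t)(i) = s(t(i)); a permutation is a tuple p with p[i] the image of i
    return tuple(s[t[i]] for i in range(len(t)))

G = list(permutations(range(3)))          # S_3
H = [(0, 1, 2), (1, 0, 2)]                # subgroup generated by the transposition (0 1)

cosets = set()
for a in G:
    cosets.add(frozenset(compose(a, h) for h in H))   # the left coset aH

assert all(len(c) == len(H) for c in cosets)   # Lemma 1.3: |aH| = |H|
assert len(G) == len(H) * len(cosets)          # Lagrange: |G| = |H| * [G:H]
print("[G:H] =", len(cosets))                  # 3
```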

Theorem 1.7. If N ⊴ G then the set {aN : a ∈ G} is a group of order [G : N] with operation (aN)(bN) = (ab)N.

Proof. We check that the operation is well defined: if aN = a′N and bN = b′N, then abN = a′b′N. As aN = a′N, a^{-1}a′ ∈ N, and similarly b^{-1}b′ ∈ N. Now (ab)^{-1}(a′b′) = b^{-1}a^{-1}a′b′ = b^{-1}nb′ for some n ∈ N. Since N is normal there is an n′ ∈ N with nb′ = b′n′, so b^{-1}nb′ = b^{-1}b′n′ ∈ N. Thus abN = a′b′N. This multiplication is associative because G is a group, and the other group properties follow similarly.

Definition 1.11 (Quotient Group). The group in the theorem above is called G/N.

Note: if f : G → H is a homomorphism, then ker f ⊴ G.

Theorem 1.8. If f : G → H is a homomorphism and N ⊴ G with N ⊆ ker f, then ∃! f̃ : G/N → H such that f̃(aN) = f(a) ∀a ∈ G. Moreover Im f̃ = Im f and ker f̃ = (ker f)/N, and f̃ is an isomorphism iff f is surjective and N = ker f.

Proof. See Hungerford.

Corollary 1.9 (First Isomorphism Theorem). If f : G → H is a group homomorphism, then G/ker f ≅ Im f.

Definition 1.12 (HK). If H and K are subsets of a group, then HK = {hk : h ∈ H, k ∈ K}.

Definition 1.13 (Join). If H, K < G then H ∨ K is the smallest subgroup of G containing both H and K. Whenever the smallest subgroup of G containing some set S is mentioned, it is the subgroup ∩H_i, where the intersection runs over all H_i < G with S ⊆ H_i.

Lemma 1.10. If N ⊴ G, K < G, then

1. N ∩ K ⊴ K
2. N ⊴ N ∨ K
3. NK = N ∨ K = KN

Proof. 1. If n ∈ N ∩ K and a ∈ K, then ana^{-1} ∈ N ∩ K, so a(N ∩ K)a^{-1} ⊆ N ∩ K, so N ∩ K ⊴ K.
2. N ⊴ G and N < N ∨ K < G, so N ⊴ N ∨ K.
3. Since N ∨ K is closed under multiplication, NK ⊆ N ∨ K. Let a ∈ N ∨ K and write a = n_1k_1 · · · n_rk_r. Since N is normal, ∀n ∈ N we can write k_jn = n′k_j for some n′ ∈ N.

So a = nk1 . . . kr = nk with n ∈ N, k ∈ K. So a ∈ NK.

Theorem 1.11 (Second Isomorphism Theorem). If K, N < G and N ⊴ G then K/(N ∩ K) ≅ NK/N.

Proof. Define a homomorphism f : K → NK/N by f(k) = kN (the composite of the inclusion ι : K → N ∨ K with the projection π : N ∨ K → NK/N). Then ker f = N ∩ K. By the first isomorphism theorem, K/(N ∩ K) ≅ Im f.

Thus, it remains to show that Im f = NK/N. Consider nkN ∈ NK/N. Since N is normal, ∃n′ such that nkN = kn′N = kN = f(k).

Theorem 1.12 (Third Isomorphism Theorem). Let K ⊴ G and H ⊴ G, with K < H. Then H/K ⊴ G/K and (G/K)/(H/K) ≅ G/H.

Proof. Define a homomorphism f : G/K → G/H by f(aK) = aH. This is well defined and a homomorphism, as K ⊆ ker(G → G/H). Also ker f = H/K, and f is surjective by definition, so by the first isomorphism theorem the result follows.

Definition 1.14 (Simple Group). A group G is a simple group if it has no proper nontrivial normal subgroups.

We now consider ways in which a group can be described:

1. Listing the elements and making a table

2. Give generators for G as a subgroup of S_n (finite groups only).
3. Give it as Aut X for some structure X.
4. Build it up from simpler groups.

5. Give generators for G as a subgroup of GL_n (giving a homomorphism f : G → GL_n is the beginning of representation theory).
6. Generators and relations.

Theorem 1.13 (Cayley). Any finite group G is isomorphic to a subgroup of Sn for some n.

Proof. Let n = |G|, and fix a bijection {1, . . . , n} → G so that G = {g_1, . . . , g_n}. Given g ∈ G, ∀i ∃j such that gg_i = g_j. Note that if gg_i = gg_k then g_i = g_k. Define σ_g : {1, . . . , n} → {1, . . . , n} by σ_g(i) = j if gg_i = g_j. This is an injection from a finite set into itself, and so a bijection. Define ϕ : G → S_n : g ↦ σ_g. One checks that σ_{gh} = σ_gσ_h, so ϕ is a homomorphism. Also, if gg_i = g_i then g = e, so σ_g ≠ 1 for g ≠ e. So ϕ is an injection, thus ϕ(G) ≅ G and is a subgroup of S_n.
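The construction in this proof can be carried out explicitly. Below is a minimal Python sketch under the assumption G = Z/4 with addition (sigma, compose, and op are hypothetical helper names): it builds σ_g for each g and checks that g ↦ σ_g is an injective homomorphism into S_n.

```python
n = 4
elements = list(range(n))                 # g_0, ..., g_3 for G = Z/4
op = lambda a, b: (a + b) % n             # the group operation

def sigma(g):
    # the permutation of indices induced by left multiplication by g
    return tuple(elements.index(op(g, elements[i])) for i in range(n))

def compose(s, t):
    return tuple(s[t[i]] for i in range(n))

for g in elements:
    for h in elements:
        assert sigma(op(g, h)) == compose(sigma(g), sigma(h))   # homomorphism
assert len({sigma(g) for g in elements}) == n                   # injective
```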

Corollary 1.14. Every finite group is isomorphic to a subgroup of GLn(R) for n = |G|.

Proof. By Cayley's Theorem, it suffices to show that S_n is isomorphic to a subgroup of GL_n(R). Define ϕ : S_n → GL_n(R) : σ ↦ A_σ, where (A_σ)_{ij} = 1 if σ(j) = i and 0 otherwise.
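A quick Python sketch of these permutation matrices, assuming the convention (A_σ)_{ij} = 1 iff σ(j) = i used above (helper names are illustrative): it checks that matrix multiplication matches composition of permutations.

```python
def perm_matrix(sigma):
    n = len(sigma)
    return [[1 if sigma[j] == i else 0 for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

s = (1, 2, 0)   # the 3-cycle 0 -> 1 -> 2 -> 0, written as sigma(j) = s[j]
t = (1, 0, 2)   # the transposition swapping 0 and 1
st = tuple(s[t[j]] for j in range(3))   # composition s∘t
assert mat_mul(perm_matrix(s), perm_matrix(t)) == perm_matrix(st)
```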

Definition 1.15 (F(X)). Given a set X, we choose a set X^{-1} which is disjoint from X and has the same cardinality, with a bijection X → X^{-1} : x ↦ x^{-1}, and we fix an element 1 ∉ X ∪ X^{-1}.

A word is a sequence (a_1, a_2, . . .) with a_i ∈ X ∪ X^{-1} ∪ {1} such that ∃N such that a_n = 1 ∀n ≥ N. A word is reduced if a_i = x implies a_{i+1}, a_{i−1} ≠ x^{-1}, and if a_n = 1 then a_m = 1 ∀m ≥ n. F(X) is the set of reduced words in X.

Lemma 1.15. F(X) is a group with respect to concatenation and reduction of reduced words.

Lemma 1.16. If G is a group and f : X → G is a map of sets, then there is a unique homomorphism f̄ : F(X) → G such that f = f̄ ∘ ι, where ι : X → F(X) is the inclusion. That is, F(X) is free in the category of groups.

Proof. Define f̄(1) = e ∈ G. If x_1^{λ_1} · · · x_n^{λ_n} is a nonempty reduced word, then f̄(x_1^{λ_1} · · · x_n^{λ_n}) = f(x_1)^{λ_1} · · · f(x_n)^{λ_n}. One checks f̄ is a homomorphism.

Corollary 1.17. Every group is a homomorphic image of a free group.

Proof. Let X be a set of generators for G, that is, no proper subgroup contains X. Let ι : X → G be the inclusion. Then ῑ : F(X) → G is a group homomorphism and is surjective. Thus, by the first isomorphism theorem, ῑ(F(X)) ≅ F(X)/ker ῑ; that is, G ≅ F(X)/ker ῑ.

Definition 1.16 (N(H)). The normal subgroup generated by a set H is the intersection, N(H), of all normal subgroups containing H.

And so, any group can be given by generators X and relations R as ⟨X | R⟩, where G ≅ F(X)/N(R). G is finitely generated if X is finite; G is finitely presented if both X and R are finite.

Definition 1.17 (Free Product of Groups). If G, H are groups, then the free product G ∗ H consists of all reduced words in G ∪ H.
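The concatenate-and-reduce multiplication of Lemma 1.15 can be sketched in a few lines of Python; the representation of letters as (symbol, ±1) pairs and the helper names are illustrative assumptions, not from the notes.

```python
def reduce_word(word):
    out = []
    for letter in word:            # letter = (symbol, +1 or -1)
        if out and out[-1][0] == letter[0] and out[-1][1] == -letter[1]:
            out.pop()              # cancel x x^{-1} or x^{-1} x
        else:
            out.append(letter)
    return out

def multiply(w1, w2):
    return reduce_word(w1 + w2)    # concatenate, then reduce

w = [('x', 1), ('y', 1)]
w_inv = [('y', -1), ('x', -1)]
assert multiply(w, w_inv) == []    # w * w^{-1} reduces to the empty word (identity)
```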

2 Structure of Groups

Recall that a free group F(X) is free on X in the category of groups. What about the category of abelian groups? If X = {x_1, . . . , x_n} then Z^n = ⊕_{i=1}^n Z is free on X in the category of abelian groups. For X infinite, we want the restricted direct sum, that is, the sum in which only finitely many terms are nonzero.

Definition 2.1 (Basis of an Abelian Group). A basis of an abelian group F is X ⊆ F such that F = ⟨X⟩ and Σ n_ix_i = 0 ⇒ n_i = 0 for all i, where n_i ∈ Z.

Lemma 2.1. F has a basis X iff F ≅ ⊕′_{x∈X} Z, where ⊕′ is the restricted direct sum.

Proof. Define a map ϕ : ⊕′_{x∈X} Z → F : a ↦ Σ_{x∈X, a_x≠0} a_xx. ϕ is a homomorphism. ϕ is also surjective, as F = ⟨X⟩. If a ∈ ker ϕ then Σ a_xx = 0 ⇒ a_x = 0 for all x, so a = 0. Thus, ϕ is an isomorphism.

Lemma 2.2. Assume G ≅ ⊕_{i=1}^n G_i, H ⊴ G, H ≅ ⊕_{i=1}^n H_i and H_i ⊴ G_i. Then G/H ≅ ⊕_{i=1}^n G_i/H_i.

Proof. Define π_i : G_i → G_i/H_i : g_i ↦ g_iH_i. Then Ππ_i : G → ⊕ G_i/H_i : (g_i) ↦ (g_iH_i) is a surjective homomorphism with kernel H.

Theorem 2.3. If X, Y are bases of an abelian group F, then |X| = |Y| = n or both are infinite. |X| is the rank of F.

Proof. We may assume |X| = n. Consider 2F = {2u : u ∈ F} < F, and consider F/2F. As F has basis X, F ≅ Z^n and 2F ≅ ⊕(2Z). By the lemma, F/2F ≅ Z^n/⊕(2Z) ≅ ⊕ Z/2Z, so |F/2F| = 2^n. If |Y| > n, the same argument gives |F/2F| > 2^n, so |Y| = |X| = n.

Theorem 2.4. If F is a free abelian group of rank n and G < F is nontrivial, then there exist a basis {x_1, . . . , x_n} of F, an r (1 ≤ r ≤ n), and d_1, . . . , d_r ∈ N with d_1 | · · · | d_r such that G is a free abelian group with basis {d_ix_i : 1 ≤ i ≤ r}.

Proof. The main ingredients of the proof are the Euclidean algorithm (given n, d ∈ Z, ∃q, r ∈ Z with 0 ≤ r < |d| and n = qd + r) and the fact that if {x_1, . . . , x_n} is a basis for Z^n, then so is {x_1 + Σ_{i=2}^n a_ix_i, x_2, . . . , x_n}.

We proceed by induction on n. Suppose n = 1. Then F ≅ Z and G ≤ F. Let d be the smallest positive element of G. Let m ∈ G and write m = qd + r, 0 ≤ r < d. As r = m − qd ∈ G and r < d, r = 0. So m = qd ∈ ⟨d⟩, so G = dZ.

Now let n > 1. Let S be the set of r ∈ Z such that there exist a basis {y_1, . . . , y_n} for F and v ∈ G with v = ry_1 + k_2y_2 + · · · + k_ny_n. As G is nontrivial, S is nonempty; moreover, since {y_2, y_1, y_3, . . . , y_n} is also a basis, the coefficients k_2, . . . , k_n lie in S as well. Let d_1 be the smallest positive element of S, which exists as G ≠ 0.

So there is a basis {y_1, . . . , y_n} for F and v ∈ G with v = d_1y_1 + Σ_{i=2}^n k_iy_i. Write k_i = d_1q_i + r_i with 0 ≤ r_i < d_1; then v = d_1(y_1 + Σ_{i=2}^n q_iy_i) + Σ r_iy_i, and {y_1 + Σ_{i=2}^n q_iy_i, y_2, . . . , y_n} is again a basis for F, so each r_i ∈ S. Since r_i < d_1, r_i = 0 ∀i. Let x_1 = y_1 + Σ_{i=2}^n q_iy_i, so v = d_1x_1.

Let H = ⟨y_2, . . . , y_n⟩ and consider G ∩ H. We know F = ⟨x_1⟩ ⊕ H. We will show that G = ⟨v⟩ ⊕ (G ∩ H).

First, ⟨v⟩ ∩ (G ∩ H) = {0}: if av = ad_1x_1 = Σ_{i=2}^n k_iy_i, then −ad_1x_1 + Σ_{i=2}^n k_iy_i = 0, so ad_1 = k_i = 0 for all i, so av = 0.

Next let g ∈ G. Write g = ax_1 + Σ_{i=2}^n k_iy_i and a = qd_1 + r with 0 ≤ r < d_1. Then g − qv ∈ G and g − qv = rx_1 + Σ k_iy_i, so r ∈ S, and as r < d_1, r = 0. So g − qv = Σ k_iy_i ∈ G ∩ H. Now ⟨v⟩ ⊕ (G ∩ H) → G : (av, g′) ↦ av + g′ is a surjective homomorphism with trivial kernel, and so an isomorphism.

Since H is free abelian of smaller rank, by induction there is a basis x_2, . . . , x_n for H such that G ∩ H = ⟨d_2x_2, . . . , d_rx_r⟩ with d_2 | · · · | d_r. Now {x_1, . . . , x_n} is a basis for F and G = ⟨d_1x_1, . . . , d_rx_r⟩. Finally, write d_2 = qd_1 + r with 0 ≤ r < d_1. Then d_1x_1 + d_2x_2 ∈ G and d_1x_1 + d_2x_2 = d_1(x_1 + qx_2) + rx_2; since {x_1 + qx_2, x_2, . . . , x_n} is a basis for F, r ∈ S, so r = 0. Thus d_1 | d_2.

Corollary 2.5 (Classification Theorem of Finitely Generated Abelian Groups). Every finitely generated abelian group is isomorphic to a finite direct sum of cyclic groups in which the finite cyclic summands, if any, have orders m_1, . . . , m_t where m_1 | · · · | m_t.

Proof. If G is generated by n > 0 elements, then there is a surjection π : Z^n → G. If π is an isomorphism, we are done. Otherwise, let K = ker π; since K ≤ Z^n, there is a basis {x_1, . . . , x_n} for Z^n such that K = ⟨d_1x_1, . . . , d_rx_r⟩ with d_i ∈ N, r ≤ n. Now G ≅ Z^n/K ≅ ⊕_{i=1}^n Z/d_iZ, with d_i = 0 for i > r. If d_i = 0, then Z/d_iZ ≅ Z; if d_i = 1, then Z/d_iZ ≅ {0}. Let m_1, . . . , m_t be, in order, the d_i > 1. Then G ≅ Z^s ⊕ Z/m_1Z ⊕ · · · ⊕ Z/m_tZ.
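To see how the decomposition in Corollary 2.5 relates to the prime-power refinement stated next, here is a rough Python sketch: each finite cyclic summand Z/m is split into prime-power cyclic pieces. The trial-division helper is an assumption made purely for illustration.

```python
def prime_power_parts(m):
    # returns the prime powers q with Z/m isomorphic to the direct sum of the Z/q
    parts, p = [], 2
    while p * p <= m:
        if m % p == 0:
            q = 1
            while m % p == 0:
                m //= p
                q *= p
            parts.append(q)
        p += 1
    if m > 1:
        parts.append(m)
    return parts

# invariant factors 2 | 12 | 60, e.g. G = Z/2 + Z/12 + Z/60
for m in (2, 12, 60):
    print(m, "->", prime_power_parts(m))   # 2->[2], 12->[4,3], 60->[4,3,5]
```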

Corollary 2.6. Z/mZ ≅ ⊕_i Z/p_i^{n_i}Z, where m = Π_i p_i^{n_i}.

Corollary 2.7. Any finitely generated abelian group is isomorphic to Z^s ⊕ ⊕_{i=1}^t Z/q_iZ with s ≥ 0 and q_i = p_i^{n_i} for p_i prime.

Definition 2.2 (Group Action). An action of a group G on a set X is a map G × X → X : (g, x) ↦ g · x such that

1. e · x = x for all x ∈ X.

2. (g_1g_2) · x = g_1 · (g_2 · x) for all g_1, g_2 ∈ G, x ∈ X.

We say that G acts on X.

Example: Let G be a group. G acts on G by left multiplication and by right multiplication by inverses, and also by conjugation. Definition 2.3 (Orbits). The orbit of x ∈ X is G · x = {g · x : g ∈ G}.

Note, if y ∈ G · x then G · x = G · y; the orbits partition X.

Definition 2.4 (Transitive Action). We say that G acts on X transitively if G · x = X for any x ∈ X.

Definition 2.5 (Conjugacy Classes). The conjugacy classes of a group G are the orbits of G acting on itself by conjugation.

Definition 2.6 (Stabilizer Subgroup). The stabilizer of x ∈ X is Gx = {g ∈ G : g · x = x}. This is also called the isotropy subgroup.

Lemma 2.8. |G · x| = [G : G_x]

Proof. [G : G_x] is the number of left cosets of G_x. Let f : gG_x ↦ g · x. To check that f is well defined, we need to check that if gG_x = hG_x then g · x = h · x. Indeed, gG_x = hG_x implies g^{-1}h ∈ G_x, thus (g^{-1}h) · x = x, so h · x = (gg^{-1})h · x = g · ((g^{-1}h) · x) = g · x.

Now we must check that f is a bijection. If g · x = h · x then (g^{-1}h) · x = x, so g^{-1}h ∈ G_x, so gG_x = hG_x; thus f is injective. If g · x ∈ G · x then gG_x ↦ g · x, so f is surjective.

Definition 2.7 (Center of G). The center of G is C(G) = {g ∈ G : gh = hg ∀h ∈ G}.

Definition 2.8 (Centralizer of x in G). The centralizer of x in G is C_G(x) = {g ∈ G : gx = xg}.

Corollary 2.9.
1. The number of elements in the conjugacy class of x ∈ G is [G : C_G(x)]. If |G| < ∞ then the size of a conjugacy class divides |G|.
2. If x_1, . . . , x_k are distinct conjugacy class representatives, then |G| = Σ_{i=1}^k [G : C_G(x_i)] = |C(G)| + Σ_{x_i∉C(G)} [G : C_G(x_i)].

Corollary 2.10. If G has order p^n then the center of G is nontrivial.

Proof. |G| = |C(G)| + Σ_{x_i∉C(G)} [G : C_G(x_i)]; p divides |G| and p divides each [G : C_G(x_i)] in the sum, thus p divides |C(G)|.

Definition 2.9. X^g = {x ∈ X : g · x = x}, and X^H = ∩_{h∈H} X^h for H < G.

Lemma 2.11 (Burnside's Lemma). Let G be a finite group acting on a finite set X. Then the number of orbits is (1/|G|) Σ_{g∈G} |X^g|.

Proof. Σ_{g∈G} |X^g| = #{(g, x) : g · x = x} = Σ_{x∈X} |G_x| = Σ_{x∈X} |G|/|G · x| = |G| Σ_{x∈X} 1/|G · x| = |G| Σ_{A an orbit} Σ_{x∈A} 1/|A| = |G| Σ_{A an orbit} 1, which is |G| times the number of orbits.
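Lemma 2.11 can be checked directly on a small example. The Python sketch below (assuming the rotation action of Z/4 on 2-colourings of a 4-bead necklace; all names are ad hoc) counts orbits both by brute force and by averaging |X^g|.

```python
from itertools import product

n, colours = 4, 2
X = list(product(range(colours), repeat=n))   # all colourings of 4 beads

def rotate(x, g):
    return x[g:] + x[:g]

# brute-force orbit count
orbits = set()
for x in X:
    orbits.add(frozenset(rotate(x, g) for g in range(n)))

# Burnside count: (1/|G|) * sum over g of |X^g|
burnside = sum(sum(1 for x in X if rotate(x, g) == x) for g in range(n)) // n

assert len(orbits) == burnside == 6
```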

Lemma 2.11 above is often referred to as Burnside's Lemma or "the lemma which is not Burnside's", as it appeared in Burnside's work in 1900 but was already known to Cauchy (1845) and Frobenius (1887).

Theorem 2.12. Let G act on a set X. Then there is a unique corresponding homomorphism τ : G → Sym(X) = S_X = Aut(X) sending g to τ_g, where τ_g(x) = g · x.

Corollary 2.13.
1. G acts on itself by conjugation; conjugation by g is therefore an automorphism.

2. τ : G → Aut G : g ↦ τ_g, where τ_g(h) = ghg^{-1}, is a homomorphism with ker τ = C(G).

Definition 2.10 (Inner Automorphism). An automorphism is an inner automorphism if it is in Im(τ), that is, if it is induced by conjugation. The set of inner automorphisms is denoted Inn(G).

Lemma 2.14. Inn(G) ⊴ Aut(G).

Proof. Set τ_g(h) = ghg^{-1}. Let ϕ ∈ Aut G. We must check that ϕτ_gϕ^{-1} is again conjugation. Let h ∈ G. Then ϕ(τ_g(ϕ^{-1}(h))) = ϕ(gkg^{-1}) where k = ϕ^{-1}(h). This is ϕ(g)ϕ(k)ϕ(g^{-1}) = ϕ(g)hϕ(g)^{-1} = τ_{ϕ(g)}(h).

Definition 2.11 (Outer Automorphisms). We define an outer automorphism to be an element of Out G = Aut G/Inn G.

Definition 2.12 (Semidirect Product). Let G, H be groups and τ : H → Aut G a homomorphism. Then we define a new group X with underlying set G × H and multiplication (g, h)(g′, h′) = (gτ_h(g′), hh′). We write X = G ⋊_τ H.

Theorem 2.15. G ⋊_τ H is a group and (G, 1) ≅ G ⊴ G ⋊_τ H.

Definition 2.13. Let G act on a set S. Then S_0 = {x ∈ S : hx = x ∀h ∈ G}.

Lemma 2.16. If G acts on a finite set S and |G| = p^n, p prime, then |S| ≡ |S_0| (mod p).

Proof. S = S_0 ∪ G·x_1 ∪ · · · ∪ G·x_r, where the x_i are such that |G·x_i| > 1. Now |G·x_i| = [G : G_{x_i}], so 1 < |G·x_i| divides |G| = p^n, hence p divides |G·x_i|. Thus |S| ≡ |S_0| (mod p).

Theorem 2.17 (Cauchy). If p prime divides |G|, then G has an element of order p.

Proof. Consider S = {(a_1, . . . , a_p) : a_1a_2 · · · a_p = e}. Let Z_p act on S by k · (a_1, . . . , a_p) = (a_{k+1}, . . . , a_p, a_1, . . . , a_k). This does map S to S, as a_1 · · · a_p = e ⇒ a_2 · · · a_p = a_1^{-1} ⇒ a_2 · · · a_pa_1 = e. Then S_0 = {(a, . . . , a) : a^p = e}, and |S_0| ≥ 1, as (e, . . . , e) ∈ S_0.

By the lemma, |S_0| ≡ |S| (mod p), and |S| = |G|^{p−1} ≡ 0 (mod p). As S_0 ≠ ∅, we have |S_0| = np for some n > 0. Thus ∃a ≠ e such that a^p = e, and |a| = p as p is prime.

Definition 2.14 (p-groups). A group in which every element has order a power of p, p prime, is called a p-group.

Theorem 2.18. If |G| < ∞, then G is a p-group iff |G| = p^n for some n.

Proof. Necessity follows from Cauchy's theorem, sufficiency from Lagrange's theorem.
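The counting in the proof of Cauchy's theorem can be verified for G = S_3 and p = 3. The following sketch, with ad hoc helpers, builds the set S of triples multiplying to e and inspects the fixed points of the cyclic-shift action.

```python
from itertools import permutations, product

def compose(s, t):
    return tuple(s[t[i]] for i in range(3))

G = list(permutations(range(3)))
e = (0, 1, 2)

S = [(a, b, c) for a, b, c in product(G, repeat=3) if compose(compose(a, b), c) == e]
S0 = [(a, b, c) for (a, b, c) in S if a == b == c]   # fixed by cyclic shifts

assert len(S) == len(G) ** 2             # 36, divisible by p = 3
assert len(S0) % 3 == 0                  # here |S_0| = 3: e and the two 3-cycles
assert any(a != e for (a, _, _) in S0)   # hence an element of order 3 exists
```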

Definition 2.15 (Normalizer). The normalizer of a subgroup H of G is N_G(H) = {g ∈ G : gHg^{-1} = H}. This is, in fact, a subgroup containing H, and if N_G(H) = G then H ⊴ G.

Lemma 2.19. If H is a p-subgroup of a finite group G then [N_G(H) : H] ≡ [G : H] (mod p).

Proof. Let S be the set of left cosets of H in G. H acts on S by left translation, h · aH = haH, and |S| = [G : H]. Now xH ∈ S_0 ⟺ hxH = xH ∀h ∈ H ⟺ x^{-1}hxH = H ∀h ∈ H ⟺ x^{-1}Hx = H ⟺ x ∈ N_G(H), so |S_0| = [N_G(H) : H]. By the previous lemma and the fact that |H| = p^n and H acts on S, we have |S| ≡ |S_0| (mod p), so [G : H] ≡ [N_G(H) : H] (mod p).

Corollary 2.20. If H is a p-subgroup of G such that p | [G : H], then N_G(H) ≠ H.

Proof. [N_G(H) : H] ≡ [G : H] ≡ 0 (mod p) and [N_G(H) : H] ≥ 1, so [N_G(H) : H] ≥ p > 1, thus N_G(H) ≠ H.

Definition 2.16 (Sylow p-subgroup). If p is prime, then a Sylow p-subgroup of G is a maximal p-subgroup of G. That is, if P is a Sylow p-subgroup, P ≤ H ≤ G and |H| = p^n, then H = P.

Theorem 2.21 (First Sylow Theorem). Let G be a group of order p^nm with n ≥ 1 and (p, m) = 1. Then G contains a subgroup of order p^i for 1 ≤ i ≤ n, and each subgroup of order p^i, i < n, is normal in some subgroup of order p^{i+1}.

Proof. By induction on i, with Cauchy's theorem as the base case. Suppose H is a subgroup of G of order p^i, i < n. Then [G : H] = |G|/|H|, so p | [G : H], and thus N_G(H) ≠ H. Moreover 1 < |N_G(H)/H| = [N_G(H) : H] ≡ [G : H] ≡ 0 (mod p), so p divides |N_G(H)/H|. By Cauchy, there is a subgroup H′ ≤ N_G(H)/H with |H′| = p. Let K = {g ∈ N_G(H) : gH ∈ H′} ≤ N_G(H) ≤ G. Then |K| = p^{i+1} and H ⊴ K.

Corollary 2.22. All Sylow p-subgroups have order p^n, where |G| = p^nm, (p, m) = 1.

Theorem 2.23 (Second Sylow Theorem). If H is a p-subgroup of a finite group G and P is any Sylow p-subgroup, then ∃x ∈ G such that H ≤ xPx^{-1}. In particular, all Sylow p-subgroups are conjugate. Note that if there is exactly one Sylow p-subgroup for some p, then it is normal.

Theorem 2.24 (Third Sylow Theorem). If G is a finite group and p prime, then the number of Sylow p-subgroups divides |G| and is congruent to 1 mod p.

Corollary 2.25. If |G| = p^2q^2 with p, q prime, p, p^2 ≢ 1 (mod q) and q, q^2 ≢ 1 (mod p), then G is abelian.
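The third Sylow theorem already restricts the possible number of Sylow p-subgroups sharply. A small sketch (the function name is hypothetical) lists the candidates for |G| = 12.

```python
def sylow_count_candidates(order, p):
    # divisors of |G| that are congruent to 1 mod p (Theorem 2.24)
    return [d for d in range(1, order + 1)
            if order % d == 0 and d % p == 1]

print(sylow_count_candidates(12, 2))   # [1, 3]
print(sylow_count_candidates(12, 3))   # [1, 4]
```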

Definition 2.17 (Even Permutation). A permutation τ ∈ S_n is even if it can be written as a product of an even number of transpositions; odd permutations are defined similarly. Note: each permutation is either even or odd, not both. This is based on the effect of τ on the Vandermonde determinant Δ = det(x_i^{j−1})_{1≤i,j≤n} = Π_{i<j}(x_j − x_i): permuting the variables by τ sends Δ to ±Δ, and the sign depends only on τ.

Definition 2.18 (Sign of a Permutation). The sign of τ ∈ S_n is +1 if τ is even and −1 if τ is odd.
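One concrete way to compute the sign, consistent with the action on Δ described above, is to count inversions; each inversion flips the sign of one factor of Δ. A brief sketch:

```python
def sign(tau):
    inversions = sum(1 for i in range(len(tau))
                       for j in range(i + 1, len(tau))
                       if tau[i] > tau[j])
    return 1 if inversions % 2 == 0 else -1

assert sign((1, 0, 2)) == -1   # a transposition is odd
assert sign((1, 2, 0)) == 1    # a 3-cycle is a product of two transpositions
```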

Definition 2.19 (Alternating Group). A_n = {τ ∈ S_n : τ is even}.

A_n is a group of order n!/2. Indeed A_n = ker ϕ, where ϕ : S_n → {±1} is the sign map, so A_n ⊴ S_n.

Lemma 2.26. Let r ≠ s ∈ [n]. Then A_n = ⟨(rsk) : 1 ≤ k ≤ n, k ≠ r, k ≠ s⟩ = H.

Proof. Assume n > 3. Since every element of A_n is a product of an even number of transpositions, it suffices to show that (ab)(cd), (ab)(ac) ∈ H. Now (ab)(cd) = (acb)(acd) and (ab)(ac) = (acb), so we just need all 3-cycles to lie in H. This follows by brute force.

Corollary 2.27. If N is a normal subgroup of A_n and N contains a 3-cycle, then N = A_n.

Proof. Suppose (rsc) ∈ N. Then (rsk) = (rs)(ck)(rsc)^2(ck)(rs) ∈ N, since it has the form a(rsc)^2a^{-1} with a = (rs)(ck) and N is normal. So (rsk) ∈ N for 1 ≤ k ≤ n, k ≠ r, s, so A_n ⊆ N. Thus N = A_n.

Theorem 2.28. Let n ≥ 5. Then An is simple.

Proof. Let N be a proper nontrivial normal subgroup of A_n. By Corollary 2.27, N contains no 3-cycles.

Suppose first that N contains an element π = c_1 · · · c_k (disjoint cycles) in which c_1 is a cycle of length r ≥ 4. Write c_1 = (a_1 · · · a_r), π = c_1τ with τ = c_2 · · · c_k, and let δ = (a_1a_2a_3). Then π^{-1}(δπδ^{-1}) ∈ N, as N is normal. But π^{-1}(δπδ^{-1}) = c_1^{-1}(δc_1δ^{-1}) = (a_1a_3a_r), a 3-cycle, a contradiction.

Next suppose π ∈ N has π = c_1c_2τ where c_1, c_2 are disjoint 3-cycles, say c_1c_2 = (a_1a_2a_3)(a_4a_5a_6), and let δ = (a_1a_2a_4). Then π^{-1}(δπδ^{-1}) ∈ N and equals (a_1a_4a_2a_6a_3), a 5-cycle, which reduces to the previous case.

Thus every element of N is a product of at most one 3-cycle and disjoint transpositions. If π = (a_1a_2a_3)τ ∈ N with τ a product of disjoint transpositions, then π^2 = (a_1a_2a_3)τ(a_1a_2a_3)τ = (a_1a_2a_3)^2 = (a_1a_3a_2) ∈ N, again a 3-cycle. So every element of N is a product of disjoint transpositions.

Finally, choose σ ∈ N with σ = (a_1a_2)(a_3a_4)τ and let δ = (a_1a_2a_3). Then σ^{-1}(δσδ^{-1}) ∈ N and equals σ′ = (a_1a_3)(a_2a_4). Since n ≥ 5, ∃b ∉ {a_1, a_2, a_3, a_4}; let ζ = (a_1a_3b). Then σ′(ζσ′ζ^{-1}) ∈ N and equals (a_1a_3b), a 3-cycle. Thus there is no proper nontrivial normal subgroup of A_n for n ≥ 5.

3 Rings

Definition 3.1 (Ring). A ring is a nonempty set R together with two binary operations +, · such that

1. (R, +) is an abelian group.
2. (ab)c = a(bc) for all a, b, c ∈ R.
3. a(b + c) = ab + ac and (b + c)a = ba + ca for all a, b, c ∈ R.

If there is an element 1_R such that a1_R = 1_Ra = a for all a ∈ R, then R is a ring with identity. If ab = ba for all a, b ∈ R, then R is a commutative ring.

Theorem 3.1. Let R be a ring.

1. 0a = a0 = 0 ∀a ∈ R
2. (−a)b = a(−b) = −(ab) ∀a, b ∈ R
3. (−a)(−b) = ab ∀a, b ∈ R

4. (na)b = a(nb) = n(ab)∀n ∈ Z∀a, b ∈ R

5. (Σ_{i=1}^n a_i)(Σ_{j=1}^n b_j) = Σ_{i=1}^n Σ_{j=1}^n a_ib_j ∀a_i, b_j ∈ R.

Definition 3.2 (Zero Divisors). A nonzero element a in a ring R is a left (respectively right) zero divisor if ∃b ≠ 0 in R such that ab = 0 (respectively ba = 0). a is a zero divisor if it is both a left and a right zero divisor.

WARNING: If R has zero divisors, you cannot automatically cancel in multiplication; i.e., ab = ac does not imply b = c.

Definition 3.3 (Unit). An element a ∈ R, R a ring with identity, is a left (respectively right) unit if ∃c ∈ R such that ca = 1_R (respectively ∃b ∈ R such that ab = 1_R). The element c is called a left inverse of a and the element b a right inverse of a.

Note: if b is a left inverse of a and c is a right inverse, then b = c.

Theorem 3.2. The set of units of R forms a group under multiplication.

Definition 3.4 (Ring Homomorphism). A ring homomorphism is a function f : R → S such that f(a + b) = f(a) + f(b) and f(ab) = f(a)f(b).

Such an f is in particular a group homomorphism from (R, +) to (S, +), so f(0) = 0. But if R, S have identity, we cannot necessarily say f(1_R) = 1_S.

We can now define the category of rings: the objects are rings and the morphisms are ring homomorphisms. We also have the subcategories of rings with identity and of commutative rings.

Definition 3.5 (Integral Domain). If R is a commutative ring with identity and no zero divisors, then R is called an integral domain.

Definition 3.6 (Division Ring). A ring R with identity (≠ 0) in which every nonzero element has an inverse is called a division ring.

Definition 3.7 (Field). A commutative division ring is a field.

Theorem 3.3. No zero divisor is a unit.

Proof. If ab = 0 with a a unit, then b = a^{-1}ab = a^{-1}0 = 0.

Definition 3.8 (Group Ring). If R is a ring and G is a group, then R(G) is the set of formal R-linear combinations of group elements with coefficients in R. Addition is componentwise and multiplication is distributive and uses the group law. Group rings are a basic object of representation theory.

Definition 3.9 (Real Quaternions). The real quaternions, H = R(Q_8).

Definition 3.10 (Endomorphism Ring). Let A be an abelian group. Then End(A) = hom(A, A) is a ring with addition (f + g)(a) = f(a) + g(a) and composition of functions as multiplication.

Definition 3.11 (Ideal). A subset I of R is a left (right) ideal if it is an additive subgroup and rx ∈ I (respectively xr ∈ I) for all r ∈ R, x ∈ I. An ideal I is both a left ideal and a right ideal.

Note: If R has 1_R and I contains a unit, then I = R.

Theorem 3.4. If R is a ring and I is an ideal, then the additive quotient group R/I is a ring with multiplication (a + I)(b + I) = ab + I. If R has identity, then so does R/I. If R is commutative, then so is R/I.

Proof. We need to show that multiplication is well defined. Suppose a + I = a′ + I and b + I = b′ + I. Then a′ = a + i and b′ = b + i′ for some i, i′ ∈ I, so a′b′ = ab + ai′ + ib + ii′ ∈ ab + I. The other parts follow from the definitions.

Theorem 3.5. If f : R → S is a ring homomorphism, then ker f = {r ∈ R : f(r) = 0} is an ideal in R.

Proof. We know that ker f is an additive subgroup. If x ∈ ker f and r ∈ R, then f(rx) = f(r)f(x) = f(r)0_S = 0_S, and similarly f(xr) = 0_S. Conversely, if I is an ideal, then π : R → R/I : r ↦ r + I is a ring homomorphism with ker π = I.

Theorem 3.6 (First, Second and Third Isomorphism Theorems).
1. If f : R → S is a ring homomorphism, then R/ker f ≅ Im f.

2. If I, J are ideals of R, then I/(I ∩ J) ≅ (I + J)/J.
3. If I ⊆ J are ideals, then J/I is an ideal of R/I and (R/I)/(J/I) ≅ R/J.

Lemma 3.7. I is a left ideal of R iff ∀a, b ∈ I, r ∈ R we have a − b ∈ I and ra ∈ I.

Lemma 3.8. If {A_i}_{i∈I} are ideals of R, then ∩A_i is an ideal of R.

Definition 3.12 (Ideal Generated by X). If X ⊆ R then the ideal generated by X is the intersection of all ideals in R containing X. We write ⟨X⟩.

Definition 3.13 (PID). An integral domain in which every ideal is generated by one element is a principal ideal domain, or PID.

Definition 3.14. If A, B ⊆ R then AB = {a_1b_1 + · · · + a_rb_r : a_i ∈ A, b_i ∈ B, r ∈ N}. If A, B are ideals, then AB ⊆ A ∩ B.

Definition 3.15 (Prime Ideal). An ideal P ⊆ R is prime if AB ⊆ P ⇒ A ⊆ P or B ⊆ P for A, B ideals of R.

Theorem 3.9. If P is an ideal in a ring R, P ≠ R, and ∀a, b ∈ R we have ab ∈ P ⇒ a ∈ P or b ∈ P, then P is prime. If P is prime and R is commutative, then the converse holds.

Proof. Suppose AB ⊆ P and A ⊄ P for A, B ideals. Let a ∈ A \ P. Then ∀b ∈ B, ab ∈ AB ⊆ P and a ∉ P, so b ∈ P. Thus B ⊆ P, so P is prime.

Now let R be a commutative ring and suppose ab ∈ P, P prime. Consider ⟨a⟩, ⟨b⟩. Since R is commutative, ⟨a⟩⟨b⟩ ⊆ ⟨ab⟩ ⊆ P. As P is prime, ⟨a⟩ ⊆ P or ⟨b⟩ ⊆ P, so a ∈ P or b ∈ P.

Theorem 3.10. If R is a commutative ring with identity, then P ⊆ R is prime iff R/P is an integral domain.

Proof. Suppose P is prime. We know that R/P is commutative with identity and 1_R + P ≠ 0 + P, so it is enough to show that R/P has no zero divisors. If (a + P)(b + P) = 0 + P then ab + P = 0 + P, so ab ∈ P; as R is commutative and P prime, a ∈ P or b ∈ P, so a + P = 0 + P or b + P = 0 + P. So R/P has no zero divisors.

Conversely, suppose R/P is an integral domain. Since 0 ≠ 1_R + P, we have P ≠ R. If ab ∈ P then (a + P)(b + P) = ab + P = 0 + P, so a + P or b + P is 0 + P; thus a ∈ P or b ∈ P, and P is prime.

Definition 3.16 (Maximal Ideal). An ideal M in a ring R is maximal if M ≠ R and M ⊆ N ⊆ R with N an ideal implies that N = M or N = R. That is, M is a maximal element in the poset of proper ideals of R ordered by inclusion.

Theorem 3.11. If R is a ring with identity, then every proper ideal I ⊆ R is contained in a maximal ideal M.

Proof. Consider the poset P of proper ideals J ⊆ R with I ⊆ J. Let I ≤ I_1 ≤ I_2 ≤ · · · be a chain in P and let J = ∪_{j≥1} I_j. We claim that J is a proper ideal. Let a, b ∈ J. Then a ∈ I_i, b ∈ I_j for some i, j; taking i ≤ j, a ∈ I_j as well. Thus a − b ∈ I_j ⊆ J, and ra, ar ∈ I_j ⊆ J for r ∈ R. So J is an ideal. J is proper, since 1_R ∉ I_j for all j and hence 1_R ∉ J. So J is an upper bound for the chain. By Zorn's Lemma, P has a maximal element, which is a maximal ideal containing I.

Theorem 3.12. If R is a commutative ring with identity, then every maximal ideal is prime.

Proof. Suppose M is a maximal ideal and ab ∈ M but a, b ∉ M. Consider M + ⟨a⟩ and M + ⟨b⟩; by maximality, M + ⟨a⟩ = M + ⟨b⟩ = R. As ⟨a⟩⟨b⟩ ⊆ ⟨ab⟩ ⊆ M, we get R = (M + ⟨a⟩)(M + ⟨b⟩) ⊆ MM + ⟨a⟩M + M⟨b⟩ + ⟨a⟩⟨b⟩ ⊆ M, so M = R, a contradiction.

Theorem 3.13. Let M be an ideal in R, a ring with identity. If M is maximal and R is commutative, then R/M is a field. Conversely, if R/M is a division ring, then M is maximal.

Proof. R/M is an integral domain, so it is enough to show that all nonzero elements have inverses. Consider a + M ∈ R/M where a ∉ M, and consider ⟨a⟩ + M. As M is maximal, ⟨a⟩ + M = R. So 1_R = ra + m for some m ∈ M, r ∈ R. So ra − 1_R ∈ M, thus ra + M = 1_R + M and (r + M)(a + M) = 1_R + M. So R/M is a field.

Now assume that R/M is a division ring. Then for a ∉ M, ∃r such that (a + M)(r + M) = 1_R + M, so ar − 1_R ∈ M. Suppose M is not maximal, so ∃N with M ⊊ N ⊊ R. Choose a ∈ N \ M; then ar − 1_R ∈ M ⊆ N, so 1_R ∈ N, thus N = R, a contradiction. So M is maximal.

In the category of rings, products exist: let P = Π A_i as additive groups, with product (a_i)(b_i) = (a_ib_i), that is, componentwise.

Theorem 3.14 (Chinese Remainder Theorem). If A_1, . . . , A_n are ideals in R, a ring with identity, such that A_i + A_j = R for all i ≠ j, then R/(∩A_i) ≅ Π R/A_i.

Lemma 3.15. If R^2 = R (for example, when R is a ring with identity) then M a maximal ideal implies that M is a prime ideal.

Proof. Suppose that M is maximal, and that A, B are ideals such that AB ⊆ M but A ⊄ M and B ⊄ M. Let a ∈ A \ M and b ∈ B \ M, and consider M + ⟨a⟩ and M + ⟨b⟩. As M is maximal, M + ⟨a⟩ = M + ⟨b⟩ = R. Then R = R^2 = (M + ⟨a⟩)(M + ⟨b⟩) ⊆ M, since each product (m + r)(m′ + s) with r ∈ ⟨a⟩, s ∈ ⟨b⟩ equals mm′ + ms + rm′ + rs ∈ M. So R ⊆ M, thus R = M, contradicting M being maximal.
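For R = Z, Theorem 3.14 is the classical Chinese Remainder Theorem: pairwise coprime moduli give pairwise comaximal ideals (n_i). A sketch of the inverse isomorphism, assuming the extended Euclidean algorithm (all helper names are illustrative):

```python
from math import prod

def ext_gcd(a, b):
    # returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def crt(residues, moduli):
    N = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Ni = N // m
        _, inv, _ = ext_gcd(Ni, m)   # Ni * inv ≡ 1 (mod m) since gcd(Ni, m) = 1
        x += r * Ni * inv
    return x % N

assert crt([2, 3, 2], [3, 5, 7]) == 23   # 23 ≡ 2 (mod 3), 3 (mod 5), 2 (mod 7)
```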

Definition 3.17 (Divides). A nonzero element a of a commutative ring R divides b ∈ R (written a | b) if ∃x ∈ R such that ax = b. If a | b and b | a then a and b are associates; in that case (a) = (b).

Theorem 3.16. If R is an integral domain, then a, b are associates iff a = br with r a unit.

Definition 3.18 (Irreducibles and Primes). Let R be a commutative ring with identity. c is irreducible if c is a nonzero nonunit such that c = ab ⇒ a or b is a unit. p is prime if p is a nonzero nonunit such that p | ab ⇒ p | a or p | b.

Lemma 3.17. Let R be an integral domain. Then:

1. p is prime iff ⟨p⟩ ≠ ⟨0⟩ is a prime ideal.
2. Every prime p ∈ R is irreducible.
3. c is irreducible iff ⟨c⟩ is maximal with respect to inclusion among proper principal ideals.
4. If R is a PID then c is prime iff c is irreducible.
5. Every associate of an irreducible (or prime) element is irreducible (or prime).

Proof. 1. Suppose p is prime and ab ∈ ⟨p⟩. Then p | ab, so p | a or p | b, thus a ∈ ⟨p⟩ or b ∈ ⟨p⟩. So ⟨p⟩ is prime. Conversely, suppose ⟨p⟩ is prime and p | ab. Then ab ∈ ⟨p⟩, so a ∈ ⟨p⟩ or b ∈ ⟨p⟩, so p | a or p | b.

2. If the prime p equals ab, then p | a or p | b; WLOG p | a, so a = pc. Then p = pcb, so cb = 1 and b is a unit.

3. If c is irreducible, suppose ⟨c⟩ ⊊ ⟨d⟩, so c = de for some e ∈ R. As ⟨c⟩ ⊊ ⟨d⟩, e is not a unit; but c is irreducible, so d is a unit, thus ⟨d⟩ = R.

4. Suppose R is a PID. Then c irreducible means, by 3, that ⟨c⟩ is a maximal ideal, hence prime, hence c is prime; the converse is 2.
5. If c = du with u a unit, then ⟨c⟩ = ⟨d⟩.

Definition 3.19 (Unique Factorization Domain). An integral domain R is a unique factorization domain (UFD) if every nonzero nonunit can be written a = c_1 · · · c_k with the c_i irreducible, and if a = c_1 · · · c_k = d_1 · · · d_m with c_i, d_j irreducible, then k = m and, up to reordering, c_i and d_i are associates.

Definition 3.20 (Noetherian Ring). We say that a ring R is Noetherian if it satisfies the ascending chain condition: if A_1 ⊆ A_2 ⊆ · · · is an ascending chain of ideals, then ∃n such that A_n = A_m for all m ≥ n.

Lemma 3.18. If R is a PID then R is Noetherian.

Proof. Let ⟨a_1⟩ ⊆ ⟨a_2⟩ ⊆ · · · be a chain of ideals and let A = ∪_{i≥1}⟨a_i⟩. A is an ideal. As R is a PID, A = ⟨a⟩ for some a ∈ R. As a ∈ A, a ∈ ⟨a_n⟩ for some n. Thus ⟨a⟩ ⊆ ⟨a_n⟩ ⊆ ⟨a_j⟩ for j ≥ n, and ⟨a_n⟩ = A.

Theorem 3.19. If R is a PID then R is a UFD.

Proof. Let S be the set of nonzero nonunits in R with no irreducible factorization. Suppose a ∈ S and consider ⟨a⟩. As a is not irreducible, a = xa_1 for some nonunits x, a_1 ∈ R, so ⟨a⟩ ⊊ ⟨a_1⟩.

At least one of x, a_1 lies in S — say a_1 — else a would have an irreducible factorization. Continuing inductively, we construct an ascending chain ⟨a⟩ ⊊ ⟨a_1⟩ ⊊ ⟨a_2⟩ ⊊ · · ·, which contradicts R being Noetherian. So S is empty.

Now suppose a = c_1 · · · c_k = d_1 · · · d_ℓ with c_i, d_j irreducible. As R is a PID, irreducibles are prime. So c_1 is prime and must divide d_i for some i. Then rc_1 = d_i; but d_i is irreducible and c_1 is not a unit, so r is a unit and c_1, d_i are associates. Cancelling and proceeding inductively, we obtain uniqueness.

4 Modules

Definition 4.1 (Module). Let R be a ring. A left R-module is an abelian group (A, +) together with a map R × A → A, written (r, a) ↦ ra, such that ∀r, s ∈ R, a, b ∈ A we have

1. r(a + b) = ra + rb
2. (r + s)a = ra + sa
3. (rs)a = r(sa)

4. If 1R exists, then 1Ra = a. Sometimes, this is called a Unitary R-module.

Note that if ϕ : S → R is a ring homomorphism, then every R-module is also an S-module via sa = ϕ(s)a.

Definition 4.2 (R-module Homomorphism). A function f : A → B, where A, B are R-modules, is an R-module homomorphism if f(a + b) = f(a) + f(b) and f(ra) = rf(a). This defines the category of R-modules.

Definition 4.3 (R-submodule). Let R be a ring and A an R-module. Then B ⊆ A is an R-submodule if B < A and rb ∈ B for all r ∈ R, b ∈ B. Note that if f : A → B is an R-module homomorphism, then ker f is a submodule of A.

Lemma 4.1. If B is an R-submodule of an R-module A, then the group A/B is an R-module via r(a + B) = ra + B.

Theorem 4.2 (Isomorphism Theorems). As for groups and rings, we have the isomorphism theorems:

1. If f : A → B is an R-module homomorphism, then A/ker f ≅ Im f, a submodule of B.
2. If B, C are R-submodules of A, then B/(B ∩ C) ≅ (B + C)/C.
3. If C ⊆ B ⊆ A then B/C ⊆ A/C and (A/C)/(B/C) ≅ A/B.

Definition 4.4 (Submodule Generated by X). Let A be an R-module and X ⊆ A a subset. Then the submodule of A generated by X is the intersection of all submodules containing X.

Definition 4.5 (Products). If {A_i}_{i∈I} are R-modules then the group ⊕A_i is an R-module with action r(a_i) = (ra_i). This is a product in the category of R-modules.

Definition 4.6 (Exact Sequence). If f : A → B and g : B → C are R-module homomorphisms, then the sequence A →^f B →^g C is exact at B if ker g = Im f. A sequence is exact if it is exact at each term.

Definition 4.7 (Short Exact Sequence). A short exact sequence is an exact sequence 0 → A →^f B →^g C → 0. In any short exact sequence, f is injective and g is surjective.

Lemma 4.3 (Short Five Lemma). Let R be a ring and consider a commutative diagram of R-modules in which both rows 0 → A →^f B →^g C → 0 and 0 → A′ →^{f′} B′ →^{g′} C′ → 0 are short exact sequences and the vertical maps are α : A → A′, β : B → B′, γ : C → C′. Then

1. if α, γ are injective then β is injective;
2. if α, γ are surjective then β is surjective;
3. if α, γ are bijective then β is bijective.

Proof. The third part clearly follows from parts 1 and 2.

Part 1: Let b ∈ B with β(b) = 0. Then g′(β(b)) = 0 = γ(g(b)), and as γ is injective, g(b) = 0. As b ∈ ker g = Im f, b = f(a) for some a ∈ A. Then β(f(a)) = 0 = f′(α(a)); as f′ is injective, α(a) = 0, and as α is injective, a = 0. Thus b = f(a) = f(0) = 0, so β is injective.

Part 2: Let b′ ∈ B′ and consider g′(b′) ∈ C′. Since γ is surjective, g′(b′) = γ(c) for some c ∈ C, and as g is surjective, c = g(b) for some b ∈ B. Thus γ(g(b)) = g′(β(b)) = g′(b′), so g′(β(b) − b′) = 0 and β(b) − b′ ∈ ker g′ = Im f′, say β(b) − b′ = f′(a′) for some a′ ∈ A′. As α is surjective, a′ = α(a) for some a ∈ A. Let m = b − f(a) ∈ B. Then β(m) = β(b) − β(f(a)) = β(b) − f′(α(a)) = β(b) − f′(a′) = b′.

Theorem 4.4. Let R be a ring and let 0 → A_1 →^f B →^g A_2 → 0 be a short exact sequence of R-modules. Then the following are equivalent:

1. ∃ an R-module homomorphism h : A_2 → B such that g ∘ h = 1_{A_2}.

2. ∃ an R-module homomorphism k : B → A_1 such that k ∘ f = 1_{A_1}.
3. The short exact sequence is isomorphic to 0 → A_1 →^{ι_1} A_1 ⊕ A_2 →^{π_2} A_2 → 0, so that B ≅ A_1 ⊕ A_2.

Proof. Condition 3 implies 1 and 2 easily by comparing the rows 0 → A_1 → B → A_2 → 0 and 0 → A_1 → A_1 ⊕ A_2 → A_2 → 0 of the corresponding commutative diagram, whose vertical maps are 1_{A_1}, ϕ, 1_{A_2}.

If ϕ : A_1 ⊕ A_2 → B is the isomorphism between the rows, then we take h = ϕ ∘ ι_2 and k = π_1 ∘ ϕ^{-1}.

For 1 implies 3, define a module homomorphism ϕ : A_1 ⊕ A_2 → B by ϕ(a_1, a_2) = f(a_1) + h(a_2), and consider the diagram with rows 0 → A_1 →^{ι_1} A_1 ⊕ A_2 →^{π_2} A_2 → 0 and 0 → A_1 →^f B →^g A_2 → 0 and vertical maps 1_{A_1}, ϕ, 1_{A_2}. If this diagram commutes, then the Short Five Lemma says we are done. For a ∈ A_1, ϕ(ι_1(a)) = ϕ(a, 0) = f(a) + h(0) = f(a) = f(1_{A_1}(a)), so the first square commutes.

For the other square, take (a_1, a_2) ∈ A_1 ⊕ A_2: then 1_{A_2}(π_2(a_1, a_2)) = a_2 and g(ϕ(a_1, a_2)) = g(f(a_1) + h(a_2)) = a_2, so the diagram commutes.

If a sequence satisfies the above conditions, we say that the sequence splits; a split sequence is thus isomorphic to one of the form 0 → A → A ⊕ C → C → 0.

From now on, we will assume that rings have identity and that modules are unitary.

Definition 4.8 (Linearly Independent). A subset X of an R-module is linearly independent if for distinct x_1, . . . , x_n ∈ X and r_i ∈ R, r_1x_1 + · · · + r_nx_n = 0 ⇒ r_i = 0 for all i. If X is not linearly independent, then it is linearly dependent.

Definition 4.9 (Spans). If Y ⊆ A generates A as an R-module, then we say Y spans A.

Definition 4.10 (Basis). A set B which is linearly independent and spans A is called a basis.

Theorem 4.5. Let R be a ring with identity. The following are equivalent for an R-module F.

1. F has a nonempty basis X.
2. F ≅ ⊕_{x∈X} R, the direct sum in which all but finitely many coordinates are zero.
3. Let ι : X → F be the inclusion map; given any R-module A and a map of sets f : X → A, ∃! an R-module homomorphism f̃ : F → A such that f̃ ∘ ι = f. That is, F is free on X in the category of R-modules.

Proof. 1 ⇒ 2: Let X be a basis for F. Define ϕ : ⊕_{x∈X} R → F by ϕ((r_x)) = Σ_{x∈X} r_xx. This is an R-module homomorphism. As X is linearly independent and spanning, ϕ is injective and surjective, so it is an isomorphism.

2 ⇒ 1: Let e_x = (0, . . . , 1_R, . . . , 0), with 1_R in the x-th coordinate. Then {e_x : x ∈ X} is a basis for ⊕_{x∈X} R.

2 ⇒ 3: Let F = ⊕_{x∈X} R. Suppose f : X → A is a map of sets, A an R-module, and ι : X → F : x ↦ e_x. Then define f̃((r_x)) = Σ_{x∈X} r_xf(x).

3 ⇒ 2: F is free on X; ⊕_{x∈X} R is free on X by the above, and free objects on the same set X are isomorphic, so F ≅ ⊕_{x∈X} R.

Corollary 4.6. Every R-module A is a homomorphic image of a free R-module F, and thus isomorphic to (⊕_{x∈X} R)/B where B is some submodule of ⊕_{x∈X} R. If A is finitely generated, then we can take F to be finitely generated as well.

Proof. If (a_i)_{i∈I} is a generating set for A, then f : I → A : i ↦ a_i gives a homomorphism f̃ : ⊕_{i∈I} R → A, so A ≅ (⊕_{i∈I} R)/ker f̃.

Note: If F is a free R-module, there can be submodules which are not free R-modules. However, if R is a division ring, then every submodule of a free R-module is free.

Warning: If X, Y are bases for a free R-module F, then it is possible that |X| ≠ |Y|. However, if R is a commutative ring or a division ring, then |X| = |Y| necessarily.

Definition 4.11 (Invariant Dimension Property). Let R be a ring with identity. We say that R has the invariant dimension property if for any free R-module F, any two bases for F have the same cardinality, called rank F.

Lemma 4.7. Let R be a ring with identity, I a proper ideal, F a free R-module with basis X, and π : F → F/IF the canonical projection. Then F/IF is a free R/I-module with basis π(X) and |π(X)| = |X|.

Proof. We first show that π(X) spans F/IF. Let u + IF ∈ F/IF. Then u ∈ F, so u = Σ_{x∈X} r_xx and u + IF = Σ_{x∈X} r_xx + IF = Σ (r_x + I)(x + IF) = Σ (r_x + I)π(x). Thus π(X) spans F/IF.

Now suppose Σ (r_x + I)π(x) = 0, so Σ r_xx + IF = 0, i.e. Σ r_xx ∈ IF. Then Σ r_xx = Σ_j s_ju_j where s_j ∈ I, u_j ∈ F. As X is a basis, each u_j is a linear combination of elements of X, so Σ_j s_ju_j = Σ_{x∈X} c_xx with c_x ∈ I. So Σ r_xx = Σ c_xx; since X is a basis, r_x = c_x for all x ∈ X. So r_x ∈ I and thus r_x + I = 0 + I. So π(X) is linearly independent, and x ≠ x′ ⇒ π(x) ≠ π(x′). Thus π(X) is a basis for F/IF over R/I and |π(X)| = |X|.

Lemma 4.8. If f : R → S is a surjective ring homomorphism and S has the invariant dimension property, then so does R.

Proof. S ≅ R/I for some ideal I of R. Let F be a free R-module with bases X and Y. Then F/IF is a free S-module with bases π(X) and π(Y). As S has the invariant dimension property, |X| = |π(X)| = |π(Y)| = |Y|.

Corollary 4.9. Every commutative ring with identity has the invariant dimension property.

Proof. Let M be a maximal ideal. Then R/M is a field, which has the invariant dimension property. Thus R has the invariant dimension property.

We will now assume R is a PID.

Theorem 4.10. Let F be a free R-module, where R is a PID, and let G be a submodule of F. Then G is a free R-module and rank G ≤ rank F.

Proof. We treat the case rank F = n < ∞. Let X = {x_1, . . . , x_n} be a basis for F. Let F_i be the submodule generated by {x_1, . . . , x_i} and let G_i = G ∩ F_i; note G_i ⊆ G_{i+1}, G_i = G_{i+1} ∩ F_i, and G_n = G. Also F_{i+1}/F_i ≅ R, and G_1 ⊆ F_1 = Rx_1 ≅ R. So G_1 is isomorphic to an ideal, thus G_1 = ⟨c⟩ = Rc ≅ R if c ≠ 0. Thus G_1 ≅ 0 or R, so G_1 is free of rank ≤ 1.

Consider 0 → G_i → G_{i+1} → G_{i+1}/G_i → 0. Then G_{i+1}/G_i = G_{i+1}/(G_{i+1} ∩ F_i) ≅ (G_{i+1} + F_i)/F_i ≤ F_{i+1}/F_i, so G_{i+1}/G_i ≅ 0 or R. By induction, G_i is free of rank ≤ i. If G_{i+1}/G_i = 0 then G_{i+1} = G_i and we are done. If G_{i+1}/G_i ≅ R, then 0 → G_i → G_{i+1} → R → 0 gives rank G_{i+1} ≤ rank G_i + 1 ≤ i + 1. Hence G = G_n has rank ≤ rank F_n = n.

Definition 4.12 (Θa). Let A be a module over an integral domain R. Then for a ∈ A, let Θa = {r ∈ R : ra = 0}

Lemma 4.11. 1. Θa is an ideal in R.

2. A_t = {a ∈ A : Θ_a ≠ 0} is a submodule of A.

3. ∀a ∈ A, R/Θ_a ≅ Ra = {ra : r ∈ R}.

Proof. Part 2: Let a, b ∈ A_t. Then ∃r, s ≠ 0 in R such that ra = sb = 0. Then rs(a − b) = s(ra) − r(sb) = s·0 − r·0 = 0, and rs ≠ 0 as R is an integral domain. Also, if a ∈ A_t with sa = 0, s ≠ 0, then s(ra) = r(sa) = 0, so s ∈ Θ_{ra} ≠ {0}. So ra ∈ A_t.

Definition 4.13 (Torsion). A_t is called the torsion submodule of A. A is called a torsion module if A = A_t. A is called torsion free if A_t = 0.

Theorem 4.12.
1. Free modules over integral domains are torsion free.
2. There exists a torsion-free module A which is not free.

Proof. Part 1: Let X be a basis for a free module F. If f = Σ r_ix_i and rf = Σ rr_ix_i = 0 with r ≠ 0, then rr_i = 0 for all i, so r_i = 0 for all i and f = 0.
Part 2: Q as a Z-module.

Theorem 4.13. A finitely generated torsion-free module A over a PID R is free.

Proof. Assume A ≠ 0. Let X be a finite set of generators for A. If x ∈ X then rx = 0 ⇒ r = 0, as A is torsion free. So there is a nonempty S ⊆ X which is maximal with respect to the property that Σ_{x∈S} r_xx = 0 ⇒ r_x = 0 ∀x; that is, S is maximal with respect to "the submodule F of A generated by S is free".

Let y ∈ X \ S. Then ∃r_y, r_x ∈ R, x ∈ S, not all zero, such that r_yy + Σ_{x∈S} r_xx = 0; that is, r_yy = −Σ_{x∈S} r_xx ∈ span S = F. Note r_y ≠ 0, as otherwise Σ r_xx = 0 with some r_x ≠ 0, contradicting linear independence. Taking r to be the product of the r_y over the finitely many y ∈ X \ S, we get ra ∈ F for all a ∈ A. Consider ϕ : A → A : a ↦ ra; ϕ is an R-module homomorphism with ker ϕ = {0}, as A is torsion free. ϕ(A) ⊆ F is a submodule, so A ≅ ϕ(A), which is a submodule of a free module over a PID, hence free.

Theorem 4.14. Let A be a finitely generated module over a PID R. Then A ≅ A_t ⊕ F with F free.

Proof. Let F = A/A_t. F is finitely generated if A is. If r(a + A_t) = 0 with r ≠ 0, then ra ∈ A_t, so s(ra) = 0 for some s ≠ 0 in R. Thus (sr)a = 0 with sr ≠ 0 (R an integral domain), so a ∈ A_t. So F is torsion free, thus F is free. Hence 0 → A_t → A → F → 0 splits, so A ≅ A_t ⊕ F.

Definition 4.14 (A(p)). Let A be a torsion module over a PID R. If p ∈ R is prime, then A(p) = {a ∈ A : Θ_a = ⟨p^i⟩ for some i ∈ N} = {a ∈ A : p^ia = 0 for some i ∈ N}.

Lemma 4.15. Let A be a torsion module over a PID R. Then A(p) is a submodule of A for each prime p of R.

Proof. Let a, b ∈ A(p), so p^ia = 0 = p^jb; WLOG i ≤ j. Then p^j(a − b) = 0, so a − b ∈ A(p). Also p^i(ra) = r(p^ia) = r·0 = 0, so ra ∈ A(p).

Theorem 4.16. Let A be a torsion R-module, R a PID. Then A = ⊕ A(p), where the sum is over all distinct primes p of R. If A is finitely generated, then all but finitely many A(p) are zero.

Proof. We first show that every a ∈ A can be written a = a_1 + · · · + a_k with a_i ∈ A(p_i). Write Θ_a = ⟨r⟩ with r = up_1^{n_1} · · · p_k^{n_k}, u a unit, the p_i prime and n_i > 0. Let r_i = r/p_i^{n_i} and write ⟨r_1, . . . , r_k⟩ = ⟨d⟩ for some d ∈ R. Then d | r_i for all i, so d is a unit, and ⟨r_1, . . . , r_k⟩ = ⟨1_R⟩ = R. Thus 1_R = s_1r_1 + · · · + s_kr_k, so a = 1_Ra = Σ_{i=1}^k s_ir_ia. But p_i^{n_i}(s_ir_ia) = s_i(p_i^{n_i}r_i)a = s_ira = 0, so s_ir_ia ∈ A(p_i). Thus a = Σ a_i with a_i ∈ A(p_i), p_i prime.

Now fix a prime p ∈ R and let A_1 be the submodule of A spanned by the A(q), q ≠ p. Let a ∈ A_1 ∩ A(p). Then p^ia = 0 and a = Σ a_i with a_i ∈ A(p_i), p_i ≠ p, so ∀i ∃n_i such that p_i^{n_i}a_i = 0. Let d = Π_{i=1}^k p_i^{n_i}; then da = 0. Consider ⟨d, p^i⟩ = ⟨c⟩. Now c | d and c | p^i, so c is a unit. Thus ⟨d, p^i⟩ = ⟨1_R⟩ = R, so ∃s, t ∈ R such that sd + tp^i = 1_R. So a = 1_Ra = (sd + tp^i)a = s(da) + t(p^ia) = 0 + 0 = 0, so A_1 ∩ A(p) = {0}.

Consider the homomorphism ϕ : ⊕ A(p) → A : (a_p) ↦ Σ a_p. This is surjective by the first part and injective by the second, thus ϕ is an isomorphism.
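Theorem 4.16 can be seen concretely for the Z-module A = Z/12. The following sketch (helper names are ad hoc, not from the notes) computes the primary components A(2) and A(3) directly and checks that they account for all of A.

```python
n = 12
A = list(range(n))

def order(a):
    k = 1
    while (k * a) % n != 0:
        k += 1
    return k

def prime_factors(m):
    fs, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            fs.add(d)
            m //= d
        d += 1
    if m > 1:
        fs.add(m)
    return fs

def primary_component(p):
    # elements killed by some power of p, i.e. whose order is a power of p
    return sorted(a for a in A if prime_factors(order(a)) <= {p})

print(primary_component(2))   # [0, 3, 6, 9]  ~ Z/4
print(primary_component(3))   # [0, 4, 8]     ~ Z/3
assert len(primary_component(2)) * len(primary_component(3)) == n
```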

1. If A 6= Ra then ∃b ∈ A, b 6= 0 such that Ra ∩ Rb = {0} 2. ∃ submodule C of A such that A ' C ⊕ Ra

Proof. 1. If A ≠ Ra then ∃c ∈ A \ Ra. Since p^nc ∈ p^nA = 0 ⊆ Ra, there is a least j > 0 such that p^jc ∈ Ra (1 ≤ j ≤ n); so p^{j−1}c ∉ Ra. Write p^jc = r_1a for some r_1 ∈ R, and r_1 = rp^k with k ≥ 0 and p ∤ r; then k ≥ j ≥ 1 (apply p^{n−j} to both sides and use Θ_a = ⟨p^n⟩). Set b = p^{j−1}c − rp^{k−1}a. We will check that Ra ∩ Rb = {0}. Note b ≠ 0, as p^{j−1}c ∉ Ra but rp^{k−1}a ∈ Ra. Also note that pb = p^jc − rp^ka = r_1a − r_1a = 0. Suppose sb ∈ Ra for some s ∈ R with sb ≠ 0. Then pb = 0 and sb ≠ 0, so p ∤ s.

So p^n and s are relatively prime, so ⟨p^n, s⟩ = ⟨1_R⟩ and ∃x, y such that yp^n + xs = 1_R. Then b = 1_Rb = x(sb) + yp^nb = x(sb) ∈ Ra. But then p^{j−1}c = b + rp^{k−1}a ∈ Ra, a contradiction. So Ra ∩ Rb = {0}.

2. If A = Ra take C = 0. If A ≠ Ra, let S be the set of all submodules B of A such that Ra ∩ B = {0}. By part 1, S is nonempty; order it by ⊆. We claim that S has a maximal element.

Check that if {A_i} is an increasing chain in S then ∪A_i is a submodule lying in S, and apply Zorn's Lemma. So let C be a maximal element of S and consider A/C. We have p^n(A/C) = 0, so p^n(a + C) = 0, but p^{n−1}(a + C) ≠ 0 + C, as p^{n−1}a ≠ 0 and Ra ∩ C = {0}. Apply part 1 to A/C: either A/C = R(a + C), or ∃b + C ≠ 0 + C such that R(a + C) ∩ R(b + C) = {0 + C}. In the latter case consider C′ = C + Rb. Then Ra ∩ C′ = {0}, as Ra ∩ C = {0} and R(a + C) ∩ R(b + C) = {0 + C}, so C′ ∈ S properly contains C, a contradiction. So A/C = R(a + C), i.e. A = Ra + C, and hence A = Ra ⊕ C.

Theorem 4.18. Let A be a finitely generated module over a PID R. Then A ≅ R^k ⊕ ⊕_{i=1}^ℓ R/⟨p_i^{n_i}⟩, with the p_i prime and n_i ∈ N, not necessarily distinct.

Proof. We know that A ≅ F ⊕ A_t, with F free and A_t torsion. Let π_1 : A → F and π_2 : A → A_t be the projections. F is generated by π_1(generators of A) and A_t is generated by π_2(generators of A), so F ≅ R^k and A ≅ R^k ⊕ A_t, with A_t finitely generated.

We know A_t = ⊕_{j=1}^s A(p_j), the p_j distinct primes, and again each A(p_j) is finitely generated. It remains to check that each A(p_j) ≅ ⊕_{k=1}^m R/⟨p_j^{n_k}⟩. The proof is by induction on the number of generators of A(p_j). If this is one, then A(p_j) ≅ Rc ≅ R/Θ_c ≅ R/⟨p_j^n⟩ for some n. Now suppose the claim holds for all modules with fewer generators, and that p_j^nA(p_j) = 0, p_j^{n−1}A(p_j) ≠ 0. By the lemma, ∃a, C such that A(p_j) ≅ Ra ⊕ C and C has fewer generators; we can take a to be a generator of A(p_j). So C ≅ ⊕_{k=1}^{ℓ−1} R/⟨p_j^{n_k}⟩ and Ra ≅ R/⟨p_j^n⟩, so A(p_j) ≅ ⊕_{k=1}^ℓ R/⟨p_j^{n_k}⟩ as required.

Definition 4.15 (Middle Linear Map). Let A, B be R-modules, A a right R-module (denoted A_R) and B a left R-module (denoted _RB). Consider a map f : A × B → C into an abelian group C such that f(a_1 + a_2, b) = f(a_1, b) + f(a_2, b), f(a, b_1 + b_2) = f(a, b_1) + f(a, b_2) and f(ar, b) = f(a, rb). Then f is said to be middle linear.

We want to find a module, which we will call A⊗R B such that every middle linear map factors through it.

Definition 4.16 (Tensor Product). Let A_R and _RB be R-modules, let F be the free abelian group with basis A × B, and let K be the subgroup of F generated by the elements (a + a′, b) − (a, b) − (a′, b), (a, b + b′) − (a, b) − (a, b′), and (ar, b) − (a, rb) for a, a′ ∈ A, b, b′ ∈ B, r ∈ R. The quotient group F/K is called the tensor product and is written A ⊗_R B. The element (a, b) + K is written a ⊗ b and (0, 0) + K is written as 0.

Warning: A ⊗_R B is GENERATED by {a ⊗ b : a ∈ A, b ∈ B}; a general element is a finite sum Σ n_i(a_i ⊗ b_i).

Definition 4.17 (S, R-bimodule). A is an S, R-bimodule if A is a left S-module and a right R-module, and s(ar) = (sa)r. We denote this as _SA_R.

If B is a left R-module and A is an S, R-bimodule, then A ⊗_R B is a left S-module by s(a ⊗ b) = (sa) ⊗ b. Let f : A × B → C be a middle linear map of groups, and let π : A × B → A ⊗_R B : (a, b) ↦ a ⊗ b. π is middle linear.

Theorem 4.19. Let A_R and _RB be R-modules. If g : A × B → C is middle linear, then ∃! a homomorphism g̃ : A ⊗_R B → C such that g̃ ∘ π = g; that is, every middle linear map out of A × B factors through π : A × B → A ⊗_R B.

Proof. Let F be free on A × B. Then ∃! a homomorphism g_1 : F → C such that g_1(a, b) = g(a, b). As g is middle linear, K ⊆ ker g_1. Thus g̃ : F/K → C is well defined, and g̃(a ⊗ b) = g(a, b).

In general, if A is a finitely generated abelian group with A ≅ Z^r ⊕ A_t, then A ⊗_Z Q ≅ Q^r; that is, tensoring with Q removes torsion.

Lemma 4.20. Let A_R, _RB be R-modules, R a ring with identity. Then A ⊗_R R ≅ A as right R-modules and R ⊗_R B ≅ B as left R-modules.

Proof. Let f : A × R → A by (a, r) ↦ ar. f is middle linear, so there is an induced f̃ : A ⊗_R R → A : a ⊗ r ↦ ar. As R is an (R, R)-bimodule, A ⊗_R R is a right R-module and f̃(a ⊗ rr′) = arr′ = (ar)r′ = f̃(a ⊗ r)r′, so f̃ is an R-module homomorphism. f̃ is surjective, as f̃(a ⊗ 1_R) = a. If f̃(Σ n_i(a_i ⊗ r_i)) = 0, then Σ n_ia_ir_i = 0, i.e. Σ a_i(n_ir_i) = 0. Then Σ n_i(a_i ⊗ r_i) = Σ a_i(n_ir_i) ⊗ 1_R = (Σ a_i(n_ir_i)) ⊗ 1_R = 0 ⊗ 1_R = 0, so f̃ is injective.

Theorem 4.21.
1. Z[x] ⊗_Z R ≅ R[x]
2. Z/m ⊗_Z Z/n ≅ Z/(m, n)

3. If A_R, _RB_S, _SC are modules, then A ⊗_R B is a right S-module, B ⊗_S C is a left R-module, and (A ⊗_R B) ⊗_S C ≅ A ⊗_R (B ⊗_S C).
4. If {A_i}_{i∈I} are right R-modules and B is a left R-module, then (⊕ A_i) ⊗_R B ≅ ⊕ (A_i ⊗_R B).

5. If F, G are free R-modules with bases X and Y respectively, then F ⊗_R G is a free R-module with basis {x ⊗ y : x ∈ X, y ∈ Y}.
6. If f : A → A′ and g : B → B′ are R-module homomorphisms, then ∃! a group homomorphism A ⊗_R B → A′ ⊗_R B′, denoted f ⊗ g, such that (f ⊗ g)(a ⊗ b) = f(a) ⊗ g(b).
7. Tensor product is right exact. That is, if D is an R-module, R commutative, and A →^f B →^g C → 0 is exact, then A ⊗ D →^{f⊗1} B ⊗ D →^{g⊗1} C ⊗ D → 0 is also exact.

Definition 4.18 (k-algebra). Let k be a commutative ring with identity 1_k. A k-algebra A is a ring such that (A, +) is a left k-module and k(ab) = (ka)b = a(kb). If A is a division ring then we call it a division algebra.

Definition 4.19 (k-algebra Homomorphism). An algebra homomorphism is a ring homomorphism that is also a k-module homomorphism.

Definition 4.20 (Subalgebra). A subalgebra is a subring of A that is a k-submodule of A.

We now have a category whose objects are k-algebras and whose morphisms are k-algebra homomorphisms. There is also the category of commutative k-algebras. The subalgebra generated by X ⊆ A is the intersection of all subalgebras containing X. A is finitely generated if ∃ a finite set X such that ⟨X⟩ = A.

Theorem 4.22. If A is a finitely generated commutative k-algebra with identity, k a field, then A ≅ k[x_1, . . . , x_n]/I for some n ∈ N and some ideal I ⊆ k[x_1, . . . , x_n].

Proof. If X ⊆ A then there is a k-algebra homomorphism f : k[X] → A : x ↦ x. If A is finitely generated with generating set X, then f : k[X] → A is surjective, as Im f is a subalgebra of A containing X. So A ≅ k[X]/ker f ≅ k[X]/I as k-algebras.

5 Fields and Galois Theory

We know that the solutions to ax^2 + bx + c = 0 are x = (−b ± √(b^2 − 4ac))/2a. There are also cubic and quartic formulas. Galois Theory shows that there is no quintic formula.

Field Extensions

Definition 5.1 (Extension Field). A field F is an extension of a field K iff K is a subfield of F. In this case, we can regard F as a K-vector space. We define [F : K] = dim_K F. If [F : K] < ∞ we say F is a finite extension; otherwise we say F is an infinite extension.

For example, as each z ∈ C can be expressed uniquely as z = r + is with r, s ∈ R, we have [C : R] = 2.

Proposition 5.1 (Analogue of Lagrange's Theorem). Suppose that K ⊆ E ⊆ F are field extensions.

1. If {xi : i ∈ I} is a basis of E over K and {yj : j ∈ J} is a basis of F over E, then {xiyj : i ∈ I, j ∈ J} is a basis of F over K. 2. [F : K] = [F : E][E : K]

Proof. Let z ∈ F; then there exist α_j ∈ E, almost all zero, such that z = Σ_j α_jy_j. For each j there exist β_{ji} ∈ K, almost all zero, such that α_j = Σ_i β_{ji}x_i. Substituting in, we get z = Σ_j Σ_i β_{ji}x_iy_j. Thus the x_iy_j, i ∈ I, j ∈ J, generate F over K as a vector space.

To see that they are independent, suppose Σ_{i,j} γ_{ij}x_iy_j = 0, where γ_{ij} ∈ K, almost all zero. Since {y_j : j ∈ J} are independent over E, for each j, Σ_i γ_{ij}x_i = 0. Similarly, since {x_i : i ∈ I} are independent over K, we get γ_{ij} = 0 for all i, j.

Question: Is the analogue of Cauchy's theorem also true? That is, if [F : Q] = 10, we know any intermediate field K must have degree 2 or 5 over Q. Can we say that there is an E such that [E : Q] = 5?

Definition 5.2 (Algebraic over K). Let F be an extension of K. Then α ∈ F is algebraic over K iff there exists a nonzero f(x) ∈ K[x] such that f(α) = 0. Otherwise, α is transcendental. The extension F of K is algebraic iff every α ∈ F is algebraic over K.

E.g. √2 is algebraic over Q since it is a root of x^2 − 2 = 0, while π, e are transcendental over Q.

Proposition 5.2. If F is a finite extension of K, then F is an algebraic extension of K.

Proof. Suppose [F : K] = n and let α ∈ F. Then the elements 1, α, α^2, . . . , α^n cannot be distinct and independent over K. Hence there exist a_i ∈ K, not all zero, such that a_0 + a_1α + · · · + a_nα^n = 0, so α is algebraic over K.

Remark: As we will soon see, the converse doesn't hold. For example, Q(√2, √3, √5, . . .) is an infinite algebraic extension.

Notation. Suppose F is an extension of K and S ⊆ F.

K[S] is the subring of F generated by K ∪ S. K(S) is the subfield of F generated by K ∪ S. When S = {s_1, . . . , s_t} we will just write K[s_1, . . . , s_t] and K(s_1, . . . , s_t).

Suppose now that F is an extension of K and α ∈ F is algebraic over K. Consider the corresponding ring epimorphism ϕ : K[x] → K[α] : f(x) ↦ f(α). Then ker ϕ is nontrivial. Since K[x] is a PID, there exists a nonzero polynomial p(x) ∈ K[x] such that ker ϕ = (p(x)); furthermore, we can assume p(x) is monic, as K is a field. Since K[x]/(p(x)) ≅ K[α] is an integral domain, it follows that p(x) is irreducible. Hence (p(x)) is actually a maximal ideal, so K[x]/(p(x)) is a field, and K[α] = K(α).

Definition 5.3. p(x) is the irreducible polynomial of α over K, denoted by Irr(α, K, x).

Continuing our analysis, let deg p(x) = n. Then {1, α, . . . , α^{n−1}} are linearly independent over K: if not, there exist a_i ∈ K, not all zero, such that a_0 + a_1α + · · · + a_{n−1}α^{n−1} = 0; then α is a root of f(x) = a_0 + a_1x + · · · + a_{n−1}x^{n−1}, so f(x) ∈ (p(x)). This means p(x) | f(x), a contradiction.

To see that these elements generate K(α), let β ∈ K(α). Then there exists a polynomial f(x) ∈ K[x] such that β = f(α). By the division algorithm, there exist q(x), r(x) with deg r(x) < n such that f(x) = q(x)p(x) + r(x). Thus β = f(α) = r(α), so β = b_0 + b_1α + · · · + b_{n−1}α^{n−1} for some b_i ∈ K.

Summing up:

Theorem 5.3. Suppose α is algebraic over K and that p(x) = Irr(α, K, x).

1. K(α) = K[α] 2. K(α) ' K[x]/(p(x)) 3. [K(α): K] = deg p(x)

Definition 5.4 (Finitely Generated Extension). Let F be an extension of K. Then F is finitely generated over K iff there exist α1, . . . , αn ∈ F such that F = K(α1, . . . , αn).

Proposition 5.4. Let F = K(α1, . . . , αn) be a finitely generated extension of K. If each αi is algebraic over K, then F is a finite algebraic extension.

Proof. It is enough to show that [F : K] < ∞. Let K0 = K and Ki = K(α1, . . . , αi). Consider the tower of extensions K0 ⊆ K1 ⊆ K2 ⊆ ... ⊆ KN . Then Ki+1 = Ki(αi+1) and αi+1 is algebraic over Ki. Hence [Ki+1 : Ki] < ∞. Thus, [F : K] = [Kn : Kn−1] ... [K1 : K0] < ∞. Definition 5.5. Suppose that F is an extension of K and let σ : K → L be an of fields. Then an embedding τ : F → L extends σ iff the following diagram commutes:

29 τ ... F ...... L ...... id . inc . L ...... σ ... K ...... L If σ = id, then we say that τ is an embedding of F over K. τ ... FL...... inc...... inc ...... K Definition 5.6. Suppose that σ : K → L is an embedding of fields. Then we can extend σ to an embedding of the corresponding polynomial rings σ : K[x] → Pn i Pn i L[x]: i=1 aix 7→ i=1 σ(ai)x . We denote the image of each f ∈ K[x] by σf. Theorem 5.5. Suppose σ : K → L is an isomorphism of fields and that f(x) ∈ K[x] is irreducible. Let u, v be roots of f, σf respectively. Then σ extends uniquely to an isomorphism τ : K(u) → L(v) such that τ(u) = v. Proof. Clearly, there is at most one such map. To see that such an isomorphism exists, note that the isomorphism σ : K → L induces an isomorphism K[x] → L[x]. This map induces an isomorphism K[x]/(f) → L[x]/(σf). Recalling the proof of Theorem 1, we can define the desired isomorphism K(u) → K[x]/(f) → L[x]/(σf) → L(v) u 7→ x + (f) 7→ x + (σf) 7→ v Theorem 5.6. Let f(x) ∈ K[x] be a polynomial of degree n ≥ 1. Then there exists an extension F = K(u) satisfying 1. u ∈ F is a root of f 2. [F : K] ≤ n 3. If f is irreducible, then [F : K] = n and F = K(u) is uniquely determined up to isomorphism. Proof. Let p(x) be a monic irreducible factor of f. Then identify K with the canonical subfield of K[x]/(p), we can take F = K[x]/(p) and u = x + (p).

Corollary 5.7. If f_1, . . . , f_n ∈ K[x] are nonconstant polynomials, then there exists an extension E of K in which each f_i has a root. Definition 5.7 (Algebraically Closed). The field F is algebraically closed if every nonconstant polynomial f(x) ∈ F[x] has a root. Equivalently, each nonconstant f(x) ∈ F[x] splits into a product of not necessarily distinct linear factors.

Theorem 5.8. For each field K, there exists an extension L of K such that L is algebraically closed. Proof. The theorem is an easy consequence of the following result: Claim - If E is any field, then there exists an extension F such that each nonconstant f(x) ∈ E[x] has a root in F. Assuming the claim, we can complete the proof as follows: Define inductively a tower of extensions K = E_0 ⊂ E_1 ⊂ ... ⊂ E_n ⊂ ..., n ∈ N, such that each nonconstant f(x) ∈ E_n[x] has a root in E_{n+1}. Clearly, L = ∪_{n∈N} E_n is an algebraically closed extension of K. Proof of Claim (E. Artin): Let S = {x_f : f ∈ E[x] is a nonconstant polynomial}. Consider the ideal I = (f(x_f) : x_f ∈ S) of the polynomial ring E[S] in infinitely many variables.

Claim: I ≠ E[S]. Suppose not; then there exists an equality g_1 f_1(x_{f_1}) + ... + g_n f_n(x_{f_n}) = 1 where g_i ∈ E[S]. Then there exist finitely many variables x_1, . . . , x_N with n ≤ N such that each g_i only involves x_1 up to x_N (relabelling so that x_{f_i} = x_i for i ≤ n). Thus the equality becomes Σ_{i=1}^n g_i(x_1, . . . , x_N) f_i(x_i) = 1. Let E′ be an extension of E in which each f_i has a root α_i. Let α_i = 0 for n < i ≤ N. Then, working in E′[S] and substituting α_i for x_i, we obtain 0 = 1, the classical contradiction. Thus I is a proper ideal. By Zorn's lemma, there exists a maximal ideal M such that I ⊆ M ⊊ E[S]. Thus E[S]/M is a field in which each nonconstant polynomial f(x) ∈ E[x] has the root x_f + M. Identifying E with the obvious subfield of E[S]/M, we are done. Definition 5.8 (An Algebraic Closure). An extension E of K is an algebraic closure if

1. E is an algebraic extension of K

2. E is algebraically closed.
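For example (standard facts, stated here for orientation): C is an algebraic closure of R, since [C : R] = 2 and C is algebraically closed (Theorem 5.27 below). By contrast, C is not an algebraic closure of Q — it is algebraically closed but not algebraic over Q, as it contains transcendental numbers such as π. The algebraic closure of Q produced by Corollary 5.9 below can be taken to be {α ∈ C : α is algebraic over Q}.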

Corollary 5.9. If K is any field, then there exists an algebraic closure K^{alg} of K. Proof. Let E be an algebraically closed extension of K. Define K^{alg} = {α ∈ E : α is algebraic over K}. Claim 1: K^{alg} is a field. Let α, β ∈ K^{alg}. Then K(α, β) is a finite algebraic extension of K. Since α − β and α/β (for β ≠ 0) lie in K(α, β), it follows that α − β and α/β lie in K^{alg}. Claim 2: K^{alg} is algebraically closed. Let f(x) ∈ K^{alg}[x] be a nonconstant polynomial. Let α ∈ E be a root of f. Let f(x) = β_0 + β_1x + ... + β_nx^n where β_i ∈ K^{alg}. Then [K(β_0, . . . , β_n) : K] < ∞. Since α is algebraic over K(β_0, . . . , β_n), we have [K(β_0, . . . , β_n, α) : K(β_0, . . . , β_n)] < ∞, hence [K(β_0, . . . , β_n, α) : K] < ∞ and so α ∈ K^{alg}.

Theorem 5.10. If E and E′ are algebraic closures of K then E and E′ are isomorphic over K. We will use:

Lemma 5.11. Let σ : F → L be an embedding of the field F into the algebraically closed field L. Suppose α is algebraic over F and f(x) = Irr(α, F, x). Then the number of extensions of σ to an embedding of F(α) into L is equal to the number of distinct roots of σf in L. Proof. Let β be any root of σf in L. By Theorem 5.5, there is a unique embedding τ : F(α) → L such that τ(α) = β. Conversely, if τ is any embedding extending σ, then τ(α) must be a root of σf. Now we prove Theorem 5.10. Proof. Clearly, Theorem 5.10 is an immediate consequence of the following (slightly) more general statement. Theorem 5.12 (AC). Suppose E is an algebraic extension of K and that σ : K → L is an embedding into an algebraically closed field L.

1. There exists an extension of σ to an embedding of E into L.

2. If E is algebraically closed and L is algebraic over σ(K), then any such extension is an isomorphism between E and L.

Proof. 1. Let P be the partially ordered set consisting of all pairs (τ, F) where K ⊆ F ⊆ E is a subfield and τ : F → L is an extension of σ, ordered by (τ, F) ≤ (τ′, F′) iff F ⊆ F′ and τ′|_F = τ. Then (σ, K) ∈ P and so P ≠ ∅. Suppose that {(τ_i, F_i) : i ∈ I} is a linearly ordered subset of P. Let F = ∪_{i∈I} F_i and τ = ∪_{i∈I} τ_i. Then (τ, F) ∈ P and (τ_i, F_i) ≤ (τ, F) for all i ∈ I. By Zorn's Lemma, there exists a maximal element (τ, F) of P. Claim: E = F. Suppose not; let α ∈ E \ F. Then, by Lemma 5.11 (since L is algebraically closed), there exists an extension of τ to the algebraic extension F(α) of F. But this contradicts the maximality of (τ, F). 2. Suppose that E is algebraically closed and L is algebraic over σ(K). Let τ : E → L extend σ. Then τ(E) is algebraically closed and L is algebraic over τ(E). Hence L = τ(E).

Convention: From now on, Kalg will denote some fixed algebraic closure of K.

Definition 5.9. Suppose K ⊆ E ⊆ K^{alg}. Then

1. e_K(E, K^{alg}) is the set of embeddings of E into K^{alg} over K.

alg 2. AutK (K ) is the group of automorphisms of the algebraic closure such that π(α) = α for all α ∈ K.

Theorem 5.13. Suppose that K ⊆ E ⊆ Kalg.

1. Each σ ∈ e_K(E, K^{alg}) extends to an automorphism τ ∈ Aut_K(K^{alg}).

2. If [E : K] < ∞ then |e_K(E, K^{alg})| ≤ [E : K].

Proof. 1. Immediate consequence of Theorem 5.12.

2. Let E = K(α_1, . . . , α_n). Consider the tower of extensions K ⊆ K(α_1) ⊆ ... ⊆ K(α_1, . . . , α_n) = E. Let K_0 = K and K_i = K(α_1, . . . , α_i). Suppose inductively that |e_K(K_i, K^{alg})| ≤ [K_i : K]. Fix some σ ∈ e_K(K_i, K^{alg}). Then the number of ways of extending σ to K_{i+1} = K_i(α_{i+1}) is equal to the number r_{i+1} of distinct roots of Irr(α_{i+1}, K_i, x). Hence |e_K(K_{i+1}, K^{alg})| ≤ |e_K(K_i, K^{alg})| · r_{i+1} ≤ [K_i : K][K_{i+1} : K_i] = [K_{i+1} : K].

Remark: If r_i = deg Irr(α_i, K_{i−1}, x) for 1 ≤ i ≤ n, then |e_K(E, K^{alg})| = [E : K]. Galois Theory Definition 5.10 (Galois Group). Let F be a finite extension of K. Then the corresponding Galois group is Aut_K(F) = {σ ∈ Aut(F) : σ|_K = id_K}. Remark: Clearly, we can suppose that K ⊆ F ⊆ K^{alg} and so each σ ∈ Aut_K(F) extends to an automorphism τ ∈ Aut_K(K^{alg}). Also clearly, Aut_K(F) ⊆ e_K(F, K^{alg}) and so |Aut_K(F)| ≤ [F : K]. In particular, Aut_K(F) is a finite group. Basic idea of Galois Theory: The finite group Aut_K(F) encodes lots of useful information about the extension F over K. Counterexamples:

1. Consider F = Q(2^{1/3}); then 2^{1/3} is the unique root of x^3 − 2 = 0 in F. Hence Aut_Q(F) = 1. 2. Let K = F_p(t), where t is transcendental over F_p. Let α satisfy α^p = t and consider F = K(α). Since x^p − t = x^p − α^p = (x − α)^p, α is the unique root of x^p − t = 0 in F, and so Aut_K(F) = 1.

The first says that we need to adjoin all the roots of an irreducible equation, the second is also something to be wary of.

Definition 5.11 (Fixed Field). Let F be a field and let G ≤ Aut F. Then the corresponding fixed field of G is F^G = {α ∈ F : σ(α) = α for all σ ∈ G}. Definition 5.12 (Galois Extension). Let F be a finite extension of K and let G = Aut_K(F). Then F is a Galois extension of K iff F^G = K. Clearly, this is an attempt to capture the idea that G is "large" in the sense that it moves as many elements as possible. Another way of capturing this idea would be that |G| = [F : K]. Fortunately, we'll eventually see that the two notions coincide. Example: F = Q(√2) is a Galois extension of Q with Galois group Aut_Q F ≅ C_2. Proof. First note that we can define an automorphism σ ∈ Aut_Q F by r + s√2 ↦ r − s√2 for r, s ∈ Q. Clearly σ(α) = α iff α ∈ Q. Since [F : Q] = 2, we must have Aut_Q F ≅ C_2. Example: Consider the irreducible polynomial p(x) = x^3 − 2. Let α = 2^{1/3} and let ω be a primitive third root of unity, i.e. ω ≠ 1 and ω^3 = 1, so ω^2 + ω + 1 = 0. Then the roots of p(x) are {α, ωα, ω^2α}. Let F = Q(α, ωα, ω^2α). Then F is a Galois extension of Q and Aut_Q F ≅ S_3. Proof. We have already seen that G = Aut_Q F ≅ S_3. To see that F^G = Q, consider the tower of extensions Q ⊆ Q(α) ⊆ Q(α, ω) = F of degrees 3 and 2, so [F : Q] = 6. Finally note that if E = F^G then Q ⊆ E ⊆ F and 6 = |G| ≤ |Aut_E F| ≤ [F : E] ≤ [F : Q] = 6, so all are equal.

Remark: This argument shows that if |Aut_K F| = [F : K] then F is a Galois extension of K. Open Question: Let G be any finite group. Does there exist a Galois extension F of Q such that Aut_Q F ≅ G? Remark: We will soon see that if G is any finite group, then there exists a finite extension K of Q and a Galois extension F of K such that Aut_K F ≅ G. We next try to understand which finite extensions are Galois. We always have |Aut_K(E)| ≤ |e_K(E, K^{alg})| ≤ [E : K], and E is Galois if both are equalities. The first inequality will be handled with splitting fields/normal extensions, and the second with the number of roots of irreducible equations/separable extensions.

Definition 5.13 (Splitting Field). Let {f_i : i ∈ I} be a family of nonconstant polynomials in K[x]. Then the extension F of K is a splitting field of {f_i : i ∈ I} iff

1. Each polynomial fi splits into a product of linear factors in F [x]

2. For each i, let R_i be the set of roots of f_i in F. Then F = K(∪_i R_i).

Example: Using our previous notation, E = Q(α, ωα, ω^2α) is a splitting field of x^3 − 2 ∈ Q[x]. Remark: Up to isomorphism, the splitting field of {f_i : i ∈ I} is the subfield of K^{alg} generated by all roots of the f_i in K^{alg}.

Theorem 5.14. If K ⊆ E ⊆ K^{alg}, then the following are equivalent.

1. For all σ ∈ Aut_K K^{alg}, σ[E] = E.

2. For all τ ∈ e_K(E, K^{alg}), τ[E] = E.

3. E is the splitting field of a family of polynomials in K[x].

4. Every irreducible polynomial in K[x] which has a root in E splits into linear factors in E[x].

Proof. 1 ⇒ 2: Every τ ∈ e_K(E, K^{alg}) extends to a σ ∈ Aut_K K^{alg}. 2 ⇒ 4: Suppose p(x) ∈ K[x] is irreducible and α ∈ E is a root of p(x). Let β ∈ K^{alg} be any root of p(x). Then the isomorphism K(α) → K(β) extends to an embedding τ : E → K^{alg}. Since τ[E] = E, we see that β ∈ E. Since E contains the roots of p(x) in K^{alg}, it follows that p(x) splits into linear factors in E[x]. 4 ⇒ 3: Clearly, E is the splitting field of {Irr(α, K, x) : α ∈ E}. 3 ⇒ 2: Let E be the splitting field of {f_i : i ∈ I}. For each i ∈ I, let R_i be the finite set of roots of f_i in E. If τ ∈ e_K(E, K^{alg}), then τ[R_i] ⊆ R_i and so τ[R_i] = R_i. Since E = K(∪_i R_i), it follows that τ[E] = E. 2 ⇒ 1: If σ ∈ Aut_K(K^{alg}) then τ = σ|_E ∈ e_K(E, K^{alg}). Definition 5.14 (Normal Extension). The extension E of K is normal iff it satisfies the conditions of Theorem 5.14. Corollary 5.15. Suppose that K ⊆ E ⊆ K^{alg} and that E is a normal extension of K. Then if K ⊆ F ⊆ E, then E is also a normal extension of F. Proof. Take your pick of any condition in Theorem 5.14.
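To see condition 4 fail in a familiar case (a standard example, consistent with the earlier counterexample): E = Q(2^{1/3}) is not normal over Q, since the irreducible polynomial x^3 − 2 has the root 2^{1/3} in E but its other two roots are nonreal, so it does not split in E[x]. The splitting field Q(2^{1/3}, ω), by contrast, satisfies condition 3 and hence is normal over Q.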

Remark: Of course, if K = F_p(t) and α = t^{1/p}, then E = K(α) is a normal extension of K. We can eliminate this problem as follows: Definition 5.15 (Separable Polynomial). An irreducible polynomial f(x) ∈ K[x] is separable if f has no repeated roots in K^{alg}.

Theorem 5.16. If p(x) ∈ K[x] is irreducible, then the following are equivalent:

1. p(x) is separable 2. The formal derivative p0(x) 6= 0.

Proof. Reading Exercise, Hungerford III.6.10.

Example: Let K = F_p(t) and f(x) = x^p − t; then f′(x) = px^{p−1} = 0. Corollary 5.17. If the characteristic of K is zero, then every irreducible polynomial is separable.

Definition 5.16 (Separable Extension). Suppose that K ⊆ E ⊆ K^{alg}.

1. The element α ∈ E is separable over K if Irr(α, K, x) is separable 2. The extension E of K is separable if every α ∈ E is separable over K.

Theorem 5.18. Suppose that K ⊆ F ⊆ E ⊆ Kalg.

1. If α ∈ E is separable over K, then α is separable over F . 2. If E is a separable extension of K, then E is also a separable extension of F .

Proof. 1: Irr(α, F, x)| Irr(α, K, x). 2: Immediate from 1. Theorem 5.19. If E is a finite extension of K, then the following are equiva- lent:

1. E is a separable extension of K.

2. E = K(α1, . . . , αn) where each αi is separable over K

3. |e_K(E, K^{alg})| = [E : K].

Proof. 1 ⇒ 2: Utterly obvious. 2 ⇒ 3: Consider the tower of extensions K ⊆ K(α_1) ⊆ ... ⊆ K(α_1, . . . , α_n). Let K_0 = K and K_i = K(α_1, . . . , α_i). Suppose inductively that |e_K(K_i, K^{alg})| = [K_i : K]. Let τ ∈ e_K(K_i, K^{alg}); then the number of extensions of τ to K_{i+1} = K_i(α_{i+1}) is equal to the number r_{i+1} of distinct roots of Irr(α_{i+1}, K_i, x). Since α_{i+1} is separable over K_i, we have r_{i+1} = deg Irr(α_{i+1}, K_i, x) = [K_{i+1} : K_i]. So |e_K(K_{i+1}, K^{alg})| = [K_i : K][K_{i+1} : K_i] = [K_{i+1} : K]. 3 ⇒ 1: Suppose that E is not a separable extension of K. Choose α ∈ E such that α is not separable over K. Write E = K(α_1, . . . , α_n) where α_1 = α; then |e_K(K(α_1), K^{alg})| < [K(α_1) : K] and, arguing as above, we find |e_K(E, K^{alg})| < [E : K].

We can now characterize Galois Extensions Theorem 5.20. If E is a finite extension of K, then the following are equiva- lent:

1. E is a Galois Extension of K. 2. E is a separable normal extension of K.

3. | AutK (E)| = [E : K].

Proof. 2 ⇒ 3: Since E is normal over K, e_K(E, K^{alg}) = Aut_K E; and since E is separable over K, |e_K(E, K^{alg})| = [E : K]. 3 ⇒ 1: Let G = Aut_K(E) and let F = E^G. Then K ⊆ F ⊆ E. Also [E : K] = |G| ≤ |Aut_F E| ≤ [E : F] ≤ [E : K], so F = K and thus E is a Galois extension. 1 ⇒ 2: Let G = Aut_K(E); then E^G = K. Let α ∈ E and let f(x) = Irr(α, K, x). Let R be the set of roots of f(x) in E. Consider the polynomial g(x) = Π_{β∈R}(x − β). Since every σ ∈ G permutes the set R, it follows that the coefficients of g(x) are G-invariant. Hence g(x) ∈ K[x] and so f(x) | g(x). Also, it is clear that g(x) | f(x). Since both are monic, it follows that g(x) = f(x). In particular, Irr(α, K, x) is separable and so α is separable over K. Furthermore, Irr(α, K, x) splits into linear factors in E[x]. It follows that E is a normal extension of K. Theorem 5.21 (The Fundamental Theorem of Galois Theory). Let E be a finite Galois extension of K and let G = Aut_K E be the Galois group. Then there exist mutually inverse bijections between Sub(G) = {H : H < G} and Sub_K(E) = {F : F is a subfield such that K ⊆ F ⊆ E} defined by H ↦ E^H, the fixed field, and F ↦ Aut_F E, the corresponding Galois group. Furthermore

1. H ⊆ H′ iff E^H ⊇ E^{H′}

2. If H ⊆ H′ then [H′ : H] = [E^H : E^{H′}]

3. If K ⊆ F ⊆ E, then F is a Galois extension of K iff Aut_F E ⊴ Aut_K E. In this case, Aut_K F ≅ Aut_K E / Aut_F E.
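A standard worked example of the correspondence (not from the notes above): E = Q(√2, √3) is Galois over Q with G = Aut_Q E ≅ C_2 × C_2, generated by σ : √2 ↦ −√2 (fixing √3) and τ : √3 ↦ −√3 (fixing √2). The three subgroups of order 2, namely ⟨σ⟩, ⟨τ⟩ and ⟨στ⟩, correspond to the three intermediate fields E^{⟨σ⟩} = Q(√3), E^{⟨τ⟩} = Q(√2) and E^{⟨στ⟩} = Q(√6); each is Galois over Q, in agreement with (3), since G is abelian and so every subgroup is normal.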

We begin the proof of The Fundamental Theorem.

Proof. Suppose F is a subfield with K ⊆ F ⊆ E. Then E is also a Galois extension of F. If H = Aut_F E, then E^H = F. Thus F ↦ H = Aut_F E ↦ E^H = F, so the map F ↦ Aut_F E is certainly injective. The surjectivity is an immediate consequence of a result of Artin which will be proved following this. This result of Artin suggests the following question: What are the finite subgroups of Aut C? Continuing with the proof, let H, H′ ≤ G = Aut_K E. Then there exist corresponding subfields F, F′ such that H = Aut_F E and H′ = Aut_{F′} E. Suppose that H ⊆ H′. Then clearly F = E^H ⊇ E^{H′} = F′. Conversely, assume F ⊇ F′. Then Aut_F E ⊆ Aut_{F′} E. Thus (1) holds. For (2), suppose that H ⊆ H′. Recall that |H| = [E : F] and |H′| = [E : F′]. Thus [H′ : H] = |H′|/|H| = [E : F′]/[E : F] = [F : F′]. For (3), we suppose that K ⊆ F ⊆ E. Clearly F is a separable extension of K. Thus, the following are equivalent:

1. F is a Galois extension of K 2. F is a normal extension of K

3. σ[F] = F for all σ ∈ Aut_K K^{alg}

4. σ[F ] = F for all σ ∈ AutK E

Suppose now that F is a Galois extension of K. Then we can define a homomorphism π : Aut_K E → Aut_K F, σ ↦ σ|_F. Clearly ker π = Aut_F E ⊴ Aut_K E. To see that Aut_K F ≅ Aut_K E / Aut_F E we must show that π is surjective. Let τ ∈ Aut_K F. Then there exists ϕ ∈ Aut_K K^{alg} such that ϕ|_F = τ. Thus, if σ = ϕ|_E ∈ Aut_K E, we have σ|_F = τ as required. Finally, suppose that F isn't a Galois extension of K. Then there exists σ ∈ G = Aut_K E such that F′ = σ[F] ≠ F. It is easily checked that Aut_{F′} E = σ Aut_F E σ^{−1}. Since F′ ≠ F, Aut_{F′} E ≠ Aut_F E and so Aut_F E is not a normal subgroup of G = Aut_K E. Lemma 5.22 (Artin). Let E be any field and let H be a finite subgroup of Aut E. Let F = E^H. Then E is a finite Galois extension of F and H = Aut_F E. We shall require the following (interesting) result.

Theorem 5.23 (Primitive Element Theorem). If E is a finite separable extension of K, then there exists an element α ∈ E such that E = K(α). Proof. First suppose that K is a finite field. Then E is also a finite field and hence E^* is cyclic. Let α ∈ E^* be a generator; then clearly E = K(α). Hence, we can suppose that K is infinite. Arguing by induction, it is enough to consider the case when E = K(β, γ). Let [E : K] = n. Since E is a separable extension of K, there exist exactly n distinct embeddings of E into K^{alg}, say σ_1, . . . , σ_n. Consider the polynomial f(x) = Π_{i≠j}([σ_iβ + xσ_iγ] − [σ_jβ + xσ_jγ]). Then f(x) isn't the zero polynomial, and so has only finitely many roots. Choose some a ∈ K such that f(a) ≠ 0. Let α = β + aγ. Then σ_1(α), . . . , σ_n(α) are all distinct. Thus there are at least n distinct embeddings of K(α) into K^{alg}. So [K(α) : K] ≥ n and so K(α) = E. Corollary 5.24. Let E be a separable algebraic extension of F. Suppose there exists n ≥ 1 such that [F(α) : F] ≤ n for all α ∈ E. Then [E : F] ≤ n. Proof. Choose α ∈ E such that [F(α) : F] = m is maximal. Suppose there exists β ∈ E \ F(α). Let γ ∈ F(α, β) satisfy F(γ) = F(α, β). Then [F(γ) : F] > m, contradiction.

Now we can prove Artin's Lemma (Lemma 5.22).

Proof. Let α ∈ E and let σ1, . . . , σr ∈ H be a maximal subset such that σ1(α), . . . , σr(α) are distinct. Then if τ ∈ H, by maximality, {τσ1(α), . . . , τσr(α)} = {σ1(α), . . . , σr(α)}. Qr Consider the polynomial f(x) = i=1(x − σi(α)). Then α is a root of f and τf = f for all τ ∈ H. Hence, the coefficients of f lie in EH = F .

Furthermore, f is separable and splits into linear factors in E[x]. Applying Corollary 5.24, we also see that [E : F] ≤ |H|. Thus E is a finite Galois extension of F. Finally, |H| ≤ |Aut_F E| = [E : F] ≤ |H|, and so H = Aut_F E is the Galois group. The following easy observation is often useful. Proposition 5.25. Assume that F is a finite separable extension of K. Then there exists a finite Galois extension E of K such that K ⊆ F ⊆ E.

Proof. Choose α ∈ F with F = K(α). Let f(x) = Irr(α, K, x) and let α_1, . . . , α_n be the roots of f(x) in K^{alg}. Then E = K(α_1, . . . , α_n) satisfies our requirements.

Lemma 5.26. 1. Each positive r ∈ R has a square root in R. 2. If p(x) ∈ R[x] has odd degree, then p(x) has a root in R. 3. Each z ∈ C has a square root in C. 4. There doesn't exist a field E ⊇ C such that [E : C] = 2. Proof. (1) and (2) are the intermediate value theorem; (3) is obvious. (4): Suppose E exists. Then E = C(α) for any α ∈ E \ C. Set f(x) = x^2 + bx + c = Irr(α, C, x). Then the roots of f(x) are z = (−b ± √(b^2 − 4c))/2 ∈ C, contradiction. Theorem 5.27. C is algebraically closed. Proof. It is enough to show that C has no proper algebraic extensions. Suppose that F is a finite algebraic extension of C. Then F is a finite separable extension of R and hence there exists a finite Galois extension E of R such that R ⊆ C ⊆ F ⊆ E. Let G = Aut_R E be the corresponding Galois group and let |G| = 2^n m where m is odd. By Sylow, there is a subgroup H < G with |H| = 2^n. Let K = E^H. Then [K : R] = [G : H] = m. Suppose m > 1. Then there exists α ∈ K such that K = R(α), and hence deg Irr(α, R, x) = [K : R] = m, which is odd, so Irr(α, R, x) has a root in R, contradicting irreducibility. So m = 1.

Next note that C is a Galois extension of R, so Aut_R C ≅ Aut_R E / Aut_C E, hence P = Aut_C E has order 2^{n−1}. We claim that n = 1, so that E = C as required. Suppose not. By Sylow there exists N < P such that [P : N] = 2. Let K = E^N; then [K : C] = [P : N] = 2, contradiction. Definition 5.17 (Galois Group of a Polynomial). Let f(x) ∈ K[x]. Then the Galois group of f over K is Aut_K E, where E is the splitting field of f over K. Remark: We usually work with separable polynomials, so that E is a Galois extension.

Lemma 5.28. Let f(x) ∈ K[x] be a polynomial with Galois group G.

1. If f(x) has exactly n distinct roots, in Kalg, then G is isomorphic to a subgroup of Sn. 2. If f(x) is a separable irreducible polynomial of degree n, then G is isomor- phic to a transitive subgroup of Sn.

Example: Consider f(x) = x^4 − 2 ∈ Q[x]. By Eisenstein, f(x) is irreducible. Let α = 2^{1/4}. Then the roots are ±α, ±iα. Letting E be the splitting field of f over Q, we have E = Q(α, i); so, considering Q ⊂ Q(α) ⊂ Q(α, i), we see that [Q(α) : Q] = 4 and [Q(α, i) : Q(α)] = 2, since i ∉ Q(α) ⊆ R. Hence [E : Q] = 8. Thus the Galois group G of f over Q satisfies |G| = 8 and G < S_4; since |S_4| = 4 × 3 × 2, it follows that G is a Sylow 2-subgroup of S_4, so G ≅ ⟨(1234)⟩ ⋊ ⟨(24)⟩. Theorem 5.29. Suppose p is a prime and that f(x) ∈ Q[x] is an irreducible polynomial of degree p with exactly 2 nonreal roots in Q^{alg}. Then the Galois group of f over Q is isomorphic to S_p. Proof. Let α_1, . . . , α_p ∈ Q^{alg} be the roots of f(x). Then G is isomorphic to a subgroup of Sym({α_1, . . . , α_p}). Since G acts transitively on the roots, p | |G| and so G contains a p-cycle. Also, complex conjugation induces a transposition. So, after suitably ordering the roots, we can suppose G contains τ = (12) and a p-cycle σ = (1 i_2 . . . i_p); since p is prime, there exists 1 ≤ k ≤ p − 1 such that σ^k = (1 2 j_3 . . . j_p). Thus, again relabeling the roots, we can suppose G contains τ = (12) and σ = (1 2 . . . p). So G also contains σ(12)σ^{−1} = (23), and so on. Since G contains (12), (23), . . . , (p−1 p), it follows that G = S_p.

Example: Consider f(x) = x^5 − 4x + 2 ∈ Q[x]. By Eisenstein, f(x) is irreducible. By Calc 1, it is easily checked that f(x) has exactly three real roots. Thus the Galois group of f over Q is S_5. Example: Consider f(x) = x^5 + 3x + 15 ∈ Q[x]. By Eisenstein, f is irreducible. Then f(x) has exactly one real root. This time, complex conjugation induces a product of two transpositions. So if G is the Galois group of f over Q, then G contains both a five-cycle and a product of two transpositions. Unfortunately, this still leaves at least three candidates: S_5, A_5, and ⟨(12345)⟩ ⋊ ⟨(25)(34)⟩ ≅ D_5. We will use the following, but put off the proof until later:

Theorem 5.30. Let f(x) ∈ Z[x] be a monic polynomial and let p be a prime. Let f̄(x) = f(x) (mod p), the polynomial obtained by reducing the coefficients mod p. Suppose that f̄(x) has no multiple roots in F_p^{alg}. Then there exists a bijection {ᾱ_1, . . . , ᾱ_n} → {α_1, . . . , α_n} between the roots of f̄(x) and the roots of f(x), together with an embedding of the Galois group Ḡ of f̄ over F_p into the Galois group G of f over Q which is compatible with the actions of these groups on the roots.

Reducing mod 2, we get f̄(x) = x^5 + x + 1 ∈ F_2[x]. Now working in F_2[x], we get x^5 + x + 1 = (x^2 + x + 1)(x^3 + x^2 + 1). These factors have no roots in F_2, and thus are irreducible over F_2. Hence f̄(x) has five distinct roots in F_2^{alg}. Furthermore, the splitting field of f̄ over F_2 is F_{2^6}. Hence the corresponding Galois group of f̄ is Aut_{F_2} F_{2^6} ≅ C_6. Let σ ∈ Aut_{F_2} F_{2^6} be a generator. Then σ permutes the roots of x^2 + x + 1 and of x^3 + x^2 + 1, so it is a 2-cycle times a 3-cycle. Hence the Galois group G of f over Q contains an element σ which is a product of a 2-cycle and a 3-cycle. So σ^3 is a transposition, and thus G = S_5. Problem: Find an irreducible f(x) ∈ Q[x] of degree seven with Galois group S_7. Solution: We claim that the polynomial a(x) = x^5 + x^2 + 1 ∈ F_2[x] is irreducible: b(x) = x^2 + x + 1 is the only irreducible quadratic, it doesn't divide a(x), and a(x) has no linear factors. Consider f̄(x) = a(x)b(x) = x^7 + x^6 + x^5 + x^4 + x^3 + x + 1. The splitting field of f̄ over F_2 is E = F_{2^{10}}. Thus the Galois group Ḡ of f̄ over F_2 is cyclic of order 10. Let σ ∈ Ḡ be a generator. Clearly σ is the product of a 5-cycle and a 2-cycle. Let f(x) = x^7 + 3x^6 + 3x^5 + 3x^4 + 3x^3 + 3x + 3 ∈ Q[x], which is irreducible by Eisenstein and reduces to f̄ mod 2. Let G be the Galois group of f over Q. Then G contains a product σ of a 5-cycle and a 2-cycle. Raising it to the fifth power, we get a 2-cycle σ^5 ∈ G. As G also contains a 7-cycle, G ≅ S_7. Theorem 5.31. For each n ≥ 2, there exists an irreducible polynomial f(x) ∈ Q[x] of degree n with Galois group S_n. Clearly, we can suppose that n ≥ 4.

Lemma 5.32. Suppose that n ≥ 4 and G ≤ S_n satisfies: 1. G is transitive 2. G contains an (n − 1)-cycle 3. G contains a transposition.

Then G = S_n. Proof. Let Ω = {1, . . . , n}. Let σ ∈ G be an (n − 1)-cycle and let α ∈ Ω be the fixed point of σ. Then G_α acts transitively on Ω \ {α}; since G is transitive, it follows that for all β ∈ Ω, G_β acts transitively on Ω \ {β}. This means that G acts 2-transitively on Ω; i.e., if α ≠ β and γ ≠ δ, then there exists g ∈ G such that g(α) = γ and g(β) = δ. Hence, if τ ∈ G is a transposition, then we can conjugate τ to any other transposition by a suitable element of G. Since G therefore contains every transposition, G = S_n. Lemma 5.33. For each prime p and d ≥ 1, there exists an irreducible f(x) ∈ F_p[x] of degree d.

Proof. Reading Exercise. We will now prove the Theorem.

Proof. Let a(x) ∈ F_2[x] be irreducible of degree n − 1 and let b(x) = (x + 1)a(x) = x^n + b_{n−1}x^{n−1} + ... + b_0 ∈ F_2[x]. Similarly, let c(x) = x^n + c_{n−1}x^{n−1} + ... + c_0 ∈ F_3[x] be chosen so that it factors either as g(x)h(x), where g is an irreducible quadratic and h is irreducible of odd degree, or as g h_1 h_2, where g is an irreducible quadratic and h_1, h_2 are distinct irreducibles of odd degree. By the Chinese Remainder Theorem, for each 0 ≤ i ≤ n − 1 there exists 0 ≤ ℓ_i < 6 such that ℓ_i ≡ b_i (mod 2) and ℓ_i ≡ c_i (mod 3). Consider f(x) = x^n + 7ℓ_{n−1}x^{n−1} + ... + 7ℓ_0. By Eisenstein, f(x) is irreducible. Hence the Galois group G is transitive. Reducing mod 2, G contains an (n − 1)-cycle. Reducing mod 3, we see that G must also contain a transposition. Thus G = S_n. Corollary 5.34. If G is any finite group, then there exists a finite extension K of Q and a Galois extension E of K such that G ≅ Aut_K E.

Proof. Let |G| = n. By Cayley, we can suppose that G ≤ Sn. Let f(x) ∈ Q[x] be irreducible of degree n with Galois group Sn and let E be the splitting field of f(x). Then K = EG satisfies our requirements. We will now begin our discussion of Solvable groups and radical extensions. In this section we will study Galois groups of prescribed type: cyclic, abelian and solvable Definition 5.18 (Cyclic/Abelian Extension). a finite extension E of K is cyclic/abelian if E is a Galois Extension of K and AutK E is cyclic/abelian.

Remark: Let α ∈ Q^{alg} \ Q, and let E ⊆ Q^{alg} be maximal such that α ∉ E. Then any finite extension of E is cyclic. Let K be a field of characteristic p ≥ 0 and let n ≥ 1 satisfy p ∤ n. Then the polynomial x^n − 1 has n distinct roots in K^{alg}, since x^n − 1 and nx^{n−1} have no common roots. These roots form a finite multiplicative subgroup of (K^{alg})^* and hence form a cyclic group. A generator of this group is called a primitive nth root of unity. E.g., working over Q, we have x^3 − 1 = (x − 1)(x^2 + x + 1). The primitive 3rd roots of unity are (−1 ± √−3)/2. Let α be a primitive nth root of unity. Then, for 1 ≤ r ≤ n, α^r is also a primitive root iff (r, n) = 1. So there are exactly ϕ(n) primitive roots of unity. Of course, this is also the order of Z_n^*. In fact, α^r is primitive iff r ∈ Z_n^*. Definition 5.19 (Cyclotomic Extension). With the above hypotheses, the splitting field of x^n − 1 over K is called the cyclotomic extension of order n.

Theorem 5.35. With the above hypotheses, let F be the cyclotomic extension of order n:

1. F = K(ξ), where ξ is any primitive nth root of unity. 2. F is an abelian extension of K of dimension d for some d|ϕ(n). Further- more, if n is prime, then F is a cyclic extension.

3. Aut_K F is isomorphic to a subgroup of order d in Z_n^*. Remark: It is possible that d < ϕ(n). For example, let ω be a primitive 5th root of unity. Then [R(ω) : R] = [C : R] = 2. Note that |F_{2^3}^*| = 7, hence F_{2^3} contains a primitive 7th root of unity. In particular, x^6 + x^5 + x^4 + x^3 + x^2 + x + 1 isn't irreducible over F_2. Proof. Clearly the irreducible factors of x^n − 1 are separable, and so F is a Galois extension of K. Let ξ ∈ F be a primitive nth root of unity. Then F = K(1, ξ, ξ^2, . . . , ξ^{n−1}) = K(ξ). Also, if σ ∈ Aut_K F, then σ is uniquely determined by its value σ(ξ) = ξ^r. Clearly ξ^r is also a primitive nth root and so r ∈ Z_n^*. So we obtain an injective homomorphism Aut_K F → Z_n^*, σ ↦ r. Hence Aut_K F is isomorphic to a subgroup of Z_n^* of order d | ϕ(n). So Aut_K F is abelian, and cyclic if n is prime, since Z_n is then a field and Z_n^* is cyclic. Finally, by Galois theory, [F : K] = |Aut_K F| = d. Theorem 5.36. Let K be a field of char p ≥ 0 and let n ≥ 1 with p ∤ n. Suppose that K contains a primitive nth root of unity. Let a ∈ K and let α be a root of x^n − a. Then K(α) is a cyclic extension of dimension d for some d | n. Furthermore, α^d ∈ K. Hence if b = α^d ∈ K then Irr(α, K, x) = x^d − b. Proof. Let ζ ∈ K be a primitive nth root of unity. Then α, ζα, . . . , ζ^{n−1}α are distinct roots of x^n − a. It follows that K(α) is a Galois extension of K. Let G = Aut_K K(α) be the corresponding Galois group. If σ ∈ G, then σ(α) = ω_σα where ω_σ is a (not necessarily primitive) nth root of unity. Thus we obtain an injective homomorphism π : G → {ζ^i : 0 ≤ i ≤ n − 1} ≅ C_n. Thus G is cyclic of order d for some d | n. Let σ ∈ G be a generator; then σ(α) = ω_σα for some primitive dth root of unity ω_σ. Furthermore, σ(α^d) = σ(α)^d = (ω_σα)^d = α^d. Thus α^d ∈ K(α)^G = K.
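To illustrate Theorem 5.35 (a standard example, not from the notes): for K = Q and n = 5, the cyclotomic extension is F = Q(ξ) with ξ a primitive 5th root of unity; here d = ϕ(5) = 4 and Aut_Q F ≅ Z_5^* ≅ C_4, so F is a cyclic extension of Q of dimension 4. The remark above shows that d can be strictly smaller than ϕ(n): over R the same polynomial only gives d = 2.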

Remark: This theorem no longer holds if K doesn't contain a primitive nth root of unity. E.g., consider x^3 − 2 ∈ Q[x]. Then Q(2^{1/3}) isn't a Galois extension of Q. We next work towards

Theorem 5.37. Let K be a field of char K = 0 which contains a primitive nth root of unity. If E is a cyclic extension of K of dimension n, then there exists α ∈ E such that E = K(α) and α is a root of xn − a ∈ K[x]. Until further notice, we restrict our attention to fields of characteristic zero.

Definition 5.20. Let K be a field of characteristic zero and let E be a finite extension of K with [E : K] = n. Let e_K(E, K^{alg}) = {σ_1, . . . , σ_n}. For each α ∈ E, the corresponding norm is N^E_K(α) = σ_1(α) · · · σ_n(α) and the corresponding trace is tr^E_K(α) = σ_1(α) + ... + σ_n(α). Lemma 5.38. If E is a finite Galois extension of K and α ∈ E, then

N^E_K(α) = Π_{σ ∈ Aut_K E} σ(α)  and  tr^E_K(α) = Σ_{σ ∈ Aut_K E} σ(α),

and both are in K.

Proof. Under these hypotheses, e_K(E, K^{alg}) = Aut_K E. Clearly N^E_K(α) and tr^E_K(α) are fixed by each σ ∈ Aut_K E, so N^E_K(α) and tr^E_K(α) lie in K. Lemma 5.39. Suppose that E is a finite extension of K and α, β ∈ E. Then N^E_K(αβ) = N^E_K(α)N^E_K(β) and tr^E_K(α + β) = tr^E_K(α) + tr^E_K(β). Furthermore, if γ ∈ K then N^E_K(γ) = γ^{[E:K]} and tr^E_K(γ) = [E : K]γ. So what is the kernel of N^E_K : E^* → K^*? Well, if [E : K] = n and ζ ∈ K is an nth root of unity, then ζ is in the kernel. Definition 5.21 (Characters). Let G be a group and let K be a field. Then a character of G is a homomorphism χ : G → K^*.
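A quick illustration of the norm and trace (standard, not from the notes): for E = Q(i) over K = Q, with Aut_K E = {id, σ} where σ is complex conjugation, N^E_K(a + bi) = (a + bi)(a − bi) = a^2 + b^2 and tr^E_K(a + bi) = 2a, both in Q. The element i has norm 1, and indeed i = β/σ(β) with β = 1 + i, anticipating Hilbert's Theorem 90 below.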

Theorem 5.40 (Artin). If χ_1, . . . , χ_n are distinct characters of a group G in the field K, then χ_1, . . . , χ_n are linearly independent over K. I.e., if a_1, . . . , a_n ∈ K are not all zero, then the function Σ_{i=1}^n a_iχ_i is not identically zero.

Proof. Suppose not, and that χ_1, . . . , χ_n is chosen with n minimal, say a_1χ_1 + ... + a_nχ_n = 0 where a_1, . . . , a_n ∈ K. Then clearly n > 1 and each a_i ≠ 0. Since χ_1 ≠ χ_2, there exists a z ∈ G such that χ_1(z) ≠ χ_2(z). But we have, for all x ∈ G, a_1χ_1(zx) + ... + a_nχ_n(zx) = 0, i.e. a_1χ_1(z)χ_1 + ... + a_nχ_n(z)χ_n = 0. Multiplying the original formula by χ_1(z), we obtain a_1χ_1(z)χ_1 + ... + a_nχ_1(z)χ_n = 0. Subtracting this from the first, we obtain a_2(χ_2(z) − χ_1(z))χ_2 + ... + a_n(χ_n(z) − χ_1(z))χ_n = 0. Since a_2(χ_2(z) − χ_1(z)) ≠ 0, this contradicts the minimality of n.

Corollary 5.41. If K is any field and σ1, . . . , σn ∈ Aut K are distinct, then σ1, . . . , σn are linearly independent over K.

Proof. We regard each σ_i as a character of the group K^* in the field K (namely, its restriction to K^*). Theorem 5.42 (Hilbert's Theorem 90). Let K be a field of characteristic zero and let E be a cyclic extension of K of dimension n. Let σ ∈ Aut_K E be a generator. If α ∈ E then N^E_K(α) = 1 iff there exists β ∈ E^* such that α = β/σ(β).

Proof. Let G = Aut_K E be the cyclic Galois group. ⇐: Suppose there exists β ∈ E^* such that α = β/σ(β). Then

N^E_K(α) = N^E_K(β)/N^E_K(σ(β)) = N^E_K(β)/N^E_K(β) = 1.

45 Lemma 5.43. If F is a radical extension of K and N is the normal closure of F , then N is also a radical extension of K. Proof. Let F = K(α) and let g(x) = Irr(α, K, x). Then N is the splitting field of g(x) over K. Let {α1, . . . , αr} be the distict roots of g(x) in N. Let K ⊂ K(u1) ⊂ ... ⊂ K(u1, . . . , un) = F = K(α) wirness that F is a radical extnesion of K. For each 1 ≤ q ≤ r, let σi ∈ AutK N satisfy σi(α) = αi. Then K ⊂ K(σi(u1)) ⊂ ...K(σi(u1), . . . , σi(un)) = K(α) witnesses that K(αi) is a radical extension of K. Hence the following tower witnesses that N is a radical extension of K.

K ⊂ K(u1) ⊂ ... ⊂ K(u1, . . . , un) = K(α1) ⊂ K(α1, σ2(u1)) ⊂ ... ⊂ K(α1, σ2(u1), . . . , σ2(un)) = K(α1, α2) ⊂

... ⊂ K(α1, . . . , αn) = N

Suppose that K is a field of characteristic zero. We wish to characterize those f(x) ∈ K[x] which are solvable by radicals. Let E be the splitting field of f and suppose that F is a radical Galois extension such that K ⊆ E ⊆ F . Since E is Galois over K, we have AutK E ' AutK F/ AutE F . In particular, AutK E is a homomorphic image of AutK F . Thus, it is enough to understand the structure of AutK F . Let K = K0 ⊂ K1 ⊂ ... ⊂ Kn = F withness that F is a radical extension of K. To simplify matters, suppose that K contains all relevant roots of unity. Then each Ki ⊂ Ki+1 is a cyclic extension. IN particular, consider the special case where K = K0 ⊂ K1 ⊂ K2 = F .

Then AutK0 K1, AutK1 K2 are cyclic, AutK0 K1 ' AutK0 K2/ AutK1 K2.

Thus AutK0 K2 is an extension of AutK0 K1 by AutK1 K2, that is, 1 → AutK1 K2/

AutK0 K2 → AutK0 K1 → 1 where the outer pair are cyclic. Thus, we must study the smallest S such that each abelian group is in S and such that S is closed under taking extensions and homo- morphic images. Definition 5.24 (Commutator). Let G be a group. If a, b ∈ G then the corre- sponding commutator is [a, b] = aba−1b−1. The of G is G0 = h[a, b]: a, b ∈ Gi. Example: If G is abelian, then G0 = 1. Theorem 5.44. Let G be a group.

0 1. G E G 2. G/G0 is abelian

0 3. if N E G, then G/N is abelian iff G ≤ N.

46 Proof. 1. If a, b ∈ G, and π ∈ Aut G, then π([a, b]) = [π(a), π(b)] Hence π[G0] = G0. In particular, this is true if π is an inner automorphism, 0 hence, G E G. 2. Let a, b ∈ G. Since (ab)(ba)−1 = aba−1b−1 = [a, b] ∈ G0 it follows that abG0 = baG0, and so G/G0 is abelian.

0 3. Finally, suppose N E G. If G ≤ N, then the above argument shows that G/N is abelian. Conversely, suppose that G/N is abelian. Let a, b ∈ G. Since abN = baN, it follows that [a, b] = (ab)(ba)−1 ∈ N.

Examples: Clearly, if S is a simple nonabelian group then S0 = S. Hence if 0 n ≥ 5, then An = An. Definition 5.25 (Perfect). G is perfect if G = G0. 0 Let n ≥ 5, since Sn/An ' Z2 it follows that Sn ≤ An, since An is perfect, it 0 follows that Sn = An. Consider S3. The normal subgroups of S3 are 1, A3, S3. As S3 is not abelian, 0 00 0 and S3/A3 ' Z2, S3 = A3 and S3 = A3 = 1. The normal subgroups of S4 are 1,A4,S4 and V = {1, (12)(34), (13)(24), (14)(23)}. Clearly 1 < V < A4 < S4. 0 −1 −1 As S4/A4 ' Z2, A4 contains S4. Since [(12), (23)] = (12)(23)(12) (23) = 0 (13)(23) = (132) ∈/ V . So S4 = A4. 0 0 Since A4/V ' Z3, so A4 = V , and so V = 1. Thus, S4 is an iterated extension of abelian groups. Definition 5.26 (Derived Subgroups). For each i ≥ 0, the ith derived subgroup 0 G(i) of G is defined by G(0) = G and G(i+1) = G(i) . Definition 5.27 (). G is solvable iff there exists n ∈ N such that G(n) = 1.

If G is abelian, then G is solvable. S4 is solvable. If n ≥ 5, then S5 is not solvable. Theorem 5.45. 1. If G is solvable, then every subgroup of G is solvable, and every homomorphic image of G is solvable.

2. Suppose G is a group and N E G. If N, G/N are solvable, then so is G. Proof. 1. First suppose that H ≤ G. An easy induction shows that H(i) ≤ G(i) for all i ≥ 0. It follows that H is also solvable. Next suppose that f : G → H is an epimorphism. An easy induction shows that f[G(i)] = H(i) for all i ≥ 0. It follows that H is also solvable. 2. Let f : G → G/N be the canonical surjection. Then there exists n ≥ 0 such that f[G(n)] = (G/N)(n) = 1. Hence G(n) ≤ N. By the first part, G(n) is also solvable, and hence, there exists k ≥ 0 such that G(n+k) = G(n)(k) = 1.

47 Definition 5.28. A subnormal series of G is a chain of subgroups G = G0 > G1 > G2 > . . . > Gn = 1 such that Gi+1 E Gi for all 0 ≤ i ≤ n − 1 and the quotients Gi/Gi+1 are called the factors of the series. A subnormal series is a composition series if Gi/Gi+1 is simple for all 0 ≤ i ≤ n − 1. A subnormal series is a solvable series if Gi/Gi+1 is abelian.

Examples, let V =< a > ⊕ < b > where < a >'< b >' Zn and let −1 G = V o < c > where < c >' C2 and cac = b. Then G > V > hai > 1. Note that < a >6 /G. This is why it is called a subnormal series. Suppose that G is solvable and n is minimal such that G(n) = 1. Then G = G(0) > G(1) > . . . > G(n) = 1. This is actually a normal series, and a solvable series. “Clearly”, every finite group has a composition series. To see this, let G be finite, and inductively let Gi+1 be a maximal normal subgroup of Gi. This gives a composition series. Of course, not every group has a composition series. For example, Z, has no such series. Theorem 5.46. A group G is solvable iff G has a solvable series. Proof. The above shows ⇒. So, we suppose that G = G0 > G1 > . . . > Gn = 1 is a solveable series. (i) We claim that G ≤ Gi for 0 ≤ i ≤ n. Clearly this is true when i = 0. (i) Suppose inductively that G ≤ Gi. Since Gi/Gi+1 is abelian, it follows that (i)0 (i+1) G ≤ Gi+1, hence, G ≤ Gi+1. Theorem 5.47. A finite group G is solvable iff G has a composition series all of whose factors are cyclic of prime order. Proof. ⇐, by theorem 26. ⇒, we have already seen the finite group G has a composition series G = G0 > . . . > Gn = 1. Since G is solvable, it follows that each factor Gi/Gi+1 is also solvable. Since Gi/Gi+1 is simple and abelian, it is Zp for some p prime. Theorem 5.48. If K is a field of characteristic zero, and f(x) ∈ K[x], then the following are equivalent:

1. The equation f(x) = 0 is solvable by radicals. 2. The Galois group of f(x) over K is solvable.

Proof. 1 ⇒ 2: Suppose that f(x) = 0 is solvable by radicals. Let E be the splitting field of f over K. Then there exists a radical Galois extension F of K such that K ⊆ E ⊆ F . Since AutK E ' AutK F/ AutE F , it is enough to show that AutK F is solvable. Since F is a radical extension of K, there exists a tower K ⊂ K(u1) ⊂ ... ⊂ K(u1, . . . , un) = F such that for each 1 ≤ i ≤ n, di there exists di ≥ 1 such that ui ∈ K(u1, . . . , ui−1) Let d = d1 . . . dn and let ξ

be a primitive dth root of unity. Let L = K(ξ) be the corresponding cyclotomic extension and consider the tower of extensions K ⊆ L = K(ξ) ⊆ F(ξ) together with K ⊆ F ⊆ F(ξ), where L/K and F(ξ)/F are cyclotomic. Then F(ξ) is a Galois extension of K. Since Aut_K F ≅ Aut_K F(ξ)/Aut_F F(ξ), it is enough to show that Aut_K F(ξ) is solvable. Finally, since Aut_K L ≅ Aut_K F(ξ)/Aut_L F(ξ), it is enough to show that Aut_L F(ξ) is solvable. Consider the tower of extensions L ⊂ L(u_1) ⊂ ... ⊂ L(u_1, . . . , u_n) = F(ξ). Let L_0 = L and L_i = L(u_1, . . . , u_i) for 1 ≤ i ≤ n. For each 0 ≤ i ≤ n − 1, L_i contains a primitive d_{i+1}th root of unity and L_{i+1} = L_i(u_{i+1}) where u_{i+1}^{d_{i+1}} ∈ L_i. It follows that L_{i+1} is a cyclic extension of L_i; that is, L_{i+1} is a Galois extension of L_i such that Aut_{L_i} L_{i+1} is cyclic. Suppose inductively that Aut_L L_i is solvable. Consider L ⊂ L_i ⊂ L_{i+1}. Then

AutL Li ' AutL Li+1/ AutLi Li+1. By hypothesis AutL Li is solvable and

Aut_{L_i} L_{i+1} is cyclic, hence Aut_L L_{i+1} is also solvable. Thus Aut_L F(ξ) is solvable. To prove that 2 implies 1, it is enough to prove the following theorem: Theorem 5.49. Suppose K is a field of characteristic 0 and E is a finite Galois extension of K. If the Galois group Aut_K E is solvable, then there exists a radical Galois extension F of K such that K ⊆ E ⊆ F. Proof. We argue by induction on n = [E : K], the case n = 1 being trivial. Suppose inductively that the result holds for all such extensions E′ of K′ with k = [E′ : K′] < n. Since G = Aut_K E is a finite solvable group, G has a composition series, all of whose factors are cyclic of prime order. In particular, there exists H ⊴ G such that [G : H] = p for some prime p. Let ξ be a primitive pth root of unity and let N = E(ξ) be the corresponding cyclotomic extension. Also let M = K(ξ) and consider the following diagram.

As before, we have the tower K ⊆ M = K(ξ) ⊆ N = E(ξ) together with K ⊆ E ⊆ N, where M/K and N/E are cyclotomic. Clearly M is a radical extension of K. Hence it is enough to find a radical extension of M which contains N. Next observe that N = E(ξ) is a Galois extension of K. Since E is a Galois extension of K, it follows that σ[E] = E for every σ ∈ Aut_M N ≤ Aut_K N. Hence we can define a homomorphism θ : Aut_M N → Aut_K E = G by σ ↦ σ|_E. Since each σ ∈ Aut_M N satisfies σ(ξ) = ξ, it follows that θ is an injection. Hence Aut_M N is also solvable and |Aut_M N| ≤ |Aut_K E|. There are 2 cases to consider. Case A: Suppose that θ[Aut_M N] is a proper subgroup of Aut_K E. Then [N : M] = |Aut_M N| < |Aut_K E| = n. Hence, by the induction hypothesis, there exists a radical extension F of M containing N. Case B: Otherwise, θ : Aut_M N → Aut_K E = G is an isomorphism. Let J = θ^{−1}(H). Then J is a normal subgroup of Aut_M N of index p, and clearly J is also solvable. Let L = N^J; under the Galois correspondence the tower M ⊆ L ⊆ N corresponds to Aut_M N ⊇ J ⊇ 1. Thus L is a Galois extension of M and Aut_M L ≅ Aut_M N / Aut_L N = Aut_M N / J. Thus |Aut_M L| = p and so L is a cyclic extension of M. Since M contains a primitive pth root of unity, there exists u ∈ L such that L = M(u) and u is a root of some polynomial x^p − a ∈ M[x]. In particular, L is a radical extension of M. Also notice that [N : L] = |J| < [N : M] = [E : K] = n and that Aut_L N = J is solvable. Hence, by the induction hypothesis, there exists a radical extension F of L which contains N. Finally, a couple of loose ends... Example: Let f(x) = x^3 − 3x + 1 ∈ Q[x].

1. The equation f(x) = 0 is solvable by radicals.

2. The splitting field E of f(x) over Q isn't a radical extension.

Proof. By Hungerford p272, the Galois group of f(x) is G = A3 ' C3. Hence [E : Q] = 3. Suppose that E is a radical extension of Q. Then there exists α ∈ E such that E = Q(α) and α is the root of an irreducible polynomial x3 −a ∈ Q[x]. Since E is normal, E must contain all roots of x3 − a, ie, α, ξα, ξ2α where ξ is a primitive 3rd root of unity. In particular, ξ ∈ E, which is impossible since [Q(ξ): Q] = 2.

Another loose end: what are the finite subgroups of Aut C? Theorem 5.50 (Artin). Suppose K is an algebraically closed field of characteristic 0 and 1 ≠ G ≤ Aut K is a finite subgroup. Let F = K^G. Then [K : F] = |G| = 2 and K = F(i) where i^2 = −1. Proof. By Artin's Lemma, [K : F] = |G| and K is a Galois extension of F. Let E = F(i) where i^2 = −1. Then K is also a Galois extension of E. Let H = Aut_E K. It is enough to show that H = 1, so that E = K. Suppose not, and let p be a prime such that p | |H|. Let C ≤ H be cyclic of order p and let L = K^C. Then [K : L] = p and K is a cyclic extension of L. If ξ is a primitive pth root of unity, then Irr(ξ, L, x) has degree ≤ p − 1; thus ξ ∈ L. Hence K is the splitting field of some irreducible polynomial x^p − a ∈ L[x]. Let α = α_1, . . . , α_p ∈ K be the roots of x^p − a. Since K is algebraically closed, there exists some β ∈ K such that β^p = α. Computing with N = N^K_L, we find −a = (−1)^p α_1 · · · α_p = (−1)^p N(α) = (−1)^p N(β^p) = (−1)^p N(β)^p. Thus N(β) ∈ L satisfies N(β)^p = (−1)^{p−1}a. Since L doesn't contain a root of x^p − a = 0, it follows that p = 2. Thus N(β)^2 = −a. But since i ∈ L, we have that a = (−1)(−a) = (iN(β))^2, which is a contradiction.

6

Let k be a field. Recall from linear algebra that an endomorphism of a vector space (a free module over a division ring) can be represented by a matrix, in one way for each basis.

Definition 6.1 (Change of Basis). Let {e_i} and {f_i} be two bases of a vector space V. Let C be the matrix defined by e_j = Σ_{i=1}^n c_{ij} f_i. C is called the change of basis matrix, and CAC^{−1} represents the linear operator A in the basis {f_i}. Definition 6.2 (Similar Matrices). We say that the matrices A and B are similar if ∃C such that CAC^{−1} = B.

Similar matrices are representations of the same linear operator in different bases.

Theorem 6.1 (Rational Canonical Form). Every matrix is similar to a block diagonal matrix with blocks of the form

(0 0 ... −a_1)
(1 0 ... −a_2)
(. .  ⋱   .  )
(0 ... 1 −a_m)

Proof. Recall, V is a k-module. We fix ϕ : V → V. Give V the structure of a k[x]-module by setting x · v = ϕ(v). We notice that if v = Σ b_i e_i then x · v = ϕ(v) = A · v. Remark: If you change basis, then v = Σ_j b_j e_j = Σ_j b_j Σ_i c_{ij} f_i = Σ_i (Σ_j c_{ij} b_j) f_i. As k[x] is a PID, we have V ≅ ⊕_ℓ k[x]/f_ℓ(x) ⊕ k[x]^r where f_1 | f_2 | ... | f_k. But k[x] has infinite dimension over k, as x, x^2, x^3, ... are linearly independent over k, so r = 0. Thus V ≅ ⊕_ℓ k[x]/f_ℓ. Let's write f_ℓ = Σ_{i=0}^{m_ℓ} d_{ℓ,i} x^i and, as k is a field, assume all the f_ℓ are monic, i.e. d_{ℓ,m_ℓ} = 1. Let m_ℓ = deg f_ℓ. So 1, x, x^2, . . . , x^{m_ℓ−1} generate V_ℓ ≅ k[x]/f_ℓ. Now: how does x act on V_ℓ? It acts on the basis {1, x, . . . , x^{m_ℓ−1}} by shifting, so x · x^k = x^{k+1}, while x · x^{m_ℓ−1} = x^{m_ℓ} = Σ_{i=0}^{m_ℓ−1} (−d_{ℓ,i}) x^i; so x : V_ℓ → V_ℓ has matrix

(0 0 ... −d_{ℓ,0})
(1 0 ... −d_{ℓ,1})
(. .  ⋱     .    )
(0 ... 1 −d_{ℓ,m_ℓ−1})

Definition 6.3 (Algebraically Closed). A field k is algebraically closed if every nonconstant polynomial in k[x] has a root. Theorem 6.2 (Jordan Canonical Form). If k is algebraically closed, then every matrix is similar to a block diagonal matrix with blocks of the form

(λ 1 ... 0)
(0 λ ... 0)
(. .  ⋱  .)
(0 0 ... λ)
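As a small bridge between the two canonical forms (a standard example, not from the notes): for f(x) = (x − λ)^2 = x^2 − 2λx + λ^2, the corresponding rational canonical block is
(0 −λ^2)
(1  2λ )
while the Jordan block for the same module k[x]/((x − λ)^2) is
(λ 1)
(0 λ)
The two matrices are similar, which is exactly what the proof of Theorem 6.2 below establishes in general.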

Proof. Assume that k is algebraically closed. We know that V ≅ ⊕_{ℓ=1}^m k[x]/p_ℓ^{n_ℓ}(x) where p_ℓ(x) is prime. The primes in k[x], with k algebraically closed, are of the form x − a for a ∈ k. Suppose p_ℓ(x) = x − λ_ℓ, with λ_ℓ ∈ k. Then p_ℓ^{n_ℓ}(x) = (x − λ_ℓ)^{n_ℓ}.

We'll show that V_ℓ has a basis such that ϕ_ℓ : V_ℓ → V_ℓ, where ϕ_ℓ = ϕ|_{V_ℓ}, has a matrix as above. We will show that the companion matrix of p_ℓ^{n_ℓ} is similar to a Jordan block. We define B = A − λI, for λ the root of p_ℓ. We have a k[y]-module structure given by y · v = Bv. We check that V_ℓ is a cyclic k[y]-module with generator v: y^n v = B^n v = (A − λI)^n v = f(A)v = 0 but y^{n−1}v ≠ 0. So V_ℓ ≅ k[y]/y^n as a k[y]-module, and in the corresponding basis B is the companion matrix of y^n, i.e. the shift matrix with 1's on the subdiagonal and zeros elsewhere. Then CAC^{−1} = CBC^{−1} + CλIC^{−1} = CBC^{−1} + λI.

So, in this basis, A is the transpose of a Jordan block. We reverse the order of the basis and obtain the Jordan form. How can we compute the JCF?

Definition 6.4 (Eigenvalues and Eigenvectors). λ ∈ k is an eigenvalue of A if ∃x 6= 0 such that Ax = λx. x ∈ V is an eigenvector of A if ∃λ ∈ k such that Ax = λx. Thus, the λ’s in the JCF are all eigenvalues. If V has a basis of eigenvectors of A, then A is similar to a diagonal matrix.

Theorem 6.3. The Following Are Equivalent:

1. λ is an eigenvalue.

2. ∃x 6= 0 such that Ax = λx 3. (A − λI)x = 0 4. A − λI is not invertible

5. det(A − λI) = 0.

Definition 6.5 (Generalized Eigenspace). Let E_λ^m = {v ∈ V : (A − λI)^m v = 0}. E_λ^m is called the generalized eigenspace and v ∈ E_λ^m is called a generalized eigenvector.

Note: E_λ^m is a subspace; if v ∈ E_λ^m then (A − λI)v ∈ E_λ^{m−1}; (A − λI)E_λ^{m+1} ⊆ E_λ^m ⊆ E_λ^{m+1}; and ∃N such that E_λ^m = E_λ^N for all m ≥ N. If λ ≠ µ then E_λ^{max} ∩ E_µ^{max} = {0}, and the spaces {E_λ^{max} : λ an eigenvalue} span V and intersect trivially, so V ≅ ⊕_λ E_λ^{max}. Thus Σ_λ dim E_λ^{max} = dim V = n. So the Jordan Canonical Form gives a lower bound on each dim E_λ^{max}; since these bounds must add up to n, the number of occurrences of λ in the Jordan Canonical Form is equal to dim E_λ^{max}. Corollary 6.4. All eigenvalues show up in the JCF.

An algorithm for computing the JCF:

1. Compute the eigenvalues λ of A

2. For each λ, compute E_λ^1, . . . , E_λ^{max}. 3. If E_λ^{max} = E_λ^n ≠ E_λ^{n−1}, choose a basis {v_1, . . . , v_k} for E_λ^n / E_λ^{n−1}. Claim: {(A − λI)^m v_i : 1 ≤ i ≤ k, 0 ≤ m ≤ n − 1} is linearly independent. Observe that {(A − λI)v_i} ⊆ E_λ^{n−1} is linearly independent; complete it to a basis for E_λ^{n−1}/E_λ^{n−2}.

Continue in this fashion until you obtain B = {v1, . . . , vk} a basis for V m where each vi ∈ Eλ for some m and vi ∈ B implies (A − λI)vi ∈ B if nonzero. This gives the desired basis.
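A worked instance of this algorithm (standard, not from the notes), using a matrix already in Jordan form to keep the computation short: let A be the 3 × 3 matrix with rows (2 1 0), (0 2 0), (0 0 3). Its eigenvalues are 2 and 3. For λ = 2: E_2^1 = ker(A − 2I) = span{e_1} and E_2^2 = ker(A − 2I)^2 = span{e_1, e_2} = E_2^{max}. For λ = 3: E_3^1 = span{e_3} = E_3^{max}. Choosing v = e_2 ∈ E_2^2 \ E_2^1 gives the chain {(A − 2I)v, v} = {e_1, e_2}, which together with e_3 is the desired basis; the JCF has one block J_2(2) and one block J_1(3).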

Definition 6.6 (Minimal Polynomial). Let I = {f ∈ k[x] : f(A)v = 0 for all v ∈ V}. Pick a basis v_1, . . . , v_n for V. Then I = ∩_{i=1}^n Θ_{v_i}, where Θ_{v_i} = {f ∈ k[x] : f · v_i = 0}, with the k[x]-module structure on V given by the linear operator ϕ.

Since each Θvi is nonzero and k[x] is an integral domain, I = ∩Θvi is nonzero, and I is an ideal in k[x]. As k[x] is a PID, I = hfi for some f ∈ I. We call f the minimal polynomial. Lemma 6.5. The minimal polynomial of the companion matrix of f is itself.

Proof. Recall that if A is a block of the rational canonical form (the companion matrix of f, of size n), then e_1, . . . , e_n is a basis for V with A^i e_1 = e_{i+1} for i < n. If g(x) = Σ_{i=0}^k c_i x^i with k < n satisfies g(A) = 0, then 0 = g(A)e_1 = Σ c_i A^i e_1 = Σ c_i e_{i+1}, so every c_i = 0 and g = 0. So the degree of the minimal polynomial of A is ≥ n. On the other hand, writing f(x) = x^n + Σ_{i=0}^{n−1} a_i x^i, we have by definition A^n e_1 = −Σ a_i e_{i+1} = −Σ a_i A^i e_1, so f(A)e_1 = 0; since every basis vector has the form A^i e_1, it follows that f(A)v = 0 for all v ∈ V, i.e. f(A) = 0. Thus the minimal polynomial divides f, but has at least the same degree, so it equals f up to multiplication by a constant.

Corollary 6.6. If A is in rational canonical form with blocks A_1, . . . , A_ℓ, where A_i is the companion matrix of f_i and f_1 | ... | f_ℓ, then f_ℓ is the minimal polynomial of A. Definition 6.7 (Characteristic Polynomial). The characteristic polynomial of A is det(A − xI). Since det(CAC^{−1} − xI) = det(C(A − xI)C^{−1}) = det(A − xI), to compute the characteristic polynomial we may take A to be in JCF. Theorem 6.7 (Cayley-Hamilton). The minimal polynomial of A divides the characteristic polynomial of A.
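For instance (a standard example, not from the notes): if A is the 3 × 3 Jordan matrix consisting of the blocks J_2(λ) and J_1(λ), then the characteristic polynomial is det(A − xI) = (λ − x)^3 while the minimal polynomial is (x − λ)^2, since A − λI ≠ 0 but (A − λI)^2 = 0. The minimal polynomial indeed divides the characteristic polynomial, as Cayley-Hamilton asserts.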

7

1. Rings of Quotients and Localizations Definition 7.1 (Multiplicative Set). A subset S of a ring R is multiplicative iff 1 ∈ S and a, b ∈ S implies that ab ∈ S. Definition 7.2 (Ring of Fractions). Let S be a multiplicative subset of the ring R. Then we define the relation ∼ on R × S by (r, s) ∼ (r′, s′) iff there exists t ∈ S such that t(rs′ − r′s) = 0. For each (r, s) ∈ R × S the corresponding ∼ class is denoted by r/s. Let S^{−1}R = {r/s : (r, s) ∈ R × S}. It is easily checked that S^{−1}R is a ring with operations r/s + r′/s′ = (rs′ + r′s)/ss′ and (r/s)(r′/s′) = rr′/ss′; the zero is 0/1 and the identity is 1/1. Furthermore, there exists a canonical homomorphism φ_S : R → S^{−1}R, r ↦ r/1, and every element of φ_S[S] is a unit in S^{−1}R. Remarks:

1. If 0 ∈ S then S^{−1}R is the trivial ring. 2. If 0 ∉ S and S doesn't contain any zero divisors, then ∼ reduces to the expected relation.

3. Consider R = Z6 = {0, 1, 2, 3, 4, 5} and S = {0, 2, 4}. Consider the relation ≡ on R × S given by (r, s) ≡ (r0, s0) iff rs0 − r0s = 0. Then (0, 1) ≡ (0, 2) −1 and (0, 2) ≡ (3, 1), but (0, 1) 6≡ (3, 1). It is easily checked that S R ' Z3 so φS isn’t injective. Theorem 7.1. Let S be a multiplicative subset of the ring R. Suppose that −1 0 ∈/ S and S doesn’t contain any zero divisors. Then φS : R → S R is injective.

Proof. Suppose that φ_S(r) = r/1 = 0/1. Then there exists s ∈ S such that s(r · 1 − 0 · 1) = 0. That is, sr = 0. Since s isn't a zero divisor, r = 0. Examples

1. If R is an integral domain and S = R \{0}, then S−1R is a field, called the quotient field of R.

2. Suppose R = Z and S = {3^n : n ≥ 0}. Then S^{−1}Z = Z[1/3] = {z/3^n : z ∈ Z, n ≥ 0}.

Convention: if R is an integral domain and S ⊆ R \{0} is multiplicative, then we identify R with its canonical copy in S−1R. Definition 7.3 (Extension and Retraction). Let S be a multiplicative subset of the ring R.

1. If I ⊆ R is an ideal, then the extension of I in S−1R is S−1I = {a/s : a ∈ I, s ∈ S}.

2. If J ⊆ S^{−1}R is an ideal, then the contraction of J in R is φ_S^{−1}(J).

−1 −1 Remarks: Clearly, S I, φS (J) are ideals. Lemma 7.2. With the above hypotheses, S−1I = S−1R iff S ∩ I 6= ∅.

1 1 s −1 Proof. If s ∈ S ∩ I, then 1 = s 1 ∈ S I. Conversely suppose that S−1I = S−1R. Then there exist a ∈ I and s ∈ S a 1 such that s = 1 . Hence, there exists t ∈ S such that ts − ta = t(s − a) = 0. So ts = ta ∈ S ∩ I.

Lemma 7.3. With the above hypotheses,

−1 −1 1. I ⊆ φS (S I) −1 −1 −1 2. If I = φS (J) for some ideal J ⊆ S R, then S I = J. Hence every ideal of S−1R has the form S−1I for some ideal I ⊆ R.

55 3. If P ⊆ R is a prime ideal and S ∩ P = ∅, then S−1P is a prime ideal of −1 −1 −1 S R and φS (S P ) = P . Proof. 1 is obvious. −1 −1 −1 2: suppose that I = φS (J). To see that S I ⊆ J, let x ∈ S I. Then 1 r there exists r with φS(r) ∈ J and s ∈ S such that x = r/s. Hence x = s 1 = 1 r s r s φS(r) ∈ J. Conversely suppose that s ∈ J. Then φS(r) = r/1 = 1 s ∈ J hence r ∈ I and /s ∈ S−1I. 3: By Lemma 1, S−1P is an ideal such that S−1P 6= S−1R. Suppose that r r0 −1 rr0 a s s0 ∈ S P . Then there exists a ∈ P and t ∈ S such that ss0 = t . Hence, there exists t0 ∈ S such that t0trr0 − t0ss0a = 0. Thus, tt0rr0 = t0ss0a ∈ P , since 0 0 −1 −1 tt ∈ S and S ∩ P = ∅, we have rr ∈ P . Finally, by part 1, P ⊂ φS (S P ). −1 −1 −1 Conversely, suppose that r ∈ φS (S P ). Then φS(r) ∈ S P and so there r a exists a ∈ P and s ∈ S such that 1 = s . Hence, there exists t ∈ S such that tsr = ta ∈ P . Since ts∈ / P , r ∈ P . Theorem 7.4. With the above hypotheses, we can define a bijection {P ⊆ R : P prime ideal,P ∩S = ∅} → {Q ⊆ S−1R : Q prime} by P 7→ S−1P . Proof. By the third part of Lemma 2, the map P 7→ S−1P is an injection between these sets. To see it is a surjection, let J be a prime ideal of S−1R. −1 −1 −1 Then S (φS (J)) = J, so it is enough to show that φS (J) is a prime ideal. −1 Suppose that ab ∈ φS (J). Then φS(a)φS(b) ∈ J and so φS(a) ∈ J or −1 −1 φS(b) ∈ J. Thus, a ∈ φS (J) or b ∈ φS (J). Definition 7.4 (Localization at P ). Suppose that P is a prime ideal of R. Then R \ P is a multiplicative subset of R. The localization of R at P is defined to be −1 RP = (R \ P ) R. −1 If I ⊆ R is an ideal, then we write IP = (R \ P ) I.

Example: Let R = Z and P = (3), then ZP = {z/n : z ∈ Z, (n, 3) = 1}. Theorem 7.5. Suppose that P is a prime ideal of the ring R.

1. There exists a bijection between {Q ⊆ R : Q prime and Q ⊆ P } → {J ⊆ RP : J prime} given by Q 7→ QP .

2. PP is the unique maximal ideal of RP .

Proof. 1 is just a special case of the previous theorem. For 2, we let J ⊆ RP be any maximal ideal. Then J is prime. Hence, there exists a prime ideal Q ⊆ P such that QP = J. But then J = QP ⊆ PP so J = PP . Definition 7.5 (). A ring R is local if R has a unique maximal ideal. Theorem 7.6. If R is a ring, then TFAE 1. R is a local ring

56 2. The nonunits of R form an ideal. Proof. 1 ⇒ 2: Let M be the unique maximal ideal of R. Clearly each element of M is a nonunit. Conversely, suppose that r ∈ R is a nonunit. Then I = (r) is a proper ideal. Hence, I ⊆ M. So r ∈ M. Thus, M consists of the nonunits of R. 2 ⇒ 1: Pretty trivial. Theorem 7.7 (Nakayama’s Lemma). Let R be a local ring and let m be the maximal ideal of R. Suppose that E is a finitely generated R-module. If mE = E then E = 0.

Proof. Suppose that E is a counterexample with a minimal number of gener- ators, say, x1, . . . , xn. Since mE = E, there exist m1, . . . , mn ∈ m such that xn = m1x1 + ... + mnxn, so then (1 − mn)xn = m1x1 + ... + mn−1xn−1. Since 1 − mn ∈/ m, it follows that 1 − mn is a unit. But then, x1, . . . , xn−1 generates E, contradicting minimality of n.

2. Integral Ring Extensions Definition 7.6 (Integral over R). Let S be a ring extension of the ring R and let α ∈ S. Then α is integral over R iff there exists a monic polynomial f(x) ∈ R[x] such that f(α) = 0. √ alg − −3 Example: Let R = Z and S = Q . Then α = 2 ∈ S is integral over Z since α2 + α + 1 = 0.

Definition 7.7 (Annihilator). Let R be a ring and let M be an R-module. Then, the annihilator of M, Ann(M) = {r ∈ R : rm = 0, ∀m ∈ M}. The module is faithful iff Ann(M) = 0. Theorem 7.8. If S is an extension ring of R and α ∈ S, then TFAE

1. α is integral over S. 2. The ring R[α] is finitely generated as an R-module

3. There exists a faithful R[α]-module M which is finitely generated as an R-module.

We shall make use of the following lemma:

Lemma 7.9. Let S be a ring and let A = (aij) be an n × n matrix over S.   b1  .  Suppose that M is an S-module and that b1, . . . , bn ∈ M satisfy A  .  = 0. bn Then (det A)bi = 0 for 1 ≤ i ≤ n.

57 Proof. We will sketch a proof. Consider the case when n = 3. We just  1 0 0  check that det Ab2 = 0. Consider the matrix B2 =  0 b2 0 . Then 0 0 1   a11 a12b2 a13 det(AB2) = det(A) det(B2) = det Ab2. Also, AB2 =  a21 a22b2 a23  a31 a32b2 a33   a11 a11b1 + a12b2 + a13b3 a13 hence det(AB2) =  a21 a21b1 + a22b2 + a23b3 a23  = 0 a31 a31b1 + a32b2 + a33b3 a33 Now we will prove the theorem. Proof. 1 ⇒ 2: Let g(x) ∈ R[x] be a monic polynomial such that g(α) = 0. Say that deg g = n. If β ∈ R[α], there exists f(x) ∈ R[x] such that β = f(α). Singe g(x) is monic, there exists q(x), r(x) ∈ R[x] with degree < n such that f(x) = q(x)g(x) + r(x). Thus β = f(α) = r(α). Hence 1, α, . . . , αn−1 generates R[α] as an R-module. 2 ⇒ 3: Just take M = R[α]. 3 ⇒ 1: Let M be a faithful R[α]-module which is finitely generated as an R- module, say by b1, . . . , bn. Since αM ⊆ M, there exist aij ∈ R such that αbi =

α − a11 −a12 ... −a1n

. . . . aijbj. Hence, by the previous lemma, if d = . . . .

α − an1 −an2 ... −ann then dbi = 0 for 1 ≤ i ≤ n. Since M is a faithful R[α]-module, it follows that d = 0. Thus, α is a root of the polynomial det(xI − A) ∈ R[x]. And so α is integral over R. Definition 7.8 (Integral Extension). Let S be a ring extension of R. Then S is integral over R iff every α ∈ S is integral over R. Corollary 7.10. If S is a ring extension of R and S is finitely generated as an R-module, then S is an integral extension of R. Proof. Let α ∈ S. Then S is a faithful R[α]-module which is finitely generated as an R-module. Hence, α is integral over R.

Corollary 7.11. If S is a ring extension of R and s1, . . . , sn ∈ S are integral over R, then R[s1, . . . , sn] is a finitely generated R-module and hence is an integral extension of R.

Proof. We argue by induction that R[s1, . . . , si] is a finitely generated R-module. For the induction step, note that R[s1, . . . , si+1] = R[s1, . . . , si][si+1]. Since si+1 is integral over R[s1, . . . , si], then R[s1, . . . , si+1] is finitely gener- ated as an R[s1, . . . , si]-module. Since R[s1, . . . , si] is finitely generates as an R- module, it follows that R[s1, . . . , si+1] is finitely generates as an R-module.

58 Corollary 7.12. Let S be a ring extension of R and and let Rˆ = {α ∈ S : α is integral over R}. Then Rˆ is a subring of S called the integral closure of R in S. Proof. Let α, β ∈ Rˆ. Then R[α, β] is an integral extension of R. Hence α − β, αβ ∈ Rˆ. Definition 7.9 (Integrally Closed). 1. With the above hypotheses, R is in- tegrally closed in S iff Rˆ = R. 2. An integral domain R is integrally closed iff R is integrally closed in its quotient field K. Proposition 7.13. If R is a UFD, then R is integrally closed. Proof. Suppose that a/b is integral over R, where a, b ∈ R and there exists a prime p such that p|b and p 6 |a. There exist a0, . . . , an−1 ∈ R such that n n−1 n n−1 n (a/b) + an−1(a/b) + ... + a0 = 0, hence a + anba + ... + b a0 = 0. Since p|b, it follows that p|an and so p|a, contradiction.

Example: Z[i] is the integral closure of Z in Q(i).

Proof. Clearly Z[i] ⊆ Zˆ since i is integral over Z. Since Z[i] is a UFD, it is integrally closed in Q(i). The result follows. Definition 7.10 (Number Field). A number field L is a finite extension of Q. If L is a number field, then the ring of algebraic of L is the integral closure of Z in L. √ Example: Let ω = −1+ −3 . Then [ω] is the algebraic closure of in √ 2 Z Z Q( −3). Proof. Just like before, using the fact that Z[ω] is a UFD. Definition 7.11 (Lies Above). Let S be an extension ring of R and let I be an ideal of R. Then the ideal J of S lies above I iff J ∩ R = I.

Nonexample: Clearly no ideal of Q lies above the ideal (2) of Z. Theorem 7.14 (Lying Over Theorem). Let S be an integral extension of R and let P be a prime ideal of R. Then there exists a prime ideal Q of S which lies over P . Proof Delayed Z ⊂ Z[i], then (2) has (1 + i) and (1 − i) lying over it. Z ⊂ Z[ω], then (2) ⊂ Z[ω](2). Proposition 7.15. Let S be an integeral extension of R and let σ : S → A be a ring homomorphism. Then σ[S] is an integral extension of σ[R].

n n−1 Proof. Let α ∈ S. Then there exists an−1, . . . , a0 ∈ R such that α +an−1α + n n−1 ... + a0 = 0. Applying σ we obtain σ(α) + σ(an−1)σα + ... + σ(a0) = 0, thus, σ(α) is integral over σ[R].

59 Application: Let R be an integral domain of char 0 with quotient field K and let E be a finite algebraic extension of K. If α ∈ E in integral over R, then E E NK (α), trK (α) are also integral over R. In particular, if R is integrally closed, E E then NK (α), trK (α) ∈ R. alg E Proof. If σ ∈ eK (E,K ), then σ(α) is integral over R. Hence, so are NK (α) = Q E P alg σ(α) and tr (α) = alg σ(α) σ∈eK (E,K ) K σ∈eK (E,K ) Definition 7.12 (Integral Homomorphism). A ring homomorphism f : R → S is integral iff S is an integral extension of f[R]. Definition 7.13. Suppose that f : R → S is a ring homomorphism and T is a multiplicative subset of R. Then, slightly abusing notation by writing T −1S in- stead of f[T ]−1S, we can define a ring homomorphism T −1f : T −1R → T −1S : r/t 7→ f(r)/f(t). Furthermore, the following diagram commutes: ... S ...... T −1S ...... −1 . f . . . T f ...... R ...... T −1R Proposition 7.16. Let f : R → S be integral and let T be a multiplicative subset of R. Then T −1f : T −1R → T −1S is also integral. Proof. To simplify notation, we just consider the case where f and T −1f are inclusions. −1 n n−1 Let s/t ∈ T S. Then there exist a0, . . . , an−1 ∈ R such that s +an−1s + n n−1 n ...+a0 = 0. Gence (s/t) +an−1/t(s/t) +. . . , a0/t = 0, hence s/t is integral over T −1R. Proposition 7.17. Suppose that T is an integrally closed integral domain and that T is a multiplicative subset of R such that 0 ∈/ T . Then T −1R is also integrally closed. Proof. Let K be the quotient field of R and hence also of T −1R. Suppose −1 that α ∈ K is integral over T R. Then there exists a0, . . . , an−1 ∈ R and n n−1 t0, . . . , tn−1 ∈ T such that α + an−1/tn−1α + ... + a0/t0 = 0. Let t = t0 . . . tn−1 ∈ T . Then it is clear that tα is integral over R and so tα ∈ R. But this means that α ∈ T −1R. We will now prove the Lying Over Theorem: Proof. Let S be an integral extension of R and let P be a prime ideal of R. Let T = R \ P ad consider teh ring of quotients T −1R and T −1S. To simplify notation, we shall suppose that each of the following maps is an influsion.

60 ... S ...... T −1S ...... −1 . f . . . T f ......

...... −1 R ... T R = RP (If collapsing occurs, it is easily checked that the following arguments remain valud using the canonical homomorphisms) By Proposition 3, T −1S is an integral extension of T −1R, and by Theorem −1 −1 3, T R is a local ring with maximal ideal mP = T P . Also, by Lemma 2.3, mP lies over P . −1 −1 Claim: mP T S 6= T S. −1 −1 −1 −1 Suppose that mP T S = T S. Then there exist mi ∈ mP and bi ∈ T S −1 such that 1 = m1b1 + ... + mnbn. Let B = RP [b1, . . . , bn]. Since T S is an integral extension of RP , it follows that B is a finitely generated RP -module. By the equation, for each 1 ≤ i ≤ n, we have mP B = B. By Nakayama’s Lemma, B = 0, which is a contradiction. −1 −1 It follows that mP T S is contained in a maximal ideal n of T S. Since −1 −1 mP ⊆ n ∩ T R, and mP is a maximal ideal of T R, it follows that mP = n ∩ T −1R. Thus: ... S ...... T −1S N ...... −1 . . f . T f ...... −1 . R T R ...... P ...... MP Let Q = n ∩ S. Clearly Q is a prime ideal of S. Finally note that P = −1 mP ∩ R = (n ∩ T R) ∩ R = (n ∩ S) ∩ R = Q ∩ R. Thus, Q lies above P . Proposition 7.18. Let S be an integral extension of R and let P be a prime ideal of R. Suppose that that Q is a prime ideal of S which lies above P . Then Q is a maximal ideal iff P is a maximal ideal. Proof. Suppose that P is a maximal ideal. Then R/P is a field and S/Q is an integral domain, which is integral over R/P by Prop 2. Since R/S ⊆ S/Q ⊆ (R/P )alg, it follows that S/Q is a field. Thus, Q is a maximal ideal. Next suppose that Q is a maximal ideal. Then S/Q is a field which is integral over the integral domain R/P . Suppose that R/P isn’t a field. Then R/P has a maximal ideal m such that 0 6= m 6= R/P . By the Lying Over Theorem, there exists a prime ideal of S/Q which lies above m, which is impossible.

61 3. Integral Galois Extensions Standing Hypotheses: Let R be an integral domain of characteristic 0 which is integrally closed in its quotient field K. Let E be a finite Galois extension of K and S be the integral closure of R in E. Let G = AutK E be the corresponding Galois Group. Remark: Clearly, if σ ∈ G, then σ[S] = S.

Proposition 7.19. Suppose that P is a maximal ideal of R and that P and R are prime ideals of S that lie above P . Then there exists σ ∈ G such that σ[P] = R. Proof. Suppose that σ[P] 6= R for all σ ∈ G. Then σ[P] 6= τ[R] for all σ, τ ∈ G. By Proposition 5, each σ[P] and τ[R] is a maximal ideal of S. Hence, by the Chinese Remainder Theorem, there exists x ∈ S such that x ≡ 0 mod σ[P] for all σ ∈ G and ≡ 1 mod τ[R] for all τ ∈ G. E Q Since R is integrally closed in K, it follows that NK (x) = σ∈G σ(x) ∈ E S ∩ K = R and so NK (x) ∈ P ∩ R = P . However, since x∈ / τ[R] for all τ ∈ G, it follows that τ(x) ∈/ R for all E Q τ ∈ G. Since R is a prime ideal, it follows that NK (x) = τ∈G τ(x) ∈/ R and E so NK (x) ∈/ P , contradiction. Definition 7.14 (Decomposition Group). Fix a maximal ideal P of R and let P be a maximal ideal of S which lies above P , we define the decomposition group of P to be GP = {σ ∈ G : σ[P] = P}

Then, regarding R/P as a subfield of S/P, each σ ∈ GP induces an auto- morphism σ of S/P which fixes R/P pointwise. Thus we obtain a homomor- phism G → AutR/P (S/P): σ 7→ σ.

Definition 7.15 (Decomposition Field). The decomposition field of P is Edec = GP E , the fixed field of GP . Let Sdec be the integral closure of R in Edec. Let D = P ∩ Sdec. Then D is a prime ideal of Sdec which lies above P . Hence D is a maximal ideal of Sdec. Furthermore, by the previous proposition, P is the unique prime ideal of S which lies above D. Proposition 7.20. Edec is the smallest subfield F of E such that K ⊆ F ⊆ E and P is the unique prime ideal of S which lies above the prime ideal ∩ of S ∩ F . Proof. We’ve just seen that Edec is such a subfield. To see that is it hte smallest such subfield, let F be any such subfield and let H = AutF E. Let Q = P ∩ F . Since P is the unique prime ideal of S which lies above Q, it follows that H dec GP H ≤ GP . Hence F = E ⊇ E = E .

Example: Suppose R = Z, E = Q(i). Let P = (2). GThen (1 + i), (1 − i) are the ideals of S = Z[i] which lie above P . Let dec 1 P = (1 + i). Then GP = 1 and so E = E = Q(i).

62 Let R = Z and E = Q(ω), where ω2 + ω + 1 = 0. Again, let Z = (2). Then P = 2Z[ω] is the unique prime ideal of S = Z[ω] which lies above P . Then dec G GP = G and E = E = Q. Proposition 7.21. Under the canonical injection, R/P ,→ Sdec/D, we have R/P = Sdec/D.

−1 Proof. Let σ ∈ G \ GP . Then clearly σ P= 6 P. −1 dec Define Dσ = σ P ∩ S . Since P is the unique prime ideal of S lying above D, it follwos that Dσ 6= D. dec Clearly Dσ is a prime ideal of S . Since Dσ lies above the maximal ideal dec P of R, then by Proposition 5, we know that Dσ is a maximal ideal of S . Now fix some x ∈ Sdec. We seek an element α ∈ R such that x ≡ α mod D. By the Chinese Remainder Theorem, there exists y ∈ Sdec such that y ≡ x mod D, y ≡ 1 mod Dσ for all σ ∈ G \ GP . −1 In particular, we have that y ≡ 1 mod σ P for all σ ∈ G \ GP and hence, σ(y) ≡ 1 mod P for all σ ∈ G \ GP . In summary, we have that y ≡ x mod P and σ(y) ≡ 1 mod P for all σ ∈ G \ G√.

Edec Now consider α = NK (y) ∈ R, since R is integrally closed in K. Let dec alg Edec eK (E ,K ) = {τ1, . . . , τm} so that NK (y) = τ1(y) . . . τm(y). Then for each 1 ≤ i ≤ m, there exists σi ∈ G such that τi = σi|Edec . Notice that if σ ∈ GP , then σ|Edec = id. Hence, we can suppose that σ1 = 1 and σi ∈ G \ GP for 2 ≤ i ≤ m. Edec dec It follows that α = NK (y) ≡ x mod P. Since α, x ∈ S , it follows that α ≡ x mod D, as required. Let S = S/P and R = R/P . For each α ∈ S, let α ∈ S be the corresponding element. Then, for each σ ∈ GP , the corresponding σ ∈ AutR S satisfies σ(α) = σ(α). Finally, for each f ∈ S[x], let f ∈ S[x] be the corresponding polynomial. More Hypotheses: From now on, we also suppose that R is a finite field. (This isn’t really essential, but it allows us to ignore separability issues.) Theorem 7.22. 1. S is a finite Galois extension of R with [S : R] ≤ [E : K]

2. The homomorphism GP → AutR S : σ 7→ σ is surjective. Proof. 1. It is enough to show that [S : R] ≤ [E : K]. Let α ∈ S and consider a corresponding α ∈ S. Let f(x) = Irr(α, K, x). Then deg f ≤ [E : K]. Since the roots of f are integral over R, it follows that the coefficients of f are also integral over R. Since R is integrally closed in K, it follows that f(X) ∈ R[x]. Clearly f(α) = 0, thus f ∈ R[x] is a polynomial of degree ≤ [E : K] such that f(α) = 0. Hence [S : R] ≤ [E : K].

63 dec dec 2. Since GP = AutEdec E and S /D = R/P , we can suppose that K = E and G = GP . Let α ∈ S satisfy S = R(α). Let f(x) = Irr(α, K, x). Then we have already seen that f(x) ∈ R[x] so we can form f(x) ∈ R[x]. Now

notice that any τ ∈ AutR S is uniquely determined by its effect on α and that α must also be a root of f(x). Let β = τ(α). Qm Since the roots of f(x) are integral over R, we see that f(x) = i=1(x − αi), α1 = α, splits into linear factos in S[x]. Applying the canonical Qm homomorphism, we obtain that f(x) = i=1(x − αi) has a corresponding splitting in S[x]. In particular, there exists 1 ≤ i ≤ m such that αi = β. Let σ ∈ G = GP satisfy σ(α) = αi. Then σ(α) = αi = β and so σ = τ.

Theorem 7.23. With the above hypotheses, suppose further that there exists a monic irreducible f(x) ∈ R[x] such that E is the splitting field of f over K and alg f(x) has no multiple roots in R . Then:

1. The homomorphism GP → AutR S is an isomorphism. 2. S is the splitting field of f over R.

Proof. 1. Let {α1, . . . , αn} ⊆ S be the roots of f(x) and let {α1,..., αn} ⊆ S be their reductions mod P. For any σ ∈ GP , we have σ(αI ) = σ(αi) 1 ≤ i ≤ n. hence, if σ = 1, then σ = 1.

2. Clearly S contains the splitting field E = R[α1,..., αn] of f over R. Because GP → AutR S surjectively, we see that AutE S = 1, and so E = S.

Warning: Consider the irreducible monic f(x) = x2 + 3 ∈ Z[x]. Reducing 2 2 mod 2, we have that f(x) = x + 1 = (x + 1) ∈ F2[x], so the splitting field of f over is . Let P = (2) and let S be the integral closure of in the splitting F2 F2 √ Z √ −1+ −3 field E = Q( −3). Recall that S = Z[ω] where ω = 2 . Also, P = 2S is the unique prime ideal of S lying above P , thus S = S/P = . Extended Example Consider the irreducible polynomial f(x) = x4 + 5x2 + 10x − 15 ∈ Z[x]. Let E be the splitting field and G e the Galois group. It is easily checked that f(x) has exactly two real roots. It follows that G 6≤ A4. Reducing modulo 3, we obtain the following decomposition into irreducibles 4 2 3 x + 2x + x = x(x + 3x + 1) ∈ F3[x]. Thus, G contains a 3 cycle. Since G is a transitive subgroup of S4, it contains every 3 cycle, and so A4 ≤ G, so G = S4. Let S be the ring of algebraic integers in E. Consider the ideal P = (3) of Z and let P be a maximal ideal of S which lies above P . Since S = S/P' F33 . We have that GP ' C3. Hence [G : GP ] = 8. Thus, there are exactly 8 maximal ideals of S which lie over P and [Edec : Q] = 8. Question: How many maximal ideals of Sdec lie above P ?

64 Answer: Suppose that D0 is any maximal ideal of Sdec which lies above P . By the Lying Over Theorem, there exists a maximal ideal R of S which lies over D0, and clearly R lies over P . Thus, we must compute the size of {Sdec ∩ R : R is a maximal idela of S lying over P }. Let R be any such ideal of S. Then either GR = GP or GR ∩ GP = 1. Let R = σP for some σ ∈ G. Then GR = GP iff σ ∈ NG(GP ). Hence the number of such ideals R is [NG(GP ),GP ] = [S3 : A3] = 2, so there are two such ideals, P and P0. Then D = P ∩ Sdec 6= P0 ∩ Sdec = D0, since P is the unique maximal ideal of S which lies above D = P ∩ Sdec. Now suppose that R satisfies GR ∩ GP = 1, then {σR : σ ∈ GP } consists of three ideals, each lying over R ∩ Sdec. Hence the number of maximal ideals of Sdec lying above P is 2 + (8 − 2)/3 = 4. 4 2 2 2 If we reduce modulo 2 instead, we obtain x +x +1 = (x +x+1) ∈ F2[x], so it is unclear what is S/P where P lies above (2). We also don’t know what GP is. 4. N¨otherianRings and Modules Definition 7.16 (N¨otherianModule). Let R be a ring and let M be an R- module. Then M is Noetherian iff M satisfies the ascending chain condition (ACC) on submodules. IE, if M1 ⊆ M2 ⊆ ... is an increasing chain of submod- ules, then there is an n such that M` = Mn for all ` ≥ n. Definition 7.17 (N¨otherianRing). The ring R is N¨otherianiff R is a N¨otherian R-module. Theorem 7.24. If R is a ring and M is an R-module, then the following are equivalent:

1. M is N¨otherian. 2. M satisfies the maximum condition for submodules: ie, whenever S is a nonempty set of submodules of M, then S contains a maximal element under inclusion. 3. Every submodule of M is finitely generated.

Proof. (1) ⇒ (2): Let S be a nonempty collection of submodules of M. Suppose S doesn’t contain a maximal element. Then we can inductively choose Mn ∈ S such that M0 ( M1 ( ..., contradiction. (2) ⇒ (3): Let N be a submodule of M. Let S = {A : A is a finitely generated submodule of N}. Then S has a maximal element A. We claim that A = N. Otherwise, there exists b ∈ N \A and so A ( A+Rb ∈ S, contradiction. (3) ⇒ (1) suppose that M0 ⊆ M1 ⊆ ... is a chain of submodules of M. Then N = ∪nMn is a submodule of M and so is finitely generated. Then N is finitely generated by, say, a1, . . . , at. There exists n such that a1, . . . , at ∈ Mn. Clearly M` = Mn for all ` ≥ n.

65 Corollary 7.25. A ring R is N¨otherianiff every ideal is finitely generated. Example: In particular, every PID is Noetherian. Question: Suppose that E is a field such that [E : Q] < ∞. Let R be the ring of algebraic integers in E. Is R N¨otherian?

Proposition 7.26. 1. If M is a N¨otherian R-mdoule, then every submodule of M and every quotient of M is also Noetherian. 2. Suppose that M is an R-module and N is a submodule. If N and M/N are N¨otherian,then M is also N¨otherian.

Proof. 1. Obvious. 2. Claim: Suppose that E ⊆ F are submodules of M. If E ∩ N = F ∩ N and (E + N)/N = (F + N)/N then E = F . To prove the claim, we suppose that x ∈ F . Then there exists y ∈ E and u, v ∈ N such that x + u = y + v. Hence x − y = v − u ∈ F ∩ N = E ∩ N. Thus x = (x − y) + y ∈ E. Now, we let M0 ⊆ M1 ⊆ ... be an increasing chain of submodules of M. Consider teh associated chains M0 ∩N ⊆ ... and (M0 +N)/N ⊆ ... of submod- ules of N, M/N respectively. Since N and M/N are N¨otherian,there exists n such that for all ` ≥ n, M` ∩ N = Mn ∩ N and (M` + N)/N = (Mn + N)/N. Hence M` = Mn for all ` ≥ n.

Corollary 7.27. If M1,...,Mn are N¨otherian R-modules, then M1 ⊕ ... ⊕ Mn is also N¨otherian. Suppose that M is an R-module, and M1,...,Mn are R-modules such that M = M1 + ... + Mn. If M1,...,Mn are N¨otherian,then M is N¨otherian.

Proof. Arguing by induction, we can suppose that n = 2. Then M1 and (M1 ⊕ M2)/M1 ' M2 are N¨otherian,and hence M1 ⊕ M2 is. We can define a surjective homomorphism from M1 ⊕...⊕Mn to M1 +...+ Mn = M by (m1, . . . , mn) 7→ m1 + ... + mn. The result follows. Proposition 7.28. If R is a N¨otherianring and M is a finitely generated R-module, then M is N¨otherian,

Proof. Let x1, . . . , xn generate M. Then define a surjective homomorphism R ⊕ ... ⊕ R → M by (r1, . . . , rn) 7→ r1x1 + ... + rnxn. Since R ⊕ ... ⊕ R is N¨otherian,the result follows.

Theorem 7.29. Let K be a field such that [K : Q] < ∞ and let R be the ring of algebraic integers in K. Then R is N¨otherian.

Proof. Note that it is enough to show that R is finitely generated as a Z-module. For then, by Prop 10, R is a N¨otherian Z-module and hence is also a N¨otherian R-module. n n−1 Choose γ ∈ K such that K = Q(γ). Let Irr(γ, Q, x) = x + qn−1x + n ... + q0, the qi ∈ Q. Clearing denominators, a0, . . . , an−1 ∈ Z such that tγ + n−1 an−1γ + ... + a0 = 0. Thus, α = tγ ∈ R and K = Q(α).

66 √ Warning:√ It doesn’t follwo that R = Z[α]. For example, let K = Q( −3). Then Z[ −3] ( R. Note that [K : Q] = [Q(α): Q] = n. Let α = α1 . . . , αn be the distinct Q conjugates of α in the normal closure E of K. Let ∆ = i

Corollary 7.31. If R is N¨otherian,then R[x1, . . . , xn] is N¨otherian.

Corollary 7.32. Let R be a N¨otherianring and let S = R[a1, . . . , an] be an extension ring which is finitely generated as a ring over R. Then S is also N¨otherian.

Proof. We can define a surjective homomorphism f : R[x1, . . . , xn] → R[a1, . . . , an]. By the homework, the result follows. We will now prove the Hilbert Basis Theorem:

Proof. Let I be an ideal of R[x]. For each n ≥ 0, let Jn be the set of all leading coefficients an of polynomails of degree n such that n (*) anx + ... + a0 ∈ I. n+1 Clearly, Jn is an ideal of R. Furthermore, if (∗) holds, then anx + ... + n a0x = x(anx + ... + a0) ∈ I, and so J0 ⊆ J1 ⊆ .... As R is N¨otherian,then

67 there exists t such that J` = Jt for all ` ≥ t. For each 0 ≤ ` ≤ t, let a`1, . . . , a`s` be generators of J` and let f`i ∈ I be a polynomial of degree ` with leading coefficients a`i. We claim that X = {f`i : 0 ≤ ` ≤ t, 1 ≤ i ≤ s`} generates I. We shall prove by induction of d = deg f that if f ∈ I then f ∈ (X). First, suppose that d = 0 and that f = r ∈ I ∩ R = J0.

Then clearly, r ∈ (a01, . . . , a0s0 ) ⊆ (X). Now suppose that d > 0 and that the result holds for all 0 ≤ k < d. Let f ∈ I be a polynomial of degree d with leading coefficient r. First suppose that d ≤ gt. Then r ∈ J and so r = r a + . . . r a for d 1 d1 sd dsd some r ∈ R. Thus the polynomial r f + . . . r f ∈ (X) ⊆ is a polynomial i 1 d1 sd dsd Psd of degree d with leading coefficient r. Hence f − i=1 rifdi ∈ I has degree at most d − 1 and so by induction hypothesis lies in (X). It fllows that f = Psd Psd (f − i=1 rifdi ) + i=1 rifdi ∈ (X).

Finally, suppose that d > t. Then r ∈ Jd = Jt and so r = r1at1 +...+rst atst d−t sor some ri ∈ R. Thus the polynomial r1x ft1 + ... + rst ftst ∈ (X) ⊆ I has leading coefficient r and degree d. Arguing as in the previous case, we see inductively that f ∈ (X). An application... Definition 7.18 (Affine n-space). Let k be any field and n ≥ 1. kn is called affine n-space over K.

Let k[~x] = k[x1, . . . , xn]. Definition 7.19 (Algebraic Set). A subset V ⊂ kn is algebraic iff there is a n subset S = {pi(~x): i ∈ I} ⊆ k[~x] such that V = V (S) = {~a ∈ k : pi(~a) = 0 for all i ∈ I}.

4 Example: We can regard SL2(k) as an algebraic subset of k defined by identifying it with {(a, b, c, d): ad − bc = 1}, which is defined by the single polynomial p(~x) = x1x4 − x2x3 − 1. Suppose that V = V (S) is an algebraic set and that I is the ideal of k[~x] generated by S. Clearly, if f(~x) ∈ I then f(~a) = 0 for all ~a ∈ V = V (S), and so V (S) ⊆ V (I). Also since I ⊇ S, it is clear that V (I) ⊆ V (S). Thus V (S) = V (I). Thus we have a “natural” surjective map from ideals of k[~x] to algebraic subsets of kn taking I to V (I). The map is NOT injective, however, as V (x) and V (x2) are both {0}. We can also define a “natural” map from the algebraic subsets of kn to ideals of k[~x] by V 7→ J(V ) where J(V ) is the ideal of all polynomials vanishing on V . Definition 7.20 (Radical of an Ideal). If I is an ideal of the ring R, then the radical of I is the ideal Rad I = {r ∈ R : rn ∈ I for some n ≥ 1}. The ideal I is a radical ideal iff I = Rad I. Example: If V ⊂ kn is an algebraic set, then J(V ) is a radical ideal. Hence, we actually “natural” map from algebraic subsets of kn to radical ideals of k[~x] by V 7→ J(V ).

68 This map must be injective, because V = V (J(V )). By definition, V ⊆ V (J(V )). Let I be an ideal such that V = V (I). Then I ⊆ J(V ) and so V = V (I) ⊇ V (J(V )), hence V = V (J(V )). Remark: If k is an arbitrary field, then the map need not be surjective. (x2+1) ⊂ R[x]. Howeer, the correspondence is a bijection when k is algebraically closed. This is the content of Hilbert’s Nullstellensatz. Theorem 7.33. Let k be any field and n ≥ 1.

1. If V ⊆ kn is an algebraic set, then there exists a finite subset of polynomials S ⊂ k[x1, . . . , xn] such that V = V (S).

2. If V1 ⊇ V2 ⊇ ... ⊇ Vn ⊇ ... is a descending sequence of algebraic subsets n of k , then there exists t such that V` = Vt for all ` ≥ t.

Proof. 1. Let I be an ideal such that V = V (I) since k[x1, . . . , xn] is N¨otherian, I is finitely generated, say, by S. Clearly V = V (S).

2. Consider J(V1) ⊆ .... Then there is a t such that J(Vt) = J(V`) for all ` ≥ t. Hence Vt = V (J(Vt)) = V (J(V`)) = V` for all ` ≥ t.

Remark: The following variant of the first part of the theorem is often useful: If S is a subset of k[x1, . . . , xn], then there exists a finite subset S0 such that V (S) = V (S0). Definition 7.21 (Variety of Representations). Let G be a finitely generated group with fixed generating set T = {t1, . . . , tr}. Let R = {W (y1, . . . , yr): W is a word such that W (t1, . . . , tr) = 1} be the corresponding set of relations. Then, the corresponding variety of representations of (G, T ) in SL2(C) is 4r r the algebraic subset of C defined by VG,T = {(M1,...,Mr) ∈ SL2(C) : W (M1,...,Mr) = 1 for all W ∈ R} Remark: To see that this is an algebraic set, we must check that the condition W (M1,...,Mr) = 1 corresponds to a set of polynomial equations. Let Rep(G, SL2(C)) be the set of all group homomorphisms π : G → SL2(C). Then we can define a bijection Rep(G, SL2(C)) → VG,T by π 7→ (π(t1), . . . , π(tr)). Of course R is infinite, and since most groups are not finitely presented, we apparently need infintiely many words to define VG,T , except...by the Hilbert Basis Theorem

Theorem 7.34. With the above hypotheses, there exists a finite R0 ⊆ R such r that VG,T = {(M1,...,Mr) ∈ SL2(C) : W (M1,...,Mr) = 1 for w ∈ R0}. This is not a contradiction, though the least it can tell us is that most finitely generated groups do not have faithful linear representations.

69 Definition 7.22 (Residually Free). A group G is residually free iff for all 1 6= g ∈ G, there exists a homomorphism π : G → F into a free group F such that π(g) 6= 1. Some examples/remarks: Clearly, every residually free group is torsion-free, and every free group is residually free. If G1,...,Gn are residually free, then G1 ⊕ G2 ⊕ ... ⊕ Gn is residually free.

Proof. Let 1 6= g = (g1, . . . , gn) ∈ G1 × ... × Gn, then there exists i such that gi 6= 1, hence there is a homomorphism G1 × ... × Gn → Gi → F such that (π ◦ pi)(g) = π(gi) 6= 1.

φ1 φ2 Theorem 7.35. Suppose that G1 → G2 → ... → Gn → ... is a seqeunce of surjective homomorphisms between finitely generated residually free groups, then there exists t such that φe  is an isomorphism for all ` ≥ t. Basis Facts about Free Groups:

 1 2   1 0  1. The matrices , free generate a free group on two gen- 0 1 2 1 erators.

2. The commutator subgroup of the free group F2 on 2 generators is a free group on infinitely many generators. Hence, if G is a finitely generated residually free group and 1 6= g ∈ G, then there exists π : G → F2 such that π(g) 6= 1.

 1 2   1 0  π : G → F = , ≤ SL ( ). 2 0 1 2 1 2 C Remark: There exists a finitely generated residually free group which isn’t finitely presented.

Proof. Recall that F2 × F2 is residually free. Hence if G ≤ F2 × F2, then G is also residually free. By Google, there exists a finitely generated subgrtoup G ≤ F2 × F2 which is not finitely presented. We will now prove Theorem 15.

Proof. Let T1 = {t1, . . . , tr} be a generating set for G1; and for n ≥ 1, let (n) (n) Tn = {t1 , . . . , tr } be the image of T1 under the surjective homomorphism G1 → ... → Gn. Then Tn is a set of generators for Gn. For each n ≥ 1, let r 4r Vn = VGn,Tn ⊆ (SL2(C) ) ⊆ C . (n) (n) (n+1) (n+1) Since φn[Tn] = Tn+1, for each word w if w(t1 , . . . , tr ) = 1, then w(t1 , . . . , tr ) = 1. Thus, the corresponding sets of relations satisfy Rn ⊆ Rn+1; and so the algebraic sets satisfy Vn ⊇ Vn+1. Hence V1 ⊇ ...Vn ⊇ .... By Theorem 13,

70 there exists n such that V` = Vn for all ` ≥ n. hence it is enough to prove the following: If φk : Gk → Gk+1 isn’t an ismorphism, then Vk ) Vk+1. Since φk : G → Gk+1 is not an ismorphism, there exists a word such that (k) (k) (k+1) (k+1) g = w(t1 , . . . , tr ) 6= 1 but φk(g) = w(t1 , . . . , tr ) = 1. In particular, w ∈ Rk+1. Since Gk is residually free, there exists a homomrophism π : Gk → F2 such that π(g) 6= 1. We can suppose that F2 ≤ SL2(Z) ≤ SL2(C). Thus (k) (k) (k) (k) (k) (k) (π(t1 ), . . . , π(tr )) ∈ Vk. However, since w(π(t1 ), . . . , π(tr )) = π(w(t1 , . . . , tr )) = (k) (k) π(g) 6= 1 and w ∈ Rk+1, we see that (π(t1 ), . . . , π(tr )) ∈/ Vk+1. And so the claim is proved. 5. Transcendence Bases Definition 7.23 (Algebraically Dependent). Let F be an extension field of K and let S ⊆ F . Then S is algebraically dependent over k iff there exist distinct s1, . . . , sn ∈ S and 0 6= p(~x) ∈ k[x1, . . . , xn] such that p(s1, . . . , sn) = 0. Otherwise, S is algebraically independent. Examples: √ 1. { 2} is algebraically dependent over Q. 2. {π} is algebraically independent over Q. √ 3. {π, π} are algebraically dependent over Q. To see this, let p(x1, x2) = 2 √ x1 − x2 ∈ Q[x1, x2]. Then p(π, π) = 0.

4. Let F = k(y1, . . . , yn) be the field of rational functions in the variables {y1, . . . , yn}. Then {y1, . . . , yn} is algebraically independent over k.

Definition 7.24 (Transcendence Basis). A subset S ⊆ F is a transcendence basis of F over K iff S is a maximal independent subset.

Remark: Transcendence Bases always exist. Why? Because if {Si : i ∈ I} is an increasing chain of independent sets, then ∪iSi is also independent, because of the finitary nature of dependence. Now apply Zorn’s Lemma. Example: Let F = k(y1, . . . , yn) be a field of rational functions. Then {y1, . . . , yn} is a transcendence basis. Theorem 7.36. Let F be an extension of K and S ⊆ F be algebraically inde- pendent over K. Then if u ∈ F \ K(S), then the following are equivalent:

1. S ∪ {u} is algebraically independent.

2. u is transcendental over K(S).

Proof. 2 ⇒ 1: Assume u is transcendental over K(S). SUppose there exist distinct s1, . . . , sn−1 ∈ S and f(~x) ∈ k[x1, . . . , xn] such that f(s1, . . . , sn−1, u) = 0. Then u is a root of the polynomial f(s1, . . . , sn−1, xn) ∈ K(S)[xn].

71 r r−1 Express f = hrxn + hr−1xn + ... + h0 where hi ∈ k[x1, . . . , xn−1]. Since u is transcendental over K(S), we have that hi(s1, . . . , sn−1) = 0 for 0 ≤ i ≤ r. Since S is algebraically independent over K, it follows that hi = 0 for 0 ≤ i ≤ r. Thus f = 0. hence S ∪ {u} is algebraically independent over K. 1 ⇒ 2: Assume that S ∪ {u} is algebraically independent over K. Suppose Pn i f(x) = i=0 aix ∈ k(S)[x] is such that f(u) = 0. Then there exists a finite subset {s1, . . . , sr} ⊆ S such that ai ∈ K(s1, . . . , sr) for 0 ≤ i ≤ n. Let fi, gi ∈ k[x1, . . . , xr] be such that ai = fi(s1, . . . , sr)/gi(s1, . . . , sr). Define g = g0g1 . . . gn and for 0 ≤ i ≤ n, let f i = gfi/gi = fig0 . . . gi−1gi+1 . . . gn ∈ k[x1, . . . , xr]. Notice that for 0 ≤ i ≤ n, ai = f i(s1, . . . , sr)/g(s1, . . . , sr), and −1 Pn i so f(x) = g(s1, . . . , sr) i=0 f i(s1, . . . , sr)x . Pn i Let h(x1, . . . , xr, x) = i=1 f (x1, . . . , xr)x . Since f(u) = 0 and g(s1, . . . , sr) 6= 0, it follows that h(s1, . . . , sr, u) = 0. Since S ∪ {u} is algebraically independent over K, it follows that h = 0. And so, f i = 0 for 0 ≤ i ≤ n. Hence, ai = 0 for 0 ≤ i ≤ n, and so f = 0. Hence, u is transcendental over K(S).

Corollary 7.37. Let F be an extension of K and S ⊆ F be algebraically inde- pendent over K. Then TFAE:

1. S is a transcendence basis for F over K. 2. F is algebraic over K(S).

An ? Vector Spaces over K

1. k-linear span hSi of S. 2. Basis = Independent Generating Set

Field extension F over K

1. K-algebraic closure of S, ie, K(S)alg ∩ F 2. Transcendence Basis = Independent Generating Set

Question: Suppose S, T ⊆ F are transcendence bases over K. Does |S| = |T |? Notation: From now on, we fix some extension F of K and write algK (S) = K(S)alg ∩ F .

Theorem 7.38. The closure operator algK (S) satisfies the following properties:

1. If S ⊆ F , then S ⊆ algK (S).

2. S, T ⊆ F and S ⊆ T then algK (S) ⊆ algK (T ).

3. If S ⊆ F then algK (algK (S)) = algK (S)

72 4. If S ⊆ F and a, b ∈ F satisfy b ∈ algK (S ∪ {a}) \ algK (S), then a ∈ algK (S ∪ {b}) \ algK (S). Proof. Homework

The above result says that (F, algK ) is a matroid. Remark: The most important axiom is clearly 4, which is called the exchange property. The other axioms 1,2,3 are satisfied by every natural closure operator. e.g., let G be a group, and for each S ⊆ G, let cl(S) be the subgroup generated by S. Definition 7.25. Let (X, cl) be a matroid.

1. The subset I ⊂ X is dependent iff there exists x ∈ I such that x ∈ cl(I \ {x}). Otherwise, I is independent. 2. The subset B ⊆ X is a basis iff B is an independent subset of X such that cl(B) = X.

Question: Does every matroid have a basis? Definition 7.26 (Dependent Sets and Bases). Let (X, cl) be a matroid.

1. The subset I ⊆ X is dependent iff there is x ∈ I such that x ∈ cl(I \{x}). Otherwise independent. 2. A basis is an independent set I such that cl(I) = X.

Warning: There exist matroids without bases. e.g. consider the closure operation cl on N defined by cl(S) = S if |S| < ∞, cl(S) = N if |S| = ∞. Remark: Of course, in the matroid (F, algK ), matroid independence is ex- actly the same as algebraic independence. Lemma 7.39 (5.7). Suppose that I ⊆ X is independent and x ∈ X \ I. Then TFAE: 1. I ∪ {x} is independent. 2. x∈ / cl(I). Proof. (1) ⇒ (2): By definition. (2) ⇒ (1): Suppose that I ∪ {x} is not independent. We claim that x ∈ cl(I). If not, there exists y ∈ I such that y ∈ cl(I \{y} ∪ u). Since I is independent, y∈ / cl(I \{y}). Thus y ∈ cl((I \{y}) ∪ {x}) \ cl(I \{y}). Hence x ∈ cl((Y \{y}) ∪ {y}) = cl(I) as required. As an immediate consequence, we obtain: Lemma 7.40. If I ⊆ X is independent then TFAE:

73 1. I is a basis 2. cl(I) = X.

Theorem 7.41. If B is a fintie basis of X, then every basis C satisfies |C| = |B|.

Proof. The result is clear if B = ∅. Hence, we can suppose that B = {b1, . . . , bn}. Let C be any basis of X. Claim: There exists c1 ∈ C such that {c1, b2, . . . , bn} is a basis of X. Since cl({b2, . . . , bn}) 6= X = cl(C). Thus there exists c1 ∈ C such that c1 ∈/ cl({b2, . . . , bn}). In particular, {c1, b2, . . . , bn} is independent. Also, c1 ∈ cl({b1, . . . , bn}) \ cl({b2, . . . , bn}) and so b1 ∈ cl({c1, b2, . . . , bn}). It follows that cl({c1, b2, . . . , bn}) = X. Similarly, there exists c2 ∈ C such that {c1, c2, b3, . . . , bn} is a basis. Con- tinuing in this fashion, we eventually obtain {c1, . . . , cn} ⊆ C which is a basis. Thus, C = {c1, . . . , cn}. Definition 7.27 (Finitary). The matroid (X, cl) is finitary iff whenever x ∈ cl(S), there exists a finite S0 ⊆ S such that x ∈ cl(S0).

Example: (F, algK ) is a finitary matroid. Lemma 7.42. If (X, cl) is a finitary matroid, then X has a basis. Lemma 7.43 (5.11). If (X, d) is a finitary matroid, then X has a basis. Proof. We can apply Zorn to the poset of independent subsets of X.

Theorem 7.44. If (X, cl) is a finitary matroid, then any two bases have the same cardinality. Proof. Let B,C be bases of X. We can suppose B is infinite. It follows that C is also infinite. For each b ∈ B, there exists a finite subset Cb ⊆ C such that b ∈ cl(Cb). Hence, cl(∪b∈BCb) = X and so C = ∪b∈BCb. Hence |C| ≤ |B|ℵ0 = |B|. Similarly, |B| ≤ |C| and so |B| = |C|. Definition 7.28 (Transcendence Degree). If F is an extension field of K then the transcendence degree is tr dim F/K = |S| where S is any transcendence basis of F over K. If K is the prime subfield of F , we write tr dim F .

Theorem 7.45. Suppose F,E are algebraically closed fields. Then TFAE:

1. F ' E 2. char F = char E and tr dim E = tr dim F .

74 Proof. (1) ⇒ (2): Obvious. (2) ⇒ (1): We can suppose that E and F are both extensions of the same prime field K. Let tr dim E = tr dim F = λ, and let X = {xα : α < λ} and Y = {yα : α < λ} be transcendence bases of E,F . Then we can define an isomorphism ϕ : k(X) → k(Y ): xα 7→ yα. This extens to an isomorphism ϕ : F = K(X)alg → K(Y )alg = E. Corollary 7.46. Fix some characteristic p ≥ 0.

1. There are exactly ℵ0 countable algebraically closed fields of char p up to isomorphism. 2. If κ is any uncountable cardinal, there exists a unique algebraically closed field of char p and cardinality κ up to isomorphism.

Proof. 1. For each 0 ≤ n ≤ ℵ0, there exists a unique algebraically closed field of characteristic p and transcendence degree n up to isomorphism.

2. It is easily checked if F is any uncountable field, then tr dim F = |F |. The result follows.

6. The Hilbert Nullstellensatz Let K be a field and let F be a field extension. If I is an ideal of K[x1, . . . , xn], n then VF (I) = {~a ∈ F : p(a) = 0 for all p ∈ I}. In this section, we prove the following: Theorem 7.47 (Hilbert Nullstellensatz). Suppose F is an algebraically closed extension of K. If I is a proper ideal of K[x1, . . . , xn], then VF (I) 6= ∅. As we will see, the above theorem is equivalent to:

Theorem 7.48. Let K be a field and let K[a1, . . . , an] be a finitely generated ring extension. If K[a1, . . . , an] is a field, then K[a1, . . . , an] is algebraic over K. We will prove the equivalence:

Proof. Proof that the above implies Nullstellensatz: Let I be a proper ideal of K[x1, . . . , xn]. Then there exists a maximal M such that M ⊇ I. Consider the field k[x1, . . . , xn]/M = k[a1, . . . , an] where ai = xi + M. By assumption, k[~a] is algebraic over K. Since F is algebraically closed, there exists an embedding ϕ : k[~a] → F over F . Then clearly (ϕ(a1), . . . , ϕ(an)) n is a zero of I in F . Thus VF (I) 6= ∅. The other implication: Let k[a1, . . . , an] be a field which is finitely gen- eratoed over K as a ring extension. Consider the ring homomorphism ϕ : k[x1, . . . , xn] → k[a1, . . . , an] by xi 7→ ai. Then ker ϕ = M is a maximal ideal of K[x1, . . . , xn]. By the Nullstel- alg n lensatz, M has a zero (ξ1, . . . , ξn) ∈ (K ) . Consider teh homomorphism

75 alg ψ : K[x1, . . . , xn] → K , xi 7→ ξi. Then M ⊆ ker ψ, and so ker ψ = M. Thus alg K[a1, . . . , an] ' K[ξ1, . . . , ξn]. Since each ξi ∈ K , it follows that K[~a] is an algebraic extension of K. We will next prove theorem 22.

Lemma 7.49. Let R ⊆ S ⊆ T be rings such that R is N¨otherianand T = R[t1, . . . , tn] is finitely generated as a ring over R. If T is finitely generated as an S module, then S is also finitely generated as a ring over R.

Proof. :et {w1, . . . , wm} be a finite system of generators of the S-module T , ij which includes t1, . . . , tn}. Then for each 1 ≤ i, j ≤ m there exist a` ∈ S for Pm ik 1 ≤ ` ≤ m such that wiwk = `=1 a` w`. 0 ik 0 Consider the ring S = R[{a` : 1 ≤ i, k, ` ≤ m}]. Then T = S w1 + ... + 0 S wm. Since every product of powers of {t1, . . . , tn} lies in the right hand side. Since S0 is a finitely generated ring extension of a N¨otherianring, it follows that S0 is also N¨otherian.Since T is a finitely generated S0-module, it follows that T is a N¨otherian S0-module. Since S0 ⊆ S ⊆ T , it follows that S is a finitely generated S0-module. Since S0 is finitely generated as a ring extension of R, it follows that S is also finitely generated as a ring extension of R.

Lemma 7.50. Let E = K(z1, . . . , zt) be the field of rational functions in the variables z1, . . . , zt where t ≥ 1. Then E is NOT finitely generated as a ring over K.

Proof. Suppose that X = {f1/g1, . . . , fs/gs} satisfies E = K[X], where fi, gi ∈ k[z1, . . . , zn]. Let p be any irreducible polynomial which doesn’t divide any of 1 teh gi. Then clearly p ∈/ K[X], contradiction. We will now prove Theorem 22.

Proof. Let K[a1, . . . , an] be a field which is finitely generated over K as a ring. Suppose that k[~a] isn’t algebraic over K. Let {z1, . . . , zt} be a transcendence basis of K[~a] over K where t ≥ 1, and let S = K(z1, . . . , zt). Then K[~a] is a finitely generated algebraic extension of S. And so, K[~a] is finitely generated as an S-module. Hence, applying Lemma 8, we see that S is finitely generated as a ring over K. This contradicts Lemma 9. (The above method of proof of the Nullstellensatz is due to Zariski and Artin)

Theorem 7.51. Let F be an algebraically closed extension of K and let I be an ideal of K[x1, . . . , xn]. If f ∈ K[x1, . . . , xn], then TFAE:

1. f(~a) = 0 for all ~a ∈ VF (I). 2. f ∈ Rad I

76 Proof. 2 ⇒ 1 : Obvious. 1 ⇒ 2 : [Rabinowitz’s Trick] We can suppose that f 6= 0. Consider the ideal J of K[x1, . . . , xn, y] generated by I ∪ {1 − yf}. Then clearly VF (J) = ∅, and so J = K[x1, . . . , xn, y], hence, there exist polynomials gi ∈ k[x1, . . . , xn, y] P 1 and hi ∈ I such that g0(1 − yf) + i gihi = 1. Substituting f for y in this n1 nr equation, we have a1/f h1 + ... + ar/f hr where ai ∈ K[x1, . . . , xn] and m ni ≥ 0. Clearing the denominators, we obtain f = b1h1 + ... + brhr for some m ≥ 0, bi ∈ k[x1, . . . , xn]. Thus, if m ≥ 1, then f ∈ Rad I. If m = 0, then 1 ∈ I, and so I = K[~x], and the result is trivial. In particular, suppose that k is algebraically closed and reconsider the in- n jective map from algebraic subsets of k to radical ideals of k[x1, . . . , xn] by V 7→ J(V ). We have seen that this is injective. Claim: The above map is also surjective. Proof. If J is a radical ideal of k[~x], then V (J) 7→ J.

8

Definition 8.1 (Category). A category is a class C of objects together with 1. A class of disjoin sets, hom(A, B), one for each pair of objects A, B ∈ C 2. For each triple, A, B, C ∈ C we have a function from hom(B,C) × hom(A, B) to hom(A, C), called composition of morphisms, which is

(a) Associative, that is, given f : A → B, g : B → C, h : C → D, we have (h ◦ g) ◦ f = h ◦ (g ◦ f)

(b) For each object B ∈ C , there is a morphism called 1B : B → B such that for all f : A → B and g : B → C, 1B ◦ f = f and g ◦ 1B = g.

Definition 8.2 (Equivalence). In a category C , a morphism f : A → B is an equivalence if there exists a morphism g : B → A such that f ◦ g = 1B, g ◦ f = 1A. If there is an equivalence from A to B, we say that A and B are equivalent.

Definition 8.3 (Products). Let C e a category and {Ai : i ∈ I} a family of objects of C . A product for the family is an object P of C together with maps πi : P → Ai such that if B ∈ C has maps fi : B → Ai, then there is a unique map g : B → P such that fi = πi ◦ g. That is, fi factors through πi. B ...... f ...... 2 ...... g ...... f ...... 1 ...... π ...... 2 .... A1 ...... P ...... A2 π1

77 Lemma 8.1. If (P, {πi}) and (Q, {ψi}) are both products of the family {Ai : i ∈ I} then P and Q are equivalent.

Proof. Since P is a product, ∃!g : Q → P such that ψi = πi ◦ g and since Q is a product, ∃!f : P → Q such that πi = ψi ◦ f. Then πi = πi ◦ g ◦ f for all i ∈ I. So g ◦ f : P → P must be the unique option making this commute, so g ◦ f = 1P . Similarly, f ◦ g = 1Q. Products are defined using a .

Definition 8.4 (Universal Object). An object I in C is universal (or initial) if ∀C ∈ C there is a unique morphism f : I → C. Definition 8.5 (Couniversal Object). An object T in C is couniversal (or terminal) if ∀C ∈ C there is a unique morphism g : C → T . Lemma 8.2. If I and I0 are universal for C then I and I0 are equivalent. Similarly, two couniversal objects must be equivalent. Proof. Since I is universal, there is a unique morphism f : I → I0. Since I0 is universal, there is a unique morphism g : I0 → I. As I is universal, g ◦ f = h = 1I , as we know that the identity morphism exists. Similarly, f ◦ g = 1I0 .

Products are a special case. Given {Ai : i ∈ I} objects of some category C , define the category CA whose objects are the objects B of C with maps fi : B → Ai for each i. A morphism (B, {fi}) → (C, {gi}) in CA is a morphism h : B → C of C B ...... f2 ...... h ...... f1 ...... g2 ...... A1 ...... C ...... A2 such that fi = gi ◦ h for all i. That is, it satisfies g1 The product is then the couniversal object in CA.

Definition 8.6 (Covariant ). Let C and D be two categories. A covariant functor T : C → D is two functions, T : ob C → ob D and T : 0 0 mor C → mor D : hom(C,C ) 7→ hom(T (C),T (C )) such that T (1C ) = 1T (C) for all C ∈ C and T (f ◦ g) = T (f) ◦ T (g) Example: Let G be the category of groups. Fix g ∈ G . Then hom(G, −): G → Sets : H 7→ hom(G, H) is a covariant functor. If H → H0 is a homomorphism, then h˜ : hom(G, H) → hom(G, H0): ϕ 7→ h ◦ ϕ

78 Definition 8.7 (Contravariant Functor). Let C and D be two categories. A covariant functor T : C → D is two functions, T : ob C → ob D and T : 0 0 mor C → mor D : hom(C,C ) 7→ hom(T (C ),T (C)) such that T (1C ) = 1T (C) for all C ∈ C and T (f ◦ g) = T (g) ◦ T (f)

Definition 8.8 (). Let C be a category. The opposite cate- gory has objects ob C and morphisms hom(Aop,Bop) = hom(B,A) There is a contravariant functor from C → C op. And given a contravariant functor C → D, we can define a covariant functor C op → D.

Definition 8.9 (). Let C , D be categories and let S, T : C → D be covariant . A natural transformation α : S → T is a function that assigns to each object 0 of C a morphism αC : S(C) → T (C) of D such that ∀f : C → C of C the following diagram commutes. αC ... S(C) ...... T (C) ...... S(f) . T (f) ...... 0 0 αC ... 0 S(C ) ...... T (C )

Definition 8.10 (). If C is a category whose objects are all sets and every morphism is function, then an object F ∈ C is free on a set X if ∃!ι : X → F a set inclusion such that ∀f : X → A of sets, for A ∈ ob C , ∃!f : F → A such that the following diagram commutes. f ... FA...... f ...... ι ...... X

9 Representation Theory

Recall that if G is a finite group with |G| = pn for some prime p, then Z(G) 6= 1. Hence, there exists some t ≥ 1 such that Zt(G) = G. Theorem 9.1. If G is a finite p-group, then G is .

In this section, we will prove: Theorem 9.2 (Burnside). If G is a finite group and |G| = paqb for some primes p, q, then G is solvable.

79 Example: Consider the D5. Then |D5| = 10 and Z(D5) = 1. Thus, D5 is not nilpotent. 2 Example: Consider A5 = 2 ∗ 3 ∗ 5. Basic Strategy: Show that G isn’t simple, assuming G isn’t cyclic of order p. Then the result follows by induction. Definition 9.1 (k-Representation). Let G be a finite group and let k be a field. Then a k-representation of G is a homomorphism π : G → GL(V ): g 7→ πg where 0 6= V is a finite dimensional vector space over k. The degree of π is deg π = dimk V . Remark: We usually just write representation. Examples: 1. A representation of degree 1 is just a homomorphism π : G → k∗.

2. Suppose that G acts on the finite set X. Let kX = ⊕x∈X kex. Then the corrsponding permutation representation is the homomorphism π : G → GL(kX) defined by πg(ex) = eg·x. Definition 9.2 (G-invariant Subspace). Let π : G → GL(V ) be a representa- tion. Then the subspace W ≤ V is G-invariant iff πg[W ] = W for all g ∈ G. Examples: 0,V are G-invariant subspaces - the trivial invariant subspaces. Definition 9.3 (Subrepresentation). If W is a nonzero G-invariant subspace, then the associated subrepresentation is the homomorphism π|W : G → GL(W ) by g 7→ πg|W . Definition 9.4 (Irreducible). The representation π : G → GL(V ) is irreducible iff there are no nontrivial G invariant subspaces. Example: If deg π = 1, then π is irreducible. Question: Is every representation a direct sum of irreducible representations? Theorem 9.3 (Maschke). Suppose that p = char k 6 ||G|. If π : G → GL(V ) is a k-representation, and W ≤ V is a nontrivial G- invariant subspace, then there exists a G-invariant subspace U < V such that V = W ⊕ U. Proof. Let S < V be any subspace such that V = W ⊕ S and let θ : V = W ⊕ S → W , that is, w + s 7→ w, be the associated projection. Then θ(w) = w for all w ∈ W and θ[V ] ⊆ W . Furthermore, if ψ ∈ homk(V,V ) satisfies these two conditions, then there exists a corrsponding de- composition V = W ⊕ (1 − ψ)[V ] where (1 − ψ)[V ] = {v − ψ(v): v ∈ V }. Claim: Suppose that πg(ψ(v)) = ψ(πg(v)) for all g ∈ G and v ∈ V . Then (1 − ψ)[V ] is G-invariant. Proof of Claim: For all g ∈ G and v ∈ V , we ave πg(v − ψ(v)) = πg(v) − πg(ψ(v)) = πg(v) − ψ(πg(v)) = (1 − ψ)[V ]. As πg is an invertible linear trans- formation, we must have that πg((1 − ψ)[V ]) = (1 − ψ)[V ], and so the claim holds.

80 Thus, it is enough to find a G-invariant projection. Let θ : V → W be our original projection. Define ψ : V → V by ψ(v) = 1 P −1 |G| g∈G πgθπg (v). This is the only point where we use the fact that p = char k 6 ||G|. −1 1 P −1 −1 1 P −1 If h ∈ G, then πhψπh = |G| g∈G πhπgθπg πh = |G| g∈G πhgθπhg = 1 P −1 |G| a∈G πaθπa = ψ and so πhψ = ψπh. Thus, it is enough to check that ψ satisfies conditions (i) and (ii). −1 −1 (i): If w ∈ W and g ∈ G then πg(θ(πg (w))) = πg(πg (w)) = w since −1 πg [W ] = W . 1 P −1 Hence ψ(w) = |G| g∈G πgθπg (w) = w −1 (ii): Finally, if v ∈ V is arbitrary and g ∈ G, then πG(θ(πg (v))) ∈ W , since θ[V ] ⊆ W . Thus ψ(v) ∈ W . Corollary 9.4. If char k 6 ||G| then every k-representation of G decomposes into a direct sum of irreducible representations.

i Here if π : G → GL(Vi), 1 ≤ i ≤ d, are representations then their direct sum 1 d is the representation φ = π ⊕...⊕π = G → GL(V1⊕...⊕Vd), φg(v1+...+vd) = 1 d πg (v1) + ... + πg (vd) where vi ∈ Vi and g ∈ G. Even more explicitly...if we choose a basis Bi of Vi, then each φg has the form of a block diagonal matrix, with respect to B = ∪iBi. Convention: From now on, we shall only consider representations over C. In particular, every representation will be a direct sum of irreducible representa- tions. A reminder of some linear algebra. Let Mn(C) be the ring of n × n matrices over C. For each A ∈ Mn(C), the characteristic polynomial is det(Ix − A) = |Ix − A|. The roots of the characteristic polynomial are called characteristic roots or eigenvalues. The trace is the sum of the diagonal entries which is the sum of the eigenvalues counted with multiplicities. Some basic results:

1. Suppose that A ∈ Mn(C) and B ∈ Mn(C) is invertible, then |Ix − BAB−1| = |B(Ix − A)B−1| = |Ix − A|. If particular, it makes sense to speak of the characteristic polynomial and trace of a linear transforma- tion f : V → V .

2. (Cayley-Hamilton) If A ∈ Mn(C) and p(x) = |Ix − A|, then p(A) = 0. In particular, the minimal polynomial divides the characteristic polynomial.

Theorem 9.5. 1. Suppose that A, B ∈ Mn(C) satisfy AB = BA and let α1, . . . , αn, β1, . . . , βn be the characteristic roots of A, B. Then, after renumbering the βi if necessary, the characteristic roots of AB are α1β1, . . . , αnβn.

2. If p(x) ∈ C[x] , then the characteristic roots of p(A) are p(α1), . . . , p(αn) −1 −1 −1 3. If A is invertible, then the characteristic roots of A are α1 , . . . , αn .

81 Definition 9.5 (Character). If π : G → GL(V ) is a representation, then the corresponding character is the function χπ : G → C by χπ(g) = tr(πg). The character χπ is irreducible iff π is irreducible. Lemma 9.6. If π : G → GL(V ) is a representation, then

1. χπ(1) = dimC V = deg π −1 2. χπ(g) = χπ(hgh ) for all g, h ∈ G. Proof. 1. Obvious.

2. Fix some basis B of V and let Mg be the corresponding matrix of πg. Then −1 −1 χπ(hgh ) = tr(MhMgMh ) = tr(Mg) = χπ(g).

In particular, each character is a , ie, it is constant on every conjugacy class of G.

Definition 9.6 (C`(G)). C`(G) = {f ∈ GC is a class function }.

Lemma 9.7. [(a)]

1. C`(G) is a vector space over C.

2. dimC C`(G) =the number of conjugacy classes. Proof. (a) is trivial. Clearly, the characteristic functions on the conjugacy classes of G form a basis. Remark, eventually we will show that the distinct irreducible characters form a basis of C`(G).

Lemma 9.8. Let π : G → GL(V ) be a representation.

1. If g ∈ G then χπ(g) is a sum of roots of unity. In particular, χπ(g) is an algebraic integer.

2. If g ∈ G then χπ(g) = χπ(g)

3. If g ∈ G, then |χπ(g)| ≤ deg π. Furthermore |χπ(g)| = deg π iff πg = λ idV for some λ ∈ C.

Proof. Fix some basis B of V and let Mg be the matrix of πg. Let d = dimC V = deg π.

n 1. Let g ∈ G have order n. Then Mg = I. Hence, by Theorem 4, every th characteristic root of Mg is an n root of unity. The result follows.

82 2. Let λ1, . . . , λd be the characteristic roots of Mg. Then the characteristic −1 −1 −1 −1 roots of Mg are λ1 , . . . , λd . Since the λ+i are roots of unity, λi = λi. The result follows.

3. Again, let λ1, . . . , λd be the characteristic roots of Mg. Since each λi is a root of unity, we have |λi| = 1. HEnce |χπ(g)| = |λ1 + ... + λn| ≤ |λ1| + ... + |λd| = deg π. Furthermore, if ξ, η ∈ C∗, then |ξ + η| ≤ |ξ| + |η|, and |ξ + η| = |ξ| + |η| iff η = rξ for some r ∈ R+. Thus, |χπ(g)| = deg π iff λ1 = ... = λd = λ ∈ C. Thus Mg satisfies two polynomial equations, xn − 1 = 0 and (x − λ)d = 0

Hence, Mg satisfies the gcd of these polynomials, which must be x − λ, since xn − 1 has no repeated roots.

Definition 9.7 (Intertwines). Suppose that π : G → GL(V ) and ρ : G → GL(W ) are representations, then the linear map f : V → W intertwines π and ρ iff for every g ∈ G, the following diagram commutes: f ... V ...... W ...... π . ρ . g . g ...... f . ... V ...... W ie, f is a homomorphism between two G-actions. Let homG(π, ρ) be the vector space of intertwiners between π and ρ. Examples: Suppose π : G → GL(V ) is a representation and that W ≤ V is a G-invariant subspace. Let f : V → W be a G-invariant projection. Then f ∈ homG(π, π|W ). We always have 0 ∈ homG(π, ρ). We always have that λ idV ∈ homG(π, π) for every λ ∈ C. Definition 9.8 (Equivalence). The represenations π and ρ are equivalent or isomorphic if there exists an invertible intertwiner between π and ρ. Theorem 9.9 (Schur’s Lemma). Suppose that π : G → GL(V ) and ρ : G → GL(W ) are irreducible representations.

1. if π and ρ aren’t equivalent, then homC(π, ρ) = 0. That is, the zero map f ≡ 0 is the only intertwiner between π and ρ.

2. If π and ρ are equivalent, then dim homC(π, ρ) = 1, ie, if wlog π = ρ, then the only interetwiners are λ idV for λ ∈ C.

83 Proof. [(a)]

Suppose dimC homG(π, ρ) 6= 0 and let f : V → W be a nonzero inter- twiner. Clearly ker f is a G-invariant subspace of V . Since ker f 6= V and π is irreducible, ker f = 0, and so f is injective. Similarly, Im f is a G-invariant subspace of W . Since Im f 6= 0 and ρ is irreduicble, it follows that Im f = W . Thus, f is invertible and ρ, π are equivalent. 1.2.Since π, ρ are equivalent, we can suppose that π = ρ. It is now enough to show that if f ∈ homG(π, π), then f = λ idV for some λ ∈ C. Let λ be an eigenvalue of f. Then the corresponding eigenspace U 6= 0 and is clearly G-invariant. Since π is irreducible, it follows that U = V and the result follows.

Corollary 9.10. Suppose that π : G → GL(V ) and ρ : G → GL(W ) are irreducible representations and let h : V → W be any linear map. We define ˜ 1 P −1 h = |G| g∈G ρg hπg. Then

1. If π and ρ are inequivalent, then h̃ = 0.

2. If π and ρ are equal, then h̃ = (tr(h)/dimC V) idV.

Proof. First note that h̃ intertwines π and ρ. If π, ρ are inequivalent, then h̃ ≡ 0 by Schur's Lemma. Now suppose that π, ρ are equal. Then, again by Schur's Lemma, h̃ = λ idV for some λ ∈ C. To evaluate λ, notice that λ dimC V = tr(h̃) = tr(h) (each term tr(πg⁻¹ h πg) equals tr(h)), and so λ = tr(h)/dimC(V).

Definition 9.9. We define a scalar product on C^G by ⟨ϕ, ψ⟩ = (1/|G|) Σ_{g∈G} ϕ(g)\overline{ψ(g)}.
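The averaging trick in Corollary 9.10 is easy to check numerically. Below is a sketch (not from the notes): we realize the 2-dimensional irreducible representation of S3 on the subspace V0 ⊂ C³ of coordinate-sum-zero vectors, average an arbitrary linear map h over the group, and confirm that the result is the scalar (tr(h)/dim V) id. The orthonormal basis Q of V0 is an arbitrary choice.

```python
import numpy as np
from itertools import permutations

def perm_matrix(p):
    # column i is e_{p(i)}
    P = np.zeros((len(p), len(p)))
    for i, j in enumerate(p):
        P[j, i] = 1
    return P

S3 = list(permutations(range(3)))

# Orthonormal basis of V0 = {x in C^3 : x1 + x2 + x3 = 0}
Q = np.array([[1, 1], [-1, 1], [0, -2]]) / np.array([np.sqrt(2), np.sqrt(6)])

def pi(p):
    # the 2-dim irreducible rep: restrict the permutation action to V0
    return Q.T @ perm_matrix(p) @ Q

rng = np.random.default_rng(0)
h = rng.standard_normal((2, 2))          # an arbitrary linear map V0 -> V0

h_tilde = sum(np.linalg.inv(pi(p)) @ h @ pi(p) for p in S3) / len(S3)

print(np.allclose(h_tilde, (np.trace(h) / 2) * np.eye(2)))   # True
```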

Theorem 9.11.

1. If χπ is an irreducible character, then ⟨χπ, χπ⟩ = 1.

2. If χπ, χρ are irreducible characters and π, ρ aren't equivalent, then ⟨χπ, χρ⟩ = 0.

Proof delayed.

Remark: Thus the distinct irreducible characters form an orthonormal set in C`(G). We shall soon see that they actually form an orthonormal basis. Already we see that there are only finitely many non-isomorphic irreducible representations, and this number is bounded above by the number of conjugacy classes.


Theorem 9.12. If π and ρ are irreducible representations, then ⟨χπ, χρ⟩ = 1 if π, ρ are equivalent, and 0 otherwise.

Proof postponed.

Corollary 9.13. Let π : G → GL(V ) be a representation and V = W1⊕...⊕Wk be a decomposition into irreducible representations.

If ρ : G → GL(W) is an irreducible representation, then the number of π|Wi which are isomorphic to ρ is given by ⟨χπ, χρ⟩.

Proof. Let πi = π|Wi . Then χπ = χπ1 + ... + χπk .

Hence ⟨χπ, χρ⟩ = ⟨χπ1 + ... + χπk, χρ⟩ = ⟨χπ1, χρ⟩ + ... + ⟨χπk, χρ⟩ = the number of i such that πi is isomorphic to ρ.

Corollary 9.14. Suppose that π : G → GL(V) and ρ : G → GL(W) are representations.

1. π and ρ are isomorphic iff χπ = χρ.

2. π is irreducible iff ⟨χπ, χπ⟩ = 1.

Proof. (1) is immediate from the previous corollary. (2) If π = ℓ1π1 + ... + ℓsπs is a decomposition into irreducible representations, then ⟨χπ, χπ⟩ = ℓ1² + ... + ℓs².
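Corollaries 9.13 and 9.14 give a purely numerical way to decompose a representation. As a sketch (assuming the standard character table of S3, which is not derived in this section), the code below decomposes the permutation character of S3 acting on {1, 2, 3} via inner products.

```python
# Conjugacy classes of S3: identity, transpositions, 3-cycles (sizes 1, 3, 2)
class_sizes = [1, 3, 2]
order = sum(class_sizes)                 # |S3| = 6

# Assumed irreducible characters of S3 on those classes (all real-valued)
chi_triv = [1, 1, 1]
chi_sign = [1, -1, 1]
chi_std  = [2, 0, -1]

# Permutation character of S3 on {1,2,3}: number of fixed points per class
chi_perm = [3, 1, 0]

def inner(chi, psi):
    # <chi, psi> = (1/|G|) * sum over classes of |C_k| chi(g_k) psi(g_k)
    # (no conjugation needed here since the values are real)
    return sum(c * a * b for c, a, b in zip(class_sizes, chi, psi)) / order

print(inner(chi_perm, chi_triv))   # 1.0 -> trivial appears once
print(inner(chi_perm, chi_sign))   # 0.0 -> sign does not appear
print(inner(chi_perm, chi_std))    # 1.0 -> the 2-dimensional irreducible appears once
```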

From now on, let π1, . . . , πh be the distinct irreducible representations of G, and let ni = deg πi.

Definition 9.11. Consider the action of G on itself by left multiplication and let ρ : G → GL(CG) be the corresponding permutation representation, known as the regular representation. Thus CG = ⊕_{g∈G} C eg, and for each g ∈ G we have ρg(et) = egt.

Lemma 9.15. If ρ is the regular representation of G, then χρ(g) = |G| if g = 1 and 0 if g ≠ 1.

Proof. If g = 1, then χρ(1) = dimC CG = |G|. If g ≠ 1, then ρg(et) = egt ≠ et, and so all the diagonal elements of the corresponding permutation matrix are zero.
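A sketch (not from the notes) checking Lemma 9.15 for a small group: build the regular representation of Z/4 by the permutation matrices of left multiplication and read off the traces.

```python
import numpy as np

n = 4
elements = list(range(n))                 # Z/4 = {0, 1, 2, 3} under addition mod 4

def rho(g):
    # left multiplication by g permutes the basis vectors: e_t -> e_{g+t}
    M = np.zeros((n, n))
    for t in elements:
        M[(g + t) % n, t] = 1
    return M

for g in elements:
    print(g, int(np.trace(rho(g))))       # trace is |G| = 4 at g = 0 and 0 otherwise
```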

Corollary 9.16. Every irreducible representation πi is contained in the regular representation ρ with multiplicity ni = deg πi.

Proof. The multiplicity of πi in ρ is given by

⟨χρ, χπi⟩ = (1/|G|) Σ_{g∈G} χρ(g)\overline{χπi(g)} = (1/|G|) Σ_{g∈G} χρ(g) χπi(g⁻¹) = (1/|G|) χρ(1) χπi(1) = (1/|G|) · |G| · deg πi = ni.

Corollary 9.17. |G| = n1² + ... + nh².

Proof. |G| = dimC CG = Σ_{i=1}^{h} ni deg πi = n1² + ... + nh².

Next we will reluctantly turn to the proof of Theorem 9.12.

Proof. Let π : G → GL(V) and ρ : G → GL(W) be (not necessarily distinct) irreducible representations. Fixing bases of V and W, let Ag = (aij(g)) and Bg = (bkℓ(g)) be the corresponding matrices of πg, ρg. (If π = ρ, we choose Ag = Bg.) Consider any linear map h : V → W and let C = (cℓi) be the corresponding m × n matrix, where dimC V = n and dimC W = m. Let D = (dkj) be the matrix corresponding to h̃ = (1/|G|) Σ_{g∈G} ρg⁻¹ h πg.

Then dkj = (1/|G|) Σ_{g∈G} Σ_{ℓ,i} bkℓ(g⁻¹) cℓi aij(g).

First suppose that π, ρ are not equivalent. Then h̃ is the zero map, and so each dkj = 0. Regard the RHS of the above as a linear form in the variables cℓi. Since the form vanishes identically (h was arbitrary), each coefficient is 0. Hence:

Claim 1: If π ≇ ρ, then for all k, ℓ, i, j, we have (1/|G|) Σ_{g∈G} bkℓ(g⁻¹) aij(g) = 0.

It follows that ⟨χπ, χρ⟩ = (1/|G|) Σ_{g∈G} χπ(g) χρ(g⁻¹) = (1/|G|) Σ_{g∈G} (Σ_i aii(g))(Σ_k bkk(g⁻¹)) = Σ_{i,k} (1/|G|) Σ_{g∈G} bkk(g⁻¹) aii(g) = 0.

Next suppose that π = ρ. Then we have that h̃ = (tr(h)/dimC V) id. Arguing as in the previous case, we see that if k ≠ j, then dkj = 0 for every h, and so for all ℓ, i, we have (1/|G|) Σ_{g∈G} akℓ(g⁻¹) aij(g) = 0.

In particular, Claim 2: If k ≠ j, then (1/|G|) Σ_{g∈G} akk(g⁻¹) ajj(g) = 0.

Now suppose that k = j and choose h such that cℓi = 1 if ℓ = i = k and 0 otherwise. Then tr(h) = 1. Hence the above formulae give Claim 3: for each 1 ≤ k ≤ n = dimC V, (1/|G|) Σ_{g∈G} akk(g⁻¹) akk(g) = 1/n.

Applying Claim 2 and Claim 3, we get ⟨χπ, χπ⟩ = (1/|G|) Σ_{g∈G} χπ(g) χπ(g⁻¹) = Σ_{i,k} (1/|G|) Σ_{g∈G} aii(g) akk(g⁻¹) = Σ_k (1/|G|) Σ_{g∈G} akk(g) akk(g⁻¹) = n · (1/n) = 1. This concludes the proof.
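The Schur orthogonality of matrix coefficients used in this proof can also be checked numerically. Reusing the standard 2-dimensional representation of S3 from the earlier sketch (the orthonormal basis Q is an arbitrary choice), the code below verifies Claim 2 and Claim 3 for that representation.

```python
import numpy as np
from itertools import permutations

def perm_matrix(p):
    P = np.zeros((len(p), len(p)))
    for i, j in enumerate(p):
        P[j, i] = 1
    return P

S3 = list(permutations(range(3)))
Q = np.array([[1, 1], [-1, 1], [0, -2]]) / np.array([np.sqrt(2), np.sqrt(6)])
A = {p: Q.T @ perm_matrix(p) @ Q for p in S3}          # the 2-dim irreducible rep
Ainv = {p: np.linalg.inv(A[p]) for p in S3}

def avg(k, l, i, j):
    # (1/|G|) * sum_g a_{kl}(g^{-1}) a_{ij}(g)
    return sum(Ainv[p][k, l] * A[p][i, j] for p in S3) / len(S3)

print(np.isclose(avg(0, 0, 1, 1), 0))      # Claim 2: k != j gives 0
print(np.isclose(avg(0, 0, 0, 0), 1 / 2))  # Claim 3: diagonal terms give 1/n with n = 2
print(np.isclose(avg(1, 1, 1, 1), 1 / 2))
```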

Theorem 9.18. The characters χπ1, . . . , χπh form an orthonormal basis for C`(G).

Corollary 9.19. h = the number of conjugacy classes of G.

Proof. The characteristic functions of the conjugacy classes of G form a basis of C`(G), so h = dimC C`(G) = the number of conjugacy classes.

Corollary 9.20. If G is abelian, then every irreducible representation of G has degree 1.

Proof. Since G is abelian, h = |G|. As |G| = n1² + ... + n_{|G|}², we get ni = 1 for all i.
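For an abelian group, the h = |G| irreducible representations can be written down directly as degree-1 characters. A sketch for Z/5 (not from the notes): χj(k) = e^{2πijk/5}, and the code checks that these |G| characters are orthonormal.

```python
import numpy as np

n = 5
omega = np.exp(2j * np.pi / n)

def chi(j, k):
    # the j-th degree-1 character of Z/n evaluated at k
    return omega ** (j * k)

gram = np.array([[sum(chi(i, k) * np.conj(chi(j, k)) for k in range(n)) / n
                  for j in range(n)] for i in range(n)])

print(np.allclose(gram, np.eye(n)))   # True: the n characters are orthonormal
```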

Notation: irreducible representations π1, . . . , πh, irreducible characters χ1, . . . , χh, conjugacy classes C1, . . . , Ch, and fixed representatives gi ∈ Ci. For each group G we have the character table of G:

        C1   ...   Cj   ...   Ch
  χ1
  ...
  χi               χi(gj)
  ...
  χh

i.e., the entry in row χi and column Cj is χi(gj).

For A4, we get

        1    (12)(34)    (123)    (132)
  χ1    1        1          1        1
  χ2    1        1          ω        ω²
  χ3    1        1          ω²       ω
  χ4    3       −1          0        0

(ω a primitive cube root of unity)
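As a sanity check on this table (a sketch, assuming ω = e^{2πi/3} and the class sizes 1, 3, 4, 4): the rows should be orthonormal with respect to the class-weighted inner product, and the degrees should satisfy 1² + 1² + 1² + 3² = 12 = |A4|.

```python
import numpy as np

w = np.exp(2j * np.pi / 3)                   # omega, a primitive cube root of unity
class_sizes = np.array([1, 3, 4, 4])         # classes of 1, (12)(34), (123), (132)
order = class_sizes.sum()                    # |A4| = 12

table = np.array([[1, 1, 1, 1],
                  [1, 1, w, w**2],
                  [1, 1, w**2, w],
                  [3, -1, 0, 0]], dtype=complex)

# Row orthonormality: <chi_i, chi_j> = (1/|G|) sum_k |C_k| chi_i(g_k) conj(chi_j(g_k))
gram = (table * class_sizes) @ table.conj().T / order
print(np.allclose(gram, np.eye(4)))          # True

degrees = table[:, 0].real
print((degrees**2).sum() == order)           # True: 1 + 1 + 1 + 9 = 12
```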

The first three are lifted from the irreducible representations of C3, a homomorphic image of A4, but now we must determine π4. Consider the action of A4 on X = {1, 2, 3, 4} and let ϕ : A4 → GL(CX) be the corresponding permutation representation. Then χϕ(g) = fix(g), the number of fixed points of g, and one checks that χϕ = χ1 + χ4.

Note the following A4-invariant decomposition: CX = C vX ⊕ V0, where vX = e1 + e2 + e3 + e4 and V0 = {Σ_i ai ei : a1 + a2 + a3 + a4 = 0}. Thus χ4 is the character of ϕ|V0.

Remark: Suppose that G acts 2-transitively on X and π : G → GL(CX) is the corresponding permutation representation. Then χπ − 1 is an irreducible character.

Proof. As above, we have a G-invariant decomposition CX = C vX ⊕ V0, so χπ − 1 is the character of π|V0. Hence, it's enough to show that ⟨χπ, χπ⟩ = 2.

Claim: Σ_{g∈G} fix(g)² = 2|G|.

Assuming the claim, ⟨χπ, χπ⟩ = (1/|G|) Σ_{g∈G} χπ(g)\overline{χπ(g)} = (1/|G|) Σ_{g∈G} fix(g)² = 2.

To prove the claim, we count the number of elements of Ω = {⟨a, b, g⟩ ∈ X × X × G : g(a) = a, g(b) = b} in two ways.

(1) For each g ∈ G, the number of elements ⟨a, b, g⟩ ∈ Ω is fix(g)², thus |Ω| = Σ_{g∈G} fix(g)².

(2) For each a ∈ X, the number of elements ⟨a, b, g⟩ ∈ Ω is given by Σ_{b∈X} |G_{a,b}| = |G_a| + Σ_{b∈X\{a}} |G_{a,b}| = |G_a| + [G_a : G_{a,b}] |G_{a,b}| by 2-transitivity, and this is |G_a| + |G_a| = 2|G_a|. Thus, |Ω| = Σ_{a∈X} 2|G_a| = 2[G : G_a]|G_a| = 2|G|.

To explain why the columns of a character table are orthogonal, recall that (1/|G|) Σ_{g∈G} χi(g)\overline{χj(g)} = δij. Thus (1/|G|) Σ_{k=1}^{h} |Ck| χi(gk)\overline{χj(gk)} = δij, or Σ_{k=1}^{h} √(|Ck|/|G|) χi(gk) · √(|Ck|/|G|) \overline{χj(gk)} = δij.

In other words, the rows of the matrix with (i, k) entry √(|Ck|/|G|) χi(gk) form an orthonormal basis of C^h; in other words, the matrix is unitary. It follows that the columns are also orthonormal, thus Σ_{i=1}^{h} √(|Ck|/|G|) χi(gk) · √(|Cℓ|/|G|) \overline{χi(gℓ)} = δkℓ.

And so (√(|Ck||Cℓ|)/|G|) Σ_{i=1}^{h} χi(gk)\overline{χi(gℓ)} = δkℓ. Hence Σ_{i=1}^{h} χi(gk)\overline{χi(gℓ)} = (|G|/|Ck|) δkℓ.
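Both the counting claim (for the 2-transitive action of A4 on {1, 2, 3, 4}) and the resulting column orthogonality can be checked by brute force. The sketch below (not from the notes) assumes ω = e^{2πi/3} and the class data used earlier.

```python
import numpy as np
from itertools import permutations

def is_even(p):
    # parity via the number of inversions
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return inv % 2 == 0

A4 = [p for p in permutations(range(4)) if is_even(p)]
fix = lambda p: sum(1 for i in range(4) if p[i] == i)

print(sum(fix(p) ** 2 for p in A4) == 2 * len(A4))     # True: 24 = 2 * 12

# Second orthogonality relation for the A4 character table:
# sum_i chi_i(g_k) conj(chi_i(g_l)) = (|G| / |C_k|) * delta_{kl}
w = np.exp(2j * np.pi / 3)
class_sizes = np.array([1, 3, 4, 4])
table = np.array([[1, 1, 1, 1],
                  [1, 1, w, w**2],
                  [1, 1, w**2, w],
                  [3, -1, 0, 0]], dtype=complex)

cols = table.conj().T @ table          # entry (k, l) = sum_i conj(chi_i(g_k)) chi_i(g_l)
expected = np.diag(12 / class_sizes)
print(np.allclose(cols, expected))     # True
```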

In summary we have proved:

Theorem 9.21 (The Orthogonality Relations).

1. (1/|G|) Σ_{g∈G} χi(g) χj(g⁻¹) = δij.

2. Σ_{i=1}^{h} χi(gk) χi(gℓ⁻¹) = (|G|/|Ck|) δkℓ.

In order to prove Burnside's Theorem, we only require:

Theorem 9.22 (Folklore). For each 1 ≤ i, k ≤ h, the number (|Ck|/deg πi) χi(gk) is an algebraic integer.

Proof postponed. Before proving Burnside, we prove:

Theorem 9.23. For each 1 ≤ i ≤ h, the degree deg πi divides |G|.

Proof. By the first orthogonality relation, we have Σ_{k=1}^{h} |Ck| χi(gk) χi(gk⁻¹) = |G|. And so, Σ_{k=1}^{h} (|Ck|/deg πi) χi(gk) · χi(gk⁻¹) = |G|/deg πi, which (using Theorem 9.22) is a sum of products of algebraic integers, and so is an algebraic integer. Hence, |G|/deg πi is a rational algebraic integer, and thus an integer; i.e., deg πi divides |G|.

Burnside's Theorem is an "easy" consequence of the following slightly technical result, which explains most of the zeros in our character tables above.

Theorem 9.24. If (|Ck|, deg πi) = 1, then either

1. χi(gk) = 0, or

2. χi(gk) = (deg πi) ω for some root of unity ω, in which case πi(gk) = ωI is a scalar matrix.

Proof postponed. Now we can prove Burnside.

Proof. Suppose that G is a counterexample of minimal order. Then G must be a simple nonabelian group of order |G| = p^a q^b for some primes p ≠ q and a, b ≥ 1. Let Q be a Sylow q-subgroup of G and let 1 ≠ g ∈ Z(Q). Since Q ≤ CG(g) and |g^G| = [G : CG(g)], it follows that |g^G| = p^c for some c ≥ 1. Let g ∈ Ck.

Let π1 = 1, π2, . . . , πh be the irreducible representations of G. Consider some 2 ≤ ℓ ≤ h. If p | deg πℓ, then we can write deg πℓ = p dℓ. On the other hand, if p ∤ deg πℓ, then (|Ck|, deg πℓ) = 1. Since G is simple and nonabelian, Theorem 9.24 forces χℓ(g) = 0 for each such ℓ.

Appealing to the second orthogonality relation, we see that 0 = Σ_{ℓ=1}^{h} χℓ(g) χℓ(1) = 1 + Σ_{ℓ≥2} χℓ(g) deg πℓ = 1 + p Σ_{ℓ: p | deg πℓ} dℓ χℓ(g).

Thus, −1/p = Σ_{ℓ: p | deg πℓ} dℓ χℓ(g) is an algebraic integer, a contradiction.

We still have three things to prove that we have delayed. First we prove Theorem 9.24.

Proof. Suppose πi(gk) isn't a scalar matrix. Let ck = |Ck| and ni = deg πi. By Theorem 9.22, (ck/ni) χi(gk) is an algebraic integer. Let α = χi(gk)/ni.

Claim: α is an algebraic integer. Proof of claim: Since (ck, ni) = 1, there exist a, b ∈ Z such that a ck + b ni = 1. Hence α = a ck α + b ni α = a (ck/ni) χi(gk) + b χi(gk) is an algebraic integer.

Recall that χi(gk) = ω^{m1} + ... + ω^{m_{ni}} for a suitably chosen root of unity ω. Also, since πi(gk) isn't a scalar matrix, Lemma 9.8(3) implies that |χi(gk)| < deg πi and so |α| < 1. Let α = α1, . . . , αr be the distinct conjugates of α over Q. Then ni αs = ωs^{m1} + ... + ωs^{m_{ni}} for some root of unity ωs. It follows that each |αs| < 1.

Let E = Q(α) and consider N^E_Q(α) = α1 · · · αr. Since α is an algebraic integer, it follows that N^E_Q(α) ∈ Z. Since |N^E_Q(α)| < 1, it follows that N^E_Q(α) = 0. And hence, α = χi(gk)/deg πi = 0.

Next, we will prove Theorem 9.18. We will make use of the following easy observation:

Lemma 9.25. If χ is an irreducible character of G, then so is \overline{χ}.

Proof. Let g ↦ Mg be the matrix representation corresponding to χ. Then \overline{χ} corresponds to g ↦ (Mg⁻¹)^t. Since ⟨\overline{χ}, \overline{χ}⟩ = \overline{⟨χ, χ⟩} = 1, it follows that \overline{χ} is an irreducible character.

And now the proof of the theorem.

Proof. We suppose that f ∈ C`(G) satisfies ⟨χi, f⟩ = 0 for 1 ≤ i ≤ h. We must show that f = 0. Let ρ : G → GL(CG) be the regular representation and consider the map ρf = Σ_{g∈G} f(g) ρg.

Notice that ρf(e1) = Σ_{g∈G} f(g) ρg(e1) = Σ_{g∈G} f(g) eg. Hence, it is enough to show that ρf = 0.

As ρ = ⊕_π (deg π) π, summed over the irreducible representations, we know that ρf = ⊕_π (deg π) Σ_{g∈G} f(g) πg = ⊕_π (deg π) πf, and so it is enough to show that πf = Σ_{g∈G} f(g) πg = 0 for each irreducible π.

To see this, first note that if t ∈ G, then πt πf πt⁻¹ = Σ_{g∈G} f(g) πt πg πt⁻¹ = Σ_{g∈G} f(g) π_{tgt⁻¹} = πf, since f is a class function. Hence, by Schur's Lemma, it follows that πf = λI for some λ ∈ C. Finally, taking traces, we note that

(deg π) λ = Σ_{g∈G} f(g) χπ(g) = |G| ⟨f, \overline{χπ}⟩ = 0,

using Lemma 9.25. Thus, λ = 0.
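The theorem just proved says every class function is a linear combination of irreducible characters, with coefficients given by inner products. A sketch for S3 (assuming its standard character table, as before):

```python
import numpy as np

class_sizes = np.array([1, 3, 2])        # classes of S3: e, transpositions, 3-cycles
order = class_sizes.sum()

chars = np.array([[1, 1, 1],             # trivial
                  [1, -1, 1],            # sign
                  [2, 0, -1]], float)    # 2-dimensional

f = np.array([5.0, -1.0, 2.0])           # an arbitrary class function (values per class)

# coefficients <f, chi_i> with respect to the class-weighted inner product
coeffs = (chars * class_sizes) @ f / order
print(np.allclose(coeffs @ chars, f))    # True: f = sum_i <f, chi_i> chi_i
```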

And finally, we will now prove Theorem 9.22.

Proof. Identifying each basis vector eg ∈ CG with the corresponding element g ∈ G, we obtain a natural ring structure on CG. Furthermore, each representation θ : G → GL(V) extends linearly to a ring homomorphism θ : CG → End(V).

For each conjugacy class Ci of G, define ci = Σ_{g∈Ci} g ∈ CG. Then it is easily checked that c1, . . . , ch is a basis of Z(CG) (the center). Hence there exist ℓ^{ij}_k ∈ C such that ci cj = Σ_{k=1}^{h} ℓ^{ij}_k ck.

Since ci cj = (Σ_{a∈Ci} a)(Σ_{b∈Cj} b) = Σ_{a∈Ci, b∈Cj} ab, we see that each ℓ^{ij}_k ∈ Z.

Now let π : G → GL(V) be an irreducible representation. Then apply π to the definition of multiplication, and get

π(ci) π(cj) = Σ_{k=1}^{h} ℓ^{ij}_k π(ck).

By Schur's Lemma, since each ci is in the center of the group ring (so π(ci) commutes with every πg), it follows that π(ci) = λi I for some λi ∈ C. Taking traces, we see that (deg π) λi = Σ_{g∈Ci} χπ(g) = |Ci| χπ(gi). Hence, λi = (|Ci|/deg π) χπ(gi).

Hence, by substituting into the above formula, we obtain

(|Ci|/deg π) χπ(gi) · (|Cj|/deg π) χπ(gj) = Σ_{k=1}^{h} ℓ^{ij}_k (|Ck|/deg π) χπ(gk).

Hence, the ring R = Z[{(|Ci|/deg π) χπ(gi) : 1 ≤ i ≤ h}] is finitely generated as a Z-module, and hence is an integral extension of Z. In particular, each (|Ci|/deg π) χπ(gi) is an algebraic integer.
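The integrality of the structure constants ℓ^{ij}_k is easy to see computationally. A sketch for S3 (helper names are ad hoc): form each class sum, multiply two class sums in the group algebra, and observe that the product is an integer combination of class sums.

```python
from itertools import permutations
from collections import Counter

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

G = list(permutations(range(3)))                       # S3

classes = []
seen = set()
for g in G:
    if g not in seen:
        cl = frozenset(compose(compose(t, g), inverse(t)) for t in G)
        classes.append(sorted(cl))
        seen |= cl

def class_sum_product(Ci, Cj):
    # expand c_i * c_j in the basis {g} of the group algebra
    return Counter(compose(a, b) for a in Ci for b in Cj)

for i, Ci in enumerate(classes):
    for j, Cj in enumerate(classes):
        prod = class_sum_product(Ci, Cj)
        coeffs = [{prod[g] for g in Ck} for Ck in classes]
        assert all(len(s) == 1 for s in coeffs)        # constant on each class
        print(i, j, [s.pop() for s in coeffs])         # the integers l^{ij}_k
```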
