Noncommutative Algebra
Andrew Kobin
Fall 2015

Contents

1 Structure Theory
   1.1 Motivation
   1.2 Matrices over Rings
   1.3 Basic Structure of Noncommutative Rings
   1.4 Semisimple Rings and Modules
   1.5 Primitive Rings and Ideals
   1.6 Jacobson Density Theorem
   1.7 Structure of Artinian and Noetherian Rings
   1.8 The Nil Radical
   1.9 Tensor Products
2 Representation Theory
   2.1 Group Representations
   2.2 G-Modules
   2.3 Splitting Fields
   2.4 Character Theory
   2.5 Applications of Character Theory
   2.6 Induction and Restriction
3 Nonassociative Algebras
   3.1 Linear Nonassociative Algebras
   3.2 Lie Algebras
   3.3 Representations of Lie Algebras
4 Additional Topics
   4.1 Projective Modules
   4.2 A Brief Introduction to Group Cohomology
   4.3 Growth of Groups

1 Structure Theory

These notes are taken from Algebra III, a course on noncommutative algebra taught by Dr. Louis Rowen in the fall of 2015 at the University of Virginia. The companion text for the course is Rowen's Graduate Algebra: Noncommutative View. The main topics covered are:

- Matrix rings and the representation theory of rings
- Simple, semisimple, primitive and prime rings and their ideals
- Artinian and noetherian rings
- Group rings, group algebras and group representations

1.1 Motivation

Cayley's Theorem says that every finite group G is (isomorphic to) a subgroup of a symmetric group Sn. This is a natural starting place for the study of the representation theory of finite groups. Sn naturally embeds into GLn(F) via the (left) regular representation

   Sn → GLn(F)
   π ↦ (aij),   where aij = 1 if j = π(i) and aij = 0 otherwise.

For example, the permutation π = (1 2) ∈ S2 maps to the matrix

   (0 1)
   (1 0)

This is a very natural way to think of permutations. Moreover, since any finite group embeds into Sn, composition gives us the (left) regular representation of G: G ↪ Sn → GLn(F). There are of course other representations of a group that are of interest.
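To make the embedding concrete, here is a short computational sketch (Python with NumPy; this is an illustration added alongside the notes, not part of the course material, and the helper name perm_matrix is just a choice for this example). It builds the matrix (aij) with aij = 1 exactly when j = π(i), reproduces the matrix of (1 2) ∈ S2 above, and checks multiplicativity on a pair of elements of S3. With this convention, "apply σ, then τ" corresponds to the product perm_matrix(σ) @ perm_matrix(τ), i.e. the permutations act on the right.

    import numpy as np

    def perm_matrix(pi):
        """Matrix (a_ij) with a_ij = 1 exactly when j = pi(i).

        The permutation pi is given as a tuple of images (1-indexed),
        e.g. (2, 1, 3) sends 1 -> 2, 2 -> 1, 3 -> 3."""
        n = len(pi)
        A = np.zeros((n, n), dtype=int)
        for i in range(n):
            A[i, pi[i] - 1] = 1
        return A

    # The transposition (1 2) in S_2 gives the matrix [[0, 1], [1, 0]].
    print(perm_matrix((2, 1)))

    # Multiplicativity check in S_3.
    sigma = (2, 3, 1)   # the 3-cycle (1 2 3), as a tuple of images
    tau = (2, 1, 3)     # the transposition (1 2)
    comp = tuple(tau[sigma[i] - 1] for i in range(3))   # apply sigma first, then tau
    assert (perm_matrix(comp) == perm_matrix(sigma) @ perm_matrix(tau)).all()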
A primary topic of interest across algebra is, broadly, how to classify objects. Historically, the approach has been to define an appropriate notion of "simple" object (simple group, simple ring, simple module, simple algebra, etc.) and classify all of them. Then one can study more complicated objects by viewing how they break down into simple objects. In general, the theories of simple groups and rings are difficult to study, and in some ways incomplete, but simple modules have a nice classification which we will study in these notes.

Proposition 1.1.1. A module M over a ring R is simple if it satisfies any of the following equivalent conditions:

(1) (Definition) If L ⊆ M is a submodule then either L = 0 or L = M.
(2) Ra = M for all nonzero a ∈ M.
(3) For all a, b ∈ M with a ≠ 0, there is some r ∈ R satisfying ra = b.
(4) M is isomorphic to R/L for L a maximal left ideal of R.

Definition. Given a commutative ring C, a C-algebra is a ring R which is also a C-module and satisfies the property that (c r1) r2 = r1 (c r2) = c (r1 r2) for all c ∈ C and r1, r2 ∈ R.

One may think of an algebra as a ring that is also a module. In the easiest case, if F is a field and R is an F-algebra, then R is in particular a vector space over F, of dimension dimF R.

Definition. An abstract lattice is a set L with binary operations ∨ (sup) and ∧ (inf) that satisfy the following axioms for all a, b, c ∈ L:

(1) (a ∨ b) ∨ c = a ∨ (b ∨ c) and (a ∧ b) ∧ c = a ∧ (b ∧ c).
(2) (a ∨ b) ∧ a = a and (a ∧ b) ∨ a = a.
(3) There exists an element 0 ∈ L satisfying 0 ∨ a = a and 0 ∧ a = 0.
(4) There exists an element 1 ∈ L satisfying 1 ∨ a = 1 and 1 ∧ a = a.

Notice that one could switch ∨ and ∧ (as well as 0 and 1) in the definition above and have the same axiomatic structure; this is called duality. For an R-module M, define the lattice L(M) = {R-submodules N ⊆ M}. For this lattice, sup is N1 + N2 and inf is N1 ∩ N2. The trick here is that the dual of L(M), while still a lattice, is not immediately recognizable as something of the form L(A) for an R-module A. Thus L(M) is not dualizable in general. However, L(M) does have the modularity property:

Proposition 1.1.2. Suppose K, L and N are all submodules of M, and suppose K ⊆ N. Then N ∩ (K + L) = (L ∩ N) + K.

Proof. (⊇) It is clear that L ∩ N ⊆ N and L ∩ N ⊆ K + L. On the other hand, K ⊆ N by hypothesis and K ⊆ K + L, so (L ∩ N) + K ⊆ N ∩ (K + L).

(⊆) For the reverse containment, suppose a ∈ N ∩ (K + L), so that a = b + c for some b ∈ K and c ∈ L. Then c = a − b, but a ∈ N and b ∈ K ⊆ N, so we see that c ∈ N. Hence c ∈ L ∩ N, so a = b + c ∈ K + (L ∩ N).

There is a dual property in this case: if K ⊇ N then N + (K ∩ L) = (L + N) ∩ K. However, this isn't particularly enlightening, since one can simply rearrange the labels for the modules and obtain this statement as a consequence of Proposition 1.1.2. In any case, the axiom of modularity is self-dual. Another important example of dual concepts in ring and module theory is the pairing of noetherian and artinian.

Proposition 1.1.3. An R-module M is left noetherian if it satisfies one of the following equivalent conditions:

(1) (Definition) Every left submodule of M is finitely generated.
(2) (ACC) For every chain M1 ⊆ M2 ⊆ · · · of left submodules of M, there exists an n such that Mk = Mn for all k ≥ n. This is called the ascending chain condition.
(3) Every nonempty collection of submodules of M contains a maximal member.

Proposition 1.1.4. An R-module M is left artinian if it satisfies one of the following equivalent conditions:

(1) (DCC) For every chain M1 ⊇ M2 ⊇ · · · of submodules of M, there exists an n such that Mk = Mn for all k ≥ n. This is called the descending chain condition.
(2) Every nonempty collection of submodules of M contains a minimal member.

Notice that there is no nice condition like the "finitely generated" condition for artinian modules; the descending chain condition is usually taken as the definition. Since noetherian and artinian modules are dual in their submodule lattices, we can prove theorems for one and immediately obtain theorems for the other.

Theorem 1.1.5. Suppose K ⊆ M is a submodule. Then M is noetherian if and only if K and M/K are both noetherian.

Proof. (Sketch) First, we claim that if N1 ⊆ N2 are submodules of M with N1 ∩ K = N2 ∩ K and N1 + K = N2 + K, then N1 = N2. This claim directly implies the result.

There is a dual theorem for artinian modules.

Definition. A ring R is left noetherian if R is noetherian as a left module over itself. Similarly, R is left artinian if R is artinian as a left module over itself. One defines right noetherian and right artinian rings in a similar way.

Although there is no basic theorem relating noetherian and artinian modules, there is a theorem for rings, which we will prove in Section 1.7:

Theorem 1.1.6 (Hopkins-Levitzki). Every left artinian ring is left noetherian.

1.2 Matrices over Rings

Given a ring R, we can form the set M(m, n, R) of m × n matrices with entries in R. M(m, n, R) is always an R-module. The most interesting case is Mn(R), the set of n × n matrices over R. Mn(R) is also a ring, called a matrix ring. Define eij to be the matrix with 1 in the (i, j) position and 0 elsewhere. This is called a matrix unit. Matrices can then be written

   (aij) = Σ_{i,j=1}^{n} aij eij.

These are especially nice for computations. For example, the identity matrix can be written 1 = Σ_{i=1}^{n} eii.

Definition. A set of matrix units of a ring R is any set {eij | 1 ≤ i, j ≤ n} that satisfies

(1) eij ekℓ = δjk eiℓ for all i, j, k, ℓ.
(2) 1 = Σ_{i=1}^{n} eii.

Remark. The eii are special because each one is an idempotent: eii^2 = eii.
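As a quick computational sanity check of these two axioms (again a Python/NumPy sketch added purely for illustration; the helper name e(i, j) is just a choice made here), one can realize the standard matrix units of M3(Z) and verify the relations eij ekℓ = δjk eiℓ and Σ eii = 1 numerically:

    import numpy as np

    n = 3

    def e(i, j):
        """Standard matrix unit e_ij in M_n: 1 in position (i, j), 0 elsewhere (1-indexed)."""
        E = np.zeros((n, n), dtype=int)
        E[i - 1, j - 1] = 1
        return E

    # Axiom (1): e_ij e_kl = delta_jk e_il for all indices.
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            for k in range(1, n + 1):
                for l in range(1, n + 1):
                    expected = e(i, l) if j == k else np.zeros((n, n), dtype=int)
                    assert (e(i, j) @ e(k, l) == expected).all()

    # Axiom (2): the diagonal matrix units sum to the identity.
    assert (sum(e(i, i) for i in range(1, n + 1)) == np.eye(n, dtype=int)).all()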
Proposition 1.2.1. A ring S has a set of n × n matrix units if and only if S ≅ Mn(R) for some ring R.

Proof. (⇐) is clear.

(⇒) Suppose {eij} is a set of matrix units for S. Define R = e11 S e11; the fact that this is a ring follows from e11 being an idempotent. We want to show that S ≅ Mn(R). Define the map

   φ : S → Mn(R)
   s ↦ (rij),   where rij = e1i s ej1.

To show φ is a homomorphism, take s, s' ∈ S and consider the (i, j) entry of φ(s)φ(s'):

   Σ_{k=1}^{n} (e1i s ek1)(e1k s' ej1) = e1i s ( Σ_{k=1}^{n} ek1 e1k s' ej1 )
                                       = e1i s ( Σ_{k=1}^{n} ekk s' ej1 )
                                       = e1i s s' ej1,   since Σ_{k=1}^{n} ekk = 1,

which is exactly the (i, j) entry of φ(ss'). Hence φ(s)φ(s') = φ(ss'). Next, take s ∈ ker φ, so that φ(s) = (rij) = 0.
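Pausing the proof for a moment, here is a small NumPy sketch of this construction (added for illustration only; the names E, phi and multiply_in_MnR are hypothetical helpers, and the choice S = M4(Z) is just a convenient test case). The 2 × 2 block matrix units of M4(Z) play the role of {eij} with n = 2, so that R = e11 S e11 is a copy of M2(Z) sitting in the upper-left block, and the map φ(s) = (e1i s ej1) is checked to be multiplicative on random elements:

    import numpy as np

    rng = np.random.default_rng(0)

    n = 2          # number of matrix units in each row/column
    m = 2          # R = e_11 S e_11 will be (a copy of) M_m(Z)
    N = n * m      # S = M_N(Z)

    def E(i, j):
        """Block matrix unit e_ij in S: the m x m identity placed in block position (i, j)."""
        B = np.zeros((N, N), dtype=int)
        B[(i - 1) * m:i * m, (j - 1) * m:j * m] = np.eye(m, dtype=int)
        return B

    def phi(s):
        """phi(s) = (r_ij) with r_ij = e_1i s e_j1; each entry lies in R = e_11 S e_11."""
        return [[E(1, i) @ s @ E(j, 1) for j in range(1, n + 1)] for i in range(1, n + 1)]

    def multiply_in_MnR(A, B):
        """Multiply two n x n matrices whose entries are elements of R (as N x N arrays)."""
        return [[sum(A[i][k] @ B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    s = rng.integers(-3, 4, size=(N, N))
    t = rng.integers(-3, 4, size=(N, N))

    # phi is multiplicative: phi(s) phi(t) = phi(s t), entry by entry.
    lhs = multiply_in_MnR(phi(s), phi(t))
    rhs = phi(s @ t)
    assert all((lhs[i][j] == rhs[i][j]).all() for i in range(n) for j in range(n))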