
LTCC Representation theory

Matt Fayers, based on notes by Markus Linckelmann.
Thanks to Diego Millan Berdasco for corrections.

Contents

1 Basics
 1.1 Some linear algebra
 1.2 Rings
 1.3 Algebras
 1.4 Modules
 1.5 Module homomorphisms
 1.6 Direct sums of modules
 1.7 Decomposable modules, simple modules and maximal submodules
 1.8 Representations of algebras and the structure homomorphism
2 Semisimple modules and semisimple algebras
3 Idempotents
4 The Jacobson radical
5 Local algebras and indecomposable modules
6 Projective modules
7 Representation theory of finite groups

1 Basics

1.1 Some linear algebra

In this course I’ll assume that you’re pretty familiar with basic linear algebra. But we’ll begin by revising a few simple concepts.

Throughout these notes we fix a field F, and all vector spaces will be over F. dim will always mean dimension over F. Some basic notation: for any vector space V, we write id_V (or just id, if it’s clear what V is) for the identity function from V to V. Also, we’ll abuse notation by writing 0 instead of {0} for the zero vector space.

Definition. Suppose V and W are vector spaces.

1. The direct sum V ⊕ W is the set of all symbols v ⊕ w for v ∈ V, w ∈ W, with addition and scalar multiplication defined by

  (v ⊕ w) + (v′ ⊕ w′) = (v + v′) ⊕ (w + w′),    x(v ⊕ w) = (xv) ⊕ (xw)

for x ∈ F, v, v′ ∈ V, w, w′ ∈ W.

2. The tensor product V ⊗ W is the set of all formal sums of symbols v ⊗ w with v ∈ V and w ∈ W, modulo the relations

  (xv) ⊗ w = v ⊗ (xw),    (v + v′) ⊗ w = v ⊗ w + v′ ⊗ w,    v ⊗ (w + w′) = v ⊗ w + v ⊗ w′

for x ∈ F, v, v′ ∈ V, w, w′ ∈ W. We make V ⊗ W into a vector space via x(v ⊗ w) = (xv) ⊗ w for x ∈ F, v ∈ V, w ∈ W.

Note that if B is a basis for V and C is a basis for W, then { b ⊕ 0 | b ∈ B } ∪ { 0 ⊕ c | c ∈ C } is a basis for V ⊕ W, and { b ⊗ c | b ∈ B, c ∈ C } is a basis for V ⊗ W. Hence if V and W are finite-dimensional then

  dim(V ⊕ W) = dim(V) + dim(W),    dim(V ⊗ W) = dim(V) dim(W).

Definition. Suppose V is a vector space and W ≤ V. The quotient space V/W is the set of all cosets

  v + W = { v + w | w ∈ W }

with vector space structure given by

  (u + W) + (v + W) = (u + v) + W,    x(v + W) = (xv) + W

for u, v ∈ V and x ∈ F.

Suppose B is a basis for V and C is a basis for W with C ⊆ B. Then { b + W | b ∈ B \ C } is a basis for V/W. Hence if V is finite-dimensional, then dim(V/W) = dim(V) − dim(W).

Note that we can have u + W = v + W even when u ≠ v, so we need to be careful. A precise (and useful) statement is the following.

Coset Lemma. Suppose V is a vector space, u, v ∈ V and W ≤ V. Then u + W = v + W if and only if u − v ∈ W.

1.2 Rings

Now let’s revise some basic definitions for rings.

Definition. A ring is a set R with two special elements 0 and 1 and two binary operations +, × such that:

• R is an abelian group under +, with zero element 0;
• × is associative, i.e. r × (s × t) = (r × s) × t for all r, s, t ∈ R;
• 1 × r = r × 1 = r for all r ∈ R;
• (the distributive law) r × (s + t) = (r × s) + (r × t) and (r + s) × t = (r × t) + (s × t) for all r, s, t ∈ R.

We’ll use standard conventions of notation, writing rs instead of r × s, and writing expressions like rs + t without brackets on the understanding that we do the multiplication first. We also adopt the following convention: if r ∈ R and X ⊆ R, then we write rX to mean { rx | x ∈ X }. Similarly we define Xr, rXs, XrY etc.

We refer to the element 1 as the identity element of R, and to 0 as the zero element.
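For example (a standard illustration, included here as an added example): the set M_2(F) of 2 × 2 matrices over F, with the usual matrix addition and multiplication, is a ring; its identity element is the identity matrix and its zero element is the zero matrix. Note that multiplication in this ring is not commutative, as the following quick check shows.

% Illustrative check (added example; requires amsmath for pmatrix):
% two matrices in M_2(F) that do not commute.
\[
\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}
=
\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},
\qquad\text{whereas}\qquad
\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}
\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
=
\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.
\]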
Note that (unlike some people) we do not require the elements 0 and 1 to be distinct. The only effect of this is to permit the trivial ring {0}, which has addition and multiplication defined in the only possible way. Very occasionally we will prove results that only hold for non-trivial rings, but in general the trivial ring will be allowed.

We’ll need the following definitions in order to define an algebra.

Definition.

1. If R is a ring, the centre of R is the set

  Z(R) = { a ∈ R | ab = ba for all b ∈ R }.

2. If R and S are rings, a homomorphism from R to S is a function f : R → S such that f(ab) = f(a)f(b), f(a + b) = f(a) + f(b) and f(1_R) = 1_S for all a, b ∈ R.

Note that in the definition of a homomorphism, the condition f(1_R) = 1_S rules out, for example, the trivial map which sends a to 0 for all a ∈ R (unless S is the trivial ring).

1.3 Algebras

Now we come to algebras.

Definition. An F-algebra is a ring A together with a specified ring homomorphism from F to Z(A).

Almost always in these notes we shall be considering algebras over F, so we will just say ‘algebra’ to mean ‘F-algebra’.

Remarks.

1. Some people’s definition of an algebra A requires that the homomorphism from F to Z(A) be injective. The only difference between this and our definition is that it rules out the possibility that A is the trivial ring. Indeed, if A is a ring and f : F → A is a non-injective homomorphism, take 0 ≠ x ∈ F with f(x) = 0; then 1_A = f(1_F) = f(xx⁻¹) = f(x)f(x⁻¹) = 0·f(x⁻¹) = 0, so that A is the trivial ring. We prefer to allow the trivial ring; for example, if V is a non-zero F-vector space, then the set End_F(V) of linear maps from V to V is an F-algebra; if we want to extend this to the case V = 0, we need to allow the trivial ring as an algebra.

2. In the cases where A is non-trivial (so the corresponding ring homomorphism is injective), it is customary to identify F with its image under the homomorphism, and thereby regard F as a subring of Z(A).

3. An algebra A is naturally a vector space over F; if A is trivial this is obvious, while if A is non-trivial and we regard F as a subring of A as above, then the scalar multiplication is just given by the ring multiplication in A. This leads to an alternative definition of an algebra: it is an F-vector space equipped with a binary operation × which satisfies the associative and identity laws and is bilinear.

Definition. Suppose A and B are algebras. A homomorphism from A to B is an F-linear map which is also a ring homomorphism.

Note that not every ring homomorphism between two algebras is an algebra homomorphism. For example, take F = A = B = C; then complex conjugation is a ring homomorphism, but is not C-linear.

We use standard terminology associated with homomorphisms: an isomorphism is a homomorphism which is also a bijection (in which case its inverse is also an isomorphism), and two algebras are isomorphic if there is at least one isomorphism between them. An automorphism of an algebra A is an isomorphism from A to A. The kernel of a homomorphism is the set of elements that map to 0. As is always the case with linear maps, a homomorphism is injective if and only if its kernel is 0.

Definition. Suppose A is an algebra.

• A subalgebra of A is an F-subspace which is also a subring (i.e. which is closed under multiplication and contains 1).
• A left ideal of A is an F-subspace I of A such that ai ∈ I for all a ∈ A, i ∈ I.
• A right ideal of A is an F-subspace I of A such that ia ∈ I for all a ∈ A, i ∈ I.
• An ideal of A is a left ideal which is also a right ideal.

We write I ⊴_L A, I ⊴_R A or I ⊴ A to mean that I is a left ideal, a right ideal or an ideal of A, respectively.
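To see why we distinguish left ideals from right ideals, here is a small worked check (an added example, using the matrix ring M_2(F) from earlier). Let I be the set of matrices in M_2(F) whose second column is zero. Then I is a left ideal but not a right ideal:

% Illustrative check (added example; requires amsmath for pmatrix).
% Left multiplication by any matrix stays inside I:
\[
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
\begin{pmatrix} x & 0 \\ y & 0 \end{pmatrix}
=
\begin{pmatrix} ax + by & 0 \\ cx + dy & 0 \end{pmatrix} \in I,
\]
% but right multiplication can leave I:
\[
\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}
\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
=
\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \notin I.
\]

By symmetry, the matrices whose second row is zero form a right ideal of M_2(F) which is not a left ideal.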
Some key examples: if A is a non-trivial algebra, then F is a subalgebra of A. If X is any subset of A, let AX = { ax | a ∈ A, x ∈ X }. Then the subspace of A spanned by AX is a left ideal of A. Similarly the subspace spanned by XA is a right ideal, and the subspace spanned by AXA = { axb | a, b ∈ A, x ∈ X } is an ideal.
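Continuing the illustration above (again an added example): take A = M_2(F) and X = {e_{11}}, where e_{11} denotes the matrix with 1 in the (1,1) entry and 0 elsewhere (notation introduced just for this example). Then the subspace spanned by AX is precisely the left ideal of matrices with zero second column, while the subspace spanned by AXA is all of A.

% Illustrative computation (added example; requires amsmath for pmatrix).
\[
A e_{11}
= \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix}
  \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}
  : a, b, c, d \in F \right\}
= \left\{ \begin{pmatrix} a & 0 \\ c & 0 \end{pmatrix}
  : a, c \in F \right\},
\]
% and the span of A e_{11} A is all of M_2(F), since for each i, j in {1, 2}
% the product e_{i1} e_{11} e_{1j} equals the matrix unit e_{ij}.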