DELIGNE’S THEOREM ON TENSOR CATEGORIES

DENE LEPINE

Abstract. We look at and explain the concepts needed to understand the statement of Deligne’s Theorem on Tensor Categories, which states that a regular tensor category is equivalent to the category of representations of some supergroup. Throughout the paper we assume a background in only basic undergraduate mathematics.

Contents
1. Introduction
Acknowledgments
2. A Brief Introduction to Supermath
3. Superalgebras
4. Super Hopf Algebras
5. Representation Theory
6. Tensor Categories
7. H-Categories
8. Supergroups
References

1. Introduction

Supermathematics is the study of algebraic structures (vector spaces, algebras, modules, groups, etc.) that have a decomposition into “even” and “odd” pieces. Besides the purely mathematical reasons to study supermath, one of its main motivations is the theory of supersymmetry. Supersymmetry is how supermath acquired such a “super” name and is where supermath offers much of its application and derives its motivation. Supersymmetry is a field within particle physics which studies the nature of fermions and bosons. An overly simplified example is the following: if we consider two particles, either fermions or bosons, their spins can be represented as elements of a super vector space. When we take these two particles and allow them to interact, their resulting spin will be exactly the tensor product of the spins of the individual particles. This is just a simple example, but supersymmetry works with complex theorems and results using supermath.

Deligne’s Theorem on Tensor Categories was first published in 2002 and was a result drawn from Deligne’s past work. This theorem ties together multiple fields of math and gives an equivalence that makes tensor categories easier to work with. The goal of this paper is actually quite simple: we want to be able to clearly understand and state Deligne’s Theorem on Tensor Categories. To do so, we will start with simple concepts in supermath and representation theory and then build these up with clear examples. Then we will bring together the concepts and examples to fully understand Deligne’s Theorem.

This paper is aimed at readers with at least a background of undergraduate-level algebra. Specifically, we assume the reader has taken some courses on basic group theory and has seen many of the linear algebra concepts taught at the undergraduate level. Although there are some concepts looked at in this paper that may be beyond the understanding of an undergraduate student, our hope is to properly explain and build up from known concepts so that our reader can conceptualize Deligne’s Theorem.

Acknowledgments. This work was completed as part of the Co-op Work-Study Program at the University of Ottawa and was partially funded by the N.S.E.R.C. Discovery Grant of Professor Alistair Savage. Furthermore, I would like to thank Professor Alistair Savage for his understanding and guidance throughout this project.

2. A Brief Introduction to Supermath

In this section, we look at the “super” analogs of some of the mathematical structures that we already know and have studied to a certain extent. While doing so, we will develop the terms and examples that are commonplace when working in supermath. Both the terms and the examples will be used throughout this paper as we build more advanced mathematical structures. The goal of this section is to give the reader a strong foundation in supermath so that we may build fluidly to our final theorem.

Definition 2.1 (Super Vector Space). A super vector space (SVS) or Z2-graded vector space is a vector space, V, over a field, K, with a decomposition V = V0 ⊕ V1. When working with super vector spaces, we will view the subscripts of the subspaces as 0, 1 ∈ Z2.

Definition 2.2 (Homogeneous). Let V be a super vector space. An element of V is said to be homogeneous if it is an element of either V0 or V1.

Definition 2.3 (Parity). The parity of a nonzero homogeneous element v ∈ V, denoted by p(v) = |v|, is defined as

p(v) = |v| = 0 if v ∈ V0, and p(v) = |v| = 1 if v ∈ V1.

We call the elements of V0 even and the elements of V1 odd.

Definition 2.4 (Dimension of a SVS). Given a finite-dimensional super vector space, V = V0 ⊕ V1, where dim V0 = p and dim V1 = q, we say that V is of dimension p|q.

Definition 2.5 (Linear Transformation). Let V and W be vector spaces over K. A linear transformation from V to W is a map, f: V → W, such that for all x, y ∈ V and k ∈ K we have f(x) + f(y) = f(x + y) and f(kx) = kf(x). We let L(V, W) denote the set of all linear transformations V → W. Notice that L(V, W) is also a vector space under pointwise addition and scalar multiplication.

Definition 2.6 (Dual Space). Let V be a vector space. We define the dual space, denoted V∗, to be the vector space L(V, K).

Remark 2.7. Notice that if V is of dimension p with basis B = {b1, . . . , bp}, then V∗ is also of dimension p with basis B∨ = {bi∨ | bi ∈ B}, where bi∨(bj) = 1 if bi = bj and bi∨(bj) = 0 otherwise. Furthermore, if V is a SVS then V∗ is also a SVS, where p(bi∨) = p(bi).

Definition 2.8 (Grade-Preserving/Grade-Reversing). A linear transformation, f: V → W, between super vector spaces is said to be grade-preserving or even if f(Vi) ⊆ Wi for i = 0 and 1. It is called grade-reversing or odd if f(Vi) ⊆ W1−i.

Definition 2.9 (Transpose). Given a linear transformation f: V → W, the transpose map is f∗: W∗ → V∗ defined for all g ∈ W∗ via f∗(g) = g ◦ f. Notice that while working with SVS, p(f∗) = p(f).

Definition 2.10 (Isomorphic). Two SVS, V and W, are isomorphic if there exists a grade-preserving bijective linear transformation between V and W. Such a map is called an isomorphism. If there exists such a transformation, then we write V ≅ W.

Example 2.11. Consider the SVS K^{p|q} = K^p ⊕ K^q, for some field K, where (K^{p|q})0 = K^p and (K^{p|q})1 = K^q. Recall that any vector space V of dimension p is isomorphic to K^p. Similarly, if V is a super vector space with dim V = p|q then V ≅ K^{p|q}.

Example 2.12. Let V and W be SVS. Notice L(V, W) itself is a super vector space with L(V, W) = L(V, W)0 ⊕ L(V, W)1 defined by:

L(V, W)0 = {f: V → W | f grade-preserving},
L(V, W)1 = {f: V → W | f grade-reversing}.

Example 2.13. Another example to consider is the vector space of m × n matrices, denoted Mm×n(K). We turn these matrices into supermatrices by partitioning both the rows and the columns into two. For example, an m × n matrix A = (ai,j) can be turned into a supermatrix by splitting its rows after row s and its columns after column r:

A = [ X00  X01 ]
    [ X10  X11 ],

where X00 is the s × r block of entries ai,j with 1 ≤ i ≤ s and 1 ≤ j ≤ r, X01 is the s × (n − r) block to its right, X10 is the (m − s) × r block below it, and X11 is the remaining (m − s) × (n − r) block. We say that these matrices are of dimension s|p × r|q where s + p = m and r + q = n, denoted Ms|p×r|q(K). A matrix as above is defined to be even if X01 = X10 = 0, and odd if X00 = X11 = 0. Thus we have

Ms|p×r|q(K) = Ms|p×r|q(K)0 ⊕ Ms|p×r|q(K)1.

Definition 2.14 (Bilinear Map). A bilinear map, B: V × W → U, is a function such that for all v, v1, v2 ∈ V, w, w1, w2 ∈ W and k ∈ K, we have

• B(v, w1 + w2) = B(v, w1) + B(v, w2),

• B(v1 + v2, w) = B(v1, w) + B(v2, w),

• B(kv, w) = kB(v, w) = B(v, kw).

Definition 2.15 (Free Vector Space). Given a set, S, the free vector space generated by S is

F(S) = {φ: S → K | φ(s) = 0 for all except finitely many s ∈ S}.

With the usual pointwise addition and scalar multiplication of functions this is, in fact, a vector space. We will write Σ_{s∈S} φ(s)s instead of φ.
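As a concrete, entirely optional illustration of Definition 2.15 (not from the paper), one can encode an element φ ∈ F(S) on a computer as a finitely supported dictionary s ↦ φ(s); the pointwise operations then look as follows. Here K is taken, purely as an assumption for the sketch, to be numbers represented in Python.

```python
# Hypothetical sketch: an element of the free vector space F(S) stored as a
# finitely supported map s -> phi(s); only finitely many values are nonzero,
# matching Definition 2.15.

def add(phi, psi):
    """Pointwise addition of two elements of F(S)."""
    out = dict(phi)
    for s, c in psi.items():
        out[s] = out.get(s, 0) + c
    return {s: c for s, c in out.items() if c != 0}

def scale(k, phi):
    """Scalar multiplication k * phi."""
    return {s: k * c for s, c in phi.items() if k * c != 0}

# The formal sums 2*'a' + 3*'b' and -3*'b' + 'c' in F({'a','b','c',...}):
phi = {'a': 2, 'b': 3}
psi = {'b': -3, 'c': 1}
print(add(phi, psi))      # {'a': 2, 'c': 1}
print(scale(2, phi))      # {'a': 4, 'b': 6}
```

The formal sum Σ_{s∈S} φ(s)s of the definition is exactly the dictionary: its keys are the elements s with nonzero coefficient.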

Definition 2.16 (Tensor Product). Let V and W be vector spaces over K. We define Z ⊆ F(V × W) to be the subspace spanned by all elements of the form

(v1 + v2, w) − (v1, w) − (v2, w),
(v, w1 + w2) − (v, w1) − (v, w2),
k(v, w) − (kv, w),
k(v, w) − (v, kw),

where v, v1, v2 ∈ V, w, w1, w2 ∈ W, and k ∈ K.

The tensor product of V and W is the quotient space F (V × W )/Z, denoted by V ⊗ W . We write v ⊗ w for (v, w) + Z. The tensor product has the following properties for all v, v1, v2 ∈ V, w, w1, w2 ∈ W and k ∈ K:

(v1 + v2) ⊗ w = (v1 ⊗ w) + (v2 ⊗ w),

v ⊗ (w1 + w2) = (v ⊗ w1) + (v ⊗ w2), and kv ⊗ w = v ⊗ kw.

Lastly, and most importantly, V ⊗ W has a universal mapping property. This property states that for all vector spaces U and all bilinear maps B: V × W → U there exists a unique linear map ℓ: V ⊗ W → U such that for all v ∈ V and w ∈ W, B(v, w) = ℓ(v ⊗ w).

Remark 2.17 (Dimension of V ⊗ W ). Let V and W be finite-dimensional vector spaces with bases b = {b1, . . . , bp} and B = {B1,...,Bq} respectively. The tensor product V ⊗ W has basis b ⊗ B = {bi ⊗ Bj | bi ∈ b and Bj ∈ B}. Thus we know that if dim(V ) = p and dim(W ) = q then dim(V ⊗ W ) = pq. For more on this refer to [1, §13].
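To see Remark 2.17 numerically, here is a small sketch (not from the paper, assuming K = R and using the Kronecker product of coordinate vectors to realize bi ⊗ Bj):

```python
# Sketch: the pq basis vectors b_i (x) B_j of V (x) W realized as Kronecker
# products of coordinate vectors, so dim(V (x) W) = dim(V) * dim(W).
import numpy as np

p, q = 3, 2
V_basis = np.eye(p)          # standard basis of K^p (rows)
W_basis = np.eye(q)          # standard basis of K^q (rows)

tensor_basis = [np.kron(v, w) for v in V_basis for w in W_basis]
M = np.array(tensor_basis)
print(len(tensor_basis), np.linalg.matrix_rank(M))   # 6 6 -> pq independent vectors
```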

Definition 2.18 (Tensor Product of Super Vector Spaces). Given SVS V = V0 ⊕ V1 and W = W0 ⊕ W1, the resulting V ⊗ W is naturally a super vector space with the Z2-grading

(V ⊗ W )0 = (V0 ⊗ W0) ⊕ (V1 ⊗ W1) and (V ⊗ W )1 = (V0 ⊗ W1) ⊕ (V1 ⊗ W0).
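In terms of dimensions, Definition 2.18 together with Remark 2.17 says that if dim V = p0|p1 and dim W = q0|q1 then dim(V ⊗ W) = (p0q0 + p1q1)|(p0q1 + p1q0). A one-line bookkeeping sketch (hypothetical helper, not from the paper):

```python
def super_tensor_dim(p, q):
    """Graded dimension of V (x) W from dim V = (p0, p1) and dim W = (q0, q1)."""
    (p0, p1), (q0, q1) = p, q
    return (p0 * q0 + p1 * q1, p0 * q1 + p1 * q0)

print(super_tensor_dim((2, 1), (1, 1)))   # (3, 3): K^{2|1} (x) K^{1|1} has dimension 3|3
```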

Definition 2.19 (Superring). A superring or Z2-graded ring is a ring, R, with a decomposition R = R0 ⊕ R1 such that the product map R × R → R is even. As with SVS, the elements of R0 are called even and the elements of R1 odd.

Definition 2.20 (Ring Homomorphism). A ring homomorphism is an even map f: R → S, with R and S superrings, such that for all r, s ∈ R,
f(r + s) = f(r) + f(s), f(r · s) = f(r) · f(s), and

f(1R) = 1S.

Definition 2.21 (Discrete Supergroup). A discrete supergroup is a Z2-graded group, (G, ·), where · is such that Gi · Gj ⊆ Gi+j. A Z2-graded group is a group such that G = G0 ⊔ G1, where ⊔ is the disjoint union.

3. Superalgebras

In this section, we start by defining the “super” structure on associative algebras and give multiple examples of such structures. We then proceed by defining and giving examples of other super structures directly relating to superalgebras, like super coalgebras and supermodules. To follow, we will bring some of these definitions together to define one of the most important super structures looked at in this section, a super bialgebra.

Definition 3.1 (Superalgebra). An associative superalgebra over some field K is a triple (A, ∇, η) where A is a SVS over K, ∇: A ⊗ A → A is an even associative linear map and η: K → A is an even linear map. Furthermore, we require that the following diagrams commute; that is,

∇ ◦ (∇ ⊗ idA) = ∇ ◦ (idA ⊗ ∇) and ∇ ◦ (η ⊗ idA) = idA = ∇ ◦ (idA ⊗ η),

under the identifications K ⊗ A ≅ A ≅ A ⊗ K. Notice when we write idA ⊗ η: A → A ⊗ A, the domain is actually A ⊗ K ≅ A. For a, b ∈ A, we often write a · b or ab for ∇(a ⊗ b).

Definition 3.2 (Supercommutative). A superalgebra, A, is said to be supercommutative if for all homogeneous a1, a2 ∈ A, we have

a1a2 = (−1)^{|a1||a2|} a2a1.

Example 3.3. Recall from Example 2.12 that L(V, V) forms a super vector space. Now consider L(V, V), which is also denoted End(V) or gl(V). We define the multiplication to be the usual composition of maps. To confirm that the image of End(V)i ⊗ End(V)j is in End(V)i+j, take f, g ∈ End(V) such that p(f) = i, p(g) = j and v ∈ V such that p(v) = k; then p((f ◦ g)(v)) = p(f(g(v))) = i + j + k, thus p(f ◦ g) = i + j.

Example 3.4. Consider the super vector space of supermatrices of dimension p|q × p|q over some field K, denoted Mp|q(K). To make this a superalgebra, we use the usual matrix multiplication and show that Mp|q(K)i · Mp|q(K)j ⊆ Mp|q(K)i+j. Notice the following:

• If both X, Y ∈ Mp|q(K)0 then
X · Y = [ X00Y00    0    ]
        [   0    X11Y11 ],
thus X · Y ∈ Mp|q(K)0.

• If X ∈ Mp|q(K)0 and Y ∈ Mp|q(K)1 then
X · Y = [   0    X00Y01 ]
        [ X11Y10    0   ],
thus X · Y ∈ Mp|q(K)1.

• If X ∈ Mp|q(K)1 and Y ∈ Mp|q(K)0 then
X · Y = [   0    X01Y11 ]
        [ X10Y00    0   ],
thus X · Y ∈ Mp|q(K)1.

• If X, Y ∈ Mp|q(K)1 then
X · Y = [ X01Y10    0    ]
        [   0    X10Y01 ],
thus X · Y ∈ Mp|q(K)0.

Thus Mp|q(K)i · Mp|q(K)j ⊆ Mp|q(K)i+j as required.
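A quick numerical sanity check of Example 3.4 (a sketch, not from the paper, over K = R with p = 2 and q = 1): the product of two odd supermatrices is block diagonal, hence even.

```python
# Sketch verifying the block pattern of Example 3.4 for p = 2, q = 1 over R:
# even supermatrices are block-diagonal, odd ones block-off-diagonal, and the
# product of two odd supermatrices is even.
import numpy as np

p, q = 2, 1
rng = np.random.default_rng(0)

def odd_supermatrix():
    X01 = rng.standard_normal((p, q))
    X10 = rng.standard_normal((q, p))
    return np.block([[np.zeros((p, p)), X01], [X10, np.zeros((q, q))]])

X, Y = odd_supermatrix(), odd_supermatrix()
Z = X @ Y
# The off-diagonal blocks of Z vanish, so Z is even, as claimed.
print(np.allclose(Z[:p, p:], 0), np.allclose(Z[p:, :p], 0))   # True True
```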

Remark 3.5. Let V be a SVS of dimension p|q over K. We can pick bases for V0 and V1, b = {b1, b2, . . . , bp} and B = {B1, B2, . . . , Bq} respectively. We can show that End(V) ≅ Mp|q(K) via the map F: L(V, V) → Mp|q(K) defined for any T ∈ gl(V) by

F(T) = [ [T(b1)]b ··· [T(bp)]b   [T(B1)]b ··· [T(Bq)]b ]   =   [ X00  X01 ]
       [ [T(b1)]B ··· [T(bp)]B   [T(B1)]B ··· [T(Bq)]B ]       [ X10  X11 ],

where, given x ∈ V, [T(x)]Y denotes the column coordinate vector of T(x) with respect to the basis Y. Now notice that if T is an even transformation then this matrix will be a block diagonal matrix with X01 = X10 = [0], and if T is an odd transformation then the matrix will be a block off-diagonal matrix with X00 = X11 = [0]. To show that this is, in fact, an isomorphism is left to the reader.

Example 3.6 (Polynomial Superalgebra). A polynomial superalgebra over K is defined to be the algebra over K generated by p even variables, Xi, and q odd variables, Yj, with relations XiXj = XjXi, XiYj = YjXi, and YiYj = −YjYi for all i and j. This algebra is denoted by K[X1, . . . , Xp | Y1, . . . , Yq]. An element of K[X1, . . . , Xp | Y1, . . . , Yq] is called a monomial if it is only a product of variables. Furthermore, we determine the parity of a monomial by the sum of its variables’ parities in Z2.

Example 3.7 (Rank 1 Clifford Algebra). The rank 1 Clifford algebra is a superalgebra with underlying SVS C[x]/(x² − 1) = C ⊕ Cx, where C is even and Cx is odd, and multiplication being the standard polynomial multiplication. With this multiplication it is easy to verify that (C[x]/(x² − 1))i · (C[x]/(x² − 1))j ⊆ (C[x]/(x² − 1))i+j for all i and j.

Definition 3.8 (Tensor of Algebras). Given algebras A and B, the space A ⊗ B is naturally a vector space. To define an algebra structure on A ⊗ B, we define the multiplication of any a1 ⊗ b1, a2 ⊗ b2 ∈ A ⊗ B by

(a1 ⊗ b1) · (a2 ⊗ b2) = a1a2 ⊗ b1b2.

Definition 3.9 (Tensor of Superalgebras). Analogously, if A and B are both superalgebras then A ⊗ B is a SVS as seen in Definition 2.18. Furthermore, A ⊗ B will also be a superalgebra with multiplication defined, for all simple tensors a1 ⊗ b1, a2 ⊗ b2 ∈ A ⊗ B, by

(a1 ⊗ b1) · (a2 ⊗ b2) = (−1)^{|a2|·|b1|} (a1a2 ⊗ b1b2).

Example 3.10 (Rank r Clifford Algebra). Using the rank 1 Clifford algebra from Example 3.7, we can construct a rank r Clifford algebra by forming the tensor product C[x1]/(x1² − 1) ⊗ C[x2]/(x2² − 1) ⊗ · · · ⊗ C[xr]/(xr² − 1) with multiplication as defined in Definition 3.9. While working with rank r Clifford algebras, we write xi for 1 ⊗ · · · ⊗ xi ⊗ · · · ⊗ 1 for all i ∈ {1, . . . , r}. The rank r Clifford algebra has a basis given by monomials in the xi. The parity 0 component is spanned by monomials of even degree and the parity 1 component is spanned by monomials of odd degree. Notice that the rank r Clifford algebra is a supercommutative superalgebra since for all i, j ∈ {1, . . . , r} with i ≠ j we have xixj = (−1)xjxi.

Definition 3.11 (Supermodule). Let (A, ∇, η) be a superalgebra. A right supermodule or right Z2-graded module, M, is a SVS together with an even linear map σ: M ⊗ A → M, called an action, such that the following diagrams commute; that is,

σ ◦ (σ ⊗ idA) = σ ◦ (idM ⊗ ∇) and σ ◦ (idM ⊗ η) = idM,

where we identify M ≅ M ⊗ K. A left supermodule is defined analogously.

Example 3.12. Notice that K^{p|q} = K^p ⊕ K^q over Mp|q(K) with standard matrix multiplication forms a right supermodule (or a left supermodule if we view K^{p|q} as column vectors). To ensure that this is a module, notice that K^{p|q} is an abelian group under standard vector addition, 1 ∈ Mp|q(K) is just the identity matrix, and distributivity is inherited from the distributivity of matrix multiplication over addition. Lastly, to ensure that Mp|q(K)i · K^{p|q}_j ⊆ K^{p|q}_{i+j}, consider the following cases (writing X ∈ K^{p|q} as a column vector with blocks X0 ∈ K^p and X1 ∈ K^q):

• If Y ∈ Mp|q(K)0 and X ∈ K^{p|q}_0 then
Y · X = [ Y00  0  ] [ X0 ] = [ Y00X0 ] ∈ K^{p|q}_0.
        [  0  Y11 ] [  0 ]   [   0   ]

• If Y ∈ Mp|q(K)0 and X ∈ K^{p|q}_1 then
Y · X = [ Y00  0  ] [  0 ] = [   0   ] ∈ K^{p|q}_1.
        [  0  Y11 ] [ X1 ]   [ Y11X1 ]

• If Y ∈ Mp|q(K)1 and X ∈ K^{p|q}_0 then
Y · X = [  0  Y01 ] [ X0 ] = [   0   ] ∈ K^{p|q}_1.
        [ Y10  0  ] [  0 ]   [ Y10X0 ]

• If Y ∈ Mp|q(K)1 and X ∈ K^{p|q}_1 then
Y · X = [  0  Y01 ] [  0 ] = [ Y01X1 ] ∈ K^{p|q}_0.
        [ Y10  0  ] [ X1 ]   [   0   ]

Concluding that, in fact, Mp|q(K)i · K^{p|q}_j ⊆ K^{p|q}_{i+j} for all i, j ∈ Z2, as required.

Definition 3.13 (Super Coalgebra). A super coalgebra is a triple (C, ∆, ε) where C is a SVS over K and ∆: C → C ⊗ C and ε: C → K are even maps such that the coassociativity and counit diagrams commute; that is,

(∆ ⊗ idC) ◦ ∆ = (idC ⊗ ∆) ◦ ∆ and, for all c ∈ C, (idC ⊗ ε) ◦ ∆(c) = c = (ε ⊗ idC) ◦ ∆(c).

We call ∆ the comultiplication and ε the counit.

Remark 3.14. While working with coalgebras, we write ∆(c) = Σ c1 ⊗ c2 and we will sometimes write c = Σ c1 ε(c2) = Σ ε(c1) c2 instead of (idC ⊗ ε) ◦ ∆(c) = c = (ε ⊗ idC) ◦ ∆(c).

Definition 3.15 (Super Bialgebra). A super bialgebra is a 5-tuple (B, ∇, η, ∆, ε) such that (B, ∇, η) is an associative superalgebra over some field K, (B, ∆, ε) is a super coalgebra over K, and the following compatibility diagrams commute; that is,

∆ ◦ ∇ = (∇ ⊗ ∇) ◦ (idB ⊗ µ ⊗ idB) ◦ (∆ ⊗ ∆),
ε ◦ ∇ = ε ⊗ ε and ∆ ◦ η = η ⊗ η (identifying K ⊗ K ≅ K), and
ε ◦ η = idK.

In the above, µ: B ⊗ B → B ⊗ B is the flip operator.

Example 3.16. Recall the polynomial superalgebra, K[X1, . . . , Xp | Y1, . . . , Yq], as described in Example 3.6. We make this into a super bialgebra by defining ε(f) = f(0, . . . , 0 | 0, . . . , 0) for all f ∈ K[X1, . . . , Xp | Y1, . . . , Yq] and by defining ∆(W) = W ⊗ 1 + 1 ⊗ W, where W ∈ {X1, . . . , Xp, Y1, . . . , Yq}, extended linearly and multiplicatively. It is important to notice that the relation YiYj = −YjYi of the polynomial superalgebra implies that Yi² = 0. Thus for all Yi ∈ {Y1, . . . , Yq}, ∆(Yi²) = 0. To verify this consider the following:

∆(Yi²) = ∆(Yi)∆(Yi) = (Yi ⊗ 1 + 1 ⊗ Yi)² = Yi² ⊗ 1 + Yi ⊗ Yi − Yi ⊗ Yi + 1 ⊗ Yi² = 0.

4. Super Hopf Algebras

In this section, we take the super bialgebras previously defined and use them to define super Hopf algebras. Continuing, we start by first looking at some examples of both Z2-graded Hopf algebras and ungraded Hopf algebras. One example to pay special attention to is the group algebra example, since it will be the motivation for when we talk about supergroups. Lastly, we will look at and prove some important properties concerning the duals of superalgebras, super coalgebras, super bialgebras and super Hopf algebras.

Definition 4.1 (Super Hopf Algebra). A super Hopf algebra is a 6-tuple (H, ∇, η, ∆, ε, S) where (H, ∇, η, ∆, ε) is a super bialgebra and S is an even linear map S: H → H, called the antipode, such that the following diagram commutes; that is, for all h ∈ H,

(4.1)   ∇ ◦ (idH ⊗ S) ◦ ∆(h) = η ◦ ε(h) = ∇ ◦ (S ⊗ idH) ◦ ∆(h).

Example 4.2. Continuing with the polynomial superalgebra from Example 3.16, we will define the antipode, S, of H = K[X1, . . . , Xp | Y1, . . . , Yq] so that it is a super Hopf algebra. We define S: H → H to be the unique algebra homomorphism such that for all h ∈ {X1, . . . , Xp, Y1, . . . , Yq} we have S(h) = −h. Now, to verify that this antipode, S, satisfies the diagram in Definition 4.1, we can look at h ∈ {X1, . . . , Xp, Y1, . . . , Yq} and consider

∇((idH ⊗ S)(∆(h))) = ∇((idH ⊗ S)(h ⊗ 1 + 1 ⊗ h)) = ∇(h ⊗ 1 − 1 ⊗ h) = h − h = 0, η(ε(h)) = η(0) = 0,

∇((S ⊗ idH)(∆(h))) = ∇((S ⊗ idH)(h ⊗ 1 + 1 ⊗ h)) = ∇(−h ⊗ 1 + 1 ⊗ h) = −h + h = 0.

Thus for all h ∈ {X1, . . . , Xp, Y1, . . . , Yq}, we have ∇((idH ⊗ S)(∆(h))) = η(ε(h)) = ∇((S ⊗ idH)(∆(h))). Since {X1, . . . , Xp, Y1, . . . , Yq} generates H, it follows that (4.1) holds on all of H. We conclude that H is a super Hopf algebra.

Example 4.3 (Tensor Algebra). Let V be a SVS over K. Then T(V) = ⊕_{k=0}^∞ V^{⊗k} is a SVS, where V^{⊗0} = K and the parity decomposition is as described in Definition 2.18. We call T(V) the tensor algebra of V. It becomes a superalgebra where ∇: T(V) ⊗ T(V) → T(V) is defined for all v1 ⊗ · · · ⊗ vk, w1 ⊗ · · · ⊗ wℓ ∈ T(V) by

∇((v1 ⊗ · · · ⊗ vk) ⊗ (w1 ⊗ · · · ⊗ wℓ)) = v1 ⊗ · · · ⊗ vk ⊗ w1 ⊗ · · · ⊗ wℓ,

and extend by linearity. Continuing, we let ∆: T(V) → T(V) ⊗ T(V), ε: T(V) → K and S: T(V) → T(V) be algebra homomorphisms. Now we define these maps such that T(V) is a super Hopf algebra. For all x ∈ V we define ∆(x) = x ⊗ 1 + 1 ⊗ x, ∆(1) = 1 ⊗ 1, ε(x) = 0, and S(x) = −x, and extend linearly and multiplicatively to higher tensor powers. With these definitions it is straightforward to verify that T(V) is a Hopf superalgebra.

Example 4.4 (Symmetric Algebra). Consider the Hopf superalgebra, T(V), from Example 4.3. We call the quotient algebra T(V)/I the symmetric algebra, denoted S(V), where I is the ideal generated by {x ⊗ y − (−1)^{p(x)p(y)} y ⊗ x | x, y ∈ V}. Now S(V) inherits an algebra structure. Furthermore, the maps ∆, ε, ∇, η, and S as defined in Example 4.3 will induce maps on the quotient such that S(V) will also be a super Hopf algebra. Notice that S(V) is supercommutative since for all x, y ∈ V

0 = x ⊗ y − (−1)^{p(x)p(y)} y ⊗ x =⇒ x ⊗ y = (−1)^{p(x)p(y)} y ⊗ x.

Example 4.5 (Group Algebra). Given a group G, the group algebra of G is the vector space

K[G] = { Σ_{g∈G} ag g | ag ∈ K for all g ∈ G },

with multiplication defined for all ag, bh ∈ K and g, h ∈ G to be

(ag g) · (bh h) = (ag bh)gh,

and extended by linearity. Now, to make this an example of a Hopf algebra, we define ∆: K[G] → K[G] ⊗ K[G], ε: K[G] → K, and S: K[G] → K[G] for all g ∈ G to be

∆(g) = g ⊗ g,   ε(g) = 1,   S(g) = g⁻¹,

extended by linearity. Given these definitions, it is straightforward to verify that these maps satisfy the conditions of Definition 4.1. Thus K[G] is a Hopf algebra.

Throughout the rest of this work, we will need some commonplace identifications, and the following lemma will provide us with the proofs of such identifications.

Lemma 4.6. (i) If A is a finite-dimensional vector space, then ϕ: (A∗ ⊗ A∗) → (A ⊗ A)∗ defined for all f ⊗ g ∈ A∗ ⊗ A∗ and a ⊗ b ∈ A ⊗ A via ϕ(f ⊗ g)(a ⊗ b) = f(a)g(b) is an isomorphism when extended by linearity.

(ii) Similarly, the map ψ: K∗ → K defined for all f ∈ K∗ via ψ(f) = f(1) is an isomorphism.

Proof. To prove (i), assume that A is of dimension p with basis B = {a1, . . . , ap}. Thus, we know that both A∗ ⊗ A∗ and (A ⊗ A)∗ are of dimension p² with respective bases {ai∗ ⊗ aj∗ | ai, aj ∈ B} and {(ai ⊗ aj)∗ | ai, aj ∈ B}. To show that ϕ is an isomorphism, notice that for any basis element ai∗ ⊗ aj∗ and for all a ⊗ b ∈ A ⊗ A we have ϕ(ai∗ ⊗ aj∗)(a ⊗ b) = (ai ⊗ aj)∗(a ⊗ b). Thus ϕ(ai∗ ⊗ aj∗) = (ai ⊗ aj)∗ and we can easily conclude that ϕ is surjective and therefore an isomorphism. The proof of (ii) is straightforward and follows immediately from the definition of ψ.
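A small numerical illustration of Lemma 4.6(i), not from the paper: over K = R, writing functionals as row vectors and realizing ⊗ as the Kronecker product, the identity ϕ(f ⊗ g)(a ⊗ b) = f(a)g(b) becomes a statement about dot products.

```python
# Sketch: with functionals on R^n written as row vectors, the functional
# phi(f (x) g) on R^n (x) R^n is the Kronecker product of the rows, and it
# evaluates on a (x) b to f(a) * g(b), as in Lemma 4.6(i).
import numpy as np

rng = np.random.default_rng(1)
n = 3
f, g = rng.standard_normal(n), rng.standard_normal(n)    # two functionals
a, b = rng.standard_normal(n), rng.standard_normal(n)    # two vectors

lhs = np.kron(f, g) @ np.kron(a, b)    # phi(f (x) g) applied to a (x) b
rhs = (f @ a) * (g @ b)                # f(a) g(b)
print(np.allclose(lhs, rhs))           # True
```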

We will view the isomorphisms of Lemma 4.6 as identifications.

Proposition 4.7. (1) Given a finite-dimensional superalgebra, (A, ∇, η), the dual, (A∗, ∇∗, η∗), is a super coalgebra. Furthermore, given a finite-dimensional super coalgebra (C, ∆, ε), the dual, (C∗, ∆∗, ε∗), is a superalgebra.

(2) If (B, ∇, η, ∆, ε) is a finite-dimensional super bialgebra then the dual (B∗, ∆∗, ε∗, ∇∗, η∗) is a super bialgebra.

(3) Given a finite-dimensional super Hopf algebra, (H, ∇, η, ∆, ε, S), the dual, (H∗, ∆∗, ε∗, ∇∗, η∗, S∗), is a super Hopf algebra.

Proof. To prove (1), take (A, ∇, η) to be a finite-dimensional superalgebra. We already know that the dual, A∗, is a SVS. To check that ∇∗ and η∗ satisfy the diagrams in Definition 3.13, notice that taking the dual of A and the transposes of ∇ and η turns the associativity and unit diagrams of Definition 3.1 into the following identities.

(∇∗ ⊗ idA∗) ◦ ∇∗ = (idA∗ ⊗ ∇∗) ◦ ∇∗ and (idA∗ ⊗ η∗) ◦ ∇∗ = idA∗ = (η∗ ⊗ idA∗) ◦ ∇∗.

These are precisely the coassociativity and counit axioms of Definition 3.13 for (A∗, ∇∗, η∗), with comultiplication ∇∗ and counit η∗. Thus (A∗, ∇∗, η∗) is a super coalgebra.

Given a super coalgebra, (C, ∆, ε), we can use a similar process as above to check that (C∗, ∆∗, ε∗) satisfies the axioms of a superalgebra. This is left to the reader.

To prove (2), take (B, ∇, η, ∆, ε) to be a finite-dimensional super bialgebra. Notice that (B, ∇, η) is an associative superalgebra, and by (1) we can conclude that (B∗, ∇∗, η∗) is a super coalgebra. Similarly, (B, ∆, ε) is a super coalgebra, and again by (1) we know that (B∗, ∆∗, ε∗) is a superalgebra. To verify that the diagrams in Definition 3.15 commute, notice that taking the dual of B and the transposes of the maps ∇, η, ∆, and ε yields the corresponding dual diagrams.

The dualized diagrams are exactly the compatibility diagrams of Definition 3.15 for the data (B∗, ∆∗, ε∗, ∇∗, η∗): for instance, dualizing ∆ ◦ ∇ = (∇ ⊗ ∇) ◦ (idB ⊗ µ ⊗ idB) ◦ (∆ ⊗ ∆) gives ∇∗ ◦ ∆∗ = (∆∗ ⊗ ∆∗) ◦ (idB∗ ⊗ µ ⊗ idB∗) ◦ (∇∗ ⊗ ∇∗), and the remaining three diagrams dualize in the same way. Thus, it follows that (B∗, ∆∗, ε∗, ∇∗, η∗) is a super bialgebra.

To prove (3), let (H, ∇, η, ∆, ε, S) be a finite-dimensional super Hopf algebra. Notice that (H, ∇, η, ∆, ε) is a super bialgebra, thus by (2) we know that (H∗, ∆∗, ε∗, ∇∗, η∗) is a super bialgebra. Given this, we need only verify that the antipode, S∗: H∗ → H∗, satisfies (4.1). Notice that when we consider the dual of H and the transposes of ∇, η, ∆, ε, and S, we get the dual of (4.1).

That is, ∆∗ ◦ (idH∗ ⊗ S∗) ◦ ∇∗ = ε∗ ◦ η∗ = ∆∗ ◦ (S∗ ⊗ idH∗) ◦ ∇∗, which is exactly (4.1) for the super bialgebra (H∗, ∆∗, ε∗, ∇∗, η∗) with antipode S∗. Thus (H∗, ∆∗, ε∗, ∇∗, η∗, S∗) is a super Hopf algebra.

Example 4.8. Recall the superalgebra, from Example 3.4, consisting of supermatrices. We use Proposition 4.7 to construct the super coalgebra C = Mp|q(K)∗. We know C has a basis {fi,j}, where for all X ∈ Mp|q(K), fi,j(X) is defined as the (i, j)-th entry of X. In fact C is a SVS, where we define fi,j to be even if 1 ≤ i, j ≤ p or p + 1 ≤ i, j ≤ p + q, and odd otherwise. To make this into a super coalgebra, we know that ∇∗: C → C ⊗ C and η∗: C → K are given for all fi,j ∈ C by

∇∗(fi,j) = fi,j ◦ ∇ and η∗(fi,j) = fi,j ◦ η.

Since we have specific definitions for ∇ and η, we can deduce specific equations for both ∇∗ and η∗. First, for ∇∗, consider for any fi,j ∈ C and M, N ∈ Mp|q(K):

∇∗(fi,j)(M ⊗ N) = fi,j(MN) = Σ_{k=1}^{p+q} Mi,k Nk,j = Σ_{k=1}^{p+q} fi,k(M) fk,j(N) = Σ_{k=1}^{p+q} (fi,k ⊗ fk,j)(M ⊗ N).

To deduce η∗, first recall that η: K → Mp|q(K) is defined for all k ∈ K as η(k) = k · I, where I is the identity matrix. Recall the identification of K∗ with K given in Lemma 4.6. We compute:

η∗(fi,j) = fi,j ◦ η(1) = fi,j(I) = 1 if i = j, and 0 if i ≠ j.

Now, to verify that this satisfies the equation (idC ⊗ η∗)(∇∗(c)) = c = (η∗ ⊗ idC)(∇∗(c)) for all c ∈ C, consider, for all fi,j ∈ C:

(η∗ ⊗ idC)(∇∗(fi,j)) = (η∗ ⊗ idC)(Σ_{k=1}^{p+q} fi,k ⊗ fk,j) = Σ_{k=1}^{p+q} η∗(fi,k) ⊗ fk,j = fi,j = Σ_{k=1}^{p+q} fi,k ⊗ η∗(fk,j) = (idC ⊗ η∗)(∇∗(fi,j)).

Thus C is a super coalgebra.

Example 4.9. Given Proposition 4.7, let us consider the dual of the Hopf algebra from Example 4.5 (with G a finite group), denoted by O(G). Notice that G is a basis for K[G], thus K[G]∗ has the dual basis consisting of maps from G to K; that is, K[G]∗ is the vector space O(G) of K-valued functions on G. By Proposition 4.7, we compute S∗, ε∗, and η∗ for all f ∈ O(G), x ∈ G, and k ∈ K:

S∗(f)(x) = f ◦ S(x) = f(x⁻¹),
η∗(f) = f ◦ η = f(eG),
ε∗(k) = k · 1O(G).

Now, before explicitly computing ∆∗ and ∇∗, it is important to notice that, since G is finite, the map ψ: O(G) ⊗ O(G) → O(G × G) defined for all f, g ∈ O(G) and (a, b) ∈ G × G via ψ(f ⊗ g)(a, b) = f(a)g(b) is an isomorphism. With such an identification, we can deduce for all f, g ∈ O(G)

∇∗(f) = f ◦ ∇ and ∆∗(f ⊗ g) = (f ⊗ g) ◦ ∆.

Definition 4.10 (Cocommutative). A coalgebra, (C, ∆, ε), is called cocommutative if the following diagram is commutative.

That is, µ ◦ ∆ = ∆, where µ: A ⊗ B → B ⊗ A is the flip operator defined for all a ∈ A and b ∈ B via µ(a ⊗ b) = b ⊗ a, and extended by linearity.
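Before turning to Example 4.11, here is a sketch (not from the paper) of the structure maps of Example 4.5 for the small group G = Z/2Z, with an element of K[G] stored as a dictionary g ↦ coefficient; it checks the antipode identity (4.1) on one element and the cocommutativity discussed next. The encoding and helper names are choices made only for this sketch.

```python
# Sketch (not from the paper): the group algebra K[G] for G = Z/2Z, elements
# stored as dicts g -> coefficient, simple tensors in K[G] (x) K[G] stored as
# dicts on pairs (g, h).  Structure maps as in Example 4.5:
# Delta(g) = g (x) g, eps(g) = 1, S(g) = g^{-1}.

G = [0, 1]                                   # Z/2Z under addition mod 2
e = 0                                        # identity element

def mul(g, h):  return (g + h) % 2           # group law
def inv(g):     return (-g) % 2              # inverse

def Delta(x):   return {(g, g): c for g, c in x.items()}
def eps(x):     return sum(x.values())
def eta(k):     return {e: k}
def nabla(t):                                # multiplication K[G] (x) K[G] -> K[G]
    out = {}
    for (g, h), c in t.items():
        out[mul(g, h)] = out.get(mul(g, h), 0) + c
    return out
def id_tensor_S(t):                          # (id (x) S) on K[G] (x) K[G]
    return {(g, inv(h)): c for (g, h), c in t.items()}
def flip(t):                                 # the flip operator mu
    return {(h, g): c for (g, h), c in t.items()}

x = {0: 2, 1: 5}                             # the element 2*[0] + 5*[1]
print(nabla(id_tensor_S(Delta(x))) == eta(eps(x)))   # True: the antipode identity (4.1)
print(flip(Delta(x)) == Delta(x))                    # True: K[G] is cocommutative
```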

Example 4.11. Recall the Hopf algebra K[G] from Example 4.5. Notice that K[G] is cocommutative since for all g ∈ G, µ(∆(g)) = µ(g ⊗ g) = g ⊗ g = ∆(g).

5. Representation Theory

In this section, we briefly define representations of a group and some associated structures utilizing these representations. Although this section is short, it holds some important information that we will use later with our more complex supergroup structure.

Definition 5.1 (Representation). A representation of a group G on a vector space V over K is a group homomorphism from G to GL(V) = {f ∈ L(V, V) | f is invertible}. That is, ρ: G → GL(V) is a map where ρ(g1g2) = ρ(g1)ρ(g2) for all g1, g2 ∈ G. We will call V a representation of G and leave the map ρ implied. We will also write g · v for ρ(g)(v).

Definition 5.2 (G-module). Equivalently to the notion of representations, we define G-modules as follows. Let G be a group. A left G-module is a vector space, A, with a map ρ: G × A → A satisfying, for all g, g1, g2 ∈ G and a, a1, a2 ∈ A,

g · (a1 + a2) = g · a1 + g · a2,

g1 · (g2 · a) = (g1g2) · a. We write g · a for ρ(g, a).

Definition 5.3 (Subrepresentation). Given a representation, ρ, of G on V, a subspace W ⊆ V is called G-invariant if ρ(g)(w) ∈ W for all g ∈ G and w ∈ W. A subrepresentation is the restriction of ρ to a G-invariant subspace, W.

We say ρ: G → GL(V) is irreducible if the only G-invariant subspaces are the zero subspace and the whole vector space V.

Definition 5.4 (Direct Sum of Representations). Let G be a group and V and W two representations of G. The direct sum of group representations is the direct sum of vector spaces V ⊕ W with the action of g ∈ G for all (v, w) ∈ V ⊕ W given by g · (v, w) = (g · v, g · w).

Definition 5.5 (Tensor Product of Representations). Let (V, ρ) and (W, σ) be representations of some group, G. The tensor product V ⊗ W is a representation of G whose action, ρ ⊗ σ, is defined for all g ∈ G and all simple tensors v ⊗ w ∈ V ⊗ W by (ρ ⊗ σ)(g)(v ⊗ w) = ρ(g)(v) ⊗ σ(g)(w), extended by linearity.

Example 5.6. Let V be a vector space. Considering the group of permutations of X = {1, . . . , n}, denoted Sn, we define a representation of Sn on V^{⊗n} by the action ρ. We define ρ for all σ ∈ Sn and all v1 ⊗ · · · ⊗ vn ∈ V^{⊗n} as ρ(σ, v1 ⊗ · · · ⊗ vn) = vσ⁻¹(1) ⊗ · · · ⊗ vσ⁻¹(n) and extend by linearity.

Definition 5.7 (Invariant). Let (V, ρ) be a representation of G. We say that v ∈ V is invariant if ρ(g)(v) = v for all g ∈ G.

Definition 5.8 (Partition). Given n ∈ N, a partition of n is a non-increasing list of natural numbers whose sum is n.

Example 5.9. As an example, all the possible partitions of 4 are (1, 1, 1, 1), (2, 1, 1), (2, 2), (3, 1), and (4).

Remark 5.10. Later it will be important to know that the irreducible representations of Sn are naturally enumerated by the partitions of n. We let Vλ denote the irreducible representation of Sn corresponding to the partition λ of n. More information on this can be read in [4, §7–8].
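Since partitions will index both the irreducible representations of Sn (Remark 5.10) and the Schur functors of Section 6, here is a short sketch (not from the paper) that generates all partitions of n as non-increasing tuples, recovering the five partitions of Example 5.9.

```python
# Sketch: generate all partitions of n as non-increasing tuples (Definition 5.8).
def partitions(n, largest=None):
    if largest is None:
        largest = n
    if n == 0:
        return [()]
    out = []
    for k in range(min(n, largest), 0, -1):
        out.extend((k,) + rest for rest in partitions(n - k, k))
    return out

print(partitions(4))
# [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]  -- the five partitions of 4
```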

6. Tensor Categories

For the following section, we will only be considering fields, K, that are algebraically closed of characteristic zero, such as the field of complex numbers. We begin by first recalling the definition of a category and proceed to look at different types of categories. Specifically, we look at monoidal categories and work towards defining a particular type of monoidal category, a K-tensor category, and then a regular K-tensor category. More can be read on category theory throughout [5] and [3].

Definition 6.1 (Category). A category, C, consists of a class of objects, denoted ob(C), and a class of morphisms or maps between the objects, denoted Hom(C). We write Hom(a, b) or HomC(a, b) for the class of all morphisms from a to b, where a, b ∈ ob(C). These collections must satisfy:
• For a, b, c ∈ ob(C), there is a composition map, Hom(a, b) × Hom(b, c) → Hom(a, c), (f, g) ↦ g ◦ f, where f ∈ Hom(a, b), g ∈ Hom(b, c) and g ◦ f ∈ Hom(a, c).
• For f ∈ Hom(a, b), g ∈ Hom(b, c), and h ∈ Hom(c, d), we have (h ◦ g) ◦ f = h ◦ (g ◦ f).
• For any a ∈ ob(C), there is an identity morphism, ida, such that for every morphism f: x → a and g: a → y we have ida ◦ f = f and g ◦ ida = g.

Example 6.2 (Category of Super Vector Spaces). Let K-SVect be the category whose objects are super vector spaces over some field K and whose morphisms are all even linear transformations between these SVS. We now check that composition is associative and that there is an identity morphism:
• The composition of morphisms will be the usual composition of linear transformations. Notice that if f, g are both even linear transformations then f ◦ g will also be even.
• Associativity is directly inherited from the associativity of composition of linear transformations.
• For any SVS V ∈ ob(K-SVect), the identity morphism is the identity transformation on V.

Definition 6.3 (Functor). Let C and D be categories. A functor is a map F: C → D such that
• we have F(X) ∈ ob(D) for every X ∈ ob(C) and
• for all f ∈ Hom(C), where f: X → Y, we have F(f) ∈ Hom(D) with F(f): F(X) → F(Y),
satisfying the following conditions:
∗ F(idX) = idF(X) for every X ∈ ob(C) and
∗ F(g ◦ f) = F(g) ◦ F(f) for all morphisms f: X → Y and g: Y → Z in Hom(C).

Definition 6.4 (Equivalence of Categories). Let C and D be categories. An equivalence of categories consists of two functors F: C → D and G: D → C and two natural isomorphisms ϕ: FG → idD and ψ: idC → GF. If such functors and maps exist, then we say C and D are equivalent.

Definition 6.5 (Monoidal Category). A monoidal category is a category C with a bifunctor ⊗: C × C → C, called the tensor product or monoidal product, such that there is

• a left and right identity, I ∈ ob(C), such that for all A ∈ C, I ⊗A ∼= A and A⊗I ∼= A, • for all A, B, C ∈ ob C an isomorphism αA,B,C :(A ⊗ B) ⊗ C → A ⊗ (B ⊗ C), called an associator, and • the following diagrams commute for all A, B, C, D ∈ C:

• the triangle diagram: the associator αA,I,B: (A ⊗ I) ⊗ B → A ⊗ (I ⊗ B) is compatible with the identifications (A ⊗ I) ⊗ B ≅ A ⊗ B ≅ A ⊗ (I ⊗ B);
• the pentagon diagram: the two composites of associators from ((A ⊗ B) ⊗ C) ⊗ D to A ⊗ (B ⊗ (C ⊗ D)) agree, that is,

(idA ⊗ αB,C,D) ◦ αA,B⊗C,D ◦ (αA,B,C ⊗ idD) = αA,B,C⊗D ◦ αA⊗B,C,D.

Example 6.6. An example of a monoidal category is K-SVect with the standard tensor product as the monoidal product and the one-dimensional even vector space K = K0 as the identity, I; lastly, we need only define α. For all A, B, C ∈ K-SVect we define αA,B,C: (A ⊗ B) ⊗ C → A ⊗ (B ⊗ C) to be αA,B,C(Σ (ai ⊗ bi) ⊗ ci) = Σ ai ⊗ (bi ⊗ ci) for all Σ (ai ⊗ bi) ⊗ ci ∈ (A ⊗ B) ⊗ C. With these definitions, it is easy to check that the diagrams in Definition 6.5 commute.
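When the tensor product of Example 6.6 is realized in coordinates via the Kronecker product, the associator becomes the (trivial-looking) fact that Kronecker products are associative; a quick check, assuming K = R (a sketch, not from the paper):

```python
# Sketch: for coordinate vectors, (a (x) b) (x) c and a (x) (b (x) c) have the
# same coordinates, which is what the associator alpha of Example 6.6 encodes.
import numpy as np

rng = np.random.default_rng(2)
a, b, c = rng.standard_normal(2), rng.standard_normal(3), rng.standard_normal(4)
print(np.allclose(np.kron(np.kron(a, b), c), np.kron(a, np.kron(b, c))))   # True
```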

Definition 6.7 (Braided Monoidal Category). We say a category, C, is a braided monoidal category if for all A, B ∈ C there is an isomorphism, γA,B: A ⊗ B → B ⊗ A, such that the following diagrams commute for all objects A, B, C ∈ C:

(idB ⊗ γA,C) ◦ αB,A,C ◦ (γA,B ⊗ idC) = αB,C,A ◦ γA,B⊗C ◦ αA,B,C as maps (A ⊗ B) ⊗ C → B ⊗ (C ⊗ A), and

(γA,C ⊗ idB) ◦ α⁻¹A,C,B ◦ (idA ⊗ γB,C) = α⁻¹C,A,B ◦ γA⊗B,C ◦ α⁻¹A,B,C as maps A ⊗ (B ⊗ C) → (C ⊗ A) ⊗ B.

These are the two hexagon diagrams relating the braiding γ to the associator α.

Definition 6.8 (Symmetric Monoidal Category). A category, C, is a symmetric monoidal category if it is a braided monoidal category that is maximally symmetric. That is, for all A, B ∈ C we have γB,A ◦ γA,B = idA⊗B, and for any A ∈ C the braiding γA,I: A ⊗ I → I ⊗ A is compatible with the identifications A ⊗ I ≅ A ≅ I ⊗ A.

Example 6.9. A few simple examples of symmetric braided monoidal categories are the following.
• The category of sets, denoted Set, is the category whose objects are sets and morphisms are functions between sets. This is an example of a monoidal category where ⊗ is just the Cartesian product and the unit object is any fixed singleton; in fact, we can go farther and say that this is a symmetric braided monoidal category.
• The category of groups, denoted Grp, is the category whose objects are groups and morphisms are group homomorphisms. Similarly to Set, Grp is a symmetric braided monoidal category whose unit is the trivial group and whose ⊗ is the standard Cartesian product.

Definition 6.10 (Rigid Monoidal Category). An object A is left rigid if there is an object B ∈ C and morphisms ηA: I → A ⊗ B and εA: B ⊗ A → I such that both of the following compositions are identities:

(idA ⊗ εA) ◦ αA,B,A ◦ (ηA ⊗ idA): A ≅ I ⊗ A → A ⊗ I ≅ A, and
(εA ⊗ idB) ◦ α⁻¹B,A,B ◦ (idB ⊗ ηA): B ≅ B ⊗ I → I ⊗ B ≅ B.

We say B is right rigid if there exists A such that the above is satisfied. An object is said to be rigid if it is both left and right rigid. A rigid monoidal category, C, is a monoidal category such that every object is rigid.

Example 6.11. Considering K-SVect, we can show that K-SVect is a rigid monoidal category. Given any V ∈ K-SVect we can pick a basis, B, and consider the dual space V∗. We let B∨ = {v∨ | v ∈ B} denote the dual basis of V∗. To show that V is left rigid, we define εV: V∗ ⊗ V → K for all v ∈ V, f ∈ V∗ via

εV(f ⊗ v) = f(v);

we extend this definition by linearity and define ηV: K → V ⊗ V∗ for all k ∈ K via

ηV(k) = k Σ_{v∈B} v ⊗ v∨.

Now it suffices to show that the compositions in Definition 6.10 are identities on all elements coming from B and B∨. We use the identifications K ⊗ V ≅ V ≅ V ⊗ K. Now take any k ∈ K and b ∈ B and notice that

(idV ⊗ εV) ◦ αV,V∗,V ◦ (ηV ⊗ idV)(k ⊗ b) = (idV ⊗ εV)(αV,V∗,V((k Σ_{v∈B} v ⊗ v∨) ⊗ b))
= (idV ⊗ εV)(Σ_{v∈B} kv ⊗ (v∨ ⊗ b))
= Σ_{v∈B} kv ⊗ v∨(b)
= kb ⊗ 1 ≅ k ⊗ b,

where the last equality holds because v∨(b) = 1 only when v = b, and otherwise v∨(b) = 0. Similarly, for all b∨ ∈ B∨ and k ∈ K we have

(εV ⊗ idV∗) ◦ α⁻¹V∗,V,V∗ ◦ (idV∗ ⊗ ηV)(b∨ ⊗ k) = (εV ⊗ idV∗)(α⁻¹V∗,V,V∗(b∨ ⊗ (k Σ_{v∈B} v ⊗ v∨)))
= (εV ⊗ idV∗)(Σ_{v∈B} (b∨ ⊗ kv) ⊗ v∨)
= Σ_{v∈B} (k · b∨(v)) ⊗ v∨
= k ⊗ b∨ ≅ b∨ ⊗ k.

The final step again uses that b∨(v) = 1 only when v = b, and otherwise b∨(v) = 0. Thus, given these definitions, the compositions in Definition 6.10 are identities and therefore V is left rigid. We can use similar maps to show that V is also right rigid, and therefore we can conclude that V is, in fact, rigid. This holds for any V ∈ K-SVect, and therefore K-SVect is a rigid monoidal category.
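The first zig-zag composition of Example 6.11 can also be checked in coordinates: encoding ηV(1) = Σ v ⊗ v∨ by the identity matrix and εV by evaluation, the composite returns its input. This is a sketch over K = R and is not from the paper.

```python
# Sketch: the "zig-zag" composite of Example 6.11 in coordinates. eta sends
# 1 to sum_i e_i (x) e_i^dual (the identity matrix), eps is evaluation, and the
# composite sum_i e_i * (e_i^dual applied to b) returns b unchanged.
import numpy as np

p = 4
rng = np.random.default_rng(3)
b = rng.standard_normal(p)

E = np.eye(p)                                   # columns e_i; rows e_i^dual
composite = sum(E[:, i] * (E[i, :] @ b) for i in range(p))
print(np.allclose(composite, b))                # True: the composite is the identity on V
```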

Definition 6.12 (K-Linear Category). Let K be some field. A category is said to be K-linear if HomC(a, b) is a K-vector space for all a, b ∈ C and the composition map, ◦: Hom(b, c) ⊗ Hom(a, b) → Hom(a, c) defined for all f, g ∈ HomC by ◦(f, g) = f ◦ g, is bilinear.

Definition 6.13 (K-Tensor Category). A K-tensor category, C, is a K-linear rigid symmetric braided monoidal category.

Example 6.14. We have already seen that K-SVect is a monoidal category in Example 6.6. To show that it is, in fact, a K-tensor category, first recall Example 6.11, which showed us that K-SVect is a rigid monoidal category. Now notice that for all A, B ∈ K-SVect, we define γA,B: A ⊗ B → B ⊗ A to be γA,B(a ⊗ b) = b ⊗ a, extended by linearity. With this definition, the diagrams in Definitions 6.5, 6.7, and 6.8 will all commute, thus K-SVect is a rigid symmetric braided monoidal category. Recall that L(V, W) is a K-vector space and the standard composition of maps is bilinear. Therefore K-SVect is a K-linear rigid symmetric braided monoidal category. Thus, we conclude that K-SVect is, in fact, a K-tensor category.

Definition 6.15 (Schur Functor). Let C be a K-tensor category. For X ∈ C, n ∈ N and a partition, λ, of n, the value of the Schur functor, denoted Sλ, on X is

Sλ(X) := (Vλ ⊗ X^{⊗n})^{Sn} := ( (1/n!) Σ_{g∈Sn} ρ(g) )(Vλ ⊗ X^{⊗n}),

where:

• the action ρ is the diagonal action arising from the standard action of Sn on Vλ and the action of Sn on X^{⊗n} described in Example 5.6, and
• (−)^{Sn} denotes the subspace of invariants under the action ρ.

Definition 6.16 (Subquotient). We say an object is a subobject if it sits within another object of the same category. A subquotient is a quotient object of a subobject.

Definition 6.17 (Finitely ⊗-Generated). A category, C, is finitely ⊗-generated if there exists X ∈ C such that every other object is isomorphic to a subquotient of a direct sum of objects X^{⊗n}. If X is such an object we call it a ⊗-generator.

Example 6.18. Some examples of finitely ⊗-generated categories are:
• The category K-SVect of finite-dimensional SVS, with the odd super vector space K as the ⊗-generator.
• The category Rep(G) of finite-dimensional representations of a finite group G.
• The category Rep(G) of finite-dimensional representations of an affine algebraic group G over K. Any faithful representation X of G is a ⊗-generator of this category. We will not consider this example in further detail as it is beyond the scope of this paper.

Definition 6.19 (Regular K-Tensor Category). A K-tensor category, C, is called regular if
• it is finitely ⊗-generated and
• for every object Y ∈ C there exists n ∈ N and a partition, λ, of n such that Sλ(Y) = 0.

7. H-Categories

In the following section, we take our previously defined super Hopf algebras, H, and look at and prove some properties held by the categories of finite-dimensional modules, H-Mod, and of finite-dimensional comodules, H-Comod. Lastly, we prove that the categories H-Mod and H∗-Comod are equivalent.

First, suppose (H, ∇, η, ∆, ε, S) is a Hopf algebra. We consider the category of finite-dimensional H-modules, denoted H-Mod. Now take M and N to be finite-dimensional H-modules. Notice that M ⊗ N is also an H-module when we define, for all h ∈ H with ∆(h) = Σ h1 ⊗ h2 and for all m ⊗ n ∈ M ⊗ N,

h(m ⊗ n) = ∆(h)(m ⊗ n) = (Σ h1 ⊗ h2)(m ⊗ n) = Σ h1m ⊗ h2n.

We continue by noticing that if M is an object of H-Mod then M∗ = HomK(M, K) is also an object of H-Mod with the following action. We define the action for all m ∈ M, F ∈ M∗ and h ∈ H via (hF)(m) = F(S(h)m).

Proposition 7.1. If (H, ∇, η, ∆, ε, S) is a Hopf algebra then the category of finite-dimensional H-modules is a K-linear rigid monoidal category.

Proof. Notice that H-Mod forms a monoidal category with the definition above and K as the identity object. To show rigidity, take any M ∈ H-Mod with basis B. Now we consider M∗ with the dual basis B∨ = {m∨ | m ∈ B} and define εM: M∗ ⊗ M → K for all f ⊗ m ∈ M∗ ⊗ M via

εM(f ⊗ m) = f(m),

extended by linearity. We define ηM: K → M ⊗ M∗ for all k ∈ K via

ηM(k) = k Σ_{m∈B} m ⊗ m∨.

With these definitions, one can prove that H-Mod is a rigid monoidal category. The proof is done analogously to Example 6.11 and is left to the reader. Lastly, the homomorphisms in H-Mod are module homomorphisms and therefore H-Mod is a K-linear category. Thus we know that H-Mod is a K-linear rigid monoidal category.

Proposition 7.2. If (H, ∇, η, ∆, ε, S) is a cocommutative Hopf algebra then H-Mod is a K-tensor category.

Proof. Let (H, ∇, η, ∆, ε, S) be a cocommutative Hopf algebra. We know from Proposition 7.1 that H-Mod is a K-linear rigid monoidal category. To show that H-Mod is a braided category, for any M, N ∈ H-Mod we define the map γM,N: M ⊗ N → N ⊗ M for all m ⊗ n ∈ M ⊗ N by

γM,N(m ⊗ n) = n ⊗ m, extended by linearity.

We need only show that γM,N is an H-module homomorphism. To do so, take any m ⊗ n ∈ M ⊗ N and h ∈ H, where ∆(h) = Σ h1 ⊗ h2, then consider

γM,N(h(m ⊗ n)) = γM,N(∆(h)(m ⊗ n)) = γM,N((Σ h1 ⊗ h2)(m ⊗ n))
= γM,N(Σ h1m ⊗ h2n) = Σ h2n ⊗ h1m
= (Σ h2 ⊗ h1)(n ⊗ m) = ∆(h)(γM,N(m ⊗ n)) = h(γM,N(m ⊗ n)).

Notice that the second-to-last equation is true since H is cocommutative. Therefore, we know that H-Mod is a braided monoidal category. Moreover, notice that γN,M ◦ γM,N = idM⊗N. Thus H-Mod is, in fact, a symmetric monoidal category. Finally, we can conclude that H-Mod is a K-tensor category.

Definition 7.3 (Super Comodule). Suppose (C, ∆, ε) is a super coalgebra over K. A right super comodule over C is a SVS, M, together with ρ: M → M ⊗ C, called the coaction on M, such that ρ is even and the following diagrams commute:

(idM ⊗ ∆) ◦ ρ = (ρ ⊗ idC) ◦ ρ and (idM ⊗ ε) ◦ ρ = idM.

In other words, for all m ∈ M, (idM ⊗ ∆) ◦ ρ(m) = (ρ ⊗ idC) ◦ ρ(m) and (idM ⊗ ε) ◦ ρ(m) = idM(m), where we are identifying M ⊗ K ≅ M. A left comodule is defined similarly to a right comodule. We denote the category of C-comodules by C-Comod.

Lemma 7.4. Let H and M both be finite-dimensional. The map ω: H ⊗ M∗ → HomK(M, H) defined for all Σ hi ⊗ fi ∈ H ⊗ M∗ and m ∈ M via ω(Σ hi ⊗ fi)(m) = Σ fi(m)hi is an isomorphism.

Proof. This proof is left to the reader. To sketch the proof: first notice that dim(H ⊗ M∗) = dim(HomK(M, H)), and then it suffices to show that ω is injective or surjective.
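In coordinates, the identification ω of Lemma 7.4 is the familiar fact that every matrix is a sum of rank-one outer products: ω(h ⊗ f) is the map m ↦ f(m)h, whose matrix is the outer product of h with f. A numerical sketch (over K = R, dimensions chosen arbitrarily, not from the paper):

```python
# Sketch: omega(h (x) f) is the rank-one linear map m -> f(m) h, i.e. the outer
# product h f^T; sums of such maps give every element of Hom(M, H) (Lemma 7.4).
import numpy as np

rng = np.random.default_rng(4)
dim_H, dim_M = 3, 2
h, f, m = rng.standard_normal(dim_H), rng.standard_normal(dim_M), rng.standard_normal(dim_M)

omega_hf = np.outer(h, f)                      # the matrix of omega(h (x) f)
print(np.allclose(omega_hf @ m, (f @ m) * h))  # True: omega(h (x) f)(m) = f(m) h
```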

In what follows, we will view the map ω of Lemma 7.4 as an identification.

Proposition 7.5. Fix some finite-dimensional cocommutative super Hopf algebra, (H, ∇, η, ∆, ε, S). The category H-Comod is then a K-tensor category.

Proof. Suppose (H, ∇, η, ∆, ε, S) is a finite-dimensional cocommutative super Hopf algebra. Notice that the morphism spaces of H-Comod are K-vector spaces and therefore H-Comod is a K-linear category. We define the coaction on tensor products of comodules for all M, N ∈ H-Comod with coactions ρM and ρN. Now M ⊗ N is also an H-comodule with the coaction ρM⊗N defined as the composition

M ⊗ N → H ⊗ M ⊗ H ⊗ N → H ⊗ H ⊗ M ⊗ N → H ⊗ M ⊗ N

of the maps ρM ⊗ ρN, idH ⊗ µ ⊗ idN, and ∇ ⊗ idM⊗N, where µ is the flip operator. With such a definition of the coaction on tensor products of comodules, H-Comod is a monoidal category. Now H-Comod is inherently a symmetric braided category. The proof that it is symmetric and braided is left to the reader and is done similarly to Proposition 7.2.

To show that H-Comod is rigid, first let (M, ρM) be an object of H-Comod. Notice that (M∗, ρM∗) is an object of H-Comod when ρM∗ is defined for all f ∈ M∗ and all m ∈ M via

ρM∗(f)(m) = (S ⊗ f) ◦ ρM(m) = Σ S(hi) ⊗ f(mi),

where ρM(m) = Σ hi ⊗ mi. Now, to show that any comodule, M, is left rigid, since M∗ is an object of H-Comod we define εM: M ⊗ M∗ → K for all simple tensors m ⊗ f ∈ M ⊗ M∗ via εM(m ⊗ f) = f(m) and extend by linearity. A similar map can be defined to show that M is right rigid. Thus we then know that H-Comod is also rigid and therefore a K-tensor category.

Proposition 7.6. Fix some finite-dimensional super Hopf algebra (H, ∇, η, ∆, ε, S). Then the categories H-Mod and H∗-Comod are equivalent.

Proof. Fix some finite-dimensional super Hopf algebra, (H, ∇, η, ∆, ε, S). We define the functors F: H-Mod → H∗-Comod and G: H∗-Comod → H-Mod via F(M) = M∗ and G(N) = N∗, for all M ∈ H-Mod and all N ∈ H∗-Comod. Similarly, for all f ∈ Hom(H-Mod) and all g ∈ Hom(H∗-Comod) we define F(f) = f∗ and G(g) = g∗. To check that both F and G are well defined, take (M, σ) ∈ H-Mod and consider (M∗, σ∗). Notice that taking the dual of M and the transpose of σ yields the following diagrams:

These are precisely the diagrams of Definition 7.3 for the coaction σ∗: M∗ → M∗ ⊗ H∗, obtained by dualizing the module axioms for σ: M ⊗ H → M. Thus F(M) ∈ H∗-Comod and, by the same process, G(N) ∈ H-Mod for N ∈ H∗-Comod. We know that taking the dual respects composition and that the dual of the identity is the identity on the dual space. Thus F and G are, in fact, functors. Furthermore, we also know that for any finite-dimensional vector space, M, and any linear map, f, we naturally have M ≅ M∗∗ and f ≅ f∗∗. Thus H-Mod and H∗-Comod are equivalent.

8. Supergroups

In our final section, we take all the concepts looked at throughout this paper and bring them together to define supergroups and, finally, super-representations. With these defined, we show that categories of super-representations are K-tensor categories. Lastly, we present Deligne’s Theorem on Tensor Categories.

One source of Hopf algebras is the Hopf algebra O(G) of functions on a group G. Inspired by this, we use the associated terminology for an arbitrary super Hopf algebra.

• Similarly, the category Rep(G) of G-super modules is the category of super comod- ules over O(G).

• Any element of G0 is an algebra homomorphism O(G) → K.

Definition 8.2 (Parity Involution). Given a superalgebra, its parity involution is the algebra automorphism which maps each homogeneous element, g, to (−1)^{p(g)}g.

Consider σ ∈ G0. If G is a standard group then σ² = 1 makes sense, but if G is a supergroup then there is more to define to make sense of σ² = 1. We say that the map σ satisfies σ² = 1 if for all h ∈ O(G)

(σ ⊗ σ) ◦ (∆(h)) = ε(h).

Now we define the inner automorphism, on a supergroup G, induced by σ as the map O(G) → O(G) defined via

(σ ⊗ idO(G) ⊗ (σ ◦ S)) ◦ (∆ ⊗ idO(G)) ◦ ∆.

The inner automorphism from above corresponds to the parity involution if for all homogeneous h ∈ O(G)

(σ ⊗ idO(G) ⊗ (σ ◦ S)) ◦ (∆ ⊗ idO(G)) ◦ ∆(h) = (−1)^{p(h)} h.

Definition 8.3 (Inner Parity). An inner parity of a supergroup G is an element σ ∈ G0 such that σ² = 1 and the inner automorphism of G induced by σ is the parity involution.

Let G be a supergroup and σ ∈ G0 be an inner parity. For (V, ρ) ∈ Rep(G), we have the endomorphism, σρ, of V given by the composition

V → O(G) ⊗ V → K ⊗ V ≅ V,

where the first map is ρ and the second is σ ⊗ idV.

Definition 8.4 (Super-Representation). Given a supergroup G and an inner parity σ, Rep(G, σ) is the category whose objects are the objects, (V, ρ), of Rep(G) such that σρ is the parity endomorphism (i.e. for all homogeneous v ∈ V, (σ ⊗ idV) ◦ ρ(v) = (−1)^{p(v)} v).

Proposition 8.5. Let G be a supergroup and σ be an inner parity. The category Rep(G, σ) is a K-tensor category.

Proof. Let G be a supergroup and let σ: O(G) → K be an inner parity. Consider the category Rep(G, σ). By Proposition 7.2 the category Rep(G) is a K-tensor category. If we can show that Rep(G, σ) is closed under tensor product and that it is also rigid, then it is also a K-tensor category. Take any two M, N ∈ Rep(G, σ). We know that the action of σ on both M and N is the parity endomorphism. Take any m ∈ M and n ∈ N. We know that

(σ ⊗ idM) ◦ ρM(m) = (σ ⊗ idM)(Σ h′i ⊗ mi) = (−1)^{p(m)} m and

(σ ⊗ idN) ◦ ρN(n) = (σ ⊗ idN)(Σ h″j ⊗ nj) = (−1)^{p(n)} n,

where ρM(m) = Σ h′i ⊗ mi and ρN(n) = Σ h″j ⊗ nj. Now we consider the H-comodule M ⊗ N, where H = O(G). Since σ is an algebra homomorphism, we have that σ(∇(h1 ⊗ h2)) = σ(h1)σ(h2). Thus we have, for simple tensors,

(σ ⊗ idM⊗N) ◦ ρM⊗N(m ⊗ n) = (σ ⊗ idM⊗N) ◦ (∇ ⊗ idM⊗N) ◦ (idH ⊗ µ ⊗ idN)(ρM(m) ⊗ ρN(n))
= (σ ⊗ idM⊗N)((∇ ⊗ idM⊗N)(Σ h′i ⊗ h″j ⊗ mi ⊗ nj))
= (σ ⊗ idM⊗N)(Σ h′i h″j ⊗ mi ⊗ nj)
= Σ σ(h′i)σ(h″j) ⊗ mi ⊗ nj
= (−1)^{p(m)}m ⊗ (−1)^{p(n)}n = (−1)^{p(m⊗n)} m ⊗ n.

We now extend this by linearity, and we then know that M ⊗ N is in Rep(G, σ). Thus Rep(G, σ) is a K-linear symmetric braided monoidal category. To show that Rep(G, σ) is rigid, we need only show that if M ∈ Rep(G, σ) then M∗ ∈ Rep(G, σ). Consider the following commutative diagram:

The diagram is assembled from ρ, idO(G) ⊗ ρ, ∆ ⊗ idM, σ ⊗ (σ ◦ S) ⊗ idM, and ε ⊗ idM, and it expresses that applying the action of σ and then the action of σ ◦ S to M gives back idM.

Notice that the action of σ on M (i.e. (σ ⊗ idM) ◦ ρ) is the inverse of the action of σS on M (i.e. (σS ⊗ idM) ◦ ρ). Since the parity involution is its own inverse and σ acts by the parity involution, σS also acts by the parity involution. Knowing this, we now consider the dual comodule (M∗, ρ∗), where ρ∗: M∗ → O(G) ⊗ M∗ ≅ HomK(M, O(G)) is defined as ρ∗(f) = (S ⊗ f) ◦ ρ for all f ∈ M∗. Finally, we compute the action of σ on M∗:

f ↦ ρ∗(f) = (S ⊗ f) ◦ ρ ↦ σ ◦ (S ⊗ f) ◦ ρ = (σS ⊗ f) ◦ ρ = f ◦ (σS ⊗ idM) ◦ ρ,

where the first arrow is ρ∗ and the second is σ ⊗ idM∗ (under the identification of Lemma 7.4).

We already know that (σS ⊗ idM) ◦ ρ acts as the parity involution, thus it is easy to check that the action of σ sends each homogeneous f to (−1)^{p(f)} f. Thus we have shown that if M ∈ Rep(G, σ) then so is M∗. Therefore, we know that Rep(G, σ) is rigid. Furthermore, we can conclude that Rep(G, σ) is then a K-tensor category.

The following theorem is the primary motivation for this paper. It is referred to as Deligne’s Theorem on Tensor Categories and was first proven by Deligne in [2].

Theorem 8.6. Every regular K-tensor category is equivalent to one of the form Rep(G, σ), for some supergroup G and some inner parity σ.

Deligne’s Theorem on Tensor Categories highlights the importance of supergroups in both math and physics. In particular, regular tensor categories are ubiquitous in these subjects, and Deligne’s Theorem tells us that we lose no generality in assuming that such categories are categories of representations of supergroups.

References

[1] Sterling K. Berberian. Linear algebra. Dover Publications, Inc., Mineola, NY, 2014. Reprint of the 1992 original, with new errata and comments.
[2] P. Deligne. Catégories tensorielles. Mosc. Math. J., 2(2):227–248, 2002. Dedicated to Yuri I. Manin on the occasion of his 65th birthday.
[3] Pavel Etingof, Shlomo Gelaki, Dmitri Nikshych, and Victor Ostrik. Tensor categories, volume 205 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2015.

[4] William Fulton. Young tableaux, volume 35 of London Mathematical Society Student Texts. Cambridge University Press, Cambridge, 1997. With applications to representation theory and geometry.
[5] Saunders Mac Lane. Categories for the working mathematician, volume 5 of Graduate Texts in Mathematics. Springer-Verlag, New York, second edition, 1998.