<<

TENSOR CATEGORIES, Z+-RINGS, AND GROTHENDIECK RINGS

BY

WOO HYUNG LEE

A Thesis Submitted to the Graduate Faculty of

WAKE FOREST UNIVERSITY GRADUATE SCHOOL OF ARTS AND SCIENCES

in Partial Fulfillment of the Requirements

for the Degree of

MASTER OF ARTS

Mathematics & Statistics

May, 2018

Winston-Salem, North Carolina

Approved By:

Frank Moore, Ph.D., Advisor

Ellen Kirkman, Ph.D., Chair

Hugh Howards, Ph.D.

Acknowledgements

The goal of this expository thesis is to provide a detailed explanation of the role of Grothendieck rings in understanding tensor categories; it is heavily based on the monograph “Tensor Categories” by Etingof et al.

The work would not have been possible without the guidance and supervision of my advisor William F. Moore, and of Ellen Kirkman and Hugh Howards. I am deeply grateful for the help I received in writing this thesis, and I also thank my fellow graduate students for supporting me in this endeavor.

Table of Contents

Acknowledgements

Abstract

Chapter 1  Preliminary Materials
  1.1  Abelian Categories and Exact Sequences
    1.1.1  Additive Categories
    1.1.2  Abelian Categories
    1.1.3  Exact Sequences and the Grothendieck Group

Chapter 2  Monoidal Categories
  2.1  Monoidal Categories and Monoidal Functors
    2.1.1  Monoidal Categories
    2.1.2  Monoidal Functors
    2.1.3  The Category of G-graded Vector Spaces and Cocycles
    2.1.4  Group Actions on Categories
    2.1.5  Duals in Monoidal Categories

Chapter 3  Tensor Categories, Grothendieck Rings, and Z+-Rings
  3.1  Tensor Categories and Grothendieck Rings
  3.2  Z+-Rings
  3.3  Gr(C) and Z+-Rings

Chapter 4  Conclusion

Vita

Abstract

Woo Hyung Lee

In this paper, we will define and prove various properties of abelian categories, monoidal categories, and tensor categories. Then, we will show how to define a ring that represents a tensor category, which will be called the Grothendieck ring, and prove that it constitutes a very specific type of ring called a Z+-ring.

Chapter 1: Preliminary Materials

1.1 Abelian Categories and Exact Sequences

1.1.1 Additive Categories

What is a category? Informally, one can think of a category simply as an algebraic structure consisting of objects with arrows between them. At first it may seem that categories are redundant, as we already have well-defined notions of sets and functions between them. However, unlike sets, which come with the inherent problem of the set of all sets, categories deal with arbitrary objects that may be larger than sets. In fact, when the objects of a category and the morphisms between them form sets, we say that the category is small, and large otherwise.

Now that we have a basic notion of what categories are, let us define them formally.

Definition 1.1.1. A category C consists of the following:

• A class Ob(C) of objects

• A class Hom(C) of morphisms between objects

• For any three objects A, B, C in C, there exists a binary operation

◦ : Hom(A, B) × Hom(B,C) → Hom(A, C),

denoted the composition of morphisms.

and satisfies the following axioms:

• For every object A in C, there exists a morphism idA : A → A, denoted the

identity morphism of A, and for any object X, Y in C, and for any morphism

f : X → A, g : A → Y , idA ◦ f = f and g ◦ idA = g.

• For any four objects A, B, C, and D in C, if f : A → B, g : B → C, and

h : C → D, then h ◦ (g ◦ f) = (h ◦ g) ◦ f.

Some elementary but very important examples of categories are the categories Ab and Set. Ab is the category of abelian groups: its objects are abelian groups, and its morphisms are group homomorphisms. Recall that any abelian group can be thought of as a module over the ring of integers Z, and any Z-module is an abelian group with respect to its additive operation. Hence it can be useful to think of Ab as the category of modules over Z.

Similarly, the category Set is a category with sets as objects and functions between sets as morphisms. From the definition one can easily see that if we forget the abelian group structure in the category Ab, any object of Ab becomes a set and any morphism becomes a function between sets. What has been described here is an example of what is called a functor, and this particular functor is called the forgetful functor, as we are “forgetting” some structure on our objects and morphisms.

There is an object of particular importance when exploring categories, called the zero object. A zero object 0 in a category C is an object that is both an initial object and a terminal object. An initial object of C is an object I such that for every object A ∈ C, there exists a unique morphism i : I → A. Similarly, a terminal object of C is an object T such that for every object A ∈ C, there exists a unique morphism t : A → T .

To give you an example of a category where there are no zero objects, consider Set, the category of sets. The morphisms in this category are precisely functions between sets, and we can see that the only initial object of Set is the empty set, whereas every singleton set is a terminal object.

Now, let us think about the category of vector spaces over a field K. In this category, the objects are vector spaces, and the morphisms are homomorphisms.

Let V and W be two such vector spaces, and let φ, ψ : V → W be two vector space homomorphisms between V and W . Then, φ + ψ is another vector space homomorphism between V and W , and it is easy to see that the set of all vector space homomorphisms between V and W is an abelian group under addition of homomorphisms. Hence, we can say that Hom(V, W ) is an object in Ab, and this is precisely what it means for the category to be enriched over Ab.

Definition 1.1.2. Let D be a category. We say that a category C is enriched over D if for each pair of objects X, Y in ob(C), Hom(X,Y ) ∈ D.

For this paper, we will primarily be focusing on Ab-enriched categories. But now, let

us formally explore the notion of functors that was briefly mentioned above.

Definition 1.1.3. Let C and D be categories. A functor F : C → D is a map with the following properties:

• For each object X ∈ C, there exists an object Y ∈ D such that F (X) = Y .

• For every morphism f : X → Y in HomC(X,Y ), there exists a morphism g :

F (X) → F (Y ) in HomD(F (X),F (Y )), such that F (f) = g with the following

properties:

(a) For every object X ∈ C, F (idX ) = idF (X).

(b) For every pair of morphisms f : X → Y and g : Y → Z in Hom(C),

F (g ◦ f) = F (g) ◦ F (f).

The above definition technically defines what are called covariant functors. We also have contravariant functors, which are constructed similarly but with the arrows reversed. For example, if C is a locally small category (i.e. HomC(X, Y ) is a set for all X, Y ), then Hom(A, −) : C → Set is a covariant functor, whereas Hom(−, A) : C → Set is a contravariant functor.

We have thus far explored the basic notions of categories and functors, so let us go ahead and define additive categories.

Definition 1.1.4. An additive category C is a category that satisfies the following:

• Every set HomC(X, Y ) has the structure of an abelian group such that the composition of morphisms is biadditive with respect to this structure (Ab-enriched). Note that Hom(−, −) is actually a set here, as it must be an object in Ab.

• There exists a zero object 0 in C.

• For any two objects X,Y ∈ C, there exists an object Z ∈ C and morphisms

iX : X → Z, iY : Y → Z, pX : Z → X, and pY : Z → Y , such that

pX ◦ iX = idX , pY ◦ iY = idY , and (iX ◦ pX ) + (iY ◦ pY ) = idZ . (Direct Sums)

A small Ab-enriched category is called a ringoid: it is a collection with two binary operations + and × such that a(b + c) = ab + ac and (b + c)a = ba + ca whenever the operations are defined. Hence the set Hom(C) of morphisms in a small additive category C, with its abelian group structures and with composition of morphisms as the multiplicative operation, becomes a ringoid. It is also easy to see that an Ab-enriched category R with precisely one object is a ring.

Lastly, notice that the third property of an additive category precisely refers to the direct sum (biproduct). Since there must exist an object Z as defined above in C for every two objects X, Y ∈ C, we have that any additive category C must be equipped with a bifunctor ⊕ : C × C → C, and we denote Z = X ⊕ Y .

A functor F : C → D between two additive categories C and D is called additive if the maps

HomC(X, Y ) → HomD(F (X), F (Y )), X, Y ∈ C,

are homomorphisms of abelian groups.

Hence for any two morphisms f, g : X → Y , F (f + g) = F (f) + F (g). Now, from this definition and the definition of direct sums, we can easily obtain the following proposition.

Proposition 1.1.5. Let C and D be additive categories. Then, for any additive functor F : C → D, there exists a natural isomorphism F (X) ⊕ F (Y ) ' F (X ⊕ Y ).

We are almost ready to define abelian categories, but before that, we must define the kernel and cokernel of a morphism in an additive category.

Definition 1.1.6. Let C be an additive category and let f : X → Y be a morphism in C. Then the kernel of f, denoted Ker(f), is an object K with a morphism k : K → X such that fk = 0, and such that if k′ : K′ → X with fk′ = 0 exists, then there exists a unique morphism l : K′ → K with kl = k′.

Similarly, the cokernel of f, denoted Coker(f), is an object C with a morphism c : Y → C such that cf = 0, and such that if c′ : Y → C′ with c′f = 0 exists, then there exists a unique morphism r : C → C′ with rc = c′.

If Ker(f) and Coker(f) exist, they are unique up to a unique isomorphism.

Now we are ready to define an abelian category.

1.1.2 Abelian Categories

Definition 1.1.7. An abelian category is an additive category C such that for every morphism φ : X → Y , there exists a sequence

K →k X →i I →j Y →c C

where:

(a) j ◦ i = φ.

(b) (K, k) = Ker(φ), and (C, c) = Coker(φ).

(c) (I, i) = Coker(k), and (I, j) = Ker(c).

To illustrate the sequence in the above definition more concretely, let V and W be vector spaces over a field K, and let φ : V → W be a linear transformation. Then the above sequence becomes

ker(φ) →iker(φ) V → im(φ) →iim(φ) W →c W/im(φ),

where iker(φ) is the embedding of ker(φ) in V , the second map is the surjective linear transformation from V to im(φ) induced by φ, iim(φ) is the embedding of im(φ) in W , and c is the surjective map taking w ∈ W to w + im(φ) (the quotient map).
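As a concrete numerical illustration (a small sketch we add here, not part of the thesis; the particular matrix is an arbitrary choice), the dimensions appearing in this canonical decomposition can be read off from the rank of a matrix, since dim ker(φ) + dim im(φ) = dim V and dim coker(φ) = dim W − dim im(φ):

    import numpy as np

    # A linear map phi : R^4 -> R^3, given by an illustrative matrix.
    phi = np.array([[1., 0., 2., 0.],
                    [0., 1., 3., 0.],
                    [1., 1., 5., 0.]])

    rank = np.linalg.matrix_rank(phi)       # dim im(phi)
    dim_ker = phi.shape[1] - rank           # dim ker(phi), by rank-nullity
    dim_coker = phi.shape[0] - rank         # dim coker(phi) = dim W/im(phi)

    print(dim_ker, rank, dim_coker)         # 2 2 1: ker -> V -> im -> W -> coker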

A more traditional way to define an abelian category is as an additive category that has all kernels and cokernels, in which all monomorphisms (injections) and epimorphisms (surjections) are normal, i.e. every monomorphism is a kernel of some morphism, and every epimorphism is a cokernel of some morphism. Notice that Definition 1.1.7 encodes this information via the illustrated sequence, which is called the canonical decomposition of a morphism.

Another important concept in abelian categories is the notion of pushouts and pullbacks.

Definition 1.1.8. Let C be an abelian category, and let X, Y, Z be objects in C. Let f : Z → X and g : Z → Y be two morphisms. Then the pushout of f and g is an object P together with two morphisms i1 : X → P and i2 : Y → P such that i1 ◦ f = i2 ◦ g, and such that for any other triple (Q, j1, j2) with j1 ◦ f = j2 ◦ g there exists a unique u : P → Q with u ◦ i1 = j1 and u ◦ i2 = j2. If such a (P, i1, i2) exists, it is unique up to a unique isomorphism.

A pullback (P, i1, i2) is similarly defined by reversing the arrows in the above diagrams.

The following theorem gives us a more intuitive way of thinking about abelian categories.

Theorem 1. (Mitchell) Every abelian category is equivalent, as an additive category, to a full subcategory of the category of left modules over an associative unital ring A.

A full subcategory of a category C is defined as:

Definition 1.1.9. Let C be a category. Then, a full subcategory S of C is given by

(a) A subcollection of Ob(C), denoted Ob(S).

(b) A subcollection of Hom(C), denoted Hom(S).

(c) For every X ∈ S, idX ∈ Hom(S).

(d) For every f : X → Y in Hom(S), X, Y are in Ob(S).

(e) For every pair of morphisms f, g ∈ Hom(S), if f ◦ g is defined, then f ◦ g ∈

Hom(S).

(f) For every pair of objects X,Y ∈ S, HomS (X,Y ) = HomC(X,Y ).

1.1.3 Exact Sequences and the Grothendieck Group

Definition 1.1.10. Consider a sequence of morphisms, S, in an abelian category:

· · · → Xi−1 →fi−1 Xi →fi Xi+1 → · · ·

• We say S is exact in degree i if Im(fi−1) = Ker(fi).

• We say S is exact if it is exact in every degree.

• An exact sequence of the form 0 → X →f Y →g Z → 0 is called a short exact sequence.

In the above short exact sequence, notice that f must be an injective morphism, and g must be a surjective morphism. Hence it is helpful to think of X as a subobject of Y , f as an embedding of X into Y , and similarly Z as the corresponding quotient object Y/X, i.e. Coker(f).

We’ve looked at short exact sequences, and as one might expect, there are also long exact sequences, defined as exact sequences with more than three (often infinitely many) non-zero terms.

A natural question that arises then is if we have a short exact sequence, how do we construct a long exact sequence? In order to answer this question, we will begin by defining Ext.

Let R be a ring, and let M,N be left R-modules. Then, a projective resolution of M is an exact sequence

· · · → P1 → P0 → M → 0,

10 where each Pi is a projective R-module. Applying the left-exact contravariant functor

Hom(−,N), we get the complex

0 → Hom(P0,N) → Hom(P1,N) → · · · .

Let di be the map Hom(Pi−1, N ) → Hom(Pi, N ) in the above complex, and let Hi = Ker(di+1)/Im(di) be the i-th cohomology. Then, we define Exti(M, N ) = Hi.

Proposition 1.1.11. Let 0 → A → B → C → 0 be a short exact sequence. Then, there is a long exact sequence given by

0 → Hom(C,N) → Hom(B,N) → Hom(A, N) →

Ext1(C,N) → Ext1(B,N) → Ext1(A, N) → Ext2(C,N) → · · · .

It is easy to see that Ext0(M, N ) = Hom(M, N ). Since Hom(−, N ) is a left exact functor, if we take the projective resolution of M, then 0 → Hom(M, N ) →α Hom(P0, N ) →β Hom(P1, N ) is exact. This implies that Ker(β) = Ext0(M, N ) is equal to Im(α); but α is injective, hence Im(α) ' Hom(M, N ), and we have the claim.

Another interesting fact is that Exti(M, N ) = 0 for every i > 0 if M is projective. This is because, if M is projective, then M admits a finite projective resolution of the form 0 → M →idM M → 0, and applying Hom(−, N ) to this resolution yields Hi = 0 for all i > 0. There is a similar result for injective modules as well.
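As a small illustrative computation (our own sketch, not part of the thesis; the function name ext_orders is ours), take R = Z, M = Z/nZ and N = Z/mZ. The projective resolution 0 → Z →n Z → Z/nZ → 0 (multiplication by n) gives, after applying Hom(−, Z/mZ), the complex 0 → Z/mZ →n Z/mZ → 0, so Ext0 is the kernel and Ext1 is the cokernel of multiplication by n; both turn out to be cyclic of order gcd(n, m):

    from math import gcd

    def ext_orders(n, m):
        amb = list(range(m))                   # Hom(Z, Z/mZ) = Z/mZ
        kernel = [x for x in amb if (n * x) % m == 0]
        image = {(n * x) % m for x in amb}
        ext0 = len(kernel)                     # |Ext^0| = |Hom(Z/nZ, Z/mZ)|
        ext1 = m // len(image)                 # |Ext^1| = |coker(multiplication by n)|
        return ext0, ext1

    print(ext_orders(4, 6))                    # (2, 2): both are Z/gcd(4,6)Z
    print(ext_orders(3, 5))                    # (1, 1): the Ext groups vanish
    assert ext_orders(4, 6) == (gcd(4, 6), gcd(4, 6))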

Definition 1.1.12. Let S : 0 → X → Z → Y → 0 and S′ : 0 → X → Z′ → Y → 0 be short exact sequences. The sequence S is called an extension of Y by X. A morphism f from S to S′ is a morphism f : Z → Z′ that restricts to idX and induces idY . The set of exact sequences 0 → X → Z → Y → 0 is denoted Ext(Y, X) and is called the set of extensions of Y by X.

Proposition 1.1.13. There exists a 1 − 1 correspondence between the isomorphism classes of extensions of X by Y and Ext1(X,Y ).

Proof. Let 0 → N →α P → X → 0 be a short exact sequence such that P is projective.

By applying the functor Hom(−,Y ) and truncating, we get the exact sequence

Hom(P,Y ) → Hom(N,Y ) →δ Ext1(X,Y ) → Ext1(P,Y ).

Since P is projective, we have that Ext1(P,Y ) = 0, and hence we obtain the exact sequence

Hom(P,Y ) → Hom(N,Y ) →δ Ext1(X,Y ) → 0.

Because the sequence is exact, notice that δ must be surjective. Now, let σ ∈

Ext1(X,Y ). Then, as δ is surjective, there must be a β ∈ Hom(N,Y ) such that

δ(β) = σ.

Take Z to be the pushout of α and β. In an abelian category, all pushouts exist, and to be more precise, Z is the cokernel of the map N → P ⊕Y , defined by n 7→ α(n)−β(n).

Hence we get the commuting diagram below, with exact rows, where the map ε : Z → X is induced by the map π : P → X and the zero map Y → X, and where the vertical maps are β : N → Y , i : P → Z, and idX :

0 → N →α P →π X → 0

0 → Y →j Z →ε X → 0        (S)

Specifically, we define the induced map ε : Z → X by choosing an element p + y ∈ P ⊕ Y that maps to z ∈ Z, and setting ε(z) = π(p). If p + y and p′ + y′ both map to z, then p − p′ + y − y′ is in the image of the map N → P ⊕ Y , which implies that π(p) = π(p′), and hence ε is well-defined. If z is in Ker(ε), then z = p + y where π(p) = 0. Hence p = α(n) for some n ∈ N , and by how we defined Z, p + y = p + y − (α(n) − β(n)) = y + β(n) = y′, and y′ is clearly an element of Im(j). Hence Ker(ε) is contained in Im(j). On the other hand, because the map Y → X is the zero map, Im(j) must be contained in the kernel of ε : Z → X. Hence S is exact at Z. Since Z is the cokernel of the map N → P ⊕ Y , Ker(j) must be trivial, and hence S is exact at Y . Lastly, the map P → X is a surjection, and as the diagram commutes, the map ε : Z → X must also be a surjection. Hence S is exact at X, and we have that S is a short exact sequence.

Now, let φ be a map from the isomorphism classes of extensions of X by Y to Ext1(X, Y ). By the construction of S, we see that for any σ ∈ Ext1(X, Y ) we can find a short exact sequence S, so φ is onto.

13 Given an arbitrary exact sequence 0 → Y → Z → X → 0 and an exact sequence

0 → N → P → X → 0 where P is projective, we can define a map from P → Z by projectivity of P , and then define a corresponding map N → Y using the exactness property. Hence this shows that for any short exact sequence we can find a β ∈

Hom(N,Y ).

Finally, suppose that there exist β ≠ β′ such that δ(β) = δ(β′) = σ ∈ Ext1(X, Y ). Then, since δ(β′ − β) = δ(β′) − δ(β) = 0, we have β′ − β ∈ Ker(δ), so there exists an f ∈ Hom(P, Y ) such that f ◦ α = β′ − β. Consider the following diagrams.

the square with top row N →α P , left map β + fα : N → Y , right map i + jf : P → Z, and bottom row j : Y → Z;

the square with top row N →α P , left map β = β′ − fα : N → Y , right map i′ − j′f : P → Z′, and bottom row j′ : Y → Z′.

The first diagram is obtained by adding fα to β and jf to i in the pushout diagram of Z, and the second is obtained by subtracting fα from β′ and j′f from i′ in the pushout diagram of Z′. Applying the universal property of pushouts to these diagrams yields that the composition Z′ → Z → Z′ is idZ′ , and a similar construction yields that Z → Z′ → Z is idZ . Hence Z ' Z′, and combining all of the previous parts, we obtain the following diagram.

0 → Y →j′ Z′ →ε′ X → 0

0 → Y →j Z →ε X → 0

with vertical maps idY , the isomorphism Z ' Z′ constructed above, and idX . One can easily check that the left square commutes by applying the universal property of pushouts, and the right square commutes since for any x ∈ X we can find a p ∈ P with ε(i(p)) = ε′(i′(p)) = x, and the isomorphism we constructed sends i(p) to i′(p).

There is an important example to look at for the purpose of this paper. Let G be a group and A be an abelian group with a G-action. Then, A is a G-module, and we have the following definition.

Definition 1.1.14. The i-th cohomology group of G with coefficients in A is defined as the group ExtiG(Z, A), computed in the category of G-modules, where G acts on Z trivially, and is denoted Hi(G, A).

Definition 1.1.15. The bar resolution of Z in the category of G-modules is the resolution

· · · → Pi → · · · → P1 →∂1 P0 → Z → 0

with the following properties:

• Pi := ZG^(i+1), where the group action is given by g(g0, g1, · · · , gi) = (gg0, g1, · · · , gi).

• ∂i : Pi → Pi−1 is defined by

∂i(g0, · · · , gi) = (−1)^0 (g0g1, g2, · · · , gi) + · · · + (−1)^(i−1) (g0, g1, · · · , gi−1gi) + (−1)^i (g0, · · · , gi−1).

If we apply the contravariant functor HomG(−,A) to the bar resolution, the induced map di : HomG(Pi−1,A) → HomG(Pi,A) is given by

di(f)(g1, · · · , gi) = (−1)^0 g1f(g2, · · · , gi) + (−1)^1 f(g1g2, · · · , gi) + · · · + (−1)^(i−1) f(g1, · · · , gi−1gi) + (−1)^i f(g1, · · · , gi−1).

Definition 1.1.16. For the above complex 0 → HomG(P0, A) →d1 HomG(P1, A) →d2 · · · , the i-th cocycles and the i-th coboundaries are given by Ker(di+1) and Im(di), and are denoted Zi(G, A) and Bi(G, A) respectively.

By Definition 1.1.14, we now have an alternate description of Hi(G, A), given by

Hi(G, A) := Zi(G, A)/Bi(G, A).
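As a quick sanity check (a computational sketch added here, not part of the thesis; the helper d and the choices G = Z/3Z, A = Z with trivial action are ours), one can verify numerically that the inhomogeneous coboundary maps determined by the formula above square to zero, so the coboundaries really are contained in the cocycles:

    from itertools import product
    import random

    m = 3
    G = range(m)                               # Z/mZ, written additively

    def d(f, n):
        """Send an n-cochain f : G^n -> Z to the (n+1)-cochain d(f) (trivial action)."""
        def df(*g):                            # g has n+1 entries
            total = f(*g[1:])                  # first face
            for k in range(1, n + 1):          # inner faces: merge g_k and g_{k+1}
                merged = g[:k-1] + (((g[k-1] + g[k]) % m),) + g[k+1:]
                total += (-1) ** k * f(*merged)
            total += (-1) ** (n + 1) * f(*g[:-1])   # last face
            return total
        return df

    random.seed(0)
    table = {t: random.randint(-5, 5) for t in product(G, repeat=2)}
    f2 = lambda a, b: table[(a, b)]            # a random 2-cochain
    ddf = d(d(f2, 2), 3)                       # should be the zero 4-cochain
    assert all(ddf(*t) == 0 for t in product(G, repeat=4))
    print("d o d = 0 verified on a random 2-cochain")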

We will conclude this section with an overview of (local) finiteness of abelian categories.

Definition 1.1.17. Let K be a field. Then, a K-linear abelian category C is locally finite if

(a) For every X,Y ∈ C, HomC(X,Y ) is finite dimensional.

(b) Every object X in C has finite length, i.e. there exists a filtration 0 = X0 ⊂ X1 ⊂ · · · ⊂ Xn = X such that each Xi/Xi−1 is simple (has no non-trivial proper subobjects).

Here, we also define injectivity and surjectivity for additive functors.

16 Definition 1.1.18. Let C, D be locally finite abelian categories. An additive functor F : C → D is injective if F is bijective on the sets of morphisms (often denoted fully faithful), and is surjective if every simple object of D is a subquotient of F (X) for some X ∈ C.

In a locally finite abelian category, we have an important corollary given by Schur’s Lemma.

Lemma 1.1.19. (Schur’s Lemma) Let C be an abelian category, and X,Y be simple. Then, any nonzero morphism f : X → Y is an isomorphism. If X is not isomorphic to Y , then HomC(X,Y ) = 0, and HomC(X,X) is a division algebra.

Corollary 1.1.20. Suppose K is an algebraically closed field, and let C be a locally finite abelian category over K. Then, for any simple X ∈ C, HomC(X, X) = K.

Proof. By Schur’s Lemma, we have that HomC(X, X) is a division algebra over K. Since C is locally finite, H = HomC(X, X) is finite dimensional. Let h ∈ H. Then, as H is finite dimensional, there must be a nonzero polynomial f over K such that f(h) = 0; suppose f is of minimal degree among all such polynomials. Because K is algebraically closed, f factors as f(x) = (x − k)g(x) for some k ∈ K and some polynomial g. By minimality of the degree of f, g(h) ≠ 0, and since H is a division algebra this forces h − k = 0, which implies h = k. Hence h ∈ K, and it must be that HomC(X, X) = K.

Definition 1.1.21. A K-linear abelian category C is finite if

(a) C has finite dimensional spaces of morphisms.

(b) Every object in C has finite length.

(c) C has enough projectives, meaning every simple object of C has a projective cover, where a projective cover of X is defined as an object P (X) such that HomC(P (X), −) is exact, together with an epimorphism p : P (X) → X such that if g : P → X is another epimorphism from a projective object P to X, then there exists an epimorphism h : P → P (X) with ph = g.

(d) There are finitely many isomorphism classes of simple objects.

An alternate definition of a finite K-linear abelian category, given in “Tensor Categories”, is a category C equivalent to the category A-mod of finite dimensional modules over a finite dimensional K-algebra A.

Definition 1.1.22. A K-linear algebra A is given by

(a) (x + y) · z = xz + yz

(b) x · (y + z) = xy + xz

(c) (ax) · (by) = (ab)(xy) for every a, b ∈ K and x, y, z ∈ A.

Proposition 1.1.23. A projective generator P of C represents a functor F : C → V ec, from C to the category of finite dimensional K-vector spaces by F (X) = HomC(P,X).

Since P is projective, HomC (P, −) is an exact functor, and hence F is exact and faithful.

It is well-known that an exact faithful functor F : C → V ec is represented by a unique projective generator P up to a unique isomorphism.

The dual category of A-mod is Aop-mod, where the arrows are reversed, and the duality functor between them is F : M → M ∗, where M is a finite dimensional A module, and M ∗ = Hom(M,A).

Definition 1.1.24. An additive K-linear functor F : A − mod → B − mod is representable if there exists a (B, A)-bimodule V such that F is naturally isomorphic to (V ⊗A −).

Proposition 1.1.25. An additive K-linear functor F : A − mod → B − mod is representable if and only if it is right exact.

Proof. The backward direction is obvious, as the tensor product functor is right exact. Hence we only need to show the forward direction. A is a left module over itself, and for each a ∈ A the right-multiplication map ψa : A → A defined by x 7→ xa is a homomorphism of left A-modules, so it induces F (ψa) : F (A) → F (A). This gives F (A) a right A-module structure via v · a := F (ψa)(v) for v ∈ F (A).

Now, if X is a free A − module, X = ⊕A, and hence F (A) ⊗A X = F (A) ⊗A (⊕A), and as tensor product is distributive over ⊕, it suffices to consider F (A) ⊗ A.

But then, F (A) ⊗A A is clearly isomorphic to F (A) because F (A) is a right A-module. Also, as F is an additive functor, F (X) ' F (⊕A) ' ⊕(F (A) ⊗A A) ' F (A) ⊗A (⊕A) ' F (A) ⊗A X, and hence we see that F (X) ' F (A) ⊗A X.

Now, let P1 →π P0 →σ X → 0 be a free resolution of X. Then, P1 and P0 are free, and hence by the previous argument, F (P1) and F (P0) are identified with F (A) ⊗ P1 and

F (A) ⊗ P0. Applying F to the free resolution above, by right exactness of F , we now have that F (X) must be the cokernel of F (π): F (A) ⊗ P1 → F (A) ⊗ P0. But then, (F (A) ⊗ −) is a right exact functor, and applied to the projective presentation of X, we have that the cokernel of F (π) must be F (A) ⊗ X.

Hence we have that F (X) ' F (A)⊗X. Therefore we conclude that F (A) is precisely the (B,A)-bimodule we were looking for and F is isomorphic to (F (A) ⊗ −).

Corollary 1.1.26. Let C be a finite abelian K-linear category, and let F : C → V ec be an additive K-linear left exact functor. Then, F = HomC(V, −) for some V ∈ C.

Proof. Let C = A − mod for some finite dimensional algebra A, and let X ∈ A − mod. Then, if F : C → V ec is an additive left exact functor, the functor F ∗ : Y 7→ F (Y ∗)∗ is a right exact functor defined on A − mod∗. Hence by Proposition 1.1.25, F ∗(Y ) ' Y ⊗A V for some V ∈ A − mod. Taking Y = X∗ and dualizing, we have that F (X) ' (X∗ ⊗A V )∗ ' HomA(V, X), and we have the corollary.

Definition 1.1.27. Let C be a K-linear finite abelian category. Then, we define the Grothendieck group of C, denoted Gr(C), as the free abelian group generated by the isomorphism classes of simple objects in C. To every object X in C, we associate its class [X] ∈ Gr(C) by

[X] = Σi [X : Xi] Xi,

where [X : Xi] counts the number of copies of Xi appearing as composition factors in a Jordan–Hölder filtration of X.
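For instance (a small sketch we add here; the semisimple category Rep(Z/2Z) over C and the matrix below are our own choices), in Rep(Z/2Z) the simple objects are the trivial representation X+ (the generator acts by +1) and the sign representation X− (the generator acts by −1), and the class [X] is read off from the eigenvalue multiplicities of the generator’s action:

    import numpy as np

    g = np.array([[0., 1., 0.],      # generator of Z/2Z acting on C^3 by the
                  [1., 0., 0.],      # permutation swapping the first two coordinates
                  [0., 0., 1.]])

    eigenvalues = np.linalg.eigvals(g)
    mult_plus = int(np.sum(np.isclose(eigenvalues, 1.0)))    # [X : X+]
    mult_minus = int(np.sum(np.isclose(eigenvalues, -1.0)))  # [X : X-]
    print(f"[X] = {mult_plus}[X+] + {mult_minus}[X-]")       # [X] = 2[X+] + 1[X-]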

Definition 1.1.28. Let F1,F2 : C → V ec be two exact and faithful functors. Then,

F1 ⊗ F2 : C × C → V ec is defined by

(X,Y ) 7→ F1(X) ⊗ F2(Y ).

Proposition 1.1.29. There exists a canonical algebra isomorphism

αF1,F2 : End(F1) ⊗ End(F2) ' End(F1 ⊗ F2), defined by

αF1,F2 (η1 ⊗ η2)|F1(X)⊗F2(Y ) := η1|F1(X) ⊗ η2|F2(Y ) where ηi ∈ End(Fi).

Proof. Consider the bilinear map End(F1(X)) × End(F2(Y )) → End(F1(X) ⊗ F2(Y )), which factors through End(F1(X)) ⊗ End(F2(Y )) by the universal property of the tensor product.

It is clear that the basis of End(F1(X)) × End(F2(Y )) maps surjectively to the basis of End(F1(X) ⊗ F2(Y )) via the following map:

τ1 × τ2 → τ

(x1 7→ x2, y1 7→ y2) 7→ (x1 ⊗ y1 7→ x2 ⊗ y2)

So the image of End(F1(X)) ⊗ End(F2(Y )) spans End(F1(X) ⊗ F2(Y )). But

dim(End(F1(X)) ⊗ End(F2(Y ))) = (dim F1(X))^2 (dim F2(Y ))^2 = (dim F1(X) dim F2(Y ))^2 = dim(End(F1(X) ⊗ F2(Y ))).

Therefore, as End(F1(X)) ⊗ End(F2(Y )) → End(F1(X) ⊗ F2(Y )) is linear and finite dimensional, we have that End(F1(X)) ⊗ End(F2(Y )) ' End(F1(X) ⊗ F2(Y )). Now,

αF1,F2 : End(F1) ⊗ End(F2) → End(F1 ⊗ F2),

η1 ⊗ η2 7→ η1|F1(X) ⊗ η2|F2(Y ) on each component F1(X) ⊗ F2(Y ),

is the unique map compatible, on each component, with the factorization of End(F1(X)) × End(F2(Y )) → End(F1(X) ⊗ F2(Y )) through End(F1(X)) ⊗ End(F2(Y )) described above.

But we know that F1, F2 are exact and faithful, which implies that they are represented, uniquely up to isomorphism, by projective generators P1, P2 of C, and we have

End(F1) = End(P1)op = A1,   End(F2) = End(P2)op = A2

for some finite dimensional algebras A1, A2.

But now, P1 = P1,1 ⊕ · · · ⊕ P1,m and P2 = P2,1 ⊕ · · · ⊕ P2,n, where the P1,i and P2,j are projective covers of the simple objects of C. It follows that End(F1) ⊗ End(F2) = End(P1)op ⊗ End(P2)op = End(⊕i P1,i)op ⊗ End(⊕j P2,j)op, implying

dim(End(F1) ⊗ End(F2)) = (Σi dim(P1,i))(Σj dim(P2,j)).

Also, we claim that End(F1 ⊗ F2) = End(P1 ⊗ P2)op. To show this, recall that F (X) = Hom(P, X).

Then, it follows from finite dimensionality of P1,P2 that

F1(X) ⊗ F2(Y ) = Hom(P1, X) ⊗ Hom(P2, Y )

= (P1∗ ⊗ X) ⊗ (P2∗ ⊗ Y )

= (P1∗ ⊗ P2∗) ⊗ (X ⊗ Y )

= Hom(P1 ⊗ P2, X ⊗ Y ),

and so we have (F1 ⊗ F2)((X, Y )) = Hom(P1 ⊗ P2, X ⊗ Y ), and it follows that End(F1 ⊗ F2) = End(P1 ⊗ P2)op.

From the above we derive End(F1 ⊗ F2) = End(P1 ⊗ P2)op = End(⊕i,j (P1,i ⊗ P2,j))op, and conclude that

dim(End(F1 ⊗ F2)) = Σi,j (dim(P1,i))(dim(P2,j)).

Hence we have that dim(End(F1) ⊗ End(F2)) = dim(End(F1 ⊗ F2)). But notice that αF1,F2 is given componentwise by the maps αF1,F2 |X,Y , each of which is the isomorphism from the universal diagram above. Hence αF1,F2 is a linear injection, and by the equality of (finite) dimensions, αF1,F2 must be an isomorphism.
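At the level of finite dimensional vector spaces, the canonical map underlying α identifies a pair of endomorphisms with their Kronecker (tensor) product. A quick numerical check of this identification and of the dimension count in the proof (our own illustration, using numpy; the matrices are arbitrary choices):

    import numpy as np

    A = np.array([[1., 2.], [0., 1.]])                          # an element of End(V), V = C^2
    B = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 2.]])    # an element of End(W), W = C^3

    v = np.array([1., -1.])
    w = np.array([2., 0., 1.])

    # The canonical map End(V) (x) End(W) -> End(V (x) W) sends A (x) B to np.kron(A, B):
    lhs = np.kron(A, B) @ np.kron(v, w)          # (A (x) B) acting on v (x) w
    rhs = np.kron(A @ v, B @ w)                  # (A v) (x) (B w)
    assert np.allclose(lhs, rhs)

    # Dimension count matching the proof: dim End(V) * dim End(W) = dim End(V (x) W).
    assert (2 * 2) * (3 * 3) == (2 * 3) ** 2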

Chapter 2: Monoidal Categories

Now that we have gone over some preliminary materials about abelian categories and exact sequences, we are ready to discuss monoidal categories.

2.1 Monoidal Categories and Monoidal Functors

2.1.1 Monoidal Categories

Definition 2.1.1. A monoidal category is a quintuple (C, ⊗, a, 1, ι) where C is a cat- egory, ⊗ : C × C → C is the tensor product bifunctor, a :(− ⊗ −) ⊗ − →∼ − ⊗ (− ⊗ −) is a natural isomorphism defined by

aX,Y,Z : (X ⊗ Y ) ⊗ Z →∼ X ⊗ (Y ⊗ Z) ∀X, Y, Z ∈ C, called the associativity constraint, 1 ∈ C is an object, and ι : 1 ⊗ 1 →∼ 1 is an isomorphism, which together satisfy the following axioms:

(a) Pentagon Axiom: for all W, X, Y, Z ∈ C, the pentagon with vertices ((W ⊗ X) ⊗ Y ) ⊗ Z, (W ⊗ (X ⊗ Y )) ⊗ Z, W ⊗ ((X ⊗ Y ) ⊗ Z), W ⊗ (X ⊗ (Y ⊗ Z)), and (W ⊗ X) ⊗ (Y ⊗ Z) commutes; equivalently,

(idW ⊗ aX,Y,Z ) ◦ aW,X⊗Y,Z ◦ (aW,X,Y ⊗ idZ ) = aW,X,Y ⊗Z ◦ aW ⊗X,Y,Z .

(b) Unit Axiom: the functors

L1 : X 7→ 1 ⊗ X and R1 : X 7→ X ⊗ 1

are autoequivalences of C.

Definition 2.1.2. (1, ι) is called the unit object of C.

We can extend the notion of subcategories of categories to monoidal subcategories in the following way.

Definition 2.1.3. A monoidal subcategory of (C, ⊗, a, 1, ι) is a quintuple (D, ⊗, a, 1, ι) where D is a subcategory of C closed under the tensor product of morphisms and objects, and contains (1, ι).

We want monoidal categories to be the categorification of monoids, and a monoid has a unique multiplicative identity. Hence, we would like to have a unique (up to isomorphism) unit object in a monoidal category. Not surprisingly, we can derive the fact that the unit object in a monoidal category is unique up to a unique isomorphism from the pentagon axiom and the unit axiom given in the definition of a monoidal category.

Let (C, ⊗, a, 1, ι) be a monoidal category. We will define two natural isomorphisms lX : 1 ⊗ X → X and rX : X ⊗ 1 → X, where L1(lX ) and R1(rX ) are given by:

L1(lX ) := (ι ⊗ idX ) ◦ (a1,1,X )−1 : 1 ⊗ (1 ⊗ X) → (1 ⊗ 1) ⊗ X → 1 ⊗ X,

R1(rX ) := (idX ⊗ ι) ◦ aX,1,1 : (X ⊗ 1) ⊗ 1 → X ⊗ (1 ⊗ 1) → X ⊗ 1.

Definition 2.1.4. We call lX and rX the left and right unit constraints.

25 Because we defined lX and rX to be natural isomorphisms we have the following proposition.

Proposition 2.1.5. For any X ∈ C, l1⊗X = id1 ⊗ lX and rX⊗1 = rX ⊗ id1.

Proof. By naturality of lX , if we let F := (1 ⊗ −), then we have a commuting square with top row F (1 ⊗ X) →F (lX ) F (X), bottom row 1 ⊗ X →lX X, and vertical maps l1⊗X and lX .

Because lX is an isomorphism, together with the diagram, we have that l1⊗X = F (lX ) = id1 ⊗ lX .

Similarly, if we let G := (− ⊗ 1), then by naturality of rX we get the analogous commuting square, with top row G(X ⊗ 1) →G(rX ) G(X), bottom row X ⊗ 1 →rX X, and vertical maps rX⊗1 and rX .

This implies that rX⊗1 = rX ⊗ id1, and we have the proposition.

From the unit constraints and the associativity constraint, we get the triangle diagram, formed by the maps aX,1,Y : (X ⊗ 1) ⊗ Y → X ⊗ (1 ⊗ Y ), rX ⊗ idY : (X ⊗ 1) ⊗ Y → X ⊗ Y , and idX ⊗ lY : X ⊗ (1 ⊗ Y ) → X ⊗ Y .

26 Proposition 2.1.6. The triangle diagram commutes.

Proof. Consider the following diagram induced by the unit constraints and the pen- tagon axiom.

[Diagram: the pentagon axiom for the quadruple (X, 1, 1, Y ), with outer vertices ((X ⊗ 1) ⊗ 1) ⊗ Y , (X ⊗ (1 ⊗ 1)) ⊗ Y , X ⊗ ((1 ⊗ 1) ⊗ Y ), X ⊗ (1 ⊗ (1 ⊗ Y )), and (X ⊗ 1) ⊗ (1 ⊗ Y ), subdivided through the inner vertices (X ⊗ 1) ⊗ Y and X ⊗ (1 ⊗ Y ) by the maps (idX ⊗ ι) ⊗ idY , rX ⊗ id1 ⊗ idY , aX,1,Y , rX ⊗ id1⊗Y , idX ⊗ (ι ⊗ idY ), and idX ⊗ l1⊗Y .]

It is clear that the outside pentagon is induced by the pentagon axiom, and notice that if we let Y ′ = 1 ⊗ Y , the bottom left triangle is precisely the triangle diagram that we want to show commutes.

From how we defined rX , R1(rX ) = rX ⊗ id1 = (idX ⊗ ι) ◦ aX,1,1, and so it follows that the top triangle commutes. Similarly, by how we defined lY , L1(lY ) = id1 ⊗ lY = l1⊗Y = (ι ⊗ idY ) ◦ (a1,1,Y )−1, and hence the bottom right triangle commutes. Lastly, by naturality of the associativity constraint aX,Y,Z , the two trapezoids in the middle commute. Hence every component of the diagram outside of the bottom left triangle commutes, which forces the bottom left triangle to commute. Consequently, the triangle diagram commutes.

Now that we have the triangle diagram, we can use it to get the following proposition.

27 Proposition 2.1.7. The following diagrams commute for every X,Y in C.

lX⊗Y ◦ a1,X,Y = lX ⊗ idY : (1 ⊗ X) ⊗ Y → X ⊗ Y,

(idX ⊗ rY ) ◦ aX,Y,1 = rX⊗Y : (X ⊗ Y ) ⊗ 1 → X ⊗ Y.

Proof. Consider the following diagram induced by the pentagon axiom and the unit constraints.

[Diagram: the pentagon axiom for the quadruple (X, 1, Y, Z), with outer vertices ((X ⊗ 1) ⊗ Y ) ⊗ Z, (X ⊗ (1 ⊗ Y )) ⊗ Z, X ⊗ ((1 ⊗ Y ) ⊗ Z), X ⊗ (1 ⊗ (Y ⊗ Z)), and (X ⊗ 1) ⊗ (Y ⊗ Z); the outer arrows are aX,1,Y ⊗ idZ , aX⊗1,Y,Z , aX,1⊗Y,Z , aX,1,Y ⊗Z , and idX ⊗ a1,Y,Z , and the subdivision through the inner vertices (X ⊗ Y ) ⊗ Z and X ⊗ (Y ⊗ Z) uses rX ⊗ idY ⊗ idZ , (idX ⊗ lY ) ⊗ idZ , aX,Y,Z , rX ⊗ idY ⊗Z , idX ⊗ (lY ⊗ idZ ), and idX ⊗ lY ⊗Z .]

The naturality of aX,Y,Z and the commutativity of the triangle diagram give us that the two left trapezoids and the bottom left triangle commute, and combining them with the pentagon axiom, we can easily see that the above diagram must commute.

Now consider the bottom right triangle,

with vertices X ⊗ ((1 ⊗ Y ) ⊗ Z), X ⊗ (1 ⊗ (Y ⊗ Z)), and X ⊗ (Y ⊗ Z), and arrows idX ⊗ a1,Y,Z , idX ⊗ (lY ⊗ idZ ), and idX ⊗ lY ⊗Z .

This diagram commutes for all X,Y,Z in C, and hence if we let X = 1, we have

1 ⊗ ((1 ⊗ X) ⊗ Y ) →id1⊗a1,X,Y 1 ⊗ (1 ⊗ (X ⊗ Y )), together with the maps id1 ⊗ (lX ⊗ idY ) and id1 ⊗ lX⊗Y into 1 ⊗ (X ⊗ Y ).

Applying the inverse of the autoequivalence L1 : X 7→ 1 ⊗ X yields the desired result for the first triangle in the proposition.

Similarly, it is easy to show, using the following diagram, that the second triangle in the proposition commutes.

[Diagram: the pentagon axiom for the quadruple (X, Y, 1, Z), with vertices ((X ⊗ Y ) ⊗ 1) ⊗ Z, (X ⊗ (Y ⊗ 1)) ⊗ Z, X ⊗ ((Y ⊗ 1) ⊗ Z), X ⊗ (Y ⊗ (1 ⊗ Z)), and (X ⊗ Y ) ⊗ (1 ⊗ Z), subdivided through the inner vertices (X ⊗ Y ) ⊗ Z and X ⊗ (Y ⊗ Z) by the corresponding unit-constraint maps.]

Notice that if we take Z = 1, then applying the inverse of the autoequivalence R1 : X 7→ X ⊗ 1 to the top right triangle precisely gives us the second triangle in the proposition.

From the two triangles in the proposition above, we get an important corollary about the unit constraints.

Corollary 2.1.8. In any monoidal category l1 = r1 = ι.

Proof. We have that, for any X, Y in C, lX⊗Y ◦ a1,X,Y = lX ⊗ idY as maps (1 ⊗ X) ⊗ Y → X ⊗ Y .

Set X = Y = 1 to obtain l1⊗1 ◦ a1,1,1 = l1 ⊗ id1 as maps (1 ⊗ 1) ⊗ 1 → 1 ⊗ 1.

Hence we have that l1 ⊗ id1 = l1⊗1 ◦ a1,1,1 = (id1 ⊗ l1) ◦ a1,1,1.

Also, from the triangle diagram, we can get the following diagram by setting X = Y = 1.

(id1 ⊗ l1) ◦ a1,1,1 = r1 ⊗ id1 as maps (1 ⊗ 1) ⊗ 1 → 1 ⊗ 1.

Hence r1 ⊗ id1 = (id1 ⊗ l1) ◦ a1,1,1. Recall that L1(l1) = id1 ⊗ l1 = (ι ⊗ id1) ◦ (a1,1,1)−1. It follows that (id1 ⊗ l1) ◦ a1,1,1 = ι ⊗ id1.

30 Combining the results gives us that l1 ⊗ id1 = r1 ⊗ id1 = ι ⊗ id1, and this is equivalent to R1(l1) = R1(r1) = R1(ι). Now, from the unit axiom, we have that R1 is an autoequivalence on C, hence we have that l1 = r1 = ι.

Finally, using all of the propositions and definitions we have discussed so far, we are ready to show the uniqueness of the unit.

Proposition 2.1.9. The unit object in a monoidal category is unique up to a unique isomorphism.

Proof. Let (1, ι), (1′, ι′) be unit objects. Then we have r, l and r′, l′ as the corresponding unit constraints. Now let η := l1′ ◦ (r′1)−1 : 1 →∼ 1′.

Consider the following diagram.

[Diagram: a commuting diagram built from the unit constraints r, l, r′, l′ and the associativity constraint, whose outer square has top row 1 ⊗ 1 →η⊗η 1′ ⊗ 1′ (factored as (l1′ ⊗ l1′ ) ◦ ((r′1)−1 ⊗ (r′1)−1)), vertical maps ι : 1 ⊗ 1 → 1 and ι′ : 1′ ⊗ 1′ → 1′, and bottom row η = l1′ ◦ (r′1)−1 : 1 → 1′; it shows that ι′ ◦ (η ⊗ η) = η ◦ ι.]

By directly composing, it is easy to see that the diagram commutes, and as r, l are natural isomorphisms, ι is an isomorphism, and we have that η is precisely the isomorphism taking ι to ι′. Hence we see that the unit object in a monoidal category is unique up to isomorphism, and the isomorphism is given by η.

To prove the proposition, it remains to show that the isomorphism η is unique.

Let τ be any morphism from 1 to 1. Then,

the square with top row 1 ⊗ 1 →τ⊗id1 1 ⊗ 1, vertical maps ι and ι, and bottom row 1 →τ 1

commutes, by naturality of ι = r1.

Also notice that if σ : 1 →∼ 1 is an isomorphism from 1 to 1, then by naturality of

ι = r1, we have the following commuting diagram.

the square with top row 1 ⊗ 1 →σ⊗σ 1 ⊗ 1, vertical maps ι and ι, and bottom row 1 →σ 1.

Setting τ = σ, we have that σ ⊗ id1 = σ ⊗ σ, and hence σ = id1; in particular, every isomorphism from 1 to 1 is equal to id1. Hence η must be unique, and we have the proposition.

Using a similar construction as the above we can deduce an interesting observation on monoidal categories.

Proposition 2.1.10. Let C be a monoidal category. Then, EndC(1) is a commutative monoid under composition of morphisms. Also, f ⊗ g = ι−1 ◦ (f ◦ g) ◦ ι for every f, g ∈ EndC(1).

Proof. By naturality of ι = r1 we have the commutative diagram:

the square with top row 1 ⊗ 1 →f⊗id1 1 ⊗ 1, vertical maps ι and ι, and bottom row 1 →f 1.

Hence f ⊗ id1 = r1−1 ◦ f ◦ r1, and similarly we have id1 ⊗ g = l1−1 ◦ g ◦ l1 by naturality of l1. Since ι = l1 = r1, we have that f ⊗ g = (f ⊗ id1) ◦ (id1 ⊗ g) = ι−1 ◦ (f ◦ g) ◦ ι.

Repeating a similar process we get that g ⊗ f = ι−1 ◦ (f ◦ g) ◦ ι, and we conclude that f ⊗ g = g ⊗ f, and accordingly, f ◦ g = g ◦ f.

The above proof can be thought of as an analogue of the definition of a monoid as a category with a single object. By the above proposition we can think of a commutative monoid as a monoidal category with a single object. It is quite surprising that the pentagon axiom and the unit axiom impose a commutative structure on the set of endomorphisms of the unit object.

Since we have examined the structure of monoidal categories, the natural progression would be to examine maps between monoidal categories, i.e. monoidal functors.

2.1.2 Monoidal Functors

Definition 2.1.11. Let (C, ⊗, 1, a, ι) and (C′, ⊗′, 1′, a′, ι′) be monoidal categories.

A monoidal functor from C to C′ is a pair (F, J) where F : C → C′ is a functor, and

JX,Y : F (X) ⊗′ F (Y ) →∼ F (X ⊗ Y ) is a natural isomorphism such that F (1) ' 1′ and the following condition holds for every X, Y, Z in C.

F (aX,Y,Z ) ◦ JX⊗Y,Z ◦ (JX,Y ⊗′ idF (Z)) = JX,Y ⊗Z ◦ (idF (X) ⊗′ JY,Z ) ◦ a′F (X),F (Y ),F (Z)

as maps (F (X) ⊗′ F (Y )) ⊗′ F (Z) → F (X ⊗ (Y ⊗ Z)).

This condition is called the monoidal structure axiom, and if F is an equivalence of C and C′ as ordinary categories, then F is called an equivalence of monoidal categories.

Notice that the canonical natural isomorphism φ : 1′ →∼ F (1) is defined by the condition

l′F (1) = F (l1) ◦ J1,1 ◦ (φ ⊗′ idF (1)) : 1′ ⊗′ F (1) → F (1),

where (r, l) and (r′, l′) are the unit constraints of C and C′, in respective order.

Using this definition of φ, we get the following proposition.

Proposition 2.1.12. For any monoidal functor (F, J) : C → C′, the following identities hold.

l′F (X) = F (lX ) ◦ J1,X ◦ (φ ⊗′ idF (X)) : 1′ ⊗′ F (X) → F (X),

r′F (X) = F (rX ) ◦ JX,1 ◦ (idF (X) ⊗′ φ) : F (X) ⊗′ 1′ → F (X).

Proof. From the definition of φ, we have the identity l′F (1) = F (l1) ◦ J1,1 ◦ (φ ⊗′ idF (1)).

We can apply the functor (− ⊗′ F (X)) to this identity to get a commuting diagram, which we will call diagram (a): its top row is (1′ ⊗′ F (1)) ⊗′ F (X) →l′F (1)⊗′idF (X) F (1) ⊗′ F (X), and its bottom path is (φ ⊗′ idF (1)) ⊗′ idF (X), followed by J1,1 ⊗′ idF (X), followed by F (l1) ⊗′ idF (X), passing through (F (1) ⊗′ F (1)) ⊗′ F (X) and F (1 ⊗ 1) ⊗′ F (X).

We will now build a series of commutative diagrams to prove the proposition. Consider the following.

[Diagram: top row (1′ ⊗′ F (1)) ⊗′ F (X) →a′ 1′ ⊗′ (F (1) ⊗′ F (X)) →id1′⊗′J1,X 1′ ⊗′ F (1 ⊗ X); bottom row (F (1) ⊗′ F (1)) ⊗′ F (X) →a′ F (1) ⊗′ (F (1) ⊗′ F (X)) →idF (1)⊗′J1,X F (1) ⊗′ F (1 ⊗ X); vertical maps (φ ⊗′ idF (1)) ⊗′ idF (X), φ ⊗′ (idF (1) ⊗′ idF (X)), and φ ⊗′ idF (1⊗X).]

The left square commutes because a is functorial, and the right square commutes by naturality of J1,X . Note that the top and bottom arrows are isomorphisms, hence we can reverse the arrows.

35 Also, as lX is a natural isomorphism we get the following commutative diagram.

[Diagram: top row 1′ ⊗′ F (1 ⊗ X) →id1′⊗′F (lX ) 1′ ⊗′ F (X); bottom row F (1) ⊗′ F (1 ⊗ X) →idF (1)⊗′F (lX ) F (1) ⊗′ F (X); vertical maps φ ⊗′ idF (1⊗X) and φ ⊗′ idF (X).]

Combining the above two diagrams and reversing the arrows on the top and bottom gives us the following commutative diagram, which we will call diagram (b).

[Diagram (b): top row 1′ ⊗′ F (X) → (1′ ⊗′ F (1)) ⊗′ F (X), given by the composite (a′1′,F (1),F (X))−1 ◦ (id1′ ⊗′ J1,X )−1 ◦ (id1′ ⊗′ F (lX ))−1; bottom row F (1) ⊗′ F (X) → (F (1) ⊗′ F (1)) ⊗′ F (X), given by the analogous composite; vertical maps φ ⊗′ idF (X) and (φ ⊗′ idF (1)) ⊗′ idF (X).]

Similarly, we have the following commutative diagram, denoted diagram (c).

[Diagram (c): top row F (1) ⊗′ F (X) →J1,X F (1 ⊗ X) →F (lX ) F (X); bottom row F (1 ⊗ 1) ⊗′ F (X) →J1⊗1,X F ((1 ⊗ 1) ⊗ X) →F (l1⊗idX ) F (1 ⊗ X); vertical maps F (l1)−1 ⊗′ idF (X), F (l1 ⊗ idX )−1, and F (lX )−1.]

The left square commutes by naturality of J, and the right square commutes since l is natural and l1 ⊗ idX = l1⊗X . Combining diagrams (a), (b), and (c) in the order (b)→(a)→(c) yields the desired result.

Using φ we can identify 1′ with F (1), so it is notationally convenient to take F (1) = 1′ and φ = id1′ . Under this convention, we have that J1,X = JX,1 = idF (X), which is similar to how one may take lX and rX to be identities.

36 Now that we have what monoidal functors are, we can now define morphisms of monoidal functors.

Definition 2.1.13. Let (C, ⊗, 1, a, ι) and (C′, ⊗′, 1′, a′, ι′) be two monoidal categories and (F 1, J1), (F 2, J2) : C → C′ be two monoidal functors. Then, a morphism of monoidal functors η : (F 1, J1) → (F 2, J2) is a natural transformation η : F 1 → F 2 such that η1 is an isomorphism and

ηX⊗Y ◦ J1X,Y = J2X,Y ◦ (ηX ⊗′ ηY )

for all X, Y ∈ C.

Notice that the naturality of η : F 1 → F 2 makes η coherent with the isomorphisms of the unit objects. That is, if φ1 : 1′ →∼ F 1(1) and φ2 : 1′ →∼ F 2(1), then η1 ◦ φ1 = φ2, and as we have adopted the convention that φ1 = φ2 = id1′ , we have that η1 = id1′ .

We’ve thus far defined monoidal functors and morphisms between them, so a natural question one might have is: what are some examples? An important and intuitive example is given by forgetful functors. For instance, the forgetful functor Rep(G) → V ec lets us remove the G-representation structure and view these objects simply as vector spaces. The monoidal structure is obvious for both categories, and the natural isomorphism J from the definition of monoidal functors is trivially defined for this example, since the tensor products in the two categories agree.

Another example of monoidal functors is one between the category of bimodules of a unital algebra A over a field and the endofunctors of the category of left modules of A. Let K be a field, and A be a unital K-algebra. Let C be the category of left A-modules. Then, we have a monoidal functor,

37 F : A − bimod → End(C)

M 7→ (M ⊗A −)
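For the simplest case A = K (so that A-bimodules are just vector spaces and C = V ec), the functor M ⊗ − acts on a linear map f by id_M ⊗ f, which numerically is a Kronecker product. The following is a small sketch we add for illustration (the matrices and the choice A = K are ours, not from the thesis), checking functoriality F (f ◦ g) = F (f) ◦ F (g):

    import numpy as np

    M_dim = 2
    f = np.array([[1., 2., 0.],
                  [0., 1., 1.]])                 # a linear map K^3 -> K^2
    F_f = np.kron(np.eye(M_dim), f)              # F(f) = id_M (x) f

    g = np.array([[1., 0.],
                  [2., 1.],
                  [0., 3.]])                     # a linear map K^2 -> K^3
    # Functoriality: F(f o g) = F(f) o F(g).
    assert np.allclose(np.kron(np.eye(M_dim), f @ g),
                       F_f @ np.kron(np.eye(M_dim), g))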

In fact, we can say much more about this monoidal functor.

Proposition 2.1.14. The functor F : A − bimod → End(C) takes values in the full subcategory (a subcategory containing all of the morphisms between any two of its objects) Endre(C) of right exact endofunctors of C, and defines an equivalence between the monoidal categories A − bimod and Endre(C).

Proof. We know that the tensor product is right exact, hence by definition, F must take values in Endre(C). Hence the first part of the proposition holds.

Now, for every G ∈ Endre(C), define F −1 : Endre(C) → A − bimod by G 7→ G(A). Clearly, G(A) is a left A-module. Moreover, G(A) admits a right action of A: for a ∈ A, right multiplication ρa : A → A, α 7→ αa, is a morphism of left A-modules, and we set v · a := G(ρa)(v) for v ∈ G(A). Since ρa ◦ ρb = ρba, we have (v · b) · a = v · (ba), so this is indeed a right action, and hence G(A) is an A-bimodule.

We claim that F −1 is a quasi-inverse to F , i.e. F ◦ F −1 ' idEndre(C) and F −1 ◦ F ' idA−bimod.

Let G ∈ Endre(C). We want to show that G(X), for any X ∈ C, can be identified with ((F ◦ F −1)(G))(X) = G(A) ⊗A X. To do so, let us first take a free resolution of X, given by · · · → P1 → P0 → X → 0. Then P0 and P1 are free, hence G(P0) and G(P1) are identified with G(A) ⊗A P0 and G(A) ⊗A P1, in respective order. Now, G is right exact, hence G(X) is the cokernel of G(P1) → G(P0), and by the above, this is equivalent to Coker(G(A) ⊗A P1 → G(A) ⊗A P0). But Coker(G(A) ⊗A P1 → G(A) ⊗A P0) is canonically G(A) ⊗A X, and hence we have that G(X) ' G(A) ⊗A X. It follows that ((F ◦ F −1)(G))(X) ' G(X), and we conclude that F ◦ F −1 ' idEndre(C).

For the other direction, let M ∈ A − bimod. Then (F −1 ◦ F )(M) = M ⊗A A. The multiplication map M × A → M is A-balanced, so by the universal property of the tensor product it factors uniquely through M ⊗A A, and we have that M ⊗A A ' M. Hence F −1 ◦ F ' idA−bimod.

Hence F −1 is a quasi-inverse of F , and hence F is an equivalence between A − bimod and Endre(C).

2.1.3 The Category of G-graded Vector Spaces and Cocycles

In the previous section of abelian categories, we have talked about exact sequences and cohomologies in categories of modules. With our understanding of monoidal categories and monoidal functors, we will begin examining similar ideas on categories of graded vector spaces.

First, consider the following. Let G be a monoid, and A be an abelian group. Let

CG = CG(A) be the category whose objects are labeled by elements of G, denoted

δg. The morphisms in this category satisfy HomCG (δg1 , δg2 ) = ∅ if g1 ≠ g2, and

HomCG (δg, δg) = A. The tensor product functor ⊗ in CG is defined on the objects by

δg ⊗ δh = δgh, and on the morphisms by a ⊗ b = ab. Then, CG is a monoidal category, where the associativity isomorphism is the identity, and 1 is represented by the unit element of G. Note that if G is a non-commutative group, X ⊗ Y is not necessarily isomorphic to Y ⊗ X.

The category of G-graded vector spaces is precisely the linear version of the above. Let K be a field, and let V ecG be the category of G-graded vector spaces over K, i.e. vector spaces of the form V = ⊕g∈G Vg. The morphisms in V ecG must be linear maps that preserve the grading, and the tensor product in this category is defined by (V ⊗ W )g = ⊕xy=g Vx ⊗ Wy.

The unit object 1 must satisfy 1e = K and 1g = 0 for any g ≠ e, where e is the identity element of G. Let the associativity isomorphism a be defined as usual for tensor products of vector spaces, and let the unit constraint ι : 1 ⊗ 1 →∼ 1 be defined by the canonical isomorphism K ⊗ K →∼ K. This construction equips V ecG with the structure of a monoidal category, and we have a similar construction for the finite dimensional case.

Let V ecG be the category of finite dimensional G-graded vector spaces over K. In this category, the smallest objects are the pairwise non-isomorphic one-dimensional G-graded vector spaces δg = ⊕h∈G (δg)h such that (δg)g ' K and (δg)h = 0 for every h ≠ g. Then, we see that

δg ⊗ δh = ⊕k∈G (δg ⊗ δh)k = ⊕k∈G (⊕xy=k (δg)x ⊗ (δh)y) = (δg)g ⊗ (δh)h = (δg ⊗ δh)gh ' δgh.

Notice that with this tensor structure, these objects form the monoidal category CG(K×). Since we take the abelian group to be K \ {0} under multiplication, the zero morphisms are missing, and hence CG(K×) is a non-full monoidal subcategory of V ecG. Also, any object in V ecG must be given by a finite direct sum of these one-dimensional vector spaces. Hence, we can view CG(K×) as a basis of V ecG, whose objects are given by direct sums of non-negative integer multiples of finitely many δg’s. This point of view directly mirrors the idea of Z+-rings that we will consider later on.
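To make this mirroring concrete (a small sketch we add here; the dictionary representation of gradings, the choice G = S3, and the objects V, W below are our own devices), one can record a finite dimensional G-graded vector space by the function g 7→ dim Vg; direct sum then becomes pointwise addition and the tensor product defined above becomes convolution over the group, exactly as in the group ring Z[G]. For non-abelian G this also exhibits the failure of X ⊗ Y ' Y ⊗ X noted earlier:

    from collections import defaultdict
    from itertools import permutations, product

    G = list(permutations(range(3)))                 # the symmetric group S_3
    def mult(p, q):                                  # composition of permutations
        return tuple(p[q[i]] for i in range(3))

    def tensor(dimV, dimW):
        """Graded dimension of V (x) W: (V (x) W)_g = direct sum over xy = g of V_x (x) W_y."""
        out = defaultdict(int)
        for x, y in product(G, repeat=2):
            out[mult(x, y)] += dimV.get(x, 0) * dimW.get(y, 0)
        return dict(out)

    e = tuple(range(3))                              # identity permutation
    s = (1, 0, 2)                                    # a transposition
    c = (1, 2, 0)                                    # a 3-cycle

    V = {s: 1}                                       # V = delta_s
    W = {c: 2, e: 1}                                 # W = delta_c^2 + delta_e
    print(tensor(V, W))                              # graded dimension of V (x) W
    print(tensor(V, W) == tensor(W, V))              # False: G is non-abelian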

We can generalize the construction of graded monoidal categories further by “twisting” the associativity isomorphisms. Let G be a group, and A be an abelian group.

40 Let ω : G × G × G → A be a 3-cocycle of G taking values in A, i.e.

ω(g1g2, g3, g4)ω(g1, g2, g3g4) = ω(g1, g2, g3)ω(g1, g2g3, g4)ω(g2, g3, g4) for every g1, g2, g3, g4 ∈ G.

Notice that this is exactly the pentagon axiom for monoidal categories.

Now we will define the category CωG(A). As a category, CωG(A) is the same as CG(A), and ⊗ and (1, ι) are defined in the same way. However, we twist the associativity isomorphism: aω is defined by

aωδg,δh,δi = ω(g, h, i) idδghi : (δg ⊗ δh) ⊗ δi → δg ⊗ (δh ⊗ δi),

where g, h, i ∈ G. Here ω(g, h, i) defines an element of A, which corresponds to an element of HomCG (δghi, δghi). Then, because ω satisfies the cocycle condition, aω satisfies the pentagon axiom, and so CωG(A) makes sense as a monoidal category.
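As an illustration (our own sketch, not constructed in the thesis; the particular formula and the choice n = 4 are assumptions), ω(a, b, c) = ζ^(a·⌊(b+c)/n⌋) with ζ a primitive n-th root of unity is a standard example of a 3-cocycle on G = Z/nZ with values in K× = C×, and the cocycle condition above can be checked by brute force:

    import cmath
    from itertools import product

    n = 4
    zeta = cmath.exp(2j * cmath.pi / n)

    def omega(a, b, c):
        return zeta ** (a * ((b + c) // n))

    def cocycle_condition(w):
        for g1, g2, g3, g4 in product(range(n), repeat=4):
            lhs = w((g1 + g2) % n, g3, g4) * w(g1, g2, (g3 + g4) % n)
            rhs = w(g1, g2, g3) * w(g1, (g2 + g3) % n, g4) * w(g2, g3, g4)
            if abs(lhs - rhs) > 1e-9:
                return False
        return True

    print(cocycle_condition(omega))   # True: omega twists the associativity in Vec_G^omega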

As before, we can get a linear version of CωG(A) by letting A be the multiplicative group K× and taking the δg to be one-dimensional G-graded vector spaces. By taking arbitrary finite direct sums of non-negative integer multiples of them, we get the category of G-graded vector spaces V ecωG.

We can now examine the monoidal functors between categories of graded vector spaces with non-trivial associativity isomorphisms.

Let G1, G2 be groups, let A be an abelian group, let ωi ∈ Z3(Gi, A) be 3-cocycles for i = 1, 2, and suppose the actions of G1, G2 on A are trivial. Let Ci = CωiGi be the corresponding monoidal categories.

Any monoidal functor F : C1 → C2, when restricted to the simple objects defines a group homomorphism f : G1 → G2 by F (δg ⊗ δh) = F (δg) ⊗ F (δh). By the monoidal structure axiom, we have

Jg,h = µ(g, h) idδf(gh) : F (δg) ⊗ F (δh) →∼ F (δgh), g, h ∈ G1,

41 where the map µ : G1 × G1 → A satisfies

ω1(g, h, l)µ(gh, l)µ(g, h) = µ(g, hl)µ(h, l)ω2(f(g), f(h), f(l)) ∀g, h, l ∈ G1, and this condition on µ encodes the monoidal structure axiom.

Notice here, that the codomain of ωi, µ is A, an abelian group, and hence we can rewrite the above condition additively:

ω1(g, h, l) − ω2(f(g), f(h), f(l)) = µ(g, hl) + µ(h, l) − µ(gh, l) − µ(g, h).

Recall that ωi is a map in the kernel of the following map ∂4:

· · · →∂3 {Gi × Gi × Gi → A} →∂4 {Gi × Gi × Gi × Gi → A} −→ · · · ,

where the cocycle condition reads, in bar notation, g1|g2|g3 + g1|g2g3|g4 + g2|g3|g4 − g1|g2|g3g4 − g1g2|g3|g4 = 0.

Also, µ(gh, l) + µ(g, h) − µ(g, hl) − µ(h, l) is given by g|h|l 7→ gh|l + g|h − g|hl − h|l, and because we assumed that the action of G1 on A is trivial, this tells us that the difference of ω1 and f ∗ω2 (i.e. of ω1(g, h, l) and ω2(f(g), f(h), f(l))) is given by the boundary map

· · · −→ {G1 × G1 → A} →∂3 {G1 × G1 × G1 → A} −→ · · ·

and hence ω1(g, h, l) − f ∗ω2(g, h, l) = ∂3(µ). Therefore, ω1 and f ∗ω2 are cohomologous. Rewriting this condition multiplicatively, we have

ω1 = f ∗ω2 · ∂3(µ).

Conversely, given a homomorphism f : G1 → G2 and any function µ satisfying ω1 = f ∗ω2 · ∂3(µ), we get a monoidal functor F : C1 → C2 defined by

F (δg) = δf(g),

Jg,h = µ(g, h) idδf(gh) : F (δg) ⊗ F (δh) →∼ F (δgh).

Note that if F is an equivalence of monoidal categories, f must be an isomorphism, and conversely, any isomorphism f must induce an equivalence of monoidal categories.

Now let F = Ff,µ and F ′ = Ff ′,µ′ be two such monoidal functors C1 → C2. We will determine the morphisms of monoidal functors between F and F ′. Let η be one such transformation. Then,

for each morphism of C1, the corresponding naturality square commutes; in particular, each component ηg : F (δg) → F ′(δg) is a morphism in C2. However, F (δg) = δf(g) and F ′(δg) = δf ′(g), and if f(g) ≠ f ′(g), then Hom(δf(g), δf ′(g)) = ∅. Hence if such an η exists, it must be that f(g) = f ′(g) for every g ∈ G1, and hence f = f ′.

Also, δg is simple, and hence δf(g) must also be simple, and as δf(g) = δf ′(g), ηg must be an isomorphism. Hence we have that η : F → F ′ is an isomorphism.

Lastly, because η is a morphism of monoidal functors, each ηg : δf(g) → δf(g) must satisfy

µ′(g, h)(ηg ⊗ ηh) = ηgh µ(g, h),

and hence µ = µ′ · ∂2(η), i.e. µ is cohomologous to µ′.

On the other hand, any η : G1 → A that satisfies µ = µ′ · ∂2(η) gives us a morphism of monoidal functors η : Ff,µ → Ff,µ′ for fixed f and fixed µ, µ′.

Notice that given any α ∈ H1(G1, A) with α ≠ 1 and any η such that µ = µ′ · ∂2(η) (which we can rewrite additively, as A is abelian), the function η + α gives us

(η + α)gh − (η + α)g − (η + α)h = ηgh + α(gh) − ηg − ηh − α(g) − α(h),

and as α ∈ H1(G1, A), α(g) + α(h) = α(gh). Therefore

ηgh + α(gh) − ηg − ηh − α(g) − α(h) = ηgh − ηg − ηh,

and we see that µ − µ′ = ∂2(η) is invariant under the action of H1(G1, A) on η. Therefore {η : Ff,µ → Ff,µ′ } is a torsor over H1(G1, A).

Similarly, for a fixed homomorphism f : G1 → G2, the set of µ : G1 × G1 → A satisfying

ω1(g, h, l)µ(gh, l)µ(g, h) = µ(g, hl)µ(h, l)ω2(f(g), f(h), f(l)),

which parametrizes the isomorphism classes of Ff,µ, is a torsor over H2(G1, A), as any α in the kernel of

{G1 × G1 → A} −→ {G1 × G1 × G1 → A} satisfies α(gh, l) + α(g, h) − α(g, hl) − α(h, l) = 0.

Lastly, equivalence classes of the monoidal categories CωG are parametrized by the set H3(G, A)/Aut(G). However, inner automorphisms act trivially on H3(G, A), so the parametrization is in fact given by H3(G, A)/Out(G).

All of the above properties we derived about CωG can be specialized to V ecωG. Take A = K×, and let our monoidal functors be additive. Then, any morphism η of monoidal functors has the property that η1 ≠ 0, and using µ′(g, g−1)(ηg ⊗ ηg−1 ) = η1 µ(g, g−1), we have that all ηg must be nonzero. Hence any η : Ff,µ → Ff ′,µ′ must be an isomorphism, and hence f = f ′.

Definition 2.1.15. Let G be a group, and K be a field. Let V ecωG be the category of G-graded vector spaces whose associativity isomorphism is given by the 3-cocycle ω. Then ω is called normalized if ω(g, 1, 1) = ω(1, 1, g) = 1 ∈ K×.

Note that by the triangle axiom, ω(g, 1, h) = ω(g, 1, 1)ω(1, 1, h), and hence we can alternatively define normalized by ω(g, 1, h) = 1.

Proposition 2.1.16. Any 3-cocycle ω is cohomologous to a normalized 3-cocycle.

Proof. Let µ be a 2-cochain satisfying µ(g, 1) = ω(g, 1, 1), and µ(1, h) = ω(1, 1, h)−1.

Then, ∂3(µ)(g, 1, h) = g · µ(1, h) µ(g, h)−1 µ(g, h) µ(g, 1)−1, and as G acts trivially on A, we have

∂3(µ)(g, 1, h) = µ(1, h) µ(g, h)−1 µ(g, h) µ(g, 1)−1

= µ(1, h) µ(g, 1)−1

= ω(1, 1, h)−1 ω(g, 1, 1)−1

= ω(g, 1, h)−1,

and we see that ω · ∂3(µ) is normalized. Hence, ω is cohomologous to the normalized cocycle ω · ∂3(µ).

Now consider the case where G = Z/nZ where n > 1 is an integer, and let K = C. As usual, assume that the action of G on K is trivial. Let us consider the cohomology of

Z/nZ.

Let ZG = Z[x]/⟨xn − 1⟩, and take the following resolution:

0 ←− ZG ←−g−1 ZG ←−gn−1+···+1 ZG ←− · · · .

We can apply the functor HomZG(−, C) to get the complex

0 → HomZG(ZG, C) →g−1 HomZG(ZG, C) →1+···+gn−1 HomZG(ZG, C) →g−1 · · · ,

which, under the isomorphisms HomZG(ZG, C) ' C, becomes

0 → C →0 C →×n C →0 C →×n · · · .

Clearly HomZG(ZG, C) → C is an isomorphism, as the map is entirely determined by where we map 1 ∈ ZG. Also, as the action of G on C is trivial, the bottom complex alternates between the 0 map and the map ×n, defined as multiplication by n.

Computing the cohomology, we have that

H0 = C,

H1 = ker(×n)/im(0) = 0,

H2 = ker(0)/im(×n) = C/nC = 0,

and so on in higher degrees. Hence for every i > 0, Hi(Z/nZ, C) = 0.

Now, 0 −→ Z −→ C −→ C× = C/Z −→ 0 is a short exact sequence, and we can construct the following long exact sequence.

0 → Hom(ZG, Z) → Hom(ZG, C) → Hom(ZG, C×) → H1(ZG, Z) → H1(ZG, C) = 0 → H1(ZG, C×) → H2(ZG, Z) → H2(ZG, C) = 0 → H2(ZG, C×) → · · ·

Hence we get that for every i > 0,

0 → Hi(ZG, C×) → Hi+1(ZG, Z) → 0 is exact, and so Hi(ZG, C×) ' Hi+1(ZG, Z).

Also, by applying the functor HomZG(−, Z) to

0 ←− ZG ←−g−1 ZG ←−gn−1+···+1 ZG ←− · · · , it is easy to see that H2i−1(ZG, Z) = 0 and H2i(ZG, Z) = Z/nZ for i > 0, and H0(ZG, Z) = Z.

Now, H∗(Z/nZ, Z) is the cohomology ring given by ⊕i∈Z+ Hi(Z/nZ, Z), and the cup product ⌣ gives maps

Hi(Z/nZ, Z) ⌣ Hj(Z/nZ, Z) ⊆ Hi+j(Z/nZ, Z).

Take α to be a generator of H2(Z/nZ, Z) = Z/nZ. It is well known that cup product with α induces an isomorphism Hi(ZG, Z) →∼ Hi+2(ZG, Z) for all i ∈ Z. For a proof of this result, see Cassels–Fröhlich, Section 8, Theorem 5.

2.1.4 Group Actions on Categories

We have talked a lot about G-gradings on categories so far, but then one may ask, is it possible to have group actions on categories? In order to define such a concept let us first begin by defining a special category.

Definition 2.1.17. Let G be a group. Then, we define Cat(G) to be the monoidal category with objects as elements of G, with the only morphisms being the identity morphisms on the elements, and ⊗ as multiplication in G.

In the language of G-graded categories, Cat(G) is equivalent to CG(1), where 1 is the trivial group.

Now that we have a monoidal category that is essentially a group, we can define group actions on categories.

Definition 2.1.18. Let G be a group, and Aut(C) be the category of auto-equivalences of C, whose morphisms are isomorphisms of functors. If C is a monoidal category, then we write Aut⊗(C) to denote the category of monoidal auto-equivalences of C. Then,

• An action of G on a category C is a monoidal functor

T : Cat(G) −→ Aut(C).

47 • An action of G on a monoidal category C is a monoidal functor

T : Cat(G) −→ Aut⊗(C).

If C admits such a functor, we say G acts on C.

Notice that in the above definition C need not be a monoidal category for the monoidal functor T to be defined. Indeed if we have the isomorphisms in Aut(C) that play nicely with composition Tg’s, then we may have a monoidal structure on T .

Let G act on the category C, and let g −→ Tg be the corresponding action. For every ∼ g, h ∈ G, let γg,h : Tg ◦ Th −→ Tgh be the isomorphism that defines the monoidal structure on the functor Cat(G) → Aut(C).

Definition 2.1.19. A G-equivariant object in C is a pair (X, u), with X and object

∼ in C and a family of isomorphisms u = {ug : Tg(X) → X, g ∈ G} such that

Tg(uh) Tg(Th(X)) Tg(X)

γg,h(X) ug

ugh Tgh(X) X commutes for all g, h ∈ G.

The morphisms of equivariant objects are morphisms in C that commute with ug for every g ∈ G.

We denote the category of equivariant objects of C, or the G-equivariantization of C as CG.

48 Similarly as before, we can examine a linear case by letting G act on V ec, the cate- gory of finite dimensional K-linear vector spaces. In this case we have the following proposition.

Proposition 2.1.20. The action of G on the category V ec as an abelian category correspond to elements of H2(G, K×), whereas any action of G on V ec as a monoidal category is trivial.

Proof. G acts on V ec by T : Cat(G) → Aut(V ec). Let us begin by viewing V ec as an abelian category.

First, notice that Aut(V ec) is a subcategory of End(V ec). It follows that associativity is given by the identity, and ⊗ is the composition of automorphisms.

As T is a monoidal functor from Cat(G) to Aut(V ec), it must satisfy the following monoidal structure axiom.

(T (g) ⊗ T (h)) ⊗ T (l) id T (g) ⊗ (T (h) ⊗ T (l))

(µ(g,h)·idT (gh))⊗idT (l) idT (g)⊗(µ(h,l)·idT (hl)) T (gh) ⊗ T (l) T (g) ⊗ T (hl)

µ(gh,l)·idT (ghl) µ(g,hl)·idT (ghl) T ((gh)l) id T (g(hl)) and this gives us µ(gh, l)µ(g, h) = µ(g, hl)µ(h, l) where µ : G × G → K×.

Hence we see that µ ∈ ker(∂3) = Z2(G, K×).

Notice that any monoidal functor T induces a µ, and any µ satisfying the 2-cocycle condition, together with a functor T , turns T into a monoidal functor.

49 Hence, given a functor T : Cat(G) → Aut(V ec), the set of µ parametrizes the iso- morphism classes of monoidal functors Tµ.

But now, any monoidal functor satisfies the following diagram:

µ(g,h)id T (g) ⊗ T (h) gh T (gh)

ηg⊗ηh ηgh µ0(g,h)id T 0(g) ⊗ T 0(h) gh T 0(gh)

× The morphisms in Aut(V ec) are given by multiplication by k ∈ K , so ηg : T (g) → T 0(g) for every g ∈ G induces η : G → K×, and the converse also holds. Hence, we 0 2 L have that µ = µ · ∂ (η), and this tells us that the two functors Tµ = T (g), and L 0 Tµ0 = T (g) are isomorphic.

Hence the set of µ, given by H2(G, K×) parametrizes the classes of monoidal functors

Tµ : Cat(G) → Aut(V ec).

Now we will view V ec as a monoidal category. Taking V ec as a monoidal category, we have that Tg : V ec → V ec is a monoidal equivalence for all g ∈ G. V ec is the category of finite dimensional vector spaces over K, so any simple object of V ec is 1 dimensional. It follows that any isomorphism of simple objects are entirely

×k ∼ g determined by φ : K → K. Hence for any monoidal functors Tg, Th, µX,Y satisfying

g g h JX,Y = µX,Y · idX⊗Y , and µX,Y satisfying similar conditions, satisfies the following commutative diagram.

g µX,Y ·idX⊗Y Tg(X) ⊗ Tg(Y ) Tg(X ⊗ Y )

ηX ⊗ηY ηX⊗Y h µX,Y ·idX⊗Y Th(X) ⊗ Th(Y ) Th(X ⊗ Y )

50 So Tg’s are isomorphic pairwise for all g ∈ G. In particular, T1 must be the identity functor for 1 ∈ G. Therefore T is trivial.

2.1.5 Duals in Monoidal Categories

Now we are almost ready to talk about rigid monoidal categories, but in order to do so, we must talk about the notion of “dual”.

Definition 2.1.21. Let (C, ⊗, 1, a, ι) be a monoidal category, and let X be an object of C. Then, an object X∗ in C is said to be a left dual of X if there exist morphisms

∗ ∗ evX : X ⊗ X → 1 and coevX : 1 → X ⊗ X called evaluation and coevaluation such that

a ∗ X = 1 ⊗ X coev−→X ⊗idX (X ⊗ X∗) ⊗ X X,X−→,X X ⊗ (X∗ ⊗ X) idX−→⊗evX X ⊗ 1 = X

−1 id ∗ ⊗coev a ∗ ∗ ev ⊗id ∗ X∗ = X∗ ⊗ 1 X −→ X X∗ ⊗ (X ⊗ X∗) X−→,X,X (X∗ ⊗ X) ⊗ X∗ X−→X 1 ⊗ X∗ = X∗ are identities.

∗ 0 ∗ An object X in C is called the right dual of X if there exist morphisms evX : X⊗ X → 0 ∗ 1 and coevX : 1 → X ⊗ X such that

−1 id ⊗coev0 a ∗ ev0 ⊗id X = X ⊗ 1 X−→ X X ⊗ (∗X ⊗ X) X,−→X,X (X ⊗ ∗X) ⊗ X X−→ X 1 ⊗ X = X

−1 coev0 ⊗id∗ a∗ ∗ id∗ ⊗ev0 ∗X = 1 ⊗ ∗X −→X X (∗X ⊗ X) ⊗ ∗X −→X,X, X ∗X ⊗ (X ⊗ ∗X) −→X X ∗X ⊗ 1 = ∗X are identities.

∗ ∗ 0 Notice that if X is a left dual of X, then X is a right dual of X by taking evX = evX∗ , 0 ∗ ∗ ∗ ∗ and coevX = coevX∗ . Hence (X ) ' X ' ( X) for any X admitting left and right duals. Also, in any monoidal category, 1∗ = ∗1 = 1 with respect to ι and ι−1.

51 Now, there is a well-defined notion of adjoint, and we want to show that our notion of left and right duals coincides with the notion of adjoints. In order to do so, let us define what are.

Definition 2.1.22. A functor F : D → C is a left adjoint functor if for each object X in C, there exists a terminal morphism from F to X. If, for each object X in C, we choose an object G0X of D for which there is a terminal morphism X : F (G0X)X from F to X, then there is a unique functor G : C → D such that G(X) = G0X and

X0 ◦ FG(f) = f ◦ X for f : X → X a morphism in C; F is then called a left adjoint to G.

The right adjoint of a functor is similarly defined.

It is much more common to define adjoint functors using Hom(−, −) functors, but the above definition is much more useful in our case to show that our notion of duals coincide with the notion of adjoints.

Proof. Suppose G : C → C is an endofunctor of C, and has a left dual, denoted F . Then, we have the natural isomorphism

∼ evG : F ⊗ G → 1 but note that in the category of End(C), ⊗ is given by composition.

Hence, for every X ∈ C,

X = evG(X) := FG(X) = (F ⊗ G)(X) → 1(X) = X.

Also, if f : X → X0 is a morphism in C for some X,X0 ∈ C, we have the following commuting diagram by definition of evG.

FG(f) FG(X) FG(X0)

X X0 f X X0

52 Hence X0 ◦ FG(f) = f ◦ X , and we have that F is the left adjoint of G.

We now want to show the opposite direction, so let us suppose that F is the left adjoint of G in End(C). We have that for every X ∈ C, G(X) is an object with a

0 terminal morphism X : FG(X) → X, and X0 ◦ FG(f) = f ◦ X for any f : X → X in C.

In particular, FG = F ⊗ G in End(C), so we take evG to be . Since evG =  defines an isomorphism between F ⊗ G and 1, and similarly, one can define an isomorphism

0 0−1  between G ⊗ F and 1, as G is the right adjoint of F . Let coevG =  .

Now, for any object X ∈ C,(idG ⊗ )(G ⊗ (F ⊗ G))(X) = idG(G)(X) = G(X), and 0 similarly ( ⊗ idG)((G ⊗ F ) ⊗ G)(X) = (1 ⊗ idG)(1 ⊗ G)(X) = G(X), so clearly the following diagram commutes.

G ⊗ (F ⊗ G) idG⊗ G ⊗ 1 = G

= idG 0 (G ⊗ F ) ⊗ G  ⊗idG 1 ⊗ G = G

However, End(C) is a strict monoidal category, which tells us that aG,F,G is not just an isomorphism, but an equality. Also, 0 is an isomorphism, so we can reverse the bottom arrow, and rewriting evG and coevG as we’ve defined before we have,

G ⊗ (F ⊗ G) idG⊗evG G

aG,F,G idG (G ⊗ F ) ⊗ G G coevG⊗idG

Note that this diagram precisely defines the first condition of left duals.

Similarly, we get the following commuting diagram

53 (F ⊗ G) ⊗ F evG⊗idF F

−1 id aF,G,F F F ⊗ (G ⊗ F ) F idF ⊗coevG and the two diagrams show that the left adjoint functors are precisely the left dual objects in End(C). The proof is similar for the right adjoint functors and right dual objects.

If we have left/right dual objects, we want them to be unique, and fortunately, our notion of dual has such a property.

Proposition 2.1.23. Let C be a monoidal category. If X ∈ C has a left/right dual object, then it is unique up to a unique isomorphism.

∗ ∗ Proof. Let X1 ,X2 be two left duals of X. Let (e1, c1) and (e2, c2) be the ev and coev maps in respective order.

∗ ∗ We now have a morphism α : X1 → X2 defined by

−1 id ∗ ⊗c2 a ∗ ∗ e1⊗id ∗ ∗ X1 ∗ ∗ X1 ,X,X2 ∗ X2 ∗ α := X1 −→ X1 ⊗ (X ⊗ X2 ) −→ (X1 ⊗ X) ⊗ X2 −→ X2

∗ ∗ and similarly we have a morphism β : X2 → X1 defined by

−1 id ∗ ⊗c1 a ∗ ∗ e2⊗id ∗ ∗ X2 ∗ ∗ X2 ,X,X1 ∗ X1 ∗ β := X2 −→ X2 ⊗ (X ⊗ X1 ) −→ (X2 ⊗ X) ⊗ X1 −→ X1 .

By suppressing the associativity law, (i.e. just by taking (A ⊗ B) ⊗ C = A ⊗ (B ⊗ C)) we get the following diagram:

54 id ∗ ⊗c1 ∗ X1 ∗ ∗ X1 X1 ⊗ X ⊗ X1 id id⊗c2 id⊗c2⊗id

1 ∗ id⊗c1 ∗ ∗ ∗ id⊗e2⊗id ∗ ∗ X1 ⊗ X ⊗ X2 X1 ⊗ X ⊗ X2 ⊗ X ⊗ X1 X1 ⊗ X ⊗ X1

e1⊗id e1⊗id e1⊗id

∗ id⊗c1 ∗ ∗ e2⊗id ∗ X2 X2 ⊗ X ⊗ X1 X1

Note that the composition of maps on the left side is α, the composition of maps on the top is id ∗ , and the composition of maps on the bottom is β. Hence β ◦α = id ∗ , X1 X1 and using similar construction we can see that α ◦ β = id ∗ . X2

The uniqueness of the isomorphism above has a very subtle nuance. It should rather be interpreted as being “unique up to a canonical isomorphism”. The canonical isomorphism is precisely given by α, i.e. (e1 ⊗ id) ◦ (id ⊗ c2) and indeed if we alter α 0 0 0 to α , this would induce a different e1 and c2, thereby altering the dual structure.

Now that we have the duality of objects, let us talk about duality of morphisms.

Definition 2.1.24. Let C be a monoidal category. If X,Y are objects in C that have left duals X∗,Y ∗, and f : X → Y is a morphism in C, one defines the left dual f ∗ : Y ∗ → X∗ of f by

−1 id ∗ ⊗coev a ∗ ∗ (id ∗ ⊗f)⊗id ∗ ev ⊗id ∗ f ∗ := Y ∗ Y −→ X Y ∗ ⊗(X ⊗X∗) Y −→,X,X (Y ∗ ⊗X)⊗X∗ Y −→ X (Y ∗ ⊗Y )⊗X∗ Y−→X X∗

Likewise, if X,Y have right duals ∗X, ∗Y , and f : X → Y is a morphism in C, we define the right dual ∗f : ∗Y → ∗X of f by

0 0 coev ⊗id∗ a∗ ∗ id∗ ⊗(f⊗id∗ ) id∗ ⊗ev ∗f := ∗Y −→X Y (∗X ⊗X)⊗∗Y X,X,−→ Y ∗X ⊗(X ⊗∗Y ) X −→ Y ∗X ⊗(Y ⊗∗Y ) −→X Y ∗X

55 We would also like for the notion of dual objects and morphisms to behave nicely with monoidal functors, and indeed, we will see that if F is a monoidal functor, F (X)∗ = F (X∗). For this proposition, we want to be rigorous in showing that dual objects behave nicely with monoidal functors, so we will not be using the convention 1 = F (1), and use the canonical isomorphism φ : 1 →∼ F (1).

Proposition 2.1.25. Let C and D be monoidal categories, and let X,X∗C be objects such that X∗ is the left dual of X. Let (F, J, φ) be a monoidal functor from C to D. Then, F (X∗) is the left dual of F (X).

Proof. To show this, we define evF (X), coevF (X) to be:

−1 ∗ JX∗,X ∗ F (evX ) φ evF (X) := F (X ) ⊗ F (X) → F (X ⊗ X) → F (1) → 1

J−1 φ F (coevX ) ∗ X,X∗ ∗ coevF (X) := 1 → F (1) → F (X ⊗ X ) → F (X) ⊗ F (X )

Now we want to show that evF (X) and coevF (X) satisfy the identity map.

−1 F (coevX )⊗idF (X) JX,X∗ ⊗id F (X) F (1) ⊗ F (X) F (X ⊗ X∗) ⊗ F (X) (F (X) ⊗ F (X∗)) ⊗ F (X) ···

id J1,X =id JX⊗X∗,X JX⊗X∗,X ◦(JX,X∗ ⊗id) F (coevX ⊗idX ) id F (X) F (1 ⊗ X) F ((X ⊗ X∗) ⊗ X) F ((X ⊗ X∗) ⊗ X) ···

aF (X),F (X∗),F (X) id⊗JX∗,X id⊗F (evX ) F (X) ⊗ (F (X∗) ⊗ F (X)) F (X) ⊗ F (X∗ ⊗ X) F (X) ⊗ F (1) F (X)

JX,X∗⊗X ◦(id⊗JX∗,X ) JX,X∗⊗X JX,1=id id F (aX,X∗,X ) id F (id⊗evX ) F (X ⊗ (X∗ ⊗ X)) F (X ⊗ (X∗ ⊗ X)) F (X ⊗ 1) F (X)

It is easy to see that this diagram commutes, and represents the first identity condition for evF (X) and coevF (X). The other identity conditions can be shown in similar ways.

56 In a similar manner, one can check that if U, V, W ∈ C with left/right duals, and if f : V → W and g : U → V , then fg)∗ = g∗f ∗. Also, U ⊗ V has a left dual (U ⊗ V )∗ = V ∗ ⊗ U ∗ and a right dual is similarly given.

We have shown that the notion of duality behaves well with monoidal functors, but the notion has even nicer properties with the HomC(−, −) functor.

Proposition 2.1.26. Let C be a monoidal category, and let U, V, W be an object in C.

• If V has a left dual V ∗, then there are natural adjunction isomorphisms:

∼ ∗ HomC(U ⊗ V,W ) −→ HomC(U, W ⊗ V )

∼ ∗ HomC(U, V ⊗ W ) −→ HomC(V ⊗ U, W )

• If V has a right dual ∗V , then there are natural adjunction isomorphisms:

∗ ∼ HomC(U ⊗ V,W ) −→ HomC(U, W ⊗ V )

∗ ∼ HomC(U, V ⊗ W ) −→ HomC(V ⊗ U, W )

Proof. We will prove the two adjunction isomorphisms for the left dual case.

Let f : U ⊗ V → W , and g : U → W ⊗ V ∗.

The first adjunction is given by the maps

φ : f −→ (f ⊗ idV ∗ ) ◦ (idU ⊗ coevV )

ψ : g −→ (idW ⊗ evV ) ◦ (g ⊗ idV )

Notice that (ψ ◦ φ)(f) is given by the following diagram:

57 id⊗coev ⊗id f⊗id⊗id id ⊗ev U ⊗ V V U ⊗ (V ⊗ V ∗) ⊗ V W ⊗ V ∗ ⊗ V W V W

id id f⊗id⊗id f

id⊗coevV ⊗id id idU⊗V ⊗evV U ⊗ V U ⊗ (V ⊗ V ∗) ⊗ V U ⊗ V ⊗ V ∗ ⊗ V U ⊗ V

The diagram obviously commutes, and the bottom row is just the identity, and hence (ψ ◦ φ)(f) = f. Using a similar construction, we can show that φ ◦ ψ)(f) = f, and hence we have that ψ = φ−1, and conclude that φ is an isomorphism.

Now let f : U ⊗∗ V → W , and g : U → W ⊗ V .

For the second adjunction isomorphism, we can take two maps:

φ : f −→ (idV ⊗ f) ◦ (coevV ⊗ idu)

ψ : g −→ (evV ⊗ idW ) ◦ (idV ∗ ⊗ g)

As in the above case, we have the diagram of (ψ ◦ φ)(f) given by:

id⊗coev ⊗id id⊗id⊗f ev ⊗id V ∗ ⊗ U V V ∗ ⊗ V ⊗ V ∗ ⊗ U V ∗ ⊗ V ⊗ W V W W

id id id⊗id⊗f f

id⊗coevV ⊗id id evV ⊗idV ∗⊗W V ∗ ⊗ U V ∗ ⊗ V ⊗ V ∗ ⊗ U V ∗ ⊗ V ⊗ V ∗ ⊗ U V ∗ ⊗ U

Hence we have that (ψ ◦ φ)(f) = f, and by a similar diagram, (φ ◦ ψ)(f) = f. Therefore, ψ = φ−1, and φ is an isomorphism.

One can notice that Proposition 18 precisely tells us that if V has a left dual, then the left adjoint functor of (V ⊗ −) is (V ∗ ⊗ −), and if V has a right dual, then the right adjoint functor of (− ⊗ V ) is (− ⊗ V ∗).

58 Also, Proposition 18 provides another proof of Proposition 16, but this requires the Yoneda Lemma. While the Yoneda Lemma is not required for the purposes of this thesis, it is a very significant result in . The statement is that if C is a locally small category, then

Nat(HomC(∗, −), HomC(∗, −)) ' HomC(∗, ∗).

This result generalizes Cayley’s Theorem, and in fact, we can retrieve Cayley’s The- orem by taking C to be a category with a single object with all of its morphisms as isomorphisms. Then, HomC(∗, ∗) forms a group G under composition, and any covariant functor from C to Set takes the unique object of C to some set X, and the group of morphisms G to the group of permutations of X, and this tells us that X is a G-set. But then, a natural transformation between such functors is the equivariant maps from a G-set to itself, and the group of such equivariant maps form a subgroup of the group of permutations of G. The statement of Yoneda Lemma then tells us that the group of equivariant maps from a G-set to itself is isomorphic to G, hence G is isomorphic to a subgroup of the group of permutations of G, which is precisely the statement of Cayley’s Theorem.

Now, using Proposition 18 and the Yoneda Lemma, we have this alternate proof of Proposition 16:

Proof. Let C be a monoidal category. If X ∈ C has a left dual X∗, then

∗ ∼ HomC(X ⊗ U, W ) −→ HomC(U, X ⊗ W ).

If we set U = 1, then we have the natural isomorphism

Hom(X∗,W ) ' Hom(1,X ⊗ W ) so if X1 and X2 are left duals of X, we have the natural isomorphism Hom(X1,W ) '

Hom(X2,W ). By the Yoneda Lemma, we then have the canonical isomorphism

X1 ' X2, and we have the proposition.

59 Now that we’ve fully explored the notion of duality in a monoidal category, we are ready to define rigid monoidal categories.

Definition 2.1.27. An object in a monoidal category is called rigid if it has left and right duals. A monoidal category C is rigid if every object of C is rigid.

So what are some examples of rigid monoidal categories? Our notion of duality coincides with the notion of duality for vector spaces, and in fact K-V ec, the category of finite dimensional vector spaces over the field K is a rigid monoidal category. The dual of any object V is its dual space V ∗, and the required maps are:

∗ ∗ • evV : V ⊗ V → K being the contraction, and is defined by, if f ∈ V and

v ∈ V , evV : f ⊗ v 7→ f(v) ∈ K.

∗ P ∗ • coevV : K → V ⊗ V defined by, if c ∈ K, coevV : c 7→ i cvi ⊗ vi , where ∗ ∗ {v1, ··· , vn} is the basis of V , and {v1, ··· , vn} is the dual basis.

But note that K-V ec, the category of all vector spaces (not necessarily finite) over K is not rigid, since coevaluation maps are not well-defined for infinite dimensional vector spaces.

Similarly, Rep(G), the category of finite dimensional representation of G over K is

−1 ∗ rigid. If we take ρ as usual to be the G-representation, ρV ∗ (g) = (ρV (g )) , and

−1 T this is usually given by (ρV (g )) , which is called the contragradient representation. Recall that the dual space is the vector space of linear functionals from V to K, so the action of a functional in the dual space on a vector in V can be represented by the matrix multiplication, f T v, and hence to be consistent with this action, we define the evaluation and coevaluation maps to be:

−1 −1 evG : ρV ∗ (g) ⊗ ρV (g) 7→ ρV (g )ρV (g) = ρV (g g) = 1

60 X coevG : 1 7→ ρV ∗ (g) ⊗ ρV (g) g∈G

Another example of a rigid monoidal category is V ecG, the category of finite dimen- sional G-graded vector spaces over K, but it is rigid if and only if G is a group.

We’ve already discussed the tensor structure on V ecG, given by δg ⊗ δh = δgh, so ∗ ∗ if we define δg = δg−1 , δg ⊗ δg = δg−1 δg = δ1 = K, and this induces the evaluation and coevaluation maps in an obvious way, where the coevaluation map is given by

∗ the diagonal sum. Notice that if we have that δg 6= δg−1 , i.e. G is not a group but

∗ a monoid, Hom(δg ⊗ δg, K) = Hom(δhg, δ1) = ∅, where hg 6= 1. Hence we cannot define a dual structure, as we already have a predetermined tensor structure on the category. Therefore V ecG is not rigid if G is not a group.

We can also think about the duality notion as being a functor. In a monoidal category C with left and right duals, one can define a left duality functor by

X → X∗, f → f ∗, C → C.

Clearly (−)∗ reverses the order of arrows, so it must be a contravariant functor. Also, the order of tensors are reversed so we have

∗ ∗ ∗ ∗ ∗ (X ⊗ Y ) = Y ⊗ X = X ⊗op Y

(f ◦ g)∗ = g∗ ◦ f ∗

∗ (−) −1 aX,Y,Z −→ aZ∗,Y ∗,X∗ where

aX,Y,Z :(X ⊗ Y ) ⊗ Z −→ X ⊗ (Y ⊗ Z)

−1 ∗ ∗ ∗ ∗ ∗ ∗ aZ∗,Y ∗,X∗ : Z ⊗ (Y ⊗ X ) −→ (Z ⊗ Y ) ⊗ X

Hence we have that (−)∗ : C∗ → Cop, where C∗ is the dual category, is a monoidal functor, and it is easy to see from Proposition 18 that (−)∗ and ∗(−) are quasi- inverses. This tells us that at least for rigid monoidal categories, the dual categories

61 and opposite categories are equivalent. (In the sense that the duality functors serve as monoidal equivalences)

We also have a very surprising result for the natural transformations between monoidal functors in rigid monoidal categories.

Proposition 2.1.28. If C, D are rigid monoidal categories, F1, F2 are monoidal functors and η : F1 → F2 is a natural transformation, then η is an isomorphism.

∗ ∗ ∗ ∗ ∗ Proof. Let us explicitly write out the map ηX∗ : F2(X ) → F1(X ).

coev ∗ 0 ∗ ∗ ∗ F1(X ) ⊗id ∗ ∗ ∗ ∗ ∗ ηX∗ := F2(X ) −→ ( F1(X ) ⊗ F1(X )) ⊗ F2(X )

a ∗ ∗ ∗ ∗ ∗ −→ F1(X ) ⊗ (F1(X ) ⊗ F2(X ))

id⊗η⊗id ∗ ∗ ∗ ∗ ∗ −→ F1(X ) ⊗ (F2(X ) ⊗ F2(X ))

0 id⊗ev ∗ F2(X ) ∗ ∗ −→ F1(X )

Using the explicit write-out of η, by naturality of η we obtain the following diagram:

coev⊗id ∗ a ∗ ev F1(X) (F1(X) ⊗ F1(X )) ⊗ F1(X) F1(X) ⊗ (F1(X ) ⊗ F1(X)) F1(X)

η id⊗id⊗η id⊗η⊗η φ η◦a coev ∗ ∗ ev F2(X) (F1(X) ⊗ F1(X )) ⊗ F2(X) F1(X) ⊗ (F2(X ) ⊗ F2(X)) F1(X)

id η⊗η⊗id η⊗id⊗id η

coev ∗ a ∗ ev F2(X) (F2(X) ⊗ F2(X )) ⊗ F2(X) F2(X) ⊗ (F2(X ) ⊗ F2(X)) F2(X)

Notice that all of the boxes commute except for the top right box, and in order for the top right box to commute, φ = idF1(X). Also notice that the top row is the identity ∗ map, and the middle row is precisely ηX∗ , and by the commutativity of the diagram ∗ we have that η ◦ ηX∗ = id. Similarly, the middle row and the last row imply that

62 ∗ ∗ ηX∗ ◦ η = id. Therefore, we have that ηX∗ is the inverse of η, thereby showing that η is an isomorphism.

Lastly, we can find a correspondence between special left modules and rigid bimodules over an algebra A.

Proposition 2.1.29. M ∈ A-bimod has a dual(left) if and only if it is finitely gen- erated projective when considered as a left A-module.

Proof. First, note that having a left/right dual implies that it has a finite dual basis. So let us suppose M has a left dual, i.e. has a finite dual basis.

∗ ∗ If we take (m1, ··· , mk) to be the basis of M, then (m1, ··· , mk) is the dual basis.

Now we take a free module F with basis (x1, ··· , xk) and a surjective homomorphism

φ : F → M defined by φ : xi 7→ mi.

∗ But since we have a bijective correspondence between each mi : M → A and xi, we have the induced map:

k k X ∗ X ∗ ψ : (m · mi )mi 7→ (x · mi )xi i=1 i=1

Clearly, φ ◦ ψ = idM , and φ is surjective. Hence we have:

F (projective) φ φ M id M and we see that M is projective.

Conversely, suppose M is a finitely generated .

63 Let F be a free module with an epimorphism from F to M. Then, as M is projective, this epimorphism splits, and we have a monomorphism from M to F . By choosing a basis of M, which is in our case {m1, ··· , mk}, we have a finitely generated free 0 0 0 0 module M with {m1, ··· , mk} as the basis with an epimorphism φ : M → M defined 0 by mi 7→ mi.

0 0 0 As M is a free left A-module, for any m ∈ M , there exists a1, ··· , ak ∈ A such that

0 Pk 0 m = i=1 aimi. So we will define a map:

m∗ : M 0 → A

0 P defined by m 7→ ai.

∗ ∗ This induces the dual basis {m1, ··· , mk}, where

∗ 0 mi :mi 7→ ai

0 mj 7→ 0 ∀j 6= i

Now, setting the monomorphism we had from M to F as ψ, ψ induces a monomor-

0 0 phism from M to M , which we will call ψ . It is clear that φ ◦ ψ = idM as M is projective.

∗ Hence we have that the basis for the left dual of M is given by {mi ψ}.

64 Chapter 3: Tensor Categories, Grothendieck Rings, and

Z+ Rings

3.1 Tensor Categories and Grothendieck Rings

Now, we have all the ingredients to define what tensor categories are.

Definition 3.1.1. Let C be a locally finite K-linear abelian rigid monoidal category. We say C is a multitensor category over K if the tensor bifunctor ⊗ : C × C → C is bilinear on morphisms. We say that C is indecomposable if C is not categorically equivalent to a direct sum of nonzero multitensor categories. Lastly, if EndC(1) 'K, then we call C a tensor category.

A finite semisimple multitensor category is called a multifusion category, and a fusion category is a multifusion category with EndC(1) 'K. It can alternatively be defined as a finite semisimple category.

Recall that V ec, the category of finite dimensional vector spaces over K is a rigid monoidal category. Since it is comprised of finite dimensional vector spaces, it is locally finite and it is clearly abelian. Also, by taking ⊗ to be the Kronecker product between linear transformation matrices, we see that ⊗ is bilinear on morphisms in V ec. Also, any finite dimensional vector space can be decomposed into a direct sum of 1-dimension vector spaces over K, by choice of a set of basis. We can also see that V ec is not just locally finite, but finite, since any vector space is necessarily a free module, and hence projective, and there is exactly 1-isomorphism class of simple objects in V ec. Lastly, EndC(1) is clearly just K, and hence we have that V ec is a fusion category.

Similarly, Rep(G), the category of finite dimensional representations of a group G over K, is a tensor category. Now, Maschke’s Theorem states that if K is of characteristic

65 0 or char(K) is coprime to |G|, then every representation of G over K is a direct sum of irreducible representations, or equivalently, KG, the group algebra of G over K is semisimple. Therefore, we have that Rep(G) over K whose characteristic doesn’t divide |G| is a finite semisimple tensor category, i.e. a fusion category.

ω Lastly, V ecG, much like V ec is a tensor category. However, if G is an infinite group, this implies infinitely many non-isomorphic simple objects corresponding to each g ∈

ω G, hence V ecG is a fusion category if G is a finite group.

For a multitensor category, the bifunctor ⊗ has a very nice property, but in order to show that we need the following proposition.

Proposition 3.1.2. Let F , G be additive functors where F is the left adjoint of G and G is the right adjoint of F . Then, F is right exact, and G is left exact.

Proof. Let 0 → A → B → C → 0 be a short exact sequence. Since for any X, Hom(X, −) is left exact, 0 → Hom(F (X),A) → Hom(F (X),B) → Hom(F (X),C) is exact, and by the naturality of adjunction isomorhpisms, we have the following diagram:

0 Hom(F (X),A) Hom(F (X),B) Hom(F (X),C)

' ' ' 0 Hom(X,G(A)) Hom(X,G(B)) Hom(X,G(C))

By the Yoneda Lemma, the exactness of the bottom row implies that 0 → G(A) → G(B) → G(C) is exact, which implies G is left exact. Applying Hom(−,G(X)) on the short exact sequence yields a similar diagram, and we can conclude that F is right exact.

With Proposition 21, we have the follwing proposition.

66 Proposition 3.1.3. Let C be a multitensor category. Then, the bifunctor ⊗ is exact in both factors.

Proof. In our discussion of rigid monoidal categories, we saw that the functors (V ⊗−) and (− ⊗ V ) have both left and right adjoint functors if V has left and right duals. Since we are in a multitensor category, V is rigid, and therefore the above functors have both left and right adjoints. Applying Proposition 21 yields the desired result.

Ultimately, the goal of this paper is to provide a context in which we can examine mathematical objects that are called Grothendieck Rings, and it is useful to define a category that has slightly weaker conditions than a tensor category.

Definition 3.1.4. A multiring category over K is a locally finite K-linear abelian monoidal category C with bilinear and biexact tensor product. If EndC(1) = K, then we call C a ring category.

Notice that for multiring categories, we must impose a condition that tensor products are biexact. For multitensor categories, we saw from Proposition 22 that rigidity forces tensor products to be biexact, but as multiring categories do not require such condition, we must impose the restriction on tensor products separately.

An example of a ring category is V ecG, the category of finite dimensional G-graded vector spaces over K, with G a monoid. If G is a group, we saw previously that the existence of inverses extends to the rigidity condition of tensor categories, but as monoids do not require inverses to exist, we have a ring category.

As before, since we’ve defined tensor categories, we also want to define functors be- tween tensor categories that behave nicely with them.

Definition 3.1.5. Let C and D be multiring categories over K, and let F : C → D be an exact and faithful K-linear functor.

67 F is called a quasi-tensor functor if it has a functorial isomorphism J : F (−) ⊗ F (−) → F (− ⊗ −), and F (1) = 1.

A quasi-tensor functor (F,J) is called a tensor functor if it satisfies the monoidal structure axiom.

Finally, we have gotten to the primary object of interest of this paper. Here, we have the definition of Grothendieck rings.

Definition 3.1.6. Let C be a multiring category over K. Let Xi be representatives of the isomorphism classes of simple objects in C. Then, the tensor product on C induces a natural multiplication on the Grothendieck group Gr(C) defined by:

X XiXj := [Xi ⊗ Xj] = [Xi ⊗ Xj : Xk]Xk, i, j ∈ I k∈I

With multiplication defined as above, and addition defined as previously, Gr(C) is called the Grothendieck ring of C.

The multiplication rule above is called the fusion rule of C, and is of importance in physics, as well as when we are examining fusion categories. The detailed explanation on fusion categories, or applications of fusion rules in physics is beyond the scope of this paper.

Grothendieck rings are very important because they have the potential to give us insight about categories, which are immense mathematical objects, by allowing us to use ring theoretic tools, which are much easier to handle. The caveat is that the definition of Grothendieck rings isn’t quite so useful, as it does not give us informa- tion about the structure of the ring unless we arduously compute the ring, which is often times an incredibly difficult task. However, there is a special case, where a

Grothendieck ring becomes a very special type of ring called Z+-ring. However, to show this, we must know and understand the properties of Z+-rings, which happens to be the topic of our next discussion.

68 3.2 Z+-rings

Definition 3.2.1. Let Z+ denote the semiring of non-negative integers.

Let A be a ring which is free as a Z-module with the following properties:

+ P k • A Z basis of A is a basis B = {bi}i∈I such that bibj = k∈I cijbk, where

k + cij ∈ Z .

• A Z+ ring is a ring with fixed Z+ basis and with identity 1, which is a non- negative linear combination of the basis elements.

• A unital Z+ ring is a Z+ ring such that 1 is a basis element.

Note that every Z+ ring is assumed to have an identity, but notice that not all Z+ rings are unital. It is important when understanding Z+ rings to see that the coefficients have to be non-negative induces a limitation on how we can change the basis, i.e. we cannot just pick an arbitrary basis as we can do for vector spaces.

+ Let A be a Z ring and let I0 be the set of i ∈ I such that bi occurs in the decompo- sition of 1.

Let τ : A → Z denote the group homomorphism defined by

τ(bi) = 1 if i ∈ I0, and 0 if i 6∈ I0

In our discussion of rigid monoidal categories and tensor categories, the notion of duality and adjointness played a big role. As was stated previously, the Grothendieck rings become Z+ rings under a special case, so we can expect an equivalent of dual structure on Z+ rings.

+ Definition 3.2.2. A Z ring A with basis {bi}i∈I is called a based ring if there exists an involution i 7→ i∗ of I such that the induced map

X ∗ X a = aibi 7→ a = aibi∗ , ai ∈ Z i∈I i∈I

69 is an anti-involution of the ring A, i.e.

∗ ∗∗ i 7→ i 7→ i = idI

∗ ∗∗ a 7→ a 7→ a = idA

∗ ∗ and we have that τ(bibj) = 1 if i = j , and 0 if i 6= j .

Perhaps it is trivial from our definition of I0, but it is nice to explicitly identify how the multiplicative unit is defined in a unital based ring.

Proposition 3.2.3. In every based ring, 1 = P a b . In other words, if P a b = i∈I0 i i i∈I0 i i

1, then ai = 1 for every i.

P P ∗ Proof. a b = 1, hence a b ∗ = 1, as the induced map from i 7→ i is an i∈I0 i i i∈I0 i i involution, i.e. an automorphism of the ring.

Note that for j ∈ I0, X X τ(bj aibi∗ ) = aiτ(bjbi∗ )

i∈I0 i∈I0 but i 7→ i∗ is an automorphism. Hence it must be that the only nonzero term in the P sum is a τ(b b ∗ ), but as a b ∗ = 1, we have j j j i∈I0 i i

aj = ajτ(bjbj∗ ) = τ(bj) = 1 and so we have the proposition.

The basis elements that appear in the decomposition of the unit in a Z+ ring have some interesting properties that are similar to the norm of an orthonormal basis for a vector space.

70 2 Proposition 3.2.4. In a based ring, for i, j ∈ I0, bi = bi and bibj = 0 for i 6= j.

Proof. WLOG let I0 = {1, ··· , n}.

Suppose bibj has bl for some l 6∈ I0 in its decomposition. Since ci > 0 for every ci such P that cibi = 1, and

X 2 X 2 2 X 1 = ( cibi) = ci bi + cjckbjbk

so bl is in the decomposition of 1, which is absurd.

+ Hence ∀i, j ∈ I0, bibj must be a Z -linear combination of b1, ··· , bn.

But now, X X 1 = τ(bi) = τ(bi cjbj) = cjτ(bibj) = 1

Hence there exists a unique j ∈ I0 such that bibj = bk for some k ∈ I0 and cj = 1. 0 For any other j 6= j, bibj0 must be 0.

Also, by similar logic, for each i ∈ I0, there exists a unique j ∈ I0 such that bjbi = bk 0 for some k ∈ I0, cj = 1, and bj0 bi is 0 for any j 6= j.

0 It follows that ∀i ∈ I0, there exists a unique i ∈ I0 such that bibi0 6= 0 and as we run 0 through i ∈ I0, i runs through all of I0.

We also have that 1 = b1 + ··· + bn, i.e. the coefficients must be all 1.

0 P P Now suppose for some i ∈ I0, i 6= i. Then, we have that bi bj = ( bj)bi, so bibi0 = bi00 bi 6= 0. Hence

X X 1 = blbl0 + bibi0 + bi00 bi = blbl0 + 2bibi0 l,l06=i l,l06=i

But each of bjbj0 correspond to a unique basis element, so we have that for some s, t ∈ I0, bs + bt = 2bs, which is absurd since bs and bt are linearly independent.

Hence it must be that ∀i 6= j, bibj = 0.

71 2 P 2 P P P 2 Now suppose that bi = bj for some j 6= i. Then, ( bs)( bs) = ( bs)bt, but 2 2 notice that every bt has exactly one bs such that bsbt 6= 0. But for bj, that is impossible, 2 2 2 2 since bi is the only possible element such that bi bj 6= 0, since bi = bj, but bi bj = bi(bibj) = bi · 0 = 0, which is absurd.

2 Hence bi = bi, and this concludes the proof.

As a corollary to Proposition 24, we get that any involution on a based ring must fix any i in I0.

The unique restrictions on the involution of a based ring also give rise to a useful property regarding the coefficients in the decomposition of the product of two basis elements.

k∗ Proposition 3.2.5. In any based ring, cij , defined as the coefficient of bk∗ in the decomposition of bibj, is invariant under the cyclic permutation of i, j, k.

P k Proof. bibj = cijbk, so if we take a triple product of the basis elements, we get

k∗ k∗ τ(bibjbk) = τ(cij bk∗ bk) = cij and from the definition of τ, it is easy to see that τ(xy) = τ(yx). Hence we have that

k∗ k∗ j∗ cij is cyclically symmetric, which implies cij = cki .

Definition 3.2.6. A multifusion ring is a based ring of finite rank, and a fusion ring is a unital based ring of finite rank.

If we recall the definition of multifusion categories and fusion categories, one can see that the multifusion rings and fusion rings are analogously defined. Hence it should be that based rings are somehow corresponding to tensor categories, and we will see that once we’ve fully explored Z+ rings.

72 Proposition 3.2.7. Let A be a multifusion ring with basis {bi}. For any x ∈ A, the P element Z(x) := bixbi∗ is central in A.

P P ⊗ P P Proof. For a fixed x ∈ A, define ⊗ on A to be aibi × ajbj 7→ ( aibi)x( ajbj). Note that if we fix a k,

X X r bkbi = ckibr i r,i

r and cki = τ(bkbibr∗ ).

P r However, as r,i ckibr runs through all r, i,

X r X i∗ ckibr = cr∗kbr r,i r∗,i∗

r and as cki is invariant under cyclic permutations, we have

X i∗ X cr∗kbi∗ = br∗ bk r,i r and by using the bilinearity of ⊗, we get

X X r bkbi ⊗ bi∗ = ckibr ⊗ bi∗ i r,i

X i∗ = cr∗kbr ⊗ bi∗ r,i

X i∗ = br ⊗ cr∗kbi∗ r,i

X = br ⊗ br∗ bk r

P Hence we see that Z(x) := i bixbi∗ is central in A.

73 The idea of based ring may seem quite novel, but in fact we have a very classical example of a unital based ring. The ZG is a unital based ring, with the group elements as the basis and take g∗ = g−1. Notice that in this case, we have

I0 = {1g}. If G is finite, then ZG is of finite rank, and hence is a fusion ring.

It may seem unrelated, but Frobenius-Perron dimension is actually a defining charac- teristic of based rings. Hence we will explore this notion, and to do so we will begin by stating and proving the Frobenius-Perron Theorem.

Theorem 3.2.8. (Frobenius-Perron Theorem)

Let B be a square matrix with non-negative real entries.

(a) B has a non-negative real eigenvalue. The largest non-negative real eigenvalue λ(B) of B dominates the absolute values of all other eigenvalues µ of B, i.e. |µ| ≤ λ(B). Moreover, there exists an eigenvector of B with non-negative entries and eigenvalue λ(B).

(b) If B has strictly positive entries, then λ(B) is a simple positive eigenvalue, and the corresponding eigenvector can be normalized to have strictly positive entries. Also, |µ| < λ(B) for any other eigenvalue µ of B.

(c) If a matrix B with non-negative entries has an eigenvctor v with strictly positive entries, then the corresponding eigenvalue is λ(B).

Proof. Let B be an n × n matrix with non-negative entries.

If B has an eigenvector v with non-negative entries and eigenvalue 0, we are done.

n Otherwise, let Σ be the set of column vectors x ∈ R with non-negative entries xi, Pn and s(x) = i=1 xi equal to 1. Under this definition, notice that Σ is a simplex.

Now we will define a continuous map:

fB :Σ → Σ

Bx x 7→ s(Bx)

74 Since we assumed that B does not have an eigenvector with non-negative entries with eigenvalue 0, s(Bx) > 0 for every x ∈ Σ.

By the Brower Fixed Point Theorem, this map has a fixed point, so for some fixed point p, Bp = s(Bp)p, where s(Bp) > 0. So s(Bp), p are the desired eigenvalue and eigenvector.

Now let λ = λ(B) be the maximal non-negative eigenvalue of B for which there is an eigenvector with non-negative entries. We will denote this eigenvector as

  f1    .  f =  .    fn

Suppose B has strictly positive entries. Then, Bf = λf has strictly positive entries, hence f must also have strictly positive entries, and λ > 0.

  d1    .  If d =  .  is another real eigenvector of B with eigenvalue λ, let z be the smallest   dn of the numbers di . Then, v = d − zf satisfies Bv = λv, and so has at least one entry fi 0. But any nontrivial eigenvector of B corresponding to λ must have strictly positive entries, so v = 0. Hence the eigenspace of λ must be 1-dimensional, i.e. simple.

n Now let y = (y1, ··· , yn) ∈ C be a row vector. P We will define the norm |y| to be |yj|fj. Then,

X X X |yB| = | yibij|fj ≤ |yi|bijfj = λ|y|, j i i,j

πiθ and equality holds if and only if every non-zero yi have the form yi = cie for some

fixed θ, and ci > 0. i.e. yi’s have the same argument.

75 So if yB = µy then |µ| ≤ λ, and if |µ| = λ, we can normalize y to have non-negative entries. This implies µ = λ. So this proves (b).

Let BN be a sequence of matrices with strictly positive entries that converges to B.

Since the spectral radius function ρ is continuous, ρ(BN ) −→ ρ(B) as N → ∞.

Also, ρ(BN ) = λ(BN ) for every N. Hence ρ(B) is an eigenvalue of B, and there exists an eigenvector of B with non-negative entries corresponding to ρ(B), and this implies ρ(B) = λ(B). Hence we have (a).

Now suppose B has a row eigenvector y with strictly positive entries and eigenvalue µ. Then, µyf = yBf = λyf. This implies µ = λ since yf 6= 0.

Hence taking any eigenvector g with strictly positive entries, you see that the corre- sponding eigenvalue must be µ = λ, by the chain of equalities of the form

(yµ)g = y(B)g = y(ηg) which implies that η = µ. Hence we have (c), and we are done.

Now that we have the Frobenius-Perron Theorem, we can define the Frobenius-Perron dimension, which we will denote FPdim. To do so, we have a bit more definitions and properties we need to examine.

Let A be a Z+ ring with Z+ basis I.

Definition 3.2.9. We say A is transitive if ∀X,Z ∈ I, there exists Y1,Y2 ∈ I such that XY1 and Y2X have Z in their decompositions with non-zero coefficients.

Proposition 3.2.10. Any unital based ring is transitive.

Proof. Let X,Z ∈ I.

Note that τ(XX∗) = 1, so XX∗Z must contain a Z in its decomposition.

76 ∗ But XX Z = X(a1b1 +···+anbn) = a1Xb1 +···+anXbn, so for some i ∈ {1, ··· , n}, aiXbi contains Z in its decomposition.

Hence Xbi contains Z in its decomposition, as otherwise Z is a nontrivial linear combination of bi’s which is absurd, as Z ∈ I.

So let us set Y1 = b. Then, XY1 clearly contains Z in its decomposition, and we can

find a desired Y2 in a similar way.

Hence A is transitive.

Now let us define FPdim.

Definition 3.2.11. Let A be a transitive unital Z+ ring of finite rank.

Define a group homomorphism F P dim : A −→ C as follows:

For X ∈ I, let F P dim(X) be the maximal non-negative eigenvalue of the matrix of left multiplication by X.

  a1 ···    .   .    an ···

where the first column represents the decomposition of Xb1, i.e. Xb1 = a1b1 + ··· + anbn.

We call the above defined homomorphism FPdim as the Frobenius-Perron dimension.

Notice that because we are in a Z+, the matrix of left multiplication defined above has non-negative real entries, so by the Frobenius-Perron Theorem, our homomorphism

FPdim is validated. Also, as I is a basis, the homomorphism FPdim Z-linearly extends from I to A via addition.

77 By how we defined FPdim, some restrictions are imposed on the possible values of F P dim(X), which play an important role in proving some very nice properties of FPdim.

Proposition 3.2.12. Let X ∈ I.

(a) The number α = F P dim is an algebraic integer, and for any algebraic conjugate

α0 of α, we have α ≥ |α0|, i.e. α is a root of some monic polynomial in Z[x], and 0 0 for any other root α of the minimal polynomial of α, denoted mα,Z, α ≥ |α |.

(b) F P dim(X) ≥ 1.

Proof. Since α is a root of the characteristic polynomial of NX , the matrix of left multiplication by X defined previously, α is clearly an algebraic integer.

0 Also, mα,Z|charα,Z, so any algebraic conjugate of α, α is an eigenvalue of the left- 0 multiplication matrix NX , and by Frobenius-Perron Theorem, |α | ≤ α, as the spectral radius is α. This proves (a).

Let r be the number of algebraic conjugates of α. Then, αr ≥ |N(α)| where N(α)

Q mi is the norm of α given by αi , where αi are all roots of mα,Z, and mi are the multiplicities. Clearly |N(α)| ≥ 1, as α is an algebraic integer, so αr ≥ 1, which implies α ≥ 1. Therefore, F P dim ≥ 1, and we have (b).

By Proposition 28, we have that all Frobenius Perron dimensions of elements of A are algebraic integers.

We will now use Proposition 28 to prove the follwing proposition, which makes it evident why FPdim is a defining characteristic of a transitive unital based ring.

Proposition 3.2.13. (a) The function F P dim : A −→ C is a ring homomor- phism.

78 (b) Up to scaling, there exists a unique non- R ∈ AC := A ⊗Z C such that XR = F P dim(X)R for all X ∈ A, and it satisfies the equality RY = F P dim(Y )R for all Y ∈ A. After appropriate normalization, this element has positive coefficients, and hence F P dim(R) > 0.

(c) F P dim is the unique character of A that takes non-negative values n I, and these values are actually strictly positive.

(d) If X ∈ A has non-negative coefficients with respect to the basis of A, then

F P dim(X) is the largest non-negative eigenvalue λ(NX ) of the matrix NX of multiplication by X.

Remark: A must be unital to admit a homomorphism to C. P Proof. First, note that the matrix M of right multiplication by X∈I X in A in basis I must have strictly positive entries by the transitive property of A.

For every bi, bj ∈ I, there exists an X ∈ I such that biX contains bj, and since we are multiplying by P X, X certainly is a component of P X.

Hence, by the Frobenius-Perron Theorem, M has a single eigenvector (up to scaling) with strictly positive entries, and the eigenvalue λ(M). We denote this eigenvector

R ∈ AC = A ⊗Z C.

Since R is unique, and since ∀X ∈ A, XR(P X) = λXR, XR is also an eigenvector of M, and hence XR = d(X)R for some map d : X 7→ d(X) ∈ C.

But this implies that R is an eigenvector of the matrix NX of left multiplication by X, and hence by the Frobenius-Perron Theorem, d(X) = F P dim(X). P Now, R = X∈I aX X, where aX > 0. Hence we can scale R such that aX ≥ 1 for every X ∈ I, and by transitivity, left multiplication by R yields the matrix NR which is broken down to NR = T + NP X , where T is a matrix with non-negative entries, and NP X has strictly positive entries. Hence F P dim(R) > 0, so this proves (a), and the first part of (b).

79 Since for every X ∈ A, XR = F P dim(X)R, we have a system of linear equa- tions whose solution is R. Notice that R is the unique solution, as (P X)R = P F P dim( X)R, and NP X is a matrix with strictly positive entries, and so R is a simple eigenvector.

As RY is also a solution to the above system of linear equations, we have that RY = d0(Y )R by uniqueness of R, for some number d0(Y ). Now, F P dim(RY ) = F P dim(d0(Y )R) = (F P dim(R))d0(Y ). But F P dim is a ring homomorphism, so

F P dim(R)F P dim(Y ) = F P dim(RY ) = (F P dim(R))d0(Y ). But this is in C which is a field, hence d0(Y ) = F P dim(Y ), and this proves (b).

The above proof makes it clear that F P dim is a character. We now need to show that F P dim is the unique character, and so suppose χ is another character of A taking non-negative values on I.

P P P Then, for any Y ∈ I, χ( X∈I X · Y ) = χ( X)χ(Y ) = ( χ(X))χ(Y ), so the vector with entries χ(Y ) for Y ∈ I is an eigenvector of NP X .

Note that as NP X has positive entries, the eigenvector of χ(Y ) must also have positive entries by the Frobenius-Perron Theorem, and so we have that χ(Y ) = λ(NP X ) by another application of the Frobenius-Perron Theorem.

Hence the the eigenvector corresponding to χ(Y ), denoted vχ(Y ) must be equal to cR for some c ∈ C, but χ is a character and this implies that χ(1) = 1. Therefore, c = 1, and we have that χ = F P dim. This proves (c).

Lastly one can easily see that (d) is immediate from (b) and the Frobenius-Perron Theorem.

The element R ∈ AC, as one can see, plays a crucial role, and so we call R the regular element of A.

80 Now that we have defined F P dim, one can ask how F P dim behaves with respect to involutions, and it turns out that F P dim is invariant under involutions!

Proposition 3.2.14. Let A be as usual, and ∗ : I −→ I be a bijection given by ∗ : x · y 7→ (x · y)∗ = y∗ · x∗. Then, F P dim is invariant under ∗.

Proof. Note that the left multiplication by X corresponds to right multiplication by X∗. Since scalars are preserved, (by the τ map) the matrix of right multiplication by

∗ X is exactly the same as NX .

P 0 However, R is the eigenvector of N X = NP X∗ i.e. the matrix of right multiplication by P X∗, and RX∗ = F P dim(X∗)R.

Hence R is the eigenvector of NX∗ = NX corresponding to λ(NX∗ ), where NX∗ is ∗ ∗ the matrix of right multiplication by X . Hence we have that RX = λ(NX∗ )R = ∗ F P dim(X )R. But then, λ(NX∗ ) = λ(NX ) = F P dim(X).

Therefore, we have that F P dim(X∗) = F P dim(X).

Finally we can use the propositions we have so far to justify why we call R a regular element, and define the Frobenius-Perron dimension of A.

Proposition 3.2.15. Let A be a fusion ring. Then the element

X R = F P dim(Y )Y Y ∈I is a regular element.

81 Proof. Let us directly multiply an arbitrary basis element X of A and R.

X X Z X Y ∗ XR = F P dim(Y )XY = F P dim(Y )cXY Z = F P dim(Y )cZ∗X Z Y Y,Z Y,Z

X Y X ∗ = F P dim(Y )cX∗Z Z = F P dim(X Z)Z Y,Z Z

X = F P dim(X∗)( F P dim(Z)Z) = F P dim(X)R Z

As F P dim(X) is nonzero for any X ∈ I, we see that R is regular with respect to the basis elements of X.

The element R normalized as in Proposition 31 is called the canonical regular element of A, and now we have a definition of the Frobenius-Perron dimension of A.

Definition 3.2.16. Let A be a transitive unital based ring, and R be its canonical regular element. Then,

X F P dim(R) = F P dim(X)2 X∈I is called the Frobenius-Perron dimension of A, and is denoted F P dim(A).

Now that we’ve explored the notion of Z+ rings and F P dim, we will state the final theorem of this paper, connecting the notion of Grothendieck rings of tensor categories and transitive unital based rings.

3.3 Gr(C) and Z+-rings

Theorem 3.3.1. If C is a ring category with left duals, then Gr(C) is a transitive unital Z+ ring.

82 Proof. Recall the fusion rule given by

X XiXj := [Xi ⊗ Xj] = [Xi ⊗ Xj : Xk]Xk k∈I

The fusion rule is defined as the natural multiplication on Gr(C), and we will now show that it is associative.

By the exactness of tensor products,

X [(Xi ⊗ Xj) ⊗ Xp : Xl] = [Xi ⊗ Xj : Xk][Xi ⊗ Xk : Xl] ∀i, j, p, l. k

Similarly, we can see that

X [Xi ⊗ (Xj ⊗ Xp): Xl] = [Xj ⊗ Xp : Xk][Xi ⊗ Xk : Xl]. k

Lastly we know that

(Xi ⊗ Xj) ⊗ Xp ' Xi ⊗ (Xj ⊗ Xp) by associativity of tensor products.

Combining all of the information above, we have that the product as defined above in Gr(C) is associative.

Now, notice that the index above must always take on non-negative integer values, and so by choosing a set of representatives of isomorphism classes of simple objects in C, we have that Gr(C) is a unital based ring, as the identity object must be simple. We can see that by taking a simple subobject of 1, denoted X and taking a short exact sequence 0 −→ X −→ 1 −→ Y −→ 0.

Applying the left dualization functor, which is exact, (the proof of this fact will be given after) we get the exact sequence

0 −→ Y ∗ −→ 1 −→ X∗ −→ 0,

83 and tensoring with sequence with X on the left, we get

0 −→ X ⊗ Y ∗ −→ X −→ X ⊗ X∗ −→ 0.

As X is simple and X ⊗ X∗ is nonzero, we have that X ⊗ X∗ ' X. Using this isomorphism and the coevaluation map, we now have a surjective morphism

1 → X ⊗ X∗ → X, and since X is a subobject of 1, we have the embedding X → 1. Composing these maps gives us a non-zero morphism 1 → 1, and as EndC(1) = K, this morphism must be multiplication by a non-zero element of K, and hence X = 1.

Thus we have that 1 is a simple object, and hence Gr(C) must be a unital based ring.

Now, due to the evaluation map, X ⊗ X∗ contains 1 in its decomposition, and hence for any simple X,Z ∈ C, X ⊗ X∗ ⊗ Z contains Z in its decomposition. Hence there

∗ must be a simple object Y1 in the decomposition of X ⊗ Z such that X ⊗ Y1 contains Z in its decomposition.

Similarly, the object Z⊗X∗⊗X contains Z in its decomposition, and by the evaluation

∗ map, X ⊗ X contains 1 in its decomposition. Therefore, we can find a Y2 in the ∗ decomposition of Z ⊗X , such that Z appears in the decomposition of Y2 ⊗X. Hence Gr(C) is transitive.

Lastly, we will give a short justification of why we call R the regular element.

Proposition 3.3.2. In Rep(G), the regular representation is the regular element of

Gr(Rep(G)) as the unital transitive based ring defined above.

Proof. Consider Rep(G), the category of finite dimensional representations of a group

G over K. Let S1, ··· ,Sn be simple, i.e. irreducible representations of G over K.

84 Then, by the fusion rule and the definition of a regular element, if R is a regular element, it must be that R ⊗K Sk ' (dim(Sk))R.

Take R = KG, the regular representation of G over K. Now consider KG ⊗K Sk and

(KG)dim(Sk). Rep(G) is semisimple, hence irreducible implies projective. So we can apply tensor evaluation to get

HomKG(Si, KG ⊗K Sk) ' HomKG(Si, KG) ⊗K Sk

We know that the regular representation contains exactly dim(Si) many copies of Si, and hence HomKG(Si, KG) ⊗K Sk has dimension precisely dim(Si)dim(Sk).

dim(Sk) On the other hand, the dimension of HomKG(Si, KG ) is given by

dim(HomKG(Si, KG))dim(Sk) = dim(Si)dim(Sk).

Hence the dimensions match, which implies that there are exactly the same number of

dim(Sk) copies of Si in the decomposition of HomKG(Si, KG⊗KSk) and HomKG(Si, (KG) ), and so we have

dim(Sk) KG ⊗K Sk ' (KG) .

Since each irreducible representation appears positive-integer many times in the reg- ular representation, we have that KG corresponds to a vector with strictly positive integer valued entries, and by Frobenius-Perron, we have that it must be the eigenvec- tor corresponding to F P dim. Therefore, we have a justification of the term “regular”.

85 Chapter 4: Conclusion

In this paper, we have shown that we can examine tensor categories as unital based rings by looking at the respective Grothendieck Rings. This notion is merely a pre- cursor to a prospering area of research in mathematics called algebraic K-theory, and

Gr(C) is usually referred to as K0. As one can infer from the subscript, K0 is the most basic K-theory group, and there are higher K-groups, denoted Ki. However, the K0 we have looked at in this paper is particularly nice even in the context of K-theory, as the tensor structure in tensor categories allows us to define a ring on K0, which opens up to the use of more robust tools in , such as the Frobenius-Perron dimension which is, in our case, a defining invariant.

86 Vita

Woohyung Lee 1004 Aspen Trail, Winston-Salem, NC

27106 United States of America

(818)318-3155

[email protected]

Education

Masters in Mathematics Expected: May, 2018

Wake Forest University • Master’s thesis in Tensor Categories

• Graduate Coursework: Abstract Algebra, Galois Theory, Representation The- ory, Point Set Topology, Algebraic Topology, Real Analysis, Measure Theory, Advanced Linear Algebra, and Elliptic Curves

• Reading Sessions: Homological Algebra, Category Theory, and Cohomology

Bachelor of Arts in Mathematics and Economics May, 2015

Pomona College

• Undergraduate thesis on n-Player Game Equilibria and Algebraic Varieties

• Independent study with Dr. Karaali at Pomona College on Bruhat order, Kazh- danLusztig polynomial, and Supercharacter theory

87 Research

Original

3 • Classification of symmetries of distinct embeddings of θ4-graph in S (In progress)

• Classifying homophonic quotient groups of various languages including Korean and Turkish

• Supercharacter theory of symmetric groups given by filtrations of Sn

Expository

• Expository master’s thesis on tensor categories and fusion categories based on the monograph “Tensor Category” by Etingof et al (In progress)

Presentations

Pomona College

• Undergraduate mathematics thesis presentation on n-Player Game Equilibria and Algebraic Varieties

• Chosen class speaker for economics majors at Pomona College, on an undergrad- uate economics project regarding mathematical modeling of altruistic behaviors

Professional Experience

Teaching Assistant

Department of Mathematics, Wake Forest University August 2016 - Present

• Linear Algebra, Calculus 2, and Modern Algebra

• Tutor for various undergraduate courses, including calculus, linear algebra, etc

88 • Ran 4 hours of weekly study sessions for the above courses

• Weekly assignment grading

Department of Mathematics, Pomona College Fall, 2014

• Ran weekly study sessions for Calculus and Applied Mathematics for Science and Economics

Other Work Experiences

• Served as the Republic of Korea Navy liason/translator during ROK-US joint military trainings

• Taught mathematics, english, and economics to Korean high school students living in Qing-Dao, China in fall, 2010

• Paid vocalist at bars in Ap-gu-jeong, South Korea in summer, 2008

89