Homological Algebra Anthony Baraconi Eastern Washington University
HOMOLOGICAL ALGEBRA
A Thesis
Presented To
Eastern Washington University
Cheney, Washington
In Partial Fulfillment of the Requirements
for the Degree
Master of Science
By
Anthony Baraconi
Spring 2013

THESIS OF ANTHONY BARACONI APPROVED BY
DATE: DR. RON GENTLE, GRADUATE STUDY COMMITTEE
DATE: DR. DALE GARRAWAY, GRADUATE STUDY COMMITTEE
DATE: DR. DAN TAPPAN, GRADUATE STUDY COMMITTEE
Abstract

This thesis gives some results in the topics of modules and categories as they directly relate to the functor Ext used in homological algebra.
Acknowledgements
To Dr. Gentle and Jessica for all the help you gave to make this possible.
Contents

1 Introduction
2 Modules
  2.1 Definition
  2.2 Hom Sets
  2.3 Sequences
  2.4 Projective and Injective Modules
3 Categories
  3.1 Functor
  3.2 Natural Transformations
  3.3 Products and Coproducts
  3.4 Kernels and Cokernels
  3.5 Pull-Backs and Push-outs
4 Extensions of Modules
  4.1 Extensions
  4.2 The Functor Ext
    4.2.1 Projective Modules
    4.2.2 Injective Modules
5 Ext in RepG
  5.1 Projectives in RepG
  5.2 Analysis of objects in RepG
  5.3 Computing Ext
  5.4 Ext in RepAn
Chapter 1

Introduction
Homology is a general process that attaches algebraic structure, in the form of sequences of modules or abelian groups, to mathematical objects such as topological spaces. The approach this thesis takes is through homological algebra, which has direct applications to algebraic geometry, representation theory, mathematical physics, and partial differential equations. Homological algebra relies heavily on category theory, affectionately referred to as "abstract nonsense" because its level of abstraction loses many readers.
Homological algebra can trace its origins back to the end of the 19th century. Henri Poincaré and David Hilbert are considered to be the chief investigators in the research of homology. The development of homological algebra is closely linked to the development of category theory, which is presented in this thesis.
Samuel Eilenberg and Saunders Mac Lane introduced categories, functors, and natural transformations in their development of algebraic topology. This gave rise to an axiomatic approach to homology in place of intuitive and geometric ones. The development of category theory was at first driven by the computational needs of homological algebra.
This thesis does not go into the full theory of homological algebra. It begins by introducing the basic ideas needed to create what are known as derived functors, the central idea in homological algebra. We first introduce module theory, as it deals directly with the theory of homology. Hom sets are the first major idea brought up in that chapter, followed by sequences and then projective and injective modules.
The chapter that discusses categories is narrowly focused on the ideas needed to develop a specific functor known as Ext; it should not be seen as even an overview of the subject of categories. The theory begins by defining things in a more general sense, to give the reader a feel for the approach that category theory takes. However, this level of generality is not needed for the ideas presented here, so the definitions are quickly specialized to certain categories to give the reader a more familiar setting to work in.
From here the functor Ext is developed in three different ways: first in the setting of sets and module theory, second using the idea of a projective resolution, and finally through injective modules. All three approaches are then shown to be equivalent, and each sheds light on the structure of Ext.
Finally, we sum up this entire process through directed graphs. First presented in Chapter 3, directed graphs are used to illustrate concepts in categories. Later, in Chapter 5, directed graphs give the reader another example of how to derive the Ext functor.
Chapter 2

Modules
2.1 Definition
For this thesis the reader will need to be familiar with several concepts from the theory of modules and categories. We will begin with modules.
Definition 2.1 Let R be a ring. A left R-module or a left module over R is a set M together with:
(1) a binary operation + on M under which M is an abelian group, and
(2) an action of R on M (that is, a map R × M → M) denoted by rm, for all r ∈ R and for all m ∈ M which satisfies:
(i) (r + s)m = rm + sm for all r, s ∈ R, m ∈ M

(ii) (rs)m = r(sm) for all r, s ∈ R, m ∈ M

(iii) r(m + n) = rm + rn for all r ∈ R, m, n ∈ M

(iv) if the ring R has a 1, then 1m = m for all m ∈ M
♦
Modules behave much like vector spaces. Unless otherwise stated, assume that all modules are left modules and all rings have unity. Most of the sequences dealt with throughout this thesis will be sequences of modules.
Example 2.2 The integers acting on Zp will be a left Z-module. In fact, the integers acting on any abelian group will form a left Z-module by defining nx as repeated addition, so nx = x + x + ··· + x where n ∈ N. For −n define −nx = −(nx).
Definition 2.3 Given that M and N are modules, a map f : M → N is a module homomorphism if for any s, r ∈ R and m, n ∈ M, f(rm + sn) = rf(m) + sf(n). ♦
Note: If R has unity, then f is a group homomorphism.
2.2 Hom Sets
The next major idea will be that of Hom sets. Given A and B both modules over R, HomR(A, B) denotes the set of all module homomorphisms from A to B. Given two homomorphisms φ : A → B and ψ : A → B, we can define addition on the set HomR(A, B) by letting φ + ψ be the morphism given by (φ + ψ)(a) = φ(a) + ψ(a). Next we check that φ + ψ is a module homomorphism. Given a, b ∈ A and r ∈ R,

(φ + ψ)(a + rb) = φ(a + rb) + ψ(a + rb)          (definition of φ + ψ)
                = φ(a) + rφ(b) + ψ(a) + rψ(b)    (φ, ψ are homomorphisms)
                = φ(a) + ψ(a) + r(φ(b) + ψ(b))
                = (φ + ψ)(a) + r(φ + ψ)(b)       (definition of φ + ψ)

It follows that, since addition in B is commutative, addition on HomR(A, B) is commutative as well, and HomR(A, B) forms an abelian group. In the next sections of this text we will suppress the R in the HomR(−, −) notation when it is not necessary.
Theorem 2.4 HomZ(Zn, A) ≅ A(n), where A is an abelian group and A(n) = {a ∈ A | na = 0}.

Proof: We can define a mapping Γ : HomZ(Zn, A) → A(n) by Γ(φ) = φ(1). Consider

nφ(1) = φ(1) + φ(1) + ··· + φ(1)    (n times)
      = φ(1 + 1 + ··· + 1)          (since φ is a homomorphism)
      = φ(n)
      = φ(0)

Since φ is a group homomorphism it preserves the additive identity, so nφ(1) = 0, resulting in φ(1) ∈ A(n). Now consider

Γ(φ + ψ) = (φ + ψ)(1)
         = φ(1) + ψ(1)
         = Γ(φ) + Γ(ψ)

Thus Γ is a homomorphism of abelian groups.

Define a map Λ : A(n) → HomZ(Zn, A) as follows. For each a ∈ A(n), Λ(a) = φa, where φa(k) = ka. Clearly φa is a homomorphism φa : Zn → A.

Given φ ∈ HomZ(Zn, A),

ΛΓ(φ) = Λ(φ(1))
      = φφ(1)

Now for each k ∈ Zn,

φφ(1)(k) = kφ(1)
         = φ(1) + φ(1) + ··· + φ(1)
         = φ(1 + 1 + ··· + 1)
         = φ(k)

Thus ΛΓ(φ) = φ. Now let a ∈ A(n):

ΓΛ(a) = Γ(φa)
      = φa(1)
      = 1a
      = a

Since ΛΓ is the identity on HomZ(Zn, A) and ΓΛ is the identity on A(n), it follows that Γ is an isomorphism and HomZ(Zn, A) ≅ A(n).
Example 2.5 HomZ(Zn, Zm).

Here Zm(n) = {k ∈ Zm | nk = 0}, and

nk ≡ 0 mod m                                        (2.1)
⇔ m divides nk                                      (2.2)
⇔ m/d divides (n/d)k, for d = gcd(n, m)             (2.3)
⇔ m/d divides k, since gcd(m/d, n/d) = 1            (2.4)

This results in Zm(n) = {l(m/d) | l ∈ Zm} = ⟨m/d⟩, the cyclic subgroup of Zm of order d generated by m/d. Hence ⟨m/d⟩ ≅ Zd, since any cyclic group of order d is isomorphic to Zd. Thus, by Theorem 2.4, a generic φ ∈ HomZ(Zn, Zm) has the form φ(k) = kl(m/d) for a fixed l with 0 ≤ l < d, and HomZ(Zn, Zm) ≅ Zd.
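Since a homomorphism out of a cyclic group is determined by the image of its generator, the isomorphism HomZ(Zn, Zm) ≅ Zd can be checked numerically for small n and m. The following Python sketch (an illustration, not part of the thesis) counts the admissible images of 1 and compares with gcd(n, m):

```python
from math import gcd

def hom_count(n, m):
    # A group homomorphism phi: Z_n -> Z_m is determined by k = phi(1),
    # which must satisfy n*k = 0 (mod m); count the admissible k by brute force.
    return sum(1 for k in range(m) if (n * k) % m == 0)

# |Hom(Z_n, Z_m)| = d = gcd(n, m), matching Hom(Z_n, Z_m) ≅ Z_d
for n, m in [(4, 6), (12, 18), (5, 7), (10, 10)]:
    assert hom_count(n, m) == gcd(n, m)
```

The brute-force count agrees with d = gcd(n, m) in every case, as the example predicts.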
Example 2.6 For B1 ⊕ B2, the direct sum of modules B1 and B2, and two module homomorphisms φi ∈ Hom(A, Bi), i = 1, 2, we can define φ ∈ Hom(A, B1 ⊕ B2) by

φ(a) = (φ1(a), φ2(a)).

Denote φ by the column vector (φ1, φ2)^T. Similarly, for ψi ∈ Hom(Ai, B), i = 1, 2, we can define ψ ∈ Hom(A1 ⊕ A2, B) by

ψ(a1, a2) = ψ1(a1) + ψ2(a2).

ψ will be denoted by the row vector [ψ1, ψ2]. More generally, if φi ∈ Hom(A, Bi) for i = 1, ..., n, define φ = (φ1, ..., φn)^T by φ(a) = (φ1(a), ..., φn(a)), and if ψi ∈ Hom(Ai, B) for i = 1, ..., n, define ψ = [ψ1, ..., ψn] by

[ψ1, ..., ψn](a1, ..., an) = Σ_{i=1}^{n} ψi(ai).

This will imply that, for finite sums, Hom(⊕iAi, B) ≅ ⊕iHom(Ai, B), as well as Hom(A, ⊕jBj) ≅ ⊕jHom(A, Bj).
Theorem 2.7 For finite direct sums, Hom(⊕iAi, B) ≅ ⊕iHom(Ai, B) and Hom(A, ⊕jBj) ≅ ⊕jHom(A, Bj).
Proof: For φj ∈ Hom(A, Bj) with j = 1, ..., n, define φ ∈ Hom(A, ⊕jBj) (where elements of ⊕jBj are written as column vectors) by

φ(a) = (φ1(a), φ2(a), ..., φn(a))^T.

Define a mapping πk : ⊕jBj → Bk, called the canonical (natural) projection, by

πk(b1, b2, ..., bn)^T = bk,

the kth component of the vector. Then ⊕jHom(A, Bj) ≅ Hom(A, ⊕jBj) by (φ1, ..., φn)^T ↦ φ, defined as above, with inverse φ ↦ (π1 ∘ φ, ..., πn ∘ φ)^T; both are clearly group homomorphisms. With this isomorphism, φ ∈ Hom(A, ⊕jBj) is usually denoted by the column vector (φ1, ..., φn)^T and regarded as an element of ⊕jHom(A, Bj).

Now for ψi ∈ Hom(Ai, B), with i = 1, ..., m, define ψ ∈ Hom(⊕iAi, B) (where elements of ⊕iAi are written as column vectors) by

ψ(a1, ..., am)^T = Σ_{i=1}^{m} ψi(ai).

We can then define a mapping ik : Ak → ⊕iAi, called the canonical (natural) injection, by

ik(a) = (0, ..., 0, a, 0, ..., 0)^T,

where a occurs in the kth component of the vector. Then ⊕iHom(Ai, B) ≅ Hom(⊕iAi, B) by (ψ1, ..., ψm) ↦ ψ, defined as above, with inverse ψ ↦ (ψ ∘ i1, ..., ψ ∘ im) (again both group homomorphisms). With this isomorphism, the elements ψ ∈ Hom(⊕iAi, B) are denoted as row vectors and regarded as elements of ⊕iHom(Ai, B).
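The finite-sum isomorphism can also be confirmed numerically with cyclic groups standing in for the summands: a homomorphism out of Z_a ⊕ Z_b is exactly a pair of homomorphisms out of Z_a and Z_b, so the orders of the Hom groups multiply. A Python check (illustrative only, not part of the thesis):

```python
from math import gcd

def hom_count(n, m):
    # |Hom(Z_n, Z_m)| by brute force: k = image of the generator, n*k = 0 mod m
    return sum(1 for k in range(m) if (n * k) % m == 0)

def hom_from_sum_count(a, b, c):
    # psi on Z_a (+) Z_b is determined by psi(1,0) = k1 and psi(0,1) = k2,
    # each annihilated by the order of its generator.
    return sum(1 for k1 in range(c) for k2 in range(c)
               if (a * k1) % c == 0 and (b * k2) % c == 0)

# Hom(Z_a ⊕ Z_b, Z_c) ≅ Hom(Z_a, Z_c) ⊕ Hom(Z_b, Z_c): the orders multiply
for a, b, c in [(4, 6, 12), (2, 3, 5), (9, 6, 18)]:
    assert hom_from_sum_count(a, b, c) == hom_count(a, c) * hom_count(b, c)
```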
Now we will discuss some basic ideas for elements in HomR(A, B). Given f ∈ Hom(A, B), we define the image of f, denoted as Im f, by Im f = {f(a) ∈ B|a ∈ A}. The kernel of f, denoted as ker f, is all of the elements in A that are sent to the identity in B by the homomorphism f, (ie ker f = {a|f(a) = 0}). Finally, the cokernel of f (the dual notion to kernel), denoted as cok f, is defined as the module B modded out by Im f (ie. cok f = B/Im f).
In the above theorem we saw the canonical injection map for external direct sums. Another kind of natural injection map involves submodules: for a given module M with submodule N, the inclusion map i : N → M is i(x) = x. In fact, the inclusion map satisfies the following universal property.
Property 2.8 For α : A → B, the inclusion i : ker α → A has the following universal property:

(i) 0 = αi

(ii) if 0 = αγ with γ : M → A, then there exists a unique λ : M → ker α such that γ = iλ. Diagrammatically, this is

    ker α ---i---> A ---α---> B
        ^          ^
         \         |
      ∃!λ \        | γ
           \       |
               M
Proof:

i) Clear, since for a ∈ ker α, αi(a) = α(a) = 0.

ii) Suppose αγ = 0, so αγ(m) = 0 for all m ∈ M. Thus γ(m) ∈ ker α. Now let λ : M → ker α be λ(m) = γ(m). Then

iλ(m) = λ(m)
      = γ(m)

For uniqueness, suppose iλ = γ. Then for any m ∈ M, iλ(m) = γ(m), so λ(m) = γ(m), since i is just the inclusion map.
Similarly, the earlier definition of the projection map was given for external direct sums. However, given a module M with submodule N, the projection map π : M → M/N is given by π(x) = x̄ = x + N. In fact, the canonical projection map satisfies the following universal property.
Property 2.9 For α : A → B, the projection π : B → cok α has the following universal property:

(i) 0 = πα

(ii) if 0 = γα with γ : B → M, then there exists a unique λ : cok α → M such that γ = λπ. Diagrammatically, this is

    A ---α---> B ---π---> cok α
                \          /
               γ \        / ∃!λ
                  v      v
                     M
Proof:

i) Clear, since πα(x) = α(x) + Im α = 0 in cok α, because α(x) ∈ Im α.

ii) The elements of cok α are π(b) with b ∈ B, so we must define λ so that λ(π(b)) = γ(b); such a λ would then be unique. To establish that λ is well defined, suppose π(b1) = π(b2). Then π(b1 − b2) = 0, implying that (b1 − b2) ∈ ker π. However, ker π = Im α, and thus b1 − b2 = α(a) for some a ∈ A. Then

0 = γα(a)
  = γ(b1 − b2)
  = γ(b1) − γ(b2)

so γ(b1) = γ(b2). This results in λ(π(b1)) = γ(b1) = γ(b2) = λ(π(b2)), meaning that there exists a module homomorphism λ uniquely determined by γ.
We finally define some notation that will be used throughout the thesis. Given a morphism ε : A → B, we can define new morphisms as follows. For each module X, ε_* : Hom(X, A) → Hom(X, B) is defined by ε_*(φ) = εφ for each φ ∈ Hom(X, A). Dually, ε^* : Hom(B, X) → Hom(A, X) is defined by ε^*(ψ) = ψε.
2.3 Sequences
Using the canonical injection and projection maps for an associated homomorphism f, denoted i_f and π_f respectively, we can create the following sequence:

ker f --i_f--> A --f--> B --π_f--> cok f

By definition we have ker f = Im i_f and ker π_f = Im f; also, i_f is a monomorphism and π_f is an epimorphism. Such a sequence is said to be exact at A and B.
Definition 2.10 The pair of homomorphisms X --α--> Y --β--> Z is said to be exact at Y if Im α = ker β. Furthermore, a sequence

··· --> X_{n−1} --φ_n--> X_n --φ_{n+1}--> X_{n+1} --> ···

of homomorphisms is said to be an exact sequence if it is exact at every X_n between a pair of homomorphisms. ♦

Note: When the sequence is exact at X_n, for a ∈ X_n, φ_{n+1}(a) = 0 if and only if a = φ_n(b) for some b ∈ X_{n−1}; in particular, φ_{n+1} ∘ φ_n = 0.
Theorem 2.11 Let A, B, C be R-modules. Then:

(i) the sequence 0 --> A --ψ--> B is exact if and only if ψ is injective;

(ii) the sequence B --φ--> C --> 0 is exact if and only if φ is surjective;

where 0 is the trivial module (0 = {0}).

Proof: The uniquely defined map 0 → A has image 0 in A. This will be ker ψ if and only if ψ is injective. Similarly, the kernel of the uniquely defined zero homomorphism C → 0 is all of C, which is the image of φ if and only if φ is surjective.
Corollary 2.12 The sequence 0 --> A --ψ--> B --φ--> C --> 0 is exact if and only if ψ is injective, φ is surjective, and Im ψ = ker φ.

Definition 2.13 The sequence of the previous corollary, 0 --> A --ψ--> B --φ--> C --> 0, is called a short exact sequence. ♦
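A concrete instance of Corollary 2.12: the sequence 0 → Z_2 → Z_4 → Z_2 → 0 with ψ(a) = 2a and φ(b) = b mod 2 is short exact. The three conditions can be verified directly in a few lines of Python (an illustrative sketch, not part of the thesis):

```python
# psi: Z_2 -> Z_4, a -> 2a   and   phi: Z_4 -> Z_2, b -> b mod 2
psi = lambda a: (2 * a) % 4
phi = lambda b: b % 2
Z2, Z4 = range(2), range(4)

assert len({psi(a) for a in Z2}) == len(Z2)                    # psi is injective
assert {phi(b) for b in Z4} == set(Z2)                         # phi is surjective
assert {psi(a) for a in Z2} == {b for b in Z4 if phi(b) == 0}  # Im psi = ker phi
```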
Theorem 2.14 For K --ν--> A --α--> B, ν induces an isomorphism K → ker α if and only if ν satisfies the following universal property:

(i) αν = 0

(ii) if αγ = 0, then there exists a unique λ such that γ = νλ.

Proof:

Assuming ν induces an isomorphism K ≅ ker α, we get (i) and (ii) from the universal property of the inclusion (Property 2.8).

Now assume ν satisfies this universal property. Since αi = 0 for the inclusion i : ker α → A, there is a unique γ : ker α → K with i = νγ; and since αν = 0, the universal property of i gives another unique γ̂ : K → ker α such that ν = iγ̂.

Taking the test map in ν's universal property to be ν itself, the unique λ with ν = νλ must be the identity on K. But ν(γγ̂) = (νγ)γ̂ = iγ̂ = ν, hence γγ̂ = 1_K. Similarly, taking the test map in i's universal property to be i itself, the unique λ with i = iλ is the identity on ker α. But i(γ̂γ) = (iγ̂)γ = νγ = i, hence γ̂γ = 1_{ker α}. This implies that γ̂ = γ^{-1}, so γ is an isomorphism, and ν induces the isomorphism γ̂ : K → ker α.
Theorem 2.15 For A --α--> B --ν--> L, ν induces an isomorphism L ≅ cok α if and only if ν satisfies the following universal property:

(i) να = 0

(ii) if γα = 0, then there exists a unique λ such that γ = λν.

Proof:

Assuming that ν induces an isomorphism L ≅ cok α, we get (i) and (ii) as consequences of the universal property of the projection (Property 2.9).

Now assume that ν satisfies this universal property. Since πα = 0 for the projection π : B → cok α, there exists a unique γ : L → cok α with π = γν; and since να = 0, the universal property of π gives another unique γ̂ : cok α → L such that ν = γ̂π.

Taking the test map in ν's universal property to be ν itself, the unique λ with ν = λν must be the identity on L. But (γ̂γ)ν = γ̂(γν) = γ̂π = ν, hence γ̂γ = 1_L. Similarly, (γγ̂)π = γ(γ̂π) = γν = π, hence γγ̂ = 1_{cok α}. This implies that γ̂ = γ^{-1}, so γ is an isomorphism.
While these were presented as properties of the kernel and cokernel of R-module homomorphisms, nowhere did we use the definitions of kernel, cokernel, projection, and inclusion morphisms directly; only the universal properties of i and π were needed. This means we can actually use these universal properties to define the kernel or cokernel of a general morphism, which will be discussed more in the next chapter. Furthermore, this definition implies that the kernel is unique if it exists.
Returning to a short exact sequence

0 --> A --ψ--> B --φ--> C --> 0

we get that ψ induces an isomorphism A ≅ ker φ, and φ induces an isomorphism cok ψ ≅ C, giving us the following theorem.
Theorem 2.16 Let

0 --> B′ --μ--> B --ε--> B″

be an exact sequence of R-modules. For every R-module A the induced sequence

0 --> Hom(A, B′) --μ_*--> Hom(A, B) --ε_*--> Hom(A, B″)

is exact, where μ_* is defined by μ_*(φ) = μφ.

Proof: Assume that μφ in the diagram

         A
         |
         φ
         v
    B′ --μ--> B --ε--> B″

is the zero map. Then μφ(a) = 0 for all a, which implies that φ(a) = 0 for all a, since μ is injective. This gives φ : A → B′ as the zero map, so μ_* is injective.

A map in Im μ_* is of the form μφ. Since εμ = 0, εμφ is the zero map; thus ker ε_* ⊇ Im μ_*.

Finally, we show that ker ε_* ⊆ Im μ_*. Consider the diagram

         A
        /|
      λ/ |ψ
      v  v
    B′ --μ--> B --ε--> B″

Suppose ψ ∈ ker ε_*, so εψ is the zero map. Then, since μ induces an isomorphism B′ ≅ ker ε by exactness, Theorem 2.14 gives a λ : A → B′ such that μλ = ψ. This gives ψ = μ_*(λ), and thus Im μ_* = ker ε_*.
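Theorem 2.16 says Hom(A, −) is left exact, but exactness can fail on the right: a surjection B → B″ need not induce a surjection on Hom sets. Both points can be seen concretely by applying Hom(Z_2, −) to 0 → Z_2 → Z_4 → Z_2 → 0. In the Python sketch below (an illustration, not part of the thesis), a homomorphism Z_2 → Z_m is encoded by the image k of the generator:

```python
# Homs Z_2 -> Z_m, encoded by the image k of the generator (2k = 0 mod m).
def homs(m):
    return [k for k in range(m) if (2 * k) % m == 0]

mu  = lambda k: (2 * k) % 4   # post-compose with mu: Z_2 -> Z_4, x -> 2x
eps = lambda k: k % 2         # post-compose with eps: Z_4 -> Z_2, x -> x mod 2

H_B0, H_B, H_B2 = homs(2), homs(4), homs(2)
image_mu_star = {mu(k) for k in H_B0}

assert len(image_mu_star) == len(H_B0)                   # mu_* is injective
assert image_mu_star == {k for k in H_B if eps(k) == 0}  # Im mu_* = ker eps_*
assert {eps(k) for k in H_B} != set(H_B2)                # eps_* fails to be onto
```

The failure of ε_* to be onto here is exactly the kind of defect that the functor Ext, developed in Chapter 4, measures.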
Theorem 2.17 Let

A′ --μ--> A --ε--> A″ --> 0

be an exact sequence of R-modules. For every R-module B the induced sequence

0 --> Hom(A″, B) --ε^*--> Hom(A, B) --μ^*--> Hom(A′, B)

is exact, where ε^* is defined by ε^*(φ) = φε.

Proof: Suppose φ ∈ ker ε^*, so ε^*(φ) = φε = 0. Since ε is onto, for any a″ ∈ A″ we get ε(a) = a″ for some a ∈ A, which implies that φ(a″) = φ(ε(a)) = φε(a) = 0, and thus φ = 0. So ker ε^* = {0} and ε^* is injective. Next,

μ^*ε^*(φ) = μ^*(φε)
          = φεμ
          = φ0
          = 0

Thus Im ε^* ⊆ ker μ^*. Now consider the following diagram.

    A′ --μ--> A ---ε--> A″
               \       /
              φ \     / λ
                 v   v
                   B

Suppose φ ∈ ker μ^*, so φμ is the zero map. Then, since ε induces an isomorphism cok μ ≅ A″ by exactness, Theorem 2.15 gives a λ : A″ → B such that φ = λε, and thus ker μ^* = Im ε^*.
Theorem 2.18 (Snake Lemma) Given the following commutative diagram of R-modules with exact rows

         φ        g
    M′ ----> M ----> M″ ---> 0
    |        |        |
    f′       f        f″
    v        v        v
0 -> N′ ----> N ----> N″
         h        ψ

there exists a map δ : ker f″ → cok f′, called the connecting homomorphism, in the induced diagram

       0           0           0
       |           |           |
       v           v           v
    ker f′ --φ̄--> ker f --ḡ--> ker f″
       |           |           |
     i_f′         i_f         i_f″
       v           v           v
       M′ --φ---> M ---g----> M″ ---> 0
       |           |           |
       f′          f           f″
       v           v           v
 0 --> N′ --h---> N ---ψ----> N″
       |           |           |
     π_f′         π_f         π_f″
       v           v           v
    cok f′ --h̄--> cok f --ψ̄--> cok f″
       |           |           |
       v           v           v
       0           0           0

where, for f, f′, f″, ker denotes the kernel and cok the cokernel, the i's are the canonical inclusion module homomorphisms, the π's are the canonical projection module homomorphisms, φ̄ and ḡ are the maps induced on kernels, and h̄ and ψ̄ are the maps induced on cokernels, such that the sequence

0 --> ker f′ --φ̄--> ker f --ḡ--> ker f″ --δ--> cok f′ --h̄--> cok f --ψ̄--> cok f″ --> 0

is exact and all sections of the diagram commute. (Exactness at the two ends, that is, the injectivity of φ̄ and the surjectivity of ψ̄, uses that φ is injective and ψ is surjective, which holds in particular when the rows extend to short exact sequences.)
Proof: Since each i is an inclusion map it is injective, and each π is a projection map and thus surjective. Also, by definition of ker and cok, we have ker π_f = Im f and ker f = Im i_f; the same holds for the columns involving f′ and f″. Thus all the columns

0 --> ker f --i_f--> M --f--> N --π_f--> cok f --> 0

are exact, for f, f′, f″.
Now define φ̄, ḡ, h̄, ψ̄ as follows.

Let φ̄ be the restriction of φ to ker f′. Given x ∈ ker f′, we get f′(x) = 0, implying that hf′(x) = 0. By the commutativity of the appropriate square, fφ(x) = hf′(x) = 0, giving φ(x) ∈ ker f. So φ̄ sends elements of ker f′ to ker f, making φ̄ a well defined module homomorphism, and the square

    ker f′ --φ̄--> ker f
       |            |
      i_f′         i_f
       v            v
       M′ ---φ---> M

commutes. A similar argument can be made for the square

    ker f --ḡ--> ker f″
       |            |
      i_f          i_f″
       v            v
       M ----g---> M″

where ḡ is the restriction of g to ker f.

Note: The module homomorphisms φ̄ and ḡ are the unique maps induced by ker f and ker f″ respectively, from Theorem 2.14.
Now define h̄ : cok f′ → cok f by h̄(ā) = π_f h(a), and let ā = b̄ in cok f′. This implies that a − b is in Im f′, so for some m ∈ M′, f′(m) = a − b. Thus h(a − b) = hf′(m) and, by the known commutativity, h(a) − h(b) = fφ(m), implying that h(a) − h(b) is in Im f. This gives π_f h(a) = π_f h(b), so h̄ is well defined. Also, h̄(π_f′(a)) = π_f h(a), so h̄π_f′ = π_f h and the square

     N′ ---h----> N
     |            |
    π_f′         π_f
     v            v
   cok f′ --h̄--> cok f

commutes. Again, a similar argument can be made for the square

     N ----ψ---> N″
     |            |
    π_f          π_f″
     v            v
   cok f --ψ̄--> cok f″

Note: The module homomorphisms h̄ and ψ̄ are the unique maps induced by cok f′ and cok f″ respectively, from Theorem 2.15.
Thus φ̄, ḡ, h̄, ψ̄ are all well defined module homomorphisms and the induced diagram commutes. We will now show the exactness of each row.
We wish to show that Im φ̄ ⊆ ker ḡ, which is equivalent to ḡ ∘ φ̄ = 0. Since i_f″ is injective, it suffices to show i_f″ ∘ ḡ ∘ φ̄ = 0. By the known commutativity, i_f″ ∘ ḡ ∘ φ̄ = g ∘ φ ∘ i_f′. Since g ∘ φ = 0 by exactness of the top row, we get ḡ ∘ φ̄ = 0, and Im φ̄ ⊆ ker ḡ.

Now suppose b ∈ ker f with ḡ(b) = 0, that is, b ∈ ker ḡ. Then g(b) = 0, so b ∈ ker g, and by exactness of the top row there exists a c ∈ M′ such that φ(c) = b. Let x = f′(c) ∈ N′. By commutativity, h(x) = hf′(c) = fφ(c) = f(b) = 0, and since h is one-to-one, x = 0. So c ∈ ker f′, and φ̄(c) = φ(c) = b. This results in b ∈ Im φ̄, so ker ḡ ⊆ Im φ̄ and hence ker ḡ = Im φ̄. When φ is one-to-one, so is its restriction φ̄, and thus

0 --> ker f′ --φ̄--> ker f --ḡ--> ker f″

is exact.
We now wish to show Im h̄ ⊆ ker ψ̄, so let ā ∈ cok f′, where ā = π_f′(a) for some a ∈ N′. By exactness of the bottom row, ψ ∘ h = 0, so ψh(a) = 0 and π_f″(ψh(a)) = 0 in cok f″. Then ψ̄ ∘ h̄(ā) = ψ̄ ∘ h̄ ∘ π_f′(a) = π_f″ ∘ ψ ∘ h(a) = 0 by commutativity, so ψ̄ ∘ h̄ = 0 and Im h̄ ⊆ ker ψ̄.

Now let x̄ ∈ cok f; it has the form π_f(x) for some x ∈ N. Suppose x̄ ∈ ker ψ̄. Using the definition of ψ̄, we get π_f″ψ(x) = 0, which implies that ψ(x) ∈ ker π_f″ = Im f″. Thus for some y ∈ M″ we get ψ(x) = f″(y). But g is onto, so there exists some z ∈ M such that g(z) = y. By commutativity, f″g(z) = ψf(z), and thus ψ(x) = ψf(z). This gives ψ(x − f(z)) = 0 and (x − f(z)) ∈ ker ψ. From exactness of the bottom row, there exists a unique w ∈ N′ such that h(w) = x − f(z). Again by commutativity, π_f(x − f(z)) = π_f h(w) = h̄π_f′(w) ∈ Im h̄. However, π_f ∘ f = 0, resulting in π_f(x) = x̄ ∈ Im h̄. Thus ker ψ̄ ⊆ Im h̄, and hence ker ψ̄ = Im h̄. When ψ is onto, ψ̄ ∘ π_f = π_f″ ∘ ψ is a composition of onto maps, so ψ̄ is onto, and thus

cok f′ --h̄--> cok f --ψ̄--> cok f″ --> 0

is exact.
It remains to show that there exists a well defined module homomorphism δ completing the exact sequence. Let z ∈ ker f″ ⊆ M″. Since g is surjective, there exists a y ∈ M such that g(y) = z. By commutativity, f″g(y) = 0 = ψf(y), so f(y) ∈ ker ψ = Im h by exactness of the bottom row, and f(y) = h(x) for some x ∈ N′. Define δ(z) = x̄ ∈ cok f′.

Now suppose y′ ∈ M also satisfies g(y′) = z. Then (y − y′) ∈ ker g = Im φ, so y − y′ = φ(a) for some a ∈ M′, and by commutativity f(y − y′) = h ∘ f′(a). From above, f(y) = h(x) and f(y′) = h(x′) for some x, x′ ∈ N′. Thus (h ∘ f′)(a) = h(x − x′), implying that (x − x′ − f′(a)) ∈ ker h. Since h is injective, x − x′ = f′(a) ∈ Im f′. Applying π_f′, we get x̄ − x̄′ = 0, so x̄ = x̄′, giving δ as a well defined map.
Suppose that a, b ∈ ker f″ and r ∈ R. By definition of δ, δ(a) = c̄ and δ(b) = d̄, where g(u) = a and f(u) = h(c) for some u ∈ M and c ∈ N′, and likewise g(v) = b and f(v) = h(d) for some v ∈ M and d ∈ N′. Now

g(u + rv) = g(u) + rg(v)
          = a + rb

and

f(u + rv) = f(u) + rf(v)
          = h(c) + rh(d)
          = h(c + rd)

Thus

δ(a + rb) = c̄ + rd̄
          = δ(a) + rδ(b)

resulting in δ being a module homomorphism.
Finally, we need to show exactness involving δ. For z = ḡ(a) where a ∈ ker f, we get z = g(a) by definition of ḡ, and f(a) = 0 = h(0). Hence, by definition of δ, δḡ(a) = 0̄ = 0. Thus δ ∘ ḡ = 0 and Im ḡ ⊆ ker δ.

Let z ∈ ker δ, so δ(z) = 0̄. By the definition of δ, there exists a y ∈ M such that g(y) = z and f(y) = h(x), where δ(z) = x̄. Hence x̄ = 0̄ and x ∈ Im f′, so there exists a u ∈ M′ where f′(u) = x. This results in h(x) = hf′(u) = fφ(u), by commutativity. Replacing y with y − φ(u), we have

g(y − φ(u)) = g(y) − gφ(u)
            = z − 0
            = z

(since gφ = 0 by exactness) and f(y − φ(u)) = f(y) − fφ(u) = h(x) − h(x) = 0, so (y − φ(u)) ∈ ker f, and thus

ḡ(y − φ(u)) = g(y − φ(u))
            = z

This results in z ∈ Im ḡ, giving ker δ ⊆ Im ḡ. Hence Im ḡ = ker δ.
Now for z ∈ ker f″, we have δ(z) = x̄, where g(y) = z and f(y) = h(x), by definition of δ. Thus

h̄δ(z) = h̄(x̄)
      = π_f h(x)
      = π_f f(y)
      = 0

since f(y) ∈ Im f = ker π_f. This results in h̄ ∘ δ = 0 and Im δ ⊆ ker h̄.
Finally, let x̄ ∈ ker h̄, so 0 = h̄(x̄) = π_f h(x); thus h(x) ∈ Im f and h(x) = f(y) for some y ∈ M. Let z = g(y) ∈ M″. Since the square

    M ----g---> M″
    |            |
    f           f″
    v            v
    N ----ψ---> N″

commutes, f″(z) = f″g(y) = ψf(y) = ψh(x) = 0, since ψ ∘ h = 0 by exactness. So z ∈ ker f″ with g(y) = z and h(x) = f(y); thus, by definition, δ(z) = x̄. This gives ker h̄ ⊆ Im δ. Thus ker h̄ = Im δ, completing the proof of the Snake Lemma.
Lemma 2.19 Let B ↣ E ↠ A and B′ ↣ E′ ↠ A′ be two short exact sequences fitting in the commutative diagram

    B ---μ---> E ---ν---> A
    |          |          |
    α          β          γ
    v          v          v
    B′ --μ′--> E′ --ν′--> A′

If any two of the homomorphisms α, β, γ are isomorphisms, then the third is an isomorphism.

Proof:

By the Snake Lemma, there exists the connecting homomorphism δ and an induced diagram

       0        0        0
       |        |        |
       v        v        v
       K1 ----> K2 ----> K3
       |        |        |
       v        v        v
       B -----> E -----> A ---> 0
       |α       |β       |γ
       v        v        v
 0 --> B′ ----> E′ ----> A′
       |        |        |
       v        v        v
       C1 ----> C2 ----> C3
       |        |        |
       v        v        v
       0        0        0

where K1, K2, K3 and C1, C2, C3 are the kernels and cokernels of α, β, γ respectively, such that the sequence

0 --> K1 --> K2 --> K3 --δ--> C1 --> C2 --> C3 --> 0

is exact.
26 Case 1: Now suppose that α and β are isomorphisms. This gives
K1 = K2 = C1 = C2 = 0. Giving the exact sequence
0 --> 0 --> 0 --> K3 --> 0 --> 0 --> C3 --> 0
Since this sequence is exact, K3 = C3 = 0. K3 = 0 if and only if γ is injective and C3 = 0 if and only if γ is surjective. Thus γ is an isomorphism.
Case 2: Now suppose that α and γ are isomorphisms. This gives
K1 = K3 = C1 = C3 = 0. Giving the exact sequence
0 --> 0 --> K2 --> 0 --> 0 --> C2 --> 0 --> 0
Since this sequence is exact, K2 = C2 = 0. Implying, by the same reasoning as above that β is an isomorphism.
Case 3: Now suppose that β and γ are isomorphisms. This gives
K2 = K3 = C2 = C3 = 0. Giving the exact sequence
0 --> K1 --> 0 --> 0 --> C1 --> 0 --> 0 --> 0
Since this sequence is exact, K1 = C1 = 0. Finally implying that α is an isomorphism.
2.4 Projective and Injective Modules
Definition 2.20 An R-module P is projective if for every onto morphism ε : B → C and every homomorphism γ : P → C there exists a module homomorphism β : P → B such that εβ = γ, illustrated by the following commutative diagram:

          P
        /   \
       β     γ
      v       v
     B ---ε--->> C

♦
Definition 2.21 A free module is a module with a basis, where for an R-module M, the set E ⊆ M is a basis for M if, for ri ∈ R, ei ∈ E:

(i) for any m ∈ M, m = r1e1 + r2e2 + ··· + rnen for some n ∈ N, and

(ii) E is independent; that is, r1e1 + r2e2 + ··· + rnen = 0_M, where ei ≠ ej for i ≠ j, if and only if r1 = r2 = ··· = rn = 0_R.

♦
Theorem 2.22 Every module M is a quotient of a free module.

Proof: Let F be the free module generated by the set X, where X is the set of all non-zero elements of M. Then the inclusion i : X → M induces an epimorphism f : F → M defined by f(Σ ri(xi)) = Σ rixi, for ri ∈ R and xi ∈ X. By the first isomorphism theorem, M ≅ F/ker f.
Let F be a free module and {yi : i ∈ I} be a basis for F . Suppose that γ : F → C and ε : B → C are module homomorphisms with ε onto. So
for every i ∈ I there exists some xi ∈ B such that ε(xi) = γ(yi). Now define
β : F → B such that β(Σ(riyi)) = Σrixi. Since {yi : i ∈ I} forms a basis it
follows that β is a well defined homomorphism. So

εβ(Σ riyi) = ε(Σ rixi)
           = Σ riε(xi)
           = Σ riγ(yi)
           = γ(Σ riyi)

Thus γ = εβ, and hence F is projective; that is, free modules are projective modules.
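The lifting in the argument above is entirely constructive: choose a preimage of γ(yi) for each basis element and extend linearly. A minimal Python sketch, using F = Z with basis {1}, ε : Z → Z_6 the (onto) reduction mod 6, and a hypothetical test map γ(k) = 4k mod 6 (these concrete choices are illustrative assumptions, not from the thesis):

```python
n = 6
eps   = lambda x: x % n        # eps: Z -> Z_6, onto
gamma = lambda k: (4 * k) % n  # gamma: Z -> Z_6, an arbitrary test map

# Lift the image of the single basis element 1, then extend linearly.
x0 = next(x for x in range(n) if eps(x) == gamma(1))
beta = lambda k: k * x0        # beta: Z -> Z with beta(1) = x0

assert all(eps(beta(k)) == gamma(k) for k in range(-30, 30))  # eps . beta = gamma
```

Because Z is free on {1}, the single choice of x0 determines β everywhere, which is exactly why the lifted map is well defined.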
Definition 2.23 The short exact sequence

0 --> A --α--> E --β--> B --> 0

is said to split if there exists a pair of homomorphisms α̂ : E → A and β̂ : B → E such that α̂α = 1_A and ββ̂ = 1_B. ♦
Example 2.24 Consider the sequence

0 --> A --i_A--> A ⊕ B --π_B--> B --> 0,

where i_A : A → A ⊕ B is defined by i_A(a) = (a, 0), which is clearly one-to-one, and π_B : A ⊕ B → B is defined by π_B(a, b) = b, which is clearly onto. Thus Im i_A = ker π_B, giving a short exact sequence. We can then reverse the arrows as follows:

0 --> B --i_B--> A ⊕ B --π_A--> A --> 0.

This gives π_A i_A = 1_A and π_B i_B = 1_B. Hence the sequence is a splitting sequence.
Note: The composition of these maps cannot always be reversed; α̂ acts strictly as a left inverse of α and β̂ as a right inverse of β. We then call α̂ and β̂ splitting morphisms.
Our goal from here is to show that for a split short exact sequence

0 --> A --> E --> B --> 0

we have E ≅ A ⊕ B. First we will consider the following lemma.
Lemma 2.25 Given a short exact sequence

0 --> A --α--> E --β--> B --> 0,

the following are equivalent:

(i) there exists a module homomorphism α̂ with α̂α = 1_A;

(ii) there exists a module homomorphism β̂ with ββ̂ = 1_B;

(iii) there exist α̂ and β̂ with αα̂ + β̂β = 1_E;

(iv) the sequence splits.
Proof: Assume there exists α̂ such that α̂α = 1_A and consider the following diagram.

    0 --> A --α--> E --β--> B --> 0
                   |
        1_E − αα̂  |
                   v
                   E

Then

(1_E − αα̂)α = 1_Eα − αα̂α
            = α − α1_A
            = α − α
            = 0

so by Theorem 2.15 we know there exists a β̂ : B → E such that β̂β = 1_E − αα̂. Thus 1_E = αα̂ + β̂β.

Now assume there exists β̂ such that ββ̂ = 1_B and consider the following diagram.

                   E
                   ^
        1_E − β̂β  |
                   |
    0 --> A --α--> E --β--> B --> 0

Then

β(1_E − β̂β) = β1_E − ββ̂β
            = β − 1_Bβ
            = β − β
            = 0

so by Theorem 2.14 we know there exists an α̂ : E → A such that αα̂ = 1_E − β̂β. Thus 1_E = αα̂ + β̂β. This gives (i) ⇒ (iii) and (ii) ⇒ (iii).

Now assume (iii) and apply α on the right of 1_E = αα̂ + β̂β to get

1_Eα = (αα̂ + β̂β)α
     = αα̂α + β̂βα

Exactness implies that βα = 0, and thus 1_Eα = αα̂α. However, 1_Eα = α = α1_A, so α(α̂α) = α(1_A). Since α is a monomorphism, 1_A = α̂α. A similar argument (applying β on the left) gives ββ̂ = 1_B. Thus (iii) ⇒ (i) and (iii) ⇒ (ii), giving (i) ⇔ (iii) ⇔ (ii). Consequently, (i) ⇔ (iv) ⇔ (ii) by the definition of splitting sequences, concluding the proof.
We then prove the following theorems simultaneously.
Theorem 2.26 If the short exact sequence

0 --> A --α--> E --β--> B --> 0

splits, then the sequence

0 --> B --β̂--> E --α̂--> A --> 0

is also a short exact splitting sequence.

Theorem 2.27 If the short exact sequence

0 --> A --α--> E --β--> B --> 0

splits, then

E = Im α ⊕ Im β̂
  = Im α ⊕ ker α̂
  = ker β ⊕ Im β̂

Hence, E ≅ A ⊕ B.
ˆ ˆ Proof: Given e ∈ E, 1E = ααˆ + ββ, from Lemma 2.25, thus e = ααeˆ + ββe. Since ααeˆ ∈ Im α and ββeˆ ∈ Im βˆ, E = Im α + Im βˆ. Now assume e ∈ Im α ∩ Im β.ˆ This would then imply that e = αa, for some a ∈ A, and e = βb,ˆ for some b ∈ B. Resulting in αa = βbˆ . Applying β to each side we have βαa = ββb.ˆ Since the sequence was a short exact splitting sequence, we ˆ ˆ know that βα = 0 and ββ = 1B, thus 0 = b and then e = βb = 0 as well giving Im α ∩ Im βˆ = 0. Hence E = Im α ⊕ Im βˆ. To show kerα ˆ = Im βˆ, suppose that e ∈ kerα, ˆ
e = 1Ee
= ααeˆ + ββeˆ
= ββeˆ
32 Thus kerα ˆ ⊆ Im βˆ. Now conversely, suppose that e = βbˆ . Giving
ˆ ˆ βb = 1Eβb = ααˆβbˆ + ββˆ βbˆ Lemma 2.25:(iii) = ααˆβbˆ + βbˆ Lemma 2.25:(ii) 0 = ααˆβbˆ βbˆ − βbˆ = 0 0 = ααeˆ
Since α is a monomorphism, we find that 0 =αe ˆ . Resulting in Im βˆ ⊆ kerα ˆ and hence Im βˆ = kerα ˆ. ˆ Nowαα ˆ = 1A would imply thatα ˆ is an epimorphism and ββ = 1B would imply that βˆ is a monomorphism. Therefore
  0 → B --β̂--> E --α̂--> A → 0

is a short exact splitting sequence. Finally, α induces an isomorphism A → Im α and β̂ induces an isomorphism B → Im β̂. We then get E ≅ A ⊕ B, thus completing the proofs.
Theorem 2.28 An R-module P is projective if and only if any exact sequence of the form

  0 → A --ψ--> E --φ--> P → 0

splits.
We will first prove the following lemma to aid in the proof of this theorem.
Lemma 2.29 Let A and B be R-modules. If the external direct sum A ⊕ B is free, then A and B are projective.
Proof: Let F = A ⊕ B. By symmetry, we only need to prove that A is projective. Define the maps α : A → F and β : F → A by α(x) = (x, 0), the injection, and β(x, y) = x, the projection, for all x ∈ A and y ∈ B. Note that βα = 1A. Suppose that we have R-module homomorphisms h : A → N and f : M → N, where f is onto. Then hβ : F → N is an R-module homomorphism, and since F is projective there exists an R-module homomorphism γ : F → M such that fγ = hβ. Let g = γα. Then fg = fγα = hβα = h, and thus A is projective.
We will now prove Theorem 2.28.

Proof: Assume P is projective and

  0 → A --ψ--> E --φ--> P → 0

is a short exact sequence. Since φ is onto and P is projective, there exists β : P → E such that

        P
     β / \ 1P
      v   v
  E ----> P
      φ

commutes. Hence φβ = 1P and by Lemma 2.25 the sequence splits. Now assume every short exact sequence
  0 → A --ψ--> E --φ--> P → 0
splits. Since every R-module is a quotient of a free module by Theorem 2.22,
we have a free module F and an onto R-module homomorphism f2 : F → P .
Let K = ker f2. Thus we have the exact sequence
  0 → K --f1--> F --f2--> P → 0
which, by assumption, splits. Thus F ≅ P ⊕ K, implying that P ⊕ K is free, and by Lemma 2.29, P is projective.
Example 2.30 For a ring R with e ∈ R an idempotent (e² = e), we would like to show R = Re ⊕ R(1 − e). Since for any r ∈ R, r = re + r(1 − e), we get that R = Re + R(1 − e). Now if z ∈ Re ∩ R(1 − e), then z = ve for some v ∈ R and z = w(1 − e) for some w ∈ R. From z = ve we get
z  = ve
ze = (ve)e
   = ve²
   = ve
   = z
Now from z = w(1 − e)
z  = w(1 − e)
ze = w(1 − e)e
   = w(e − e²)
   = w(e − e)
   = 0
This results in z = ze = 0 and thus Re ∩ R(1 − e) = 0, making R = Re ⊕ R(1 − e). Since R is a free R-module with basis {1R}, Lemma 2.29 then gives that Re and R(1 − e) are both projective.
Now for

  R = [ F  F ]
      [ 0  F ]

where F is a field, let

  e = [ 1  0 ]    so that    Re = [ F  0 ]
      [ 0  0 ]                    [ 0  0 ]

As a vector space over F, R has dimension 3. However, any free R-module with a basis of size n would be a vector space of dimension 3n over F. But Re is one-dimensional, meaning that Re is not free. It is, however, still projective, so the concept of projective is more general than free.
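The decomposition above can be checked in a concrete case. The following sketch (a hypothetical example, not from the text) verifies R = Re ⊕ R(1 − e) for R = Z/6Z with the idempotent e = 3:

```python
# Hedged sketch: checking R = Re + R(1-e) with Re ∩ R(1-e) = {0}
# in the concrete ring R = Z/6Z, where e = 3 is idempotent (3*3 = 9 ≡ 3 mod 6).
R = range(6)
e = 3
assert (e * e) % 6 == e  # e is idempotent

Re = {(r * e) % 6 for r in R}          # the ideal Re
Rf = {(r * (1 - e)) % 6 for r in R}    # the ideal R(1-e)

# every r decomposes as re + r(1-e)
for r in R:
    assert r == ((r * e) % 6 + (r * (1 - e)) % 6) % 6

assert Re & Rf == {0}                                    # intersection is {0}
assert {(a + b) % 6 for a in Re for b in Rf} == set(R)   # Re + R(1-e) = R
```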
Definition 2.31 An R-module J is injective if for every one-to-one homomorphism ε : A → B and every homomorphism γ : A → J there exists a module homomorphism β : B → J such that βε = γ. Illustrated by the following commutative diagram:

       ε
  A ------> B
   \       /
  γ \     / β
     v   v
       J          ♦
Example 2.32 Q, the rational numbers, is an injective Z-module.[2, p. 41]
Chapter 3
Categories
To be able to discuss the ideas of homology, it is also necessary for the reader to be familiar with the concepts of categories. Categories give us the ability to discuss more general ideas; for example, one could not talk about the set of all sets without running into Russell's Paradox. The following sections focus on the ideas and topics necessary for this thesis and should not be viewed as an overview of the subject of category theory.
Definition 3.1 A category C consists of a class of objects and sets of mor- phisms between those objects. For every ordered pair A, B of objects there
is a set HomC(A, B) of morphisms from A to B, and for every ordered triple A, B, C of objects there is a law of composition of morphisms, a map
HomC(A, B) × HomC(B,C) −→ HomC(A, C)
where (f, g) 7→ gf, and gf is called the composition of g with f. The objects and morphisms must satisfy the following axioms: for objects A, B, C, and D:
(i) if A ≠ C or B ≠ D, then HomC(A, B) and HomC(C, D) are disjoint sets,
(ii) composition of morphisms is associative: h(gf) = (hg)f for every f ∈ HomC(A, B), g ∈ HomC(B, C), and h ∈ HomC(C, D),
(iii) each object has an identity morphism, i.e., for every object A there
is a morphism 1A ∈ HomC(A, A) such that f1A = f for every
f ∈ HomC(A, B) as well as 1Ag = g for every g ∈ HomC(C,A).
♦
When discussing the idea of HomC(−, −), we shall suppress the subscript when it is clear which category the objects are coming from. Also, a morphism from object A to object B will be denoted f : A → B or A --f--> B. By definition, A is the domain of the morphism while B is the codomain. A morphism f : A → B is an isomorphism if there exists a morphism g : B → A such that gf = 1A and fg = 1B.

Naturally, one can extend this to the idea of subcategories. A subcategory C of a category D occurs when every object of C is also an object of D and, given A, B ∈ C, we have HomC(A, B) ⊆ HomD(A, B). Furthermore, C is called a full subcategory when HomC(A, B) = HomD(A, B) for all objects A and B in C.
Example 3.2 Set is the category of all sets. Given two sets A and B, Hom(A, B) is the set of all functions from A to B. Composition of morphisms works the same as composition of functions and the identity in Hom(A, A) is the identity function on the set A. This category contains the category of all finite sets as a full subcategory.
Grp is the category of all groups. Here the objects are groups and the morphisms are group homomorphisms. Since the composition of group homomorphisms is also a group homomorphism, all of the axioms are satisfied. Similarly to Set, the category of all abelian groups, Ab, is a full subcategory of Grp. Ring is also a category. The objects of Ring are nonzero rings with 1 and the morphisms are ring homomorphisms that send 1 to 1.
Given a fixed ring R, the category R-Mod consists of all left modules with morphisms being R-module homomorphisms.
Example 3.3 Let R be a ring with unity. We will define a category as follows:
• R is the only object,
• morphisms are the elements of R.
The identity morphism is the multiplicative identity and composition of a and b is defined by multiplication of a and b. Also, for a, b, c ∈ R we have (ab)c = a(bc) by definition of a ring. Thus we satisfy all of the axioms of a category. We will denote this category by R − cat.
Example 3.4 Directed Multigraphs
Here we introduce the concept of directed graphs. Directed graphs (or digraphs) will act as an underlying example of the theory presented throughout this thesis.
Definition 3.5 A directed graph is an ordered pair, G = (V,E), where
• V is a set denoting the vertices,
• E ⊆ V × V is a set of ordered pairs denoting the edges.
♦
Each edge can be denoted as e = (x, y) as the edge from vertex x to vertex y. When using multiple edges from one vertex to another we will la- bel them using subscripts. For the multiple edges from x to y, these will be
denoted by (x, y)1, (x, y)2, . . . , (x, y)i. A path is a finite string of consecutive edges. Define the empty path at a vertex x as the edge (x, x), an edge that goes back to itself. So for example:

V = {1, 2, 3, 4, 5}
E = {(1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (1, 2), (2, 1), (1, 4), (4, 5)1, (4, 5)2, (2, 5), (5, 3)}
[Figure: the digraph with vertices 1 through 5 and the edges listed above.]
Given a digraph, we can create a new completed digraph. Paths in the original digraph are redrawn via concatenation as follows: (1, 2)||(2, 3) = (1, 3).
the path 1 → 2 → 3 becomes the single edge 1 → 3. The empty path then becomes the empty edge. For example, a digraph G on the vertices 1, 2, 3, 4 [figure omitted] becomes its completed graph, which has an additional edge for every path of G. This new digraph is drawn in exactly the same way as the original digraph. We can then create these completed graphs.
Each of these completed digraphs forms a category where the objects are the vertices, and the morphisms are the paths. The empty path acts as the identity on each object. This category is known as the Quiver of G. Denoted as Quiv(G).
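Composition in Quiv(G), which is concatenation of consecutive edges, can be sketched as follows (the edge names are illustrative only):

```python
# Hedged sketch: paths in a digraph composed by concatenation, collapsing
# to edges of the completed graph as in (1,2)||(2,3) = (1,3).
def concat(e1, e2):
    (x, y1), (y2, z) = e1, e2
    assert y1 == y2, "edges must be consecutive to compose"
    return (x, z)

assert concat((1, 2), (2, 3)) == (1, 3)
# the empty path (x, x) acts as the identity on both sides
assert concat((1, 1), (1, 2)) == (1, 2)
assert concat((1, 2), (2, 2)) == (1, 2)
```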
3.1 Functor
Given two categories C and D, it is natural to ask whether there is some way to pass from one category to the other.
Example 3.6 Given the category Grp, one could change it to a subcategory of Set by forgetting the group structure of its objects; in other words, forgetting that all of these objects are groups and the morphisms are homomorphisms. This is called a forgetful functor.
Definition 3.7 Let C and D be categories. A covariant functor, F : C → D is a mapping that
• associates to each object X ∈ C an object F(X) ∈ D
• associates to each morphism f : X → Y in C a morphism F(f) : F(X) → F(Y) in D, such that
(1) F(1X ) = 1F(X) for every object X ∈ C
(2) F(f ◦g) = F(f)◦F(g) for all morphisms g : X → Y and f : Y → Z in C.
♦
In other words, a functor is a map that preserves identity morphisms and composition of morphisms. When clear, we will drop the parentheses and bold facing on the functor, so that F(f) = Ff and F(X) = FX. The idea of a contravariant functor is the same; however, the composition is reversed in axiom (2): F(f ◦ g) = Fg ◦ Ff.
Example 3.8 HomR(M, −)
We saw earlier that Hom sets for R-modules carry an abelian group structure, so Hom can in fact act as a functor from R-Mod into Ab.

Taking a closer look at the functor F = HomR(M, −) : R-Mod → Ab: each object N is sent to the abelian group HomR(M, N). Given a module homomorphism ψ : N → X, F sends ψ to the group homomorphism ψ∗ : Hom(M, N) → Hom(M, X) defined by ψ∗(φ) = ψφ. Given two composable module homomorphisms
  N --ψ1--> X --ψ2--> Y

we see that

F(ψ2ψ1)(φ) = (ψ2ψ1)∗(φ)
           = ψ2ψ1φ
           = ψ2(ψ1φ)
           = ψ2∗(ψ1∗(φ))
           = (Fψ2)(Fψ1)(φ)

giving F(ψ2ψ1) = (Fψ2)(Fψ1). Now for ψ = IN, where IN is the identity homomorphism on the module N, IN∗(φ) = INφ = φ for all φ ∈ Hom(M, N), so IN∗ is the identity on Hom(M, N). Thus F is a covariant functor.
Considering G = HomR(−, N): each module M is sent to the corresponding abelian group Hom(M, N), and each module homomorphism ε : M → X is sent to ε∗ defined by ε∗(φ) = φε. Here we have G(ψ2ψ1) = (Gψ1)(Gψ2), making Hom(−, N) a contravariant functor. This leads to the following definition.
Definition 3.9 Given two categories A and B, F(−, −) is a bifunctor if it is a functor in each argument, with the appropriate variances. Furthermore, if F is covariant in both arguments, then for morphisms A → A′ and B → B′ we require the following commutative diagram.

  F(A, B) ------> F(A, B′)
     |                |
     v                v
  F(A′, B) -----> F(A′, B′)
Reversing the appropriate arrows for contravariance when necessary. ♦
Hom(−, −) as a bifunctor will give us the following commutative diagram.

  Hom(M, N) ------> Hom(M, N′)
      ^                  ^
      |                  |
  Hom(M′, N) -----> Hom(M′, N′)

Example 3.10 Let G be a digraph. Define the functor

  F : Quiv(G) → VecF

from the category of the quiver of G into the category of vector spaces over a field F as follows:
• each vertex i is sent to a vector space Vi over F,
• each edge ej = (ik, il) is sent to a linear transformation Lj : Vk → Vl.
To check that composition is preserved, consider the sub-digraph 1 → 2 → 3. F will then map this to

  V1 --L1--> V2 --L2--> V3

With the concatenation of e1 = (1, 2) and e2 = (2, 3) we get

F(e1||e2) = F((1, 3)) = L3.

Since Quiv(G) is created from the completed graph of G, we can define L3 = L2L1, the composition of linear transformations being itself a linear transformation. Thus

F(e1||e2) = L2L1 = F(e2)F(e1),

meaning composition is preserved. We can map each empty path to the identity IVi, preserving identities, thus confirming that F is a functor.

Here F(e1||e2) = F(e2)F(e1) represents the "steps" of a linear transformation at each stage, illustrated on a digraph much like a flow chart. Such functors are called the vector space representations of G and are denoted as RepF(G).
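Such a representation can be sketched numerically. The matrices below are made up for illustration, assuming F is the real numbers:

```python
import numpy as np

# Hedged sketch: a representation of the quiver 1 -> 2 -> 3,
# with illustrative matrices L1 : V1 -> V2 and L2 : V2 -> V3.
L1 = np.array([[1.0, 0.0],
               [0.0, 2.0],
               [1.0, 1.0]])      # V1 = R^2 -> V2 = R^3
L2 = np.array([[1.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])  # V2 = R^3 -> V3 = R^2

# the completed graph's edge (1,3) must be sent to the composite,
# so F(e1||e2) = F(e2) F(e1) as a matrix product
L3 = L2 @ L1
v = np.array([1.0, 1.0])
assert np.allclose(L3 @ v, L2 @ (L1 @ v))  # same "flow chart" result
```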
Definition 3.11 For categories C and D and a functor F : C → D with HomC(C1, C2) and HomD(D1, D2) abelian groups for any C1, C2 in C and any D1, D2 in D, F is called an additive functor if F(φ1 + φ2) = F(φ1) + F(φ2) for φ1, φ2 ∈ HomC(C1, C2). In other words, F acts as an abelian group homomorphism

  F : Hom(C1, C2) → Hom(FC1, FC2).          ♦
Example 3.12 Additive functors from R-cat into abelian groups

Given an additive functor F : R-cat → Ab, then:

• FR = M, where M is an abelian group,

• for a ∈ R, set Fa = la; then la is an additive homomorphism M → M.
Here F(ab) = FaFb means that we can take any m ∈ M and find how b acts on m then how a acts on that, or we may simply find how ab acts on m. So we have an action of R on the abelian group M. Thus for a ∈ R and m ∈ M, we can define
am = la(m) (3.5)
45 Given a, b ∈ R and m, n ∈ M we have
(a + b)m = F(a + b)(m)
= (Fa + Fb)(m)
= Fa(m) + Fb(m)
= am + bm
(ab)m = F(ab)(m)
      = Fa(Fb(m))
      = Fa(bm)
      = a(bm)
a(m + n) = Fa(m + n)
         = Fa(m) + Fa(n)
         = am + an

since Fa = la is an additive homomorphism.
Finally, for 1 ∈ R we get 1m = F1(m) = m, thus satisfying all the axioms of a left R-module.
Conversely, given any left R-module M, we can define an additive functor FM : R-cat → Ab as follows:
• FM R = M,
• FM a = la for a ∈ R, where la(m) = am.
Hence, one can identify additive functors on R − cat with the category of left R-modules.
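This identification can be illustrated computationally. The sketch below (with the hypothetical choice R = Z/6Z acting on the abelian group M = Z/6Z) realizes an additive functor as the left multiplications l_a and checks the functor and module axioms:

```python
# Hedged sketch: an additive functor on R-cat for R = Z/6Z, realized as
# left multiplications l_a on the abelian group M = Z/6Z.
n = 6
def l(a):
    return lambda m: (a * m) % n     # F sends the morphism a to l_a

for a in range(n):
    for b in range(n):
        for m in range(n):
            # F(ab) = F(a)F(b): acting by ab equals acting by b, then by a
            assert l((a * b) % n)(m) == l(a)(l(b)(m))
            # additivity in the morphism: l_{a+b} = l_a + l_b
            assert l((a + b) % n)(m) == (l(a)(m) + l(b)(m)) % n

assert all(l(1)(m) == m for m in range(n))   # F(1) is the identity
```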
46 3.2 Natural Transformations
Definition 3.13 If F and G are functors between categories C and D, then a natural transformation η : F → G associates to every object X in C a morphism ηX : FX → GX between objects of D, such that for every morphism f : X → Y in C the diagram

       ηX
  FX ------> GX
   |          |
 Ff|          |Gf
   v          v
  FY ------> GY
       ηY

is commutative. Furthermore, if ηX is an isomorphism for each object X, then we refer to η as a natural equivalence. ♦
Example 3.14 Let G be a group and define the category G − cat as follows.
• ? is the only object,

• the morphisms are the elements of G.
For the identity functor 1 : G-cat → G-cat, a natural transformation η is an element g ∈ G such that for all h ∈ G the square

       g
  ? ------> ?
  h|        |h
   v        v
  ? ------> ?
       g

commutes. In other words, hg = gh, so Hom(1, 1) is the center of G.
Example 3.15 Using the functors F = Hom(M, −) and G = Hom(N, −) for modules M and N and a module homomorphism φ : N → M, we can define ηX : Hom(M, X) → Hom(N, X) by ηX(ψ) = ψφ. For this to be a natural transformation we need, for any λ : X → Y, the following square to commute:

              Fλ
  Hom(M, X) ------> Hom(M, Y)
     |ηX               |ηY
     v                 v
  Hom(N, X) ------> Hom(N, Y)
              Gλ

Given ψ ∈ Hom(M, X) we obtain the following diagram.

  ψ  |----> λψ
  |          |
  v          v
  ψφ |----> λ(ψφ) = (λψ)φ

Thus the square commutes, giving a natural transformation.
Example 3.16 Natural transformation in Quiv(G)
Given two representations R and S of a quiver of G, a natural transformation η : R → S consists of linear transformations ni (i a vertex of G) such that the following diagram commutes for all i, j and every edge α : i → j.

        ni
  R(i) ------> S(i)
   |            |
R(α)|           |S(α)
   v            v
  R(j) ------> S(j)
        nj
• G with one edge α : •1 → •2. The representation R would look like

  V1 --R--> V2

(here R will denote both the representation and the linear transformation). Likewise S would be

  W1 --S--> W2

A natural transformation η : R → S would then consist of an ordered pair of linear transformations A and B such that BR = SA, i.e. the following diagram commutes.

       R
  V1 ------> V2
  A|         |B
   v         v
  W1 ------> W2
       S
• G with two edges α1 : •1 → •2 and α2 : •2 → •3. The representation R would be

  V1 --R1--> V2 --R2--> V3

and the representation S would be

  W1 --S1--> W2 --S2--> W3

Here η : R → S would be an ordered triple (n1, n2, n3) of linear transformations such that

       R1         R2
  V1 ------> V2 ------> V3
  n1|        n2|        n3|
    v          v          v
  W1 ------> W2 ------> W3
       S1         S2

commutes. We can then form a new category, called the category of F-representations of G. Here the objects are the functors Quiv(G) → VecF and the morphisms are the natural transformations. For each functor the identity natural transformation makes sense, and composition of natural transformations results by composing the linear transformations at each vertex.
Example 3.17 Given two functors Fi : R-cat → Ab (i = 1, 2), we want to describe a natural transformation η from F1 to F2. Let M1 = F1R and M2 = F2R. For each a ∈ R we want the square

        n
  M1 ------> M2
   |          |
F1a|          |F2a
   v          v
  M1 ------> M2
        n

to commute. This implies that n is a morphism in Ab, and thus additive, and in turn gives us the following diagram.

  x  |----> n(x)
  |          |
  v          v
  ax |----> y

Using the top side first we see that y = a·n(x), and from the left side y = n(ax). So a·n(x) = n(ax) for all a ∈ R and all x ∈ M1, implying that the natural transformation is just an R-module homomorphism. Hence, from Example 3.12, the category of all functors R-cat → Ab (where the objects are functors and the morphisms are the natural transformations) is equivalent to the category of left R-modules: Func(R-cat, Ab) ≅ R-Mod.
3.3 Products and Coproducts
Consider the following dual notions. Let C be a category with objects X1 and X2. An object X is the product of X1 and X2, denoted X1 × X2, provided it satisfies the following universal property: there exist morphisms π1 and π2, called the canonical projection morphisms, such that for every object Y and pair of morphisms f1 : Y → X1 and f2 : Y → X2 there exists a unique morphism f : Y → X such that the following diagram commutes.

            Y
      f1  / |f \  f2
         v  v   v
  X1 <--- X1 × X2 ---> X2
       π1          π2

Similarly, an object X is the coproduct of X1 and X2, denoted X1 ⊕ X2, provided it satisfies the dual universal property: there exist morphisms i1 and i2 such that for every object Y with f1 : X1 → Y and f2 : X2 → Y there exists a unique morphism f : X1 ⊕ X2 → Y such that the following diagram commutes.

       i1             i2
  X1 ---> X1 ⊕ X2 <--- X2
      \      |f      /
    f1 \     v      / f2
        > -- Y <---

Example 3.18 Products and coproducts in Set
For a family of sets Xi the product is simply the cartesian product of the sets, resulting in Y = Π_{i∈I} Xi = {(xi)_{i∈I} | xi ∈ Xi for all i}, with each πj : Π_{i∈I} Xi → Xj defined by πj((xi)_{i∈I}) = xj. Given a set Y with any family of functions fi : Y → Xi for i ∈ I, the universal arrow f : Y → Π_{i∈I} Xi is defined by f(y) = (fi(y))_{i∈I}.

The coproduct of a family of sets Xi in Set is simply the disjoint union of the sets, with each ij the inclusion map.
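These constructions can be sketched for finite sets; the sets and maps below are made up for illustration:

```python
# Hedged sketch: product and coproduct of two finite sets in Set,
# with projections/inclusions and the induced universal arrows.
X1, X2 = {'a', 'b'}, {0, 1, 2}

# product: cartesian product with projections
product = {(x1, x2) for x1 in X1 for x2 in X2}
pi1 = lambda p: p[0]
pi2 = lambda p: p[1]

# given f1 : Y -> X1 and f2 : Y -> X2, the unique arrow f : Y -> X1 x X2
Y = {10, 20}
f1 = {10: 'a', 20: 'b'}
f2 = {10: 0, 20: 2}
f = {y: (f1[y], f2[y]) for y in Y}
assert all(pi1(f[y]) == f1[y] and pi2(f[y]) == f2[y] for y in Y)  # commutes

# coproduct: disjoint union via tagging, with inclusions i1, i2
i1 = lambda x: (0, x)
i2 = lambda x: (1, x)
coproduct = {i1(x) for x in X1} | {i2(x) for x in X2}
assert len(coproduct) == len(X1) + len(X2)
```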
Example 3.19 Products and coproducts in R − Mod
In the category R-Mod, we construct the product as the cartesian product of two modules. For given modules M and N, the product is, as a set, the ordinary cartesian product M × N, where

           P
     f1  / |f \  f2
        v  v   v
   M <--- M × N ---> N
       π1        π2

commutes, with π1 : M × N → M defined by π1(x1, x2) = x1, π2 : M × N → N defined by π2(x1, x2) = x2, and f(x) = (f1(x), f2(x)). The coproduct is constructed using the external direct sum of modules, which as a set is again the ordinary cartesian product but denoted M ⊕ N, where

       i1          i2
   M ---> M ⊕ N <--- N
      \      |f     /
    f1 \     v     / f2
        > -- C <--

commutes, with i1 : M → M ⊕ N defined by i1(x) = (x, 0), i2 : N → M ⊕ N defined by i2(y) = (0, y), and f(x1, x2) = f1(x1) + f2(x2).
3.4 Kernels and Cokernels
Before we can discuss the universal property of kernel and cokernel, we will need to define a zero morphism.
Definition 3.20 Suppose C is a category and f : X → Y is a morphism in C. The morphism f is called a constant morphism if for any object W in C and any morphisms g, h ∈ HomC(W, X), fg = fh. Dually, f is called a coconstant morphism if for any object Z in C and any g, h ∈ HomC(Y, Z), gf = hf. A zero morphism refers to a morphism which is both constant and coconstant. ♦
Definition 3.21 Suppose C is a category. An object I is called an initial object if for every object X in C there exists exactly one morphism I → X. Dually, a terminal object T has precisely one morphism X → T for every object X. An object that is both terminal and initial is called a zero object. ♦
If C has a zero object 0, then given two objects X and Y in C there are canonical morphisms f : 0 → X and g : Y → 0, and fg is a zero morphism in HomC(Y, X).

A category with zero morphisms is one where, for any two objects A and B in C, there is a fixed morphism 0AB : A → B such that for all objects X, Y, Z in C and all morphisms f : Y → Z and g : X → Y the following diagram commutes.

       0XY
  X -------> Y
  |   \      |
 g|    \0XZ  |f
  v     \    v
  Y -------> Z
       0YZ

The morphisms 0XY are forced to be zero morphisms and form a compatible system of such. If C is a category with zero morphisms, then the collection of the 0XY is unique. A category with a zero object is a category with zero morphisms, given by the compositions 0XY : X → 0 → Y.
Example 3.22 In the category R-Mod a zero morphism is a module homomorphism f : G → H that maps all of G to the identity element of H. The zero object in R-Mod is the trivial module.
Example 3.23 The zero object in Rep(G) would be the zero vector space (the vector space only containing the zero vector) at each vertex and the zero
53 transformation (the linear transformation that sends everything to the zero vector) for each edge.
Definition 3.24 Let C be a category with zero morphisms. A kernel of a morphism f : X → Y is a morphism k : K → X with the following universal property:

• f ◦ k is the zero morphism from K to Y:

  K --k--> X --f--> Y,   f ◦ k = 0KY,

• given any morphism k′ : K′ → X such that f ◦ k′ is the zero morphism, there is a unique morphism u : K′ → K such that k ◦ u = k′, resulting in the following commutative diagram.

  K′
  u|  \ k′
   v    v
   K --k--> X --f--> Y

♦
Example 3.25 Kernels in Rep(G)

The kernel in Rep(G) is computed at each vertex i. So given the commutative diagram

                     ni
  ker(ni) --> R(i) ------> S(i)
                |            |
            R(α)|            |S(α)
                v     nj     v
  ker(nj) --> R(j) ------> S(j)

where ker(ni) is the "standard" kernel of a linear transformation, there exists a unique linear transformation ker(ni) → ker(nj), defined by the restriction of R(α) to ker(ni) and denoted ker α. Thus ker α is an object of Rep(G).
The dual notion of cokernel is as follows:
Definition 3.26 Let C be a category with zero morphisms. The cokernel of a morphism f : X → Y is an object C together with a morphism c : Y → C with the following universal property:

• the diagram

  X --f--> Y --c--> C,   c ◦ f = 0XC,

commutes,

• moreover, any other such c′ : Y → C′ with c′ ◦ f = 0XC′ can be obtained by composing c with a unique morphism u : C → C′:

  X --f--> Y --c--> C
             \      |
           c′ \     |u
               v    v
                 C′

♦
3.5 Pull-Backs and Push-outs
Here I will restrict my discussion of category theory to the category R-Mod, since the development of the functor Ext will be done with modules.
Definition 3.27 Given module homomorphisms φ : A → X and ψ : B → X, a pull-back of (φ, ψ) is an R-module Y with a pair of module homomorphisms α : Y → A and β : Y → B such that φα = ψβ, giving the following commutative diagram:

       α
  Y ------> A
  β|        |φ
   v        v
  B ------> X
       ψ

It must also have the following universal property: given γ : Z → A and δ : Z → B with φγ = ψδ, there exists a unique module homomorphism ξ : Z → Y with δ = βξ and γ = αξ, so that

  Z
   \ ξ       α
    v   Y ------> A        γ = αξ,
        |β        |φ
        v         v        δ = βξ
        B ------> X
             ψ

commutes.
Definition 3.28 Given module homomorphisms φ : X → A and ψ : X → B, a push-out of (φ, ψ) is a module Y with a pair of module homomorphisms α : A → Y and β : B → Y such that αφ = βψ, giving the following commutative diagram:

       φ
  X ------> A
  ψ|        |α
   v        v
  B ------> Y
       β

It must also have the following universal property: given δ : A → Z and γ : B → Z with δφ = γψ, there exists a unique module homomorphism ξ : Y → Z with δ = ξα and γ = ξβ, so that

       φ
  X ------> A
  ψ|        |α        δ = ξα,
   v        v
  B ------> Y --ξ--> Z        γ = ξβ
       β

commutes.
Theorem 3.29 Pull-backs (or push-outs) are unique up to isomorphism.
Proof: Given two pull-backs Y1 and Y2, we have the following commutative squares.
      α1               α2
 Y1 -----> A      Y2 -----> A
 β1|       |φ     β2|       |φ
   v       v        v       v
   B ----> X        B ----> X
      ψ                ψ

We can then create the following commutative diagram
  Y2
   | \  α2
 ξ1|  \
   v    v
   Y1 --α1--> A
   |β1        |φ
   v          v
   B --------> X
        ψ

(with β2 = β1ξ1 down the left), where ξ1 : Y2 → Y1 is unique. We can also create another diagram with Y1 and Y2 reversed, resulting in another unique morphism ξ2. Now consider the following commutative diagram.
  Y1 --ξ2--> Y2 --ξ1--> Y1 --α1--> A
                         |β1       |φ
                         v         v
                         B ------> X
                              ψ

Considering the outer path, the composite ξ1ξ2 : Y1 → Y1 satisfies α1(ξ1ξ2) = α2ξ2 = α1 and β1(ξ1ξ2) = β2ξ2 = β1. Since also α1 1Y1 = α1 and β1 1Y1 = β1, uniqueness implies that ξ1ξ2 = 1Y1. By symmetry we find that ξ2ξ1 = 1Y2. Thus, from the fact that these morphisms "undo" each other and are unique, we see that ξ1 and ξ2 are isomorphisms and hence Y1 ≅ Y2. (An analogous argument can be made for push-outs.) Therefore, we can refer to the pull-back (or push-out).
Example 3.30 Pull-back and Push-out in R − Mod
Given the diagram

       B
       |g
       v
  A --f--> X

form A ⊕ B and define the subset E = {(a, b) | f(a) = g(b)}. Here E is the kernel of [f, −g] : A ⊕ B → X, so E is a submodule of A ⊕ B. Now define maps α : E → A and β : E → B by α(a, b) = a and β(a, b) = b respectively. Clearly these are module homomorphisms. For (a, b) ∈ E we have fα(a, b) = fa and gβ(a, b) = gb. However, by definition of E, fa = gb, so the following square commutes.

       β
  E ------> B
  α|        |g
   v        v
  A ------> X
       f
Suppose there exist α′ : E′ → A and β′ : E′ → B with fα′ = gβ′. We need to construct a unique ξ : E′ → E with β′ = βξ and α′ = αξ, such that

  E′
   \ ξ       β
    v   E ------> B
        |α        |g
        v         v
        A ------> X
             f

commutes. For ξ(e′) = (a, b) ∈ E, we would need

β′(e′) = βξ(e′) = β(a, b) = b

and similarly α′(e′) = a. This results in ξ being defined as

ξ(e′) = (α′(e′), β′(e′)).

Since fα′ = gβ′, we also get ξ(e′) ∈ E. Thus ξ is the unique morphism making the diagram commute.
Now given

       g
  A ------> C
  f|
   v
  B

let S = {(f(a), −g(a)) ∈ B ⊕ C | a ∈ A}. Since addition is defined componentwise and f and g are module homomorphisms, we can see that S is a submodule of B ⊕ C. Now define E = (B ⊕ C)/S (giving E as the cokernel of [f, −g]T : A → B ⊕ C), and maps α : B → E and β : C → E by b → (b, 0) + S and c → (0, c) + S for b ∈ B and c ∈ C. One can see these are module homomorphisms. Now given a ∈ A, α sends f(a) to (f(a), 0) + S. Since (f(a), −g(a)) ∈ S, we get (f(a), 0) + S = (0, g(a)) + S, which implies that αf(a) = βg(a). Thus βg = αf, and we have created the following commutative square.

       g
  A ------> C
  f|        |β
   v        v
  B ------> E
       α
Suppose there exist α′ : B → E′ and β′ : C → E′ such that β′g = α′f. We again need to create a ξ : E → E′ such that the following diagram commutes.

       g
  A ------> C
  f|        |β   β′
   v        v      \
  B --α--> E --ξ--> E′
   \_______________^
          α′

Then for b ∈ B we have

α′(b) = ξα(b) = ξ((b, 0) + S)

and for c ∈ C we get β′(c) = ξ((0, c) + S). This would imply that ξ((b, c) + S) = ξ(((b, 0) + S) + ((0, c) + S)) = α′(b) + β′(c). Thus ξ is uniquely defined, giving the desired commutativity.
Notice that the pull-back requires kernels and products and the push- out requires cokernels and coproducts.
Now consider the following pull-back diagram of modules:
       α
  K ------> 0
  δ|        |φ
   v        v
  A ------> B
       ψ

Given M --ν--> A --ψ--> B, where ψν = 0, we get the following commutative diagram.

  M
   \ ξ       α
    v   K ------> 0
        |δ        |
        v         v
        A ------> B
             ψ

Hence we get a unique homomorphism ξ : M → K such that ν = δξ and αξ = 0, which satisfies the universal property of the kernel. Thus K → A is the kernel of A → B.
For the dual notion, consider the following push-out of modules.

       ψ
  A ------> B
  0|        |δ
   v        v
  0 ------> L
       α

Now given A --ψ--> B --ν--> M where νψ = 0, we get the following commutative diagram.

       ψ
  A ------> B
  0|        |δ   ν
   v        v     \
  0 --α--> L --ξ--> M

Hence we get a unique homomorphism ξ : L → M such that ν = ξδ and 0 = ξα, which satisfies the universal property of the cokernel. Thus B → L is the cokernel of A → B.
Now consider

        A
        |
        v
  B --> 0

Here the pull-back is simply the product A × B. Using π1 : A × B → A and π2 : A × B → B as the projection maps, the universal property of the pull-back says there exists a unique ξ : M → A × B such that

  M
   \ ξ          π1
    v   A × B ------> A
        |π2           |
        v             v
        B ----------> 0

commutes. This gives the following commutative diagram,

            M
      /     |ξ     \
     v      v       v
  A <-π1- A × B -π2-> B

thus satisfying the universal property of the product. Dually, the push-out of
  0 ---> A
  |
  v
  B

using the injection maps will yield the coproduct (direct sum):

  0 ---> A
  |      |i1
  v      v
  B ---> A ⊕ B
     i2

By the universal property of the push-out, we get a unique ξ : A ⊕ B → M such that

  0 ---> A
  |      |i1  \
  v      v     v
  B ---> A ⊕ B --ξ--> M
     i2

commutes. This gives the following commutative diagram,

      i1            i2
  A ----> A ⊕ B <---- B
     \      |ξ      /
      \     v      /
       > -- M <---

thus satisfying the universal property of the coproduct.
At this point it is natural to ask how these ideas will apply to the notion of sequences of modules. We now present the following theorem.
Theorem 3.31 The square

       α
  Y ------> A
  β|        |φ
   v        v
  B ------> X
       ψ

is a pull-back diagram if and only if the sequence

  0 → Y --{α,β}--> A ⊕ B --<φ,−ψ>--> X

is exact, where {α, β} is defined by {α, β}(y) = (α(y), β(y)) and <φ, −ψ> is defined by <φ, −ψ>(a, b) = φ(a) − ψ(b).
Proof: Assuming

       α
  Y ------> A
  β|        |φ
   v        v
  B ------> X
       ψ

is a pull-back diagram, we have Y = ker <φ, −ψ> from Example 3.30. Thus Im {α, β} = ker <φ, −ψ>.

Now assuming

  0 → Y --{α,β}--> A ⊕ B --<φ,−ψ>--> X

is exact, we have <φ, −ψ> ◦ {α, β} = 0, giving the commutative square

       α
  Y ------> A
  β|        |φ
   v        v
  B ------> X
       ψ

Now suppose there exists a module M with morphisms δ : M → A and γ : M → B such that

       δ
  M ------> A
  γ|        |φ
   v        v
  B ------> X
       ψ

commutes. That would imply <φ, −ψ> ◦ {δ, γ} = 0. By the universal property of the kernel, there exists a unique ξ : M → Y induced by Y → A ⊕ B, giving

  M
  ξ| \ {δ,γ}
   v    v
   Y --{α,β}--> A ⊕ B --<φ,−ψ>--> X

It then follows that {α, β} ◦ ξ = {δ, γ}, and thus

       α
  Y ------> A
  β|        |φ
   v        v
  B ------> X
       ψ

is a pull-back diagram.
The dual notion is as follows.
Theorem 3.32 The square

       α
  X ------> A
  β|        |φ
   v        v
  B ------> Y
       ψ

is a push-out diagram if and only if the sequence

  X --{α,β}--> A ⊕ B --<φ,−ψ>--> Y → 0

is exact.

Proof: Assume that

       α
  X ------> A
  β|        |φ
   v        v
  B ------> Y
       ψ

is a push-out diagram. Then from Example 3.30 we can see that Y = cok {α, β} with the corresponding φ and ψ, and thus Im {α, β} = ker <φ, −ψ>.

Now assuming that

  X --{α,β}--> A ⊕ B --<φ,−ψ>--> Y → 0

is exact, we get <φ, −ψ> ◦ {α, β} = 0. Thus the following diagram commutes.

       α
  X ------> A
  β|        |φ
   v        v
  B ------> Y
       ψ

Now suppose there exists a module M with morphisms δ : A → M and γ : B → M such that

       α
  X ------> A
  β|        |φ  \δ
   v        v    v
  B ------> Y     M
       ψ    \γ---^

commutes. That would imply <δ, −γ> ◦ {α, β} = 0. By the universal property of the cokernel, there exists a unique ξ induced by A ⊕ B → Y, giving the following diagram.

  X --{α,β}--> A ⊕ B --<φ,−ψ>--> Y
                    \            |
               <δ,−γ>\           |ξ
                      v          v
                        M <------

It then follows that ξ ◦ <φ, −ψ> = <δ, −γ>, and thus

       α
  X ------> A
  β|        |φ
   v        v
  B ------> Y
       ψ

is a push-out diagram.
Chapter 4
Extensions of Modules
Throughout the field of algebra it is natural to look at subgroups, subrings, and submodules and to ask questions about the associated quotients. In homology we narrow our focus down to just modules. Given two modules A and B over a fixed ring R, our goal is to find all modules E such that B is a submodule of E and E/B ≅ A.
4.1 Extensions
More generally, given R-modules A and B, consider a short exact sequence

  0 → B --ψ--> E --φ--> A → 0

Then by the First Isomorphism Theorem, E/ker φ ≅ A gives E/Im ψ ≅ A, and also B ≅ Im ψ since ψ is one-to-one. Such a sequence is called an extension of B by A.
There is an equivalence relation on such sequences. Define the relation ∼ such that E1 ∼ E2 when

  B ---> E1 ---> A
  α|     |µ      |β
   v     v       v
  B ---> E2 ---> A

is commutative, where α and β are isomorphisms.
To show that ∼ is reflexive, let α, β, and µ be the identity maps; such a diagram is obviously commutative. For transitivity, let E1 ∼ E2 and E2 ∼ E3, so there exist µ1 and µ2 such that the following diagram is commutative.

       ψ1      φ1
  B ---> E1 ---> A
  |      |µ1     |
  v      v       v
  B ---> E2 ---> A
       ψ2      φ2
  |      |µ2     |
  v      v       v
  B ---> E3 ---> A
       ψ3      φ3
Define the homomorphism µ3 : E1 → E3 to be µ3 = µ2 ◦ µ1; since the composition of two homomorphisms is a homomorphism, this composition makes sense. This would give us the following diagram.

       ψ1      φ1
  B ---> E1 ---> A
  |      |µ3     |
  v      v       v
  B ---> E3 ---> A
       ψ3      φ3

Due to the commutativity of the previous diagrams, it follows that this diagram is also commutative. Thus if E1 ∼ E2 and E2 ∼ E3, then E1 ∼ E3. Finally, if E1 ∼ E2, then it is necessary by Lemma 2.19 that µ is also an isomorphism. Thus using α⁻¹, β⁻¹, and µ⁻¹ makes ∼ symmetric and hence an equivalence relation.
Let E(A, B) denote the set of equivalence classes of all such extensions. This will always contain at least one element, namely the class of the direct sum of the modules. Given R-modules A and B, we can create the sequence

  0 → A --iA--> A ⊕ B --πB--> B → 0

where, for a ∈ A and b ∈ B, (a, b) ∈ A ⊕ B. Here iA is the canonical injection map from A to A ⊕ B and πB is the canonical projection map from A ⊕ B to B; iA is clearly one-to-one and πB is clearly onto. Since iA(a) = (a, 0) for each a ∈ A and πB(a, b) = b for each b ∈ B, we get Im iA = ker πB. In addition, using πA as the projection map onto A, we get πA iA = 1A, where 1A is the identity on A. Similarly, πB iB = 1B. Thus we get a splitting sequence. Such an extension is called a split extension.
Example 4.1 E(Z2, Z4)
As Z-modules, we are looking for all sequences of the form

  0 → Z4 --α--> E --β--> Z2 → 0

where, by the First Isomorphism Theorem, E/ker β ≅ Z2. Since ker β = Im α ≅ Z4, we have E/Z4 ≅ Z2. Thus |E|/4 = 2, which implies that |E| = 8.
The only abelian groups of order 8 are Z8, Z4 ⊕ Z2, and Z2 ⊕ Z2 ⊕ Z2.
For Z4 → Z2 ⊕ Z2 ⊕ Z2 there exist no one-to-one maps, since Z2 ⊕ Z2 ⊕ Z2 has no elements of order 4.
As we just saw E(Z2, Z4) contains the split extension
$0 \to \mathbb{Z}_4 \xrightarrow{i} \mathbb{Z}_4 \oplus \mathbb{Z}_2 \xrightarrow{\pi} \mathbb{Z}_2 \to 0.$

As for another possibility, we can use

$0 \to \mathbb{Z}_4 \xrightarrow{\nu} \mathbb{Z}_8 \xrightarrow{\pi} \mathbb{Z}_2 \to 0,$

where ν(j) = 2j. Here ν is clearly one-to-one with
Im ν = {0, 2, 4, 6}
= ker π
For any other possibilities with $E \cong \mathbb{Z}_8$, consider Hom(Z4, Z8). We know that $\mathrm{Hom}(\mathbb{Z}_4, \mathbb{Z}_8) \cong \mathbb{Z}_4$, so Hom(Z4, Z8) has only 4 elements. These are determined by where each homomorphism sends 1:

(i) $1 \mapsto 2$
(ii) $1 \mapsto 4$
(iii) $1 \mapsto 6$
(iv) $1 \mapsto 0$
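This count can be checked mechanically: a homomorphism $\mathbb{Z}_n \to \mathbb{Z}_m$ is determined by the image x of 1, which must satisfy nx ≡ 0 (mod m). A short Python sketch (the helper names are ours, not from the thesis) enumerates the maps and flags the one-to-one ones:

```python
# Enumerate group homomorphisms Z_n -> Z_m. Each is determined by the image
# x = f(1), and well-definedness forces n*x = 0 in Z_m.
def hom_images(n, m):
    return [x for x in range(m) if (n * x) % m == 0]

def is_injective(n, m, x):
    # f(k) = k*x mod m is one-to-one iff no nonzero k in Z_n maps to 0
    return all((k * x) % m != 0 for k in range(1, n))

images = hom_images(4, 8)
print(images)                                        # [0, 2, 4, 6]
print([x for x in images if is_injective(4, 8, x)])  # [2, 6]
```

Only the maps sending 1 to 2 or 6 are one-to-one, matching the analysis of (i)–(iv).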
(iv) is clearly not one-to-one, and (ii) is not one-to-one since it gives $2 \mapsto 0$. Now we need to match the remaining maps with corresponding elements of Hom(Z8, Z2) to create a short exact sequence. The elements of Hom(Z8, Z2) are:
(a) k → k
(b) k → 0
(b) is clearly not onto, so we only need to consider (a). Now $a \mapsto 2a$ and $a \mapsto 6a$ have the same image since 6 = −2, giving the following diagram.

$\begin{array}{ccccc}
0 \to \mathbb{Z}_4 & \xrightarrow{\nu} & \mathbb{Z}_8 & \xrightarrow{\pi} & \mathbb{Z}_2 \\
\| & & \downarrow{\alpha} & & \| \\
0 \to \mathbb{Z}_4 & \xrightarrow{\gamma} & \mathbb{Z}_8 & \xrightarrow{\pi} & \mathbb{Z}_2
\end{array}$

Here γ is defined by $a \mapsto -2a$ and α by $a \mapsto -a$. By Theorem 2.19, α is an isomorphism, giving an equivalent sequence. Therefore, up to isomorphism, there is only one extension with middle term Z8.

For any other possibilities with $E \cong \mathbb{Z}_4 \oplus \mathbb{Z}_2$, consider

$\mathrm{Hom}(\mathbb{Z}_4, \mathbb{Z}_4 \oplus \mathbb{Z}_2) \cong \mathrm{Hom}(\mathbb{Z}_4, \mathbb{Z}_4) \oplus \mathrm{Hom}(\mathbb{Z}_4, \mathbb{Z}_2) \cong \mathbb{Z}_4 \oplus \mathbb{Z}_2.$
(i) $1 \mapsto (1, 0)$
(ii) $1 \mapsto (1, 1)$
(iii) $1 \mapsto (2, 0)$
(iv) $1 \mapsto (2, 1)$
(v) $1 \mapsto (3, 0)$
(vi) $1 \mapsto (3, 1)$
(vii) $1 \mapsto (0, 0)$
(viii) $1 \mapsto (0, 1)$
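The eight maps just listed can be checked the same way; here is a Python sketch (names ours, not from the thesis) that enumerates Hom(Z4, Z4 ⊕ Z2) and picks out the one-to-one ones — a map is one-to-one exactly when the image of 1 has order 4:

```python
from itertools import product

# A homomorphism Z_4 -> Z_4 (+) Z_2 is determined by the image (a, b) of 1;
# every pair works, since 4*(a, b) = (0, 0) automatically.
def homs():
    return list(product(range(4), range(2)))

def order(a, b):
    # order of (a, b) in Z_4 (+) Z_2
    n = 1
    while (n * a) % 4 != 0 or (n * b) % 2 != 0:
        n += 1
    return n

print(len(homs()))                                     # 8
print(sorted(ab for ab in homs() if order(*ab) == 4))  # [(1, 0), (1, 1), (3, 0), (3, 1)]
```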
(vii) and (viii) are clearly not one-to-one, and (iii) and (iv) are not one-to-one since $2 \mapsto (0, 0)$. Thus we need to match (i), (ii), (v), and (vi) with homomorphisms from $\mathrm{Hom}(\mathbb{Z}_4 \oplus \mathbb{Z}_2, \mathbb{Z}_2) \cong \mathbb{Z}_2 \oplus \mathbb{Z}_2$. These homomorphisms are:
(a) $(1, 0) \mapsto 1, \; (0, 1) \mapsto 1$
(b) $(1, 0) \mapsto 1, \; (0, 1) \mapsto 0$
(c) $(1, 0) \mapsto 0, \; (0, 1) \mapsto 1$
(d) $(1, 0) \mapsto 0, \; (0, 1) \mapsto 0$
Here (d) is clearly not onto, so we can ignore it. For (b) the kernel is $\langle 2\rangle \oplus \mathbb{Z}_2$, which is {(0, 0), (2, 0), (0, 1), (2, 1)}. This has no match from Hom(Z4, Z4 ⊕ Z2), so we can disregard it. For (c) the kernel is $\mathbb{Z}_4 \oplus 0$. This matches homomorphisms (i) and (v) from Hom(Z4, Z4 ⊕ Z2), since $1 \mapsto (3, 0) = (-1, 0)$. With these we can create the following commutative diagram.
$\begin{array}{ccccc}
\mathbb{Z}_4 & \xrightarrow{i} & \mathbb{Z}_4 \oplus \mathbb{Z}_2 & \xrightarrow{\pi} & \mathbb{Z}_2 \\
\downarrow{\gamma} & & \| & & \| \\
\mathbb{Z}_4 & \xrightarrow{\hat{i}} & \mathbb{Z}_4 \oplus \mathbb{Z}_2 & \xrightarrow{\pi} & \mathbb{Z}_2
\end{array}$

Here $\hat{i}$ is defined by $\hat{i}(a) = (-a, 0)$ and $\gamma(a) = -a$. By Theorem 2.19, γ is an isomorphism. Now for homomorphism (a) we get $(a, b) \mapsto a + b$, so the kernel is the cyclic group of order 4 generated by (1, 1), namely {(0, 0), (1, 1), (2, 0), (3, 1)}. We can then match this with homomorphisms (ii) and (vi). With these sequences the following commutative diagram can be created.
$\begin{array}{ccccc}
\mathbb{Z}_4 & \xrightarrow{\alpha} & \mathbb{Z}_4 \oplus \mathbb{Z}_2 & \xrightarrow{\beta} & \mathbb{Z}_2 \\
\downarrow{\gamma} & & \| & & \| \\
\mathbb{Z}_4 & \xrightarrow{\hat\alpha} & \mathbb{Z}_4 \oplus \mathbb{Z}_2 & \xrightarrow{\beta} & \mathbb{Z}_2
\end{array}$

Here γ is defined by $a \mapsto -a$, with $\alpha(a) = (a, a)$, $\hat\alpha(a) = (-a, a)$, and β defined by $(a, b) \mapsto a + b$. Again by Theorem 2.19, γ is an isomorphism.
The question remains, is there an isomorphism between
$\mathbb{Z}_4 \xrightarrow{i} \mathbb{Z}_4 \oplus \mathbb{Z}_2 \xrightarrow{\pi} \mathbb{Z}_2$

and

$\mathbb{Z}_4 \xrightarrow{\alpha} \mathbb{Z}_4 \oplus \mathbb{Z}_2 \xrightarrow{\beta} \mathbb{Z}_2,$
with the previously defined α and β. We can then define a homomorphism
ξ : Z4⊕Z2 → Z4⊕Z2 by ξ(a, b) = (a, a+b), creating the following commutative diagram.
$\begin{array}{ccccc}
\mathbb{Z}_4 & \xrightarrow{i} & \mathbb{Z}_4 \oplus \mathbb{Z}_2 & \xrightarrow{\pi} & \mathbb{Z}_2 \\
\| & & \downarrow{\xi} & & \| \\
\mathbb{Z}_4 & \xrightarrow{\alpha} & \mathbb{Z}_4 \oplus \mathbb{Z}_2 & \xrightarrow{\beta} & \mathbb{Z}_2
\end{array}$

Once again, by Theorem 2.19, ξ is an isomorphism. Thus, up to isomorphism, the only extension with middle term Z4 ⊕ Z2 is the split extension.
Resulting in only two elements for E(Z2, Z4). Our goal from here is to make E(−, −) a bifunctor. To do this the following lemmas will come in useful.
Lemma 4.2 If the square

$\begin{array}{ccc}
Y & \xrightarrow{\alpha} & A \\
\downarrow{\beta} & & \downarrow{\varphi} \\
B & \xrightarrow{\psi} & X
\end{array}$
is a pull-back diagram, then
(i) β induces an isomorphism $\ker\alpha \xrightarrow{\sim} \ker\psi$;
(ii) if ψ is an epimorphism, then so is α.
Proof: Part (i): Let L be the kernel of α and K be the kernel of ψ, giving the following diagram.

$\begin{array}{ccccc}
0 \to L & \xrightarrow{\nu} & Y & \xrightarrow{\alpha} & A \\
\downarrow{\delta} & & \downarrow{\beta} & & \downarrow{\varphi} \\
0 \to K & \xrightarrow{\lambda} & B & \xrightarrow{\psi} & X
\end{array}$
From the commutativity of the pull-back, we get
ψβν = φαν
= φ0
= 0
Thus, since ψβν = 0, the map βν factors through the kernel of ψ; that is, there exists a unique δ such that λδ = βν. We saw in a previous chapter that, up to isomorphism, the pull-back of modules is Y = {(a, b) ∈ A ⊕ B | φ(a) = ψ(b)} with α(a, b) = a and β(a, b) = b. Thus (a, b) ∈ ker α if and only if a = 0. Moreover, if (0, b) ∈ Y, then φ(0) = ψ(b), which implies that ψ(b) = 0. Resulting in ker α = {(0, b) ∈ A ⊕ B | b ∈ ker ψ}. This makes δ equal to β restricted to ker α, giving ker α ≅ ker ψ. Note: The induced isomorphism on the kernels can be replaced by equality as follows.
$\begin{array}{ccccc}
0 \to K & \xrightarrow{\nu\delta^{-1}} & Y & \xrightarrow{\alpha} & A \\
\| & & \downarrow{\beta} & & \downarrow{\varphi} \\
0 \to K & \xrightarrow{\lambda} & B & \xrightarrow{\psi} & X
\end{array}$
Meaning, we can just assume that the kernels are equal when pull-backs are taken. Also, since $\nu\delta^{-1}$ is the kernel of α, the sequence

$0 \to K \to Y \to A$

is exact.

Part (ii): Consider the sequence $0 \to Y \xrightarrow{\{\alpha,\beta\}} A \oplus B \xrightarrow{\langle\varphi,-\psi\rangle} X$, which is exact by Lemma 3.31. Suppose a ∈ A and consider the following diagram:
$\begin{array}{ccc}
Y & \xrightarrow{\alpha} & A \\
\downarrow{\beta} & & \downarrow{\varphi} \\
B & \xrightarrow{\psi} & X
\end{array}$
ψ is epimorphic, which implies there exists a b ∈ B such that φ(a) = ψ(b).
Then φ(a) − ψ(b) = 0, and thus (a, b) ∈ ker⟨φ, −ψ⟩. From exactness of the sequence, ker⟨φ, −ψ⟩ = Im{α, β}, so there exists y ∈ Y with a = α(y), and hence α is epimorphic.
Note: In this case one obtains, from the pull-back, a short exact sequence

$0 \to K \to Y \to A \to 0.$
Now we are given two exact sequences with connecting maps.

$\begin{array}{ccccc}
0 \to A_1 & \xrightarrow{\nu_1} & B_1 & \xrightarrow{\delta_1} & C_1 \to 0 \\
\downarrow{\alpha} & & \downarrow{\beta} & & \downarrow{\gamma} \\
0 \to A_2 & \xrightarrow{\nu_2} & B_2 & \xrightarrow{\delta_2} & C_2 \to 0
\end{array}$

Let E be the pull-back of δ2 and γ, giving maps ψ and δ′ in the following diagram.

$\begin{array}{ccccc}
0 \to A_1 & \xrightarrow{\nu_1} & B_1 & \xrightarrow{\delta_1} & C_1 \to 0 \\
\downarrow{\alpha'} & (3) & \downarrow{\lambda} & (4) & \| \\
0 \to A_2 & \xrightarrow{\nu'} & E & \xrightarrow{\delta'} & C_1 \to 0 \\
\| & & \downarrow{\psi} & (Pb) & \downarrow{\gamma} \\
0 \to A_2 & \xrightarrow{\nu_2} & B_2 & \xrightarrow{\delta_2} & C_2 \to 0
\end{array}$

By the commutativity of the right-hand square of the first diagram, the universal property of the pull-back gives an induced map λ : B1 → E such that δ′λ = δ1 and ψλ = β. Thus square (4) commutes, and δ′λν1 = δ1ν1 = 0. This results in a unique morphism α′ into A2, the kernel of δ′, with λν1 = ν′α′, making square (3) commute. Now ν2α′ = ψν′α′ = ψλν1 = βν1 = ν2α. Since ν2 is a monomorphism, we can cancel, so α′ = α. By the ending note in Lemma 4.2, the middle sequence is then exact.
Now let Ē be the push-out of ν1 and α, creating the following diagram.

$\begin{array}{ccccc}
0 \to A_1 & \xrightarrow{\nu_1} & B_1 & \xrightarrow{\delta_1} & C_1 \to 0 \\
\downarrow{\alpha} & & \downarrow{\lambda} & & \downarrow{\zeta} \\
0 \to A_2 & \xrightarrow{\varphi} & \bar{E} & \xrightarrow{\rho} & C_1 \to 0 \\
\| & (5) & \downarrow{\xi} & (6) & \| \\
0 \to A_2 & \xrightarrow{\nu'} & E & \xrightarrow{\delta'} & C_1 \to 0 \\
\| & & \downarrow{\psi} & & \downarrow{\gamma} \\
0 \to A_2 & \xrightarrow{\nu_2} & B_2 & \xrightarrow{\delta_2} & C_2 \to 0
\end{array}$
Since ν′ = ξφ by the universal property of the push-out, square (5) commutes. The universal property of the cokernel then gives ρ = δ′ξ, and thus square (6) commutes. By the dual of the ending note in Lemma 4.2, the sequence $0 \to A_2 \to \bar{E} \to C_1 \to 0$ is also exact. Thus by Lemma 2.19, ξ is an isomorphism, giving the following commutative diagram with exact rows.
$\begin{array}{ccccc}
0 \to A_1 & \xrightarrow{\nu_1} & B_1 & \xrightarrow{\delta_1} & C_1 \to 0 \\
\downarrow{\alpha} & (Po) & \downarrow & & \| \\
0 \to A_2 & \longrightarrow & E & \longrightarrow & C_1 \to 0 \\
\| & & \downarrow & (Pb) & \downarrow{\gamma} \\
0 \to A_2 & \xrightarrow{\nu_2} & B_2 & \xrightarrow{\delta_2} & C_2 \to 0
\end{array}$

We will use this idea to show the following lemma.
Lemma 4.3 Let

$\begin{array}{ccccc}
B & \xrightarrow{\mu} & E & \xrightarrow{\nu} & A \\
\| & & \downarrow{\xi} & \Sigma & \downarrow{\alpha} \\
B & \xrightarrow{\mu'} & E' & \xrightarrow{\nu'} & A'
\end{array}$

be a commutative diagram with exact rows. Then the square Σ is both a pull-back and a push-out diagram.
Proof: Let Ê be the pull-back of ν′ and α. Then we get the following commutative diagram.

$\begin{array}{ccccc}
B & \longrightarrow & E & \longrightarrow & A \\
\| & & \downarrow & & \| \\
B & \longrightarrow & \hat{E} & \longrightarrow & A \\
\| & & \downarrow & (Pb) & \downarrow{\alpha} \\
B & \longrightarrow & E' & \xrightarrow{\nu'} & A'
\end{array}$
By Lemma 2.19 this forces $E \to \hat{E}$ to be an isomorphism, thus Σ was originally the pull-back. Now by Theorem 3.31 there is an exact sequence

$0 \to E \to A \oplus E' \to A'.$

However, ν′ is onto, and thus $A \oplus E' \to A'$ is also onto, so by Theorem 3.32 we get that Σ is also a push-out diagram.
Lemma 4.4 Let

$\begin{array}{ccccc}
B & \xrightarrow{\mu} & E & \xrightarrow{\nu} & A \\
\downarrow{\beta} & \Sigma & \downarrow{\xi} & & \| \\
B' & \xrightarrow{\mu'} & E' & \xrightarrow{\nu'} & A
\end{array}$

be a commutative diagram with exact rows. Then the square Σ is both a push-out and a pull-back diagram.
Proof: Let Ê be the push-out of β and µ. This results in the following commutative diagram.

$\begin{array}{ccccc}
B & \xrightarrow{\mu} & E & \longrightarrow & A \\
\downarrow{\beta} & (Po) & \downarrow & & \| \\
B' & \longrightarrow & \hat{E} & \longrightarrow & A \\
\| & & \downarrow & & \| \\
B' & \longrightarrow & E' & \longrightarrow & A
\end{array}$
By Theorem 2.19 we get that $\hat{E} \to E'$ is an isomorphism, and thus Σ was originally the push-out. Now by Theorem 3.32 we get an exact sequence

$B \to B' \oplus E \to E' \to 0.$

However, µ is one-to-one, implying that $B \to B' \oplus E$ is also one-to-one. Thus by Theorem 3.31, Σ is also a pull-back diagram.
Back to E(−, −). Let α : A′ → A be a homomorphism, let $B \xrightarrow{\kappa} E \xrightarrow{\nu} A$ be a representative element of E(A, B), and consider the diagram:

$\begin{array}{ccccc}
 & & E^{\alpha} & \xrightarrow{\nu'} & A' \\
 & & \downarrow{\xi} & & \downarrow{\alpha} \\
B & \xrightarrow{\kappa} & E & \xrightarrow{\nu} & A
\end{array}$

where $(E^{\alpha}; \nu', \xi)$ is the pull-back of (α, ν). By Lemma 4.2 we obtain the extension $B \to E^{\alpha} \xrightarrow{\nu'} A'$, allowing us to define

$\alpha^* : E(A, B) \to E(A', B)$

by assigning the class of $B \to E^{\alpha} \to A'$ to the class of $B \to E \to A$. This definition is independent of our choice of representative, since pull-backs of equivalent short exact sequences result in equivalent pull-back sequences.
We claim that this definition of α∗ = E(α, B) makes E(−, B) into a contravariant functor from modules to sets. First we check that the identity is preserved. So for α = 1A : A → A we have the following diagram:

$\begin{array}{ccccc}
 & & E^{1_A} & \xrightarrow{\nu} & A \\
 & & \downarrow & & \| \\
B & \xrightarrow{\kappa} & E & \xrightarrow{\nu} & A
\end{array}$
This gives back the original sequence. Now we need to check that composition is preserved. So let α′ : A″ → A′ and α : A′ → A. Our goal is to show that E(α ◦ α′, B) = E(α′, B) ◦ E(α, B), or that in the following diagram, where each square is a pull-back,

$\begin{array}{ccc}
(E^{\alpha})^{\alpha'} & \xrightarrow{\nu''} & A'' \\
\downarrow{\beta'} & & \downarrow{\alpha'} \\
E^{\alpha} & \xrightarrow{\nu'} & A' \\
\downarrow{\beta} & & \downarrow{\alpha} \\
E & \xrightarrow{\nu} & A
\end{array}$

the composite square

$\begin{array}{ccc}
(E^{\alpha})^{\alpha'} & \xrightarrow{\nu''} & A'' \\
\downarrow & & \downarrow{\alpha\circ\alpha'} \\
E & \xrightarrow{\nu} & A
\end{array}$

is a pull-back of (ν, α ◦ α′). To do this, construct the following commutative diagram with exact rows.
$\begin{array}{ccccc}
B & \longrightarrow & (E^{\alpha})^{\alpha'} & \xrightarrow{\nu''} & A'' \\
\| & & \downarrow{\beta'} & & \downarrow{\alpha'} \\
B & \longrightarrow & E^{\alpha} & \xrightarrow{\nu'} & A' \\
\| & & \downarrow{\beta} & & \downarrow{\alpha} \\
B & \longrightarrow & E & \xrightarrow{\nu} & A
\end{array}$

This gives the following diagram with exact rows.

$\begin{array}{ccccc}
B & \longrightarrow & (E^{\alpha})^{\alpha'} & \longrightarrow & A'' \\
\| & & \downarrow & \Sigma & \downarrow \\
B & \longrightarrow & E & \longrightarrow & A
\end{array}$
Thus by Lemma 4.3, Σ is a pull-back, resulting in E(−, B) being a contravariant functor from the category of modules to the category of sets.
Let β : B → B′ be a homomorphism, let $B \xrightarrow{\kappa} E \xrightarrow{\nu} A$ be a representative of E(A, B), and consider the diagram:

$\begin{array}{ccccc}
B & \xrightarrow{\kappa} & E & \xrightarrow{\nu} & A \\
\downarrow{\beta} & & \downarrow{\xi} & & \\
B' & \xrightarrow{\kappa'} & E_{\beta} & &
\end{array}$

where $(E_{\beta}; \kappa', \xi)$ is the push-out of (β, κ). The dual of Lemma 4.2 shows that we obtain an extension $B' \to E_{\beta} \to A$. We can then define

$\beta_* : E(A, B) \to E(A, B')$

by assigning the class of $B' \to E_{\beta} \to A$ to the class of $B \to E \to A$. By reasoning dual to the above arguments, β∗ = E(A, β) makes E(A, −) into a covariant functor from modules to sets.
Now consider $\beta_*\alpha^* : E(A, B) \to E(A', B')$ and $\alpha^*\beta_* : E(A, B) \to E(A', B')$. Our goal from here is to show that $\alpha^*\beta_* = \beta_*\alpha^*$. Consider a representative sequence

$B \to E \to A$

of E(A, B) and module homomorphisms α : A′ → A and β : B → B′. Taking
the pull-back along α, we get

$\begin{array}{ccccc}
B & \longrightarrow & E' & \longrightarrow & A' \\
\| & & \downarrow & & \downarrow{\alpha} \\
B & \longrightarrow & E & \longrightarrow & A
\end{array}$

Now taking the push-out along β, we get the following diagram.

$\begin{array}{ccccc}
B & \longrightarrow & E' & \longrightarrow & A' \\
\| & & \downarrow & & \downarrow{\alpha} \\
B & \longrightarrow & E & \longrightarrow & A \\
\downarrow{\beta} & & \downarrow & & \| \\
B' & \longrightarrow & E'' & \longrightarrow & A
\end{array}$

Finally, taking the pull-back along α again, we get a 3D diagram with the additional row $B' \to \hat{E} \to A'$, where Ê is the pull-back of $E'' \to A$ along α. We can see that the path

$E' \to E \to E'' \to A$

will be equivalent to

$E' \to A' \to A' \xrightarrow{\alpha} A.$
Since the squares

(1) $E' \to A'$ over $E \to A$, (2) $A' \xrightarrow{\alpha} A$ over $A' \xrightarrow{\alpha} A$, and (3) $E \to A$ over $E'' \to A$

commute, the paths $E' \to E \to E'' \to A$ and $E' \to E \to A \to A$ are equivalent from square (3); the paths $E' \to E \to A \to A$ and $E' \to A' \to A \to A$ are equivalent from square (2); and finally, from square (1), the paths $E' \to A' \to A \to A$ and $E' \to A' \to A' \to A$ are equivalent. This gives an induced map $E' \to \hat{E}$ making all faces of the following cube commute.
The cube has top face $E' \to A'$ over $E \to A$ and bottom face $\hat{E} \to A'$ over $E'' \to A$, with E′ sitting above Ê and E above E″. Now consider the following exact sequences with connecting maps:

$B \to E' \to A'$

$B' \to \hat{E} \to A'$

There exists an induced map λ : B → B′, making the corresponding squares commute. This gives the following 3D diagram.
The 3D diagram has rows $B \to E' \to A'$, $B \to E \to A$, $B' \to \hat{E} \to A'$, and $B' \xrightarrow{\nu} E'' \to A$. Every face commutes, except possibly the square

$\begin{array}{ccc}
B & = & B \\
\downarrow{\lambda} & & \downarrow{\beta} \\
B' & = & B'
\end{array}$

However, from the known commutative squares

(1) $B' \to \hat{E}$ over $B' \to E''$, (2) $B \to E'$ over $B' \to \hat{E}$, (3) $E' \to E$ over $\hat{E} \to E''$, and (4) $B \to E$ over $B' \xrightarrow{\nu} E''$,

we get that the path
$B \xrightarrow{\lambda} B' \to B' \xrightarrow{\nu} E''$

is equivalent, by (1), to

$B \xrightarrow{\lambda} B' \to \hat{E} \to E'',$

which, by (2), is equivalent to

$B \to E' \to \hat{E} \to E'',$

which, by (3), is equivalent to

$B \to E' \to E \to E'',$

which, by (2), is equivalent to

$B \to B \to E \to E'',$

and, by (4), this is equivalent to

$B \to B \xrightarrow{\beta} B' \xrightarrow{\nu} E''.$

Resulting in the path $B \xrightarrow{\lambda} B' \to B'$ being equivalent to the path $B \to B \xrightarrow{\beta} B'$, which yields νβ = νλ. Since ν is monic, we get that β = λ. Thus the square above commutes. By Lemma 4.4, we can see that
$\begin{array}{ccccc}
B & \longrightarrow & E' & \longrightarrow & A' \\
\downarrow & & \downarrow & & \| \\
B' & \longrightarrow & \hat{E} & \longrightarrow & A'
\end{array}$

is a push-out diagram. Thus $(E^{\alpha})_{\beta} = (E_{\beta})^{\alpha} = \hat{E}$, and it follows that $\beta_*\alpha^* = \alpha^*\beta_*$, both being represented by

$B' \to \hat{E} \to A'.$
Resulting in the following theorem.
Theorem 4.5 E(-,-) is a bifunctor from the category of R-modules to the category of sets. It is contravariant in the first variable and covariant in the second variable.
4.2 The Functor Ext
In this section we shall develop a new bifunctor, ExtR(−, −), from the category of R-modules into the category of abelian groups and compare it with E(−, −).
4.2.1 Projective Modules
A short exact sequence $0 \to M \xrightarrow{\mu} P \xrightarrow{\varepsilon} A \to 0$ of R-modules with P projective is called a projective presentation of A. By Theorem 2.17 such a presentation induces, for an R-module B, an exact sequence

$0 \to \mathrm{Hom}_R(A, B) \xrightarrow{\varepsilon^*} \mathrm{Hom}_R(P, B) \xrightarrow{\mu^*} \mathrm{Hom}_R(M, B)$

To the modules A and B, and to the chosen projective presentation of A, we can associate the abelian group

$\mathrm{Ext}^{\varepsilon}_R(A, B) = \operatorname{cok}(\mu^* : \mathrm{Hom}_R(P, B) \to \mathrm{Hom}_R(M, B))$
Here Im µ∗ consists of the mappings φ : M → B that factor through P. More specifically, for γ ∈ Hom(P, B), µ∗(γ) = γµ, giving the following commutative diagram.

$\begin{array}{ccc}
P & \xrightarrow{\gamma} & B \\
{\scriptstyle\mu}\uparrow & \nearrow{\scriptstyle\varphi} & \\
M & &
\end{array}$

An element of $\mathrm{Ext}^{\varepsilon}_R(A, B)$ may be represented by a homomorphism φ, denoted [φ]. Then [φ1] = [φ2] if and only if φ1 − φ2 factors through P via µ, meaning that φ1 − φ2 = γµ for some γ.
Given a homomorphism β : B → B′, create the commutative diagram

$\begin{array}{ccccc}
0 \to \mathrm{Hom}_R(A, B) & \xrightarrow{\varepsilon^*} & \mathrm{Hom}_R(P, B) & \xrightarrow{\mu^*} & \mathrm{Hom}_R(M, B) \\
\downarrow{\beta_*} & & \downarrow{\beta_*} & & \downarrow{\beta_*} \\
0 \to \mathrm{Hom}_R(A, B') & \xrightarrow{\varepsilon^*} & \mathrm{Hom}_R(P, B') & \xrightarrow{\mu^*} & \mathrm{Hom}_R(M, B')
\end{array}$

Using the map $\beta_* : \mathrm{Hom}_R(A, B) \to \mathrm{Hom}_R(A, B')$ gives the induced map on the cokernels $\beta_* : \mathrm{Ext}^{\varepsilon}_R(A, B) \to \mathrm{Ext}^{\varepsilon}_R(A, B')$. Given the exact sequence

$0 \to \mathrm{Hom}(A, -) \xrightarrow{\varepsilon^*} \mathrm{Hom}(P, -) \xrightarrow{\mu^*} \mathrm{Hom}(M, -),$

taking cok µ∗ gives us the sequence

$0 \to \mathrm{Hom}(A, -) \xrightarrow{\varepsilon^*} \mathrm{Hom}(P, -) \xrightarrow{\mu^*} \mathrm{Hom}(M, -) \to \mathrm{Ext}^{\varepsilon}_R(A, -) \to 0,$

showing that $\mathrm{Ext}^{\varepsilon}_R(A, -)$ is a functor. Now, given two projective presentations $D \xrightarrow{\mu} P \xrightarrow{\varepsilon} A$ and $D' \xrightarrow{\mu'} P' \xrightarrow{\varepsilon'} A'$ of A and A′, let α : A′ → A be a homomorphism. Since P′ is projective, there exists a π and then a σ induced on the kernels such that
$\begin{array}{ccccc}
D' & \longrightarrow & P' & \longrightarrow & A' \\
\downarrow{\sigma} & & \downarrow{\pi} & & \downarrow{\alpha} \\
D & \longrightarrow & P & \longrightarrow & A
\end{array}$

commutes.
We thus get a map

$\hat\pi : \mathrm{Ext}^{\varepsilon}_R(A, B) \to \mathrm{Ext}^{\varepsilon'}_R(A', B)$

where the diagram

$\begin{array}{ccccc}
0 \to \mathrm{Hom}(A, -) \to \mathrm{Hom}(P, -) \to \mathrm{Hom}(D, -) \to \mathrm{Ext}^{\varepsilon}_R(A, -) \\
\quad\downarrow{\alpha^*} \qquad\qquad \downarrow{\pi^*} \qquad\qquad \downarrow{\sigma^*} \qquad\qquad \downarrow{\hat\pi} \\
0 \to \mathrm{Hom}(A', -) \to \mathrm{Hom}(P', -) \to \mathrm{Hom}(D', -) \to \mathrm{Ext}^{\varepsilon'}_R(A', -)
\end{array}$

commutes. Here, for $[\varphi] \in \mathrm{Ext}^{\varepsilon}_R(A, -)$, $\hat\pi([\varphi]) = [\varphi\sigma]$. This results in the following commutative diagram.
$\begin{array}{ccc}
\mathrm{Ext}^{\varepsilon}_R(A, B) & \xrightarrow{\hat\pi} & \mathrm{Ext}^{\varepsilon'}_R(A', B) \\
\downarrow{\hat\beta} & & \downarrow{\hat\beta} \\
\mathrm{Ext}^{\varepsilon}_R(A, B') & \xrightarrow{\hat\pi} & \mathrm{Ext}^{\varepsilon'}_R(A', B')
\end{array}$

This diagram implies that we have a natural transformation between the functors $\mathrm{Ext}^{\varepsilon}_R(A, -)$ and $\mathrm{Ext}^{\varepsilon'}_R(A', -)$, and moreover the transformation does not depend on the chosen π, only on α.
Theorem 4.6 $\hat\pi$ does not depend on the chosen $\pi : P' \to P$ but only on $\alpha : A' \to A$.
Proof: Let $\pi_i : P' \to P$, i = 1, 2, be two homomorphisms lifting α, so that the following diagram commutes.

$\begin{array}{ccccc}
D' & \xrightarrow{\mu'} & P' & \xrightarrow{\varepsilon'} & A' \\
\downarrow{\sigma_i} & & \downarrow{\pi_i} & & \downarrow{\alpha} \\
D & \xrightarrow{\mu} & P & \xrightarrow{\varepsilon} & A
\end{array}$
Consider π1 − π2. Since π1 and π2 induce the same map on A′, we have ε(π1 − π2) = 0, so π1 − π2 factors through the kernel of ε, giving a τ : P′ → D with the following diagram.

$\begin{array}{ccc}
D' & \xrightarrow{\mu'} & P' \\
\downarrow{\sigma_1-\sigma_2} & \swarrow{\tau} & \downarrow{\pi_1-\pi_2} \\
D & \xrightarrow{\mu} & P
\end{array}$

From the outer square, we get $\mu(\sigma_1 - \sigma_2) = (\pi_1 - \pi_2)\mu'$, and from the lifting, we get $\mu\tau = \pi_1 - \pi_2$. Thus $\mu(\sigma_1 - \sigma_2) = \mu\tau\mu'$, implying that $\sigma_1 - \sigma_2 = \tau\mu'$ and the above square commutes. This results in $\sigma_1 = \sigma_2 + \tau\mu'$. Thus if φ : D → B
is a representative of $\mathrm{Ext}^{\varepsilon}_R(A, B)$, we have

$\hat\pi_1[\varphi] = [\varphi\sigma_1] = [\varphi(\sigma_2 + \tau\mu')] = [\varphi\sigma_2 + \varphi\tau\mu']$

Since $\mathrm{Ext}^{\varepsilon}_R$ is the cokernel of $(\mu')^*$, the term $\varphi\tau\mu'$ maps to [0], resulting in

$\hat\pi_1[\varphi] = [\varphi\sigma_2] = \hat\pi_2[\varphi]$
Let (α; P′, P) denote the natural transformation π̂. If we have three projective presentations, $D'' \to P'' \to A''$, $D' \to P' \to A'$, and $D \to P \to A$, with two homomorphisms α′ : A″ → A′ and α : A′ → A, we thus have two liftings π′ : P″ → P′ and π : P′ → P for α′ and α. Then π ◦ π′ : P″ → P lifts α ◦ α′, and it follows that

$(\alpha'; P'', P') \circ (\alpha; P', P) = (\alpha \circ \alpha'; P'', P)$

as well as

$(1_A; P, P) = 1.$
Implying the following corollary.
Corollary 4.7 Let $D \to P \xrightarrow{\varepsilon} A$ and $D' \to P' \xrightarrow{\varepsilon'} A$ be two projective presentations of A. Then

$(1_A; P', P) : \mathrm{Ext}^{\varepsilon}_R(A, -) \to \mathrm{Ext}^{\varepsilon'}_R(A, -)$

is a natural equivalence.
Proof: Let π : P′ → P and π′ : P → P′ both lift 1A : A → A. From the above equations, we can see that $(1_A; P, P') \circ (1_A; P', P) = (1_A; P, P) = 1$ and $(1_A; P', P) \circ (1_A; P, P') = (1_A; P', P') = 1$. Thus $(1_A; P', P)$ is a natural equivalence.
By this natural equivalence we are allowed to drop ε and simply write
ExtR(A, B).
Now given α : A′ → A we can define an α̂ as follows. Choosing two projective presentations $D' \to P' \to A'$ and $D \to P \to A$, we let $\hat\alpha = \hat\pi = (\alpha; P', P) : \mathrm{Ext}_R(A, B) \to \mathrm{Ext}_R(A', B)$. By this definition of α̂ and Corollary 4.7, $\mathrm{Ext}_R(-, B)$ is a contravariant functor, implying the following theorem.
Theorem 4.8 ExtR(−, −) is a bifunctor from the category of R-modules into the category of abelian groups. It is contravariant in the first variable and covariant in the second.
Since the category of abelian groups is a subcategory of Set we can
regard ExtR(−, −) as a Set-valued bifunctor.
Theorem 4.9 There is a natural equivalence of Set-valued bifunctors $\eta : E(-, -) \to \mathrm{Ext}_R(-, -)$.
Proof: Given an element in E(A, B), represented by the extension $B \xrightarrow{\kappa} E \xrightarrow{\nu} A$, we form the following commutative diagram.

$\begin{array}{ccccc}
D & \xrightarrow{\mu} & P & \xrightarrow{\varepsilon} & A \\
\downarrow{\psi} & & \downarrow{\varphi} & & \| \\
B & \xrightarrow{\kappa} & E & \xrightarrow{\nu} & A
\end{array}$
The homomorphism ψ : D → B defines an element $[\psi] \in \mathrm{Ext}_R(A, B)$. Let $\varphi_i : P \to E$, i = 1, 2, be two maps inducing $\psi_i : D \to B$. Then φ1 − φ2 factors through κ via some τ : P → B, i.e., φ1 − φ2 = κτ. It follows that ψ1 − ψ2 = τµ, giving [ψ1] = [ψ2 + τµ] = [ψ2]. Thus the class obtained is independent of the φ chosen.
Given two representatives of the same element in E(A, B), we can create the following commutative diagram.
$\begin{array}{ccccc}
B & \longrightarrow & E & \longrightarrow & A \\
\downarrow{\beta} & & \downarrow & & \downarrow{\alpha} \\
B & \longrightarrow & E' & \longrightarrow & A
\end{array}$

where α and β are both isomorphisms, implying that the diagrams induced from the projective presentation of A are isomorphic. It follows that two representatives of the same element in E(A, B) induce the same element in $\mathrm{Ext}^{\varepsilon}_R(A, B)$. Thus we have a well-defined map $\eta : E(A, B) \to \mathrm{Ext}^{\varepsilon}_R(A, B)$.
Now, given an element in $\mathrm{Ext}_R(A, B)$, we represent it by a homomorphism ψ : D → B. Taking the push-out of (ψ, µ) we obtain the following commutative diagram.

$\begin{array}{ccccc}
D & \xrightarrow{\mu} & P & \xrightarrow{\varepsilon} & A \\
\downarrow{\psi} & & \downarrow{\varphi} & & \| \\
B & \xrightarrow{\kappa} & E & \xrightarrow{\nu} & A
\end{array}$

By the dual of Lemma 4.2, the bottom row is an extension. Given another representative ψ′ : D → B of the form ψ′ = ψ + τµ, where τ : P → B,
we are given the following diagram

$\begin{array}{ccccc}
D & \xrightarrow{\mu} & P & \xrightarrow{\varepsilon} & A \\
\downarrow{\psi'} & & \downarrow{\varphi'} & & \| \\
B & \xrightarrow{\kappa} & E & \xrightarrow{\nu} & A
\end{array}$

with φ′ = φ + κτ. We wish this to be commutative, i.e., φ′µ = κψ′.
Since ψ′ = ψ + τµ,

$\kappa\psi' = \kappa\psi + \kappa\tau\mu = \varphi\mu + \kappa\tau\mu = (\varphi + \kappa\tau)\mu = \varphi'\mu$
By Lemma 4.4 the left-hand square is a push-out diagram. Meaning, from any representative we can recover the same extension, so the extension does not depend on the representative. This gives a well-defined map

$\xi : \mathrm{Ext}_R(A, B) \to E(A, B),$
which is also natural in B. Given a representative of E(A, B), η gives a ψ : D → B such that

$\begin{array}{ccccc}
D & \longrightarrow & P & \longrightarrow & A \\
\downarrow{\psi} & & \downarrow & & \| \\
B & \longrightarrow & E & \longrightarrow & A
\end{array}$

commutes. By Lemma 4.4, the left-hand square is a push-out, which is unique up to isomorphism, implying that ξ returns the same extension.
Conversely, given ψ : D → B, ξ gives us the extension in

$\begin{array}{ccccc}
D & \longrightarrow & P & \longrightarrow & A \\
\downarrow{\psi} & & \downarrow & & \| \\
B & \longrightarrow & E & \longrightarrow & A
\end{array}$

Again by Lemma 4.4, η gives back ψ, so η and ξ are inverses of each other, giving an equivalence $\eta : E(A, B) \to \mathrm{Ext}_R(A, B)$. Note that it may be possible that η depends on our choice of projective presentation. However, we show this is not the case with the following diagram.
Consider the 3D diagram with rows $D \to P \xrightarrow{\varepsilon} A$ and $D' \to P' \to A'$ on top, and $B \to E \to A$ and $B \to E^{\alpha} \to A'$ below, where ψ : D → B and φ : P → E are vertical maps and α : A′ → A connects the primed rows to the unprimed ones.
Here $E^{\alpha}$ is the pull-back of $E \to A$ and $A' \to A$. We need to show the existence of homomorphisms $\varphi' : P' \to E^{\alpha}$ and $\psi' : D' \to B$ such that the diagram commutes. Since $P' \to E \to A$ and $P' \to A' \to A$ agree, they define a homomorphism $\varphi' : P' \to E^{\alpha}$ into the pull-back. This φ′ then induces $\psi' : D' \to B$, making all faces commutative. Meaning, if given a representative in E(A, B), η gives us a corresponding ψ, and if α̂ sends this representative to another extension, we can find another corresponding ψ′. We then arrive at the following diagram:
$\begin{array}{ccc}
E(A, B) & \xrightarrow{\hat\alpha} & E(A', B) \\
\xi\uparrow\downarrow\eta & & \xi\uparrow\downarrow\eta \\
\mathrm{Ext}_R(A, B) & \xrightarrow{\hat\alpha} & \mathrm{Ext}_R(A', B)
\end{array}$

For A′ = A and α = 1A, this shows the independence of the chosen presentation. Furthermore, by similar arguments, we can create the commutative diagram

$\begin{array}{ccc}
E(A, B) & \xrightarrow{\hat\beta} & E(A, B') \\
\xi\uparrow\downarrow\eta & & \xi\uparrow\downarrow\eta \\
\mathrm{Ext}_R(A, B) & \xrightarrow{\hat\beta} & \mathrm{Ext}_R(A, B')
\end{array}$

for the naturality in B, concluding the proof.
Corollary 4.10 The set E(A, B) has an abelian group structure.

Proof: This follows from Theorem 4.9 and the fact that $\mathrm{Ext}_R(A, B)$ has an abelian group structure.
We will now investigate the structure on E(A, B). Given two representatives of E(A, B),

$B \to E_i \to A, \quad i = 1, 2,$

we first consider the direct sum

$B \oplus B \to E_1 \oplus E_2 \to A \oplus A.$
Define a map $\Delta_A : A \to A \oplus A$ by $\Delta_A(a) = (a, a)$. Since addition is defined componentwise, ∆A is clearly a module homomorphism. Let F be the pull-back of ∆A and $E_1 \oplus E_2 \to A \oplus A$, giving the following diagram.

$\begin{array}{ccccc}
B \oplus B & \longrightarrow & F & \longrightarrow & A \\
\| & & \downarrow & & \downarrow{\Delta_A} \\
B \oplus B & \longrightarrow & E_1 \oplus E_2 & \longrightarrow & A \oplus A
\end{array}$
Now define a map $\nabla_B : B \oplus B \to B$ by $\nabla_B(b_1, b_2) = b_1 + b_2$ for b1, b2 ∈ B, which is also clearly a module homomorphism. Let E be the push-out of ∇B and $B \oplus B \to F$. This results in the following diagram.

$\begin{array}{ccccc}
B \oplus B & \longrightarrow & F & \longrightarrow & A \\
\downarrow{\nabla_B} & & \downarrow & & \| \\
B & \longrightarrow & E & \longrightarrow & A
\end{array}$

where $B \to E \to A$ is the sum of the two representatives.
Example 4.11 Adding the elements of Ext(Z2, Z4)
Recall that Ext(Z2, Z4) contains the elements $\mathbb{Z}_4 \xrightarrow{\nu} \mathbb{Z}_8 \xrightarrow{\pi_1} \mathbb{Z}_2$ and $\mathbb{Z}_4 \xrightarrow{i} \mathbb{Z}_4 \oplus \mathbb{Z}_2 \xrightarrow{\pi_2} \mathbb{Z}_2$, where ν is multiplication by 2, i is the injection map, and each $\pi_j$ is the projection map. Now consider the direct sum of these sequences

$\mathbb{Z}_4 \oplus \mathbb{Z}_4 \xrightarrow{\gamma} \mathbb{Z}_8 \oplus \mathbb{Z}_4 \oplus \mathbb{Z}_2 \xrightarrow{\hat\pi} \mathbb{Z}_2 \oplus \mathbb{Z}_2$

with $\gamma = \nu \oplus i$ and $\hat\pi = \pi_1 \oplus \pi_2$. Let F be the pull-back of $\Delta_{\mathbb{Z}_2}$ and π̂, which may be identified with $\{m \in \mathbb{Z}_8 \oplus \mathbb{Z}_4 \oplus \mathbb{Z}_2 \mid \hat\pi(m) \in \Delta_{\mathbb{Z}_2}(\mathbb{Z}_2)\} \cong \mathbb{Z}_8 \oplus \mathbb{Z}_4$. Now let E be the push-out of $\nabla_{\mathbb{Z}_4}$ and $\gamma : \mathbb{Z}_4 \oplus \mathbb{Z}_4 \to F$. Recall that $E = (\mathbb{Z}_4 \oplus F)/S$ where $S = \{(\nabla_{\mathbb{Z}_4}(b_1, b_2), -\gamma(b_1, b_2))\}$. Thus $E \cong \mathbb{Z}_8$, resulting in $\mathbb{Z}_4 \to \mathbb{Z}_8 \to \mathbb{Z}_2$ as the sum.
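Because every group involved is tiny, this Baer sum can be brute-forced. The sketch below (our own encoding of the pull-back and push-out as sets of residue tuples; all names are ours) finds that the pull-back F has 32 elements (F ≅ Z8 ⊕ Z4), that E has 8 cosets, and that E contains an element of order 8, so E ≅ Z8:

```python
from itertools import product

MODS = (4, 8, 4, 2)  # moduli for (a, x, y, z) in Z4 (+) (Z8 (+) Z4 (+) Z2)

def add(p, q):
    return tuple((i + j) % m for i, j, m in zip(p, q, MODS))

def smul(n, p):
    return tuple((n * i) % m for i, m in zip(p, MODS))

# F = pull-back of the diagonal Z2 -> Z2 (+) Z2 and pi-hat(x, y, z) = (x mod 2, z)
F = [(x, y, z) for x, y, z in product(range(8), range(4), range(2)) if x % 2 == z]

# S = {(nabla(b1, b2), -gamma(b1, b2))} inside Z4 (+) F,
# with gamma = nu (+) i and nabla(b1, b2) = b1 + b2
S = {((b1 + b2) % 4, (-2 * b1) % 8, (-b2) % 4, 0)
     for b1, b2 in product(range(4), repeat=2)}

G = [(a,) + f for a in range(4) for f in F]       # the group Z4 (+) F
E = {frozenset(add(g, s) for s in S) for g in G}  # E = (Z4 (+) F)/S

def coset_order(g):
    n = 1
    while smul(n, g) not in S:
        n += 1
    return n

print(len(F), len(E), max(coset_order(g) for g in G))  # -> 32 8 8
```

Since |E| = 8 and E has an element of order 8, E is cyclic of order 8, confirming the sum.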
We can actually find the sum of two representatives of E(A, B) via projectives as follows. Given two representatives of E(A, B) we can create the following push-out diagram for i = 1, 2.
$\begin{array}{ccccc}
K & \longrightarrow & P & \longrightarrow & A \\
\downarrow{\lambda_i} & & \downarrow & & \| \\
B & \longrightarrow & E_i & \longrightarrow & A
\end{array}$

The sum can be computed by the following diagram.

$\begin{array}{ccccc}
K & \longrightarrow & P & \longrightarrow & A \\
\downarrow{\lambda_1+\lambda_2} & & \downarrow & & \| \\
B & \longrightarrow & E & \longrightarrow & A
\end{array}$
We finally note:
Theorem 4.12 If P is projective and I is injective, then $\mathrm{Ext}_R(P, B) = 0 = \mathrm{Ext}_R(A, I)$.

Proof: Theorem 4.9 gives that $\mathrm{Ext}_R(A, B)$ is in one-to-one correspondence with the set E(A, B). Elements of E(A, B) are extensions of the form $B \to E \to A$. If A = P, then by Theorem 2.28 the sequence splits. Similarly, if B = I, the sequence splits by the dual of Theorem 2.28. If $B \to E \to A$ splits, then E(A, B) contains only one element, namely 0.
4.2.2 Injective Modules
Here we will consider the dual argument to create $\mathrm{Ext}^{\nu}_R(A, B)$ for injective presentations $B \xrightarrow{\nu} I \to S$. One could develop all of the same (but dual) theory as in the previous section to create $\mathrm{Ext}^{\nu}_R(A, B)$ as a functor and set up the natural equivalence to E(−, −). However, it will be advantageous to develop $\mathrm{Ext}^{\nu}_R(A, B)$ in a different way, to show more about the structure of E(−, −).
First we introduce new notation to shorten the arguments. Define Σ to be a commutative diagram of R-modules

$\begin{array}{ccc}
A' & \xrightarrow{\alpha} & A \\
\downarrow{\psi} & \Sigma & \downarrow{\varphi} \\
B & \xrightarrow{\beta} & B'
\end{array}$

and define

$\operatorname{Im}\Sigma = (\operatorname{Im}\varphi \cap \operatorname{Im}\beta)/\operatorname{Im}\varphi\alpha, \qquad \operatorname{Ker}\Sigma = \ker\varphi\alpha/(\ker\alpha + \ker\psi).$

For Im Σ to be well defined we must have Im φα ⊆ Im φ ∩ Im β. Suppose that x ∈ Im φα. Then clearly x ∈ Im φ, and by the commutativity φα = βψ we also have x ∈ Im β. Resulting in Im φα ⊆ Im φ ∩ Im β.
For Ker Σ to be well defined we need ker α + ker ψ ⊆ ker φα. Suppose that x ∈ ker α + ker ψ. This means that x = a + b with a ∈ ker α and b ∈ ker ψ. Thus, since φ and α are homomorphisms and Σ commutes,

$\varphi\alpha(x) = \varphi\alpha(a + b) = \varphi\alpha(a) + \varphi\alpha(b) = \varphi\alpha(a) + \beta\psi(b) = \varphi(0) + \beta(0) = 0,$

giving ker α + ker ψ ⊆ ker φα.
Lemma 4.13 Let

$\begin{array}{ccccc}
A' & \xrightarrow{\alpha_1} & A & \xrightarrow{\alpha_2} & A'' \\
\downarrow{\psi} & \Sigma_1 & \downarrow{\varphi} & \Sigma_2 & \downarrow{\theta} \\
B' & \xrightarrow{\beta_1} & B & \xrightarrow{\beta_2} & B''
\end{array}$

be a diagram with exact rows. Then φ induces an isomorphism $\Phi : \operatorname{Ker}\Sigma_2 \xrightarrow{\sim} \operatorname{Im}\Sigma_1$.
Proof: Let x ∈ ker θα2; clearly φ(x) ∈ Im φ. Since (β2φ)(x) = (θα2)(x) = 0 and the bottom row is exact, φ(x) ∈ Im β1. Thus we can define Φ(x) = φ(x), taken modulo Im φα1. To see that Φ is well defined on $\operatorname{Ker}\Sigma_2 = \ker\theta\alpha_2/(\ker\alpha_2 + \ker\varphi)$, let x ∈ ker α2 + ker φ, so x = a1 + a2 with α2(a1) = 0 and φ(a2) = 0. By exactness of the top row, a1 = α1(a′) for some a′ ∈ A′. Thus

$\Phi(x) = \varphi(a_1) + \varphi(a_2) = \varphi(a_1) = \varphi\alpha_1(a') \equiv 0 \pmod{\operatorname{Im}\varphi\alpha_1},$

resulting in Φ being well defined. Now let y ∈ Im φ ∩ Im β1. Then there exists an x ∈ A such that φ(x) = y. By commutativity, we get (θα2)(x) = (β2φ)(x) = β2(y). Since y ∈ Im β1 and the bottom row is exact, we find that β2(y) = 0, so x ∈ ker θα2 and Φ is onto. Finally, suppose that x ∈ ker θα2 is such that φ(x) ∈ Im φα1, say φ(x) = (φα1)(z) for some z ∈ A′. From the exactness of the top row, we have α1(z) ∈ ker α2. Now φ(x − α1(z)) = 0, so let t = x − α1(z), where t ∈ ker φ. Thus x = α1(z) + t, implying that x ∈ ker α2 + ker φ. This makes Φ one-to-one and thus an isomorphism.
Theorem 4.14 For any projective presentation $D \xrightarrow{\mu} P \xrightarrow{\varepsilon} A$ of A and injective presentation $B \xrightarrow{\nu} I \xrightarrow{\eta} S$ of B, there is an isomorphism

$\sigma : \mathrm{Ext}^{\varepsilon}_R(A, B) \to \mathrm{Ext}^{\nu}_R(A, B).$
Proof: Consider the following commutative diagram with exact rows and columns.

$\begin{array}{ccccccc}
\mathrm{Hom}_R(A, B) & \to & \mathrm{Hom}_R(A, I) & \to & \mathrm{Hom}_R(A, S) & \to & \mathrm{Ext}^{\nu}_R(A, B) \\
\downarrow & & \downarrow & \Sigma_2 & \downarrow & \Sigma_1 & \downarrow \\
\mathrm{Hom}_R(P, B) & \to & \mathrm{Hom}_R(P, I) & \to & \mathrm{Hom}_R(P, S) & \to & 0 \\
\downarrow & \Sigma_4 & \downarrow & \Sigma_3 & \downarrow & & \\
\mathrm{Hom}_R(D, B) & \to & \mathrm{Hom}_R(D, I) & \to & \mathrm{Hom}_R(D, S) & & \\
\downarrow & \Sigma_5 & & & & & \\
\mathrm{Ext}^{\varepsilon}_R(A, B) & \to & 0 & & & &
\end{array}$
Looking at Σ1, we have the following commutative diagram.

$\begin{array}{ccc}
\mathrm{Hom}_R(A, S) & \xrightarrow{\alpha} & \mathrm{Ext}^{\nu}_R(A, B) \\
\downarrow{\psi} & \Sigma_1 & \downarrow{\varphi} \\
\mathrm{Hom}_R(P, S) & \xrightarrow{\beta} & 0
\end{array}$

Since ψ is injective, ker ψ = 0, giving $\operatorname{Ker}\Sigma_1 = \ker\varphi\alpha/\ker\alpha = \mathrm{Hom}_R(A, S)/\ker\alpha \cong \mathrm{Ext}^{\nu}_R(A, B)$, since α is onto.
Now looking at Σ5, we have this commutative diagram.

$\begin{array}{ccc}
\mathrm{Hom}_R(D, B) & \xrightarrow{\alpha} & \mathrm{Hom}_R(D, I) \\
\downarrow{\psi} & \Sigma_5 & \downarrow{\varphi} \\
\mathrm{Ext}^{\varepsilon}_R(A, B) & \xrightarrow{\beta} & 0
\end{array}$

Here ker φα = HomR(D, B), ker α = 0 since ν is one-to-one, and ψ is the onto map with ker ψ = Im(HomR(P, B) → HomR(D, B)). Giving $\operatorname{Ker}\Sigma_5 = \mathrm{Hom}_R(D, B)/\ker\psi = \mathrm{Ext}^{\varepsilon}_R(A, B)$. Now applying Lemma 4.13 several times we find
$\mathrm{Ext}^{\nu}_R(A, B) \cong \operatorname{Ker}\Sigma_1 \cong \operatorname{Im}\Sigma_2 \cong \operatorname{Ker}\Sigma_3 \cong \operatorname{Im}\Sigma_4 \cong \operatorname{Ker}\Sigma_5 \cong \mathrm{Ext}^{\varepsilon}_R(A, B)$
Thus for any injective presentation of B, $\mathrm{Ext}^{\nu}_R(A, B)$ is isomorphic to $\mathrm{Ext}_R(A, B)$, allowing us to drop the ν. Now let β : B → B′ be a homomorphism and let $B' \xrightarrow{\nu'} I' \to S'$ be an injective presentation. If τ : I → I′ is a map that induces β, we then obtain a diagram corresponding to the one in the proof of Theorem 4.14, resulting in the induced homomorphism

$\beta_* : \mathrm{Ext}_R(A, B) \to \mathrm{Ext}_R(A, B')$

which agrees, through the above isomorphism, with the homomorphism $\beta_* : \mathrm{Ext}_R(A, B) \to \mathrm{Ext}_R(A, B')$ of the previous section. Following an analogous procedure, we can define $\alpha^* : \mathrm{Ext}_R(A, B) \to \mathrm{Ext}_R(A', B)$. Using these definitions we get the following theorem.
Theorem 4.15 $\mathrm{Ext}^{\nu}_R(-, -)$ is a bifunctor, contravariant in the first variable and covariant in the second. It is naturally equivalent to $\mathrm{Ext}^{\varepsilon}_R(-, -)$ and therefore naturally equivalent to E(−, −).

Since all of these functors are equivalent, only one notation is traditionally used: $\mathrm{Ext}_R(-, -)$.
Example 4.16 For the following examples the ground ring will be the inte- gers.
• Since Z is projective, Theorem 4.12 gives
Ext(Z, Z) = 0 = Ext(Z, Zq)
• Ext(Zr, Z)
Using the projective presentation $\mathbb{Z} \xrightarrow{\mu} \mathbb{Z} \xrightarrow{\varepsilon} \mathbb{Z}_r$, where µ is multiplication by r, gives the exact sequence

$\mathrm{Hom}(\mathbb{Z}_r, \mathbb{Z}) \to \mathrm{Hom}(\mathbb{Z}, \mathbb{Z}) \xrightarrow{\mu^*} \mathrm{Hom}(\mathbb{Z}, \mathbb{Z}) \to \mathrm{Ext}(\mathbb{Z}_r, \mathbb{Z}) \to 0$

with $\mathrm{Hom}(\mathbb{Z}_r, \mathbb{Z}) = 0$ and $\mathrm{Hom}(\mathbb{Z}, \mathbb{Z}) \cong \mathbb{Z}$. Since µ∗ is multiplication by r, we get

$\mathrm{Ext}(\mathbb{Z}_r, \mathbb{Z}) \cong \mathbb{Z}_r$
• Ext(Zr, Zq)

From Example 2.5 we know that $\mathrm{Hom}(\mathbb{Z}_r, \mathbb{Z}_q) \cong \mathbb{Z}_{(r,q)}$, where (r, q) is the greatest common divisor of r and q. This gives the following diagram.

$\begin{array}{cccc}
\mathrm{Hom}(\mathbb{Z}_r, \mathbb{Z}_q) \to & \mathrm{Hom}(\mathbb{Z}, \mathbb{Z}_q) & \xrightarrow{\mu^*} & \mathrm{Hom}(\mathbb{Z}, \mathbb{Z}_q) \to \mathrm{Ext}(\mathbb{Z}_r, \mathbb{Z}_q) \\
\cong\quad & \cong & & \cong \\
\mathbb{Z}_{(r,q)} \to & \mathbb{Z}_q & \xrightarrow{\mu^*} & \mathbb{Z}_q
\end{array}$

where µ∗ is multiplication by r. Here $\operatorname{cok}\mu^* = \mathbb{Z}_q/r\mathbb{Z}_q \cong \mathbb{Z}_{(r,q)}$, which gives $\mathrm{Ext}(\mathbb{Z}_r, \mathbb{Z}_q) \cong \mathbb{Z}_{(r,q)}$.
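The cokernel computation can be sanity-checked numerically; the sketch below (our own helper, not from the thesis) counts the cosets of rZq in Zq for a few pairs and compares with gcd(r, q):

```python
from math import gcd

# Ext(Z_r, Z_q) = cok(mu*) where mu* is multiplication by r on Z_q;
# the cokernel Z_q / r*Z_q should have exactly gcd(r, q) cosets.
def cokernel_size(r, q):
    image = {(r * x) % q for x in range(q)}              # the subgroup r*Z_q
    cosets = {frozenset((x + y) % q for y in image) for x in range(q)}
    return len(cosets)

for r, q in [(4, 6), (2, 4), (12, 18), (5, 7)]:
    assert cokernel_size(r, q) == gcd(r, q)
print("cok(mu*) has gcd(r, q) elements in every case")
```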
Chapter 5
Ext in RepG
Here we will compute the functor Ext for representations of simple digraphs. We will begin by discussing basic results in Rep G to aid in finding the structure of Ext.
5.1 Projectives in RepG
For a given digraph G, we will pick any vertex in G and consider all the paths which start from there.
Example 5.1
•7
' 7 •6 g •2
s •5 g •1
w ' •4 •3
We will form the subgraph L by starting from the vertex 1, giving us the graph:
7 •6 g
•5 g •1
w ' •4 •3 Here we will assign each vertex in L the one dimensional vector space F and each edge in L the identity linear transformation, denoted by 1. For all other
elements we will label them 0. Making the digraph P(1).
0 0
& 8 F f 0 1 1 0 s F f F 1 1 1 F x & F
Lemma 5.2 Given that the digraph G has no cycles or multiple edges between two vertices, $\mathrm{Hom}(\mathcal{P}(i), R) \cong R_i$ as abelian groups.
Proof: Suppose φ : P(i) → R; in particular, $\varphi_i : F \to R_i$ with $\varphi_i(1) \in R_i$. This gives a map $\mathrm{Hom}(\mathcal{P}(i), R) \to R_i$.

For the inverse, let v ∈ Ri. This defines $\varphi_i : F \to R_i$ by $\varphi_i(\lambda) = \lambda v$. Suppose the component of P(i) at a vertex j is nonzero, so there exists a path from i to j. This results in the following commutative diagram.

$\begin{array}{ccccccc}
F & \xrightarrow{1} & F & \xrightarrow{1} & \cdots & \xrightarrow{1} & F \\
\downarrow{\varphi_i} & & \downarrow{\beta_1} & & & & \downarrow{\beta_n} \\
R_i & \xrightarrow{\alpha_1} & \bullet & \xrightarrow{\alpha_2} & \cdots & \xrightarrow{\alpha_n} & R_j
\end{array}$

In order for the βi to give a commutative diagram, one requires β1 = α1φi, β2 = α2α1φi, ..., βn = αnαn−1···α1φi, resulting in a unique set of linear transformations. For any vertex not on such a path, use the zero transformation. Hence each v ∈ Ri yields a unique morphism in Hom(P(i), R), and it follows that $R_i \cong \mathrm{Hom}(\mathcal{P}(i), R)$ as vector spaces, since if v and w yield φv and φw in Hom(P(i), R), then λv + w yields λφv + φw.
Example 5.3 For a six-vertex graph with main path $1 \to 2 \to 3$, a branch $5 \to 6$, and a vertex 4, the representation P places F on the vertices reachable from the chosen starting vertex (with identity edge maps) and 0 elsewhere; a morphism P → R is then determined componentwise into the spaces R1, ..., R6.
Theorem 5.4 Assume G has no multiple edges between two vertices and no cycles. Then the previous construction P(i), for any vertex i, is a projective object in Rep G.
Proof: Given two representations R and S with $R \to S$ onto, and a map $\mathcal{P}(i) \to S$ sending $1 \mapsto v$ for 1 ∈ P(i)ᵢ and v ∈ Si. Since $R_i \to S_i$ is onto, pick w ∈ Ri such that $w \mapsto v$. By the above lemma, w yields a morphism $\varphi : \mathcal{P}(i) \to R$. In the composition $\mathcal{P}(i) \to R \to S$ we get $1 \mapsto w \mapsto v$, so, by the above lemma again, the composition must be the original map $\mathcal{P}(i) \to S$. Thus $\mathcal{P}(i) \to S$ factors through R, and P(i) is projective.
Example 5.5
G •1 / •2 / •3
1 1 P(1) F / F / F
1 P(2) 0 / F / F
P(3) 0 / 0 / F
G: $\bullet_1 \to \bullet_2 \to \bullet_3 \to \bullet_4$, with an additional edge $\bullet_2 \to \bullet_5$

P(1): F at every vertex, with the identity map on every edge.
P(2): F at vertices 2, 3, 4, 5 and 0 at vertex 1.
P(3): F at vertices 3 and 4, 0 elsewhere.
P(4): F at vertex 4 only.
P(5): F at vertex 5 only.
For the dual notion of injective presentations in Rep G we use a similar construction but with paths ending at i. So with G
•7
' 7 •6 g •2
s •5 g •1
w ' •4 •3
will give the following injective presentation J(1).
Here F is placed at the vertices lying on paths ending at vertex 1, with identity maps along those edges, and 0 at every other vertex and edge. A simpler example would be
G •1 / •2 / •3
J (1) F / 0 / 0
1 J (2) F / F / 0
1 1 J (3) F / F / F
Now given a representation R and any vertex i, let Vi be the vector space at i. This can be expressed as a direct sum of copies of the field F. Thus we assign Vi at i and at any other vertex reachable from i, using the identity transformation on each edge along the way. As this is built from the paths starting at i, we get a direct sum of copies of P(i), which is thus projective. We denote this P(Vi); then ⊕P(Vi), the sum over all vertices, is also projective and maps onto R.
Example 5.6
G: $\bullet_1 \to \bullet_2 \to \bullet_3$, with an additional edge $\bullet_4 \to \bullet_2$
This would result in the following.

P(V1): V1 at vertices 1, 2, 3 with identity maps, and 0 at vertex 4.
P(V2): V2 at vertices 2 and 3, 0 elsewhere.
P(V3): V3 at vertex 3 only.
P(V4): V4 at vertices 4, 2, 3 with identity maps, and 0 at vertex 1.
This would give the following map: at vertex 1 it is the identity on V1; at vertex 2 it is $[\alpha_1, 1, \alpha_3] : V_1 \oplus V_2 \oplus V_4 \to V_2$; at vertex 3 it is $[\alpha_2\alpha_1, \alpha_2, 1, \alpha_2\alpha_3] : V_1 \oplus V_2 \oplus V_3 \oplus V_4 \to V_3$; and at vertex 4 it is the identity on V4.
This map is onto since at each vertex i, $\mathcal{P}(V_i) \to R$ is onto at the vertex i. This gives every representation R a projective presentation by a direct sum of copies of the P(i)'s.
Applying Lemma 5.2, we can see that Hom(P(i), P(j)) = F if there is a path from i to j, otherwise Hom(P(i), P(j)) = 0. Continuing with the previous example we have,
110 FF ? ?
(1) F / F (2) 0 / F P _ P _
0 0
FF ? ?
(3) 0 / 0 (4) 0 / F P _ P _
0 F Resulting in
Hom(P(1), P(1)) = F
Hom(P(1), P(2)) = F
Hom(P(1), P(3)) = F
Hom(P(1), P(4)) = 0
We can express this as a matrix where the (i, j)-entry is Hom(P(i), P(j)). This results in the following matrix.

$\begin{pmatrix} F & F & F & 0 \\ 0 & F & F & 0 \\ 0 & 0 & F & 0 \\ 0 & F & F & F \end{pmatrix}$
In general, if G has n vertices and is a tree (i.e., has no cycles), the result is an n × n matrix with entries 0 and F. This matrix represents all the linear transformations of $\mathcal{P}(1) \oplus \mathcal{P}(2) \oplus \cdots \oplus \mathcal{P}(n) = \mathcal{P}$ to itself, in other words End(P). Since F is a field, we can express this as an n × n matrix with entries from F. Thus

$\mathrm{End}(\mathcal{P}) \cong \{[a_{ij}] \mid a_{ij} \in \mathrm{Hom}(\mathcal{P}(i), \mathcal{P}(j))\}.$
Identifying Hom(P(i), P(j)) with F, and aij with bij ∈ F where bij = 0 whenever Hom(P(i), P(j)) = 0, we can continue our example:

$\mathrm{End}(\mathcal{P}(1) \oplus \mathcal{P}(2) \oplus \mathcal{P}(3) \oplus \mathcal{P}(4)) \cong \left\{ \begin{pmatrix} a_1 & a_2 & a_3 & 0 \\ 0 & a_4 & a_5 & 0 \\ 0 & 0 & a_6 & 0 \\ 0 & a_7 & a_8 & a_9 \end{pmatrix} \;\middle|\; a_i \in F \text{ for all } i \right\}$
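That this set of patterned matrices is closed under multiplication follows from the transitivity of reachability in G. A quick randomized check over the integers (our own illustration, not from the thesis):

```python
import random

# Allowed-support pattern for End(P) in the four-vertex example: entry (i, j)
# may be nonzero exactly when Hom(P(i), P(j)) is nonzero.
PATTERN = [[1, 1, 1, 0],
           [0, 1, 1, 0],
           [0, 0, 1, 0],
           [0, 1, 1, 1]]

def random_member():
    return [[random.randint(-5, 5) if PATTERN[i][j] else 0 for j in range(4)]
            for i in range(4)]

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def in_pattern(m):
    return all(PATTERN[i][j] or m[i][j] == 0 for i in range(4) for j in range(4))

closed = all(in_pattern(matmul(random_member(), random_member()))
             for _ in range(200))
print(closed)  # products of patterned matrices stay in the pattern
```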
Now we will consider the representation R of G with linear transformations A, B, C:

$V_1 \xrightarrow{A} V_2 \xrightarrow{B} V_3, \qquad V_4 \xrightarrow{C} V_2$
Each vector vi ∈ Vi is sent along all paths starting at i. Thus for v1, we have the graph
v1AB 7 B
A v1 / v1A
112 We will then add up all the results for each vector at the vertices. Resulting in
v1AB + v2B + v3 + v4CB 3
v1 / v1A + v2 + v4C k
v4
So we send
[v1, v2, v3, v4] 7→ [v1, v1A + v2 + v4C, v1AB + v2B + v3 + v4CB, v4]
Using the matrix pattern above and starting with the element 1 in each F, this gives

    [v1, v2, v3, v4] [ 1 1 1 0
                       0 1 1 0
                       0 0 1 0
                       0 1 1 1 ] = [v1·1, v1·1 + v2·1 + v4·1, v1·1 + v2·1 + v3·1 + v4·1, v4·1]

Here each vi·1 uses the 1 in the (i, j)-entry, with vi carried along the linear transformations of the unique path from i to j. Thus
            •3
            ↑ B
    •1 —A→  •2
            ↑ C
            •4

results in [v1, v1A + v2 + v4C, v1AB + v2B + v3 + v4CB, v4]. Similarly, for G the quiver

    •1      •2
     A↘    ↗B
        •3
        ↑ C
        •4

we would get [v1, v1AB + v2 + v3B + v4CB, v1A + v3 + v4C, v4] and the ring

    F F F 0
    0 F 0 0
    0 F F 0
    0 F F F

Changing the 1's to arbitrary scalars simply scales each contribution. For example,

    [v1, v2, v3, v4] [ 2  3  1 0
                       0  4  0 0
                       0 −1 −1 0
                       0  3  1 2 ] = [2v1, 3v1AB + 4v2 − v3B + 3v4CB, v1A − v3 + v4C, 2v4]
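The right action just computed can be checked numerically. In this hypothetical sketch (our own code, not the thesis's) all four vector spaces are taken one-dimensional, so the arrow maps A, B, C are scalars, and the (i, j)-entry of a ring element acts through the composite along the unique path from i to j:

```python
# Quiver: A : 1 -> 3, B : 3 -> 2, C : 4 -> 3 (the second example above),
# with every V_i = F so that A, B, C are scalars.
A, B, C = 2.0, 5.0, 7.0

# T[i][j] = composite of arrow scalars along the unique path i -> j
# (1 for the empty path, 0 if no path exists; 0-indexed vertices).
T = [[1, A * B, A, 0],
     [0, 1,     0, 0],
     [0, B,     1, 0],
     [0, C * B, C, 1]]

def act(v, a):
    """Right action v . a, where a = [a_ij] has a_ij = 0 unless a path i -> j."""
    return [sum(v[i] * a[i][j] * T[i][j] for i in range(4)) for j in range(4)]

v = [1.0, 1.0, 1.0, 1.0]
a = [[2, 3, 1, 0], [0, 4, 0, 0], [0, -1, -1, 0], [0, 3, 1, 2]]
# Component 2 is 3*v1*A*B + 4*v2 - v3*B + 3*v4*C*B, as in the text.
print(act(v, a))  # [2.0, 134.0, 8.0, 2.0]
```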
All of this implies that each representation of G is just a module over End(P). As constructed, these are right modules; however, the same process can be used to construct left modules. We will switch between the two whenever convenient.
Example 5.7 (From Lie theory) Consider A2: •1 → •2. The ring R for •1 → •2 comes from

    F F
    0 F

and is

    R = { [ a1 a3
            0  a2 ]  |  ai ∈ F for all i }

A generic representation V1 —A→ V2 gives a corresponding module V1 ⊕ V2 with the right action of R given by

    [v1  v2] [ a1 a3
               0  a2 ] = [v1a1, v1a3A + v2a2]
Now consider A3: •1 → •2 → •3. We have

    R = { [ a1 a4 a6
            0  a2 a5
            0  0  a3 ]  |  ai ∈ F for all i }

With a generic representation V1 —A→ V2 —B→ V3, the corresponding module is V1 ⊕ V2 ⊕ V3 with right action given by

    [v1  v2  v3] [ a1 a4 a6
                   0  a2 a5
                   0  0  a3 ] = [v1a1, v1a4A + v2a2, v1a6AB + v2a5B + v3a3]
Consider G

    •1      •3
      ↘    ↗
        •2
        ↑
        •4

Here End(P(1) ⊕ P(2) ⊕ P(3) ⊕ P(4)) is represented by the matrix

    F F F 0
    0 F F 0
    0 0 F 0
    0 F F F

and a generic representation looks like

    V1      V3
     R↘    ↗S
        V2
        ↑ T
        V4

giving us a module V1 ⊕ V2 ⊕ V3 ⊕ V4 over End(P(1) ⊕ P(2) ⊕ P(3) ⊕ P(4)), with the right action defined by

    [v1  v2  v3  v4] [ a1 b1 c1 0
                       0  a2 b2 0
                       0  0  a3 0
                       0  a4 b4 c4 ]
      = [v1a1, v1Rb1 + v2a2 + v4Ta4, v1RSc1 + v2Sb2 + v3a3 + v4TSb4, v4c4]
From this we will construct the following matrices.
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 e1 = e2 = e3 = 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 e4 = F1,2 = F1,3 = 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 F2,3 = F4,2 = F4,3 = 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0
In general for some edge •i / •j, ei is the matrix with 1 in entry (i, i) and zero elsewhere. The matrix Fi,j is the matrix with 1 in entry (i, j), zero elsewhere. Furthermore, these matrices satisfy the following properties. ei i = j (i) eiej = 0 i 6= j
Fi,j j = k (ii) Fi,jek = 0 j 6= k
Fi,j k = i (iii) ekFi,j = 0 k 6= i
Since End(P) ≅ [Hom(P(i), P(j))], and as previously stated these are just matrices with entries from F, the ei and Fi,j represent endomorphisms. Each ei acts as the identity on P(i) and sends everything else to zero. Continuing with the previous example, e2 maps the summand P(2) = (0 → F —1→ F, with 0 at vertex 4) identically onto itself,
while all other summands are mapped to zero. For Fi,j, the endomorphism sends every P(k) with k ≠ j to zero and sends P(j) into P(i). For example, F1,2 maps P(2) = (0 → F → F) into P(1) = (F → F → F) by the identity at vertices 2 and 3.
We can make a connection between these matrices and the standard basis vectors of the vector space Mn×n(F). Using the notation E^(r,s) = [ai,j], where ai,j = 1 if (i, j) = (r, s) and ai,j = 0 otherwise, we get

    E^(r,s) E^(p,q) = E^(r,q) if p = s, and E^(r,s) E^(p,q) = 0 if p ≠ s.

Thus ei = E^(i,i) and Fi,j = E^(i,j).
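The matrix-unit relations above, and properties (i)–(iii), are easy to verify directly; a small sketch with our own helper names:

```python
def E(r, s, n=4):
    """Matrix unit: 1 in entry (r, s), zero elsewhere (0-indexed)."""
    return [[1 if (i, j) == (r, s) else 0 for j in range(n)] for i in range(n)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

zero = [[0] * 4 for _ in range(4)]

# E(r,s) E(p,q) = E(r,q) if p = s, else 0
assert matmul(E(0, 1), E(1, 2)) == E(0, 2)
assert matmul(E(0, 1), E(2, 3)) == zero
# property (iii): e_k F_{i,j} = F_{i,j} if k = i, else 0  (e_i = E(i,i))
assert matmul(E(0, 0), E(0, 2)) == E(0, 2)
assert matmul(E(1, 1), E(0, 2)) == zero
print("matrix-unit relations hold")
```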
Given a module M over End(P), each eiM is a vector space. Let M̂ be the representation having eiM at vertex i. If •j → •i, we can define a linear transformation

    F̂i,j : ejM → eiM

by ejx ↦ Fi,j(ejx) = Fi,jx = eiFi,jx. This creates a representation from a left End(P)-module, giving a correspondence from End(P)-Mod into Rep G, as well as the correspondence from Rep G into End(P)-Mod. We would like these to be functors, so we must define their behaviour on morphisms. Suppose φ : (Vi) → (Wi) is a morphism of representations, with corresponding modules ⊕Vi and ⊕Wi. For (vi) ∈ ⊕Vi we define φ̃ : ⊕Vi → ⊕Wi by φ̃(vi) = (φivi). This is an additive function, so it remains to check that φ̃λ = λφ̃ for λ ∈ End(P).
Each λ is represented as a matrix [λi,j], where λi,j ∈ F and λi,j = 0 if there is no path from vertex j to vertex i. Let Ti,j denote the composite of the linear transformations along that path (with Ti,j = 0 if no such path exists), so that, absorbing the scalar into the map, λi,jvj = Ti,jvj. Now, examining φ̃ at vertex i for n vertices, we have

    φ̃(λ(v))i = φi( Σ_{j=1}^{n} λi,jvj )
             = Σ_{j=1}^{n} φi(λi,jvj)
             = Σ_j φi(Ti,jvj)
             = Σ_j φiTi,jvj
Since φ is a morphism of representations, we can create the following diagram:

    Vj ——→ • ——→ ··· ——→ Vi        (composite Ti,j)
    φj↓                    ↓φi
    Wj ——→ • ——→ ··· ——→ Wi        (composite Si,j)

where Si,j is the composite of the linear transformations along the corresponding path from Wj to Wi, with each square commuting. Thus φiTi,j = Si,jφj, changing our sum to

    Σ_j Si,jφjvj = Σ_j λi,jφjvj = λ(φ̃(v))i
Resulting in φ̃ being a morphism of modules.

Conversely, suppose ψ is a morphism of modules from M to N. Then for each i we get ψ(eix) = eiψ(x) for all x ∈ M, so ψ restricts to an additive map ψi : eiM → eiN. Regarding eiM and eiN as vector spaces, ψi is a linear transformation: for any a ∈ F, we have

    ψi(aeix) = ψ(aeix)        (5.1)
             = ψ(eiax)        (5.2)
             = eiaψ(x)        (5.3)
             = a(eiψ(x))      (5.4)

Line (5.3) is justified by the fact that ψ is a module morphism. Line (5.2) relies on the fact that eia = aei; this is easily seen from the definition of ei as the matrix with 1 ∈ F in the (i, i)-entry, since 1a = a1 in F. Now for the edge •j → •i, we need to verify that the following square commutes:

    ejM ——F̂i,j——→ eiM
    ψj↓              ↓ψi
    ejN ——F̂i,j——→ eiN

For each x ∈ M we have

    ψi(F̂i,j(ejx)) = ψi(Fi,jejx)     by definition of F̂i,j
                  = ψi(eiFi,jx)     by properties of ei and Fi,j
                  = ψ(eiFi,jx)      by definition of ψi
                  = Fi,jψ(ejx)      ψ a module morphism and Fi,j ∈ End(P)
                  = F̂i,jψj(ejx)    by definition of F̂i,j and ψj

Thus the square commutes, so ψ ↦ (ψi) sends a morphism of modules to a morphism of representations. This gives two functors

    F1 : Rep G → End(P)-Mod    and    F2 : End(P)-Mod → Rep G
Given a representation R of G, we claim F2F1(R) ≅ R, and for a morphism φ : R → S the square

    F2F1(R) ——→ R
    F2F1(φ)↓     ↓φ
    F2F1(S) ——→ S

commutes; this can be shown from the construction of F1 and F2. Similarly, if M is a module, then F1F2(M) ≅ M, with the commutative square

    F1F2(M) ——→ M
    F1F2(ψ)↓     ↓ψ
    F1F2(N) ——→ N

for a module morphism ψ : M → N.
5.2 Analysis of objects in Rep G

Let us take a closer look at a typical object of Rep(• → •). Given T : V → W, we can decompose the vector spaces V and W as internal direct sums:

    V = ker T ⊕ V′ for some V′
    W = Im T ⊕ W′ for some W′

In other words, for v ∈ V we get v = x + y, where x ∈ ker T and y ∈ V′, with ker T ∩ V′ = 0 and v = 0 if and only if x = y = 0. We can thus decompose the representation as
    T′ ⊕ 0 : V′ ⊕ ker T → Im T ⊕ W′

where T′ = T|V′. This is isomorphic to T : V → W, now as external sums: for v ∈ V we have v = v′ + k with v′ ∈ V′ and k ∈ ker T, and v ↦ (v′, k); likewise w = Tu + w′ for some u ∈ V, and w ↦ (Tu, w′). This gives the diagram

    V ————T————→ W
    ≅↓             ↓≅
    V′ ⊕ ker T —T′⊕0→ Im T ⊕ W′

where (v′, k) ↦ (Tv′, 0), which is commutative since

    T(v) = T(v′ + k) = T(v′) + T(k) = Tv′
We can express two representations as a direct sum as follows. Given two representations R : V1 → V2 and S : W1 → W2, define their direct sum R ⊕ S : V1 ⊕ W1 → V2 ⊕ W2 to be the linear transformation (R ⊕ S)(v, w) = (Rv, Sw) for all v ∈ V1 and w ∈ W1.
We can express T′ ⊕ 0 : V′ ⊕ ker T → Im T ⊕ W′ as the direct sum of two representations:

    (T′ : V′ → Im T) ⊕ (0 : ker T → W′)

Generally speaking, the zero map A —0→ B is the direct sum (A → 0) ⊕ (0 → B). Thus

    (V —T→ W) ≅ (V′ —T′→ Im T) ⊕ (ker T —0→ W′)
             ≅ (V′ —T′→ Im T) ⊕ (ker T → 0) ⊕ (0 → W′)
However, T′ is an isomorphism of vector spaces, so there is an isomorphism of representations

    V′ ——T′——→ Im T
    I↓           ↓T′⁻¹
    V′ ——I——→  V′

where I is the identity linear transformation, giving (V′ —T′→ Im T) ≅ (V′ —I→ V′). Now if V′ = ⊕B F, where B is a basis, then (V′ —I→ V′) ≅ ⊕(F —1→ F); thus V′ —I→ V′ is projective. Similarly, the zero morphisms 0 → W and W → 0 are direct sums of copies of 0 → F and F → 0 respectively. Hence any representation of • → • is a direct sum of three types of representations:
    F —1→ F,    0 —0→ F,    F —0→ 0

where

  • F —1→ F and 0 → F are projective;
  • F —1→ F and F → 0 are injective;
  • F → 0 and 0 → F are simple, where simple means there are no non-trivial proper subobjects.
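The decomposition above is effectively computable: over a field, the multiplicities of the three indecomposables in T : V → W are determined by rank and nullity alone. A sketch (our own function, assuming numpy is available and T acts on column vectors):

```python
import numpy as np

def a2_multiplicities(T):
    """Multiplicities of F->F, F->0, 0->F in the representation T : V -> W,
    where T is a (dim W) x (dim V) matrix: rank T copies of F -1-> F,
    dim ker T copies of F -> 0, and dim coker T copies of 0 -> F."""
    r = int(np.linalg.matrix_rank(T))
    dimV, dimW = T.shape[1], T.shape[0]
    return {"F->F": r, "F->0": dimV - r, "0->F": dimW - r}

T = np.array([[1.0, 0.0, 2.0],
              [0.0, 0.0, 0.0]])   # V = F^3, W = F^2, rank 1
print(a2_multiplicities(T))       # {'F->F': 1, 'F->0': 2, '0->F': 1}
```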
A typical projective object in Rep(• → •) is the sum of the two projective types and has the form (up to isomorphism)

    V → V ⊕ W,    v ↦ (v, 0)
If T is one-to-one, then ker T = 0, giving (V —T→ W) ≅ (V′ —T′→ Im T) ⊕ (0 → W′), which is projective. Likewise, if T is onto, then Im T = W and hence W′ = 0, giving (V —T→ W) ≅ (V′ —T′→ Im T) ⊕ (ker T → 0), which is injective. Moreover, any one-to-one linear transformation T : V → Z can be decomposed as

    V → V ⊕ W,    v ↦ (v, 0)

where W ⊕ Im T = Z. This gives the following diagram:

    V ——(1,0)——→ V ⊕ W
    ‖              ↓
    V ———T———→   Z

where the top map sends v ↦ (v, 0) and the right-hand column sends (v, w) ↦ Tv + w, which is an isomorphism. This means that all projectives have, up to isomorphism, the form V —T→ W with T one-to-one.

The dual notion for injectives is as follows: all injective objects have, up to isomorphism, the form

    V ⊕ W → V,    (v, w) ↦ v

resulting in the diagram

    V ⊕ W ——→ V
      ↑         ‖
      Z ——S——→ V

where S is onto. Thus injectives have, up to isomorphism, the form S : V → W with S onto.

Now consider projective presentations of V —T→ W. Since V —T→ W decomposes into the three types, we need only consider the non-projective summand ker T → 0. Generally speaking, for U → 0 we can create

    U ——1——→ U
    ‖          ↓
    U ———→   0

To find a projective presentation for V —T→ W, we use this diagram and add on ker T —1→ ker T, creating

    V′ ⊕ ker T ———→ Im T ⊕ W′ ⊕ ker T
        ‖                 ↓
    V′ ⊕ ker T ———→ Im T ⊕ W′

where the top map is (v′, k) ↦ (T′v′, 0, k) and the right-hand map is the projection (x, w, k) ↦ (x, w). The kernel of this presentation is 0 → ker T, which is also projective. This leads us to the following.
Theorem 5.8 For projective objects in Rep(• → •), subobjects are also projective.

Proof: Suppose S : A → B is projective, so S is a one-to-one linear transformation. A subobject of A —S→ B gives the diagram

    A′ ——S′——→ B′
    f↓           ↓g
    A ——S——→   B

where f and g are one-to-one linear transformations. Since the composition of two one-to-one functions is one-to-one, S ◦ f is one-to-one; by commutativity, g ◦ S′ = S ◦ f is then one-to-one, which implies that S′ is one-to-one. Hence A′ —S′→ B′ is projective.

Note: The dual notion for injectives uses quotients; in other words, quotients of injectives are injective.
A projective presentation for a general tree G would have each edge map Ti,j : Vi → Vj one-to-one; similarly, for an injective presentation, each Ti,j would need to be onto. By reducing the problem to each edge, we can create the following squares:

    Vi′ ——T′i,j——→ Vj′
    fi↓              ↓fj
    Vi ——Ti,j———→ Vj
Now consider a typical object of Rep(• → • → •): a representation of the form U —T→ V —S→ W. We can express U as U = T⁻¹(ker S) ⊕ X for some vector space X, where T⁻¹ denotes the pre-image under T. Since ker T ⊆ T⁻¹(ker S), write T⁻¹(ker S) = Y ⊕ ker T for some Y. Now U = X ⊕ Y ⊕ ker T, which implies that T is one-to-one on X ⊕ Y and

    Im T = T(X ⊕ Y) = T(X) ⊕ T(Y)

Now T(Y) ⊆ ker S, so ker S = T(Y) ⊕ K for some K, and since U = T⁻¹(ker S) ⊕ X we have T(X) ∩ ker S = 0. Thus we can express V as

    V = T(X) ⊕ T(Y) ⊕ K ⊕ Z for some Z

Since T(Y) ⊕ K = ker S, S is one-to-one on T(X) ⊕ Z, giving Im S = ST(X) ⊕ S(Z). Now consider u ∈ U with u = x + y + k, where x ∈ X, y ∈ Y, k ∈ ker T. Then

    STu = ST(x + y + k)
        = STx + STy + STk
        = STx + STy          (Tk = 0)
        = STx                (Ty ∈ ker S)

Thus Im ST = ST(X). Finally, W = ST(X) ⊕ S(Z) ⊕ L for some L, allowing U —T→ V —S→ W to be expressed as

    X ⊕ Y ⊕ ker T —α→ T(X) ⊕ T(Y) ⊕ K ⊕ Z —β→ ST(X) ⊕ S(Z) ⊕ L

where α acts by T on X and Y and by zero on ker T, and β acts by S on T(X) and Z and by zero on T(Y) and K.
This makes U —T→ V —S→ W "breakable" into

    X → TX → STX
    Y → TY → 0
    ker T → 0 → 0
    0 → K → 0
    0 → Z → SZ
    0 → 0 → L

implying that any representation U → V → W can be expressed as a direct sum of the following six indecomposable objects:
  • F → F → F, injective and projective;
  • F → F → 0, injective;
  • F → 0 → 0, injective and simple;
  • 0 → F → 0, simple;
  • 0 → F → F, projective;
  • 0 → 0 → F, projective and simple.
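The six multiplicities can be read off from ranks. The formulas below are our own restatement of the decomposition just given (dim X = rank ST, dim Y = rank T − rank ST, and so on), sketched with numpy and column-vector matrices:

```python
import numpy as np

def a3_multiplicities(T, S):
    """Multiplicities of the six indecomposables in U -T-> V -S-> W, where
    T is (dim V) x (dim U) and S is (dim W) x (dim V)."""
    rT = int(np.linalg.matrix_rank(T))
    rS = int(np.linalg.matrix_rank(S))
    rST = int(np.linalg.matrix_rank(S @ T))
    dimU, dimV, dimW = T.shape[1], T.shape[0], S.shape[0]
    return {
        "F->F->F": rST,                    # dim X
        "F->F->0": rT - rST,               # dim Y
        "F->0->0": dimU - rT,              # dim ker T
        "0->F->0": dimV - rT - rS + rST,   # dim K
        "0->F->F": rS - rST,               # dim Z
        "0->0->F": dimW - rS,              # dim L
    }

T = np.array([[1.0, 0.0],
              [0.0, 0.0]])
S = np.array([[0.0, 1.0]])
print(a3_multiplicities(T, S))
```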
5.3 Computing Ext

For Ext in Rep G we must first know what an exact sequence looks like. For G = • → •, an exact sequence has the form

    U1 ——R——→ U2
    f1↓         ↓g1
    V1 ——S——→ V2
    f2↓         ↓g2
    W1 ——T——→ W2
Here the columns, U1 —f1→ V1 —f2→ W1 and U2 —g1→ V2 —g2→ W2, are the exact sequences; as vector spaces, this forces Vi ≅ Ui ⊕ Wi.

Given Ti : Vi → Wi for i = 1, 2, we want to compute Ext(V1 —T1→ W1, V2 —T2→ W2). Since V1′ → Im T1 and 0 → W1′ are both projective, and V2′ → Im T2 and ker T2 → 0 are both injective, we find that

    Ext(V1 —T1→ W1, V2 —T2→ W2) ≅ Ext(ker T1 → 0, 0 → W2′)

which reduces the problem to finding extensions of the form Ext(V → 0, 0 → W). Elements of this group are extensions of the form

    0 ———→  W
    ↓        ↓
    E1 ——→  E2
    ↓        ↓
    V ———→  0

with the columns 0 → E1 → V and W → E2 → 0 exact. For any maps E1 → V and W → E2 this diagram is commutative; however, for the columns to be exact, E1 → V and W → E2 must be isomorphisms. This implies that Ext(V → 0, 0 → W) ≅ Hom(V, W), where Hom(V, W) is taken in the category of vector spaces. So in general

    Ext(V1 —T1→ W1, V2 —T2→ W2) ≅ Ext(ker T1 → 0, 0 → W2′) ≅ Hom(ker T1, W2′)

where W2′ ⊕ Im T2 = W2, so set W2′ = (Im T2)^C.
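In particular the dimension of this Ext group is computable from two matrices; a sketch (our own helper, matrices acting on column vectors):

```python
import numpy as np

def dim_ext_a2(T1, T2):
    """dim Ext(V1 -T1-> W1, V2 -T2-> W2) = dim Hom(ker T1, coker T2)
    = dim ker T1 * dim coker T2, following the reduction above."""
    dim_ker = T1.shape[1] - int(np.linalg.matrix_rank(T1))
    dim_cok = T2.shape[0] - int(np.linalg.matrix_rank(T2))
    return dim_ker * dim_cok

T1 = np.zeros((1, 2))          # V1 = F^2 -> W1 = F, zero map: ker has dim 2
T2 = np.array([[1.0], [0.0]])  # V2 = F -> W2 = F^2, rank 1: coker has dim 1
print(dim_ext_a2(T1, T2))      # 2
```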
Now a representation of the form • → • → • has extensions of the form

    U1 ——R1——→ U2 ——R2——→ U3
    f1↓          g1↓          ↓h1
    V1 ——S1——→ V2 ——S2——→ V3
    f2↓          g2↓          ↓h2
    W1 ——T1——→ W2 ——T2——→ W3

where each column is exact. For finite sums Ext(⊕iAi, ⊕jBj) ≅ ⊕i,j Ext(Ai, Bj), so we can just consider the six basic forms of • → • → •. Our problem is further reduced by the fact that Ext(A, −) = 0 for A projective and Ext(−, B) = 0 for B injective, leaving only nine possible scenarios.
• Ext(A → 0 → 0, 0 → 0 → B)

This would create the extension

    0 ———→ 0 ———→ B
    ↓       ↓      ↓β
    E1 ——→ E2 ——→ E3
    α↓      ↓      ↓
    A ———→ 0 ———→ 0

For this to be exact, E2 = 0 and α and β must be isomorphisms. Thus the middle term has the form

    A → 0 → B

which decomposes as (A → 0 → 0) ⊕ (0 → 0 → B), and the extension forms a split exact sequence. Thus Ext(A → 0 → 0, 0 → 0 → B) = 0.
• Ext(A → 0 → 0, 0 → B → 0)

Here extensions look like

    0 ———→ B ———→ 0
    ↓       ↓β     ↓
    E1 —R→ E2 ——→ E3
    α↓      ↓      ↓
    A ———→ 0 ———→ 0

which makes E3 = 0, with α and β isomorphisms. Here the middle term has the form

    A —R→ B → 0

for some R. The resulting short exact sequence does not split in general, and it follows that Ext(A → 0 → 0, 0 → B → 0) ≅ Hom(A, B), where Hom(A, B) is taken in the category of vector spaces.
• Ext(A → 0 → 0, 0 → B —1→ B)

Here extensions have the form

    0 ———→  B ——1——→ B
    ↓       β1↓  Σ    ↓β2
    E1 —R1→ E2 ——R2—→ E3
    α↓       ↓         ↓
    A ———→  0 ————→ 0

Here exactness gives us that α, β1, and β2 are all isomorphisms. Commutativity of the square Σ forces β2 = R2β1, which implies that R2 = β2β1⁻¹ is also an isomorphism. Using the isomorphisms α⁻¹, β1, and β2, the middle term is isomorphic to one of the form

    A ——R——→ B ——1——→ B

for some linear transformation R (namely R = β1⁻¹R1α⁻¹). Hence Ext(A → 0 → 0, 0 → B → B) ≅ Hom(A, B).
• Ext(A —1→ A → 0, 0 → 0 → B)

This gives the extensions the form

    0 ———→  0 ———→  B
    ↓        ↓       ↓β
    E1 —R1→ E2 —R2→ E3
    α1↓  Σ   ↓α2     ↓
    A ——1——→ A ———→ 0

Much like the previous case, α1, α2, and β are all isomorphisms, and the square Σ gives R1 = α2⁻¹α1. This makes all the middle terms have the form

    A ——1——→ A ——R——→ B

for some R. It follows that Ext(A → A → 0, 0 → 0 → B) ≅ Hom(A, B).
• Ext(A —1→ A → 0, 0 → B → 0)

We get the diagram

    0 ———→  B ———→  0
    ↓        ↓β      ↓
    E1 —R1→ E2 —R2→ E3
    α1↓  Σ   ↓α2     ↓
    A ——1——→ A ———→ 0

Here the column exactness forces E3 = 0, β to be one-to-one, α1 to be an isomorphism, α2 to be onto, and Im β = ker α2. From the commutativity of square Σ we get α2R1 = α1; since α1 is one-to-one, R1 is one-to-one as well, and

    Im R1 ∩ ker α2 = 0  ⇒  Im R1 ∩ Im β = 0

Now given y ∈ E2, we have α2y ∈ A, and with α1 an isomorphism,

    α2y = α1x for some x ∈ E1
    α2y = α2R1x
    α2(y − R1x) = 0
    ⇒ (y − R1x) ∈ ker α2 = Im β
    ⇒ y ∈ Im R1 + Im β

This implies that E2 = Im R1 ⊕ Im β, giving the isomorphism of representations

    A ——(1,0)——→ A ⊕ B ——→ 0
    α1⁻¹↓           ↓(R1α1⁻¹, β)
    E1 ——R1———→  E2 ———→ 0

Here the middle terms have the form A → A ⊕ B → 0, which decomposes as (A → A → 0) ⊕ (0 → B → 0), and the resulting exact sequence splits. Hence

    Ext(A → A → 0, 0 → B → 0) = 0
• Ext(A —1→ A → 0, 0 → B —1→ B)

This gives us the following diagram:

    0 ———→  B ——1——→ B
    ↓       β1↓  Σ1   ↓β2
    E1 —R1→ E2 ——R2—→ E3
    α1↓  Σ2  ↓α2       ↓
    A ——1——→ A ————→ 0

Here exactness forces α1 and β2 to be isomorphisms, β1 to be one-to-one, α2 to be onto, and Im β1 = ker α2. From the commutativity of square Σ1 we find R2β1 = β2, and thus R2 is onto. From the commutativity of square Σ2 we find α2R1 = α1, and thus R1 is one-to-one. As before, since R1 is one-to-one and Im β1 = ker α2, we have Im R1 ∩ Im β1 = 0. Then, given y ∈ E2,

    α2y = α1x for some x ∈ E1
    α2y = α2R1x
    α2(y − R1x) = 0
    ⇒ (y − R1x) ∈ ker α2 = Im β1
    ⇒ y ∈ Im R1 + Im β1

so E2 = Im R1 ⊕ Im β1. Identifying E1 with A via α1, E2 with A ⊕ B via (R1α1⁻¹, β1), and E3 with B via β2, the middle terms take the form

    A → A ⊕ B → B

which decomposes as (A → A → 0) ⊕ (0 → B → B); the corresponding exact sequence splits, so Ext(A → A → 0, 0 → B → B) = 0.
• Ext(0 → A → 0, 0 → 0 → B)

This gives the following diagram:

    0 ———→  0 ———→  B
    ↓        ↓       ↓β
    E1 —R1→ E2 —R2→ E3
    ↓        ↓α      ↓
    0 ———→  A ———→ 0

Here exactness forces α and β to both be isomorphisms and E1 = 0. Identifying E2 with A via α and E3 with B via β, the middle terms have the form

    0 → A → B

and it follows that Ext(0 → A → 0, 0 → 0 → B) ≅ Hom(A, B).
• Ext(0 → A → 0, 0 → B —1→ B)

This gives the following diagram:

    0 ———→  B ——1——→ B
    ↓       β1↓  Σ    ↓β2
    E1 —R1→ E2 ——R2—→ E3
    ↓        ↓α        ↓
    0 ———→  A ————→ 0

Here exactness forces E1 = 0, β2 to be an isomorphism, α to be onto, β1 to be one-to-one, and Im β1 = ker α. The commutativity of square Σ gives R2β1 = β2, with R2 onto. This yields the following diagram:

    0 ——→ E2 ———R2———→ E3
           ↓(α, β2⁻¹R2)    ↓β2⁻¹
    0 ——→ A ⊕ B ——(0,1)—→ B

We would like the vertical maps to be isomorphisms. Given (a, b) ∈ A ⊕ B, we have a = αy for some y ∈ E2, since α is onto. Now consider y + β1(b − β2⁻¹R2(y)). Since ker α = Im β1, we find

    α(y + β1(b − β2⁻¹R2(y))) = α(y) + α(β1(b − β2⁻¹R2(y))) = α(y) = a

and, using β2⁻¹R2β1 = 1,

    β2⁻¹R2(y + β1(b − β2⁻¹R2(y))) = β2⁻¹R2(y) + β2⁻¹R2β1(b − β2⁻¹R2(y))
                                  = β2⁻¹R2(y) + b − β2⁻¹R2(y)
                                  = b

So (α, β2⁻¹R2) is onto. If α(y) = 0 and β2⁻¹R2(y) = 0, then y ∈ ker α = Im β1, so y = β1(b) for some b, giving

    0 = β2⁻¹R2(y) = β2⁻¹R2β1(b) = β2⁻¹β2(b) = b

Thus b = 0, so y = β1(b) = β1(0) = 0, meaning (α, β2⁻¹R2) is one-to-one and therefore an isomorphism. Therefore the middle terms have the form

    0 → A ⊕ B → B

which decomposes as (0 → A → 0) ⊕ (0 → B → B), which splits. Giving

    Ext(0 → A → 0, 0 → B → B) = 0
• Ext(0 → A → 0, 0 → B → 0)

This gives the following diagram:

    0 ———→  B ———→ 0
    ↓        ↓β     ↓
    E1 ——→  E2 ——→ E3
    ↓        ↓α     ↓
    0 ———→  A ———→ 0

Here exactness forces E1 = E3 = 0, α to be onto, β to be one-to-one, and Im β = ker α. We can create the following diagram:

    0 ——→ E2 ———→ 0
           ↓(α, β̂)
    0 ——→ A ⊕ B ——→ 0

where β̂ is obtained by splitting β: write E2 = Im β ⊕ Z for some Z, so that each y ∈ E2 can be expressed uniquely as y = β(b) + z with z ∈ Z, and set β̂(y) = b.

Here the middle terms have the form 0 → A ⊕ B → 0, which decomposes as (0 → A → 0) ⊕ (0 → B → 0), and again the corresponding short exact sequence splits. Thus

    Ext(0 → A → 0, 0 → B → 0) = 0

To summarize these results, we have:
    (i)    Ext(A → 0 → 0, 0 → 0 → B) = 0
    (ii)   Ext(A → 0 → 0, 0 → B → 0) ≅ Hom(A, B)
    (iii)  Ext(A → 0 → 0, 0 → B → B) ≅ Hom(A, B)
    (iv)   Ext(A → A → 0, 0 → 0 → B) ≅ Hom(A, B)
    (v)    Ext(A → A → 0, 0 → B → 0) = 0
    (vi)   Ext(A → A → 0, 0 → B → B) = 0
    (vii)  Ext(0 → A → 0, 0 → 0 → B) ≅ Hom(A, B)
    (viii) Ext(0 → A → 0, 0 → B → B) = 0
    (ix)   Ext(0 → A → 0, 0 → B → 0) = 0
More generally, given two representations Ui —Ti→ Vi —Si→ Wi for i = 1, 2, we know we can decompose each into direct sums of the following:

    (1) Xi → TiXi → SiTiXi
    (2) Yi → TiYi → 0
    (3) ker Ti → 0 → 0
    (4) 0 → Ki → 0
    (5) 0 → Zi → SiZi
    (6) 0 → 0 → Li

Since Im Ti ⊕ Ki ⊕ Zi = Vi and Im Si ⊕ Li = Wi, for clarity set Ki ⊕ Zi = (Im Ti)^C and Li = (Im Si)^C. Omitting the cases where Ext(−, −) = 0, we have the situations (ii), (iii), (iv), and (vii) from the summary above to consider. Situation (ii) uses (3) and (4) and gives Ext(ker T1 → 0 → 0, 0 → K2 → 0) ≅ Hom(ker T1, K2). Situation (iii) needs (3) and (5) and results in Ext(ker T1 → 0 → 0, 0 → Z2 → S2Z2) ≅ Hom(ker T1, Z2), since S2 is one-to-one on Z2. Situation (iv) requires (2) and (6) and results in Ext(Y1 → T1Y1 → 0, 0 → 0 → L2) ≅ Hom(T1Y1, L2). Finally, (vii) uses (4) and (6) and gives Ext(0 → K1 → 0, 0 → 0 → L2) ≅ Hom(K1, L2). Since Ui —Ti→ Vi —Si→ Wi is made of direct sums from the table above, these combine into Hom(ker T1, K2 ⊕ Z2) = Hom(ker T1, (Im T2)^C) and Hom(T1Y1 ⊕ K1, L2) = Hom(ker S1, (Im S2)^C), using T1Y1 ⊕ K1 = ker S1. Resulting in

    Ext(U1 —T1→ V1 —S1→ W1, U2 —T2→ V2 —S2→ W2) ≅ Hom(ker T1, (Im T2)^C) ⊕ Hom(ker S1, (Im S2)^C)
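The closing formula is again a rank computation; a sketch (our own function names, matrices acting on column vectors):

```python
import numpy as np

def dim_ext_a3(T1, S1, T2, S2):
    """dim Ext for representations U_i -T_i-> V_i -S_i-> W_i (i = 1, 2):
    dim ker T1 * dim coker T2 + dim ker S1 * dim coker S2."""
    def nullity(M):
        return M.shape[1] - int(np.linalg.matrix_rank(M))
    def conullity(M):
        return M.shape[0] - int(np.linalg.matrix_rank(M))
    return nullity(T1) * conullity(T2) + nullity(S1) * conullity(S2)

# First rep: F^2 -T1-> F -S1-> F with S1 = 0; second rep: F -T2-> F^2 -S2-> F.
T1 = np.array([[1.0, 0.0]])
S1 = np.array([[0.0]])
T2 = np.array([[1.0], [0.0]])
S2 = np.array([[1.0, 0.0]])
print(dim_ext_a3(T1, S1, T2, S2))  # 1
```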
We can get a similar result for representations of the form • → •. Recall that Ext(V1 —T1→ W1, V2 —T2→ W2) ≅ Hom(ker T1, W2′), where W2′ ⊕ Im T2 = W2; thus

    Ext(V1 —T1→ W1, V2 —T2→ W2) ≅ Hom(ker T1, (Im T2)^C)

Furthermore, for a representation U —T→ V —S→ W we have Ext(U → V → W, −) = 0 if and only if Hom(ker T, −) ⊕ Hom(ker S, −) = 0, which is only true if ker T = 0 and ker S = 0. This occurs only when T and S are one-to-one. Dually, for U —T→ V —S→ W injective:

    Ext(−, U → V → W) = 0
    ⇔ Hom(−, (Im T)^C) ⊕ Hom(−, (Im S)^C) = 0
    ⇔ (Im T)^C = 0 and (Im S)^C = 0

which is true if and only if T and S are both onto.
5.4 Ext in Rep An

Now consider the more general case of An:

    •1 → •2 → •3 → ··· → •n

Denoting the projectives for An by P(i), they have the form

    0 → ··· → 0 → F —1→ F —1→ ··· —1→ F

with 0 at vertices 1, ..., i − 1 and F at vertices i, ..., n. Similarly, denote the injectives by J(i); they have the form

    F —1→ ··· —1→ F → 0 → ··· → 0

with F at vertices 1, ..., i and 0 at vertices i + 1, ..., n. Now consider the quotient P(i)/P(j + 1), denoted R(i, j). This is the representation

    0 → ··· → 0 → F —1→ ··· —1→ F → 0 → ··· → 0

with F at vertices i, ..., j and 0 elsewhere, giving R(1, j) = J(j) and R(i, n) = P(i). In particular, we get the short exact sequences

    0 → P(j + 1) → P(i) → R(i, j) → 0

and

    0 → R(i, j) → J(j) → J(i − 1) → 0.
Recall that if there exists a path from i to j, then Hom(P(i), P(j)) = F. This gives an equivalence of categories Rep An ≅ Mod-Rn, where Rn is the ring of upper-triangular matrices defined as

    Rn = { [aij] | aij ∈ Hom(P(i), P(j)) },    Hom(P(i), P(j)) = F if i ≤ j and 0 if i > j.

Now for a vector space V, define PV(i) by replacing the field F with V. For example, PV(3) with n = 5 would be

    0 → 0 → V → V → V.

We can define JV(i) and RV(i, j) in the same way. Thus PV(i) = ⊕d P(i), where d is the dimension of the vector space V.
This gives us a corresponding short exact sequence

    0 → PV(j + 1) → PV(i) → RV(i, j) → 0

called the projective presentation of RV(i, j). We can then compute Ext via projective presentations. Given a representation W, where

    W = W1 —T1→ W2 —T2→ ··· —Tn−1→ Wn,

Ext(RV(i, j), W) is obtained from the projective presentation of RV(i, j):

    0 → Hom(RV(i, j), W) → Hom(PV(i), W) → Hom(PV(j + 1), W) → Ext(RV(i, j), W) → 0
Thus by Lemma 5.2 we know that Hom(P(i), W) ≅ Wi. Since V ≅ ⊕F, we have Hom(PV(i), W) ≅ Hom(V, Wi) and, similarly, Hom(PV(j + 1), W) ≅ Hom(V, Wj+1). We then obtain the following commutative diagram:

    Hom(PV(i), W) ——→ Hom(PV(j + 1), W) ——→ Ext(RV(i, j), W)
        ≅↓                    ≅↓
    Hom(V, Wi) ——T*i,j——→ Hom(V, Wj+1)

where Ti,j is the composite TjTj−1 ··· Ti+1Ti from

    Wi —Ti→ Wi+1 → ··· → Wj —Tj→ Wj+1.

Thus Ext(RV(i, j), W) ≅ cok T*i,j.
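Since V has some dimension d, the cokernel of T*i,j is just d copies of the cokernel of Ti,j; a sketch (our own naming, with Ti,j as a matrix on column vectors):

```python
import numpy as np

def dim_ext_interval(d, T_ij):
    """dim Ext(R_V(i,j), W) where dim V = d and T_ij is the matrix of the
    composite T_j ... T_i : W_i -> W_{j+1}; by the preceding diagram this is
    dim cok(T_ij^* : Hom(V, W_i) -> Hom(V, W_{j+1})) = d * dim coker T_ij."""
    coker = T_ij.shape[0] - int(np.linalg.matrix_rank(T_ij))
    return d * coker

# dim V = 2, composite W_i -> W_{j+1} the zero map between 2-dimensional spaces:
print(dim_ext_interval(2, np.zeros((2, 2))))  # 4
```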
Example 5.9 Ext(A → 0 → 0, 0 → B —1→ B)

The representation A → 0 → 0 is RA(1, 1), so Ext(A → 0 → 0, 0 → B → B) = Ext(RA(1, 1), 0 → B → B). For W = (0 → B —1→ B), the map T1,1 : W1 → W2 is the zero map 0 → B, and thus cok T*1,1 = Hom(A, B).

Ext(A —1→ A → 0, 0 → B → 0)

Here we get Ext(A → A → 0, 0 → B → 0) = Ext(RA(1, 2), 0 → B → 0), and since W3 = 0, the target Hom(A, W3) is zero; thus cok T*1,2 = 0.
Now say that W = RW(r, s), so W is the representation

    0 → ··· → 0 → W —1→ ··· —1→ W → 0 → ··· → 0

with W at vertices r, ..., s and 0 elsewhere. We then get

    Ti,j = I if r ≤ i and j + 1 ≤ s, and Ti,j = 0 otherwise,

making Ext(RV(i, j), RW(r, s)) either 0 or Hom(V, W). For Ext to be 0 we need either Ti,j = I, making Hom(V, Wi) → Hom(V, Wj+1) an isomorphism, or Wj+1 = 0, making Hom(V, Wj+1) = 0. Thus for Ext to be Hom(V, W) we must have Wj+1 = W and Ti,j = 0. The condition Wj+1 = W forces r ≤ j + 1 ≤ s, and then Ti,j = 0 forces i < r, since for r ≤ i the composite Ti,j would be the identity. Thus

    Ext(RV(i, j), RW(r, s)) = Hom(V, W) if i < r ≤ j + 1 ≤ s, and 0 otherwise.
Example 5.10 (A7)

We can then quickly compute any Ext group for Rep A7. Consider the representations

    0 → V → V → V → 0 → 0 → 0    and    0 → 0 → W → W → W → 0 → 0,

denoted RV(2, 4) and RW(3, 5). Since 2 < 3 ≤ 5 ≤ 5, this results in Ext(RV(2, 4), RW(3, 5)) = Hom(V, W).

For the representations

    0 → V → V → V → V → V → 0    and    0 → 0 → W → W → 0 → 0 → 0,

we have RV(2, 6) and RW(3, 4); here j + 1 = 7 > 4 = s, giving Ext(RV(2, 6), RW(3, 4)) = 0. Finally, consider the representations

    0 → 0 → V → V → V → 0 → 0    and    0 → 0 → 0 → W → W → 0 → 0.

Here we get RV(3, 5) and RW(4, 5); since j + 1 = 6 > 5 = s (so Wj+1 = 0), this makes Ext(RV(3, 5), RW(4, 5)) = 0.
Vita

Author: Anthony M. Baraconi
Place of Birth: Rockford, Illinois
Undergraduate Schools Attended: Spokane Falls Community College; Eastern Washington University
Degrees Awarded: Bachelor of Arts, 2010, Eastern Washington University
Honors and Awards: Graduate Instructorship, Mathematics Department, 2010–2012, Eastern Washington University