
U.U.D.M. Project Report 2016:39

Introduction to Representation Theory of Quivers

Simon Scott

Degree project in mathematics, 15 credits
Supervisor: Martin Herschend
Examiner: Jörgen Östensson
September 2016
Department of Mathematics, Uppsala University

Contents

Abstract
Introduction
1 Preliminaries
  1.1 Elementary matrices
2 Matrix problems
  2.1 Second problem
3 Decomposition into indecomposables
  3.1 Direct sum
  3.2 Indecomposables and decomposition of the first matrix problem
  3.3 Indecomposables and decomposition of the second matrix problem
4 Quivers and representations
  4.1 Definition: quiver
  4.2 Morphisms of representations
5 Quivers of type $A_n$
  5.1 Direct sums of representations
6 Quivers of Dynkin type
  6.1 Final notes

Abstract

By thoroughly solving matrix problems posed as equivalence-relation requirements, and connecting these solutions to methods of exhibiting isomorphisms between representations of quivers, the basics of representation theory of quivers are displayed. The techniques used are to take matrices and representations, respectively, decompose their normal forms into indecomposables, and compare these forms to establish isomorphism or equivalence, or the lack of it.

Keywords: Matrix, Quiver, Representation theory, Direct sum, Indecomposable

Introduction

This work focuses on illuminating the preliminaries that are useful when getting familiar with the first few concepts of the theory of quivers and representations of quivers. The aim is that this paper should be both readable and understandable for students in their first or second year of studies. Throughout the text the same approach to explaining the basics of representation theory as in [2] is used; however, efforts are made to concretize the material further and to be very explicit about which matrices are used for which purposes. Graduate students should be able to follow the text and work out examples of their own as they go along.

The second chapter, following the preliminaries, exhibits two matrix problems: to determine when matrices are equivalent and when they are not, and respectively when pairs of matrices are or are not equivalent. These problems are solved by multiplication with invertible matrices (the same as row and column operations) and by identifying normal forms. The answer is that matrices (or pairs) are equivalent if and only if they have the same normal form. A notion of direct sums of matrices and of indecomposability is then introduced to simplify notation and to connect these problems to later topics.

The notion of quivers and representations is established and another problem is introduced: whether two representations are isomorphic or not. As it turns out, this problem can be solved by choosing bases for all vector spaces in the representations and solving the corresponding matrix problems for the matrices that correspond to each linear map in the representations. A similar notion of direct sums of representations and of indecomposability is used to prove that each problem of isomorphism of representations (for certain types of quivers, at least) is in bijection with the matrix problems. One can consider indecomposable representations as building blocks that make up arbitrary representations.

The last two chapters summarize what is currently known about the types of quivers studied in this paper. They also go slightly deeper into quivers of type $A_n$ and their properties.
In these chapters one comes across Gabriel's theorem, which describes the types of quivers that have a finite list of indecomposables, as well as the Krull-Schmidt theorem, which states that two different decompositions into indecomposables differ only by a permutation. However, for students who only wish to get a glance of representation theory of quivers it should be sufficient to exclude the theory in chapter 6.

1 Preliminaries

The reader is assumed to be familiar with linear algebra and algebraic structures throughout this document.

Standard notation: Unless otherwise stated, $K$ is an arbitrary field. $K^{m \times n}$ is the set of all matrices of size $m \times n$ (having $m$ rows and $n$ columns) over $K$. $I$ is the identity matrix. The set $n$ is the set $\{1, 2, \dots, n\}$.

The following definitions are taken from the courses in linear algebra.

Definition 1.1. Let $A \in K^{m \times n}$ and $A' \in K^{m \times n}$. Then $A$ and $A'$ lie in the same equivalence class under the relation $\sim$ if $A$ can be transformed into $A'$ by elementary row and column operations. Notation: $A \sim A'$.

Definition 1.2. Let $A \in K^{m \times n}$. The rank of $A$ is given by the number of non-zero rows in the reduced row echelon form of $A$.

The following notation will be used for the totally reduced form:
$$ I_{r,m,n} := \begin{bmatrix} 1_r & 0 \\ 0 & 0 \end{bmatrix} \in K^{m \times n} $$
where $1_r = I \in K^{r \times r}$ and $0$ is the zero matrix of suitable size. $I_{r,m,n}$ is denoted briefly as $I_r$ whenever the context is clear. Also, $1_1 \in K^{1 \times 1}$ will be denoted $1$, not to be confused with the number $1 \in K$.

Proposition 1.3. Given $A \in K^{m \times n}$,
$$ \operatorname{rank}(A) = r \iff A \sim I_{r,m,n}. $$

Proof: Applying Gauss-Jordan elimination to $A$ followed by suitable column operations yields that $A \sim I_{r,m,n}$ for some $r$. By Definition 1.2 it follows that $\operatorname{rank}(A) = \operatorname{rank}(I_{r,m,n}) = r$. Since $\operatorname{rank}(I_{r,m,n}) = r$, it follows that $A \sim I_{r,m,n}$ for every matrix $A \in K^{m \times n}$ with $\operatorname{rank}(A) = r$.

Thus $I_{r,m,n}$ is called the normal form for the equivalence classes of matrices in $K^{m \times n}$. In other words, two matrices of the same size $m \times n$ and rank $r$ are equivalent, and belong to the same equivalence class as $I_{r,m,n}$. The question of equivalence between $A$ and $A'$ therefore translates to whether $A$ and $A'$ have the same normal form.

1.1 Elementary matrices

The goal of this chapter is to prove that the relation $A \sim A'$ can be expressed by the equivalent statement $A = M_1 A' M_2$ for some invertible matrices $M_1$ and $M_2$.

Definition 1.4. Let $\varepsilon_{i,j}$ be the square zero matrix except for a $1$ at index $(i, j)$, with its size given by the context, much like the identity matrix. Let $k \in K$. Then
$$ E_{i,j}(k) := I + k\varepsilon_{i,j}, \quad i \neq j, $$
$$ E_{i}(k) := I - \varepsilon_{i,i} + k\varepsilon_{i,i}, \quad k \neq 0, $$
$$ E_{i \leftrightarrow j} := I - \varepsilon_{i,i} - \varepsilon_{j,j} + \varepsilon_{i,j} + \varepsilon_{j,i} $$
are called elementary matrices.

These elementary matrices each correspond to a certain row or column operation. When multiplying from the left with an elementary matrix:
(i) $E_{i,j}(k)A$ corresponds to "add $k$ multiples of row $j$ to row $i$",
(ii) $E_{i}(k)A$ corresponds to "multiply row $i$ by $k$" ($k \neq 0$),
(iii) $E_{i \leftrightarrow j}A$ corresponds to "swapping row $i$ with row $j$",
and when multiplying from the right:
(iv) $AE_{i,j}(k)$ corresponds to "add $k$ multiples of column $i$ to column $j$",
(v) $AE_{i}(k)$ corresponds to "multiply column $i$ by $k$" ($k \neq 0$),
(vi) $AE_{i \leftrightarrow j}$ corresponds to "swapping column $i$ with column $j$".
Note the difference between (i) and (iv).

To establish a more convenient notation, define:

Definition 1.5. $\mathcal{E}$ is the set of all elementary matrices:
$$ \mathcal{E} := \{ E_{i,j}(k),\ E_{i}(k),\ E_{i \leftrightarrow j} : i, j \in \mathbb{N}^{+},\ k \in K \}. $$
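To make the correspondence between elementary matrices and row and column operations concrete, here is a small sketch in Python with NumPy (not part of the thesis; the helper names E_add, E_scale and E_swap are ad hoc, and the code uses 0-based indices whereas the definitions above use 1-based indices). It constructs the three kinds of elementary matrices from Definition 1.4 and verifies statements (i) and (iv).

```python
import numpy as np

def E_add(n, i, j, k):
    """E_{i,j}(k): n x n identity with entry (i, j) set to k, i != j (0-based)."""
    E = np.eye(n)
    E[i, j] = k
    return E

def E_scale(n, i, k):
    """E_i(k): n x n identity with diagonal entry (i, i) replaced by k, k != 0."""
    E = np.eye(n)
    E[i, i] = k
    return E

def E_swap(n, i, j):
    """E_{i <-> j}: n x n identity with rows i and j swapped."""
    E = np.eye(n)
    E[[i, j]] = E[[j, i]]
    return E

# Left multiplication acts on rows, right multiplication on columns.
A = np.arange(12.0).reshape(3, 4)
B = E_add(3, 2, 0, 5.0) @ A   # (i):  add 5 times row 1 to row 3  (0-based: row 0 to row 2)
C = A @ E_add(4, 0, 2, 5.0)   # (iv): add 5 times column 1 to column 3  (0-based: 0 to 2)
assert np.allclose(B[2], A[2] + 5.0 * A[0])
assert np.allclose(C[:, 2], A[:, 2] + 5.0 * A[:, 0])
```

The last two assertions illustrate the difference noted between (i) and (iv): the same elementary matrix acts on rows when multiplied from the left and on columns when multiplied from the right.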
Definition 1.6. $\mathcal{M}$ is the set of arbitrary products (which can be thought of as compositions of operations) of matrices in $\mathcal{E}$:
$$ \mathcal{M} := \Big\{ M : M = \prod_{i=1}^{n} E_i,\ E_i \in \mathcal{E},\ n \in \mathbb{N}^{+} \Big\}. $$
Also, let $S_1 \subset \mathbb{N}^{+}$ and $S_2 \subset \mathbb{N}^{+}$. Then
• if $M$ affects at most the rows/columns in $S_1$, and fetches information from at most the rows/columns in $S_2$, the notation $M \in \mathcal{M}^{S_1}_{S_2}$ is used;
• if $S_1$ and $S_2$ respectively make up all rows/columns of a matrix $A$, the notation $M^{A}$ will instead be used for convenience.

Example: Let $M = E_{3 \leftrightarrow 6} E_{4,2}(10) E_{4}(7)$. Then $MA$ is obtained by performing the following operations on $A$:
(i) multiply row 4 by 7, and then
(ii) add 10 times row 2 to row 4, and then
(iii) swap rows 3 and 6,
which implies that $M = M^{\{3,4,6\}}_{\{2,3,4,6\}} \in \mathcal{M}$.

Example: Let $M = E_{3,1}(\tfrac{1}{2})$. Then $MA$ is obtained by adding $\tfrac{1}{2}$ times row 1 to row 3, which implies that $M = M^{\{3\}}_{\{1\}} \in \mathcal{M}$.

It is advised that the reader make sure that these examples are completely understood before moving on.

Proposition 1.7. Let $M \in \mathcal{M}$ and $S = \{1, \dots, k\}$. If $M = M^{S}_{S}$ then $M$ is of the form
$$ M = \begin{bmatrix} N & 0 \\ 0 & I \end{bmatrix}, \qquad N \in \mathcal{M} \cap K^{k \times k}, $$
and moreover
$$ M^{-1} = \begin{bmatrix} N^{-1} & 0 \\ 0 & I \end{bmatrix}, \qquad M^{T} = \begin{bmatrix} N^{T} & 0 \\ 0 & I \end{bmatrix}. $$

Proof: Let $A = \begin{bmatrix} A_1 \\ A_2 \end{bmatrix}$ such that $A_1 \in K^{k \times k}$. The matrix $M$ can be split into $\begin{bmatrix} M_1 & M_2 \\ M_3 & M_4 \end{bmatrix}$. Now the blockwise multiplication $MA$ yields
$$ MA = \begin{bmatrix} M_1 & M_2 \\ M_3 & M_4 \end{bmatrix} \begin{bmatrix} A_1 \\ A_2 \end{bmatrix} = \begin{bmatrix} M_1 A_1 + M_2 A_2 \\ M_3 A_1 + M_4 A_2 \end{bmatrix}, $$
and since $M$ does not fetch information from $A_2$ it follows that $M_2 = 0$. Similarly, since $M$ does not affect $A_2$ it follows that $M_3 = 0$ and $M_4 = I$. The proofs for the transpose and the inverse are straightforward, and therefore left out.

Corollary 1.8. Let $M \in \mathcal{M}$. If
$$ M I_r = M^{1_r}_{1_r} \begin{bmatrix} 1_r & 0 \\ 0 & 0 \end{bmatrix}, $$
then $M I_r M^{-1} = I_r$.
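As a small numerical check of Proposition 1.7 and Corollary 1.8, here is a sketch in Python with NumPy (again not part of the thesis; the matrix sizes and the particular elementary factors are chosen arbitrarily for illustration). It builds a product of elementary matrices that only affects and fetches from the first $k$ rows, and confirms both the block structure and the conjugation identity.

```python
import numpy as np

n, k = 5, 3
I = np.eye(n)

# Elementary matrices that only affect/fetch the first k rows, so that the
# product M plays the role of M^S_S with S = {1, ..., k} in Proposition 1.7.
E1 = I.copy(); E1[[0, 2]] = E1[[2, 0]]   # swap rows 1 and 3 (0-based: 0 and 2)
E2 = I.copy(); E2[1, 0] = -4.0           # add -4 times row 1 to row 2
E3 = I.copy(); E3[2, 2] = 3.0            # multiply row 3 by 3
M = E1 @ E2 @ E3

# Proposition 1.7: M has block form [[N, 0], [0, I]] with N an invertible k x k block,
# and M^{-1} has the block form [[N^{-1}, 0], [0, I]].
N = M[:k, :k]
assert np.allclose(M[:k, k:], 0)
assert np.allclose(M[k:, :k], 0)
assert np.allclose(M[k:, k:], np.eye(n - k))
assert np.allclose(np.linalg.inv(M)[:k, :k], np.linalg.inv(N))

# Corollary 1.8: for the square normal form I_r with r = k,
# conjugation by M leaves I_r unchanged: M I_r M^{-1} = I_r.
I_r = np.zeros((n, n))
I_r[:k, :k] = np.eye(k)
assert np.allclose(M @ I_r @ np.linalg.inv(M), I_r)
```

The final assertion is the content of Corollary 1.8 with $r = k$: since $M$ only touches the rows and columns of the $1_r$ block, conjugating the square normal form $I_r$ by $M$ returns $I_r$ itself.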