
INTRODUCTION TO GROUP REPRESENTATIONS

GLOSSARY

Date: August 15, 2012.

Sections
1. Advanced Linear Algebra
2. Analysis on Groups
3. Group Theory
4. Linear Algebra
5. Representation Theory

Advanced Linear Algebra

Cayley-Hamilton Theorem: If p_A(x) = det(xI − A) is the characteristic polynomial of the square matrix A, then p_A(A) = 0 as a matrix polynomial.

Direct Sum: A vector space V can be represented as a direct sum of two subspaces W and W′, denoted V = W ⊕ W′, if each v can be written uniquely as v = v₁ + v₂ with v₁ in W and v₂ in W′. Equivalently, if we choose bases B and B′ for W and W′ respectively, then B ∪ B′ is a basis B* for V. If both W and W′ are invariant under A, then the associated matrix for A with respect to B* is block diagonal:

    A_B* = [ A₁  0  ]
           [ 0   A₂ ]

Dual Vector Space: The vector space of all linear functionals on V, denoted V*. Dual representations are also called contragredient.

Invariant Subspace: A subspace W is called invariant under A if Av is in W for all v in W. This means that the restriction A : W → W is defined. Eigenspaces are a special case of invariant subspaces. If we choose a basis for W and complete it to a full basis B, the associated matrix for A in this basis is block upper-triangular:

    A_B = [ A₁  *  ]
          [ 0   A₂ ]

Linear Functional: A linear functional on V is a linear transformation f : V → ℂ. The set of linear functionals on V is called the dual space of V, denoted V*, and V* admits a vector space structure in a natural manner. If V admits an inner product, then every linear functional on V can be written in the form f_v(w) = ⟨w, v⟩. In addition, V* admits an inner product defined by ⟨f_v, f_w⟩* = ⟨w, v⟩, which is the complex conjugate of ⟨v, w⟩.

Linear Transformation Space Hom_ℂ(V, W): The vector space of linear transformations T : V → W. When W = ℂ, we have V*, the dual space of V. If we choose bases for V* and W, then every linear transformation T from V to W can be expressed uniquely as

    T(v) = Σ_{i,j} c_{i,j} v_j*(v) w_i.

In turn, we may associate T with the tensor t(T) = Σ_{i,j} c_{i,j} (v_j* ⊗ w_i) in V* ⊗ W. Hom_ℂ(V, W) admits an inner product defined by ⟨T₁, T₂⟩ = Trace(T₁T₂*). If V and W are inner product spaces and we choose orthonormal bases above, we have ⟨T, T⟩ = Σ_{i,j} |c_{i,j}|².
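Several of the facts above can be checked numerically. The sketch below (NumPy, with a randomly chosen matrix of our own; none of these names come from the text) verifies the Cayley-Hamilton theorem and the trace inner product on Hom_ℂ(V, W).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# Characteristic polynomial p_A(x) = det(xI - A); np.poly returns its
# coefficients [1, a_{n-1}, ..., a_0] for a square matrix.
coeffs = np.poly(A)

# Evaluate p_A(A) as a matrix polynomial: sum of coeffs[k] * A^(n-k).
pA = sum(c * np.linalg.matrix_power(A, len(coeffs) - 1 - k)
         for k, c in enumerate(coeffs))
assert np.allclose(pA, 0)  # Cayley-Hamilton: p_A(A) = 0

# Inner product on Hom(V, W): <T1, T2> = Trace(T1 T2*).
T = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
# With orthonormal bases, <T, T> equals the sum of |c_ij|^2 over the entries.
assert np.isclose(np.trace(T @ T.conj().T).real, np.sum(np.abs(T) ** 2))
```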
Minimal Polynomial: For a square matrix A, the monic polynomial m_A of smallest degree such that m_A(A) = 0 as a matrix polynomial. m_A divides p_A, the characteristic polynomial, and m_A and p_A have the same irreducible factors. If q(A) = 0, then m_A divides q. A useful result is that A is diagonalizable if and only if m_A factors into distinct linear factors.

Permutation Matrix: A matrix P_σ of zeros except for exactly one 1 in each row and column. To construct these, let σ be a permutation of {1, …, n}. We define a linear transformation T_σ : ℂⁿ → ℂⁿ by linearly extending T_σ e_i = e_{σ(i)}. With respect to the standard basis B, the associated matrix P_σ = [T_σ]_B is a rearrangement of the columns of the identity matrix. Each permutation matrix P_σ is orthogonal/unitary: P_σᵀ P_σ = I. Also P_σᴺ = I for some positive integer N, so each permutation matrix is diagonalizable over ℂ. The trace of a permutation matrix P_σ is equal to the number of labels in {1, …, n} fixed by σ.

Sesquilinear Pairing/Form: Generalizes the Hermitian inner product. A sesquilinear pairing f : V × V′ → ℂ satisfies the following three conditions:
(1) f(cv, v′) = c f(v, v′) = f(v, c̄ v′);
(2) f(v + w, v′) = f(v, v′) + f(w, v′); and
(3) f(v, v′ + w′) = f(v, v′) + f(v, w′).
The set of sesquilinear pairings admits a vector space structure with scalar multiplication (cf)(v, w) = c(f(v, w)) and addition (f + h)(v, w) = f(v, w) + h(v, w). If V = V′, then f is called a sesquilinear form, and if f(v, w) is the complex conjugate of f(w, v), then f is called Hermitian. If f(v, v) ≥ 0, with f(v, v) = 0 if and only if v = 0, then f is called a Hermitian inner product on V.
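The permutation-matrix facts above are easy to verify directly; a sketch, with an arbitrary permutation of our own choosing:

```python
import numpy as np

# sigma as a tuple: position i holds sigma(i), using 0-based labels {0, ..., 4}.
sigma = (1, 2, 0, 3, 4)  # a 3-cycle on {0, 1, 2}; fixes labels 3 and 4
n = len(sigma)

# P_sigma sends e_i to e_{sigma(i)}, so column i of P is e_{sigma(i)}.
P = np.zeros((n, n))
for i, si in enumerate(sigma):
    P[si, i] = 1.0

assert np.allclose(P.T @ P, np.eye(n))  # orthogonal/unitary
assert np.trace(P) == 2                 # trace = number of fixed labels
# P^N = I with N the order of sigma (here N = 3), so P is diagonalizable over C.
assert np.allclose(np.linalg.matrix_power(P, 3), np.eye(n))
```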
Simultaneous Diagonalization: A set of diagonalizable matrices {A_α} can be diagonalized using the same basis of eigenvectors if and only if {A_α} is a commuting set of matrices; that is, A_α A_β = A_β A_α for all α, β.

Tensor Product: Denoted by V ⊗ W. A way to "multiply" vector spaces. In general, a tensor is expressed as a sum of monomial tensors v ⊗ w. If B = {v_i} and C = {w_j} are bases for V and W respectively, then a tensor may be expressed uniquely in the form t = Σ_{i,j} c_{i,j} (v_i ⊗ w_j). That is, {v_i ⊗ w_j} is a basis for V ⊗ W, which in turn has dimension dim(V)·dim(W). We manipulate tensors with the following bilinearity rules:
(1) c(v ⊗ w) = (cv) ⊗ w = v ⊗ (cw);
(2) (v + v′) ⊗ w = v ⊗ w + v′ ⊗ w; and
(3) v ⊗ (w + w′) = v ⊗ w + v ⊗ w′.
These allow us to switch to expressions in other bases. In general, tensors are defined using universal mapping properties. Here we work only with concrete examples.

Trace: The trace of a square matrix A is the sum of its diagonal entries. In turn, it is also equal to the sum of its eigenvalues, counting multiplicities. We have Trace(AB) = Trace(BA), so that Trace(PAP⁻¹) = Trace(A). That is, if A is the matrix associated to a linear transformation T, then the trace is unaffected by change of basis, and thus Trace(T) is well-defined. If p(x) = xⁿ + a_{n−1}x^{n−1} + … is the characteristic polynomial of A, then Trace(A) = −a_{n−1}.

Analysis on Groups

Character: Depends on context. A one-dimensional representation is called a character of G, but the term may also refer to the function χ_π(g) = Trace(π(g)) on G for any representation π. Representations are equivalent if and only if their characters are equal.

Character Table: For an abelian group, each column corresponds to a group element and each row corresponds to a character. The entries are the values of the characters.
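A concrete abelian instance: the cyclic group ℤ/n has characters χ_k(j) = e^{2πijk/n}. The sketch below (our own indexing conventions) builds this table for n = 4 and checks the orthonormality of its rows and columns under the averaged inner product ⟨f, h⟩ = (1/|G|) Σ_g f(g) conj(h(g)).

```python
import numpy as np

n = 4
# Character table of Z/n: table[k, j] = chi_k(j) = exp(2*pi*i*j*k/n),
# rows indexed by characters k, columns by group elements j.
table = np.exp(2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)

# Rows are orthonormal in L^2(G) with <f, h> = (1/|G|) sum_g f(g) conj(h(g)).
gram_rows = (table @ table.conj().T) / n
assert np.allclose(gram_rows, np.eye(n))

# Columns are likewise orthonormal, viewed as elements of L^2(G*).
gram_cols = (table.conj().T @ table) / n
assert np.allclose(gram_cols, np.eye(n))
```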
The table of entries is square of size |G|, and the rows are orthonormal as elements of L²(G). In addition, the columns of the table are also orthonormal, considered as elements of L²(G*).

For nonabelian G, each column corresponds to a conjugacy class of G, indexed by an element in the class and also labeled with the number of elements in the class. The rows correspond to the (trace) characters for each irreducible class of representations of G. The table has size equal to the number of irreducible classes, which is also equal to the number of conjugacy classes of G. Again the rows are orthonormal as elements of L²(G); in the table, we must weight each sum by the number of elements in each class. The columns are also orthogonal using unweighted sums. Here the squared length of each column is |G|/|C_x|, where C_x is the conjugacy class for x and x represents the column.

Class Function: A function f in L²(G) is called a class function if f(gxg⁻¹) = f(x) for all x, g in G. Two orthonormal bases exist for the subspace of class functions on G. First consider the basis B₁ = {√(|G|/|C_x|) f_x}, where

    f_x(y) = Σ_{g ∈ C_x} δ_g(y),

which equals 1 if y ∈ C_x and 0 otherwise. Here x runs over a set of representatives for the conjugacy classes C_x in G. On the other hand, the characters for the irreducible classes of representations of G form an orthonormal basis B₂ = {χ_π} of Class(G). Thus the number of irreducible classes of representations for G equals the number of conjugacy classes in G.

Convolution: If f₁ and f₂ are in L²(G), we define the convolution

    (f₁ ∗ f₂)(x) = (1/|G|) Σ_{gg′=x} f₁(g) f₂(g′)
                 = (1/|G|) Σ_{g∈G} f₁(g) f₂(g⁻¹x)
                 = (1/|G|) Σ_{g′∈G} f₁(xg′⁻¹) f₂(g′).

Convolution has better group-theoretic properties than ordinary multiplication of functions and may be interpreted as a version of matrix multiplication for functions.
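This definition is easy to realize for the cyclic group ℤ/n, where the group operation is addition mod n; a sketch with helper names of our own:

```python
import numpy as np

n = 6  # work in G = Z/6; functions on G are length-6 arrays f[g]

def convolve(f1, f2):
    """(f1 * f2)(x) = (1/|G|) sum_g f1(g) f2(g^{-1} x); here g^{-1} x = x - g mod n."""
    return np.array([sum(f1[g] * f2[(x - g) % n] for g in range(n)) / n
                     for x in range(n)])

rng = np.random.default_rng(1)
f1, f2 = rng.standard_normal(n), rng.standard_normal(n)

# The same sum indexed the other way: (1/|G|) sum_{g'} f1(x g'^{-1}) f2(g').
alt = np.array([sum(f1[(x - gp) % n] * f2[gp] for gp in range(n)) / n
                for x in range(n)])
assert np.allclose(convolve(f1, f2), alt)

# On delta functions, convolution mirrors the group law: delta_g * delta_h
# = (1/|G|) delta_{gh}.
delta = np.eye(n)
assert np.allclose(convolve(delta[2], delta[3]), delta[5] / n)
```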
For instance, φ_{u,v} ∗ φ_{u′,v′} = 0 if the matrix coefficients belong to inequivalent irreducible representations, and

    d_σ φ_{u,v} ∗ φ_{u′,v′} = ⟨u, v′⟩ φ_{u′,v}

if the matrix coefficients belong to the irreducible representation σ.

Orthogonal projections onto irreducible types in L²(G) may be defined using convolutions with characters. The projection onto the subrepresentation of σ-types is given by

    P_σ : L²(G) → L²(G),  P_σ f = d_σ f ∗ χ_σ = d_σ χ_σ ∗ f,

where χ_σ is the character of σ.

Delta Function: Delta functions give a natural basis for L²(G), but are not well suited to representation theory. We define

    δ_g(x) = 1 if x = g, and 0 otherwise.

An orthonormal basis for L²(G) is given by {√|G| δ_g}.

Fourier Transform: Suppose B = {v_i} is an orthonormal basis of the inner product space V. Then every v can be represented uniquely in the form

    v = Σ_i c_i v_i.

The Fourier transform associated to B is the map F_B : V → ℂⁿ defined by

    F_B(v) = v̂ = (c₁, …, cₙ).

The associated inverse Fourier transform F_B⁻¹ : ℂⁿ → V is defined as

    F_B⁻¹(c₁, …, cₙ) = Σ_i c_i v_i.

Properties of the Fourier transform are summarized by Fourier's Trick and Parseval's Identity.
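A sketch of F_B for a random orthonormal basis of ℂⁿ (obtained here from a QR factorization, our own choice), checking inversion and Parseval's identity:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5

# Random orthonormal basis of C^n: the columns of a unitary Q from a QR factorization.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# F_B(v) = (c_1, ..., c_n) with c_i = <v, v_i> = v_i* v (Fourier's Trick).
v_hat = Q.conj().T @ v

# Inverse transform: sum_i c_i v_i recovers v.
assert np.allclose(Q @ v_hat, v)

# Parseval's identity: ||v||^2 = sum_i |c_i|^2.
assert np.isclose(np.linalg.norm(v) ** 2, np.sum(np.abs(v_hat) ** 2))
```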