
INTRODUCTION TO GROUP REPRESENTATIONS (July 1, 2012)

LINEAR ALGEBRA REVIEW

Here we describe some of the main linear algebra constructions used in representation theory. Since we are working with finite-dimensional vector spaces, a choice of basis identifies each new space with some C^n. What makes these constructions interesting is how they behave with respect to linear transformations, which in turn gives ways to construct new representations from old ones.

Let V and W be finite-dimensional vector spaces over C, and choose bases B = {v1, . . . , vm} for V and C = {w1, . . . , wn} for W.

Direct Sums: There are two types of direct sums, internal and external. The internal direct sum expresses a given V′ in terms of two (or more) subspaces V and W.

Definition: We say V′ is the internal direct sum of subspaces V and W, written V′ = V ⊕ W, if each vector in V′ can be written uniquely as a sum v′ = v + w with v in V and w in W. Equivalently, this condition holds if and only if B ∪ C is a basis for V′. In turn, we also have this condition if dim(V) + dim(W) = dim(V′) and V ∩ W = {0}.

In the special case where V′ admits an inner product and W = V^⊥, we call V′ = V ⊕ W an orthogonal direct sum. If B and C are orthonormal bases, then B ∪ C is an orthonormal basis for V′.

On the other hand, the main idea here can be applied to combine two known vector spaces.

Definition: The external direct sum of V and W, also written V′ = V ⊕ W, is first defined as the set of all ordered pairs (v, w) with v in V and w in W. Scalar multiplication is defined by c(v, w) = (cv, cw), and addition is defined by (v, w) + (v′, w′) = (v + v′, w + w′). One checks the other axioms for a vector space. Note that the external direct sum of V and W can be expressed as the internal direct sum of (V, 0) and (0, W). A basis for V ⊕ W is given by {(vi, 0)} ∪ {(0, wj)}.

If S : V → V′ and T : W → W′ are linear transformations, we obtain a linear transformation (S, T) : V ⊕ W → V′ ⊕ W′ by (S, T)(v, w) = (Sv, Tw). The reader should verify that (S, T) is linear. This definition allows a natural method for constructing direct sums of representations.
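In coordinates, (S, T) acts by the block-diagonal matrix with blocks M_S and M_T. A minimal NumPy sketch (the matrices S and T below are illustrative choices, not from the text):

```python
import numpy as np

# Hypothetical matrices for S : V -> V' and T : W -> W'.
S = np.array([[1, 2],
              [3, 4]])         # V = C^2, V' = C^2
T = np.array([[0, 1, 0],
              [1, 0, 1]])      # W = C^3, W' = C^2

# The direct sum (S, T) acts block-diagonally on V (+) W = C^5.
ST = np.block([
    [S, np.zeros((2, 3))],
    [np.zeros((2, 2)), T],
])

v = np.array([1, 1])         # v in V
w = np.array([1, 2, 3])      # w in W
vw = np.concatenate([v, w])  # the pair (v, w) in V (+) W

# Check the defining property (S, T)(v, w) = (Sv, Tw):
assert np.allclose(ST @ vw, np.concatenate([S @ v, T @ w]))
```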


Dual Vector Spaces:

Definition: A linear functional on V is a linear transformation f : V → C, and the vector space of all linear functionals on V is called the dual vector space V*.

If V also admits an inner product, then one may describe each linear functional uniquely in the form f_v(v′) = ⟨v′, v⟩ for some v in V. In fact, one may choose any nonzero v in (Ker f)^⊥ and rescale to match values with f. In turn, V* admits an inner product defined by ⟨f_v, f_w⟩* = ⟨w, v⟩, the conjugate of ⟨v, w⟩.

If T : V → W is a linear transformation, then we have an induced linear transformation T* : W* → V* by defining (T*w*)(v) = w*(Tv). If V and W admit Hermitian inner products, then (T*f_w)(v) = f_w(Tv) = ⟨Tv, w⟩ = ⟨v, T*w⟩, where the latter T* denotes the adjoint of T with respect to the inner products. If we are working in coordinate spaces C^k, then T* is given by the conjugate transpose of matrices: writing w* for the conjugate-transpose row vector of w, we have w*Tv = w*(T*)*v = (T*w)*v.
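The identity ⟨Tv, w⟩ = ⟨v, T*w⟩, with T* the conjugate transpose, can be checked numerically. A NumPy sketch with randomly generated complex data (all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical linear map T : C^3 -> C^2 as a complex matrix.
T = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# With the standard Hermitian inner product <x, y> = np.vdot(y, x),
# the adjoint T* is the conjugate transpose T.conj().T:
lhs = np.vdot(w, T @ v)            # <Tv, w> in W
rhs = np.vdot(T.conj().T @ w, v)   # <v, T*w> in V
assert np.isclose(lhs, rhs)
```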

Suppose S : W → X, so that ST : V → X. If f is in X*, then [(ST)*f](v) = f(STv) = (S*f)(Tv) = [T*(S*f)](v). Thus (ST)* = T*S*. For representations, if π* is to be a homomorphism on V*, we need π*(gh) = π*(g)π*(h). We correct the order using inverses.

Definition: The dual basis B* for V* with respect to B is the set {v1*, . . . , vm*}, where each vi* is defined by vi*(vi) = 1 and vi*(vj) = 0 for j ≠ i.

If we have an inner product space with orthonormal basis B, then the corresponding dual basis has vectors vi* = ⟨·, vi⟩. Note that dim(V) = dim(V*).
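Concretely, if the vectors of B are the columns of an invertible matrix P, then the dual basis functionals are the rows of P⁻¹, since P⁻¹P = I is exactly the condition vi*(vj) = δ_ij. A NumPy sketch (the basis here is an illustrative choice):

```python
import numpy as np

# Basis B = {v1, v2} of C^2 as the columns of P.
P = np.array([[1.0, -1.0],
              [-1.0, 2.0]])   # v1 = (1, -1), v2 = (-1, 2)

# The dual basis functionals vi* are the rows of P^{-1}:
# row i of P^{-1} applied to column j of P gives delta_ij.
dual = np.linalg.inv(P)
assert np.allclose(dual @ P, np.eye(2))

# e.g. v1*(v1) = 1 and v1*(v2) = 0:
v1, v2 = P[:, 0], P[:, 1]
assert np.isclose(dual[0] @ v1, 1.0)
assert np.isclose(dual[0] @ v2, 0.0)
```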

If T : V → W is an invertible linear transformation, then {Tv1, . . . , Tvn} is a basis for W with corresponding dual basis {(Tv1)*, . . . , (Tvn)*}, where (Tvi)*(w) = vi*(T⁻¹w). Note that

(Tvi)*(Tvj) = vi*(T⁻¹Tvj) = vi*(vj),

confirming the dual basis property.

One way to interpret: Suppose we wish to replace V with an isomorphic vector space W. To convert from linear functionals on V to linear functionals on W, we send v* to (T⁻¹)*v*. For representations, W = V and T = π(g). That is, we define the group action by [π*(g)v*](v′) = v*(π(g)⁻¹v′).

Another way to interpret: A group action expresses symmetries of an object X; that is, the group action leaves some quality of X unchanged. If T : V → V is an isomorphism, then the basis B = {vi} is carried to the basis BT = {Tvi}. Note that the definition of v*(v′) requires no basis; that is, this quantity remains unchanged no matter how we pass to coordinates. So if we change V by an isomorphism T, interpreted as a change of basis, the dual basis changes by (T⁻¹)*, and v*(v′) is unchanged. For representations, we again arrive at the group action on V*.

Example: Consider the linear functional f : C^2 → C defined by f(x, y) = x − y. Then

f(x, y) = [1 −1] (x, y)^T.

If T : C^3 → C^2 is represented by the matrix

A = [ 1  1  −1 ]
    [ 2  0   3 ],

then

(T*f)(x, y, z) = [1 −1] A (x, y, z)^T = −x + y − 4z.

Linear Transformation Spaces:

Definition: The set of all linear transformations T : V → W is denoted by Hom_C(V, W). As a vector space, we define scalar multiplication by (cT)(v) = c(T(v)) and addition by (S + T)(v) = (Sv) + (Tv). One verifies the other axioms for a vector space.

With respect to the choice of bases B and C, we can identify each T with an n × m matrix M_T such that [Tv]_C = M_T[v]_B. This identification yields an isomorphism of Hom_C(V, W) with the matrix vector space M(n, m, C). In turn, Hom_C(V, W) admits a Hermitian inner product defined by ⟨T1, T2⟩ = Trace(T1T2*). With a choice of orthonormal bases for V and W, the norm squared of T1 with associated matrix M_T1 = [c_{i,j}] equals Σ |c_{i,j}|².

On the other hand, each linear functional is a linear transformation into C, and we have seen how to identify V* with M(1, m, C), the space of row vectors. Now every element of Hom_C(V, W) may be written uniquely in the form

T(v) = Σ_{i,j} c_{i,j} vj*(v) wi.

Again T is identified with the n × m matrix [c_{i,j}]. Thus we see that {T_{i,j}} is a basis for Hom_C(V, W), where T_{i,j}(v) = vj*(v) wi, and dim(Hom_C(V, W)) = mn.

Now suppose T1 : V′ → V and T2 : W → W′ are linear transformations. Then T2TT1 is an element of Hom_C(V′, W′), and, after choosing bases, the associated matrix is M_T2 M_T M_T1.

Suppose we wish to replace V and W with isomorphic vector spaces V′ and W′ using T1 : V → V′ and T2 : W → W′. Then we replace the element T in Hom_C(V, W) with T2TT1⁻¹ in Hom_C(V′, W′). For representations, V′ = V, W′ = W, T1 = π(g) acts on V, and T2 = π′(g) acts on W, to give σ(g)T = π′(g)Tπ(g)⁻¹. For the dual space V*, we use the trivial action on W = C, so π′ = I.
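The inner product ⟨T1, T2⟩ = Trace(T1T2*) agrees with the entrywise Hermitian inner product of the associated matrices, so in particular ||T1||² = Σ |c_{i,j}|². A NumPy sketch with random matrices (illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
M1 = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
M2 = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))

# <T1, T2> = Trace(T1 T2*) equals the sum of c_ij * conj(d_ij):
inner = np.trace(M1 @ M2.conj().T)
assert np.isclose(inner, np.sum(M1 * M2.conj()))

# In particular ||T1||^2 is the sum of |c_ij|^2 over the entries.
assert np.isclose(np.trace(M1 @ M1.conj().T).real, np.sum(np.abs(M1) ** 2))
```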

Example: Let T : C^3 → C^2 be given by T(x, y, z) = (x + y − z, 2x + 3y). Suppose T1 : C^3 → C^3 is given by T1(x, y, z) = (3x + y + z, 2x + y, −y + 2z) and T2 : C^2 → C^2 is given by T2(x, y) = (x + y, 2x − 3y). Then, with matrices

M_T2 = [ 1   1 ]    M_T = [ 1  1  −1 ]    M_T1 = [ 3   1  1 ]
       [ 2  −3 ],         [ 2  3   0 ],          [ 2   1  0 ]
                                                 [ 0  −1  2 ],

we compute

M_T2 M_T M_T1 = [  17   8   1 ]
                [ −26  −9  −8 ],

so T2TT1(x, y, z) = (17x + 8y + z, −26x − 9y − 8z).
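The matrix arithmetic in this example can be checked directly. A NumPy sketch of the product M_T2 M_T M_T1:

```python
import numpy as np

# Matrices of T, T1, and T2 in the standard bases.
M_T  = np.array([[1, 1, -1],
                 [2, 3,  0]])
M_T1 = np.array([[3,  1, 1],
                 [2,  1, 0],
                 [0, -1, 2]])
M_T2 = np.array([[1,  1],
                 [2, -3]])

# The composition T2 T T1 has matrix M_T2 M_T M_T1.
M = M_T2 @ M_T @ M_T1
assert np.array_equal(M, np.array([[ 17,  8,  1],
                                   [-26, -9, -8]]))
```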

Tensor Products: If the direct sum is thought of as a means to add vector spaces, then tensoring is the way we multiply vector spaces. We will take a somewhat naive approach here, as our focus is working with concrete examples of representations. In general, tensors are defined in terms of universal mapping properties without reference to coordinates. Tensors are the natural objects for moving from linearity to multilinearity.

Definition: With our choices of bases B and C for V and W, we form the set B ⊗ C = {vi ⊗ wj}. The set V ⊗ W is defined as the set of all linear combinations of elements in B ⊗ C. That is, a tensor in V ⊗ W, the tensor product of V and W, is an element

t = Σ_{i,j} c_{i,j} (vi ⊗ wj),

and the vector space of tensors has dimension mn. For any v in V and w in W, the monomial tensor v ⊗ w is defined in terms of the basis by applying the following relations:

(1) c(v ⊗ w) = (cv) ⊗ w = v ⊗ (cw),
(2) (v + v′) ⊗ w = v ⊗ w + v′ ⊗ w, and
(3) v ⊗ (w + w′) = v ⊗ w + v ⊗ w′.
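Relations (1)–(3) say exactly that ⊗ is bilinear, which the Kronecker product realizes in coordinates. A NumPy sketch (the vectors are arbitrary illustrative choices):

```python
import numpy as np

# Monomial tensors in C^2 (x) C^2: v (x) w corresponds to np.kron(v, w)
# in the basis {e1(x)e1, e1(x)e2, e2(x)e1, e2(x)e2}.
v, vp = np.array([1, -1]), np.array([2, 0])
w, wp = np.array([0, 3]), np.array([1, 1])
c = 5

# Relation (1): scalars move freely across the tensor sign.
assert np.array_equal(np.kron(c * v, w), c * np.kron(v, w))
assert np.array_equal(np.kron(v, c * w), c * np.kron(v, w))

# Relations (2) and (3): additivity in each slot.
assert np.array_equal(np.kron(v + vp, w), np.kron(v, w) + np.kron(vp, w))
assert np.array_equal(np.kron(v, w + wp), np.kron(v, w) + np.kron(v, wp))
```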

Often it is sufficient to work with monomials and check that results extend linearly to general tensors. If we choose new bases B′ and C′, then the bilinearity conditions above allow us to convert from linear combinations in B ⊗ C to linear combinations in B′ ⊗ C′. As with the direct sum, if we have linear transformations T1 : V → V′ and T2 : W → W′, then the map

Σ c_{i,j} (vi ⊗ wj) ↦ Σ c_{i,j} (T1vi ⊗ T2wj)

sends tensors in V ⊗ W to tensors in V′ ⊗ W′. For representations, we use (π ⊗ π′)(g)(v ⊗ w) = π(g)v ⊗ π′(g)w.
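In coordinates, the matrix of T1 ⊗ T2 is the Kronecker product of the matrices of T1 and T2. A NumPy sketch with illustrative 2 × 2 matrices:

```python
import numpy as np

T1 = np.array([[1, 1],
               [2, 1]])    # T1(x, y) = (x + y, 2x + y)
T2 = np.array([[-1, -1],
               [-1,  1]])  # T2(x, y) = (-x - y, -x + y)

# T1 (x) T2 acts on C^2 (x) C^2 = C^4 via the Kronecker product,
# in the basis {e1(x)e1, e1(x)e2, e2(x)e1, e2(x)e2}.
T1_tensor_T2 = np.kron(T1, T2)
assert T1_tensor_T2.shape == (4, 4)

# On a monomial tensor it acts as v (x) w -> T1 v (x) T2 w:
v = np.array([1, 0])
w = np.array([0, 1])
assert np.allclose(T1_tensor_T2 @ np.kron(v, w), np.kron(T1 @ v, T2 @ w))
```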

An important case of a tensor product is Hom_C(V, W). In this case, we define an isomorphism

i : V* ⊗ W → Hom_C(V, W) by Σ c_{i,j} (vj* ⊗ wi) ↦ T, where T(v) = Σ c_{i,j} vj*(v) wi.

In fact, if we replace V and W with isomorphic vector spaces V′ and W′ using T1 : V → V′ and T2 : W → W′, then the corresponding transforms factor through i:

Σ c_{i,j} ((T1⁻¹)* vj* ⊗ T2 wi)

is carried to T2TT1⁻¹.

Example: Suppose B is the standard basis of C^2, and suppose

v1 = (1, −1) and v2 = (−1, 2). Then

v1 ⊗ v2 = (e1 − e2) ⊗ (−e1 + 2e2)

= e1 ⊗ (−e1 + 2e2) − e2 ⊗ (−e1 + 2e2)

= −e1 ⊗ e1 + 2e1 ⊗ e2 + e2 ⊗ e1 − 2e2 ⊗ e2.

On the other hand, e1 = 2v1 + v2, and e2 = v1 + v2. Thus

e1 ⊗ e2 = (2v1 + v2) ⊗ (v1 + v2)

= 2v1 ⊗ (v1 + v2) + v2 ⊗ (v1 + v2)

= 2v1 ⊗ v1 + 2v1 ⊗ v2 + v2 ⊗ v1 + v2 ⊗ v2.

Let T1(x, y) = (x + y, 2x + y) and T2(x, y) = (−x − y, −x + y). Then

(T1 ⊗ T2)(e1 ⊗ e2) = T1e1 ⊗ T2e2 = (e1 + 2e2) ⊗ (−e1 + e2)

= −e1 ⊗ e1 − 2e2 ⊗ e1 + e1 ⊗ e2 + 2e2 ⊗ e2.
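This last computation can be confirmed in coordinates with NumPy, where ⊗ becomes np.kron:

```python
import numpy as np

T1 = np.array([[1, 1],
               [2, 1]])    # T1(x, y) = (x + y, 2x + y)
T2 = np.array([[-1, -1],
               [-1,  1]])  # T2(x, y) = (-x - y, -x + y)
e1 = np.array([1, 0])
e2 = np.array([0, 1])

# (T1 (x) T2)(e1 (x) e2) in the basis {e1(x)e1, e1(x)e2, e2(x)e1, e2(x)e2}:
result = np.kron(T1, T2) @ np.kron(e1, e2)

# Coefficients (-1, 1, -2, 2), i.e. -e1(x)e1 + e1(x)e2 - 2 e2(x)e1 + 2 e2(x)e2:
assert np.array_equal(result, np.array([-1, 1, -2, 2]))
```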