
Notes for the course Representations of Lie Groups: Compact (Matrix Lie) Groups
Hans Plesner Jakobsen, Department of Mathematics
February 2008

1. The Haar measure (and integral) on locally compact groups

We begin with a number of definitions:

Definition 1.1. Let f be a continuous function on a metric space X (or, more generally, on a topological space). The support of f, supp f, is given as the closure

    supp f = \overline{{x ∈ X | f(x) ≠ 0}}.

Furthermore, Cc(X) denotes the space of (complex) continuous functions whose support is compact. Notice that the space of real-valued continuous functions with compact support is a real (but not complex) subspace of Cc(X).

Definition 1.2. A linear functional L on a vector space V is a linear map L : V → C. If V = Cc(X), the linear functional L is positive if ∀f ≥ 0 : L(f) ≥ 0.

The following theorem represents positive linear functionals on Cc(X), the space of continuous complex-valued functions of compact support. The Borel sets in the following statement refer to the σ-algebra generated by the open sets. A non-negative countably additive Borel measure µ on a locally compact Hausdorff space X is regular if
• µ(K) < ∞ for every compact set K;
• for every Borel set E, µ(E) = inf{µ(U) : E ⊆ U, U open};
• the relation µ(E) = sup{µ(K) : K ⊆ E, K compact} holds whenever E is open, or when E is Borel and µ(E) < ∞.

Theorem 1.3. Let X be a locally compact Hausdorff space. For any positive linear functional L on Cc(X), there is a unique regular Borel measure µ on X such that

    L(f) = ∫_X f(x) dµ(x)   for all f ∈ Cc(X).

In the case of a compact Hausdorff space, this is the so-called Riesz Representation Theorem. Consider now a compact Lie group K. A positive linear functional I on C(K) is called a left invariant Haar integral if ∀f ∈ C(K), ∀g ∈ K : I(L(g)f) = I(f). Similarly, it is a right invariant Haar integral if ∀f ∈ C(K), ∀g ∈ K : I(R(g)f) = I(f).

Theorem 1.4. On a compact group K, there is a unique (up to multiplication by a positive constant) left invariant Haar integral I. It is also right invariant.

We write I(f) = ∫_K f dµ = ∫_K f(g) dµ(g). The following equations hold; the first two are just the stated invariance; the third is an additional identity¹. We set f̃(g) = f(g⁻¹). For all f ∈ C(K) and all h ∈ K:

    ∫_K f(hg) dµ(g) = ∫_K f(g) dµ(g)     I(L(h⁻¹)f) = I(f)   (left invariance),
    ∫_K f(gh) dµ(g) = ∫_K f(g) dµ(g)     I(R(h)f) = I(f)    (right invariance),
    ∫_K f(g⁻¹) dµ(g) = ∫_K f(g) dµ(g)    I(f̃) = I(f)        (inversion invariance).

The measure of K is, moreover, finite. It is customary, and we shall also do this, to normalize the Haar measure such that µ(K) = 1. The Haar measure furthermore satisfies: if f is a non-negative continuous function which is strictly positive at some point, then I(f) > 0.

We denote the space of square integrable functions on K by L²(K, dµ) or just by L²(K). The inner product² is given by

(1)    ⟨f1, f2⟩ = ∫_K f1(g) \overline{f2(g)} dµ(g).

Since K is compact, and since the measure is finite, it is clear that C(K) ⊂ L²(K).

¹The left regular and the right regular representations are given in (19) and (20), respectively.
²Incidentally, our inner products are all linear in the left variable!

2. The Haar measure on some groups

2.1. On U(n). The Cayley transform C,

    C : h ↦ (1 + (i/2)h)(1 − (i/2)h)⁻¹,

maps the space H(n) of n × n hermitian matrices onto an open dense subset of U(n). Using it, one may define an integral on U(n) for each p ∈ R by

    ∫_{U(n)} f(u) du = ∫_{H(n)} f((1 + (i/2)h)(1 − (i/2)h)⁻¹) |det(1 + (i/2)h)|^p dh.

For a specific value of p (p = −n as far as I remember!) this is the Haar integral. In the formula, dh is the Lebesgue measure on the real vector space H(n) of dimension n².

2.2. The Haar measure on SU(2). Recall that SU(2) = S³ as a topological space. Specifically, write elements g of SU(2) as

    g = [ x1 + i·x4    x2 + i·x3 ]
        [ −x2 + i·x3   x1 − i·x4 ].

Let us recall the definition of spherical coordinates in R⁴:

    x1 = r cos(θ)
    x2 = r sin(θ) cos(ψ)
    x3 = r sin(θ) sin(ψ) cos(φ)
    x4 = r sin(θ) sin(ψ) sin(φ)

The Jacobian (change-of-coordinates determinant) has absolute value

    |det(∂(x1, x2, x3, x4)/∂(r, θ, ψ, φ))| = r³ sin²(θ) sin(ψ).

The condition on g ∈ SU(2) is that r² = x1² + x2² + x3² + x4² = 1. Thus, it follows that

(2)    I(f) = (1/(2π²)) ∫_{θ,ψ,φ=0}^{π,π,2π} f(θ, ψ, φ) sin²(θ) sin(ψ) dθ dψ dφ.

If f is constant in ψ, φ, i.e. if there exists a function F such that ∀θ, ψ, φ : f(θ, ψ, φ) = F(θ), we obtain

(3)    (1/(2π²)) ∫_{θ,ψ,φ=0}^{π,π,2π} f(θ, ψ, φ) sin²(θ) sin(ψ) dθ dψ dφ = (2/π) ∫_{θ=0}^{π} F(θ) sin²(θ) dθ.

This is relevant if, for instance, f is a class function³.
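As a sanity check, the weight in (2) can be integrated numerically. The following sketch (an illustration, not part of the notes; plain midpoint-rule quadrature) confirms that the normalized measure has total mass 1, and evaluates (3) for the arbitrarily chosen class-function datum F(θ) = cos²(θ):

```python
import numpy as np

# Midpoint-rule check that the weight sin^2(theta) sin(psi) / (2 pi^2)
# from (2) integrates to 1 over 0 <= theta, psi <= pi, 0 <= phi <= 2 pi,
# i.e. that the Haar measure of SU(2) is normalized: mu(SU(2)) = 1.
n = 400
dth = np.pi / n
theta = (np.arange(n) + 0.5) * dth            # midpoints in (0, pi)
psi = (np.arange(n) + 0.5) * dth              # same grid works for psi
weight = np.outer(np.sin(theta) ** 2, np.sin(psi))
# the weight is phi-independent, so the phi-integral is a factor 2*pi
total = weight.sum() * dth * dth * 2 * np.pi / (2 * np.pi ** 2)

# Formula (3) for the class-function datum F(theta) = cos^2(theta):
# (2/pi) * int_0^pi cos^2(theta) sin^2(theta) dtheta = 1/4.
avg = (2 / np.pi) * np.sum(np.cos(theta) ** 2 * np.sin(theta) ** 2) * dth
print(round(total, 6), round(avg, 6))
```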

2.3. The Haar measure on GL(n, R). The Haar measure on the group (R \ {0}, ·) is given by

    f ↦ ∫_{x≠0} f(x) dx/|x|.

More generally, the Haar measure on GL(n, R) is given as follows: Let dx = dx11 ··· dxn1 dx12 ··· dxn2 ··· dxnn be the restriction of the Lebesgue measure on R^{n²} to the open subset {x = {xij}_{i,j=1}^n | det(x) ≠ 0}. Then the Haar measure is given as

    f ↦ ∫ f(x) dx/|det(x)|^n.
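For the one-dimensional case the invariance can be seen directly (substitute u = ax, so that du/|u| = dx/|x|), and also checked numerically. The sketch below (not part of the notes) uses an arbitrary bump supported away from 0 and the arbitrary scale a = 2:

```python
import numpy as np

# Check numerically that dx/|x| on R \ {0} is invariant under the group
# operation (multiplication): for a > 0,
#   integral of f(a*x) dx/|x|  =  integral of f(x) dx/|x|.
def haar_integral(f, lo=0.05, hi=40.0, n=200_000):
    dx = (hi - lo) / n
    x = lo + (np.arange(n) + 0.5) * dx        # midpoint grid on (lo, hi)
    return np.sum(f(x) / np.abs(x)) * dx

f = lambda x: np.exp(-8.0 * (x - 3.0) ** 2)   # narrow bump centred at x = 3
a = 2.0
lhs = haar_integral(lambda x: f(a * x))       # integrand f(ax)/|x|
rhs = haar_integral(f)
print(round(lhs, 8), round(rhs, 8))
```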

3. Unitarity and complete reducibility

Definition 3.1. A representation Π of a Matrix Lie Group is unitary if Π : G → U(n) for some n ∈ N. Equivalently, Π is a representation in a finite-dimensional Hilbert space (V, ⟨·, ·⟩) and each Π(g) preserves the inner product ⟨·, ·⟩:

    ∀v, w ∈ V, ∀g ∈ G : ⟨Π(g)v, Π(g)w⟩ = ⟨v, w⟩.

Definition 3.2. A representation Π in V is completely decomposable or completely reducible if there exists a direct sum decomposition V = ⊕_{k=1}^r V_k such that each V_k is an invariant subspace for Π and such that the resulting representation of G on V_k is irreducible for each k. Denote these representations by Π_k, k = 1, 2, . . . , r, and set V_k = V_{Π_k}. We then may write

(4)    V_Π = ⊕_{k=1}^r V_{Π_k}   and   Π = ⊕_{k=1}^r Π_k.

Notice that the representations Π_k are not necessarily all inequivalent. The matrices representing the operators Π(g) consist of r diagonal blocks when expressed according to the decomposition (4). We let [Π : Π_k] denote the multiplicity of Π_k in Π. This is the number of times Π_k (or representations equivalent to it) occurs in the sum (4).

³A function f on a group G which satisfies ∀g, h ∈ G : f(hgh⁻¹) = f(g), i.e. which is constant on conjugacy classes.

Proposition 3.3. If Π is a unitary representation on V and if W is an invariant subspace, then W⊥ = {v ∈ V | ∀w ∈ W : ⟨w, v⟩ = 0} is an invariant subspace.

Proof: We assume that it is known that W⊥ indeed is a subspace and that V = W ⊕ W⊥ (otherwise: prove it!). We move directly to invariance: Let v ∈ W⊥, w ∈ W. Then

    ⟨Π(g)v, w⟩ = ⟨Π(g)v, Π(g)Π(g⁻¹)w⟩ = ⟨v, Π(g⁻¹)w⟩ = 0.

Here, the first equality is just a re-writing, the second is unitarity, and the third follows from the invariance of W. The claim is thus verified. □

The key observation now is the following:

Proposition 3.4. If G is a compact matrix Lie Group, and Π is a finite-dimensional representation on V_Π, then there is an inner product ⟨·, ·⟩ on V_Π which is invariant, i.e. such that Π is unitary with respect to it.

Proof: This is where the Haar measure dµ(g) comes in! Let us namely start with an arbitrary inner product ⟨⟨·, ·⟩⟩ on V (such exist of course in abundance). Define

    ⟨v, w⟩ = ∫_G ⟨⟨Π(g)v, Π(g)w⟩⟩ dµ(g).

That this indeed is a positive-definite inner product is left as an exercise. The crucial computation is, for h ∈ G:

    ⟨Π(h)v, Π(h)w⟩ = ∫_G ⟨⟨Π(g)Π(h)v, Π(g)Π(h)w⟩⟩ dµ(g)
                   = ∫_G ⟨⟨Π(gh)v, Π(gh)w⟩⟩ dµ(g)
                   = ∫_G ⟨⟨Π(g)v, Π(g)w⟩⟩ dµ(g)
                   = ⟨v, w⟩.

Here, the first equality is the definition, the second is saying that Π is a homomorphism, the third is the right invariance of the Haar measure, and the last is the definition. □

Combining Proposition 3.3 with Proposition 3.4 we obtain the following result (which for finite groups is Maschke's Theorem):

Theorem 3.5. Any finite-dimensional representation of a compact Matrix Lie Group is completely reducible.

Notice that we only used that there was a right invariant Haar measure on G and that the total measure of G was finite. For completeness, as well as additional information, we state

Theorem 3.6 (Schur's Lemma; special version).

• If Π_V, Π_W are irreducible representations of a group G, and Φ : V → W is an intertwiner, i.e. if Φ is linear and

(5)    ∀g ∈ G : Φ ◦ Π_V(g) = Π_W(g) ◦ Φ,

then either Φ is a bijection (and then the two representations are equivalent) or Φ = 0.
• If Π_V is irreducible and if a linear map Ψ : V → V commutes with Π_V, i.e.

(6)    ∀g ∈ G : Ψ ◦ Π_V(g) = Π_V(g) ◦ Ψ,

then there is a λ ∈ C such that Ψ = λ · I_V, where I_V is the identity operator on V.
• Suppose G is a compact (Matrix Lie) Group. Then the converse to the previous point holds. That is, if Π_V is a finite-dimensional representation on V with the property that the only operators that commute with Π_V are the operators λ · I_V, then Π_V is irreducible.

Proof: (Sketched) The first two points are proved in Hall's book. As for the third, begin by assuming that the representation is unitary (Proposition 3.4). If W is an invariant subspace, consider the orthogonal projection P_W onto W. Use Proposition 3.3 to prove that this operator commutes with the representation. Hence P_W = 0 or P_W = I_V. □
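The second point can be tested numerically for a concrete irreducible representation. The sketch below (an illustration, not from the notes) takes the 2-dimensional irreducible representation of the symmetric group S3, realized by a rotation by 120 degrees and a reflection, and computes the dimension of the space of commuting matrices; Schur's Lemma predicts dimension 1.

```python
import numpy as np

# A matrix T commutes with X iff (I (x) X - X^T (x) I) vec(T) = 0
# (column-stacking convention), so the commutant of a set of matrices is
# the common nullspace of the stacked Kronecker commutator matrices.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
rot = np.array([[c, -s], [s, c]])             # the 3-cycle in S_3
ref = np.array([[1.0, 0.0], [0.0, -1.0]])     # a transposition

I = np.eye(2)
K = np.vstack([np.kron(I, X) - np.kron(X.T, I) for X in (rot, ref)])
nullity = int(np.sum(np.linalg.svd(K, compute_uv=False) < 1e-10))
print(nullity)   # 1: only the scalar multiples of the identity commute
```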

4. Matrix coefficients and characters

Definition 4.1. Let Π be a finite-dimensional complex representation of a group G on a vector space V. A function on G of the form

(7)    F_{v,v*} : G ∋ g ↦ (v*, Π(g)v)

for any v ∈ V and any v* ∈ V* is a matrix coefficient (of Π). We let

(8)    M_Π = Span{(v*, Π(·)v) | v* ∈ V*, v ∈ V}.

In the case of a Matrix Lie Group, these functions are clearly continuous.

Remark 4.2. If the vector space V = V_Π has an inner product ⟨·, ·⟩, then, without additional assumptions about unitarity,

(9)    M_Π = Span{⟨Π(·)v1, v2⟩ | v1, v2 ∈ V}.

The matrix coefficients are of fundamental importance, as we shall see below. However, we first introduce

Definition 4.3. The character χ^Π of a representation Π of a group G on a finite-dimensional vector space V is the function from G to C given by

    χ^Π(g) = Tr_{V_Π} Π(g).

The following is obvious:

Proposition 4.4. Let {e1, e2, . . . , en} be a basis of V, and let {ε1, ε2, . . . , εn} be the corresponding dual basis of V*. Then

    χ^Π(g) = Σ_{k=1}^n (εk, Π(g)ek) = Σ_{k=1}^n (Π(g))_{kk}.

We will from now on set Π_{kk}(g) = (Π(g))_{kk}. Notice that χ^Π is a sum of matrix coefficients of Π. In view of Theorem 3.5, the following is clear:

Lemma 4.5. The character χ^Π of a finite-dimensional representation is a sum of the characters of the irreducible representations that occur in its decomposition (c.f. Definition 3.2):

    χ^Π = Σ_k χ^{Π_k}.
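As an illustration of Lemma 4.5 (a sketch using a finite group, where the Haar integral is the average over the group, cf. Section 6): the permutation representation of S3 on C³ has character "number of fixed points", and subtracting the trivial character leaves the character of the 2-dimensional irreducible summand.

```python
import numpy as np
from itertools import permutations

# chi of the permutation representation of S_3 on C^3 is the trace of the
# permutation matrix, i.e. the number of fixed points of the permutation.
perms = list(permutations(range(3)))

def pmat(p):
    M = np.zeros((3, 3))
    for i, j in enumerate(p):
        M[j, i] = 1.0                         # e_i -> e_{p(i)}
    return M

chi_perm = np.array([np.trace(pmat(p)) for p in perms])
chi_triv = np.ones(len(perms))
chi_std = chi_perm - chi_triv                 # character of the 2-dim summand

# With the normalized counting measure, |chi_std|^2 averages to 1, which by
# Theorem 4.11 confirms that this summand is irreducible.
norm_sq = np.mean(chi_std ** 2)
print(sorted(chi_perm), norm_sq)
```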

4.1. The character of the dual representation. Recall from the chapter on Linear Algebra, specifically (40) on p. 10, that the dual representation in matrix form is given as

(10)    Π′(g) = Π(g⁻¹)ᵗ.

Since we by Proposition 3.4 may assume that Π is unitary, it follows that we may assume that Π(g⁻¹) = Π(g)* and hence that Π′(g) = \overline{Π(g)} (the matrix whose entries are the complex conjugates of the entries, in the same position, of Π(g)). In particular we get

Proposition 4.6.

    χ^{Π′}(g) = Σ_i \overline{Π_{ii}(g)} = \overline{χ^Π(g)}.

4.2. The character of a tensor product. Let Π, Σ be two finite-dimensional representations of G in V_Π, W_Σ, respectively.

Proposition 4.7. χ^{Π⊗Σ} = χ^Π · χ^Σ.

Proof: Let {e1, e2, . . . , en} and {f1, f2, . . . , fm} be bases of V_Π and W_Σ, respectively. Since (Π ⊗ Σ)(g)(ei ⊗ fj) = Π(g)ei ⊗ Σ(g)fj, it follows that the coefficient of ei ⊗ fj in (Π ⊗ Σ)(g)(ei ⊗ fj) is given as Π_{ii}(g) · Σ_{jj}(g). But then

    Tr_{V_Π⊗W_Σ}(Π ⊗ Σ)(g) = Σ_{i,j} Π_{ii}(g) · Σ_{jj}(g) = (Σ_i Π_{ii}(g)) · (Σ_j Σ_{jj}(g)).

The claim follows immediately from this. □

4.3. Schur orthogonality.

Lemma 4.8. Let Π_V and Π_W be complex representations of the compact Matrix Lie Group G. Let ⟨·, ·⟩ be an inner product on V (not assumed invariant). If v1 ∈ V, w1 ∈ W, then the map T : V → W given by

    T(v) = ∫_G ⟨Π_V(g)v, v1⟩ Π_W(g⁻¹)w1 dµ(g)

is G-equivariant (an intertwining map between Π_V and Π_W).

Proof: The right invariance of the Haar measure gives that

    T(Π_V(h)v) = ∫_G ⟨Π_V(gh)v, v1⟩ Π_W(g⁻¹)w1 dµ(g) = Π_W(h)T(v). □

Theorem 4.9 (Schur orthogonality 1)⁴. Suppose that Π_V, Π_W are irreducible representations of the compact group G. Either every matrix coefficient of Π_V is orthogonal in L²(G) to every matrix coefficient of Π_W, or the two representations are equivalent.

⁴See e.g. D. Bump: Lie Groups (Graduate Texts in Mathematics, Springer Verlag 2004).

Proof: Let us assume that there are invariant inner products ⟨·, ·⟩_V and ⟨·, ·⟩_W on V and W, respectively. We wish to prove that if some matrix coefficients are not orthogonal, then the representations are equivalent. It suffices to take matrix coefficients of the form (9): ⟨Π_V(g)v1, v2⟩_V and ⟨Π_W(g)w1, w2⟩_W. Our assumption is that for some v1, v2, w1, w2

(11)    0 ≠ ∫_G ⟨Π_V(g)v1, v2⟩_V \overline{⟨Π_W(g)w1, w2⟩_W} dµ(g)
(12)      = ∫_G ⟨Π_V(g)v1, v2⟩_V ⟨Π_W(g⁻¹)w2, w1⟩_W dµ(g)
(13)      = ⟨T(v1), w1⟩,

where T is defined as in Lemma 4.8 in terms of the vectors v2, w2. But this means that T is not zero, hence, by Schur's Lemma, T is an isomorphism, i.e. the representations are equivalent. □

Theorem 4.10 (Schur orthogonality 2)⁵. Let Π_V be an irreducible representation of a compact group G with invariant inner product ⟨·, ·⟩_V. Then, for all v1, v2, v3, v4 ∈ V:

(14)    ∫_G ⟨Π_V(g)v1, v2⟩_V \overline{⟨Π_V(g)v3, v4⟩_V} dµ(g) = (1/dim V) ⟨v1, v3⟩⟨v4, v2⟩.
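For a finite group, where the Haar integral is the average over the group, (14) can be verified exactly. The sketch below (not part of the notes) uses the 2-dimensional irreducible representation of S3, realized by rotations and reflections in O(2), which is already unitary for the standard inner product; with the v's ranging over the standard basis, (14) reads (1/|G|) Σ_g Π(g)_{ij} conj(Π(g)_{kl}) = (1/dim V) δ_{ik} δ_{jl}.

```python
import numpy as np

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

ref = np.array([[1.0, 0.0], [0.0, -1.0]])
# the six elements of S_3 ~ D_3: three rotations and three reflections
G = [rot(2 * np.pi * k / 3) @ M for k in range(3) for M in (np.eye(2), ref)]

# avg[i,j,k,l] = (1/|G|) sum_g Pi(g)_{ij} * conj(Pi(g)_{kl})
avg = sum(np.einsum("ij,kl->ijkl", g, g.conj()) for g in G) / len(G)
expected = np.einsum("ik,jl->ijkl", np.eye(2), np.eye(2)) / 2   # 1/dim V
print(np.allclose(avg, expected))   # True
```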

Theorem 4.11 (Schur orthogonality 3). Let Π_V, Π_W be irreducible representations of the compact Matrix Lie Group G. Then

(15)    ⟨χ^V, χ^W⟩_{L²(G)} = ∫_G χ^V(g) \overline{χ^W(g)} dµ(g) = { 1 if Π_V ≅ Π_W, 0 otherwise. }

Proof: As remarked after Proposition 4.4, the characters are sums of matrix coefficients, hence if Π_V and Π_W are inequivalent, it follows from Theorem 4.9 that ⟨χ^V, χ^W⟩_{L²(G)} = 0. In case Π_V = Π_W it follows from Proposition 4.6 and Proposition 4.7 that

    ⟨χ^V, χ^V⟩_{L²(G)} = ∫_G χ^{Π_V ⊗ Π_V′} dµ(g).

Now notice that if Π is any finite-dimensional representation, then χ^Π is a sum of the characters of the irreducible summands:

    χ^Π = Σ_i χ^{Π_i},

where the Π_i are the irreducible summands (the same representation may occur several times!). Since the character of the trivial representation is the constant function 1, and since this has integral 1 w.r.t. the normalized Haar measure, it follows that

    ∫_G χ^Π dµ(g) = [Π : 1],

where [Π : 1] denotes the number of times the trivial representation 1 occurs in the decomposition of Π into irreducible representations. We now show that [Π_V ⊗ Π_V′ : 1] = 1, and then we are done: We first have to investigate what it means for the trivial representation to occur in Π_V ⊗ Π_V′: This means precisely that there is a non-zero vector w ∈ V ⊗ V* which is fixed by all Π(g) ⊗ Π′(g). And the multiplicity m = [Π_V ⊗ Π_V′ : 1] is the number of mutually orthogonal such vectors w1, ···, wm. We want to show that m = 1: Let w be any one of the w1, ···, wm (we assume that there is at least one! - see later). We define a map T = T_w : V → V by

    ∀v ∈ V, v* ∈ V* : (v*, T(v)) = ⟨v ⊗ v*, w⟩,

where ⟨·, ·⟩ denotes an invariant inner product in V ⊗ V*. We claim that T commutes with the representation Π: ∀g ∈ G, ∀v ∈ V, v* ∈ V*:

    (v*, T(Π(g)v)) = ⟨Π(g)v ⊗ Π′(g)Π′(g⁻¹)v*, w⟩
                   = ⟨(Π(g) ⊗ Π′(g))(v ⊗ Π′(g⁻¹)v*), w⟩
                   = ⟨v ⊗ Π′(g⁻¹)v*, w⟩       (the representation is unitary and w is fixed)
                   = (Π′(g⁻¹)v*, T(v))
                   = (v*, Π(g)T(v)).

So, by Schur's Lemma, T is a scalar multiple λ of the identity. This constant is clearly non-zero, and we may absorb it into w. But then we may assume:

(16)    ∀v ∈ V, v* ∈ V* : (v*, v) = ⟨v ⊗ v*, w⟩.

This, on the other hand, completely determines w: If two vectors have the same inner products with all vectors v ⊗ v* (which, after all, span V ⊗ V*), then they must be equal. Actually, equation (16) determines a vector w uniquely, as can be seen by inserting ei ⊗ εj for v ⊗ v* for all choices of i, j. The result is that

    w = Σ_i ei ⊗ εi.

This proves that the trivial representation occurs exactly once. □

⁵For a proof see e.g. Bump's book p. 12 and p. 15.

4.4. Representations from actions; the left (and the right) regular representation.
Let G be a group which acts on a set M, i.e. such that there is a map G × M ∋ (g0, m) ↦ g0 ⋆ m satisfying

(17)    (g1 g2) ⋆ m = g1 ⋆ (g2 ⋆ m).

We can turn any such action into a representation U_a of G in the vector space F(M) of complex-valued functions on M by the formula

(18)    (U_a(g)f)(m) = f(g⁻¹ ⋆ m).

Let G be any group. The left regular representation L is defined in the vector space F(G) of complex-valued functions on G from the left action g0 ⋆_L g = g0 g of the group on itself. Thus,

(19)    (L(g0)f)(g) = f(g0⁻¹ g).

The right regular representation R is defined in the vector space F(G) of complex-valued functions on G from the (right) action g0 ⋆_R g = g (g0)⁻¹ of the group on itself. Thus,

(20)    (R(g0)f)(g) = f(g g0).

In the case of a compact group one usually considers L, R in L²(G, dµ) and calls these representations, which by means of the invariance of the Haar measure are unitary, the left, resp. right, regular representations. The crucial observations are the following:

Proposition 4.12. The matrix coefficients F_{v,v*} in (7) belong to L²(G, dµ) and satisfy

(21)    L(g0)F_{v,v*} = F_{v,Π′(g0)v*};   R(g0)F_{v,v*} = F_{Π(g0)v,v*}.

Proof: The functions F_{v,v*} are continuous, hence bounded and measurable, hence belong to L²(G). The two identities are proved in much the same way, so we only do one of them:

    (L(g0)F_{v,v*})(g) = (v*, Π(g0⁻¹ g)v) = (Π′(g0)v*, Π(g)v) = F_{v,Π′(g0)v*}(g). □

Lemma 4.13. Let Π_V be an irreducible finite-dimensional representation of the compact (Matrix Lie) Group G. Then Π_V ⊗ Π_V′ is an irreducible representation of G × G.

Proof: (Sketched) We use Theorem 3.6. Suppose an operator T on V ⊗ V* commutes with Π_V ⊗ Π_V′. Write T in block form corresponding to (60) in the Notes on Linear Algebra. (We are not assuming that T is of the form A ⊗ B for some A, B!) Using the irreducibility of the two representations, it follows from the fact that T commutes in particular with all operators of the form Π(g) ⊗ I_{V*} that the blocks of T must be scalar multiples λ_{ij} of the identity in V*. Then use the operators I_V ⊗ Π′(g) to conclude that λ_{ij} = λ · δ_{ij} for some λ ∈ C. □

Combining Proposition 4.12 and Lemma 4.13, the following is obtained:

Proposition 4.14. Let Π = Π_V be an irreducible finite-dimensional representation of the compact (Matrix Lie) Group G. The linear map Ψ_Π : V ⊗ V* → L²(G, dµ) given by its action on the tensors v ⊗ v* as

(22)    (Ψ_Π(v ⊗ v*))(g) = F_{v,v*}(g) = (v*, Π(g)v),

is an intertwiner between the representations of G × G given by, respectively,

    (g1, g2) ↦ Π(g1) ⊗ Π′(g2)   on V ⊗ V*

and

    (g1, g2) ↦ R(g1) ⊗ L(g2)   on L²(G, dµ),

that is,

(23)    ∀(g1, g2) ∈ G × G : Ψ ◦ (Π ⊗ Π′)(g1, g2) = (R ⊗ L)(g1, g2) ◦ Ψ.

5. Further notes on Representations of compact groups

5.1. The Peter-Weyl Theorem, proof in the case of compact matrix groups. We begin by introducing the space Ĝ of equivalence classes of irreducible finite-dimensional representations of the compact (Matrix Lie) Group G.

Theorem 5.1. Let G be a compact group. Then

    L²(G) = ⊕_{Π∈Ĝ} M_Π,
    L = ⊕_{Π∈Ĝ} (Π ⊕ ··· ⊕ Π)   (dim(V_Π) summands; orthogonal direct sum),
    L ⊗ R = ⊕_{Π∈Ĝ} Π′ ⊗ Π   (as representation of G × G on L²(G))⁶.

⁶This notation is a little unfortunate, since it is not a usual tensor product representation, but just refers to the representation given in (23). The tensor products on the right hand side of the equation are correct tensor product representations!

To prove this fundamental theorem in the matrix case, we use another fundamental theorem, namely the Stone-Weierstrass Theorem. For this we need the concept of a *-algebra A of continuous functions on a topological space X. By this we mean a vector space A of continuous functions which has the additional property that

    ∀f, g ∈ A : f · g ∈ A and f* = f̄ ∈ A.

We say that A separates points if ∀x ≠ y ∈ X ∃f ∈ A : f(x) ≠ f(y). The theorem can now be formulated:

Theorem 5.2 (Stone-Weierstrass). Let A be a *-algebra of continuous functions on a compact topological space K which separates points and does not vanish identically at any point.⁷ Then A is uniformly dense in the space C(K) of continuous functions on K:

    \overline{A}^{‖·‖_∞} = C(K).

Since uniform convergence implies L² convergence, we of course have

Corollary 5.3. Under the same assumptions,

    \overline{A}^{‖·‖₂} = L²(K).

The application we are interested in is the following:

Corollary 5.4. Suppose that K is a compact subset of Rⁿ. Equip K with the induced (subspace) topology. Let A = P_C^K(Rⁿ) be the set of restrictions to K of real polynomials with complex coefficients. Then

    \overline{P_C^K(Rⁿ)}^{‖·‖_∞} = C(K),   (and hence)   \overline{P_C^K(Rⁿ)}^{‖·‖₂} = L²(K).

Proof: This follows from Theorem 5.2 since it is clear that polynomials are continuous on Rⁿ and hence that their restrictions are continuous on K. Moreover, it is obviously a *-algebra, and it separates points x ≠ y ∈ K since if x ≠ y then for some coordinate number, say i, we have xi ≠ yi. But the polynomial (r1, ···, rn) ↦ ri then separates x and y. □

The Gram-Schmidt ortho-normalization process applied to a space of polynomial functions clearly results in a basis of functions which still are polynomials. It then follows from the preceding corollary that

Corollary 5.5. There is an orthonormal basis of L2(K) consisting of restrictions of polynomials.

⁷This means that there is no point k ∈ K such that f(k) = 0 for all f ∈ A.

We now consider a compact matrix group K ⊂ GL(n, R) or K ⊂ GL(n, C). In the first case we consider K ⊂ R^{n²} = Rⁿ × ··· × Rⁿ (n copies), and in the second case K ⊂ R^{2n²} = Cⁿ × ··· × Cⁿ (n copies), where we view (when it is convenient) C = R². Let us refer to these cases as the real and, respectively, complex case. In both cases we work with polynomials with complex coefficients. In the real case we work with polynomials in variables x1, x2, ···, xn, where each xi ∈ Rⁿ, i = 1, 2, . . . , n, and in the complex case we work with polynomials in variables z1, z2, ···, zn, z̄1, z̄2, ···, z̄n, where each zi ∈ Cⁿ, i = 1, 2, . . . , n, and where the variables z, z̄ are viewed as independent real variables on C. Since in any real situation we have K ⊂ GL(n, R) ⊂ GL(n, C), we will from now on only work with the complex situation. Observe that for g ∈ K, left multiplication by g in K, M_g : k ↦ g · k, becomes

    M_g : (z1, ···, zn, z̄1, ···, z̄n) ↦ (g z1, ···, g zn, ḡ z̄1, ···, ḡ z̄n),

where, by definition, g z̄ = \overline{g z} = ḡ z̄. Now observe that the set P_all = P(z1, ···, zn, z̄1, ···, z̄n) of all polynomials in the variables z1, . . . , zn, z̄1, ···, z̄n has the form

    P_all = P(z1, . . . , zn, z̄1, ···, z̄n) = P(z1) · P(z2) · ··· · P(zn) · P(z̄1) · ··· · P(z̄n).

We obtain a representation U_K of K on P_all as in (18). Specifically, in suggestive notation,

(24)    ∀g ∈ K : (U_K(g)p)(z, z̄) = p(g⁻¹ ⋆ z, g⁻¹ ⋆ z̄),

where g ⋆ z = gz and g ⋆ z̄ = \overline{gz} = ḡz̄. Next observe that any polynomial in P(z1) can be written as a sum of products of the form Π_{i=1}^r ⟨z1, vi⟩ for vectors v1, . . . , vr ∈ Cⁿ. Let ρ_K denote the defining representation of K. The preceding analysis shows that any polynomial representation transforms according to a sub-representation of (possibly a sum of) representations of the form (ρ_K)^{⊗r} ⊗ (ρ_K′)^{⊗s} for certain non-negative integers r, s. (If e.g. r = 0, (ρ_K)^{⊗r} becomes the trivial 1-dimensional representation.) To be completely precise, (ρ_K)^{⊗r} is the representation of K occurring in the space P_r(z). Now, to connect these spaces, consider more explicitly the restriction map R from P_all to C(K), the space of continuous functions on K:

(25)    P_all ∋ p ↦ Rp ∈ C(K) : ∀k ∈ K : (Rp)(k) = p(k).

Proposition 5.6. R is an intertwiner between the representation U_K on P_all and the left regular representation L on C(K).

The proof of this is left to the reader. Please observe that one must also explain why Rp is a continuous function.

EXAMPLES: In case K = S¹ = {z ∈ C | |z|² = z z̄ = 1}, a polynomial p in z has the form p(z) = Σ_{k=0}^n a_k z^k and a polynomial q in z̄ has the form q(z̄) = Σ_{ℓ=0}^r b_ℓ z̄^ℓ. Here the coefficients a_k and b_ℓ are all complex numbers. The restrictions to K are in this case

    p(e^{iθ}) = Σ_{k=0}^n a_k e^{ikθ},
(26)    q(e^{−iθ}) = Σ_{ℓ=0}^r b_ℓ e^{−iℓθ}.

It is known from the theory of Fourier Series that both types of restrictions are needed to expand a continuous function.

In case K = SU(2), we have the following perhaps unusual way of describing the group:

(27)    K = SU(2) = { [z1 z3; z2 z4] ∈ GL(2, C) | z̄1 z1 + z̄2 z2 = 1, z̄3 z3 + z̄4 z4 = 1, z̄1 z3 + z̄2 z4 = 0, and z1 z4 − z2 z3 = 1 }.

Upon closer scrutiny it is revealed that in this case, because we also have that SU(2) = { [α −β̄; β ᾱ] ∈ GL(2, C) | |α|² + |β|² = 1 }, it suffices to consider polynomials in the variables z1, z2, z3, z4. That is, no conjugations are needed (because the right hand column involves the conjugates of the left hand column).

In case K = O(3), we have the following way of describing the group:

(28)    K = O(3) = { O(Z1, Z2, Z3) = (Z1 Z2 Z3) ∈ GL(3, C) | ∀i = 1, 2, 3 : Zi is a column vector in C³, O(Z1, Z2, Z3)ᵗ O(Z1, Z2, Z3) = 1; moreover, ∀i = 1, 2, 3 : Z̄i = Zi }.

Clearly, also in this case one only needs the polynomials, not the conjugates of such.

Because of Corollary 5.5 we have (essentially) proved the Peter-Weyl Theorem in the case of compact matrix groups. What remains is a little cleaning up regarding the multiplicities. This can be done in analogy with the case of a finite group and is left to the reader. Instead, let us notice that we have in fact proved more:

Proposition 5.7. For a compact matrix group K, any Π ∈ K̂ occurs in

    (ρ_K)^{⊗r} ⊗ (ρ_K′)^{⊗s}

for some non-negative integers r, s.

EXAMPLE: Recall that the inverse of a matrix may be computed from the so-called Cramer's Rule. Specifically, if a matrix A is invertible,

    (A⁻¹)_{ij} = C_{ji} / det A,

where C_{rs} is the determinant of the matrix obtained from A by placing zeros in row r and column s except that the position rs should have a 1. This is the rs co-factor. Since a matrix U in SU(n) satisfies det U = 1 and Ū = (Uᵗ)⁻¹, it follows by considering the inverse of Uᵗ (!) that the last column in Ū satisfies:

    Ū_{jn} = det D_{jn},

where D_{jn} is the matrix obtained from U by replacing the jth row and nth column by zeros except for a 1 in the position jn. This implies that

    \overline{U_{col}^n} = F(U_{col}^1, U_{col}^2, . . . , U_{col}^{n−1}),

where F is a multilinear function. It follows from this, in the same way as it did for SU(2), that one only needs consider polynomials, not conjugations of polynomials (or: vice versa!). This leads immediately to Corollary 5.8. It is also proved in a more advanced exercise that for SU(n), the dual representation ρ_K′ occurs in (ρ_K)^{⊗(n−1)}⁸.

Corollary 5.8. Any Π ∈ \widehat{SU(n)} occurs in (ρ_K)^{⊗r} for some non-negative integer r.

Notice that the representations of K that occur in C(K) (or L²(K)) are those that are in the image of the intertwiner R. If we have the full description of the representations in P_all, this information is equivalent to knowing the answer to the following question - which is unknown in complete generality to me:

QUESTIONS: What is the kernel K of R? We know from abstract algebra that it is a finitely generated ideal. But when is K equal to the zero set of this ideal? And, if not, is the zero set a group? (See the first two examples below.) What can be said about the quotient P_all/K?

⁸Notice that this implies that the trivial representation occurs in (ρ_K)^{⊗n}.

EXAMPLE: In the K = S¹ example from before, one can see that if a polynomial

    Σ_{k,ℓ=0}^{N,R} a_{k,ℓ} z^k z̄^ℓ ↦ 0 on S¹,

then there exists a polynomial q such that

    Σ_{k,ℓ=0}^{N,R} a_{k,ℓ} z^k z̄^ℓ = q(z, z̄)(z z̄ − 1).

In other words, the kernel is generated, as an ideal, by (z z̄ − 1). If we now consider the case of S¹, we can see (as expected) that P_all/K has a basis consisting of the equivalence classes [zⁿ], n = 0, 1, 2, . . . , together with the classes [z̄ᵏ], k = 1, 2, . . . . Let pn(z) = zⁿ for n = 0, 1, 2, . . . and qk(z̄) = z̄ᵏ for k = 1, 2, . . . . Then the representation U_K = U_{S¹} is given as follows:

(29)    (U_{S¹}(e^{iθ})pn)(z) = pn(e^{−iθ}z) = e^{−inθ}zⁿ = e^{−inθ}pn(z),
(30)    (U_{S¹}(e^{iθ})qk)(z̄) = qk(e^{iθ}z̄) = e^{ikθ}z̄ᵏ = e^{ikθ}qk(z̄).

In other words, we get the 1-dimensional representations of S¹:

    e^{iθ} ↦ e^{irθ},   r = 0, ±1, ±2, . . . .

These are precisely the irreducible finite-dimensional representations of S¹.

EXAMPLE/EXERCISE: Let r, s be positive relatively prime integers. Carry out an analysis of the group

(31)    K = { [e^{irθ} 0; 0 e^{isθ}] | θ ∈ R }

along the lines of the previous example.
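The S¹ example lends itself to a quick numerical check (a sketch, not part of the notes): on the circle the ideal generator z z̄ − 1 vanishes, and the restriction of a monomial z^k z̄^ℓ is the single Fourier mode e^{i(k−ℓ)θ}, which is how the basis [zⁿ], [z̄ᵏ] of P_all/K reproduces the Fourier basis.

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
z = np.exp(1j * theta)                        # points of S^1

# the ideal generator z*zbar - 1 vanishes identically on S^1
vanishes = np.allclose(z * z.conj() - 1, 0)

# restriction of z^k * zbar^l is the Fourier mode e^{i(k-l)theta};
# the exponents k = 5, l = 2 are an arbitrary sample
k, l = 5, 2
matches = np.allclose(z ** k * z.conj() ** l, np.exp(1j * (k - l) * theta))
print(vanishes, matches)
```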

EXAMPLE: Let Π_N and Π_R be two irreducible representations of SU(2) in the spaces P_N^{(2)} and P_R^{(2)} of homogeneous polynomials of degree N and R, respectively, in 2 complex variables. Then, as representation spaces,

(32)    P_N^{(2)} ⊗ P_R^{(2)} = ⊕_{i=0}^{min{N,R}} P_{|N−R|+2i}^{(2)} = P_{|N−R|}^{(2)} ⊕ ··· ⊕ P_{N+R}^{(2)}.

In particular,

(33)    Π_N ⊗ Π_R = ⊕_{i=0}^{min{N,R}} Π_{|N−R|+2i} = Π_{|N−R|} ⊕ ··· ⊕ Π_{N+R}.

Proof: The restriction map (restriction to the diagonal) S : P_N^{(2)} ⊗ P_R^{(2)} → P_{N+R}^{(2)} given by

(34)    (S(p_N ⊗ q_R))(z1, z2) = p_N(z1, z2) q_R(z1, z2)

for p_N ∈ P_N^{(2)} and q_R ∈ P_R^{(2)} is clearly a non-trivial intertwiner into P_{N+R}^{(2)}. (Contemplate this.) We extend the map S to all of P_{N+R}^{(4)} in the obvious way. Let us for a moment denote the variables of p_N by w1, w2 and those of q_R by w3, w4. The subspace (ideal) K2 of P_{N+R}^{(4)} generated by the second order homogeneous polynomial w1w4 − w2w3 (the similarity with the formula for a determinant is intended!) is clearly mapped to zero by the map S. The map E : P_{N+R−2}^{(4)} → P_{N+R}^{(4)} given by

(35)    P_{N+R−2}^{(4)} ∋ p ↦ (w1w4 − w2w3)p ∈ P_{N+R}^{(4)}

is injective because there is Unique Factorization in the polynomial ring P^{(4)}. The subspace E(P_{N−1}^{(2)} ⊗ P_{R−1}^{(2)}) ⊂ P_N^{(2)} ⊗ P_R^{(2)} has dimension NR, P_N^{(2)} ⊗ P_R^{(2)} has dimension (N+1)(R+1) = NR + N + R + 1, and P_{N+R}^{(2)} has dimension N + R + 1. It follows that there is an equivalence of representation spaces

(36)    P_N^{(2)} ⊗ P_R^{(2)} = (P_{N−1}^{(2)} ⊗ P_{R−1}^{(2)}) ⊕ P_{N+R}^{(2)},

where, since we write ⊕, the summand P_{N+R}^{(2)} actually should be interpreted as an irreducible subspace in P_N^{(2)} ⊗ P_R^{(2)} isomorphic to this summand. We use here the complete reducibility for finite-dimensional representations of compact Matrix Lie Groups. The proof is now almost complete; an induction argument (downward) finishes the job. □

6. The representation theory of finite (or compact) groups

The representation theory of finite groups is covered by the above, since a finite group has a Haar measure:

Proposition 6.1. The counting measure on a finite group G is left as well as right invariant.

Proof: Multiplication by a fixed element h ∈ G, be it from the left or from the right, simply introduces a permutation of the group elements; it clearly preserves the magnitude of any subset. □

We will later list some special cases of the previous results.

7. The representation theory of SU(2) (once again)

7.1. Special representations of SU(2) - the group point of view. We consider the representation Π_n on the space P_n of homogeneous polynomials of degree n in the complex variables z, w.⁹ Then

(37)    (Π_n([a b; c d])p)(z, w) = p(dz − bw, −cz + aw).

If we let p_{j,n}(z, w) = z^j w^{n−j} for j = 0, 1, . . . , n, we obtain in particular

    Π_n([e^{iθ} 0; 0 e^{−iθ}]) p_{j,n} = e^{(n−2j)iθ} p_{j,n}.

It follows that the character is given by

(38)    χ^{Π_n}([e^{iθ} 0; 0 e^{−iθ}])
(39)      = e^{niθ} + e^{(n−2)iθ} + ··· + e^{(−n+2)iθ} + e^{−niθ}
(40)      = (e^{(n+1)iθ} − e^{−(n+1)iθ}) / (e^{iθ} − e^{−iθ}) = sin((n+1)θ) / sin(θ).

We know that any element g = [x1 + ix4, x2 + ix3; −x2 + ix3, x1 − ix4] ∈ SU(2) is conjugate to a diagonal element [e^{iθ} 0; 0 e^{−iθ}] where, by considering traces, x1 = cos θ, which means that the θ we use here is the same as in Section 2.2. Then, by the formula for the Haar measure,

(41)    ⟨χ^{Π_n}, χ^{Π_r}⟩ = ∫_{SU(2)} χ^{Π_n}(g) \overline{χ^{Π_r}(g)} dg
          = (1/(2π²)) ∫_{θ,ψ,φ} (sin((n+1)θ)/sin θ)(sin((r+1)θ)/sin θ) sin²θ sinψ dψ dφ dθ
          = (2/π) ∫₀^π sin((n+1)θ) sin((r+1)θ) dθ = δ_{n,r}.

7.2. Decomposition of tensor products.

Lemma 7.1. Assume that n ≥ m. Then

    χ^{Π_n} · χ^{Π_m} = χ^{Π_{n+m}} + χ^{Π_{n+m−2}} + ··· + χ^{Π_{n−m}}.

⁹More precisely, we consider holomorphic polynomials, i.e. expressions of the form Σ_i a_i z^i w^{n−i}.

Proof: We have that

    (χ^{Π_n} · χ^{Π_m})([e^{iθ} 0; 0 e^{−iθ}]) = (e^{niθ} + e^{(n−2)iθ} + ··· + e^{−niθ}) · (e^{miθ} + e^{(m−2)iθ} + ··· + e^{−miθ}).

It is easy to see that e^{(n+m)iθ} appears exactly once in this product, e^{(n+m−2)iθ} appears twice, e^{(n+m−4)iθ} appears three times, . . . , e^{(n−m)iθ} appears m+1 times, e^{(n−m−2)iθ} appears m+1 times, e^{(n−m−4)iθ} appears m+1 times, . . . , e^{(−n+m)iθ} appears m+1 times, . . . , e^{−(n+m−4)iθ} appears three times, e^{−(n+m−2)iθ} appears twice, and e^{−(n+m)iθ} appears exactly once. The claim follows from this, once one realizes that the decomposition in Lemma 4.5 is, by Theorem 4.11, unique. □

Corollary 7.2.

Π_n ⊗ Π_m = Π_{n+m} ⊕ Π_{n+m−2} ⊕ ··· ⊕ Π_{n−m}.
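Both the closed form (40) of the character and the product decomposition of Corollary 7.2 can be checked mechanically by treating each character as a Laurent polynomial in q = e^{iθ}. The following is a small sketch; the helper names (`char`, `mult`, `rhs`) are our own, not from the notes.

```python
import math
import cmath
from collections import Counter

def char(n):
    """Character of Pi_n as a Laurent polynomial in q = e^{i theta}:
    coefficient 1 on each exponent n, n-2, ..., -n, as in (39)."""
    return Counter({n - 2 * j: 1 for j in range(n + 1)})

def eval_char(c, theta):
    """Evaluate the Laurent polynomial at q = e^{i theta} (result is real)."""
    return sum(a * cmath.exp(1j * e * theta) for e, a in c.items()).real

# (40): the geometric sum collapses to sin((n+1) theta) / sin(theta)
for n in range(6):
    for theta in (0.3, 1.1, 2.5):
        assert abs(eval_char(char(n), theta)
                   - math.sin((n + 1) * theta) / math.sin(theta)) < 1e-12

def mult(c1, c2):
    """Product of two characters (convolution of coefficient lists)."""
    out = Counter()
    for e1, a1 in c1.items():
        for e2, a2 in c2.items():
            out[e1 + e2] += a1 * a2
    return out

def rhs(n, m):
    """Sum of the characters predicted by Corollary 7.2 (n >= m)."""
    total = Counter()
    for k in range(n - m, n + m + 1, 2):
        total += char(k)
    return total

assert mult(char(4), char(2)) == rhs(4, 2)
assert mult(char(7), char(3)) == rhs(7, 3)
```

The multiplicity count in the proof of Lemma 7.1 is exactly what `mult` computes coefficient by coefficient.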

8. Unitarity; the algebra point of view

We will in this section consider a number of situations where one has a representation π of a 3-dimensional Lie algebra g = Span_R{L1, L2, L3} (but it easily generalizes to higher-dimensional Lie algebras). Let us denote the representation space by V. We are not assuming that V necessarily is finite-dimensional. We will assume that there are 3 operators S, A, C that are complex linear combinations of the generators of the Lie algebra in the given representation, and which satisfy

(42) [S, A] = 2·s·A; [S, C] = −2·s·C; and [A, C] = S

for some real constant s. We are not directly assuming irreducibility, but we assume furthermore that there is a non-zero vector v0 that satisfies

(43) A(v0) = 0 and S(v0) = λ·v0

for some λ ∈ R. We are interested in the space

V_C = Span_C{C^n(v0) | n = 0, 1, 2, . . . }.

The space is created by C and its non-negative powers, and one often calls C a creation operator. Likewise, A is called an annihilation operator. There is no universal name for S; it depends on the specific situation. The vector v0 is sometimes called the highest weight vector, while in physics, again depending on the situation, it might be called the vacuum vector. We are interested in those situations where there exists an inner product ⟨·, ·⟩ (positive definite, of course, but no assumptions at the moment about completeness) on V_C.
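The structure of V_C can be made concrete: the relations (42) and (43) force the action of S and A on the basis vectors e_n = C^n(v0), and one can then verify (42) directly on this basis. A small symbolic sketch; the encoding of a vector as an (index, coefficient) pair is our own device, not from the notes.

```python
import sympy as sp

lam, s = sp.symbols('lambda s')

# Model V_C on the formal basis e_n = C^n(v0).  The actions below are forced
# by (42) and (43); we then check that the relations (42) hold on each e_n.
def C_op(n, coeff):
    """C e_n = e_{n+1}."""
    return (n + 1, coeff)

def S_op(n, coeff):
    """S e_n = (lam - 2 n s) e_n, since [S, C] = -2sC and S(v0) = lam v0."""
    return (n, sp.expand(coeff * (lam - 2 * n * s)))

def A_op(n, coeff):
    """A e_n = n (lam - (n-1) s) e_{n-1}, since A(v0) = 0."""
    return (n - 1, sp.expand(coeff * n * (lam - (n - 1) * s)))

def comm(P, Q, n):
    """[P, Q] applied to e_n, returned as an (index, coefficient) pair."""
    n1, c1 = P(*Q(n, 1))
    n2, c2 = Q(*P(n, 1))
    assert n1 == n2
    return (n1, sp.expand(c1 - c2))

for n in range(1, 6):
    assert comm(A_op, C_op, n) == S_op(n, 1)       # [A, C] = S
    assert comm(S_op, C_op, n) == C_op(n, -2 * s)  # [S, C] = -2sC
    assert comm(S_op, A_op, n) == A_op(n, 2 * s)   # [S, A] = 2sA
```

Note that the coefficient of A e_n already contains the factor λ − (n−1)s that will govern positivity of the norms below.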

Definition 8.1. If T is a linear operator V_C → V_C, the formal adjoint T† is defined by

∀n, r ∈ N ∪ {0} : ⟨T† C^n(v0), C^r(v0)⟩ = ⟨C^n(v0), T C^r(v0)⟩.

Definition 8.2. The representation π of g is unitarizable if the operators π(L_k); k = 1, 2, 3 are formally skew adjoint, i.e.

∀k = 1, 2, 3 : π(L_k)† = −π(L_k).

In the situations we are going to investigate, the assumption of unitarizability implies

(44) Either C† = −A, or C† = A.

Let us cover both cases by writing

(45) C† = ε · A

for some fixed ε = ±1. Now observe

Proposition 8.3. With the above assumptions there is at most one inner product on V_C in which the representation is formally skew adjoint and such that v0 is a unit vector in the corresponding norm.

Proof: We need to compute ⟨C^n(v0), C^r(v0)⟩ for all n, r in N ∪ {0}. Assume that n ≥ r and observe (verify #1):

(46) ⟨C^n(v0), C^r(v0)⟩ = ε^n ⟨v0, [A^n, C^r](v0)⟩.

Then observe that (42) implies that

(47) [A^n, C^r]v0 = A^{n−r} L_{n,r} · v0, (verify #2)

where L_{n,r} = L_{n,r}(λ) is a real constant, and where, as usual, A^0 := 1. But this means that we at most get something non-zero when n = r. The case n < r can be handled similarly (verify #3). Finally, the constant L_{n,r} can be computed explicitly. □

Remark 8.4. Let v_n = C^n(v0) for all n = 0, 1, 2, . . . . It follows from the proof above that

(48) r ≠ n ⇒ ⟨v_r, v_n⟩ = 0.

If s ≠ 0,10 this could also have been proved by observing that these vectors are eigenvectors for S (verify #4). To proceed, we need to compute [A^n, C^n].

10 This extra assumption was added on June 14 (why is it necessary?)

Proposition 8.5.

(49) [A^n, C^n] = Σ_{k=0}^{n} C^k P_{n,k}(S) A^k,

where, for all k = 0, 1, . . . , n, P_{n,k}(S) is a polynomial in S, and where (verify #5)

(50) P_{n,0}(S) = n! · S · (S − s) ··· (S − (n − 1)s).

Corollary 8.6. (verify #6)

(51) ⟨v_n, v_n⟩ > 0 ⇔ ε^n · λ · (λ − s) ····· (λ − (n − 1)s) > 0.

Example 8.7. To get started, one may notice that

[A, C] = S,
[A², C] = AS + SA = 2A(S + s),
[A, C²] = 2C(S − s),
[A², C²] = 2S(S − s) + 4C(S − 2s)A.

Also notice that (verify #7)

(52) ∀n : [A, C^n] = n · C^{n−1}(S − (n − 1)s).

We will now use these general observations to examine some specific Lie algebras.

8.1. The Heisenberg algebra. The Heisenberg Lie algebra h is generated by elements R, U, W satisfying

(53) [R, U] = W, [W, R] = 0 = [W, U].

Usually one considers the elements P, Q, Z in the set i·h given by P = −iR, Q = −iU, and Z = −iW. In terms of these, the commutators are

(54) [Q, P] = iZ, [Z, Q] = 0 = [Z, P].

One defines

(55) A = (1/√2)(Q + iP), S = Z, and C = (1/√2)(Q − iP).

Then the triple A, S, C satisfies (42) with s = 0 (verify #8).11 Moreover, it follows that in a unitarizable representation, one must have

(56) C† = A,

which implies that here, ε = 1. So (verify #9)

(57) There is unitarity if and only if λ ≥ 0.
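The positivity criterion (51) can be explored numerically. The recursion below follows from (52) together with C† = εA and A(v0) = 0; it is a sketch under exactly those assumptions, with ⟨v0, v0⟩ normalized to 1.

```python
def norm_sq(n, lam, s, eps):
    """<v_n, v_n>, computed from the recursion
    <v_n, v_n> = eps * n * (lam - (n-1)s) * <v_{n-1}, v_{n-1}>,
    which follows from (52), C-dagger = eps*A and <v_0, v_0> = 1."""
    out = 1.0
    for k in range(1, n + 1):
        out *= eps * k * (lam - (k - 1) * s)
    return out

# Heisenberg case: s = 0 and eps = +1, so <v_n, v_n> = n! * lam^n.
assert all(norm_sq(n, 2.0, 0, 1) > 0 for n in range(1, 8))   # lam > 0: all norms positive
assert any(norm_sq(n, -1.0, 0, 1) < 0 for n in range(1, 8))  # lam < 0: unitarity fails
assert norm_sq(4, 0.0, 0, 1) == 0.0                          # lam = 0: the trivial case
```

This reproduces (57): for the Heisenberg triple the norms are n!·λ^n, which are all non-negative exactly when λ ≥ 0.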

11 In this example, we denote by X both a Lie algebra element and its value in a representation.

8.2. The Lie algebra su(2). This Lie algebra is generated by the three elements

(58) [0 1; −1 0], [i 0; 0 −i], [0 i; i 0].

In the complexification one finds the elements (verify #10)

(59) Y = [0 0; 1 0], X = [0 1; 0 0], H = [1 0; 0 −1].

In a representation12 π set C = π(Y), A = π(X), S = π(H). This triple then satisfies (42). Now one has (verify #11) s = 1 and ε = 1, and any finite-dimensional representation of su(2) is unitarizable according to the procedure given in this section (verify #12). Before proceeding, observe that one has (verify #13)

Proposition 8.8. The Lie algebras su(1, 1), sl(2, R), and sp(1, R) are isomorphic.
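Proposition 8.8 can be made concrete: conjugation by a Cayley-type element carries sl(2, R) into su(1, 1). The particular element c below is our own choice (it is not given in the notes); the numerical check confirms that it works.

```python
import numpy as np

# Cayley-type element implementing the isomorphism sl(2,R) -> su(1,1)
# (assumption: this particular c; the check below validates it).
c = np.array([[1, -1j], [-1j, 1]]) / np.sqrt(2)
c_inv = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)
assert np.allclose(c @ c_inv, np.eye(2))

# A basis of sl(2, R): real traceless matrices.
sl2 = [np.array([[0, 1], [-1, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex),
       np.array([[0, 1], [1, 0]], dtype=complex)]

def in_su11(m):
    """su(1,1) consists of matrices [[ia, b], [conj(b), -ia]], a real, b complex."""
    return (abs(m[0, 0].real) < 1e-12
            and abs(m[0, 0] + m[1, 1]) < 1e-12
            and abs(m[1, 0] - np.conj(m[0, 1])) < 1e-12)

# Conjugation by c maps the sl(2,R) basis into su(1,1); since conjugation is an
# invertible Lie algebra homomorphism, the two algebras are isomorphic.
for m in sl2:
    assert in_su11(c @ m @ c_inv)
```

The isomorphism with sp(1, R) is even simpler, since sp(1, R) = sl(2, R) as sets of matrices.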

8.3. The Lie algebra su(1, 1). This Lie algebra is generated by the three elements

(60) [0 1; 1 0], [i 0; 0 −i], [0 i; −i 0].

In the complexification one finds the elements (verify #14)

(61) Y = [0 0; 1 0], X = [0 1; 0 0], H = [1 0; 0 −1].

In a representation13 π set C = π(Y), A = π(X), S = π(H). This triple then satisfies (42) (verify #15). In contrast to the su(2) case, however, one now has s = 1 and ε = −1 (verify #16), and there are infinite-dimensional unitarizable representations provided the constant λ satisfies a certain condition, which is left for the reader to determine (verify #17).
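The matrix data of Sections 8.2 and 8.3 can be verified directly: the complexified triple (59)/(61) is the same in both cases and satisfies the su(2)-type relations, while the real generators (58) and (60) satisfy the defining conditions of their respective real forms. A quick numerical sketch (the formulation of the su(1, 1) condition via J = diag(1, −1) is the standard one, stated here as an assumption):

```python
import numpy as np

def comm(a, b):
    return a @ b - b @ a

# Complexified triple (59)/(61): the same matrices serve both real forms.
Y = np.array([[0, 0], [1, 0]], dtype=complex)
X = np.array([[0, 1], [0, 0]], dtype=complex)
H = np.array([[1, 0], [0, -1]], dtype=complex)
assert np.allclose(comm(H, X), 2 * X)   # matches [S, A] = 2sA with s = 1
assert np.allclose(comm(H, Y), -2 * Y)  # matches [S, C] = -2sC
assert np.allclose(comm(X, Y), H)       # matches [A, C] = S

# su(2) generators (58) are skew-hermitian: g^dagger = -g.
su2 = [np.array([[0, 1], [-1, 0]], dtype=complex),
       np.array([[1j, 0], [0, -1j]]),
       np.array([[0, 1j], [1j, 0]])]
assert all(np.allclose(g.conj().T, -g) for g in su2)

# su(1,1) generators (60) are skew-hermitian with respect to J = diag(1, -1):
# g^dagger J + J g = 0.
J = np.diag([1.0, -1.0]).astype(complex)
su11 = [np.array([[0, 1], [1, 0]], dtype=complex),
        np.array([[1j, 0], [0, -1j]]),
        np.array([[0, 1j], [-1j, 0]])]
assert all(np.allclose(g.conj().T @ J + J @ g, 0 * J) for g in su11)
```

The change of real form is what flips the sign of ε: the formal adjoint is taken with respect to different inner products in the two cases.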

12 June 19: The word unitary has been removed. What we mean is that we are looking for unitarizable representations π.
13 June 19: The word unitary has been removed. What we mean is that we are looking for unitarizable representations π.

8.4. The Heisenberg algebra AND the Lie algebra su(1, 1). Let C, S, A be the triple from the discussion of the Heisenberg algebra. Set

(62) C̃ = (i/2)C², Ã = (i/2)A², S̃ = −(CA + 1/2).

Then this triple satisfies all the properties needed to define a unitarizable module for su(1, 1) provided that C, A, S come from a unitarizable representation of the Heisenberg algebra (verify #18).14 There are two independent highest weight vectors (not necessarily both of norm 1), namely v0 and C(v0), where v0 is the highest weight vector for the representation of the Heisenberg algebra (verify #19). More precisely, set

V_C^even = Span{C^n(v0) | n = 0, 2, 4, ···} and
V_C^odd = Span{C^n(v0) | n = 1, 3, 5, ···}.

Then these spaces are invariant under the su(1, 1) representation defined by the elements C̃, Ã, S̃ (verify #20). The vectors ṽ0 = v0, v̂0 = C(v0) fit into (43) for this triple. In particular, they are eigenvectors for S̃ with eigenvalues λ̃ and λ̂, respectively. This task is left for the reader (verify #21).
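The claims about the quadratic triple (62) can be checked in a convenient model: realize the Heisenberg triple on polynomials by letting C act as multiplication by x and A as d/dx, so that S = [A, C] = 1. This assumes Z acts as the scalar 1; the model is our choice and is not fixed by the notes. A sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')

# Polynomial model of the Heisenberg triple: C = x*, A = d/dx, S = [A, C] = 1.
C = lambda u: x * u
A = lambda u: sp.diff(u, x)

# The quadratic triple (62):
Ct = lambda u: sp.I / 2 * x**2 * u
At = lambda u: sp.I / 2 * sp.diff(u, x, 2)
St = lambda u: -(x * sp.diff(u, x) + u / 2)

def comm(P, Q, u):
    """[P, Q] applied to u."""
    return sp.expand(P(Q(u)) - Q(P(u)))

u = sum(sp.Symbol(f'a{k}') * x**k for k in range(6))  # generic degree-5 polynomial

assert comm(A, C, u) == sp.expand(u)       # the Heisenberg relation [A, C] = 1

# The tilde triple satisfies (42) with s = 1, as needed for su(1,1):
assert comm(St, At, u) == sp.expand(2 * At(u))
assert comm(St, Ct, u) == sp.expand(-2 * Ct(u))
assert comm(At, Ct, u) == sp.expand(St(u))

# Ct, At, St preserve the parity of a polynomial, so the even and odd spans
# V_C^even and V_C^odd are invariant:
assert sp.expand(Ct(x**3)) == sp.I / 2 * x**5
assert sp.expand(At(x**4)) == 6 * sp.I * x**2
```

In this model v0 = 1 and C(v0) = x, so the even and odd subspaces are spanned by the even and odd powers of x, matching the decomposition stated above.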

14 This is what is sometimes called The Harmonic Representation or The Metaplectic Representation. It is connected to the Stone-von Neumann Uniqueness Theorem and to the fact that Sp(1, R) acts by automorphisms on the Heisenberg algebra.

9. Higher dimensional examples, e.g. SU(3)

One is here led by the same sort of reasoning to complexification and to the operators

(63) e1 = [0 1 0; 0 0 0; 0 0 0], e2 = [0 0 0; 0 0 1; 0 0 0],
(64) f1 = [0 0 0; 1 0 0; 0 0 0], f2 = [0 0 0; 0 0 0; 0 1 0],
(65) h1 = [1 0 0; 0 −1 0; 0 0 0], h2 = [0 0 0; 0 1 0; 0 0 −1].

(There are two more generators of su(3), but they are not as important.) A finite-dimensional irreducible unitary representation (π, H_π) of SU(3) is given by a pair (n1, n2) of non-negative integers (and any such pair leads to such a representation); it is generated by a highest weight vector v0 satisfying

dπ(e1)v0 = dπ(e2)v0 = 0,
dπ(h1)v0 = n1·v0; dπ(h2)v0 = n2·v0,
H_π = Span{dπ(f1)^{r1} dπ(f2)^{s1} ··· dπ(f1)^{r_k} dπ(f2)^{s_k}(v0)}.

An irreducible representation of U(3) is also an irreducible representation of SU(3) and is simply given by such a representation tensored with a 1-dimensional representation of the form U(3) ∋ u ↦ (det u)^p for p ∈ Z. We here assume that p ≥ 0. One usually represents such a representation by a triple (p + n1 + n2, p + n2, p), where (n1, n2) gives the SU(3) representation. Furthermore, it is convenient to give it in the form of a Young Diagram with three rows of boxes with, respectively, p + n1 + n2, p + n2, and p boxes.
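Each pair (e_i, f_i, h_i) in (63)-(65) forms an sl(2)-triple of exactly the kind used in Section 8.2, which is why the highest-weight machinery carries over. A quick numerical check:

```python
import numpy as np

# The generators (63)-(65) as matrix units.
e1 = np.zeros((3, 3)); e1[0, 1] = 1
e2 = np.zeros((3, 3)); e2[1, 2] = 1
f1 = e1.T.copy()
f2 = e2.T.copy()
h1 = np.diag([1.0, -1.0, 0.0])
h2 = np.diag([0.0, 1.0, -1.0])

def comm(a, b):
    return a @ b - b @ a

# Each (e_i, f_i, h_i) is an sl(2)-triple, as in Section 8.2:
for e, f, h in ((e1, f1, h1), (e2, f2, h2)):
    assert np.allclose(comm(e, f), h)
    assert np.allclose(comm(h, e), 2 * e)
    assert np.allclose(comm(h, f), -2 * f)

# The two triples are compatible: [e1, f2] = 0 = [e2, f1].
assert np.allclose(comm(e1, f2), np.zeros((3, 3)))
assert np.allclose(comm(e2, f1), np.zeros((3, 3)))
```

So the eigenvalues n1, n2 of h1, h2 on a highest weight vector must each be non-negative integers, by the su(2) analysis applied to each triple separately.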

Figure 1. Young Diagram (9, 4, 2) corresponding to the irreducible representation of SU(3) given by (n1, n2) = (5, 2). As an irreducible representation of U(3) it has an extra factor of (det u)^2 (because of the 2 boxes in the third row).
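The translation between the three-row labelling and the SU(3) data (n1, n2) together with the determinant power p can be sketched as follows. The dimension formula used below is the standard Weyl dimension formula for su(3); it is a well-known fact but is not derived in these notes.

```python
def triple_to_su3(row1, row2, row3):
    """Recover (n1, n2, p) from a three-row Young diagram (p+n1+n2, p+n2, p)."""
    p = row3
    n2 = row2 - row3
    n1 = row1 - row2
    return n1, n2, p

def dim_su3(n1, n2):
    """Weyl dimension formula for the su(3) irrep with highest weight (n1, n2):
    dim = (n1+1)(n2+1)(n1+n2+2)/2  (standard fact, not derived here)."""
    return (n1 + 1) * (n2 + 1) * (n1 + n2 + 2) // 2

# Figure 1: the diagram (9, 4, 2) gives (n1, n2) = (5, 2) and det-power p = 2.
assert triple_to_su3(9, 4, 2) == (5, 2, 2)

# Familiar sanity checks: the defining and adjoint representations.
assert dim_su3(1, 0) == 3
assert dim_su3(1, 1) == 8
```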