
Group Theory (PHYS 507) Solution Set #8 5/26/17

1. Commutators of so(5) from its root diagram. In HW7 #3 you constructed the Cartan subalgebra and positive roots for the Lie algebra so(5) (two simple roots at an angle of 135◦). Use the resulting root diagram to determine all the nonvanishing commutators involving E±α and Hi, as we did in lecture for su(3). In this way you are determining the structure constants of the algebra. We recall from HW7 #3 that there are four positive roots:

α(1) = (0, 1) , α(2) = (1, −1) , β = α(1) + α(2) = (1, 0) , γ = 2α(1) + α(2) = (1, 1) ,

and so 10 roots in all (four positive, four negative and two Cartan generators). The root diagram is
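As a quick sanity check, the root data above can be verified numerically (a small sketch; the arrays simply encode the roots just listed):

```python
import numpy as np

# simple roots of so(5) as given above
a1 = np.array([0.0, 1.0])
a2 = np.array([1.0, -1.0])
beta, gamma = a1 + a2, 2*a1 + a2

# the angle between the simple roots is 135 degrees
cos = (a1 @ a2) / np.sqrt((a1 @ a1) * (a2 @ a2))
assert np.isclose(np.degrees(np.arccos(cos)), 135.0)

# four positive roots plus their negatives: eight nonzero roots in all
pos = [tuple(r) for r in (a1, a2, beta, gamma)]
roots = set(pos) | {tuple(-np.asarray(r)) for r in pos}
assert len(roots) == 8
```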

[Root diagram: the eight nonzero roots ±α(1), ±α(2), ±β, ±γ, with α(1) pointing along the positive 2-axis, β along the positive 1-axis, and α(2), γ along the diagonals.]

Next we need to determine all the commutators, i.e. the Lie algebra. We know the commutators involving the (two) generators of the Cartan subalgebra:

[Hi,Hj] = 0 , [Hi,E±φ] = ±φiE±φ ,

with φ any positive root. We also know that

[Eφ,E−φ] = φ · H.

What remains to be determined are [Eφ, Eφ′] and [E−φ, Eφ′] (with φ ≠ φ′). All other commutators can then be obtained by hermitian conjugation.

From the root diagram we know that certain commutators must vanish (because the sum of the corresponding roots is not itself a root):

[Eα(1) ,Eγ] = [Eα(2) ,Eβ] = [Eα(2) ,Eγ] = [E−α(1) ,Eα(2) ] = [E−α(2) ,Eγ] = 0 .
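These vanishings can be checked mechanically: in each case the sum of the two roots is not itself a root (a small numerical sketch):

```python
import numpy as np

a1, a2 = np.array([0, 1]), np.array([1, -1])
beta, gamma = a1 + a2, 2*a1 + a2
pos = [a1, a2, beta, gamma]
roots = {tuple(r) for r in pos} | {tuple(-r) for r in pos}

# pairs whose commutators vanish because root1 + root2 is not a root
pairs = [(a1, gamma), (a2, beta), (a2, gamma), (-a1, a2), (-a2, gamma)]
for r1, r2 in pairs:
    assert tuple(r1 + r2) not in roots
```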

The remaining commutators are non-zero. These are [Eα(1), Eα(2)], [Eα(1), Eβ], [E−α(1), Eβ], [E−α(1), Eγ], [E−α(2), Eβ], and [E−β, Eγ]. These can be calculated using the properties of the raising and lowering operators in the SU(2) subgroups, as well as the Jacobi identity.

First note that Eα(1) = E+ for its SU(2), since |α(1)| = 1. Roots α(2), β and γ form a triplet under this SU(2), and we can choose the phases of the states in the adjoint irrep so that

|Eα(2)⟩ = |1, −1⟩ , |Eβ⟩ = |1, 0⟩ , |Eγ⟩ = |1, 1⟩ .

We also need the matrix elements of E± for the triplet irrep (which we wrote down earlier in lecture):

E+|1, −1⟩ = |1, 0⟩ , E+|1, 0⟩ = |1, 1⟩ , E−|1, 1⟩ = |1, 0⟩ , E−|1, 0⟩ = |1, −1⟩ .
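These matrix elements can be confirmed with explicit spin-1 matrices, assuming the lecture normalization E± = (J1 ± iJ2)/√2, in which all the triplet coefficients are unity:

```python
import numpy as np

# spin-1 basis ordered |1,1>, |1,0>, |1,-1>
Jp = np.array([[0, np.sqrt(2), 0],
               [0, 0, np.sqrt(2)],
               [0, 0, 0]])
Ep = Jp/np.sqrt(2)   # E+ = (J1 + i J2)/sqrt(2), assuming the lecture normalization
Em = Ep.T            # E- = (E+)^dagger

up, zero, down = np.eye(3)
assert np.allclose(Ep @ down, zero) and np.allclose(Ep @ zero, up)
assert np.allclose(Em @ up, zero) and np.allclose(Em @ zero, down)
```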

Now we are ready to proceed.

• Raising α(2) with α(1) we obtain

|[Eα(1), Eα(2)]⟩ = Eα(1)|Eα(2)⟩ = E+|1, −1⟩ = |1, 0⟩ = |Eβ⟩ ,

so that [Eα(1), Eα(2)] = Eβ.

• Similarly, raising β with α(1) we obtain

|[Eα(1), Eβ]⟩ = Eα(1)|Eβ⟩ = E+|1, 0⟩ = |1, 1⟩ = |Eγ⟩ ,

so that [Eα(1), Eβ] = Eγ.

• Next, lowering β with −α(1) we find

|[E−α(1), Eβ]⟩ = E−α(1)|Eβ⟩ = E−|1, 0⟩ = |1, −1⟩ = |Eα(2)⟩ ,

so that [E−α(1), Eβ] = Eα(2). This can also be obtained using the Jacobi identity after substituting Eβ = [Eα(1), Eα(2)].

• Lowering γ with −α(1) we find

|[E−α(1), Eγ]⟩ = E−α(1)|Eγ⟩ = E−|1, 1⟩ = |1, 0⟩ = |Eβ⟩ ,

so that [E−α(1) ,Eγ] = Eβ. This can also be obtained using the Jacobi identity.

• Next we consider [E−α(2), Eβ]. This must equal c × Eα(1), and using lowering operators as above one finds that |c| = 1. We do not have the freedom, however, to choose the phase here, since all phases have been fixed. To determine c we must use the Jacobi identity:

[E−α(2), Eβ] = [E−α(2), [Eα(1), Eα(2)]]
             = −[Eα(1), [Eα(2), E−α(2)]] − [Eα(2), [E−α(2), Eα(1)]]
             = α(1) · α(2) Eα(1)
             = −Eα(1) ,

where the second term on the second line vanishes because α(1) − α(2) is not a root, and I have used [Eα(2), E−α(2)] = α(2) · H and α(1) · α(2) = −1.

• Finally, by a similar calculation, we obtain

[E−β,Eγ] = [E−β, [Eα(1) ,Eβ]]

= −[Eα(1), [Eβ, E−β]] − [Eβ, [E−β, Eα(1)]]
= β · α(1) Eα(1) − [Eβ, E−α(2)]
= −Eα(1) ,

where I have used [E−β, Eα(1)] = E−α(2) (the hermitian conjugate of [E−α(1), Eβ] = Eα(2)), the result [E−α(2), Eβ] = −Eα(1) from the previous bullet, and β · α(1) = α(2) · α(1) + α(1) · α(1) = 0.

2. Fun with so(2N). In class we determined the roots of the Lie algebra so(2N) by taking differences between weights of the fundamental irrep. We found that they were

±ĵ ± k̂ ≡ ±ej ± ek , j < k ,

where ĵ ≡ ej are unit vectors lying along one of the N axes of a Cartesian basis for the Cartan subalgebra (ej is Georgi's notation). Note that the two "±" symbols are independent, so that, for example, for so(4) there are four roots. What we did not do in class, however, was determine the generators corresponding to these roots.

(a) For the case of so(4), determine which linear combinations of generators correspond to the four roots. (Recall that SO(4) generators are imaginary and antisymmetric, and that a standard basis for the Cartan subalgebra is, in 2 × 2 block notation, H1 = diag(σ2, 0) and H2 = diag(0, σ2). The roots can in general be complex linear combinations of the generators.) The four remaining generators are the imaginary antisymmetric matrices M with entries (M)jk = −i and (M)kj = +i for (j, k) = (1, 3), (1, 4), (2, 3) and (2, 4).

Taking (complex) linear combinations one can write them in the form

EB = (  0    B )
     ( −B^T  0 ) ,

in which B is a linear combination of the four matrices 1 and σi. Noting that

[H1, EB] = E(σ2B) ,    [H2, EB] = −E(Bσ2) ,

(where E(X) denotes the matrix above with B replaced by X) we see we must choose B's that are both left and right eigenvectors of σ2, i.e. σ2B = λB and Bσ2 = ρB (with λ and ρ not necessarily the same); then EB corresponds to the root (λ, −ρ). We know that the eigenvalues must be ±1 from the given values for the roots. It is easy to see that the appropriate B's are (1 ± σ2) (with left and right eigenvalues ±1) and (σ3 ± iσ1) (with left eigenvalue ±1 and right eigenvalue ∓1). Putting this together, one finds the generators corresponding to the four roots have B's as follows:

root = (+1, +1) : B = σ3 + iσ1 ,

root = (+1, −1) : B = 1 + σ2 ,

root = (−1, +1) : B = 1 − σ2 ,

root = (−1, −1) : B = σ3 − iσ1 .
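One can verify this table directly: with H1, H2 and EB built as 4 × 4 block matrices, each EB is an eigenvector of commutation with the Cartan generators, with eigenvalues equal to the listed root (a numerical sketch):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

H1 = np.block([[s2, Z2], [Z2, Z2]])
H2 = np.block([[Z2, Z2], [Z2, s2]])

def E(B):
    # generator with B in the upper-right 2x2 block, -B^T below
    return np.block([[Z2, B], [-B.T, Z2]])

def comm(A, B):
    return A @ B - B @ A

# root -> B, as in the table above
table = {(1, 1): s3 + 1j*s1, (1, -1): I2 + s2,
         (-1, 1): I2 - s2, (-1, -1): s3 - 1j*s1}

for (r1, r2), B in table.items():
    EB = E(B)
    assert np.allclose(comm(H1, EB), r1*EB)
    assert np.allclose(comm(H2, EB), r2*EB)
```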

(b) Sketch briefly how your result generalizes to so(2N), with N > 2.

Divide the matrices into an N × N array of 2 × 2 blocks. The Hj, j = 1, . . . , N, then have a σ2 in the j'th diagonal block. The Eα have one of the same four B matrices in the off-diagonal jk'th block (j < k), and −B^T in the kj'th block. The root vectors corresponding to the four choices of B will be as in so(4), but with the non-zero components being the j'th and k'th ones.
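A quick numerical check of this generalization for so(6) (N = 3), placing the B corresponding to the root (+1, +1) in the (j, k) = (1, 3) block (the block indices are 0-based in the code):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
N = 3  # so(6)

def H(j):
    # sigma2 in the j'th diagonal 2x2 block
    M = np.zeros((2*N, 2*N), dtype=complex)
    M[2*j:2*j+2, 2*j:2*j+2] = s2
    return M

def E(j, k, B):
    # B in the (j,k) block, -B^T in the (k,j) block
    M = np.zeros((2*N, 2*N), dtype=complex)
    M[2*j:2*j+2, 2*k:2*k+2] = B
    M[2*k:2*k+2, 2*j:2*j+2] = -B.T
    return M

def comm(A, B):
    return A @ B - B @ A

B = s3 + 1j*s1        # the B for root (+1, +1) found in part (a)
Ejk = E(0, 2, B)      # blocks j = 1, k = 3 (0-based here)
root = [1, 0, 1]      # expected root: +1 in the j'th and k'th slots
for i in range(3):
    assert np.allclose(comm(H(i), Ejk), root[i]*Ejk)
```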

3. More fun with so(4). The Dynkin diagram for so(4) is decomposable: it consists of two unattached circles. Thus it is the same algebra as that for SU(2) × SU(2). This decomposition is clear from the root diagram of so(4). The two su(2)'s are generated by the two simple roots, i.e. α(1) = 1̂ + 2̂ and α(2) = 1̂ − 2̂. Since these are orthogonal, it follows that the corresponding generators commute: [E±α(1), E±α(2)] = 0 (where the two ± are independent). The Cartan subalgebra decomposes into α(1) · H and α(2) · H.

This decomposition of the Lie algebra of SO(4) is very useful for determining its irreps, since we know those of SU(2). In this problem we see how rotations of 4-d vectors can be parametrized by two SU(2) transformations. An SO(4) transformation rotates vµ (a real Euclidean four-vector) into another one v′µ by

an orthogonal matrix with unit determinant. We can package four-vectors into 2 × 2 matrices by contracting them with a four-vector of matrices:

V ≡ Σ(µ=1..4) vµ σµ , σµ = (iσ1, iσ2, iσ3, 1) .

This is nothing other than a (matrix representation of a) quaternion.

(a) Since V has only four real parameters, whereas a general 2 × 2 matrix has eight, V must be constrained in some way. In fact V = |v|U, where U ∈ SU(2), and |v| = √(Σµ vµ²). Show this result.

The result V = |v|U is equivalent to V/|v| being unitary and having unit determinant. First we calculate det V, which can be done directly using the explicit forms of the Pauli matrices:

det V = det ( v4 + iv3    iv1 + v2 )  =  Σµ vµ² ≡ v² .
            ( iv1 − v2    v4 − iv3 )

Thus det(V/|v|) = (det V)/v² = 1, as desired. Note that this is true irrespective of whether the vµ are real or complex.

Next we check unitarity by direct computation, using σaσb = δab 1 + iεabc σc (which implies σaσb + σbσa = 2δab 1) and σa† = σa:

V†V / det V = (1/v²)(v4 − i v⃗ · σ⃗)(v4 + i v⃗ · σ⃗) = (v²/v²) 1 = 1 .
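Both properties are easy to confirm numerically for a random real four-vector (a small sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
s = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]])]
sigma = [1j*s[0], 1j*s[1], 1j*s[2], np.eye(2)]   # sigma_mu as above

v = rng.normal(size=4)                  # a random real four-vector
V = sum(c*m for c, m in zip(v, sigma))
v2 = v @ v

assert np.isclose(np.linalg.det(V), v2)            # det V = v^2
assert np.allclose(V.conj().T @ V, v2*np.eye(2))   # so V/|v| is unitary
```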

This does rely on the vµ being real. Thus V/|v| is unitary. To summarize, the properties of V are that det V is real and positive semi-definite and that V/√(det V) ∈ SU(2).

(b) Now we perform the transformation V → V′ = LVR†, with L ∈ SU(2)L and R ∈ SU(2)R. Show that V′ = Σµ v′µ σµ with v′µ real and (v′)² = v². Thus the length of the vector is maintained, corresponding to a rotation or reflection. Give an argument that it must be a rotation, i.e. that the corresponding O(4) matrix has unit determinant.

We will need the converse of the result from the previous part: namely that rU, with r ≥ 0 real and U ∈ SU(2), can always be written as Σµ vµσµ (with v² = r²). This is because a general SU(2) matrix can be written

U = exp(iθ n̂ · σ⃗) = cos θ + i sin θ n̂ · σ⃗ ,

i.e. in our general form but with the four-vector

ṽ = (sin θ n̂, cos θ) ,

which is of unit length. Thus rU is also of this form, with v = rṽ, so that v² = r². Q.E.D.

Thus (almost) all we need to show is that det V′ = det V = v² and that V′/√(det V′) is unitary, or, equivalently, V′†V′ = v²1. These are easy to show: first, det V′ = det L det V det R† = det V = v²; second, V′†V′ = R V†L†L V R† = v²1.

Finally, we need to rule out orthogonal transformations with determinant −1. The point here is that the SU(2)L and SU(2)R transformations are continuously connected to the identity. Since the determinant must, on the one hand, vary smoothly with the parameters of these transformations, and, on the other, can only take on the values ±1, it must remain at that for the identity transformation, namely +1.

(c) Work out the transformed vector v′ both for L = R = iσ3 and for L = R† = iσ3. That these correspond to different transformations is an indication of the fact that SU(2)L and SU(2)R induce independent rotations.

If L = R = iσ3, then

V′ = iσ3 (vµσµ)(−iσ3) = −iv1σ1 − iv2σ2 + iv3σ3 + v4 1 ,

so v′ = (−v1, −v2, v3, v4). This corresponds to a rotation by π in the 1−2 plane.

If L = R† = iσ3, then

V″ = iσ3 (vµσµ)(iσ3) = iv1σ1 + iv2σ2 − iv3σ3 − v4 1 ,

so v″ = −v′ = (v1, v2, −v3, −v4). This corresponds to a rotation by π in the 3−4 plane.

(d) Does the equivalence of Lie algebras of SO(4) and SU(2) × SU(2) also hold for the groups, i.e. does SO(4) ≅ SU(2) × SU(2)? Explain.

No, the groups differ by the familiar "factor of 2". We can see this from V → LVR†: if L = R = −1 (which is a non-trivial SU(2) × SU(2) element) then V is unchanged (so the SO(4) matrix is trivial, i.e. the identity). More generally, the two SU(2) × SU(2) elements (L, R) and (−L, −R) lead to the same SO(4) transformation. In fact

SO(4) ≅ ( SU(2) ⊗ SU(2) ) / Z2 .

(The following is not required for the solution.) It follows that, for an irrep of SU(2) × SU(2) [which can be labeled (jL, jR)] to be an irrep of SO(4), one must have jL + jR equal to an integer.
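The part (c) rotations, and the (L, R) ↔ (−L, −R) degeneracy of part (d), can be checked by packing and unpacking four-vectors numerically (a sketch; pack/unpack are helper names introduced here):

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def pack(v):
    # V = i*v1*sigma1 + i*v2*sigma2 + i*v3*sigma3 + v4*1
    return 1j*(v[0]*s[0] + v[1]*s[1] + v[2]*s[2]) + v[3]*np.eye(2)

def unpack(V):
    # invert pack() using tr(sigma_j sigma_k) = 2 delta_jk
    comps = [np.trace(sk @ V)/2j for sk in s] + [np.trace(V)/2]
    return np.real(np.array(comps))

v = np.array([1.0, 2.0, 3.0, 4.0])
L = 1j*s[2]

# L = R = i*sigma3: rotation by pi in the 1-2 plane
Vp = L @ pack(v) @ L.conj().T
assert np.allclose(unpack(Vp), [-1.0, -2.0, 3.0, 4.0])

# L = R^dagger = i*sigma3, i.e. R = -i*sigma3: rotation by pi in the 3-4 plane
R = -1j*s[2]
Vpp = L @ pack(v) @ R.conj().T
assert np.allclose(unpack(Vpp), [1.0, 2.0, -3.0, -4.0])

# (L, R) and (-L, -R) give the same SO(4) transformation
assert np.allclose((-L) @ pack(v) @ (-R).conj().T, Vpp)
```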

4. (Georgi problem 19.A) Consider the 36 matrices

σa, τa, ηa, σaτbηc , where σ, τ and η are independent sets of Pauli matrices. Show that these matrices form a Lie algebra, and find the roots, simple roots and the Dynkin diagram. What is the algebra?

There are 3 + 3 + 3 + 3³ = 36 elements of the proposed Lie algebra. We first need to show that these elements close under commutation. We will use repeatedly the multiplication rule for Pauli matrices:

σaσb = δab 1 + iεabc σc .

The σa, τa and ηa clearly close since they form an SU(2) ⊗ SU(2) ⊗ SU(2) Lie algebra. Thus one has to consider

[σd, σaτbηc] = 2i εdae σeτbηc ,

which closes, and

[σaτbηc, σa′τb′ηc′] = −2i εaa′a″ εbb′b″ εcc′c″ σa″τb″ηc″
                      + 2i εaa′a″ δbb′ δcc′ σa″ + 2i δaa′ εbb′b″ δcc′ τb″ + 2i δaa′ δbb′ εcc′c″ ηc″ ,

which also closes. So the algebra is closed. Note also that the generators are hermitian, so that the structure constants must be real, and indeed they are.

The next step is to find the Cartan subalgebra: a maximal set of commuting generators. I say "a" rather than "the" since there are several choices, which are related to one another by similarity transformations. The standard choice is to pick σ3 to be diagonal, in which case the Cartan subalgebra is
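The triple-product commutator can be verified by brute force on 8 × 8 Kronecker products; in this normalization the structure constants come out as ±2i, just as for a single set of Pauli matrices:

```python
import numpy as np
from itertools import product

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I = np.eye(2, dtype=complex)

def eps(i, j, k):
    # Levi-Civita symbol for indices 0, 1, 2
    return (i - j)*(j - k)*(k - i)/2

def triple(A, B, C):
    return np.kron(np.kron(A, B), C)

G = {(a, b, c): triple(s[a], s[b], s[c])
     for a, b, c in product(range(3), repeat=3)}

for a, b, c, a2, b2, c2 in product(range(3), repeat=6):
    lhs = G[a, b, c] @ G[a2, b2, c2] - G[a2, b2, c2] @ G[a, b, c]
    # -2i eps eps eps (triple term) + 2i eps delta delta (single terms)
    rhs = sum(-2j*eps(a, a2, x)*eps(b, b2, y)*eps(c, c2, z)*G[x, y, z]
              for x, y, z in product(range(3), repeat=3))
    rhs = rhs + sum(2j*eps(a, a2, x)*(b == b2)*(c == c2)*triple(s[x], I, I)
                    for x in range(3))
    rhs = rhs + sum(2j*(a == a2)*eps(b, b2, y)*(c == c2)*triple(I, s[y], I)
                    for y in range(3))
    rhs = rhs + sum(2j*(a == a2)*(b == b2)*eps(c, c2, z)*triple(I, I, s[z])
                    for z in range(3))
    assert np.allclose(lhs, rhs)
```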

H = {σ3, τ3, η3, σ3τ3η3} .

The rank of the algebra is thus 4. Next we want to find the roots. Following our experience with SU(2) we introduce σ± = (1/√2)(σ1 ± iσ2), and similarly for τ± and η±. Some of the commutation relations we need are

[σ3, σ±] = ±2σ± , [σ3, σ±τ3η3] = ±2σ±τ3η3 ,

[σ3τ3η3, σ±] = ±2σ±τ3η3 , [σ3τ3η3, σ±τ3η3] = ±2σ± .

From this we learn that (σ± ±′ σ±τ3η3) are eigenvectors under commutation with all elements of H, with the corresponding root vectors being

(±2, 0, 0, ±±′2) , [from (σ± ±′ σ±τ3η3)] .

Here ± and ±′ are independent, so there are four roots. Similarly we find two more sets of four roots by using τ or η in place of σ:

(0, ±2, 0, ±±′2) , [from (τ± ±′ σ3τ±η3)] ,
(0, 0, ±2, ±±′2) , [from (η± ±′ σ3τ3η±)] .

Twelve more roots are obtained from σ±τ±′η3 and permutations:

(±2, ±′2, 0, 0) , [from σ±τ±′η3] ,
(±2, 0, ±′2, 0) , [from σ±τ3η±′] ,
(0, ±2, ±′2, 0) , [from σ3τ±η±′] .

Finally, one has eight roots of the form σ±τ±′η±″. The only non-trivial commutator is

[σ3τ3η3, σ±τ±′η±″] = ±±′±″ 2 σ±τ±′η±″ ,

so that the roots are

(±2, ±′2, ±″2, ±±′±″2) , [from σ±τ±′η±″] .

Next we want the positive roots. There are 16 of these:

(2, ±2, 0, 0) , (2, 0, ±2, 0) , (2, 0, 0, ±2) ,
(0, 2, ±2, 0) , (0, 2, 0, ±2) , (0, 0, 2, ±2) ,
(2, 2, 2, 2) , (2, 2, −2, −2) , (2, −2, −2, 2) , (2, −2, 2, −2) .

From these, it is a tedious but ultimately mechanical exercise to figure out which are simple, i.e. which cannot be written as a sum of other positive roots. The answer is

(2, −2, −2, 2) , (0, 0, 2, −2) , (0, 2, −2, 0) , and (0, 0, 2, 2) .
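That these four are the simple roots can be confirmed mechanically: a positive root is simple precisely when it cannot be written as a sum of two positive roots:

```python
import numpy as np

# the 16 positive roots found above
pos = [(2, 2, 0, 0), (2, -2, 0, 0), (2, 0, 2, 0), (2, 0, -2, 0),
       (2, 0, 0, 2), (2, 0, 0, -2), (0, 2, 2, 0), (0, 2, -2, 0),
       (0, 2, 0, 2), (0, 2, 0, -2), (0, 0, 2, 2), (0, 0, 2, -2),
       (2, 2, 2, 2), (2, 2, -2, -2), (2, -2, -2, 2), (2, -2, 2, -2)]

# a positive root is simple iff it is not a sum of two positive roots
sums = {tuple(np.add(p, q)) for p in pos for q in pos}
simple = sorted(p for p in pos if p not in sums)
assert simple == sorted([(2, -2, -2, 2), (0, 0, 2, -2),
                         (0, 2, -2, 0), (0, 0, 2, 2)])
```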

The angles between the simple roots (ordered this way) are θ12 = 135°, θ23 = θ34 = 120°, with all other angles being 90°. The first simple root is √2 times longer than the others. Thus the Dynkin diagram is

which we recognize as corresponding to C4 = Sp(8).

5. In HW7 #4 (Georgi problem 8.C) you determined all the roots, and Dynkin coefficients, for the algebra B3 = so(7) (although you did not know that was its name at the time). Its Dynkin diagram is

This problem concerns the algebra C3 = sp(6), which also has three simple roots, but now two of them are shorter rather than just one, so its Dynkin diagram is

(a) Determine the Cartan matrix, roots and Dynkin coefficients for C3.

From the Dynkin diagram we find the Cartan matrix to be

Aij = 2 αi · αj / |αj|²  =  (  2  −1   0 )
                            ( −1   2  −1 )     (1)
                            (  0  −2   2 ) .

The positive roots, with their Dynkin coefficients, are (grouped by level):

level 1: α1 = (2, −1, 0) , α2 = (−1, 2, −1) , α3 = (0, −2, 2) ;
level 2: α1 + α2 = (1, 1, −1) , α2 + α3 = (−1, 0, 1) ;
level 3: α1 + α2 + α3 = (1, −1, 1) , 2α2 + α3 = (−2, 2, 0) ;
level 4: α1 + 2α2 + α3 = (0, 1, 0) ;
level 5: 2α1 + 2α2 + α3 = (2, 0, 0) .

There are the same number as for B3, though they are different.

For comparison we recall the results for B3. The Cartan matrix is

Aij = 2 αi · αj / |αj|²  =  (  2  −1   0 )
                            ( −1   2  −2 )     (2)
                            (  0  −1   2 ) ,

while the positive roots, with their Dynkin coefficients, are (grouped by level):

level 1: α1 = (2, −1, 0) , α2 = (−1, 2, −2) , α3 = (0, −1, 2) ;
level 2: α1 + α2 = (1, 1, −2) , α2 + α3 = (−1, 1, 0) ;
level 3: α1 + α2 + α3 = (1, 0, 0) , α2 + 2α3 = (−1, 0, 2) ;
level 4: α1 + α2 + 2α3 = (1, −1, 2) ;
level 5: α1 + 2α2 + 2α3 = (0, 1, 0) .

(b) Determine the root diagrams for B3 and C3, and describe how they differ geometrically. To do this you will need to pick a basis for the Cartan subalgebras, so that you can determine explicit root vectors. [In this way we see how the Cartan matrix construction leads to the same root vectors as we obtained in lecture starting from the definitions of the groups SO(n) and Sp(2n).]

We start with B3. Recall we can rescale the overall root lengths and orientations by changing basis in the Cartan subalgebra. From the Dynkin diagram and previous work we know that α1 · α1 = α2 · α2 = 2 α3 · α3, so it is convenient to choose the long roots to have length-squared α1 · α1 = 2, implying α3 · α3 = 1. Then we have α1 · α2 = −1 = α2 · α3 while α1 · α3 = 0. One possible orientation of the simple roots is

α1 = (1, −1, 0) , α2 = (0, 1, −1) , α3 = (0, 0, 1) . (3)

Then the other positive roots are (grouped by level)

α1 + α2 = (1, 0, −1) , α2 + α3 = (0, 1, 0) , (4)

α1 + α2 + α3 = (1, 0, 0) , α2 + 2α3 = (0, 1, 1) , (5)

α1 + α2 + 2α3 = (1, 0, 1) , (6)

α1 + 2α2 + 2α3 = (1, 1, 0) . (7)

Including the negative roots, altogether we obtain

±ĵ and ±ĵ ± k̂ (j < k) , (8)

as well as the three roots at the origin.

Now we turn to C3. In this case the third root is longer so we can choose α1 · α1 = α2 · α2 = 2 and α3 · α3 = 4, implying α1 · α2 = −1, α2 · α3 = −2 while α1 · α3 = 0. One solution is

α1 = (1, −1, 0) , α2 = (0, 1, −1) , α3 = (0, 0, 2) . (9)
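With these explicit vectors (and those chosen for B3 above) one can confirm the two Cartan matrices numerically:

```python
import numpy as np

# simple roots in the orthonormal Cartan basis chosen above
B3 = [np.array([1., -1., 0.]), np.array([0., 1., -1.]), np.array([0., 0., 1.])]
C3 = [np.array([1., -1., 0.]), np.array([0., 1., -1.]), np.array([0., 0., 2.])]

def cartan(simple):
    # A_ij = 2 alpha_i . alpha_j / |alpha_j|^2
    return [[2*np.dot(ai, aj)/np.dot(aj, aj) for aj in simple]
            for ai in simple]

assert np.array_equal(cartan(B3), [[2, -1, 0], [-1, 2, -2], [0, -1, 2]])
assert np.array_equal(cartan(C3), [[2, -1, 0], [-1, 2, -1], [0, -2, 2]])
```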

Then the other positive roots are (grouped by level)

α1 + α2 = (1, 0, −1) , α2 + α3 = (0, 1, 1) , (10)

α1 + α2 + α3 = (1, 0, 1) , 2α2 + α3 = (0, 2, 0) , (11)

α1 + 2α2 + α3 = (1, 1, 0) , (12)

2α1 + 2α2 + α3 = (2, 0, 0) . (13)

Including the negative roots, altogether we obtain

±2ĵ and ±ĵ ± k̂ (j < k) , (14)

as well as the three roots at the origin. The only difference between the two root systems is that the on-axis roots are twice as long in C3 as in B3. This means that the non-zero roots lie on the surface of a cube for B3 and of an octahedron ("diamond") for C3. These are not connected by a rotation/rescaling in 3-d, whereas the corresponding shapes are connected in 2-d (which is why B2 = C2).
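The two root systems, and the counting, can be tabulated directly (a small sketch):

```python
import numpy as np
from itertools import combinations, product

e = np.eye(3)
off_axis = [s1*e[j] + s2*e[k] for j, k in combinations(range(3), 2)
            for s1, s2 in product((1, -1), repeat=2)]
b3 = [s*e[j] for j in range(3) for s in (1, -1)] + off_axis
c3 = [2*s*e[j] for j in range(3) for s in (1, -1)] + off_axis

# 18 nonzero roots each; with the 3 Cartan generators, 21 = dim so(7) = dim sp(6)
assert len(b3) == len(c3) == 18
# the off-axis roots agree; the on-axis roots of C3 are twice those of B3
assert {tuple(r) for r in off_axis} <= {tuple(r) for r in c3}
assert all(np.isclose(r @ r, 4) for r in c3 if np.count_nonzero(r) == 1)
assert all(np.isclose(r @ r, 1) for r in b3 if np.count_nonzero(r) == 1)
```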
