
Notes for Lie Theory

Ting Gong

Started Feb. 7, 2020. Last revision March 23, 2020.

Contents

1 Matrix Analysis
  1.1 Matrix groups
  1.2 Topological Properties
  1.3 Lie groups and homomorphisms
  1.4 Matrix Exponential
  1.5 Matrix Logarithm
  1.6 Polar Decomposition and More

2 Lie Algebras and Representations
  2.1 Definitions
  2.2 Lie Algebras of Matrix Lie Groups
  2.3 Mixed Topics: Homomorphisms and Complexification

Chapter 1

Matrix Analysis

1.1 Matrix groups

Last semester, in our reading group on differential geometry, we analyzed some common Lie groups by treating them as embedded submanifolds, and we calculated their dimensions via the implicit function theorem. Now we discuss an algebraic approach to Lie groups. We assume familiarity with basic group theory, and we consider Lie groups as topological groups, namely, groups equipped with a topology. This will be used in matrix analysis.

Definition 1.1.1. Let M(n, k) be the set of n × n matrices with entries in k, where k is a field (such as C or R). Let GL(n, k) ⊂ M(n, k) be the set of n × n invertible matrices with entries in k. We call GL(n, k) the general linear group over k.

Remark 1.1.2. Now we let k = C, and we can consider M(n, C) as C^{n²} with the Euclidean topology. Thus we can define the basic notions of analysis.

Definition 1.1.3. Let {A_m} ⊂ M(n, C) be a sequence. Then A_m converges to A if each entry of A_m converges to the corresponding entry of A in the Euclidean metric.

Definition 1.1.4. A matrix Lie group is a closed subgroup G of GL(n, C). That is, if A_m is a convergent sequence of matrices in G, then either its limit A lies in G or A is not invertible.

Example 1.1.5. Consider GL(n, C) itself: it is a matrix Lie group, since any convergent sequence in it converges either to an invertible matrix or to a non-invertible one.

Example 1.1.6. We define SL(n, k) to be the group of n × n invertible matrices with entries in k and determinant 1. If we have a sequence in SL(n, C) converging to A, then det(A) = 1 by continuity of the determinant, so A ∈ SL(n, C). Thus SL(n, C) is a matrix Lie group.

Definition 1.1.7. An n × n complex matrix A is unitary if A*A = I, where A* is the adjoint of A, the complex conjugate of the transpose of A. This is equivalent to A* = A^{-1}.

Example 1.1.8. The group of unitary matrices is called the unitary group, denoted U(n). The group of unitary matrices with determinant 1 is called the special unitary group, denoted SU(n). Let {A_m} be a convergent sequence in U(n) converging to A; then A_m^{-1} → A^{-1} and A_m* → A*. Since A_m* = A_m^{-1} and limits are unique, A* = A^{-1}. Thus U(n) is a closed subgroup of GL(n, C), hence a matrix Lie group. And SU(n) = U(n) ∩ SL(n, C), so SU(n) is closed and a subgroup of GL(n, C), hence a matrix Lie group as well.

Now let ⟨·, ·⟩ be the standard inner product on C^n, given by ⟨x, y⟩ = Σ_i x̄_i y_i. Then the condition that A be unitary is equivalent to ⟨Ax, Ay⟩ = ⟨x, y⟩ for all x, y.

Proposition 1.1.9. If A is unitary, | det(A)| = 1.

Proof. det(A*A) = det(I) = 1, and det(A*A) = det(A*) det(A) = |det(A)|², since det(A*) is the complex conjugate of det(A). □
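These conditions are easy to check numerically. Below is a minimal sketch (assuming numpy is available; building a unitary from the QR factorization of a random complex matrix is a standard trick, not part of the notes' development):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# A random complex matrix; its QR factorization yields a unitary Q.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, _ = np.linalg.qr(A)

print(np.allclose(Q.conj().T @ Q, np.eye(n)))   # Q*Q = I: True
print(np.isclose(abs(np.linalg.det(Q)), 1.0))   # |det Q| = 1: True
```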

Example 1.1.10. An n × n real matrix A is called orthogonal if A^T = A^{-1}. Thus it preserves the inner product on R^n, as unitary matrices do, and by the proposition above, det(A) = ±1. The group of all orthogonal matrices is called the orthogonal group, denoted O(n), and the subgroup of those with determinant 1 is called the special orthogonal group, denoted SO(n). Geometrically, O(n) consists of rotations and reflections, and SO(n) consists of rotations. These are matrix Lie groups, which can be checked as for the unitary groups.

Now we define a bilinear form (·, ·) on C^n by (x, y) = Σ_i x_i y_i (the dot product, without conjugation).

Example 1.1.11. The group of n × n complex matrices which preserve the above bilinear form is the complex orthogonal group, O(n, C), which is also a matrix Lie group. Similarly we have another matrix Lie group, SO(n, C).

Example 1.1.12. Let n, k ∈ N, and define a symmetric bilinear form [·, ·]_{n,k} on R^{n+k} by [x, y]_{n,k} = x_1 y_1 + ... + x_n y_n − x_{n+1} y_{n+1} − ... − x_{n+k} y_{n+k}. The set of (n + k) × (n + k) real matrices A such that [Ax, Ay]_{n,k} = [x, y]_{n,k} is called the generalized orthogonal group, denoted O(n; k); it is a matrix Lie group. In physics, one is interested in O(3; 1), the Lorentz group. We define SO(n; k) to be the subgroup of O(n; k) with determinant 1.

Example 1.1.13. We define a skew-symmetric bilinear form ω on R^{2n} by ω(x, y) = Σ_{j=1}^n (x_j y_{n+j} − x_{n+j} y_j). The set of all 2n × 2n real matrices A such that ω(Ax, Ay) = ω(x, y) is the real symplectic group, denoted Sp(n, R); this is a matrix Lie group.

Let Ω = [0 I; −I 0] (rows separated by semicolons) be the 2n × 2n block matrix built from the n × n identity I; then ω(x, y) = ⟨x, Ωy⟩ by direct computation. Therefore ω(Ax, Ay) = ⟨Ax, ΩAy⟩ = ⟨x, A^T ΩAy⟩, and this equals ⟨x, Ωy⟩ = ω(x, y) for all x, y iff A^T ΩA = Ω, i.e., −ΩA^T Ω = A^{-1}. From this one can show that det(A)² = 1. We can define the complex symplectic group Sp(n, C) similarly, where all the identities above hold. Finally, the compact symplectic group is Sp(n) = Sp(n, C) ∩ U(2n); it preserves both the bilinear form ω and the inner product defined above.
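The condition A^T ΩA = Ω can be illustrated numerically. A minimal sketch (assuming numpy; the block matrix diag(S, (S^T)^{-1}) is one standard family of symplectic matrices, used here only as a test case):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
I = np.eye(n)
Omega = np.block([[np.zeros((n, n)), I], [-I, np.zeros((n, n))]])

# For any invertible S, diag(S, (S^T)^{-1}) satisfies A^T Omega A = Omega.
S = rng.standard_normal((n, n)) + n * I   # generic S; shift makes invertibility robust
A = np.block([[S, np.zeros((n, n))], [np.zeros((n, n)), np.linalg.inv(S.T)]])

print(np.allclose(A.T @ Omega @ A, Omega))      # True: A is symplectic
print(np.isclose(np.linalg.det(A) ** 2, 1.0))   # det(A)^2 = 1
```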

Example 1.1.14. The Euclidean group E(n) is the group of all transformations of R^n that can be expressed as a composition of translations and orthogonal linear transformations. We can therefore write elements of E(n) as pairs {x, R} with x ∈ R^n and R ∈ O(n), where {x, R} acts on R^n by {x, R}y = Ry + x. The group operation is given by {x_1, R_1}{x_2, R_2}y = R_1(R_2 y + x_2) + x_1 = R_1 R_2 y + x_1 + R_1 x_2 = {x_1 + R_1 x_2, R_1 R_2}y, and thus the inverse is given by {x, R}^{-1} = {−R^{-1}x, R^{-1}}. We can identify the element {x, R} of E(n) with the (n + 1) × (n + 1) matrix [R x; 0 1], R ∈ O(n); therefore E(n) is isomorphic to a closed subgroup of GL(n + 1, R). (We verify this identification numerically at the end of this section.)

We define the Poincaré group P(n; 1) (the inhomogeneous Lorentz group) to be the group of all transformations of R^{n+1} given by matrices of the form [A x; 0 1], where A ∈ O(n; 1) and x ∈ R^{n+1}.

Example 1.1.15. In this example we introduce some further common examples. The set of all 3 × 3 real matrices of the form [1 a b; 0 1 c; 0 0 1] forms a group called the Heisenberg group. (This is useful in physics as a model for the Heisenberg–Weyl commutation relations.)

One can identify R − {0} under multiplication with GL(1, R), and C − {0} under multiplication with GL(1, C). The unit circle S¹ is identified with U(1). If we consider R under addition, we can identify it, via the exponential map x ↦ e^x, with the group of 1 × 1 matrices with positive determinant. Similarly we can do this with R^n.

We end this section with a careful examination of the compact symplectic group, showing that it is the "unitary group over the quaternions". Firstly, define a conjugate-linear map J : C^{2n} → C^{2n} by J(α, β) = (−β̄, ᾱ), where α, β ∈ C^n and (α, β) ∈ C^{2n}. Then we observe J² = −I, and that if z, w ∈ C^{2n}, then ω(z, w) = ⟨Jz, w⟩ = −⟨z, Jw⟩ = −⟨Jw, z⟩.

Proposition 1.1.16. If U ∈ U(2n), then U ∈ Sp(n) if and only if U commutes with J.

Proof. Let U ∈ U(2n) and z, w ∈ C^{2n}. Then ω(Uz, Uw) = ⟨JUz, Uw⟩ = ⟨U*JUz, w⟩ = ⟨U^{-1}JUz, w⟩, while ω(z, w) = ⟨Jz, w⟩. Then U preserves ω if and only if U^{-1}JU = J, i.e., UJ = JU. □

One should recall the quaternion group H from group theory, with eight elements such that i² = j² = k² = −1. In fact the quaternions can be written in matrix form: i = [i 0; 0 −i], j = [0 1; −1 0], k = [0 i; i 0]. The quaternion algebra is then the space of real linear combinations of I, i, j, k.

Since J is conjugate-linear, J(iz) = −iJ(z), so as operators on C^{2n}, iJ = −Ji. If we define K = iJ, then one can check that iI, J, K satisfy the same relations as i, j, k.

Therefore, an element U ∈ Sp(n) commutes with the i, j, k defined above. Thus we conclude that U ∈ Sp(n) iff U is quaternion-linear and preserves the norm; that is why we describe Sp(n) as the unitary group over the quaternions. Next we discuss properties of eigenvectors and eigenvalues for Sp(n).

Lemma 1.1.17. Let V be a complex subspace of C^{2n} invariant under J. Then the orthogonal complement V^⊥ of V is also invariant under J. Furthermore, V and V^⊥ are orthogonal with respect to ω, that is, ω(z, w) = 0 for z ∈ V, w ∈ V^⊥.

Proof. If w ∈ V^⊥, then for all z ∈ V we have Jz ∈ V, so ⟨Jw, z⟩ = −⟨Jz, w⟩ = 0; hence Jw ∈ V^⊥ and V^⊥ is invariant under J. Moreover, for z ∈ V and w ∈ V^⊥, ω(z, w) = ⟨Jz, w⟩ = 0 since Jz ∈ V. □

Theorem 1.1.18. U ∈ Sp(n) if and only if there exist u_1, ..., u_n, v_1, ..., v_n forming an orthonormal basis of C^{2n} with the following properties: Ju_i = v_i; there exist θ_1, ..., θ_n such that Uu_j = e^{iθ_j}u_j and Uv_j = e^{−iθ_j}v_j; and finally ω(u_j, u_k) = ω(v_j, v_k) = 0 and ω(u_j, v_k) = δ_{jk}, where δ is the Kronecker delta.

Proof. Let U ∈ Sp(n); choose an eigenvector for U and normalize it to a unit vector, say u_1. Since U ∈ U(2n), its eigenvalue is λ_1 = e^{iθ_1} for some θ_1 ∈ R. Let v_1 = Ju_1. By Proposition 1.1.16, Uv_1 = J(Uu_1) = J(e^{iθ_1}u_1) = e^{−iθ_1}v_1, since J is conjugate-linear. Thus v_1 is an eigenvector of U with eigenvalue e^{−iθ_1}. Then ⟨v_1, u_1⟩ = ⟨Ju_1, u_1⟩ = ω(u_1, u_1) = 0, since ω is skew-symmetric. Moreover, ω(u_1, v_1) = ⟨Ju_1, v_1⟩ = ⟨Ju_1, Ju_1⟩ = 1. The span V of u_1 and v_1 is invariant under J. By Lemma 1.1.17, V^⊥ is invariant under J and is ω-orthogonal to V. Since V and V^⊥ are invariant under U and U*, the restriction of U to V^⊥ gives an eigenvector; after normalizing it to a unit vector, we call it u_2, and set v_2 = Ju_2. Then u_2, v_2 have the required properties, and we proceed in this fashion to obtain the statement.

For the converse, let x, y ∈ C^{2n} be expanded in the basis u_1, ..., u_n, v_1, ..., v_n. Then by linearity we can easily check that ⟨Ux, Uy⟩ = ⟨x, y⟩ and ω(Ux, Uy) = ω(x, y). Therefore U ∈ Sp(n). □
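Before moving on, here is the promised numerical check of the matrix realization of the Euclidean group from Example 1.1.14 (a minimal sketch assuming numpy; the helper name euclidean_matrix is ours, purely for illustration):

```python
import numpy as np

def euclidean_matrix(R, x):
    """Embed the pair {x, R} of E(n) as the (n+1) x (n+1) matrix [R x; 0 1]."""
    n = len(x)
    M = np.eye(n + 1)
    M[:n, :n] = R
    M[:n, n] = x
    return M

# A rotation by 90 degrees plus a translation, and a reflection plus a translation, in E(2).
R1 = np.array([[0.0, -1.0], [1.0, 0.0]])
x1 = np.array([1.0, 2.0])
R2 = np.array([[1.0, 0.0], [0.0, -1.0]])
x2 = np.array([3.0, 0.0])

# The matrix product realizes the pair product {x1 + R1 x2, R1 R2}.
lhs = euclidean_matrix(R1, x1) @ euclidean_matrix(R2, x2)
rhs = euclidean_matrix(R1 @ R2, x1 + R1 @ x2)
print(np.allclose(lhs, rhs))  # True
```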

1.2 Topological Properties

In this section, we examine some topological properties of matrix Lie groups, and finally we study the topology of projective spaces.

Remark 1.2.1. A matrix Lie group is compact if it is compact in the Euclidean topological sense, when we consider M(n, C) as R^{2n²}. Thus the standard equivalent characterizations of compactness from a point-set topology/analysis course apply; in particular, a matrix Lie group is compact iff it is closed and bounded in M(n, C). Thus we see that O(n), SO(n), U(n), SU(n), Sp(n) are all bounded, since each of their columns is a unit vector. However, SL(n, R) is unbounded for n ≥ 2, since the diagonal matrix A = diag(x, 1/x, 1, ..., 1) lies in SL(n, R) and has arbitrarily large entries.

Definition 1.2.2. A matrix Lie group G is path connected if ∀A, B ∈ G there is a continuous path A(t), a ≤ t ≤ b, such that A(a) = A and A(b) = B. For any matrix Lie group G, the identity component of G, denoted G₀, is the set of all A ∈ G for which there is a continuous path A(t), a ≤ t ≤ b, with A(a) = I and A(b) = A.

Proposition 1.2.3. If G is a matrix Lie group, the identity component G₀ of G is a normal subgroup of G.

Proof. Let A, B ∈ G₀; then there are continuous paths A(t), B(t) connecting I to A and to B. Then A(t)B(t) is a continuous path connecting I to AB, and (A(t))^{-1} is a continuous path connecting I to A^{-1}; thus G₀ is a subgroup. Now let A ∈ G₀ and B ∈ G, and let A(t) be a continuous path connecting I and A. The path BA(t)B^{-1} connects I to BAB^{-1}; thus BAB^{-1} ∈ G₀, and G₀ is normal. □

Proposition 1.2.4. The group GL(n, C) is path connected for all n ≥ 1.

Proof. Every A ∈ GL(n, C) can be triangularized: A = STS^{-1} with T upper triangular, and the diagonal entries of T are the (nonzero) eigenvalues of A. For 0 ≤ t ≤ 1, let A(t) = ST(t)S^{-1}, where T(t) multiplies all the entries of T above the diagonal by (1 − t); each A(t) is invertible, and A(1) = SDS^{-1} with D diagonal. For 1 ≤ t ≤ 2, let A(t) = SD^{2−t}S^{-1}, where D^s is defined by choosing a logarithm of each nonzero diagonal entry. This gives a path connecting A and I for all A in GL(n, C), and hence any two elements are connected through I. □

Corollary 1.2.5. The group SL(n, C) is connected for n ≥ 1.

Proposition 1.2.6. U(n) and SU(n) are connected for n ≥ 1.

Proof. By the spectral theorem for unitary matrices, if A ∈ U(n) then A = SDS^{-1}, where D is diagonal with diagonal entries e^{iθ_k}. Let U(t) = SD(t)S^{-1}, where D(t) is diagonal with entries e^{(1−t)iθ_k}; this is a path in U(n) from A to I, so U(n) is connected. Similarly, SU(n) is connected. □

Proposition 1.2.7. SO(n) is connected for n ≥ 1.

Proof. Let M ∈ SO(n). Then M = ARA^T, where A is orthogonal and R is a block-diagonal matrix whose blocks are 2 × 2 rotation matrices (together with 1's on the diagonal). If the rotation angles are θ_1, ..., θ_m, let M(t) = AR(t)A^T, where R(t) replaces each angle θ_j by (1 − t)θ_j. Then M(t) is a continuous path connecting M and I. □

The following remarks fundamentally belong to algebraic topology, an interesting subject in its own right; we only look at some properties.

Definition 1.2.8. A matrix Lie group G is simply connected if it is connected and every loop in G can be shrunk continuously to a point in G. That is, assuming G is connected, G is simply connected if for every continuous loop A(t), 0 ≤ t ≤ 1, in G with A(0) = A(1), there exists a continuous function A(s, t), 0 ≤ s, t ≤ 1, with the following properties: (1) A(s, 0) = A(s, 1) for all s; (2) A(0, t) = A(t); (3) A(1, t) = A(1, 0) for all t.

Proposition 1.2.9. SU(2) is simply connected.

Proof. SU(2) consists of the matrices [α −β̄; β ᾱ] with |α|² + |β|² = 1, so it can be identified with S³, the three-dimensional sphere, which is clearly simply connected. □

We end our section with a careful analysis of the topology of SO(3).

Definition 1.2.10. The real projective space of dimension n, denoted RP^n, is the set of lines through the origin in R^{n+1}.

Remark 1.2.11. Since each line through the origin intersects the unit sphere exactly twice, we can identify RP^n with the set of pairs {u, −u}, where u ∈ S^n. The projection map π : S^n → RP^n is given by π(u) = {u, −u}. We can define a metric on RP^n by d({u, −u}, {v, −v}) = min(d(u, v), d(u, −v)); then RP^n is locally isometric to S^n (i.e., for nearby points in S^n, the metrics of RP^n and S^n agree).

Moreover, the closed upper hemisphere contains u or −u for any u ∈ S^n, and on the equator antipodal points are identified. Thus we can identify RP^n with the upper hemisphere with antipodal points on the equator identified. Projecting into R^n, we see that a model for RP^n is the closed unit ball with antipodal points on the boundary identified.

Remark 1.2.12. We remark that RP^n is not simply connected. Let u be a unit vector in R^{n+1} and B(t) a path in S^n connecting u and −u; then A(t) = π(B(t)) is a closed loop in RP^n. It suffices to show that this loop cannot be contracted to a point. Assume it can; then there exist A(s, t) and a continuous lift B(s, t) with A(s, t) = π(B(s, t)) and B(0, t) = B(t). Since A(s, 0) = A(s, 1), we have B(s, 0) = ±B(s, 1), and since B(0, 0) = −B(0, 1), continuity forces B(s, 0) = −B(s, 1) for all s (note 0 ∉ S^n, so B(s, 0) ≠ −B(s, 0)). In particular B(1, 0) = −B(1, 1), so B(1, t) is not constant, and hence A(1, t) is not constant, contradicting the assumption that the loop contracts to a point.

Proposition 1.2.13. SO(3) and RP³ are homeomorphic.

Proof. For a unit vector v in R³, let R_{v,θ} be the element of SO(3) which rotates the plane orthogonal to v counterclockwise by angle θ; then R_{−v,−θ} = R_{v,θ}. We first show that every element of SO(3) can be expressed as some R_{v,θ} with −π ≤ θ ≤ π. Let R ∈ SO(3). Every eigenvalue of R has norm 1, and since R is real, any nonreal eigenvalues come in conjugate pairs; as the dimension 3 is odd, there must be a real eigenvalue ±1. Let v be a corresponding unit eigenvector. Then R preserves the plane v^⊥ orthogonal to v and acts on it as a rotation by some angle θ, and since det(R) = 1 we in fact have Rv = v; hence R = R_{v,θ}.

Let B³ be the closed ball of radius π in R³, and define φ : B³ → SO(3) by φ(u) = R_{ũ,‖u‖}, where ũ = u/‖u‖, and φ(0) = I. Then φ is continuous, since φ(u) → I as u → 0. It is surjective by the previous paragraph, and it is injective except for antipodal points on the boundary of B³, which are identified, since R_{v,π} = R_{−v,−π} = R_{−v,π}. Since B³ is compact and SO(3) is Hausdorff, the induced continuous bijection from B³ with antipodal boundary points identified onto SO(3) has continuous inverse, hence is a homeomorphism. By the remark above, this quotient of B³ is RP³, so SO(3) and RP³ are homeomorphic. □

Then since RP³ is not simply connected, SO(3) is not simply connected.

1.3 Lie groups and homomorphisms

In this section, we formally introduce the concept of Lie groups and some of their properties. We start by recalling objects we studied last semester.

Definition 1.3.1. An n-dimensional manifold M is a second-countable, Hausdorff topological space with the property that every x ∈ M has a neighborhood U homeomorphic to an open subset of R^n.

We note that this definition, informally, describes an object that locally looks like R^n.

Definition 1.3.2. A smooth manifold is a manifold M with a collection of local coordinates covering M such that the change-of-coordinates map between any two overlapping coordinate systems is smooth.

This will not matter much for what follows, but note that one can do calculus on a smooth manifold. Now we introduce the concept of a Lie group.

Definition 1.3.3. A Lie group is a smooth manifold G which is also a group, such that the group product · : G × G → G and the inverse map ι : G → G are smooth.

Example 1.3.4. Let G = R × R × S¹ = {(x, y, u) | x ∈ R, y ∈ R, u ∈ S¹ ⊂ C} with the group operation (x_1, y_1, u_1)(x_2, y_2, u_2) = (x_1 + x_2, y_1 + y_2, e^{ix_1y_2}u_1u_2). Then G is a group; we quickly check this. Firstly, G is clearly a smooth manifold, since a product of smooth manifolds is a smooth manifold. One easily checks that the operation is closed and associative. The identity is (0, 0, 1), and (−x, −y, e^{ixy}u^{-1}) is an inverse of (x, y, u). Finally, the product and inverse maps are both smooth, so G is a Lie group.

This example gives a Lie group that is not a matrix Lie group; this will be proved later. However, all matrix Lie groups are Lie groups, which can be shown by proving that they are embedded submanifolds of M(n, C) ≅ R^{2n²}.

Definition 1.3.5. Let G and H be Lie groups. A map φ : G → H is called a Lie group homomorphism if (i) φ is a group homomorphism and (ii) φ is smooth. φ is called a Lie group isomorphism if, in addition, it is bijective and the inverse φ^{-1} is smooth.

Remark 1.3.6. In fact, for matrix Lie groups it suffices that φ be continuous: a continuous homomorphism between matrix Lie groups is automatically smooth. In this section, we will use this criterion more than the definition above.

Example 1.3.7. Let φ : GL(n, C) → C* be φ(A) = det(A) for A ∈ GL(n, C). Then φ is a Lie group homomorphism.

Example 1.3.8. Let φ : R → SO(2) be φ(θ) = [cos θ −sin θ; sin θ cos θ]. Then φ is a Lie group homomorphism.

We now carefully work out the relationship between SU(2) and SO(3).

Proposition 1.3.9. There exists a Lie group homomorphism φ : SU(2) → SO(3) that is two-to-one and onto.

Proof. Let V be the space of all 2 × 2 complex matrices X which are self-adjoint and have trace zero. Every such matrix has the form [x_1, x_2 + ix_3; x_2 − ix_3, −x_1], so V has real dimension 3, and the standard inner product on R³ corresponds to ⟨X_1, X_2⟩ = ½ trace(X_1X_2). For U ∈ SU(2), define φ_U : V → V by φ_U(X) = UXU^{-1}. This is well defined: since U is unitary, (UXU^{-1})* = UXU^{-1} and trace(UXU^{-1}) = trace(X) = 0, so UXU^{-1} ∈ V.

Now we check that U ↦ φ_U is a homomorphism: φ_{U_1U_2}(X) = U_1U_2X U_2^{-1}U_1^{-1} = φ_{U_1} ∘ φ_{U_2}(X). Next, each φ_U is orthogonal: ⟨φ_U X_1, φ_U X_2⟩ = ½ trace(UX_1U^{-1}UX_2U^{-1}) = ½ trace(UX_1X_2U^{-1}) = ½ trace(X_1X_2) = ⟨X_1, X_2⟩. Since SU(2) is connected and φ_I = I, the continuous function U ↦ det(φ_U) ∈ {±1} is constantly 1, so φ_U must lie in SO(3). Thus φ : SU(2) → SO(3) given by φ : U ↦ φ_U is a homomorphism.

We first consider the kernel of this map. If φ_U(X) = X for all X ∈ V, then UX = XU; doing the matrix multiplication and comparing both sides, we conclude that U = λI. Since det(U) = λ² = 1, λ = ±1. Thus ker(φ) = {I, −I}, and φ is two-to-one.

Finally we show that φ is onto. Let R be a rotation of V. By (the proof of) Proposition 1.2.13, there is an axis X ∈ V such that R is a rotation by some angle θ in the plane of V orthogonal to X. Diagonalize X = U_0 [x_1 0; 0 −x_1] U_0^{-1} with U_0 ∈ U(2); the plane in V orthogonal to X consists of the matrices X_⊥ = U_0 [0, x_2 + ix_3; x_2 − ix_3, 0] U_0^{-1}. Take U = U_0 [e^{iθ/2} 0; 0 e^{−iθ/2}] U_0^{-1}. Then UXU^{-1} = X, and UX_⊥U^{-1} has the same form as X_⊥ but with (x_2, x_3) rotated by the angle θ. Thus φ_U is a rotation by angle θ in the plane perpendicular to X, so φ_U coincides with R. □
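The two-to-one map U ↦ φ_U is concrete enough to compute. The following sketch (assuming numpy and scipy; the basis matrices B and the helper name phi are ours, purely for illustration) builds the rotation from the formula (φ_U)_{jk} = ⟨B_j, UB_kU^{-1}⟩ and checks that U and −U give the same rotation:

```python
import numpy as np
from scipy.linalg import expm

# Basis of V: trace-zero self-adjoint 2x2 matrices (coordinates x1, x2, x3).
B = [np.array([[1, 0], [0, -1]], dtype=complex),
     np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, 1j], [-1j, 0]], dtype=complex)]

def phi(U):
    """The induced rotation: entries R_jk = <B_j, U B_k U^{-1}> = trace(B_j U B_k U*)/2."""
    return np.array([[np.trace(B[j] @ U @ B[k] @ U.conj().T).real / 2
                      for k in range(3)] for j in range(3)])

# A sample element of SU(2): exponential of an anti-Hermitian traceless matrix.
X = np.array([[1j, 2 + 1j], [-2 + 1j, -1j]])
U = expm(X)

R = phi(U)
print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1))  # True True
print(np.allclose(phi(-U), R))  # True: U and -U give the same rotation
```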

1.4 Matrix Exponential

Definition 1.4.1. Let X be an n × n matrix. The exponential of X, denoted e^X or exp(X), is e^X = Σ_{m=0}^∞ X^m/m!.

Definition 1.4.2. For X ∈ M(n, C), let ‖X‖ = (Σ_{j,k=1}^n |X_{jk}|²)^{1/2}. ‖X‖ is called the Hilbert–Schmidt norm of X.

Definition 1.4.3. X_m converges to X iff ‖X_m − X‖ → 0 as m → ∞.

Remark 1.4.4. We have ‖X + Y‖ ≤ ‖X‖ + ‖Y‖ and ‖XY‖ ≤ ‖X‖‖Y‖ (the first by the triangle inequality, the second by the Cauchy–Schwarz inequality).

Proposition 1.4.5. The series Σ_{m=0}^∞ X^m/m! converges for all X ∈ M(n, C), and e^X is a continuous function of X.

Proof. By Remark 1.4.4, ‖X^m‖ ≤ ‖X‖^m. Then Σ_{m=0}^∞ ‖X^m/m!‖ ≤ Σ_{m=0}^∞ ‖X‖^m/m! < ∞, so the series converges absolutely. Each partial sum is continuous, and by the Weierstrass M-test the series converges uniformly on each ball of radius R; thus e^X is continuous. □
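The proof suggests computing e^X by truncating the series. A minimal sketch (assuming numpy and scipy, whose expm serves as a reference implementation; the helper name exp_series is ours):

```python
import numpy as np
from scipy.linalg import expm

def exp_series(X, terms=30):
    """Partial sum of sum_{m=0}^{terms-1} X^m / m!."""
    result = np.eye(X.shape[0], dtype=complex)
    term = np.eye(X.shape[0], dtype=complex)
    for m in range(1, terms):
        term = term @ X / m        # X^m / m!, built incrementally
        result = result + term
    return result

X = np.array([[0.0, 1.0], [-1.0, 0.0]])   # generates rotations: e^{tX} lies in SO(2)
print(np.allclose(exp_series(X), expm(X)))   # True
print(np.round(expm(np.pi * X), 6))          # rotation by pi: [[-1, 0], [0, -1]]
```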

8 Proposition 1.4.6. Let X,Y be n × n matrices. Then:

1. e^0 = I

2. (e^X)* = e^{X*}

3. e^X is invertible, and (e^X)^{-1} = e^{-X}

4. e^{(α+β)X} = e^{αX}e^{βX}

5. if XY = YX, then eX+Y = eX eY = eY eX (note that if XY 6= YX it is not necessarily true)

6. If C ∈ GL(n, C), then e^{CXC^{-1}} = Ce^XC^{-1}.

Proof. For 1, by definition e^0 = I + 0 + 0 + ... = I. For 2, (e^X)* = (Σ_{m=0}^∞ X^m/m!)* = Σ_{m=0}^∞ (X^m/m!)* = Σ_{m=0}^∞ (X*)^m/m! = e^{X*}. For 3, assuming 5: X and −X commute, so e^Xe^{-X} = e^{X−X} = e^0 = I, and e^X is invertible with inverse e^{-X}. For 4, αX and βX commute, so it also follows from 5. Therefore it suffices to prove 5, which we do by direct computation:

e^Xe^Y = (Σ_{m=0}^∞ X^m/m!)(Σ_{m=0}^∞ Y^m/m!) = Σ_{m=0}^∞ Σ_{k=0}^m X^kY^{m−k}/(k!(m−k)!) = Σ_{m=0}^∞ (1/m!) Σ_{k=0}^m (m!/(k!(m−k)!)) X^kY^{m−k} = Σ_{m=0}^∞ (X + Y)^m/m! = e^{X+Y},

where the last step uses the binomial theorem, valid since XY = YX. Finally, for 6, e^{CXC^{-1}} = Σ_{m=0}^∞ (CXC^{-1})^m/m! = Σ_{m=0}^∞ CX^mC^{-1}/m! = Ce^XC^{-1}. □
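Property 5 genuinely requires XY = YX; a quick numerical counterexample (a sketch assuming numpy and scipy):

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 1.0], [0.0, 0.0]])
Y = np.array([[0.0, 0.0], [1.0, 0.0]])

print(np.allclose(X @ Y, Y @ X))                    # False: X, Y do not commute
print(np.allclose(expm(X + Y), expm(X) @ expm(Y)))  # False: property 5 fails here
print(np.allclose(expm(X) @ expm(-X), np.eye(2)))   # True: property 3 still holds
```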

Proposition 1.4.7. Let X be an n × n matrix. Then e^{tX} is a smooth curve in M(n, C), and d/dt e^{tX} = Xe^{tX} = e^{tX}X; in particular, d/dt e^{tX}|_{t=0} = X. (Note that d/dt e^{X+tY} need not equal Ye^{X+tY}.)

Proof. We may differentiate the series term by term, since it converges absolutely, as proved in Proposition 1.4.5. □

Now we review some linear algebra in order to compute the exponential. If X ∈ M(n, C) has n linearly independent eigenvectors v_1, ..., v_n with eigenvalues λ_1, ..., λ_n, let C be the matrix whose columns are the eigenvectors and let D be diagonal with the eigenvalues as entries. Then X = CDC^{-1}, and e^X = Ce^DC^{-1}, where e^D is diagonal with entries e^{λ_1}, ..., e^{λ_n}. Another useful observation: if X is nilpotent, then the series for e^X has only finitely many nonzero terms. In general, writing X = S + N with S diagonalizable, N nilpotent, and SN = NS, we get e^X = e^Se^N.

Remark 1.4.8. The matrix exponential can be used to solve differential equations. Consider dv/dt = Xv with v(0) = v_0, where v(t) ∈ R^n and X is an n × n matrix. Then the solution is v(t) = e^{tX}v_0.
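This closed-form solution can be compared against a general-purpose ODE solver; a minimal sketch (assuming numpy and scipy; the tolerances are chosen arbitrarily):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

X = np.array([[0.0, 1.0], [-1.0, 0.0]])   # harmonic oscillator in matrix form
v0 = np.array([1.0, 0.0])
t1 = 2.0

numeric = solve_ivp(lambda t, v: X @ v, (0.0, t1), v0, rtol=1e-10, atol=1e-12)
exact = expm(t1 * X) @ v0
print(np.allclose(numeric.y[:, -1], exact, atol=1e-6))  # True
```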

1.5 Matrix Logarithm

We define the matrix logarithm in this section.

Definition 1.5.1. Let A be an n × n matrix. Then log A = Σ_{m=1}^∞ (−1)^{m+1}(A − I)^m/m, whenever the series converges.

Theorem 1.5.2. The function log A (in the above definition) is defined and continuous on the set of all n × n complex matrices A with ‖A − I‖ < 1, and for such A, e^{log A} = A. For X ∈ M(n, C) with ‖X‖ < log 2, we have ‖e^X − I‖ < 1 and log e^X = X.

Proof. Since ‖(A − I)^m‖ ≤ ‖A − I‖^m and the scalar series has radius of convergence 1, the series converges absolutely for all A with ‖A − I‖ < 1; continuity follows similarly to the case of the matrix exponential.

Assume ‖A − I‖ < 1. If A is diagonalizable with eigenvalues z_1, ..., z_n, then A = CDC^{-1} with D diagonal, and (A − I)^m = CD_1C^{-1}, where D_1 is diagonal with entries (z_i − 1)^m. Note that for X ∈ M(n, C), ‖X‖² = Σ_{j,k=1}^n |X_{jk}|² = Σ_{j,k=1}^n |⟨e_j, Xe_k⟩|²; from this one sees that |λ| ≤ ‖X‖ for any eigenvalue λ of X. Thus with ‖A − I‖ < 1, each eigenvalue z_j of A satisfies |z_j − 1| < 1, and summing the series gives log A = CD_2C^{-1}, where D_2 is diagonal with entries log z_i. Therefore e^{log A} = CD_3C^{-1}, where D_3 is diagonal with entries e^{log z_i} = z_i; that is, e^{log A} = A in this case.

If A is not diagonalizable, let A_m be a sequence of diagonalizable matrices converging to A. (We obtain such a sequence by perturbing the eigenvalues of A: if the eigenvalues of A are λ_1, ..., λ_n, give A_m the eigenvalues λ_1 + 1/(m+1), ..., λ_n + 1/(m+n); these are distinct for all large m, so A_m is diagonalizable, and A_m → A.) Taking the limit and using the continuity of the logarithm and exponential, exp(log A) = A for all A with ‖A − I‖ < 1.

Now assume ‖X‖ < log 2. Then ‖e^X − I‖ = ‖Σ_{m=1}^∞ X^m/m!‖ ≤ Σ_{m=1}^∞ ‖X‖^m/m! = e^{‖X‖} − 1 < 1, so log(e^X) makes sense, and log e^X = X. □
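The series defining log and its inverse relationship with exp are easy to test numerically; a sketch (assuming numpy and scipy, whose logm computes the principal matrix logarithm; the helper name log_series is ours):

```python
import numpy as np
from scipy.linalg import expm, logm

def log_series(A, terms=60):
    """Partial sum of sum_{m=1}^{terms} (-1)^{m+1} (A - I)^m / m; needs ||A - I|| < 1."""
    B = A - np.eye(A.shape[0])
    result = np.zeros_like(B)
    power = np.eye(A.shape[0])
    for m in range(1, terms + 1):
        power = power @ B
        result = result + (-1) ** (m + 1) * power / m
    return result

A = np.array([[1.0, 0.3], [0.1, 0.9]])   # ||A - I|| < 1 in the Hilbert-Schmidt norm
L = log_series(A)
print(np.allclose(L, logm(A)))           # matches scipy's matrix logarithm
print(np.allclose(expm(L), A))           # e^{log A} = A
```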

Proposition 1.5.3. There exists a constant c such that for all n × n matrices B with ‖B‖ < 1/2, we have ‖log(I + B) − B‖ ≤ c‖B‖². In other words, log(I + B) = B + O(‖B‖²).

Proof.

‖log(I + B) − B‖ = ‖Σ_{m=2}^∞ (−1)^{m+1}B^m/m‖ = ‖B² Σ_{m=2}^∞ (−1)^{m+1}B^{m−2}/m‖ ≤ ‖B‖² Σ_{m=2}^∞ (1/2)^{m−2}/m,

and the last sum converges to a finite constant c. □

Theorem 1.5.4. Every invertible n × n matrix can be expressed as e^X for some X ∈ M(n, C).

Proof. Consider a Jordan decomposition A = S(J_{k_1}(λ_1) ⊕ ... ⊕ J_{k_m}(λ_m))S^{-1}, where none of the eigenvalues λ_i is 0. Then λ_i = e^{z_i} for some z_i ∈ C. Consider Y_i = J_{k_i}(z_i); then exp(Y_i) is an upper triangular matrix similar to J_{k_i}(λ_i), say J_{k_i}(λ_i) = P_i exp(Y_i)P_i^{-1} with P_i invertible. Define X_i = P_iY_iP_i^{-1}. Then A = exp(S(X_1 ⊕ ... ⊕ X_m)S^{-1}). □

10 1.6 Polar Decomposition and More

We first develop a few more results on exponentials before we consider the polar decomposition. These results will be helpful later.

Theorem 1.6.1 (Lie Product Formula). For X, Y ∈ M(n, C), e^{X+Y} = lim_{m→∞}(e^{X/m}e^{Y/m})^m.

Proof. By direct computation, e^{X/m}e^{Y/m} = I + X/m + Y/m + O(1/m²); hence e^{X/m}e^{Y/m} → I as m → ∞, and it lies in the domain of the logarithm for large m. By Proposition 1.5.3,

log(e^{X/m}e^{Y/m}) = log(I + X/m + Y/m + O(1/m²))
                   = X/m + Y/m + O(‖X/m + Y/m + O(1/m²)‖²)
                   = X/m + Y/m + O(1/m²).

Therefore e^{X/m}e^{Y/m} = exp(X/m + Y/m + O(1/m²)), so (e^{X/m}e^{Y/m})^m = exp(X + Y + O(1/m)). Finally, by the continuity of the exponential, e^{X+Y} = lim_{m→∞}(e^{X/m}e^{Y/m})^m. □

Theorem 1.6.2. Let X ∈ M(n, C); then det(e^X) = e^{tr(X)}.

Proof. If X is diagonalizable with eigenvalues λ_1, ..., λ_n, then e^X is diagonalizable with eigenvalues e^{λ_1}, ..., e^{λ_n}. Then det(e^X) = e^{λ_1} ··· e^{λ_n} = e^{λ_1 + ... + λ_n} = e^{tr(X)}. If X is not diagonalizable, we approximate X by a sequence {X_m} of diagonalizable matrices as in the proof of Theorem 1.5.2 and obtain the same result in the limit. □

Definition 1.6.3. A function A : R → GL(n, C) is called a one-parameter subgroup of GL(n, C) if (1) A is continuous, (2) A(0) = I, (3) A(t + s) = A(t)A(s) for all t, s ∈ R.

Lemma 1.6.4. Fix ε < log 2. Let B_{ε/2} be the ball of radius ε/2 around the origin in M(n, C), and let U = exp(B_{ε/2}). Then every B ∈ U has a unique square root C in U, given by C = exp(½ log B).

Proof. The above C clearly is a square root of B and lies in U. We show uniqueness. Let C′ ∈ U with (C′)² = B, and let Y = log C′, so e^Y = C′. Then e^{2Y} = (C′)² = B = exp(log B). Since Y ∈ B_{ε/2}, we have 2Y ∈ B_ε, and also log B ∈ B_ε. By Theorem 1.5.2, exp is injective on B_ε, so 2Y = log B and C′ = exp(½ log B) = C. □

Theorem 1.6.5. If A(·) is a one-parameter subgroup of GL(n, C), then there exists a unique n × n complex matrix X such that A(t) = e^{tX} for all t.

Proof. For uniqueness: if A(t) = e^{tX}, then X = d/dt A(t)|_{t=0}, so X is determined by A. For existence, let U = exp(B_{ε/2}) be as in the lemma. Since A is continuous, there exists t_0 > 0 such that A(t) ∈ U for all |t| ≤ t_0. Let X = (1/t_0) log(A(t_0)), so that t_0X = log(A(t_0)) ∈ B_{ε/2} and e^{t_0X} = A(t_0). Now A(t_0/2) ∈ U and A(t_0/2)² = A(t_0); since e^{t_0X/2} ∈ U is also a square root of A(t_0), the lemma gives A(t_0/2) = exp(t_0X/2). Applying this repeatedly, A(t_0/2^k) = exp(t_0X/2^k). Then for any integer m, A(mt_0/2^k) = A(t_0/2^k)^m = exp(mt_0X/2^k). Thus A(t) = exp(tX) for all numbers t of the form mt_0/2^k, which are dense in R; by continuity, A(t) = exp(tX) for all real t. □

Proposition 1.6.6. The exponential map is an infinitely differentiable map of M(n, C) into M(n, C).

Proof. For any j, k, (X^m)_{jk} is a homogeneous polynomial of degree m in the entries of X. Thus each entry of e^X is a convergent power series on M(n, C) ≅ R^{2n²}, which we can differentiate term by term. □

We now turn to the polar decomposition, and first explain the intuition behind it. Consider a nonzero complex number z; then z = re^{iθ}, where r is a positive real number and |e^{iθ}| = 1. Since r is positive, r = e^x for some x ∈ R. Thus z = up, where |u| = 1 and p is positive and real. We now establish a similar result for matrices.

Definition 1.6.7. A self-adjoint n × n matrix P (P* = P) is positive if ⟨v, Pv⟩ > 0 for all nonzero v ∈ C^n.

Lemma 1.6.8. If Q is a self-adjoint positive matrix, then Q has a unique positive self-adjoint square root.

Proof. By linear algebra, since Q is self-adjoint, Q has an orthonormal basis of eigenvectors, so Q = UDU^{-1}, where D is diagonal with positive entries (the eigenvalues). Then √Q = U√D U^{-1} is still self-adjoint and positive.

We now show uniqueness. If R is a self-adjoint positive square root of Q, then R has the same eigenspaces as Q = R², with each eigenvalue of Q the square of the corresponding (positive) eigenvalue of R. Since x ↦ x² is injective on the positive reals, R is determined on each eigenspace of Q; thus √Q is uniquely determined by Q. □

Theorem 1.6.9. 1. Every A ∈ GL(n, C) can be written uniquely in the form A = UP, where U is unitary and P is self-adjoint and positive.

2. Every self-adjoint positive matrix P can be written uniquely in the form P = e^X, where X is self-adjoint. Conversely, if X is self-adjoint, then e^X is self-adjoint and positive.

3. If A ∈ GL(n, C) is written as A = Ue^X with U unitary and X self-adjoint, then U and X depend continuously on A.

Proof. For existence in 1: for any A, A*A is clearly self-adjoint, and if A is invertible, then for nonzero v ∈ C^n, ⟨v, A*Av⟩ = ⟨Av, Av⟩ > 0, so A*A is positive. Define P = √(A*A) and U = AP^{-1} = A(√(A*A))^{-1}. We only need to check that U is unitary: U*U = (√(A*A))^{-1}A*A(√(A*A))^{-1} = I. Thus we have constructed the desired decomposition. For uniqueness in 1: if A = UP, then A*A = PU*UP = P². By the lemma, P is unique, and then U = AP^{-1} is unique.

Part 2 can be proved in the same fashion as our lemma, with the square root function replaced by a logarithm.

Finally we prove 3. We claim that log P, for P self-adjoint and positive, depends continuously on P. If the eigenvalues of P lie strictly between 0 and 2, then by Theorem 1.5.2 the series for log P converges, and continuity follows. In general, fix a positive self-adjoint matrix P_0 and a small neighborhood V of P_0, and pick a > 0. For P ∈ V, write P = e^a(e^{-a}P). Then the unique self-adjoint logarithm of P can be computed as log P = aI + log(e^{-a}P). If a is large enough, then for all P ∈ V the eigenvalues of e^{-a}P are positive and less than 2, so log(e^{-a}P) converges and depends continuously on P. □

Finally, we look at some applications of the theorem to GL(n, R), SL(n, C), and SL(n, R).

Proposition 1.6.10. 1. Every A ∈ GL(n, R) can be written uniquely as A = Re^X, where R ∈ O(n) and X is real and symmetric.

2. Every A ∈ SL(n, C) can be written uniquely as A = Ue^X, where U ∈ SU(n) and X is self-adjoint with tr(X) = 0.

3. Every A ∈ SL(n, R) can be written uniquely as A = Re^X, where R ∈ SO(n) and X is real and symmetric with tr(X) = 0.

Proof. If A is real, then A*A = A^TA is real and symmetric, so P = √(A^TA) is real, and U = AP^{-1} is real and unitary, hence in O(n); this gives 1.

If A ∈ SL(n, C), then A = Ue^X with U ∈ U(n) and X self-adjoint, so det(A) = det(U)e^{tr(X)} = 1. Since X is self-adjoint, tr(X) is real, so e^{tr(X)} > 0; since |det(U)| = 1, we must have e^{tr(X)} = 1 and det(U) = 1, i.e., tr(X) = 0 and U ∈ SU(n). Uniqueness follows from the polar decomposition.

Finally, if A ∈ SL(n, R), then combining the previous results, A = Re^X, where R ∈ SO(n) and X is real and symmetric with trace zero. □

Chapter 2

Lie Algebras and Representations

2.1 Definitions

Definition 2.1.1. A finite-dimensional real or complex Lie algebra is a finite-dimensional real or complex vector space g with a bilinear map [·, ·] : g × g → g satisfying [X, Y] = −[Y, X] for all X, Y ∈ g (skew symmetry), and [X, [Y, Z]] + [Y, [Z, X]] + [Z, [X, Y]] = 0 for all X, Y, Z ∈ g (the Jacobi identity). This bilinear map is called the bracket or commutator on g, or the Lie bracket of g. X and Y commute if [X, Y] = 0, and g is commutative if [X, Y] = 0 for all X, Y ∈ g.

Example 2.1.2. Let g = R³, and for x, y ∈ g let [x, y] = x × y, where x × y is the cross product. Then g is a Lie algebra.

Example 2.1.3. Let A be an associative algebra and g a subspace of A such that XY − YX ∈ g for all X, Y ∈ g. Then g is a Lie algebra with bracket [X, Y] = XY − YX.
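The bracket axioms for the commutator are easy to verify numerically; a minimal sketch (assuming numpy):

```python
import numpy as np

def bracket(X, Y):
    """Commutator bracket [X, Y] = XY - YX."""
    return X @ Y - Y @ X

rng = np.random.default_rng(4)
X, Y, Z = (rng.standard_normal((3, 3)) for _ in range(3))

# Skew symmetry and the Jacobi identity for the commutator bracket.
print(np.allclose(bracket(X, Y), -bracket(Y, X)))
jacobi = bracket(X, bracket(Y, Z)) + bracket(Y, bracket(Z, X)) + bracket(Z, bracket(X, Y))
print(np.allclose(jacobi, 0))
```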

Example 2.1.4. Let sl(n, C) be the space of all X ∈ M(n, C) for which tr(X) = 0. Then sl(n, C) is a Lie algebra with bracket [X, Y] = XY − YX.

Definition 2.1.5. A subalgebra of a real or complex Lie algebra g is a subspace h of g such that [H_1, H_2] ∈ h for all H_1, H_2 ∈ h. If g is complex and h is a real subspace, then h is a real subalgebra of g.

Definition 2.1.6. A subalgebra h of a Lie algebra g is an ideal in g if [X,H] ∈ h for X ∈ g and H ∈ h.

Definition 2.1.7. The center of a Lie algebra g is the set of all X ∈ g with [X, Y] = 0 for all Y ∈ g.

Definition 2.1.8. If g, h are Lie algebras, then a linear map φ : g → h is a Lie algebra homomorphism if φ([X, Y]) = [φ(X), φ(Y)] for all X, Y ∈ g. If φ is one-to-one and onto, then φ is a Lie algebra isomorphism. A Lie algebra isomorphism of a Lie algebra with itself is called a Lie algebra automorphism.

Definition 2.1.9. If g is a Lie algebra and X ∈ g, define a linear map ad_X : g → g by ad_X(Y) = [X, Y]. The map X ↦ ad_X is called the adjoint map or adjoint representation.

With the above notation, we see that the Jacobi identity can be written as ad_X[Y, Z] = [ad_X(Y), Z] + [Y, ad_X(Z)]. Thus ad_X looks like a derivation with respect to the bracket (recall the product rule for derivatives).

Proposition 2.1.10. If g is a Lie algebra, then ad_{[X,Y]} = ad_X ad_Y − ad_Y ad_X = [ad_X, ad_Y]; that is, ad : g → End(g) is a Lie algebra homomorphism.

Proof. We have ad_{[X,Y]}(Z) = [[X, Y], Z] and [ad_X, ad_Y](Z) = [X, [Y, Z]] − [Y, [X, Z]]; their equality is exactly the Jacobi identity. □

Definition 2.1.11. If g_1, g_2 are Lie algebras, the direct sum of g_1 and g_2 is the vector space direct sum g_1 ⊕ g_2, with bracket given by [(X_1, X_2), (Y_1, Y_2)] = ([X_1, Y_1], [X_2, Y_2]). If g is a Lie algebra with subalgebras g_1, g_2, we say g decomposes as the Lie algebra direct sum of g_1 and g_2 if g = g_1 ⊕ g_2 as vector spaces and [X_1, X_2] = 0 for all X_1 ∈ g_1, X_2 ∈ g_2.

Definition 2.1.12. Let g be a Lie algebra and X_1, ..., X_N a basis for g. Then the unique constants c_{jkl} such that [X_j, X_k] = Σ_{l=1}^N c_{jkl}X_l are called the structure constants of g. (These show up more in physics.)

Next we look at specific types of Lie algebras.

Definition 2.1.13. A Lie algebra g is irreducible if its only ideals are {0} and g. g is simple if it is irreducible and dim g ≥ 2.

Remark 2.1.14. One-dimensional Lie algebras are special: they are irreducible and commutative. Moreover, if a Lie algebra is commutative and irreducible, then it is one-dimensional, since in a commutative Lie algebra any subspace is an ideal.

Proposition 2.1.15. The Lie algebra sl(2, C) is simple.

Proof. Since sl(2, C) is the space of matrices with trace 0, a basis is X = [0 1; 0 0], Y = [0 0; 1 0], H = [1 0; 0 −1]. A little computation shows [X, Y] = H, [H, X] = 2X, [H, Y] = −2Y. Let h be an ideal in sl(2, C) containing some Z = aX + bH + cY with Z ≠ 0; it suffices to show that h = sl(2, C). First assume c ≠ 0. Then [X, [X, Z]] = [X, −2bX + cH] = −2cX, which is not 0; since h is an ideal, X ∈ h. Then [Y, X] = −H ∈ h is nonzero, and [Y, [Y, X]] is a nonzero multiple of Y. Thus Y, H ∈ h, and h = sl(2, C). If c = 0 and b ≠ 0, the same kind of argument applies, and likewise if c = b = 0 and a ≠ 0. □

Definition 2.1.16. If g is a Lie algebra, the commutator ideal of g, denoted [g, g], is {Z ∈ g | Z = c_1[X_1, Y_1] + ... + c_m[X_m, Y_m], c_j constants, X_j, Y_j ∈ g}.

Definition 2.1.17. Let g be a Lie algebra, and define a sequence of subalgebras g_0, g_1, g_2, ... of g by g_0 = g, g_1 = [g_0, g_0], g_2 = [g_1, g_1], etc. This is called the derived series of g. A Lie algebra is called solvable if g_j = {0} for some j.

Definition 2.1.18. Let g be a Lie algebra, and define a sequence of subalgebras g^j by g^0 = g and g^{j+1} = {[X, Y] | X ∈ g, Y ∈ g^j}. This is called the upper central series of g. A Lie algebra g is called nilpotent if g^j = {0} for some j.

Proposition 2.1.19. Let g ⊂ M(3, R) be the space of 3 × 3 upper triangular matrices with zeros on the diagonal. Then g is a nilpotent Lie algebra.

Proof. We consider the basis X, Y, Z of g, where X_{12} = 1 and all other entries of X are 0, Y_{23} = 1 and all other entries of Y are 0, and Z_{13} = 1 and all other entries of Z are 0. By computation, [X, Y] = Z and [X, Z] = [Y, Z] = 0. Then [g, g] is the span of Z, and [g, [g, g]] = {0}; thus g is nilpotent. □

Proposition 2.1.20. Let g ⊂ M(2, C) be the set of matrices {[a b; 0 c] | a, b, c ∈ C}. Then g is solvable but not nilpotent.

Proof. Computing [X, Y] for X, Y ∈ g, we see that the commutator ideal is one-dimensional, spanned by X = [0 1; 0 0]; the next term of the derived series is then {0}, so g is solvable. But g is not nilpotent: for H = [1 0; 0 −1] ∈ g we have [H, X] = 2X, so every term of the upper central series contains the nonzero multiples of X; thus g^j ≠ {0} for all j. □

2.2 Lie Algebras of Matrix Lie Groups

Definition 2.2.1. Let G be a matrix Lie group. The Lie algebra of G is g = {X | e^{tX} ∈ G for all t ∈ R}, where X ranges over matrices.

Remark 2.2.2. Equivalently, X ∈ g iff the entire one-parameter subgroup generated by X lies in G. In fact, g is the tangent space to G at the identity.

Proposition 2.2.3. Let G be a matrix Lie group and X an element of its Lie algebra. Then e^X is an element of the identity component G₀ of G.

Proof. We have e^{tX} ∈ G for all t ∈ R, and as t goes from 0 to 1, e^{tX} traces a continuous path connecting the identity to e^X. □

Theorem 2.2.4. Let G be a matrix Lie group with Lie algebra g. If X and Y are elements of g, then (i) AXA^{-1} ∈ g for all A ∈ G; (ii) sX ∈ g for all real numbers s; (iii) X + Y ∈ g; (iv) XY − YX ∈ g.

Proof. (i) e^{tAXA^{-1}} = Ae^{tX}A^{-1} ∈ G, therefore AXA^{-1} ∈ g. (ii) Clearly e^{t(sX)} = e^{(ts)X} ∈ G, so sX ∈ g. (iii) By the Lie product formula, e^{t(X+Y)} = lim_{m→∞}(e^{tX/m}e^{tY/m})^m. Each (e^{tX/m}e^{tY/m})^m is in G, and G is closed, so e^{t(X+Y)} ∈ G and X + Y ∈ g. (iv) We have d/dt(e^{tX}Ye^{-tX})|_{t=0} = XY − YX. By (i), e^{tX}Ye^{-tX} ∈ g for all t; by (ii) and (iii), g is a real subspace of M(n, C), hence closed in M(n, C). Thus XY − YX = lim_{h→0}(e^{hX}Ye^{-hX} − Y)/h ∈ g. □

Definition 2.2.5. A matrix Lie group G is complex if its Lie algebra g is a complex subspace of M(n, C), i.e., if iX ∈ g whenever X ∈ g.

Proposition 2.2.6. If G is commutative, then g is commutative.

Proof. For X, Y ∈ M(n, C), the commutator can be written as [X, Y] = d/dt (d/ds e^{tX}e^{sY}e^{-tX}|_{s=0})|_{t=0}. If G is commutative and X, Y ∈ g, then e^{tX} commutes with e^{sY}, so the right-hand side is independent of t, and [X, Y] = 0. □

Now we look at examples of Lie algebras.

Proposition 2.2.7. The Lie algebra gl(n, C) of GL(n, C) is the space M(n, C). The Lie algebra gl(n, R) of GL(n, R) is equal to M(n, R). The Lie algebra sl(n, C) of SL(n, C) consists of all n × n complex matrices with trace zero. The Lie algebra sl(n, R) of SL(n, R) consists of all n × n real matrices with trace zero.

Proof. If X ∈ M(n, C), then e^{tX} is invertible, so e^{tX} ∈ GL(n, C) and X ∈ gl(n, C); similarly gl(n, R) ⊃ M(n, R). Conversely, if e^{tX} is real for all t, then X = d(e^{tX})/dt|_{t=0} is also real; thus gl(n, R) = M(n, R). If X ∈ M(n, C) with tr(X) = 0, then det(e^{tX}) = e^{tr(tX)} = 1, so X ∈ sl(n, C). Conversely, if det(e^{tX}) = e^{t·tr(X)} = 1 for all t, then differentiating at t = 0 gives tr(X) = 0. Finally, if X is real with tr(X) = 0, then e^{tX} is real with determinant 1, so X ∈ sl(n, R); the converse follows in the same way as the preceding arguments. □

Proposition 2.2.8. The Lie algebra u(n) of U(n) consists of all complex matrices satisfying X* = −X, and the Lie algebra su(n) of SU(n) consists of all complex matrices satisfying X* = −X and tr(X) = 0. The Lie algebra so(n) of O(n) consists of all real matrices X satisfying X^T = −X, and the Lie algebra of SO(n) is the same as that of O(n).

Proof. A matrix e^{tX} is unitary iff (e^{tX})* = (e^{tX})^{-1} = e^{-tX}. Since (e^{tX})* = e^{tX*}, this says e^{tX*} = e^{-tX}, which holds for all t iff X* = −X; this is exactly u(n). Adding the determinant-1 condition at the Lie group level adds the condition tr(X) = 0 at the Lie algebra level. The same argument over R shows that so(n) = {X real | X^T = −X} is the Lie algebra of O(n). Since X^T = −X forces the diagonal of X to vanish, tr(X) = 0 automatically; therefore the Lie algebras of O(n) and SO(n) are the same. □
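This characterization is easy to test numerically; a minimal sketch (assuming numpy and scipy; the anti-Hermitian matrix is built by a standard symmetrization trick):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X = A - A.conj().T                  # X* = -X, so X lies in u(n)
X -= np.trace(X) / n * np.eye(n)    # remove the (purely imaginary) trace: X lies in su(n)

for t in (0.5, 1.0, 2.0):
    U = expm(t * X)
    print(np.allclose(U.conj().T @ U, np.eye(n)),   # e^{tX} is unitary
          np.isclose(np.linalg.det(U), 1.0))        # det = 1 since tr(X) = 0
```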

Proposition 2.2.9. Let g = [I_n 0; 0 −I_k]. Then the Lie algebra so(n; k) of O(n; k) consists precisely of the real matrices X with gX^Tg = −X, and the Lie algebra of SO(n; k) is the same as that of O(n; k). If Ω = [0 I; −I 0] with I the n × n identity, then the Lie algebra sp(n, R) consists of the real matrices X with ΩX^TΩ = X, and sp(n, C) consists of the complex matrices X satisfying the same condition. The Lie algebra sp(n) consists of the complex matrices with ΩX^TΩ = X and X* = −X.

The proof follows from the same computation as the preceding one.

Problem 2.2.10. Prove the above proposition.

Proposition 2.2.11. The Lie algebra of the Heisenberg group H is the space of all matrices of the form X = [0 a b; 0 0 c; 0 0 0], where a, b, c ∈ R.

Proof. If X is strictly upper triangular, then every power X^m is strictly upper triangular, so e^{tX} = I + B with B strictly upper triangular; thus e^{tX} ∈ H. Conversely, if e^{tX} ∈ H for all real t, then all the entries of e^{tX} on and below the diagonal are independent of t, so X = d(e^{tX})/dt|_{t=0} has zeros on and below the diagonal, i.e., X is of the above form. □

Problem 2.2.12. Find the Lie algebras of the Euclidean group and the Poincaré group.

Example 2.2.13. A basis for the Lie algebra su(2) is E_1 = ½[i 0; 0 −i], E_2 = ½[0 i; i 0], E_3 = ½[0 −1; 1 0], with commutation relations [E_1, E_2] = E_3, [E_2, E_3] = E_1, [E_3, E_1] = E_2 (verified numerically below).

Problem 2.2.14. Find a basis for so(3).
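The commutation relations in Example 2.2.13 can be verified directly; a minimal sketch (assuming numpy):

```python
import numpy as np

E1 = 0.5 * np.array([[1j, 0], [0, -1j]])
E2 = 0.5 * np.array([[0, 1j], [1j, 0]])
E3 = 0.5 * np.array([[0, -1], [1, 0]], dtype=complex)

def bracket(X, Y):
    return X @ Y - Y @ X

print(np.allclose(bracket(E1, E2), E3))  # [E1, E2] = E3
print(np.allclose(bracket(E2, E3), E1))  # [E2, E3] = E1
print(np.allclose(bracket(E3, E1), E2))  # [E3, E1] = E2
```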

2.3 Mixed Topics: Homomorphisms and Complexification

Theorem 2.3.1. Let G, H be matrix Lie groups with Lie algebras g and h, and let Φ : G → H be a Lie group homomorphism. Then there exists a unique real-linear map φ : g → h with Φ(e^X) = e^{φ(X)} for all X ∈ g. The map φ has the following properties: (i) φ(AXA^{-1}) = Φ(A)φ(X)Φ(A)^{-1} for all X ∈ g, A ∈ G; (ii) φ([X, Y]) = [φ(X), φ(Y)] for all X, Y ∈ g; (iii) φ(X) = d/dt Φ(e^{tX})|_{t=0} for all X ∈ g.

Proof. Since Φ is continuous, t ↦ Φ(e^{tX}) is a one-parameter subgroup of H. By Theorem 1.6.5, there exists a unique Z such that Φ(e^{tX}) = e^{tZ} for all t ∈ R. Define φ(X) = Z; we check that φ has the stated properties. First, Φ(e^X) = e^Z = e^{φ(X)}. Since Φ(e^{tsX}) = e^{tsZ}, we get φ(sX) = sφ(X). Then, by the Lie product formula and the continuity of Φ,

e^{tφ(X+Y)} = Φ(lim_{m→∞}(e^{tX/m}e^{tY/m})^m) = lim_{m→∞}(Φ(e^{tX/m})Φ(e^{tY/m}))^m = lim_{m→∞}(e^{tφ(X)/m}e^{tφ(Y)/m})^m = e^{t(φ(X)+φ(Y))}.

Differentiating at t = 0 gives φ(X + Y) = φ(X) + φ(Y), so φ is real-linear. Next we check uniqueness: if both φ and φ′ have the above property, then e^{tφ(X)} = Φ(e^{tX}) = e^{tφ′(X)}; differentiating at t = 0 gives φ(X) = φ′(X).

Now we verify the remaining properties. For A ∈ G, e^{tφ(AXA^{-1})} = Φ(e^{tAXA^{-1}}) = Φ(A)Φ(e^{tX})Φ(A)^{-1} = Φ(A)e^{tφ(X)}Φ(A)^{-1}; differentiating at t = 0 gives (i). As before, φ([X, Y]) = φ(d/dt e^{tX}Ye^{-tX}|_{t=0}) = d/dt φ(e^{tX}Ye^{-tX})|_{t=0} = d/dt Φ(e^{tX})φ(Y)Φ(e^{tX})^{-1}|_{t=0} = d/dt e^{tφ(X)}φ(Y)e^{-tφ(X)}|_{t=0} = [φ(X), φ(Y)], which gives (ii). Finally, Φ(e^{tX}) = e^{φ(tX)} = e^{tφ(X)}; differentiating at t = 0 gives (iii). □

Proposition 2.3.2. Let G, H, K be matrix Lie groups, and let Φ : H → K and Ψ : G → H be Lie group homomorphisms. Let Λ : G → K be Λ = Φ ∘ Ψ, and let φ, ψ, λ be the Lie algebra maps associated to Φ, Ψ, Λ. Then λ = φ ∘ ψ.

Proof. For X ∈ g, Λ(e^{tX}) = Φ(Ψ(e^{tX})) = Φ(e^{tψ(X)}) = e^{tφ(ψ(X))}. Differentiating at t = 0 gives λ(X) = φ(ψ(X)). □

Bibliography

[1] Brian C. Hall, Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics 222, Springer.
