
Notes on Enveloping Algebras (following Dixmier)

Danny Crytser Summer 2019

Abstract. The purpose of these notes is to provide a more-or-less self-contained proof of Theorem 5.1, which asserts that (borrowing some C∗-algebraic jargon) the universal enveloping algebra U = U(g) of a finite-dimensional Lie algebra g is residually finite-dimensional, in the sense that given any nonzero element u of U we can produce a finite-dimensional representation π of U that does not vanish on u. In fact, we prove something stronger: we can arrange for the intersection of ker π and U^d to be trivial, where U^d is the subspace spanned by the images of symmetric homogeneous tensors of degree d. These notes follow [2, Ch. 2] very closely – mostly they were written to digest and provide additional background information. A reader who knows Lie's theorem on solvable Lie algebras and Engel's theorem on nilpotent Lie algebras will have adequate background knowledge.

Contents

1 The Poincaré-Birkhoff-Witt Theorem

2 Functorial properties of U

3 The symmetrization map

4 Existence of finite-dimensional representations
  4.1 Nilpotency ideals

5 Main result: residual finiteness of U(g)

1 The Poincaré-Birkhoff-Witt Theorem

Definition 1.1. Let A be an associative algebra. By A^− we denote the Lie algebra that has A as its underlying vector space and bracket given by [x, y] = xy − yx. This is called the underlying Lie algebra of A.

Remark 1.1. There is a faithful functor Alg → LieAlg which is given by A ↦ A^− on objects and the identity on morphisms. It is kind of like a forgetful functor, so we seek its left adjoint. (One could probably try to invoke some heavy-duty Freyd adjoint functor theorem to prove that it is a right adjoint, but who has the energy?) The rough idea of a (universal) enveloping algebra is to reverse the construction in Definition 1.1: we take a Lie algebra g and insert it in an associative algebra U in such a way that it generates the algebra (as an algebra! – not as a Lie algebra). The algebra representations of the algebra U ought to correspond precisely with the Lie algebra representations of g. The idea of the construction is pretty simple: just take the elements of g and treat them as elements in an associative algebra, while maintaining all relations coming from g. This corresponds to forming a quotient of the tensor algebra of g.

Definition 1.2. Let g be a Lie algebra. Let T^0 = C (or whatever the ground field is), let T^1 = g, and for n > 1 let

T^n = T^n(g) = g ⊗ g ⊗ ... ⊗ g (n-fold tensor product).

Set T = T(g) = ⊕_{n=0}^∞ T^n, which we refer to as the tensor algebra of g (here we are only treating g as a vector space). Remark 1.2. The tensor algebra provides a good example of an inclusion g ↪ T which is not a Lie algebra homomorphism (into T^−):

x ⊗ y − y ⊗ x ≠ [x, y].
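
For instance, in T(sl(2, C)) with standard basis (e, h, f) and [e, f] = h, the element e ⊗ f − f ⊗ e lies in T^2 while [e, f] = h lies in T^1, so the two are certainly not equal in T; the enveloping algebra constructed below is designed precisely to identify them.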

Definition 1.3. Let g and T = T(g) be as above. Define J to be the two-sided ideal of T generated by all elements of the form

x ⊗ y − y ⊗ x − [x, y] ∈ T^2 ⊕ T^1 ⊂ T.

Denote T/J by U = U(g). We refer to this algebra as the universal enveloping algebra of g. Remark 1.3. In view of Remark 1.2, it is easy to see that U is the largest quotient of T in which the composite map g ↪ T ↠ U(g) becomes a Lie algebra homomorphism. Definition 1.4. The composite map g ↪ T ↠ U(g) is denoted by σ and termed the canonical mapping of g into U(g). Remark 1.4. If g is abelian (trivial Lie bracket), then J is equal to the ideal generated by all x ⊗ y − y ⊗ x, and U is equal to the symmetric algebra of g. Remark 1.5. The ideal J in the definition of U is generated by elements belonging to the two-sided ideal T+ = T^1 ⊕ T^2 ⊕ ..., the subalgebra of T generated by g, which has trivial intersection with T^0 = C. Thus the map T^0 → U is injective.
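
For example, if g is abelian of dimension n with basis (x1, . . . , xn), then J is generated by the elements xi ⊗ xj − xj ⊗ xi, and U(g) is (isomorphic to) the polynomial algebra C[x1, . . . , xn]; this is Remark 1.4 in coordinates.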

Definition 1.5. We denote the image of T+ in U(g) by U+(g). (This is the subalgebra of U generated by the image of σ.) We set U(g) = U^0 ⊕ U+(g), where U^0 = q(T^0). The summand U^0 is identified with the ground field. For u ∈ U we refer to its projection onto the ground field as the constant term of u (imagining u as a polynomial). Note that U is generated as an algebra by {1} ∪ σ(g).

We are finally ready for our first result pertaining to U, which justifies its lofty title. Lemma 1.1 (cf. [2, Lemma 2.1.3]). Let σ be the canonical mapping of g into U(g), let A be a unital associative algebra, and let τ : g → A^− be a Lie algebra homomorphism. There exists a unique unital algebra homomorphism τ′ : U → A such that τ′ ◦ σ = τ.

Proof. The uniqueness is evident from the fact that U is generated by {1} ∪ σ(g). Let the inclusion g ↪ T be denoted by ι. The universal property of the tensor algebra of g provides an algebra homomorphism τ̃ : T → A satisfying τ̃ ◦ ι = τ, which is unique if we require it to be unital. In particular, τ̃(x ⊗ y) = τ(x)τ(y). From this and the fact that τ is a Lie algebra homomorphism we see that τ̃ vanishes on the two-sided ideal J discussed above. It therefore factors through the quotient q : T ↠ U to provide an algebra homomorphism τ′ : U → A satisfying τ̃ = τ′ ◦ q. Then we have τ′ ◦ σ = τ′ ◦ q ◦ ι = τ̃ ◦ ι = τ.

Remark 1.6. Lemma 1.1 shows that we have constructed a left adjoint to the "forgetful" functor A ↦ A^−. If A is any associative algebra, we have a natural bijection Hom_LieAlg(g, A^−) ≅ Hom_Alg(U(g), A). Lemma 1.1 provides the left-to-right map, and the right-to-left map is given by composition with σ (although we will shortly see that g embeds injectively in U via σ). Definition 1.6. Assume that g is finite-dimensional (although as in [4] it's possible to make this work for countable-dimensional Lie algebras as well) with ordered basis (x1, x2, . . . , xn). Set yi := σ(xi). If I = (i1, . . . , ip) ∈ {1, 2, . . . , n}^p let yI = yi1 yi2 . . . yip. If i ∈ Z, write i ≤ I if for all k we have i ≤ ik. If q : T ↠ U is the quotient map, we set Ud(g) to be q(T^0 + T^1 + ... + T^d). We also allow an empty string I (of length 0), with y∅ = 1 ∈ U.

We regard the Ud as the polynomials of degree ≤ d. The next Lemma shows that we can rearrange any monomial at the cost of introducing lower-order terms.

Lemma 1.2 ([2, Lem. 2.1.5]). Let a1, . . . , ap ∈ g and let σ : g → U be the canonical mapping. If π is a permutation of the set {1, 2, . . . , p} then

σ(a1)σ(a2) . . . σ(ap) − σ(aπ(1))σ(aπ(2)) . . . σ(aπ(p)) ∈ Up−1(g).

Proof. Every π can be written as a product of adjacent transpositions of the form (j, j + 1), and by a telescoping argument it suffices to treat the case where π is such a transposition. Then

σ(a1) . . . σ(ap) − σ(aπ(1)) . . . σ(aπ(p))

= σ(a1) . . . σ(aj)σ(aj+1) . . . σ(ap) − σ(a1) . . . σ(aj+1)σ(aj) . . . σ(ap)

= σ(a1) . . . σ(aj−1)[σ(aj), σ(aj+1)]σ(aj+2) . . . σ(ap)

= σ(a1) . . . σ(aj−1)σ([aj, aj+1])σ(aj+2) . . . σ(ap) ∈ Up−1(g).
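
For instance, in U(sl(2, C)) with [e, f] = h we have σ(f)σ(e) = σ(e)σ(f) − σ(h), so the monomials fe and ef agree modulo U1(g).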

Lemma 1.2 immediately implies the following.

Lemma 1.3 ([2, Lem. 2.1.6]). The set of monomials {yI : I increasing of length ≤ p} spans the space Up(g).

Definition 1.7. Let P = C[z1, . . . , zn] be the algebra of complex polynomials in n indeterminates. Set Pi to be the subspace consisting of elements of P of total degree ≤ i. For I = (i1, . . . , ip) ∈ {1, . . . , n}^p set zI = zi1 . . . zip. Remark 1.7. The following rather technical result is used to prove that the increasing monomials are linearly independent. Lemma 1.4. For every integer p ≥ 0 there exists a unique linear mapping fp : g ⊗ Pp → P satisfying the following conditions:

(Ap) fp(xi ⊗ zI ) = zizI if i ≤ I and zI ∈ Pp;

(Bp) fp(xi ⊗ zI ) − zizI ∈ Pq for zI ∈ Pq, q ≤ p;

(Cp) fp(xi ⊗ fp(xj ⊗ zJ )) = fp(xj ⊗ fp(xi ⊗ zJ )) + fp([xi, xj] ⊗ zJ ) for zJ ∈ Pp−1.

Moreover fp|g⊗Pp−1 = fp−1.

Proof. The condition A0 implies that f0(xi ⊗ 1) = zi. This implies B0 and C0 both hold. The main issue is to extend fp−1 inductively to fp. Define fp(xi ⊗ zI ) = zizI as long as i ≤ I and zI ∈ Pp. If i is not bounded by I then I = (j, J) where j < i and j ≤ J. (If i is at most the least element of I, it's bounded by everything in I.) Note that zI = fp−1(xj ⊗ zJ ). Then

fp(xi ⊗ zI ) = fp(xi ⊗ fp−1(xj ⊗ zJ ))

= fp(xj ⊗ fp−1(xi ⊗ zJ )) + fp−1([xi, xj] ⊗ zJ ).

Now fp−1(xi ⊗ zJ )) = zizJ + w for some w ∈ Pp−1 by induction. So we can define

fp(xj ⊗ fp−1(xi ⊗ zJ )) = zjzizJ + fp−1(xj ⊗ w)

= zizI + fp−1(xj ⊗ w).

This is well defined as a linear map (we've basically given its value on a basis of g ⊗ Pp). We need to verify that properties Ap, Bp, and Cp all hold. The first holds basically by definition, and fp(xi ⊗ zI ) − zizI either equals 0 or fp−1(xj ⊗ w) + fp−1([xi, xj] ⊗ zJ ) ∈ Pq as long as zI ∈ Pq and q ≤ p. The condition Cp is harder to check. We rewrite it as

xi(xjzJ ) − xj(xizJ ) = [xi, xj]zJ, where xz := fp(x ⊗ z) for x ∈ g and z ∈ Pp. If j ≤ i and j ≤ J, then Cp holds by the way that we have constructed fp in the previous paragraph. Switching xi and xj we see that Cp still holds, by negating both sides. This implies that Cp holds whenever i ≤ J or j ≤ J. It remains to check the case where J = (k, K) with k ≤ K and k < i, k < j (again remembering that J is an increasing sequence). Since K is shorter than J, we can use induction to get

xjzJ = xj(xkzK ) = xk(xjzK ) + [xj, xk]zK . (1)

By Bp−1 we know that xjzK = zjzK + w where w ∈ Pp−2. As k ≤ K and k < j we obtain xi(xk(zjzK )) = xk(xi(zjzK )) + [xi, xk](zjzK ); by induction (on p) we have xi(xkw) = xk(xi(w)) + [xi, xk](w). Linearity of the map and the equation xjzK = zjzK + w implies that xi(xk(xjzK )) = xk(xi(xjzK )) + [xi, xk](xjzK ). Now we can calculate again, using (1) above as well as the general fact (*) that A[B,C] = [B,C]A + [A, [B,C]], to get

xi(xjzJ ) = xi(xk(xjzK ) + [xj, xk]zK )

= xk(xi(xjzK )) + [xi, xk](xjzK ) + xi[xj, xk]zK

= xk(xi(xjzK )) + [xi, xk](xjzK ) + [xj, xk](xizK ) + [xi, [xj, xk]]zK . Since our only condition on i and j was that i, j > k, we can interchange them to obtain

xj(xizJ ) = xk(xj(xizK )) + [xj, xk](xizK ) + [xi, xk](xjzK ) + [xj, [xi, xk]]zK .

Subtracting the two equations, and again using (*), we obtain that

xi(xjzJ ) − xj(xizJ )

= xk(xi(xjzK )) − xk(xj(xizK )) + [xi, [xj, xk]]zK − [xj, [xi, xk]]zK

= [xi, xj](xkzK ) + [xk, [xi, xj]]zK + [xi, [xj, xk]]zK − [xj, [xi, xk]]zK

= [xi, xj](xkzK )

= [xi, xj]zJ ,

where in the second equality we use induction on the length of K (to collapse xi(xjzK ) − xj(xizK ) to [xi, xj]zK ) together with the already-established case of (Cp) for the pair (xk, [xi, xj]) (valid since k ≤ K), in the third we apply the Jacobi identity, and in the last we use property Ap and the fact that k ≤ K.

Remark 1.8. We can view the sequence of fp as comprising a single linear map f : g ⊗ P → P, which endows P with the structure of a g-module. The main feature of this action is that xi.zI = zizI as long as i ≤ I.

Lemma 1.5 ([2, Lem 2.1.8]). The set {yI : I increasing} forms a basis for U(g). Proof. This follows rather easily from Remark 1.8 and Lemma 1.3. Let 1 ∈ P be the unit; the g-module structure on P extends to a U(g)-module structure, and the map U(g) ∋ y ↦ y.1 is linear in y. If I is an increasing finite sequence of indices, then we have yI .1 = zI , and the set {zI } is linearly independent. A nontrivial linear relation among the yI would therefore give a nontrivial relation among the zI , which is impossible. Hence {yI : I increasing} must be linearly independent. Lemma 1.3 implies that the same set spans, so it is a basis. Proposition 1.1. The canonical mapping of g into U(g) is injective.

Proof. The canonical map σ sends each basis vector xi ∈ g to the corresponding yi (where we can view i as an increasing sequence of length 1), and the set {yi} is linearly independent by Lemma 1.5. Remark 1.9. The map σ that embeds g into U(g) is henceforth suppressed. The following is a restatement of Lemma 1.5.

Theorem 1.1 (Poincaré-Birkhoff-Witt). Let (x1, . . . , xn) be a basis for the vector space g. Then the elements x1^{k1} x2^{k2} . . . xn^{kn}, where kj ∈ N ∪ {0}, form a basis for U(g).
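
For example, for g = sl(2, C) with ordered basis (e, h, f) and relations [e, f] = h, [h, e] = 2e, [h, f] = −2f, the PBW basis of U(g) consists of the monomials e^a h^b f^c with a, b, c ≥ 0.

The straightening procedure implicit in Lemma 1.2 and Theorem 1.1 (repeatedly rewrite yx as xy + [y, x] at each out-of-order adjacent pair) is easy to mechanize. Below is a minimal Python sketch for sl(2, C); the names straighten, BRACKET and ORDER are ad hoc choices for this illustration, not notation from [2].

    # Rewrite words in the generators e, h, f of U(sl2) into the PBW basis.
    # Order on the PBW basis: e < h < f.
    ORDER = {'e': 0, 'h': 1, 'f': 2}

    # Structure constants of sl2: [e,f] = h, [h,e] = 2e, [h,f] = -2f.
    # BRACKET[(a, b)] is [a, b] written as a list of (coefficient, generator) pairs.
    BRACKET = {
        ('e', 'f'): [(1, 'h')], ('f', 'e'): [(-1, 'h')],
        ('h', 'e'): [(2, 'e')], ('e', 'h'): [(-2, 'e')],
        ('h', 'f'): [(-2, 'f')], ('f', 'h'): [(2, 'f')],
    }

    def straighten(word, coeff=1, out=None):
        """Express a word in the generators as a linear combination of
        nondecreasing (PBW) monomials, using ba = ab + [b, a]."""
        if out is None:
            out = {}
        for k in range(len(word) - 1):
            a, b = word[k], word[k + 1]
            if ORDER[a] > ORDER[b]:              # first out-of-order adjacent pair
                swapped = word[:k] + (b, a) + word[k + 2:]
                straighten(swapped, coeff, out)   # same length, one fewer inversion
                for c, g in BRACKET[(a, b)]:      # lower-order correction term
                    straighten(word[:k] + (g,) + word[k + 2:], coeff * c, out)
                return out
        out[word] = out.get(word, 0) + coeff      # word is already nondecreasing
        return out

    print(straighten(('f', 'e')))       # {('e','f'): 1, ('h',): -1}, i.e. fe = ef - h
    print(straighten(('f', 'e', 'e')))  # fee = e^2 f - 2eh - 2e

Each recursive call either shortens the word or reduces its number of inversions, so the procedure terminates, and the output dictionary records the coordinates of the input monomial in the PBW basis.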

2 Functorial properties of U

Proposition 2.1. Let A be a unital algebra and τ : g → A^− a Lie algebra homomorphism. Then there is a unique unital algebra homomorphism τ̃ : U(g) → A such that τ̃|g = τ. Proof. This is a restatement of Lemma 1.1.

Remark 2.1. If we fix a vector space V, then there is a one-to-one correspondence between representations of g on V and representations of U(g) on V, described essentially by Lemma 1.1. This correspondence extends to the submodule structure. Dixmier uses the same symbol to refer to ρ : g → gl(V) and U(ρ) : U(g) → End(V). The only noteworthy thing is that, unless otherwise indicated, the kernel of ρ refers to the larger set (i.e. the zero set in U(g)).

Proposition 2.2. Let g, g′ be Lie algebras and φ : g → g′ a Lie algebra homomorphism. Then there exists a unique unital algebra homomorphism φ̃ : U(g) → U(g′) such that φ̃|g = φ. Proof. Simply regard φ as a map into U(g′) ⊃ g′ and apply Proposition 2.1.

Proposition 2.3 ([2, Prop. 2.2.4]). Let g′ ⊂ g be a Lie subalgebra. Then there is an algebra embedding U(g′) ⊂ U(g).

Proof. Let ι : g′ → g be the inclusion map. We will show that U(ι) is injective. Take a basis (x1, . . . , xm) for g′ and extend it to a basis (x1, . . . , xn) for g. The image under U(ι) of any basis element x1^{k1} . . . xm^{km} ∈ U(g′) is a basis element in U(g) (with exponents km+1, . . . , kn all equal to 0). Since U(ι) sends basis elements to distinct basis elements it is injective. Remark 2.2. Henceforth U(g′) is considered as a subalgebra of U(g) and the embedding U(ι) is suppressed. Also we use the notation g′ ≤ g to indicate a Lie algebra containment (i.e. that g′ is a Lie subalgebra of g).

Proposition 2.4 ([2, Prop. 2.2.7]). Let g′ ≤ g and let (y1, . . . , yq) be a basis for a (vector space) complement of g′ in g. Then the set {y1^{k1} . . . yq^{kq} : kj ∈ N ∪ {0}} is a U(g′)-module basis for U(g). Proof. Fairly straightforward, just regrouping basis elements from the PBW theorem. Proposition 2.5 ([2, Prop. 2.2.8]). Let h and f be Lie subalgebras of g such that g = h + f. Let l = h ∩ f. Consider U(h) as a right U(l)-module and U(f) as a left U(l)-module. Then there is a unique linear map F : U(h) ⊗_{U(l)} U(f) → U(g) satisfying F(v ⊗ w) = vw for v ∈ U(h) and w ∈ U(f). The map F is a bijection. Proof. Consider the map (v, w) ↦ vw from U(h) × U(f) to U(g). This is clearly U(l)-balanced, so it descends to a map U(h) ⊗_{U(l)} U(f) → U(g). Uniqueness is automatic because the elementary tensors [v ⊗ w] span the domain (brackets around the tensor denote the equivalence class in the U(l)-balanced tensor product, as opposed to the ordinary tensor product over C). It remains to prove that F is bijective. Let (a1, . . . , am) be a basis for h and (b1, . . . , bn) a basis for a supplement of l in f. (Note that the dimension of h + f is m + n.) Claim: the set of all tensors of the form

a1^{k1} . . . am^{km} ⊗ b1^{j1} . . . bn^{jn} forms a basis for the vector space U(h) ⊗_{U(l)} U(f). (We will call these tensors distinguished. Dixmier rather cavalierly says that Prop. 2.4 implies this without spelling out the details.) First we show that these tensors span the vector space. Let v = Σ_i zi ai ∈ h and w = ℓ + Σ_j z′j bj ∈ f, where ℓ ∈ l, say ℓ = Σ_k tk ak. Now putting everything together (suppressing the balanced tensor product brackets) we get

v ⊗ w = (Σ_i zi ai) ⊗ (Σ_k tk ak + Σ_j z′j bj) = Σ_{i,k} zi tk ai ak ⊗ 1 + Σ_{i,j} zi z′j ai ⊗ bj.

This shows that the elementary tensors v ⊗ w, for v ∈ h and w ∈ f, lie in the span of the distinguished tensors. From this and Poincaré-Birkhoff-Witt it is straightforward to replace v with an element of U(h) and w with an element of U(f), at the cost of a lot of complicated bookkeeping. Linear independence seems a bit more complicated – I think that the way to do it is to show that the images under F of the distinguished tensors are linearly independent monomials in U(g) (by Proposition 2.4 applied to the subalgebra h ≤ g), so the distinguished tensors themselves must be linearly independent.

Proposition 2.6. Let g1, . . . , gn be Lie subalgebras of g such that g = g1 ⊕ . . . ⊕ gn as a vector space. There exists one and only one linear mapping f : U(g1) ⊗ . . . ⊗ U(gn) → U(g) satisfying f(u1 ⊗ . . . ⊗ un) = u1 . . . un for u1, . . . , un in the respective factors. The map f is bijective. Proof. This is a corollary of Proposition 2.5, using induction and the fact that U(0) = C. Remark 2.3. The isomorphism from Proposition 2.6 is called canonical, identifying the two vector spaces. If (and only if) g1 ⊕ . . . ⊕ gn is a Lie algebra direct product (i.e. [gi, gj] = 0 for i ≠ j), then f is an algebra isomorphism.
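
For instance, if g1 and g2 are 1-dimensional (hence abelian) and g = g1 × g2, then f identifies U(g1) ⊗ U(g2) ≅ C[x] ⊗ C[y] with U(g) ≅ C[x, y] (cf. Remark 1.4).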

Corollary 2.1 ([2, Cor. 2.2.12]). Let g1, . . . , gn be Lie algebras and g their product. The canonical vector space isomorphism f : U(g1) ⊗ . . . ⊗ U(gn) → U(g) is an algebra isomorphism. Proof. In a tensor product of algebras (a ⊗ 1)(1 ⊗ b) = (1 ⊗ b)(a ⊗ 1). Thus in order for f to be an algebra map the elements in distinct factors must commute – this is exactly the condition that we have a Lie algebra product.

Proposition 2.7 ([2, Prop. 2.2.14]). Let h be an ideal of g. (i) The left ideal R of U(g) generated by h is equal to the right ideal of U(g) generated by h. (ii) Let j : g → g/h be the Lie algebra quotient. The homomorphism U(j): U(g) → U(g/h) is surjective with kernel R.

Proof. By U+(h) we denote the subalgebra of U(h) generated by h. Let (x1, . . . , xm) be a basis for a vector space complement of h in g. For i = 1, . . . , m set yi = j(xi) ∈ g/h. Proposition 2.4 implies that

U(g) = ⊕_{k1,...,km ≥ 0} x1^{k1} . . . xm^{km} U(h).

Let ε : U(h) → C send each v to its constant term (in C = U^0). Using the decomposition above, U(j) sends Σ_k x1^{k1} . . . xm^{km} vk (with vk ∈ U(h)) to Σ_k ε(vk) y1^{k1} . . . ym^{km}, since U(j) kills h and hence U+(h). By PBW applied to g/h this shows that U(j) is surjective and that its kernel is Σ_k x1^{k1} . . . xm^{km} U+(h); because U+(h) = U(h)h, this kernel is contained in the left ideal generated by h, and since ker U(j) is a two-sided ideal containing h it also contains that left ideal, so ker U(j) = R. The same argument, writing U(g) = Σ_k U(h) x1^{k1} . . . xm^{km} instead, identifies ker U(j) with the right ideal generated by h. This proves (i) and (ii).

Definition 2.1 (Principal anti-automorphism of U(g)). The map u1 u2 . . . un ↦ (−1)^n un un−1 . . . u1 (for u1, . . . , un ∈ g) is termed the principal anti-automorphism of U(g). On g ⊂ U(g) it is given by x ↦ −x; it is the unique unital algebra anti-automorphism of U(g) which is given by x ↦ −x on g. It is denoted by u ↦ u^T for u ∈ U(g). Definition 2.2 (g-module structure on U(g)). For all u ∈ U(g) let L(u) and R(u) denote the mappings v ↦ uv and v ↦ vu of U(g) into itself. The associativity of U(g) implies that L is an algebra representation and R an anti-representation; these are termed the left and right regular representations of U(g). The mapping g ∋ x ↦ L(x) ∈ End(U(g)) is called the left regular representation of g. The mapping u ↦ R(u^T) is an algebra representation (verified just below) and the restriction g ∋ x ↦ −R(x) is termed the right regular representation of g in U(g).
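
To verify the parenthetical claim, note that for u, v, w ∈ U(g) we have R(uv)w = w(uv) = (wu)v = R(v)(R(u)w), so R(uv) = R(v)R(u), i.e. R is an anti-homomorphism. Combining this with the anti-automorphism u ↦ u^T gives R((uv)^T) = R(v^T u^T) = R(u^T)R(v^T), so u ↦ R(u^T) is indeed a unital algebra homomorphism.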

For all u, v ∈ U(g) it is immediate from associativity that R(u)L(v) = L(v)R(u). The mapping g ∋ x ↦ L(x) − R(x) is a Lie representation of g, which we'll denote a. This is nothing but the adjoint representation of the Lie algebra U(g)^− restricted to the Lie subalgebra g. Definition 2.3. Let n ≥ 0 be an integer. The vector subspace of U(g) spanned by the products x1 x2 . . . xn, where x1, . . . , xn ∈ g ∪ {1}, is denoted by Un(g). (In the case n = 0, we only take the empty product, which equals 1 ∈ U(g), and generates C.) The following claims are stated without proof in [2, 2.3.1].

Lemma 2.1. Let Un(g) be as in Definition 2.3.

(i) For all n ≥ 0 we have Un(g) ⊂ Un+1(g).

(ii) The union ∪_{n=0}^∞ Un(g) = U(g).

(iii) The first part U1(g) equals C ⊕ g.

(iv) There is a grading-like structure Un(g)Up(g) ⊂ Un+p(g).

Definition 2.4. The sequence (Un(g))n≥0 is called the canonical filtration of U(g). If u ∈ U(g) then the smallest integer n such that u ∈ Un(g) is called the filtration of u.

Remark 2.4. If (e1, . . . , er) is a basis for g then the elements of the form e1^{k1} . . . er^{kr} with k1 + . . . + kr ≤ n together form a basis for Un(g). This follows by expanding the x1, . . . , xn relative to the basis, multiplying out, and straightening via Lemma 1.2.
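
In particular, the number of such monomials, and hence dim Un(g), is the binomial coefficient C(n + r, r): for g = sl(2, C) (so r = 3) and n = 2 this gives C(5, 3) = 10, namely 1, e, h, f, e^2, eh, ef, h^2, hf, f^2.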

Definition 2.5 (Graded algebra G). Let G^n = Un(g)/Un−1(g) be the quotient vector space and define G = G^0 ⊕ G^1 ⊕ .... Convention: U−1(g) = 0, so that G^0 = C. The multiplication in U(g) defines a bilinear mapping G^m × G^n → G^{m+n}. This endows G with the structure of a unital associative algebra. Lemma 2.2. The algebra G is commutative.

Proof. It is clear that G^1 = g generates G as an algebra, and that for g1, g2 ∈ g we have g1g2 ∈ U2(g) and g1g2 − g2g1 = [g1, g2] ∈ U1(g). Thus, in the quotient G^2 we have g1g2 = g2g1. Similar reasoning shows that this holds for all summands, using Lemma 1.2. Remark 2.5. The symmetric algebra of g is universal for linear maps into commutative algebras. Hence we obtain a (unique) homomorphism φ : S(g) → G satisfying φ(1) = 1. This is referred to as the canonical homomorphism. If S^n(g) denotes the subspace of degree n homogeneous elements, then φ(S^n(g)) ⊂ G^n.

Proposition 2.8 ([2, Prop. 2.3.6]). The canonical homomorphism φ : S(g) → G is an isomorphism.

Proof. This is really straightforward from Definition 2.3 and Remark 2.4. The basis elements in S^n(g) have the form

e_{k1} e_{k2} . . . e_{kn}, for a sequence k1 ≤ k2 ≤ ... ≤ kn. The map φ sends such an element to the class in G^n of e_{k1} . . . e_{kn}, one of the basis elements of Un(g) that does not belong to Un−1(g). Definition 2.6. A ring is called Noetherian if it satisfies the maximal condition for left ideals and right ideals. That is, every nonempty set of left ideals has a maximal element (with respect to inclusion), and similarly for right ideals. The following result is not too hard to prove, but we omit the proof here. Theorem 2.1 ([5, Thm. 6.9]). If S is a filtered ring and the associated graded ring gr S is right Noetherian, then S is right Noetherian. Remark 2.6. By considering opposite rings we obtain the corresponding result for left Noetherian rings. Definition 2.7. A (possibly noncommutative) algebra is said to be Noetherian if it satisfies the ascending chain condition for left ideals and for right ideals. Corollary 2.2. The universal enveloping algebra U(g) of a finite-dimensional Lie algebra g is Noetherian. Proof. The graded algebra G is commutative and generated by a finite set of elements (namely those corresponding to a basis for g). Thus the Hilbert basis theorem implies that G is Noetherian, and then Theorem 2.1 and the remark which follows it together imply that U(g) is Noetherian.

3 The symmetrization map

Let n ≥ 0 be an integer, let T^n(g) = g ⊗ ... ⊗ g (n-fold), let S^n(g) be the set of homogeneous elements of degree n in the symmetric algebra S(g), and let G^n(g) = Un(g)/Un−1(g). Consider the diagram below:

T^n(g) --ψn--> Un(g)
   |              |
   τn             θn
   ↓              ↓
S^n(g) --φn--> G^n(g)

Lemma 3.1 ([2, Lem. 2.4.2]). The diagram above commutes.

Proof. Let x1, . . . , xn ∈ g. Then ψn(x1 ⊗ ... ⊗ xn) = x1 . . . xn in U(g), and hence θn(ψn(x1 ⊗ ... ⊗ xn)) = x1 . . . xn in G = G^0(g) ⊕ G^1(g) ⊕ .... Similarly, τn(x1 ⊗ ... ⊗ xn) = x1 . . . xn ∈ S(g), hence φn(τn(x1 ⊗ ... ⊗ xn)) = x1 . . . xn ∈ G.

Remark 3.1. This proof doesn't say much, does it? Definition 3.1. An element of U(g) is said to be symmetric homogeneous of degree n if it is the canonical image in U(g) of a symmetric homogeneous tensor of degree n over g. [Here the map is ψn.] The set of elements of U(g) which are symmetric homogeneous of degree n is denoted by U^n(g).

Proposition 3.1 ([2, Prop. 2.4.4]). There is a direct sum decomposition Un(g) = Un−1(g) ⊕ U^n(g). Proof. Let us use the notation of the diagram above. Let T′ = T′^n(g) ⊂ T^n(g) be the set of symmetric tensors in T^n(g). Then τn|T′ is a bijection onto S^n(g). By Proposition 2.8 we know that φn is a bijection. This implies that θn ◦ ψn|T′ is a bijection of T′ onto G^n(g). This implies, by the quotient vector space construction, that ψn|T′ is a bijection onto a complement of Un−1(g) in Un(g). This gives the direct sum decomposition: U^n(g) = ψn(T′), etc. Definition 3.2. The diagram above combined with the preceding discussion gives the commutative diagram of bijections

T′^n(g) --ψn--> U^n(g)
    |              |
    τn             θn
    ↓              ↓
S^n(g) --φn--> G^n(g)

This gives us a bijection ωn, termed canonical, of S^n(g) onto U^n(g). It is defined on products of basis elements via

ωn(x1 x2 . . . xn) = (1/n!) Σ_{π∈Sn} x_{π(1)} x_{π(2)} . . . x_{π(n)}.

Remark 3.2. I don't really see where the canonical homomorphism comes from. Definition 3.3 (Following [3]). Let V be a vector space and A an associative algebra. Say that a linear map f : S(V) → A is a symmetrization map if f(x^n) = f(x)^n for all x ∈ V. A (really, the) universal symmetrization map with values in an associative algebra Q is a symmetrization map g : S(V) → Q such that if f : S(V) → A is another symmetrization map, then there is an associative algebra map h : Q → A so that h ◦ g = f. Remark 3.3. The uniqueness (up to isomorphism) of a universal symmetrization map is standard stuff. Some simple calculations: f((x + y)^2) = f(x + y)^2 = (f(x) + f(y))^2 = f(x)^2 + f(x)f(y) + f(y)f(x) + f(y)^2, but also f((x + y)^2) = f(x^2) + f(xy) + f(yx) + f(y^2).

But xy = yx in the symmetric algebra S(V), hence after some cancellation we obtain 2f(xy) = f(x)f(y) + f(y)f(x), whence f(xy) = (1/2)(f(x)f(y) + f(y)f(x)). Remark 3.4. The associative algebra that is the codomain of the symmetrization map is almost never commutative, so we don't expect f to be an algebra homomorphism.

Now let x1, . . . , xn be a basis for V and let t1, . . . , tn be scalar indeterminates. If j : S(V ) → A is a symmetrization map then

j((t1x1 + ... + tnxn)^n) = j(t1x1 + ... + tnxn)^n.

Both of these are polynomials in the indeterminates t1, . . . , tn with coefficients in A. The coefficient in front of t1 · ... · tn on the left-hand side is n! · j(x1 . . . xn), whereas on the right it is

Σ_{π∈Sn} j(x_{π(1)}) j(x_{π(2)}) . . . j(x_{π(n)}).

This finally gives the formula

j(x1 . . . xn) = (1/n!) Σ_{π∈Sn} j(x_{π(1)}) . . . j(x_{π(n)}).
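
For n = 2 this recovers f(xy) = (1/2)(f(x)f(y) + f(y)f(x)) from Remark 3.3. Note also that the map ω = ⊕n ωn : S(g) → U(g) from Definition 3.2 satisfies ω(x^n) = x^n = ω(x)^n for every x ∈ g (since the symmetric tensor x ⊗ ... ⊗ x maps to x^n under ψn), so ω is a symmetrization map in the sense of Definition 3.3; the computation above then shows that any symmetrization map must be given on monomials by this averaging formula, which is at least one answer to Remark 3.2.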

4 Existence of finite-dimensional representations

Lemma 4.1 ([2, Lem. 2.5.1]). Let I1, . . . , Im be right (or left) ideals of finite codimension in U(g). Then the product ideal I1 I2 . . . Im has finite codimension. Proof. Induction allows us to focus on the case of m = 2 right ideals. The right U(g)-module I1 is generated by a finite set of elements u1, . . . , up (because we proved, somewhat vaguely, that as long as g is finite-dimensional the universal enveloping algebra is Noetherian). Let ν1, . . . , νq be elements of U(g) which span a vector space complement of I2 in U(g). Notice that I1 = {u1, . . . , up}U(g) and U(g) = I2 + span{ν1, . . . , νq}. This implies that

I1 = {u1, . . . , up}· U(g)

= {u1, . . . , up}· (I2 + span{ν1, . . . , νq})

⊆ I1I2 + span{uiνj}.

Thus each element of I1 is congruent modulo I1I2 to a linear combination of the uiνj. We have a chain of vector space inclusions: I1I2 ⊂ I1 ⊂ U(g). Consequently

dim(U(g)/I1I2) = dim(U(g)/I1) + dim(I1/I1I2) < ∞.

Here the first codimension is finite because I1 is assumed to have finite codimension, and the second is finite because we have given a finite spanning set for the vector space I1/I1I2, viz. {uiνj}. Remark 4.1. The fact that we are working with the universal enveloping algebra of a Lie algebra is not really being used here. All that is needed is that U(g) is a Noetherian algebra. Lemma 4.2 ([2, Lem. 1.4.5]). Let a be an ideal of g, V a finite-dimensional vector space, ρ : g → End(V) a simple representation such that each element of ρ(a) is nilpotent. Then ρ(a) = 0. Proof. Set W = {v ∈ V : ρ(a)v = 0}. Engel's theorem implies that W is non-trivial. Claim: W is ρ(g)-invariant. Let g ∈ g, a ∈ a, and v ∈ W. Then ρ(a)(ρ(g)v) = ρ(g)ρ(a)v + [ρ(a), ρ(g)]v = 0 + ρ([a, g])v = 0, where the first 0 comes from the fact that v ∈ W and the second comes from the fact that [a, g] ∈ a. Since ρ is simple and W is a nonzero invariant subspace, W = V, which is exactly the statement ρ(a) = 0. Definition 4.1 ([2, Defn. 1.2.6]). Let ρ : g → End(V) be a representation. A sequence (V0, V1, . . . , Vn) of sub-g-modules of V so that V = V0 ⊃ V1 ⊃ ... ⊃ Vn = 0 is termed a composition series of ρ (or of the g-module V). Such a composition series is termed a Jordan-Hölder series if each quotient Vi/Vi+1 is simple as a g-module. The following isn't stated in [2]. The first statement is standard (the length of a composition series for V is bounded by the dimension of V, so take a composition series of maximum length), the second part isn't too tricky but I'm too lazy to go through it. Lemma 4.3. If V is a finite-dimensional g-module, then we can always obtain a Jordan-Hölder series. Moreover any two Jordan-Hölder series are isomorphic up to permutation of quotients (in the same sense as the Jordan-Hölder theorem from the theory of finite groups). The following lemma isn't stated in [2], but it's useful to state. Lemma 4.4. Let σ : g → End(V) be a finite-dimensional representation and

V = V0 ⊃ V1 ⊃ ... ⊃ Vn = 0 be a Jordan-Hölder series for the g-module V. Suppose that the action of g on each quotient Vi/Vi+1 is via nilpotent operators. Then the action of g on V is via nilpotent operators. Proof. Let v ∈ V and let g ∈ g. Then the assumption gives us, for each j, an integer kj so that (σ(g))^{kj} Vj ⊂ Vj+1. Set k = k0 + k1 + ... + kn−1 and notice that (σ(g))^k v = 0. Thus each operator σ(g) is nilpotent.

Lemma 4.5 ([2, Lem. 2.5.2]). Let a be an ideal of g, b a vector subspace of g such that g = a + b, and σ : g → End(V) a finite-dimensional representation of g. Assume that σ(x) is nilpotent for all x ∈ a ∪ b. Then σ(x) is nilpotent for all x ∈ g. Proof. Using Lemma 4.3 we write

V = V0 ⊃ V1 ⊃ ... ⊃ Vn = 0, where each Vi is stable under the action of g and each quotient Vi/Vi+1 is simple. Each element of a acts nilpotently on Vi/Vi+1, so Lemma 4.2 implies that the action of a on Vi/Vi+1 is trivial; thus, letting σi denote the representation of g on Vi/Vi+1, we obtain σi(g) = σi(a) + σi(b) = σi(b). As the action of b on V is via nilpotent operators, the action of b on Vi/Vi+1 is also by nilpotent operators (this is the easy direction). Thus the action of g on Vi/Vi+1 is via nilpotent operators. Then Lemma 4.4 implies that the action of g on V is via nilpotent operators. Lemma 4.6 ([2, Lem. 2.5.3]). Let a be an ideal of g, b a Lie subalgebra of g such that g = a ⊕ b (vector space direct sum), π the left regular representation of a in U(a) (which is a Lie algebra homomorphism!), and φ the adjoint representation of g in U(a). The linear mapping ψ of g into End(U(a)) such that ψ|a = π and ψ|b = φ|b is a representation of g. Proof. The map ψ is well-defined and linear because of the vector space decomposition g = a ⊕ b. It's necessary to prove that ψ([x, y]) = [ψ(x), ψ(y)] for any x, y ∈ g. Since ψ is a representation when restricted to either of the summands in the direct sum decomposition, it's sufficient to show that ψ([x, y]) = [ψ(x), ψ(y)] if x ∈ a and y ∈ b. Let u ∈ U = U(a). Then

[ψ(x), ψ(y)]u = ψ(x)ψ(y)u − ψ(y)ψ(x)u = x(yu − uy) − (yxu − xuy) = xyu − yxu = [x, y]u = ψ([x, y])u, where in the last line we’ve used the fact that a is an ideal.

4.1 Nilpotency ideals

It's pretty standard to learn about the largest solvable ideal of a Lie algebra (the radical); the corresponding construction for nilpotent ideals is a little less standard. Lemma 4.7 ([2, Lem. 1.4.6]). Let a be an ideal of g, σ : g → End(V) a finite-dimensional representation, and V0 ⊃ V1 ⊃ ... ⊃ Vn a J-H series for V. The following are equivalent:

(i) for every x ∈ a the operator σ(x) is nilpotent;

(ii) for every x ∈ a and k = 0, . . . , n − 1 we have σ(x)Vk ⊂ Vk+1. Proof. That (ii) implies (i) is a result of Lemma 4.4. That (i) implies (ii) is a result of Lemma 4.2. Proposition 4.1 ([2, Prop. 1.4.7]). Let σ : g → End(V) be a finite-dimensional representation, and b(x, y) = Tr(σ(x) ◦ σ(y)) the bilinear form associated with σ. (i) Among the ideals a of g such that σ(x) is nilpotent for every x ∈ a there is a largest one, denoted n.

(ii) Let (V0, V1, . . . , Vn) be a J-H series of V and σi the representation induced on Vi/Vi+1. Then n = ∩_{i=0}^{n−1} ker σi. (iii) n is orthogonal to g with respect to b (i.e. b(x, y) = 0 if x ∈ n and y ∈ g). Proof. Let a be an ideal of g. From Lemma 4.7 it is clear that σ(x) is nilpotent for each x ∈ a if and only if a ⊂ ∩_{i=0}^{n−1} ker σi. This immediately gives that n = ∩_{i=0}^{n−1} ker σi. Claim: if x ∈ n and y ∈ g then σ(x)σ(y) is nilpotent. To verify this, we only need to check that σ(x)σ(y)Vi ⊂ Vi+1 for i = 0, . . . , n − 1. But σ(x)σ(y)Vi ⊂ σ(x)Vi ⊂ Vi+1, using the fact that Vi is stable under the action of g and that x ∈ n (so σ(x)Vi ⊂ Vi+1 by Lemma 4.7). This establishes that σ(x)σ(y) is nilpotent and so has trace equal to 0. Definition 4.2. The ideal n in Proposition 4.1 is termed the largest nilpotency ideal of σ. Proposition 4.2 ([2, Prop. 1.4.9]). Let n be the largest nilpotency ideal of the adjoint representation of g (on itself). Then n is the largest nilpotent ideal of g. It is orthogonal to g with respect to the Killing form. Proof. Let a be an ideal of g; first we establish that a is nilpotent (as a Lie algebra) if and only if for all x ∈ a the operator adg(x) ∈ End(g) is nilpotent. If a is nilpotent, then each ada(x) is nilpotent for x ∈ a, so for each x we can find k satisfying ada(x)^k = 0. Because a is an ideal, adg(x) maps g into a; hence adg(x)^{k+1} = 0. The other direction is just Engel's theorem (restricting the adjoint operators to a). Thus for an ideal a, nilpotency and adg-nilpotency coincide. The largest nilpotency ideal of the adjoint representation of g is therefore the largest nilpotent ideal of g. The last bit follows from the last bit of Prop 4.1. Lemma 4.8 ([2, Lem. 2.5.4]). Let a be an ideal of g, b a Lie subalgebra of g such that g = a ⊕ b, V a finite-dimensional vector space and ρ : a → End(V) a representation whose largest nilpotency ideal n contains [b, a].

(i) There exists a finite-dimensional representation σ : g → End(W) of g, whose largest nilpotency ideal contains n, such that ρ is a quotient representation of σ|a.

(ii) If, for all y ∈ b, adg y acts as a nilpotent operator on a, then we can choose σ so that, in addition, the largest nilpotency ideal of σ contains b.

Remark 4.2. A quotient representation means a representation of the form W/W′ for some a-submodule W′ ⊂ W. Alternatively, the range of a surjective a-module map W → Q. Proof of Lemma 4.8. As earlier, set U = U(a). Let V^1, . . . , V^r be sub-a-modules of V, with sum V, such that each V^i is generated as a U-module by a single element. The map V^1 ⊕ V^2 ⊕ ... ⊕ V^r → V given by (v1, . . . , vr) ↦ v1 + v2 + ... + vr is a surjective a-module map. Claim: it therefore suffices to prove the Lemma in the case where V is cyclic (Dixmier uses the term monogeneous). The direct sum of modules W^1 ⊕ ... ⊕ W^r has nilpotency ideal equal to the intersection n1 ∩ ... ∩ nr, where nk is the nilpotency ideal of W^k. If each of these contained n, then so too would the intersection. Furthermore, if V^k were a quotient representation of W^k for each k, then V^1 ⊕ ... ⊕ V^r would be a quotient of W^1 ⊕ ... ⊕ W^r. By the foregoing this would imply that V is a quotient of W^1 ⊕ ... ⊕ W^r. The condition (ii) is similarly satisfied when the adjoint action of b restricted to a is nilpotent. This establishes the claim. Henceforth require that V is cyclic as a U-module, with v ∈ V a cyclic vector. Let I be the kernel in U of the representation ρ; as U/I ≅ ρ(U) ⊂ End(V), and V is finite-dimensional, we see that I has finite codimension. Equip U with the left regular representation a-module structure. Consider the map U → V given by u ↦ uv. This map vanishes on I, so we obtain a well-defined map U/I → V; this map is surjective because v is cyclic for the U-module V. Thus V is a quotient of the a-module U/I. Let (V0, V1, . . . , Vn) be a J-H series for the a-module V. Let ρi be the representation of a (via ρ) induced on Vi/Vi+1. Let I′ be the intersection of the kernels in U of all the ρi. Claim: I′^n ⊂ I ⊂ I′ and I′ ∩ a = n. Proof of claim: let x = j1 j2 . . . jn be a spanning element in I′^n, so that each jk ∈ I′. Then j1 j2 . . . jn V0 ⊂ j1 j2 . . . jn−1 V1 ⊂ j1 j2 . . . jn−2 V2 ⊂ ..., so that eventually we get xV0 ⊂ Vn = 0. Thus x ∈ I. The fact that I ⊂ I′ is immediate: if ρ(x) = 0, then ρi(x) = 0 on each quotient representation Vi/Vi+1. If x ∈ n, then x belongs to a by definition and it belongs to I′ by Proposition 4.1, so that n ⊂ a ∩ I′. The converse is basically the same, so the claim is established. Equip U with the g-module structure of Lemma 4.6, so that for x ∈ a, y ∈ b, and u ∈ U, we have x.u = xu and y.u = yu − uy. If x ∈ b, then φ(x) (the adjoint action of g on U) is a derivation of U carrying a into [b, a]. Moreover, by hypothesis, [b, a] ⊂ n and hence [b, a] ⊂ I′ by the previous claim. We also have that I′ and I′^n are invariant under the action of φ(x): if y ∈ I′ and

i = 0, . . . , n − 1 then

(xy − yx)Vi ⊂ xVi+1 + yVi ⊂ Vi+1.

Hence xy − yx belongs to ker ρi for all i, implying that I′ is invariant under the action of φ(x). The fact that φ(x) is a derivation then implies that I′^n is invariant under φ(x) as well. Since the kernel of each ρi in U is an a-module, we get that I′ is stable under both a and b, hence under all of g. Similarly I′^n is a sub-g-module of U. Then the quotient module U/I′^n hosts a representation σ of g, which is finite-dimensional: as I ⊂ I′ and U/I is finite-dimensional, I′ has finite codimension, so Lemma 4.1 implies that I′^n has finite codimension. If x ∈ I′ ∩ a = n, then x^n U ⊂ I′^n. This means that σ(x^n) = σ(x)^n acts as zero on the module U/I′^n. So σ(x) is a nilpotent operator, and n is contained in the largest nilpotency ideal of σ. The inclusion I′^n ⊂ I implies that we have surjective a-module maps U/I′^n → U/I → V. Thus V is a quotient representation of σ|a. This establishes part (i) of the Lemma. Now suppose that adg y|a is nilpotent for every y ∈ b. Claim: for all y ∈ b the operator φ(y) acts locally nilpotently on U. This is due to the fact that if D : a → a is a nilpotent derivation, then the canonical extension to a derivation D′ : U → U is locally nilpotent. This fact is a little tedious to prove, but should be plausible from [2, Prop. 2.4.9] and an induction argument. Claim: σ(y) is nilpotent. This follows because σ(y) acts via φ(y) on the finite-dimensional module U/I′^n, and a locally nilpotent transformation of a finite-dimensional module must be nilpotent. By the equation n = I′ ∩ a and the hypothesis we obtain [b, n] ⊂ [b, a] ⊂ n, so b + n is a Lie subalgebra of g with n as an ideal. Since σ acts by nilpotent operators on both n and b, Lemma 4.5 (applied to the Lie algebra b + n) implies that σ(y + n) is nilpotent for all y ∈ b and n ∈ n. This shows that b + n acts via nilpotent operators under σ; to show that it is contained in the largest nilpotency ideal for σ, we must show that it is an ideal. The equation g = a + b implies that

[g, b + n] ⊂ [b, b] + [b, a] + [a, n] ⊂ b + n, where we've used the fact that [b, n] ⊂ [b, a] ⊂ n as well as the ideal property of n. This verifies that b + n is an ideal of g, and since we have already verified that it acts via nilpotent operators under σ, it must be contained in the largest nilpotency ideal of σ. The following theorem seems like a technical refinement of Ado's theorem (every finite-dimensional Lie algebra has a faithful finite-dimensional representation), but Dixmier cites it to Bourbaki, Harish-Chandra, and Jacobson (shrug emoji).

Remark 4.3. The proof proceeds in stages: first you produce the representation for the center c of g, then you extend to n (the largest nilpotent ideal), then you extend to r (the largest solvable ideal), then you invoke Levi's theorem to extend it to all of g. Thennn you show that this is faithful on the center, and then you direct sum with the adjoint representation to get something that's faithful on the entire algebra. Theorem 4.2 (Levi decomposition, [2, Thm. 1.6.9]). Let r be the radical of g. Then there exists a Lie subalgebra s of g such that g = r ⊕ s as a vector space direct sum. We also need a little extra tidbit, which we've rephrased a bit. Lemma 4.9 ([2, Prop. 1.7.1]). If g is a Lie algebra and r is its radical, then [g, g] ∩ r = [g, r], and the ideal [g, r] is nilpotent. Proof of Theorem 4.1. The 1-dimensional Lie algebra k has the representation

λ ↦ [ 0 0 ]
    [ λ 0 ].

This is injective and every operator acts nilpotently. Thus if c ⊂ g is the center, we can produce a faithful finite-dimensional representation of c via nilpotent operators by direct summing suitably many copies of this representation together. Refer to this representation as φ. The inclusion of the ideal c ⊂ n is immediate as abelian Lie algebras are nilpotent; if we take a J-H series for n/c then all the subquotients Vi/Vi+1 must be 1-dimensional as they are simple modules under the action of a nilpotent Lie algebra (see Engel's theorem). Lift this under the correspondence between ideals of n/c and ideals of n that contain c to obtain a chain

c = n0 ⊂ n1 ⊂ ... ⊂ np = n, and moreover each ni has codimension 1 in ni+1. Each ni is the vector space sum of ni−1 and a 1-dimensional subspace (any 1-dimensional subspace is automatically a Lie subalgebra). We have a faithful representation φ of c via nilpotent operators, acting on Vc say. Since c is an ideal of n1, and all the other conditions of Lemma 4.8 are satisfied, we can extend φ to an action of n1 on some Vn1 which surjects onto Vc via a c-module map. Repeat this over and over to obtain a finite-dimensional representation ψ of n via nilpotent operators so that φ is a quotient of ψ|c. (The possibly non-obvious bit to verify among the hypotheses of Lemma 4.8 is to show that the action of ni on ni−1 is nilpotent, but this follows from the fact that n is nilpotent.)

n = r0 ⊂ r1 ⊂ ... ⊂ rq = r

so that each ri/ri−1 has dimension 1 (by Lie's theorem). Just as in the previous paragraph, each algebra ri is the (vector space) direct sum of ri−1 and a 1-dimensional Lie subalgebra. The equation [r, r] ⊂ n ensures that the hypotheses of part (i) of Lemma 4.8 are again satisfied for each i = 0, . . . , q − 1. So we can extend ψ, step by step, to a representation τ of r such that τ(n) consists of nilpotent operators and such that ψ is a quotient of τ|n. Now we can take a Levi decomposition (Theorem 4.2) g = r ⊕ s. The tidbit implies that [g, r] is a nilpotent ideal and hence contained in n; hence the subalgebra [s, r] is contained in n as well. Invoking Lemma 4.8 gives a representation σ of g such that every element of σ(n) is nilpotent and such that τ is a quotient of σ|r. Let V1 be the space for φ, V2 the space for ψ, V3 the space for τ, and V4 the space for σ. Then we obtain a chain of surjective c-morphisms V4 → V3 → V2 → V1. If x ∈ c is nonzero, then xV4 = 0 would imply that xV1 = 0, contrary to the assumption that φ acts faithfully upon V1. Thus σ|c is faithful. Now let ρ = σ ⊕ adg, which is a direct sum of finite-dimensional representations and hence finite-dimensional. The kernel of ρ is equal to ker σ ∩ ker adg, but ker adg = c and σ|c is faithful. Hence ρ is injective. Corollary 4.1. Let g be a finite-dimensional Lie algebra. Then there is a faithful finite-dimensional representation via traceless operators ρ : g → sl(V). Proof. Let σ : g → End(W) be a faithful finite-dimensional representation from Theorem 4.1. The map σ′ defined via g ∋ x ↦ −TrW(σ(x)) is a (one-dimensional) Lie algebra representation (this boils down to the observation that the trace of a commutator is zero). Now set ρ = σ ⊕ σ′; then the trace of ρ(x) is equal to 0 for each x ∈ g.
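
Concretely, ρ(x) acts on W ⊕ C as the block operator σ(x) ⊕ (−TrW σ(x)), so Tr ρ(x) = TrW σ(x) − TrW σ(x) = 0, while the injectivity of σ forces the injectivity of ρ.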

5 Main result: residual finiteness of U(g)

Now we finally arrive at the theorem that we were originally trying to figure out. I've paraphrased it from Dixmier's phrasing to obtain the version that Luminet uses. Theorem 5.1 ([2, Thm. 2.5.7]). If g is a finite-dimensional Lie algebra, and U^d(g) is the set of elements of U = U(g) that are symmetric homogeneous of degree d, then there is a finite-dimensional representation π of g such that π is injective when restricted to U^d(g). The following is not stated as a separate result in Dixmier, but I find it helpful to isolate it.

Lemma 5.1. Let V be a finite-dimensional vector space and let (x0, x1, . . . , xp) be a basis for End(V) such that x0 = 1 (the identity transformation). For an integer d ≥ 1, set Vd = V ⊗ ... ⊗ V (d-fold).

(i) The tensors xi1 ⊗ xi2 ⊗ ... ⊗ xid, where 0 ≤ i1, i2, . . . , id ≤ p, form a basis for End(Vd).

(ii) If we take F to be the vector subspace of End(Vd) spanned by all tensors of the form u1 ⊗ ... ⊗ ud, where u1, . . . , ud ∈ End(V) and at least one uk = 1, then the tensors of the form xi1 ⊗ xi2 ⊗ ... ⊗ xid, where 0 ≤ i1, i2, . . . , id ≤ p and at least one ik = 0, form a basis for F.

(iii) There is a vector space direct sum decomposition End(Vd) = F ⊕ F′, where F′ has basis consisting of all tensors xi1 ⊗ xi2 ⊗ ... ⊗ xid such that no ik = 0. Proof. Part (i) is fairly standard, by dimension counting. Parts (ii) and (iii) are implied immediately by part (i). Now we can give a fairly easy proof of Theorem 5.1. Recall that if V and W are g-modules, then V ⊗ W becomes a g-module via g.(v ⊗ w) = (gv) ⊗ w + v ⊗ (gw). Proof of Theorem 5.1. By Corollary 4.1 and the fact that an inclusion g ⊂ h induces U^d(g) ⊂ U^d(h), we can assume that g = sl(V) for a finite-dimensional vector space V. Let (x0, x1, . . . , xp) be a basis for gl(V) (just End(V)) such that x0 = 1 (the identity operator) and (x1, . . . , xp) is a basis for sl(V). The proof is by induction on d. When d = 0, we can take the trivial representation of g on k. In this case U^0(g) = U0(g) = k, and the corresponding representation of U(g) restricted to U0(g) is the identity. So assume that for each k < d we can produce a finite-dimensional representation π′ such that π′(u′) ≠ 0 for all nonzero u′ ∈ Uk(g). Take u ∈ Ud(g) \ Ud−1(g); then Proposition 3.1 implies that u = u1 + u2 where u1 ∈ U^d(g) and u2 ∈ Ud−1(g) (recall that the former subspace is spanned by canonical images of symmetric homogeneous tensors of degree d, the latter by images of tensors of degree ≤ d − 1). Moreover, u1 ≠ 0, and (allowing Id to be the set of all non-decreasing elements of {1, . . . , p}^d) we can write

u1 = Σ_{i∈Id} αi1,...,id ( Σ_{τ∈Sd} xiτ(1) xiτ(2) . . . xiτ(d) ).

This last equation is just the statement that u1 is the image of a symmetric homogeneous tensor of degree d. Let σ be the identity representation of g = sl(V) on V. Set σd = σ ⊗ ... ⊗ σ (d-fold). If x ∈ g, then

σd(x) = (x ⊗ 1 ⊗ ... ⊗ 1) + (1 ⊗ x ⊗ ... ⊗ 1) + ... + (1 ⊗ ... ⊗ 1 ⊗ x).

Moreover, if x1 . . . xk is a generating monomial in Ud−1 then, using the fact that σd induces an algebra representation of U(g) , we have

σd(x1 . . . xk) = σd(x1)σd(x2) . . . σd(xk).
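
For instance, with d = 2 and x, y ∈ g, we have σ2(x) = x ⊗ 1 + 1 ⊗ x ∈ F, while σ2(xy) = σ2(x)σ2(y) = xy ⊗ 1 + x ⊗ y + y ⊗ x + 1 ⊗ xy, which is congruent to x ⊗ y + y ⊗ x modulo F. This is the "covering" phenomenon exploited in the next two paragraphs.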

If u2 ∈ Ud−1, we get by expanding using the distributive property many times that σd(u2) ∈ F. To see this, note that in order to "cover up" all the copies of 1 in each summand of σd(x) one has to multiply each indeterminate by at least d − 1 other terms. There aren't enough indeterminates in an element of Ud−1(g) for this to happen. Claim: σd(u1) ∉ F. To see this, note that (again using the distributive property many times) we have that

σd(xi1 . . . xid) = Σ_{τ∈Sd} xiτ(1) ⊗ xiτ(2) ⊗ ... ⊗ xiτ(d) + f,

where f ∈ F. To see this, note that "covering" all the 1's in the various σd(xik) requires exactly one term from each σd(xik) – hence the permutations – and everything else that is multiplied out is an element of F because not all the 1's have been covered. If π ∈ Sd, then we see by re-indexing that

σd(xiπ(1) xiπ(2) . . . xiπ(d)) = Σ_{τ∈Sd} xiτ(1) ⊗ xiτ(2) ⊗ ... ⊗ xiτ(d) + fπ,

where fπ ∈ F. Thus if we sum over all such π we obtain

σd( Σ_{π∈Sd} xiπ(1) xiπ(2) . . . xiπ(d) ) = d! Σ_{τ∈Sd} xiτ(1) ⊗ xiτ(2) ⊗ ... ⊗ xiτ(d) + f′,

where f′ ∈ F. The tensors of the form xiτ(1) ⊗ xiτ(2) ⊗ ... ⊗ xiτ(d) are all basis elements of F′ (using the terminology of Lemma 5.1), and as we are working in characteristic zero, we see that for any sequence I = (i1 ≤ i2 ≤ ... ≤ id) the element zI := σd( Σ_{π∈Sd} xiπ(1) xiπ(2) . . . xiπ(d) ) is a nonzero element modulo F. If I1, . . . , It is a collection of distinct such sequences, then the corresponding elements zI1, . . . , zIt will be linearly independent modulo F. This last bit follows because different sequences Ik will contribute different basis tensors xiτ(1) ⊗ ... ⊗ xiτ(d) to the various zIk. (I know this last bit is a little hand-wavy, but it's hard to write down precisely.) Finally we can use

u1 = Σ_{i∈Id} αi1,...,id ( Σ_{τ∈Sd} xiτ(1) xiτ(2) . . . xiτ(d) )

and the fact that the set of all zI is linearly independent modulo F to see that σd(u1) ∉ F, hence σd(u) = σd(u1) + σd(u2) ∉ F, and σd(u) ≠ 0. Since every nonzero element of U^d(g) lies in Ud(g) \ Ud−1(g), this shows that σd is injective on U^d(g); taking π = σd ⊕ π′, with π′ the representation from the inductive hypothesis, even produces a representation whose kernel meets Ud(g) trivially.

References

[1] Bourbaki, N. Algèbre commutative: Chapitres 8 et 9. Elements of Mathematics. Springer, 1972.

[2] Dixmier, J. Enveloping Algebras. Graduate Studies in Mathematics, Vol. 11. American Mathematical Society, 1977 (repr. 1996).

[3] Garrett, P. Symmetrization maps and differential operators. http://www-users.math.umn.edu/~garrett/m/v/symmetrization.pdf

[4] Humphreys, J. Introduction to Lie Algebras and Representation Theory. Graduate Texts in Mathematics. Springer, 1974.

[5] McConnell, J. and Robson, J. Noncommutative Noetherian Rings. Graduate Studies in Mathematics. American Mathematical Society, 2001.
