
DIFFERENTIAL OPERATORS ON HOMOGENEOUS SPACES

L. BARCHINI AND R. ZIERAU

Abstract. These are notes for lectures given at the ICE-EM Australian Graduate School in Mathematics in Brisbane, July 2-20, 2007.

Introduction

The real flag manifolds are an interesting family of smooth manifolds. Among the real flag manifolds are the spheres, projective spaces and Grassmannians. Each is a compact homogeneous space G/P for a reductive Lie group G; see Lecture 3 for a definition. There are (at least) two reasons the real flag manifolds are particularly important. One is that they are flat models for certain geometries, such as conformal geometry and contact geometry. Another is that the standard representations of a reductive Lie group occur as spaces of sections of homogeneous vector bundles over G/P. In both situations differential operators between sections of bundles over G/P play an important role; typically systems of such operators will have some G-equivariance properties. The goal of these lectures is to give some methods for constructing and studying interesting (families of) differential operators.

The course begins with four lectures on the structure of reductive Lie groups and Lie algebras, and their finite dimensional representations. It is assumed that the student has had an introductory course on Lie groups and understands the basics of manifold theory. The classical groups, such as GL(n, R) and SO(p, q), are examples of reductive Lie groups. Using some linear algebra, much of the structure theory presented is easily understood for these groups. In fact students may wish to focus entirely on the group GL(n, R) (except in Lectures 13 and 14). Lectures 5-7 discuss homogeneous vector bundles over homogeneous spaces, and differential operators between sections of homogeneous bundles. Here the group G is arbitrary. Typically an interesting space of differential operators will be expressed in terms of the enveloping algebra, where certain questions about differential operators are transformed into questions in algebra.

The remaining lectures are about differential operators between spaces of sections of homogeneous vector bundles over real flag manifolds. There are two important points which are made. One is that there are explicit correspondences between (a)

G-invariant differential operators (on homogeneous bundles), (b) homomorphisms between Verma modules, and (c) conformally invariant systems of differential operators (as defined in Lecture 8). The second is that the existence of G-invariant differential operators (and therefore the existence of homomorphisms of Verma modules and of conformally invariant systems) is somewhat special, occurring only for certain bundles. Thus, constructing G-invariant differential operators is a nontrivial problem. The lectures show invariant theory is useful in this construction.

Lectures 8 and 11 contain the definition of conformally invariant systems as well as some of the theory relating these systems to the notions of G-invariant differential operators and Verma modules. Lectures 10 and 12 work out examples of conformally invariant systems when P is of abelian type, i.e., when P has abelian nilradical. Lecture 9 gives the necessary invariant theory for these examples. Lectures 13 and 14 give an introduction to the construction of conformally invariant systems when P is of Heisenberg type. The final lecture discusses a conjecture of Gyoja on reducibility of Verma modules (hence, on the existence of G-invariant differential operators and conformally invariant systems).

We thank the ICE-EM for the opportunity to present these lectures. We enjoyed working with all of the students in the course.

LECTURE 1. Reductive groups

The groups we will consider are real and complex reductive Lie groups. The ‘classical groups’, such as GL(n, C) and SO(n), are typical examples. We will begin by giving the definitions of many of the classical groups and some of their important properties. Some general facts about Lie groups and Lie algebras will be recalled. Finally, a precise definition of a real reductive Lie group will be given.

1.1. Classical groups. Let us begin with the general linear group GL(V). Here V is a real or complex vector space and GL(V) is the group of invertible linear transformations of V to itself. Suppose that the dimension of V is n. Choose a basis B = {e_1, . . . , e_n} of V. Then the matrix of g ∈ GL(V) with respect to B is the matrix (g_{ij}) satisfying g(e_j) = Σ_i g_{ij} e_i. This provides a bijection

(1.1) φ : GL(V) → GL(n, F) ≡ {A ∈ M_{n×n}(F) : det(A) ≠ 0},

(F = R or C, depending on whether V is a vector space over R or C). Of course, φ is a group isomorphism. The bijection φ allows us to define a topology on GL(V) as follows. Give M_{n×n}(F) the topology from the norm

||X||^2 = Σ_{i,j} |x_{ij}|^2.

Since the determinant function is continuous, GL(n, F) is an open set in F^{n^2}. We may define open sets in GL(V) as those sets U for which φ(U) is an open set in F^{n^2}. It follows (since matrix multiplication is continuous, in fact a polynomial) that GL(V) is a topological group with respect to this topology. Similarly, GL(V) is a differentiable manifold for which multiplication is a smooth map. So GL(V) is a Lie group. This definition of the topology and differentiable structure is independent of the chosen basis (why?). The Lie algebra of GL(V) may be identified with gl(V), the set of linear transformations of V to itself, with the bracket operation [A, B] = AB − BA. See Exercise 1.1. The exponential map is the unique smooth function exp : gl(V) → GL(V) with the property that for each X ∈ gl(V), exp((t + s)X) = exp(tX) exp(sX) and (d/dt) exp(tX)|_{t=0} = X. Therefore,

exp(X) = Σ_{k=0}^{∞} X^k/k!,

an absolutely convergent series in M_{n×n}(F). When F = C, GL(V) is identified with an open set in C^{n^2} and multiplication is holomorphic. Therefore, GL(V) is a complex Lie group. Note that the Lie algebra is a complex Lie algebra. In view of the fact that any closed subgroup H of a Lie group is a Lie group, many other Lie groups are easily obtained from GL(V). Furthermore, the Lie algebra of H is

(1.2) Lie(H) ≡ h = {X ∈ g : exp(tX) ∈ H, for all t ∈ R}.

(See [6, Ch. II, §2].) A simple instance of this is the case of the special linear group SL(V) (resp. SL(n, F)). This is the group of linear transformations (resp. matrices) of determinant equal to one. The Lie algebra is sl(V) (resp. sl(n, F)), consisting of linear transformations (resp. matrices) of trace zero. The orthogonal groups are defined in terms of symmetric bilinear forms on vector spaces. Again let us consider a real or complex vector space V, and let b be any nondegenerate symmetric bilinear form on V. Then the orthogonal group of b is

O(V, b) ≡ {g ∈ GL(V ): b(gv, gw) = b(v, w), for all v, w ∈ V }.

The Lie algebra is

o(V, b) = {X ∈ gl(V) : b(Xv, w) + b(v, Xw) = 0, for all v, w ∈ V},

with the bracket operation being the commutator. This may be checked by applying (1.2) as follows. Assuming exp(tX) ∈ O(V, b), taking the derivative (at t = 0) of b(exp(tX)v, exp(tX)w) = b(v, w) gives b(Xv, w) + b(v, Xw) = 0. Conversely, if b(Xv, w) + b(v, Xw) = 0, then the curve c(s) = b(exp(sX)v, exp(sX)w) satisfies

c′(s) = (d/dt) b(exp((s + t)X)v, exp((s + t)X)w)|_{t=0}
      = b(X exp(sX)v, exp(sX)w) + b(exp(sX)v, X exp(sX)w)
      = 0.

Therefore, c(s) is constant, so c(s) = c(0) = b(v, w) for all s. Therefore, exp(sX) ∈ O(V, b).
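This correspondence between o(V, b) and O(V, b) is easy to test numerically. The following sketch (not part of the original notes) uses numpy and scipy; the form B = I_{2,1} and the random skew matrix S are illustrative choices, and X = B^{-1}S is a standard way to manufacture elements of o(3, B).

```python
# Check: if X^t B + B X = 0 then exp(tX)^t B exp(tX) = B for all t.
import numpy as np
from scipy.linalg import expm

B = np.diag([1.0, 1.0, -1.0])            # the form I_{2,1}, an illustrative choice

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))
S = S - S.T                              # skew-symmetric
X = np.linalg.inv(B) @ S                 # then X^t B + B X = -S + S = 0

assert np.allclose(X.T @ B + B @ X, 0)   # X lies in o(3, B)

for t in [0.1, 1.0, 2.5]:
    g = expm(t * X)
    assert np.allclose(g.T @ B @ g, B)   # g preserves the bilinear form
print("exp(tX) preserves b for X in o(V, b)")
```

The same check works verbatim for any nondegenerate symmetric B.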

Recall that the matrix of a bilinear form with respect to a basis B = {e_1, . . . , e_n} is the matrix B = (b_{ij}) where b(e_i, e_j) = b_{ij}. Since b is symmetric and nondegenerate, the matrix B is symmetric and has nonzero determinant. (Also, any such matrix determines a symmetric nondegenerate bilinear form by the formula b(e_i, e_j) = b_{ij}.) Under the identification of GL(V) with matrices (using B) as in (1.1), the orthogonal groups (and Lie algebras) are realized as matrices by

O(n, B) = {g ∈ GL(n, F) : g^t B g = B}
o(n, B) = {X ∈ gl(n, F) : X^t B + BX = 0}.

It follows from this that the orthogonal groups are closed subgroups of GL(n, F). A fact about bilinear forms ([4, §31], [8, §6.3]) is that for each nondegenerate symmetric bilinear form there is a basis so that the matrix B is

B = I_n, if F = C, and

B = I_{p,q} ≡ \begin{pmatrix} I_p & 0 \\ 0 & -I_q \end{pmatrix}, with p + q = n, if F = R.

Here I_m denotes the m×m identity matrix. The corresponding groups are denoted by O(n, C) and O(p, q), and similarly for the Lie algebras. Imposing the further condition that the determinant be equal to one defines the special orthogonal groups SO(V, b), SO(n, C) and SO(p, q). The corresponding Lie algebras are so(V, b), etc. One may check (for example, by using the following block form of the matrices in the Lie algebra) that so(V, b) = o(V, b).

Example 1.3. In block form the Lie algebra so(p, q) is

\left\{ \begin{pmatrix} A & B \\ B^t & D \end{pmatrix} : A ∈ Skew_p(R), D ∈ Skew_q(R) and B ∈ M_{p×q}(R) \right\}.

Example 1.4. When (p, q) = (n, 0) the orthogonal group is denoted by O(n). This is a compact group. To see this note that g^t g = I_n implies that ||g||^2 = Σ g_{ij}^2 = n, so O(n) is contained in the sphere of radius √n in R^{n^2}. Since any orthogonal group is closed, O(n) is compact.

The definition of the symplectic group is similar to that of the orthogonal groups. For this, let ω be a nondegenerate antisymmetric (skew) bilinear form on V. The dimension of V is necessarily even; we will denote it by 2n. Then the symplectic group is the following closed subgroup of GL(V):

Sp(V, ω) = {g ∈ GL(V) : ω(gv, gw) = ω(v, w), for all v, w ∈ V}.

The Lie algebra is

sp(V, ω) = {X ∈ gl(V) : ω(Xv, w) + ω(v, Xw) = 0, for all v, w ∈ V}.

For either F = R or C there is a basis of V for which the matrix of ω is

J = \begin{pmatrix} 0 & I_n \\ -I_n & 0 \end{pmatrix}.

(See [8, §6.2].) Then the corresponding group and Lie algebra are

Sp(2n, F) = {g ∈ GL(2n, F) : g^t J g = J}
sp(2n, F) = {X ∈ gl(2n, F) : X^t J + JX = 0}
          = \left\{ \begin{pmatrix} A & B \\ C & -A^t \end{pmatrix} : A ∈ gl(n, F), B, C ∈ Sym_n(F) \right\}.
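As with the orthogonal case, the block description can be verified directly. A minimal numpy check, with n = 2 and random entries (all illustrative choices, not from the notes):

```python
# Check: (A  B; C  -A^t) with B, C symmetric satisfies X^t J + J X = 0.
import numpy as np

n = 2
rng = np.random.default_rng(2)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n)); B = B + B.T      # B in Sym_n(R)
C = rng.standard_normal((n, n)); C = C + C.T      # C in Sym_n(R)

X = np.block([[A, B], [C, -A.T]])
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])
assert np.allclose(X.T @ J + J @ X, 0)
print("block matrix lies in sp(2n, R)")
```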

The unitary groups are the symmetry groups of hermitian forms. Recall that a hermitian form on a complex vector space V is a map h : V × V → C which is linear in the first variable and satisfies h(v, w) = \overline{h(w, v)}. Then

U(V, h) = {g ∈ GL(V) : h(gv, gw) = h(v, w), all v, w}
u(V, h) = {X ∈ gl(V) : h(Xv, w) + h(v, Xw) = 0, all v, w}.

It is a fact that for each hermitian form there is a basis of V so that the matrix is I_{p,q}, for some p and q (with p + q = n). It follows that any unitary group is isomorphic to one of

U(p, q) = {g ∈ GL(n, C) : \bar{g}^t I_{p,q} g = I_{p,q}}.

The Lie algebra is

u(p, q) = {X ∈ gl(n, C) : \bar{X}^t I_{p,q} + I_{p,q} X = 0}.

The special unitary groups (resp. Lie algebras) have the additional requirement that the determinant is one (resp. trace is zero). These are denoted by SU(p, q) and su(p, q). In each of the examples above the Lie algebras are nearly simple. Recall that a Lie algebra g is simple means that g has no ideals other than g and {0}, and dim(g) > 1. For example, gl(n, F) = sl(n, F) ⊕ z, where z = {zI_n : z ∈ F} is the center and sl(n, F) is simple.

Definition 1.5. A Lie algebra g is semisimple if and only if g is the direct sum of ideals each of which is a simple Lie algebra. When g is a direct sum of ideals g_{ss} and z, with g_{ss} semisimple and z abelian (and thus z equals the center of g), then g is called reductive.

Although it is not immediate, each of the classical Lie algebras is reductive.

1.2. Cartan involutions and maximal compact subgroups. Of crucial importance in the representation theory of reductive Lie groups is the notion of a maximal compact subgroup. One reason for this is that the representation theory of compact Lie groups is fairly simple. In particular, each representation (finite or infinite dimensional) of a compact group is the direct sum of irreducible representations and every irreducible representation is finite dimensional. Furthermore, the irreducible representations of compact groups are easily parameterized and described. This is far from the situation for noncompact groups. The maximal compact subgroups of the classical groups are easily described. To do this we use the notion of Cartan involutions of the group G and its Lie algebra g. Define Θ : G → G by Θ(g) = (\bar{g}^t)^{-1} (which is just (g^t)^{-1} for the real groups) for any of the groups GL(n, F), SL(n, F), O(p, q), SO(p, q), Sp(n, F), U(p, q) and SU(p, q). (Note that for these groups Θ(g) is indeed in G when g ∈ G. Check this.) Now let θ : g → g be the differential of Θ. That is, θ(X) = −\bar{X}^t. We refer to Θ and θ as Cartan involutions. Since Cartan involutions have order two, g is the direct sum of the +1 and −1 eigenspaces of θ. Let us write this decomposition as g = k ⊕ s, with

k ≡ {X ∈ g : θ(X) = X} and s ≡ {X ∈ g : θ(X) = −X}.

Here are some easily checked facts.

(1) Θ is a group homomorphism and θ is a Lie algebra homomorphism. (2) k is a subalgebra of g. (3) [k, s] ⊂ s and [s, s] ⊂ k. (4) K ≡ {g ∈ G : Θ(g) = g} is a Lie subgroup with Lie algebra k. (5) K is compact.

For G = GL(n, R), K = O(n) and s = Sym_n(R). A fact from linear algebra is that any invertible matrix may be written in a unique way as the product of an orthogonal matrix and a positive definite matrix (with the orthogonal matrix first). This is called the polar decomposition of the matrix. See [4, §32] for a proof. A slightly stronger statement is that

φ : O(n) × Sym_n(R) → GL(n, R), φ(k, X) = k exp(X)

is a diffeomorphism.
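This map can be inverted numerically. The sketch below is an illustration under stated assumptions, not part of the original notes: it uses scipy.linalg.polar and scipy.linalg.logm to recover k and X from a random g ∈ GL(4, R).

```python
# Polar/Cartan decomposition of GL(n, R): g = k exp(X), k orthogonal, X symmetric.
import numpy as np
from scipy.linalg import polar, logm, expm

rng = np.random.default_rng(3)
g = rng.standard_normal((4, 4))
assert abs(np.linalg.det(g)) > 1e-8      # g lies in GL(4, R) (generically true)

k, pdef = polar(g, side="right")         # g = k @ pdef, pdef positive definite
X = logm(pdef).real                      # symmetric logarithm of pdef

assert np.allclose(k.T @ k, np.eye(4))   # k in O(4)
assert np.allclose(X, X.T)               # X in Sym_4(R)
assert np.allclose(k @ expm(X), g)       # g = k exp(X)
print("polar decomposition recovered: g = k exp(X)")
```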

Corollary 1.6. O(n) is a maximal compact subgroup of GL(n, R) in the sense that it is not contained in another compact subgroup of GL(n, R).

Proof. Suppose that O(n) ⊂ K′, with K′ compact. Let k′ ∈ K′ \ O(n). Write k′ = k exp(X), with X ∈ Sym_n(R), X ≠ 0. Then exp(X) ∈ K′. We claim that exp(mX) cannot stay in a compact set for all m ∈ Z. This will give a contradiction. To prove the claim, note that X has real eigenvalues (as X is symmetric), at least one of which is nonzero. Let λ be a nonzero eigenvalue and v_λ the corresponding eigenvector. The eigenvalues of exp(mX) are e^{mλ}. Since λ ≠ 0, e^{mλ} → ∞ as m → ∞ (or as m → −∞). Now ||exp(mX)v_λ|| → ∞ in R^n. But by the compactness of K′, {gv_λ : g ∈ K′} is bounded, and we arrive at a contradiction. □

Continuing with the groups GL(n, F),SL(n, F),O(p, q),SO(p, q), Sp(n, F), U(p, q) and SU(p, q), we may define a bilinear form on g by

T(X, Y) = Trace(XY).

This form may be referred to as the trace form of g. Then T is Ad-invariant in the sense that T(Ad(g)X, Ad(g)Y) = T(X, Y), for all X, Y ∈ g. Also, for X ≠ 0, T(θ(X), X) = −Trace(\bar{X}^t X) ≠ 0, so T is nondegenerate. Furthermore, we have the following facts.

(1) k and s are orthogonal with respect to T.

(2) T|_{k×k} is negative definite and T|_{s×s} is positive definite.

Each of these may easily be checked.
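For instance, for g = gl(n, R), where θ(X) = −X^t, k = Skew_n(R) and s = Sym_n(R), both facts can be tested numerically. The following sketch (with n = 5 and a random matrix, illustrative choices only) checks them on one element of each eigenspace.

```python
# Trace form on gl(n, R): negative definite on skew matrices, positive on symmetric,
# with the two eigenspaces orthogonal.
import numpy as np

rng = np.random.default_rng(4)
Z = rng.standard_normal((5, 5))
K = Z - Z.T                               # an element of k = Skew_5(R)
S = Z + Z.T                               # an element of s = Sym_5(R)

trace_form = lambda X, Y: np.trace(X @ Y)
assert trace_form(K, K) < 0               # T negative definite on k
assert trace_form(S, S) > 0               # T positive definite on s
assert np.isclose(trace_form(K, S), 0)    # k and s are orthogonal under T
print("trace form facts verified on gl(5, R)")
```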

1.3. Real reductive groups. We will now give a general definition of a real reductive group.

Definition 1.7. A real reductive group is a quadruple (G, K, θ, κ) with G a Lie group, K a compact subgroup, θ : g → g a Lie algebra homomorphism with θ^2 = I and κ a nondegenerate, symmetric, Ad-invariant bilinear form, satisfying

(1) g is the Lie algebra of G and is reductive, (2) K has Lie algebra k and g = k ⊕ s (vector space direct sum of the ±1 eigenspaces with respect to θ),

(3) k and s are orthogonal with respect to κ, κ|k×k is negative definite and κ|s×s is positive definite, (4) φ : K × s → G, φ(k, X) = k exp(X) is a diffeomorphism.

EXERCISES

(1.1) Prove the claim that the Lie algebra of SL(n, F) is sl(n, F). You will need to compute (d/dt) det(exp(tX))|_{t=0}.

(1.2) Suppose that B and B′ are symmetric nondegenerate matrices and there is an invertible matrix Q so that Q^t B Q = B′. Prove that O(n, B) and O(n, B′) are isomorphic Lie groups (so the Lie algebras so(n, B) and so(n, B′) are isomorphic). When

B = \begin{pmatrix} 0 & I_n \\ I_n & 0 \end{pmatrix},

for which p, q is O(2n, B) isomorphic to O(p, q)? Write the Lie algebra so(2n, B) in block form as in Example 1.3 and explicitly write down the isomorphism so(2n, B) ≃ so(p, q).

(1.3) Check that U(n) ≡ U(n, 0) is compact.

(1.4) Let g be a reductive Lie algebra and let κ be an Ad-invariant symmetric form as in the definition of a real reductive group. Define ⟨X, Y⟩ = −κ(X, θ(Y)). Prove that ⟨ , ⟩ is positive definite on g, is invariant under Ad(K), and ad(X) is a symmetric (resp. skew symmetric) operator when X ∈ s (resp. X ∈ k).

(1.5) Let T be the trace form on one of the classical Lie algebras. Define ⟨Y, Z⟩ = −T(Y, θ(Z)). Show that ⟨ , ⟩ is positive definite and invariant under Ad(k) for all k ∈ K. Show that the linear transformation ad(X) : g → g is skew if X ∈ k and is symmetric if X ∈ s.

LECTURE 2. Structure of reductive Lie algebras

Let g be a reductive Lie algebra over either R or C. The structure of g is described in terms of ‘root systems’. In this lecture we will describe root systems and how they give us much information about the structure of the Lie algebra. First let us recall some basic linear algebra. Suppose that T : V → V is a linear transformation and V is a complex vector space. Then T is said to be diagonalizable if and only if there is a basis of V consisting of eigenvectors of T. Then, of course, with respect to such a basis the matrix of T is a diagonal matrix, with the eigenvalues as the diagonal entries. A diagonalizable linear transformation is also called semisimple. A matrix A is diagonalizable (i.e., semisimple) if there is an invertible matrix Q so that QAQ^{-1} is a diagonal matrix. Thus, a linear transformation is diagonalizable if and only if its matrix (with respect to any basis) is diagonalizable. The Jordan Decomposition Theorem states that T = S + N, with S semisimple, N nilpotent and [S, N] = 0. Furthermore, such an expression for T is unique. These facts may be found in [4, §24] and [7, §4.2]. Now consider a complex semisimple Lie algebra g. (We will return to reductive Lie algebras shortly.) Consider ad(X) : g → g, for any X ∈ g. Since g is semisimple, the center of g is zero. Therefore, ad : g → gl(g) is one-to-one. An element X ∈ g is called semisimple (resp. nilpotent) if ad(X) is semisimple (resp. nilpotent). The Jordan decomposition of X ∈ g is defined as follows. Write the Jordan decomposition ad(X) = S + N. One may prove ([7, §4.2]) that both S and

N lie in the image of ad, that is, there exist X_S and X_N in g so that S = ad(X_S) and N = ad(X_N). Since ad is one-to-one, we may conclude that X = X_S + X_N.

Furthermore, it is easy to check that [X_S, X_N] = 0. Then X = X_S + X_N is called the Jordan decomposition of X. Therefore, the Jordan decomposition of X is the unique decomposition of X into a sum of semisimple and nilpotent elements which commute. Suppose that π is any finite dimensional representation of g, i.e., π : g → gl(V) is a Lie algebra homomorphism with V a finite dimensional complex vector space. Then it is a fact that if π is injective, then X is semisimple (resp. nilpotent) if and only if π(X) is a semisimple (resp. nilpotent) linear transformation of V. For a proof of this fact see [7, §6.4]. We may conclude that if g is realized as complex matrices, then an element is semisimple in g precisely when it is semisimple as a matrix. Similarly for nilpotent.

When g is reductive then g = g_{ss} ⊕ z, where z is the center of g. Then, since the kernel of ad (equal to the center) may not be zero, we need to adjust the definition of semisimple and nilpotent. In this case we say that X is semisimple if and only if ad(X) is semisimple, and X is nilpotent if and only if X is in g_{ss} and ad(X) is

nilpotent. Now the Jordan decomposition of X is the unique decomposition of X into a sum of commuting semisimple and nilpotent elements.

Definition 2.1. For a complex semisimple Lie algebra g, a Cartan subalgebra is a maximal subalgebra of commuting semisimple elements of g.

Example 2.2. For g = gl(n, C), a convenient Cartan subalgebra is the subalgebra of diagonal matrices:

(2.3) h = {diag(a_1, a_2, . . . , a_n) : a_i ∈ C}.

For g = sp(2n, C),

(2.4) h = {diag(a_1, . . . , a_n, −a_1, . . . , −a_n) : a_i ∈ C}

is a Cartan subalgebra.

Two key facts about complex semisimple Lie algebras ([7, §16.4 and §8.2]) are:

(a) Any two Cartan subalgebras are conjugate. (By this we mean that there

is some g in the identity component of Aut(g) so that Ad(g)h1 = h2. The identity component of Aut(g) is denoted by Int(g).) (b) The centralizer of a Cartan subalgebra h is h.

A fact from linear algebra is that any commuting set of diagonalizable linear transformations is simultaneously diagonalizable. In other words there is a basis of V consisting of eigenvectors for all of the linear transformations in the set. Applying this fact to the set of all ad(H), for H in a Cartan subalgebra h, we may conclude that there is a basis of g so that each vector in this basis is an eigenvector for each ad(H). Let X be such a common eigenvector. Then ad(H)X = [H, X] = α(H)X for some scalar α(H). It follows easily that α : h → C is a linear function, that is α ∈ h^*. When α ≠ 0 we say that X is a root vector and α is the corresponding root. The set of roots is denoted by ∆ = ∆(h, g). Since the centralizer of h is h, the decomposition of g into common eigenspaces for ad(h) may be written as

g = h + ⊕_{α∈∆} g^{(α)}, where g^{(α)} ≡ {X ∈ g : [H, X] = α(H)X, for all H ∈ h}.

Facts:
(a) Suppose X ∈ g^{(α)} and Y ∈ g^{(β)}. Then there are three possibilities for [X, Y]:
[X, Y] ∈ g^{(α+β)}, if α + β ∈ ∆
[X, Y] ∈ h, if α + β = 0
[X, Y] = 0, otherwise.
In the first two cases [X, Y] ≠ 0.
(b) dim(g^{(α)}) = 1 for each root α.
(c) If α is a root, then the only multiples of α which are roots are ±α.

Example 2.5. Consider g = gl(n, C) and let h be the Cartan subalgebra of diagonal matrices as in (2.3). Let H = diag(a_1, a_2, . . . , a_n). Let E_{ij} be the matrix with 1 in the ij-place and 0's elsewhere. Then (of course) H = Σ_k a_k E_{kk} and it is easy to check that [H, E_{ij}] = (a_i − a_j)E_{ij}. Therefore {E_{ij}}, along with a basis of h, is a basis of simultaneous eigenvectors for ad(H), H ∈ h.

We conclude from this that if one writes ε_k(H) = a_k, then ∆ = {ε_i − ε_j : i ≠ j, i, j = 1, 2, . . . , n}.
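The computation in Example 2.5 is simple enough to verify by machine; here is a small illustrative check (n = 4 and the diagonal entries of H are arbitrary choices, not from the notes).

```python
# Verify [H, E_ij] = (a_i - a_j) E_ij for gl(4, C), so the roots are eps_i - eps_j.
import numpy as np

n, a = 4, np.array([1.0, 2.5, -0.5, 3.0])
H = np.diag(a)

def E(i, j):
    m = np.zeros((n, n)); m[i, j] = 1.0
    return m

for i in range(n):
    for j in range(n):
        if i != j:
            bracket = H @ E(i, j) - E(i, j) @ H
            assert np.allclose(bracket, (a[i] - a[j]) * E(i, j))
print("E_ij is a root vector for eps_i - eps_j")
```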

Example 2.6. For g = sp(2n, C) and h as in (2.4), let H = diag(a_1, . . . , a_n, −a_1, . . . , −a_n). Then an easy matrix calculation shows that

[H, E_{i,j} − E_{n+j,n+i}] = (a_i − a_j)(E_{i,j} − E_{n+j,n+i})

[H, E_{i,n+j} + E_{j,n+i}] = (a_i + a_j)(E_{i,n+j} + E_{j,n+i})

[H, E_{n+i,j} + E_{n+j,i}] = −(a_i + a_j)(E_{n+i,j} + E_{n+j,i}).

It follows that we have found a basis of simultaneous eigenvectors and

∆ = {±(ε_i ± ε_j) : 1 ≤ i < j ≤ n} ∪ {±2ε_i : 1 ≤ i ≤ n}.
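The analogous check for Example 2.6, with n = 2 (so g = sp(4, C)) and arbitrary real values of a_1, a_2, is sketched below; the three families of matrices are exactly those in the displayed brackets.

```python
# Verify the displayed brackets for sp(4, C) with H = diag(a1, a2, -a1, -a2).
import numpy as np

n, a = 2, np.array([1.0, 2.5])
H = np.diag(np.concatenate([a, -a]))

def E(i, j, size=4):
    m = np.zeros((size, size)); m[i, j] = 1.0
    return m

ad = lambda X: H @ X - X @ H
for i in range(n):
    for j in range(n):
        V = E(i, j) - E(n + j, n + i)                 # root eps_i - eps_j (i != j)
        assert np.allclose(ad(V), (a[i] - a[j]) * V)
        W = E(i, n + j) + E(j, n + i)                 # root eps_i + eps_j
        assert np.allclose(ad(W), (a[i] + a[j]) * W)
        U = E(n + i, j) + E(n + j, i)                 # root -(eps_i + eps_j)
        assert np.allclose(ad(U), -(a[i] + a[j]) * U)
print("root vectors for sp(4, C) verified")
```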

Let g be a semisimple Lie algebra with a Cartan subalgebra h and corresponding roots ∆. We make a few definitions.

Definition 2.7. A subset Π = {α_1, . . . , α_r} ⊂ ∆ is called a simple system of roots if and only if

(a) Π is a basis of h^* and
(b) every root is of the form Σ_i m_i α_i, with all m_i ≥ 0 or all m_i ≤ 0.

If Π is a system of simple roots then the corresponding system of positive roots consists of all roots of the form Σ_i m_i α_i with all m_i ≥ 0.

Example 2.8. In the case of g = sl(n, C), one may take Π = {ε_1 − ε_2, ε_2 − ε_3, . . . , ε_{n−1} − ε_n} and the corresponding set of positive roots is ∆^+ = {ε_i − ε_j : 1 ≤ i < j ≤ n}. For g = sp(2n, C), a system of simple roots is Π = {ε_1 − ε_2, ε_2 − ε_3, . . . , ε_{n−1} − ε_n} ∪ {2ε_n} with corresponding positive roots ∆^+ = {ε_i ± ε_j : 1 ≤ i < j ≤ n} ∪ {2ε_i : 1 ≤ i ≤ n}.

One more important feature of root systems is that κ, the Killing form on g, defines an inner product on the real span of the roots. Since κ is nondegenerate on h (see Exercise (2.5)), κ defines a nondegenerate symmetric form on h^* as follows. First, for λ ∈ h^* let H_λ ∈ h be defined by κ(H_λ, H) = λ(H), H ∈ h. Now set ⟨λ_1, λ_2⟩ = κ(H_{λ_1}, H_{λ_2}). For the following properties see [7, §8.4, §8.5]:

(a) ⟨ , ⟩ is positive definite on h_R^* = span_R{∆}.
(b) For α, β ∈ ∆, the numbers 2⟨α, β⟩/⟨β, β⟩ are integers (called the Cartan integers).

The Weyl group is the group W which is generated by all reflections about planes orthogonal to the roots. Suppose that α is a root. Then the corresponding reflection is given by the formula

σ_α(λ) = λ − \frac{2⟨λ, α⟩}{⟨α, α⟩} α, λ ∈ h^*.
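As a quick sanity check of this formula, one can implement σ_α for the root system of sl(n) (roots ε_i − ε_j realized in R^n with the standard inner product) and confirm that each reflection maps the set of roots to itself. The sketch below, with n = 4, is illustrative only.

```python
# Reflections sigma_alpha for the roots eps_i - eps_j of sl(4): each one permutes Delta.
import numpy as np

n = 4
eps = np.eye(n)
roots = [eps[i] - eps[j] for i in range(n) for j in range(n) if i != j]

def sigma(alpha, lam):
    # sigma_alpha(lambda) = lambda - 2<lambda, alpha>/<alpha, alpha> alpha
    return lam - 2 * (lam @ alpha) / (alpha @ alpha) * alpha

for alpha in roots:
    for beta in roots:
        image = sigma(alpha, beta)
        assert any(np.allclose(image, r) for r in roots)  # image is again a root
print("each sigma_alpha permutes Delta")
```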

It is a fact that each element of W permutes ∆. In the two examples above the inner product described here is the inner product for which {ε_i} is an orthonormal basis. The Cartan integers turn out to be 0, ±1 and ±2. For sl(n, C) the Weyl group is isomorphic to the symmetric group S_n. Elements of W are the linear transformations defined by permutations of the basis {ε_i : i = 1, . . . , n}. For sp(2n, C), W ≃ (Z/2Z)^n ⋊ S_n. We make a few comments on useful ways to describe positive systems of roots. Fix Λ ∈ h_R^* so that Λ is regular in the sense that ⟨Λ, α⟩ ≠ 0 for all roots α. Then ∆_Λ^+ ≡ {α ∈ ∆ : ⟨Λ, α⟩ > 0} is a positive system of roots. In both of the examples given, Λ = Σ_m (n + 1 − m)ε_m defines the positive system described. It is clear that there are many positive systems. How many are there for each of the two examples? Another way to describe positive sets of roots is to use a lexicographic order. For this let {H_i} be a basis of span_R{H_α : α ∈ ∆}. Define α ∈ ∆^+ if and only if α(H_1) > 0, or α(H_i) = 0, i = 1, . . . , k and α(H_{k+1}) > 0 for some k. Then ∆^+ is a positive system of roots. This method has the advantage that an ordering on h_R^* = span_R{∆} is determined by

(2.9) λ ≥ μ if and only if λ(H_1) > μ(H_1), or λ(H_i) = μ(H_i), i = 1, . . . , k and λ(H_{k+1}) > μ(H_{k+1}) for some k.

A third way is to apply an element of W to any known positive system; w∆+ is positive if ∆+ is positive. In fact, the action of W on the set of positive systems is simply transitive. It is useful to define an abstract root system as follows.

Definition 2.10. An abstract root system is a pair (V, ∆) where V is a real vector space with an inner product h , i and ∆ is a finite subset of V \{0} so that the following four properties hold.

(1) ∆ spans V . (2) If α ∈ ∆, then the only multiples of α which are in ∆ are ±α.

(3) For each α ∈ ∆, the reflection σ_α permutes ∆.
(4) For α, β ∈ ∆, the numbers 2⟨α, β⟩/⟨β, β⟩ are integers.

The notions of simple and positive systems of roots, as well as the Weyl group, are defined as above. The Dynkin diagram of a root system is defined as follows. Choose a system of simple roots Π. There is a node for each α ∈ Π. For any pair α, β ∈ Π connect the nodes for α and β by

\frac{2⟨α, β⟩}{⟨β, β⟩} · \frac{2⟨β, α⟩}{⟨α, α⟩}

edges. In addition, if one of the two simple roots α and β is longer than the other then place either > or < along the edges pointing in the direction of the longer root. The Dynkin diagrams may be classified. It is a fact that for each root system there is a unique Dynkin diagram and two Dynkin diagrams are the same if and only if the root systems are isomorphic (in the appropriate sense). It is also a fact that there is a one-to-one correspondence between Dynkin diagrams and semisimple Lie algebras (up to isomorphism) established as follows. Given a semisimple Lie algebra g choose a Cartan subalgebra h, form the root system ∆(h, g) and choose a set of simple roots Π. This gives a Dynkin diagram as above. (It is not obvious, but the Dynkin diagram is independent of the choices of Cartan subalgebra and of simple system of roots Π.) One can check that g is simple if and only if the corresponding Dynkin diagram is connected. We have discussed root systems for complex semisimple Lie algebras. Now let us turn to real Lie algebras. A real Lie algebra g_0 is semisimple if and only if g_0 ⊗ C is a complex semisimple Lie algebra. So we may consider any real semisimple Lie algebra g_0 as a real subalgebra of the complex semisimple Lie algebra g = g_0 ⊗ C.

Definition 2.11. A real form of a complex Lie algebra g is a real subalgebra g_0 of g so that g = g_0 ⊕ i g_0.

So let us fix a complex semisimple Lie algebra g. A conjugation of g is a conjugate linear homomorphism τ : g → g of order two. Another way to say this is that τ is a real Lie algebra homomorphism so that τ(aX) = \bar{a} τ(X), for a ∈ C and X ∈ g, and τ^2 = I. For g = sl(n, C), typical conjugations are

X ↦ \bar{X}
X ↦ −\bar{X}^t
(2.12)
X ↦ −I_{p,q} \bar{X}^t I_{p,q}, where p + q = n
X ↦ J \bar{X} J^{-1}, with J = \begin{pmatrix} 0 & I_m \\ −I_m & 0 \end{pmatrix}, when n = 2m.

Proposition 2.13. Given a conjugation τ, g0 = {X ∈ g : τ(X) = X} is a real form. Conversely, if g0 is a real form then τ(X + iY ) = X − iY, for X,Y ∈ g0, defines a conjugation of g. This establishes a one-to-one correspondence between conjugations and real forms.

Proof. This (easy) proof is left to the reader. However, observe that for any X ∈ g,

X = ½(X + τ(X)) + ½(X − τ(X)) ∈ g_0 + i g_0. □

For the example of sl(n, C), the real forms corresponding to the conjugations listed in (2.12) are sl(n, R), su(n), su(p, q) and, when n = 2m,

sl(m, H) ≡ \left\{ \begin{pmatrix} A & B \\ −\bar{B} & \bar{A} \end{pmatrix} \right\}.

(The last Lie algebra may be identified with the ‘trace’ zero linear transformations of H^m, H the quaternions.)

It turns out that for a real form g_0 of g (a complex semisimple Lie algebra), the structure may be described in terms of roots; however, the situation is a bit more complicated than for complex Lie algebras. This is essentially because a linear transformation of a real vector space V may have complex eigenvalues; the eigenvectors lie in the complexification of V, but not in V. An example of this occurs in g_0 = su(n), where eigenspaces of ad(H), H a diagonal matrix, are not in su(n). Instead of root systems, the appropriate notion is that of ‘restricted roots’.

Let us fix a real Lie algebra g_0 and a Cartan involution θ, with decomposition g_0 = k_0 + s_0. Fix a maximal abelian subspace a_0 of s_0. This will play the role of

a Cartan subalgebra. Note that (by Exercise (1.4)) when X ∈ a_0, ad(X) : g_0 → g_0 is symmetric. Therefore, ad(X) is diagonalizable (as a real linear transformation of a real vector space). Since all ad(X) for X ∈ a_0 mutually commute, they are simultaneously diagonalizable.

Example 2.14. Let g_0 = su(2, 4). Then we may choose a_0 to be

a_0 = \left\{ \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & a_1 \\ 0 & 0 & 0 & 0 & a_2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & a_2 & 0 & 0 & 0 & 0 \\ a_1 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} : a_i ∈ R \right\}.

The common eigenspaces for ad(A), A ∈ a_0, are called restricted root spaces and the eigenvalues are given by elements of a_0^*, which we call restricted roots (when nonzero).

Here is the reason the term restricted roots is used. Consider z_{k_0}(a_0), the subalgebra of elements of k_0 which commute with a_0, and let t_0 be a maximal commutative subalgebra of z_{k_0}(a_0). Then h = (t_0)_C + (a_0)_C is a Cartan subalgebra of g. (Check this.) Then we have the following.

Proposition 2.15. The set of restricted roots, denoted by Σ(a0, g0), is

{β : β = α|_{a_0}, α ∈ ∆(h, g) and β ≠ 0}.

The following example illustrates that Σ(a_0, g_0) is not a root system, since part (2) of the definition fails. Also, the dimension of restricted root spaces need not be one. Note that the (complexification of the) restricted root space for β is

Σ_{α∈∆(h,g), α|_{a_0} = β} g^{(α)}.

Example 2.16. Consider g_0 = u(2, 4) with a_0 as in the previous example. Then we may choose t_0 so that

h_0 = t_0 + a_0 = \left\{ \begin{pmatrix} it_1 & 0 & 0 & 0 & 0 & a_1 \\ 0 & it_2 & 0 & 0 & a_2 & 0 \\ 0 & 0 & it_3 & 0 & 0 & 0 \\ 0 & 0 & 0 & it_4 & 0 & 0 \\ 0 & a_2 & 0 & 0 & it_2 & 0 \\ a_1 & 0 & 0 & 0 & 0 & it_1 \end{pmatrix} : a_i, t_j ∈ R \right\}.

This is conjugate to

\{ diag(a_1 + it_1, a_2 + it_2, it_3, it_4, −a_2 + it_2, −a_1 + it_1) : a_i, t_j ∈ R \},

therefore we have

Roots                       | Restricted roots | Multiplicity
±(a_1 ± a_2) ± i(t_1 − t_2) | ±(a_1 ± a_2)     | 2
±a_j ± i(t_j − t_k)         | ±a_j             | 4
±2a_j                       | ±2a_j            | 1
±i(t_3 − t_4)               | −                | −

The multiplicity is the dimension of the restricted root space.

The above discussion immediately gives the following decomposition of g0.

Proposition 2.17. Let a_0 be as above and fix a system of positive roots Σ^+ in Σ(a_0, g_0). Let

n = Σ_{β∈Σ^+} g_0^{(β)}, n̄ = Σ_{β∈Σ^+} g_0^{(−β)} and m_0 = z_{k_0}(a_0).

Then g_0 = m_0 + a_0 + n + n̄.

EXERCISES

(2.1) Consider the complex Lie algebra defined by the bilinear form

B = \begin{pmatrix} 0 & I_n \\ I_n & 0 \end{pmatrix}

(a Lie algebra isomorphic to so(2n, C)). Determine a convenient Cartan subalgebra. Do the same for so(2n + 1, C). (Hint: use the bilinear form

B = \begin{pmatrix} 0 & 0 & I_n \\ 0 & 1 & 0 \\ I_n & 0 & 0 \end{pmatrix}

to define the Lie algebra.)

(2.2) Write down the root systems for the orthogonal Lie algebras using the Cartan subalgebras found in the preceding problem.

(2.3) Use the root space decomposition of g to show that sl(n, C), (n ≥ 2), so(n, C), (n ≥ 3, n ≠ 4) and sp(2n, C), (n ≥ 1) are simple Lie algebras.

(2.4) Find the Weyl group for the orthogonal groups.

(2.5) Suppose that κ is an invariant symmetric bilinear form on a semisimple Lie algebra g and that h is a Cartan subalgebra. Prove that g^{(α)} and g^{(β)} are orthogonal with respect to κ unless α + β = 0. Prove that κ is nondegenerate on h.

(2.6) Prove that if Λ ∈ h_R^* then

q ≡ h + Σ_{⟨Λ,α⟩>0} g^{(α)}

is a subalgebra of g.

(2.7) Prove that for h = a ⊕ t, for each α ∈ ∆(h, g),

α(H) ∈ R, if H ∈ a_0
α(H) ∈ iR, if H ∈ t_0

holds.

LECTURE 3. Parabolic subgroups

Consider G = GL(n, R). We have already seen that G = K exp(s), the polar decomposition of G. There are two other decompositions which we know from linear algebra. One is called the ‘L-U’ decomposition. It says that most matrices g ∈ G may be written as g = LU with L a lower triangular matrix with 1's on the diagonal and U an upper triangular matrix (with arbitrary nonzero diagonal entries). Such an expression is easily seen to be unique. The set of invertible matrices with such a decomposition is open and dense in GL(n, R), but this is not immediately clear. In order to find the L and U for a given matrix g, one may perform row operations in the form of adding a multiple of a row to a later row. As long as no pivots are 0 one ends up with a matrix U, i.e., for some L (of the proper form) L^{-1}g = U. It may be seen that the pivots will never be zero if and only if all principal minors are nonzero. In other words, the subspace of all g = LU is open and dense in GL(n, R). This decomposition applies equally well for GL(n, C). The other decomposition referred to above is called the Iwasawa decomposition. This states that each element of GL(n, R) may be written uniquely as g = kan with k, a and n in the following groups.

K = O(n),

A = \left\{ diag(a_1, a_2, . . . , a_n) : a_j > 0 \right\}, and

N = \left\{ \begin{pmatrix} 1 & * & * & * \\ 0 & 1 & * & * \\ 0 & 0 & \ddots & * \\ 0 & 0 & 0 & 1 \end{pmatrix} \right\}.
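For GL(n, R) this decomposition can be computed with a QR factorization: numpy returns g = QR, and after normalizing R to have positive diagonal one takes k = Q, a the diagonal part of R, and n = a^{-1}R. The sketch below (random g, n = 4) is an illustration, not part of the original notes.

```python
# Iwasawa decomposition g = kan of a random element of GL(4, R) via QR.
import numpy as np

rng = np.random.default_rng(5)
g = rng.standard_normal((4, 4))

Q, R = np.linalg.qr(g)
signs = np.sign(np.diag(R))
k, R = Q * signs, (R.T * signs).T        # flip signs so that diag(R) > 0
a = np.diag(np.diag(R))                  # positive diagonal part
n = np.linalg.inv(a) @ R                 # unit upper triangular

assert np.allclose(k.T @ k, np.eye(4))                            # k in O(4)
assert np.all(np.diag(a) > 0)                                     # a in A
assert np.allclose(np.diag(n), 1) and np.allclose(n, np.triu(n))  # n in N
assert np.allclose(k @ a @ n, g)                                  # g = kan
print("Iwasawa decomposition: g = kan")
```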

This is essentially a restatement of the Gram-Schmidt orthogonalization process ([4, §15]) applied to the columns of g. Let us see how this happens. Write the usual dot product as h , i. Letting gj be the j-th column, write

u_1 = g_1/||g_1||,

u_2 = (g_2 − ⟨u_1, g_2⟩u_1)/||g_2 − ⟨u_1, g_2⟩u_1||,
⋮
u_n = (g_n − Σ_{j=1}^{n−1} ⟨u_j, g_n⟩u_j) / ||g_n − Σ_{j=1}^{n−1} ⟨u_j, g_n⟩u_j||,

an orthonormal basis of R^n. For each m,

u_m = a_m^{-1}(g_m − Σ_{j=1}^{m−1} ⟨u_j, g_m⟩u_j), with a_m > 0,
    = a_m^{-1}(g_m + Σ_{j=1}^{m−1} b_{jm}g_j),

for some scalars b_{jm}. Now take n^{-1} to be the matrix with (i, j)-entry equal to 1 if i = j, 0 if i > j and b_{ij} if i < j. Then it follows that g = kan, where the columns of k are the column vectors u_1, . . . , u_n and a = diag(a_1, . . . , a_n). Note that since {u_i} is an orthonormal set, the matrix k is orthogonal.

All three of these decompositions, if properly stated, hold for general reductive groups. In this lecture we will carefully state each. The polar decomposition is referred to as the Cartan decomposition and is part of the definition of a real reductive group. In Exercise (3.2) you will prove the following for the classical groups introduced in Lecture 1. Suppose that G is a classical group defined in Lecture 1. Let K be the fixed points of Θ and let s be the −1-eigenspace of θ.

Proposition 3.1. φ : K × s → G, φ(k, X) = k exp(X), is a diffeomorphism.

Corollary 3.2. The group G is connected (resp. simply connected) if and only if K is connected (resp. simply connected).

For the other two decompositions we will need some preparation. In particular we will need the definition of, and some information about, parabolic subgroups. Let us first assume that g is a complex reductive Lie algebra. A Borel subalgebra is a maximal solvable subalgebra of g. Let b be a Borel subalgebra. Then b contains a Cartan subalgebra h, and b is the direct sum of h and certain root spaces. It can be shown that there is a positive system of roots ∆^+ so that

(3.3) b = h + Σ_{α∈∆^+} g^{(α)}.

All Borel subalgebras of g are conjugate under Int(g). A parabolic subalgebra is any subalgebra which contains a Borel subalgebra. By the conjugacy of Borel subalgebras, each parabolic subalgebra is conjugate to one containing a fixed Borel subalgebra. Therefore to describe the parabolic subalgebras it suffices to fix a Cartan subalgebra h and a positive system ∆^+, and describe the parabolic subalgebras containing the Borel subalgebra of (3.3). Call Λ ∈ h^* dominant if ⟨Λ, α⟩ ≥ 0, for all α ∈ ∆^+. Then for each dominant Λ ∈ h^* set

(3.4) p(Λ) = h + Σ_{α∈∆, ⟨Λ,α⟩≥0} g^{(α)}.

Then p(Λ) contains b, so is a parabolic subalgebra. Conversely, each parabolic subalgebra containing b is of this form for some dominant Λ. Furthermore, p(Λ) = l(Λ) + n(Λ), where

l(Λ) = h + Σ_{⟨Λ,α⟩=0} g^{(α)}, a reductive subalgebra, and
n(Λ) = Σ_{⟨Λ,α⟩>0} g^{(α)}, the maximal nilpotent ideal in p(Λ).

We will call this decomposition the Levi decomposition. It is useful to refine this description slightly as follows. Let Π be the system of simple roots (for our fixed ∆^+). Define the fundamental weights λ_α, α ∈ Π, by

2⟨λ_α, β⟩/⟨β, β⟩ = 1 if β = α, and 0 if β ≠ α, for β ∈ Π.

Fact 3.5. The set of parabolic subalgebras containing b is in one-to-one correspondence with the subsets of Π. This one-to-one correspondence is given by S ↦ p(Λ_S), where Λ_S = Σ_{α∈S} λ_α.

The parabolic subalgebra p(Λ_S) may be thought of as coming from the Dynkin diagram by crossing out the simple roots in S. Then the root system generated by the remaining simple roots is ∆(l), and n is spanned by the root vectors for the other positive roots. For example, when g = sp(2n, C) and

Π = {α_i = ε_i − ε_{i+1} : i = 1, 2, . . . , n − 1} ∪ {α_n = 2ε_n},

the maximal parabolic subalgebras (that is, those not properly contained in any parabolic subalgebra other than g itself) correspond to those S containing just one (simple) root. For m = 1, . . . , n − 1 and S = {α_m},

∆(l) = {ε_i − ε_j : 1 ≤ i, j ≤ m, i ≠ j} ∪ {±(ε_i ± ε_j) : m + 1 ≤ i < j ≤ n} ∪ {±2ε_i : m + 1 ≤ i ≤ n},
l ≃ gl(m, C) ⊕ sp(2(n − m), C).

In these cases n is 2-step nilpotent (i.e., n is nonabelian and [n, [n, n]] = {0}). When S = {α_n}, l ≃ gl(n, C) and n is abelian. If G is a connected complex Lie group with Lie algebra g, define any subgroup P which is the normalizer of some parabolic subalgebra to be a parabolic subgroup.

Thus each parabolic subgroup is of the form P = N_G(p) = {g ∈ G : Ad(g)p ⊂ p}. It may be checked that the Lie algebra of P is p. It is also a fact that P is connected. Now consider a real reductive group G with Lie algebra g. Choose a maximal abelian subspace in s and call it a_min. (The reason for this notation will become clear in a moment.) Let t_min be a maximal abelian subalgebra of m_min = z_k(a_min).

Then, as in Lecture 2, h = (t_min)_C ⊕ (a_min)_C is a Cartan subalgebra of g. Fix a positive system of roots ∆^+ ⊂ ∆(h, g) with the property that {β : β = α|_{a_min}, α ∈ ∆^+, β ≠ 0} is a positive system of restricted roots, which we will call Σ^+. Write n_min = Σ_{β∈Σ^+} g^{(β)}. Then

p_min = m_min + a_min + n_min is called a minimal parabolic subalgebra. A subalgebra p of g is a parabolic subalgebra if p is conjugate (under Int(g)) to a subalgebra of g containing p_min. Note that (p_min)_C is a parabolic subalgebra of g_C, since it contains all root spaces for α ∈ ∆^+. Therefore the complexification of each parabolic subalgebra in g is a parabolic subalgebra in g_C. The converse does not hold, the extreme case being when G is compact (where a_min = {0} and p_min = g).

Suppose p ⊃ p_min. We wish to describe the Levi decomposition p = l + n. Since p contains a_min,

p = m_min + a_min + Σ_{β∈Γ} g^{(β)},

for some set Γ of restricted roots. The restricted roots in Γ fall into two types, those β for which −β ∈ Γ, and the others. Therefore, Γ = Γ_0 ∪ Γ_1, with Γ_0 = Γ ∩ (−Γ) and Γ_1 = Γ \ Γ_0. We will call

(3.6) p = l + n, with l = m_min + a_min + Σ_{β∈Γ_0} g^{(β)} and n = Σ_{β∈Γ_1} g^{(β)},

the Levi decomposition of p. For the real reductive group G we define a parabolic subgroup to be a subgroup which is the normalizer of a parabolic subalgebra in g. Suppose that P = N_G(p), p a parabolic subalgebra of g. Then, as for complex groups, the Lie algebra of P is p. Then there is a real reductive group L so that P = LN, N = exp(n). The group L may be chosen so that Lie(L) = l; however, L is typically not connected. Explicitly, if L_e is the connected subgroup of G with Lie algebra l, then L = N_K(p)L_e. So, the normalizer in K meets each connected component of L.

A simple example is when G = GL(n, R) and P = P_min is the normalizer of p_min. Then N_K(p_min) is the group of diagonal matrices with ±1 in the diagonal entries, L_e is the group of all diagonal matrices with positive diagonal entries and L is the group of all invertible diagonal matrices.

For a parabolic subalgebra p ⊃ p_min with Levi decomposition p = l + n, the opposite parabolic subalgebra is

p̄ = l + n̄, with n̄ = Σ_{β∈Γ_1} g^{(−β)}.

For the parabolic subgroup P = N_G(p), the opposite parabolic subgroup is

(3.7) P̄ = N_G(p̄), so P̄ = LN̄, with N̄ = exp(n̄). The opposite parabolic is defined similarly for a complex Lie algebra (or group). The Iwasawa decomposition of a real reductive group is given in the following theorem.

Theorem 3.8. ([6, Ch. VI, §3]) The map

φ : K × A_min × N_min → G, (k, a, n) ↦ kan

is a diffeomorphism.

Definition 3.9. A real flag manifold is any homogeneous space G/P where G is a real reductive group and P is a subgroup having Lie algebra which is a parabolic subalgebra. Note that P need not be a parabolic subgroup; P lies between the identity component of a parabolic subgroup and the parabolic subgroup itself.

The Iwasawa decomposition implies that a real flag manifold is compact. The generalization of the ‘L-U’ decomposition is the following.

Proposition 3.10. For any parabolic subgroup P = LN of a real reductive group, NP̄ is a dense open subset of G. The expression for any g ∈ NP̄ as g = n(g)p̄(g), with n(g) ∈ N and p̄(g) ∈ P̄, is unique, and n and p̄ are smooth functions of g.

We will refer to

(3.11) g = n(g)p̄(g)

as the Bruhat decomposition of g.

EXERCISES

(3.1) Prove that each parabolic subalgebra is conjugate to one of the form defined in Equation (3.4).

(3.2) Prove the fact stated in 3.5.

(3.3) Prove that if p is a parabolic subalgebra of a complex reductive Lie algebra, then there exists H_0 ∈ h so that α(H_0) ∈ Z for all α ∈ ∆, ∆(l) = {α : α(H_0) = 0} and ∆(n) = {α : α(H_0) > 0}.

(3.4) Determine all parabolic subalgebras of sl(n, C) and so(n, C) for which n is abelian. Determine all parabolic subalgebras of sl(n, C) and so(n, C) for which n is 2-step nilpotent.

(3.5) Let g be a real reductive Lie algebra. Show that p_min contains no proper subalgebra having complexification which is a parabolic subalgebra of g_C. (Thus the terminology ‘minimal parabolic subalgebra’ for p_min.)

(3.6) For each maximal parabolic subalgebra of sp(2n, R) determine the group L of the Levi decomposition.

(3.7) Prove the Iwasawa decomposition and Prop. 3.10 for the group Sp(2n, R) by using the corresponding statement for GL(2n, R).

LECTURE 4. Complex groups and finite dimensional representations

In this lecture we will describe finite dimensional representations of complex reductive Lie algebras. The main theorem is Theorem 4.5, called the Theorem of the Highest Weight. See [7, §6, 20 and 21] for details and proofs of most of the statements made in this lecture. We will also make some comments on finite dimensional representations of real Lie algebras and Lie groups. Suppose that g is any real or complex Lie algebra. Then a (finite dimensional) representation of g is a pair (σ, E) with E a finite dimensional complex vector space and σ a Lie algebra homomorphism

σ : g → gl(E).

Sometimes we refer to σ or E as the representation. We also say that (σ, E) is a representation on E. Note that σ is a real or complex homomorphism depending on whether g is a real or complex Lie algebra, but in either case E is a complex vector space. (Sometimes it is useful to consider representations on real vector spaces; however, we will not do this.) Fix for a moment a Lie algebra g and a representation (σ, E) of g. There are a number of standard definitions to be made. First, if dim_C(E) = n then we say σ is n-dimensional. A subspace F of E is a g-invariant (or just invariant) subspace if σ(X)v ∈ F for all v ∈ F and X ∈ g. We say that the representation is irreducible if there are no invariant subspaces other than {0} and E. If F is an invariant subspace, then we refer to F as a subrepresentation of (σ, E); more precisely, the subrepresentation is the representation (σ_F, F) defined by σ_F(X) = σ(X)|_F, for each X ∈ g. In the case that E = F_1 ⊕ · · · ⊕ F_m with each F_i an invariant subspace, we say that E is the direct sum of the subrepresentations F_1, . . . , F_m. If E may be written as the direct sum of irreducible subrepresentations, then σ is said to be completely reducible. Given two representations (σ_i, E_i), i = 1, 2, of g, an intertwining map (also referred to as a g-homomorphism) is a linear map T : E_1 → E_2 so that σ_2(X) ∘ T = T ∘ σ_1(X), for all X ∈ g. The space of all such maps is denoted by Hom_g(E_1, E_2). The representations σ_1 and σ_2 are equivalent if there is an invertible g-homomorphism T : E_1 → E_2.

A simple observation is that for any intertwining map from E1 to E2, both

Ker(T ) and Im(T ) are invariant subspaces. It follows immediately that if E1 and

E2 are irreducible, then T is either invertible or identically zero.

Lemma 4.1. (Schur's Lemma) If (σ, E) is an irreducible representation of g, then

Hom_g(E, E) = {cI_E : c ∈ C}.

Proof. Suppose that T ∈ Hom_g(E, E). Then T − λI is also a g-homomorphism. Therefore T − λI is either invertible or zero. Choosing λ to be an eigenvalue of T ensures that T − λI is not invertible. Then T = λI holds. □

Theorem 4.2. (Weyl's Theorem) Every finite dimensional representation of a semisimple Lie algebra is completely reducible.

Proof. See [7, §6] for a proof. 

An example of a finite dimensional representation is the adjoint representation (ad, g) on g. This is defined by ad(X)(Y) = [X, Y]. An invariant subspace is an ideal, and the adjoint representation is irreducible if and only if g is simple or one-dimensional. There are several natural constructions of representations from given representations. We will give the definitions of a few of these below. Let (σ, E), (σ_j, E_j) and (ρ, F) be representations of a Lie algebra g. (a) The dual of σ is the representation (σ^*, E^*) defined by (σ^*(X)μ)(v) = −μ(σ(X)v), for X ∈ g, μ ∈ E^* and v ∈ E.

(b) There are three representations on Hom_C(E, F). Let X ∈ g, v ∈ E and

T ∈ Hom_C(E, F). Then three different representations are defined by

(X · T)(v) = ρ(X)T(v) − T(σ(X)v),
(4.3) (X · T)(v) = ρ(X)T(v),
(X · T)(v) = −T(σ(X)v).

(c) The direct sum of the (σ_j, E_j) is the representation on ⊕E_j defined by

X · (v_1 + · · · + v_m) = σ_1(X)v_1 + · · · + σ_m(X)v_m.

(d) The tensor product of the (σ_j, E_j) is the representation on E_1 ⊗ · · · ⊗ E_m defined by

X · (v_1 ⊗ · · · ⊗ v_m) = (σ_1(X)v_1) ⊗ v_2 ⊗ · · · ⊗ v_m + v_1 ⊗ (σ_2(X)v_2) ⊗ · · · ⊗ v_m + · · · + v_1 ⊗ · · · ⊗ v_{m−1} ⊗ (σ_m(X)v_m).

(e) A representation on the exterior product Λ^m(E) is defined by

X · (v_1 ∧ · · · ∧ v_m) = (σ(X)v_1) ∧ v_2 ∧ · · · ∧ v_m + v_1 ∧ (σ(X)v_2) ∧ · · · ∧ v_m + · · · + v_1 ∧ · · · ∧ v_{m−1} ∧ (σ(X)v_m).

(f) A representation on S^m(E), the symmetric tensors of degree m, is defined as in (d).

Now let us turn to representations of complex semisimple Lie algebras. So let g be a complex semisimple Lie algebra and fix a representation (σ, E) of g. A key to understanding σ is to understand the weights. Fix a Cartan subalgebra h of g.

Eµ = {v ∈ E : σ(H)v = µ(H)v, for all H ∈ h} ∪ {0} is the space of weight vectors (along with 0) corresponding to µ. As mentioned in Lecture 2, each σ(H) is semisimple for H ∈ h. Therefore, M E = Eµ. µ∈∆(E) An example which we have already encountered is the adjoint representation. The weights are the roots and 0. The dimension of each weight space is one, except for the weight 0 which has dimension equal to the dimension of h. Another example is the ‘standard representation’ of sp(2n, C) on C2n. With respect to the Cartan subalgebra of Example 2.6, the weights are {±j : i = 1, . . . , n}. An important observation is that (α) v ∈ Eµ,X ∈ g implies σ(X)v ∈ Eµ+α. This holds because when H ∈ h we have σ(H)σ(X)v = σ([H,X])v + σ(X)σ(H)v = (α(H) + µ(H))σ(X)v. + P (α) Now fix a positive system of roots ∆ . Recall that b = h + n, n = α∈∆+ g , is a Borel subalgebra of g. A vector v ∈ E is called a highest weight vector if and only if σ(X)v = 0 for all X ∈ n. The space of highest weight vectors is denoted by En. Since E is finite dimensional, En 6= {0}. The following statements are proved in [7, §20,21]. Proposition 4.4. Let (σ, E) be an irreducible finite dimensional representation of a semisimple Lie algebra g. Let b = h + n be a Borel subalgebra corresponding to a positive system ∆+. Then the following hold.

(1) dim(E^n) = 1 and the weight λ of a highest weight vector (called the highest weight) is the unique weight satisfying λ + α ∉ ∆(E) for all α ∈ ∆^+. Furthermore, dim(E_λ) = 1.
(2) Each weight is of the form λ − Σ_{α∈∆^+} m_α α, with m_α ∈ Z_{≥0}.
(3) The Weyl group W acts on ∆(E) and dim(E_μ) = dim(E_{wμ}).
(4) All weights are contained in the convex hull of {wλ : w ∈ W}.

The set of dominant integral elements of h^* is

Λ^+ = {μ ∈ h^* : 2⟨μ, α⟩/⟨α, α⟩ ∈ Z_{≥0}, for all α ∈ ∆^+}.

With the hypothesis of the proposition we have the following, which is often referred to as the Theorem of the Highest Weight.

Theorem 4.5. The highest weight of E is in Λ^+. The map taking the irreducible representation E to its highest weight defines a one-to-one correspondence

Λ^+ ↔ {irreducible finite dimensional representations of g}.

Now we turn to complex reductive Lie algebras. Therefore we will let g be of the form g = g_{ss} ⊕ z, that is, g is the direct sum of its center z and a semisimple Lie algebra g_{ss}. Fix a Cartan subalgebra h = h_{ss} ⊕ z, with h_{ss} a Cartan subalgebra of g_{ss}. If (σ, E) is an irreducible finite dimensional representation of g, then by Schur's Lemma for each Z ∈ z there is a constant c so that σ(Z) = cI_E. Therefore the restriction to g_{ss} is irreducible, so there is a unique highest weight vector and highest weight. On the other hand, given λ ∈ h^* which is dominant and integral, there is an irreducible representation (σ_{ss}, E) of g_{ss} with highest weight λ|_{h_{ss}}. Defining σ(X + Z) = σ_{ss}(X) + λ(Z)I_E for X ∈ g_{ss} and Z ∈ z gives an irreducible representation of g with highest weight λ. Therefore the Theorem of the Highest Weight holds for complex reductive Lie algebras.

Let g now be a real Lie algebra, and let g_C be its complexification. Given a representation (σ, E) of g there is a representation (σ_C, E) defined by σ_C(X + iY) = σ(X) + iσ(Y) for X, Y ∈ g. Then σ_C is irreducible (resp. completely reducible) if and only if σ is irreducible (resp. completely reducible). It follows that the irreducible representations of a real reductive Lie algebra are in one-to-one correspondence with those of g_C. If G is a Lie group, a finite dimensional representation is a pair (ρ, E) with E a complex vector space and ρ : G → GL(E) a smooth group homomorphism. The definitions of invariant subspace, irreducible, etc. are as already given for representations of Lie algebras. The constructions given for representations of Lie algebras are similar. Let g ∈ G. (a) The dual E^*: (σ^*(g)μ)(v) = μ(g^{-1}v).

(b) Hom_C(E, F):

(g · T)(v) = ρ(g)T(σ(g)^{-1}v),
(g · T)(v) = ρ(g)T(v),
(g · T)(v) = T(σ(g^{-1})v).

(c) The direct sum of (σj,Ej) is the representation on ⊕Ej is defined by

g · (v1 + ··· + vm) = σ1(g)v1 + ··· + σm(g)vm.

(d) Tensor product:

g · (v1 ⊗ · · · ⊗ vm) = (σ1(g)v1) ⊗ (σ2(g)v2) ⊗ · · · ⊗ (σm(g)vm).

(e) Exterior product

g · (v_1 ∧ · · · ∧ v_m) = (σ(g)v_1) ∧ (σ(g)v_2) ∧ · · · ∧ (σ(g)v_m).

(f) S^m(E): defined as in (d).

(g) P(E): (g · p)(v) = p(g^{-1}v).

Given a representation (μ, E) of a Lie group G, a representation of its Lie algebra is defined by

dμ(X)v = (d/dt) μ(exp(tX))v|_{t=0}, for X ∈ g.

A basic fact is that for a connected Lie group G with Lie algebra g, a representation μ of G is irreducible if and only if dμ is an irreducible representation of g. When G is not connected such a statement fails.
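The formula for dμ can be checked numerically on a small example. The sketch below (an illustration with an arbitrary X, not from the notes) takes μ(g) = g ⊗ g, the tensor square of the standard representation of GL(2, R), and verifies that the derivative at t = 0 is X ⊗ I + I ⊗ X, the tensor product action of the Lie algebra from construction (d) of Lecture 4.

```python
# Differential of mu(g) = g (x) g: dmu(X) = X (x) I + I (x) X.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(6)
X = rng.standard_normal((2, 2))
I = np.eye(2)

mu = lambda g: np.kron(g, g)              # the representation on R^2 (x) R^2
dmu_expected = np.kron(X, I) + np.kron(I, X)

t = 1e-6                                  # central difference approximating d/dt at t = 0
dmu_numeric = (mu(expm(t * X)) - mu(expm(-t * X))) / (2 * t)
assert np.allclose(dmu_numeric, dmu_expected, atol=1e-6)
print("d(g (x) g) = X (x) I + I (x) X")
```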

EXERCISES

(4.1) Give an example of a 2-dimensional Lie algebra g and a 2-dimensional representation of g which is not completely reducible.

(4.2) Show that given a representation (σ, E) of G, the representations S^m(E) and P(E^*) are equivalent.

(4.3) Show that for a representation (σ, E) of g, E^* ⊗ E is equivalent to End(E).

(4.4) Let g be a semisimple Lie algebra and b = h + n a Borel subalgebra. Show that for any finite dimensional representation E of g, E^n ≠ {0}.

(4.5) Write down the highest weight of the adjoint representation of sl(n, C), so(n, C) and sp(2n, C).

LECTURE 5. Homogeneous spaces, differential operators and the enveloping algebra

5.1. Homogeneous spaces. For any group G we say that a group action of G on a set M is a map (5.1) G × M → M, written as (g, m) 7→ g · m satisfying g1 · (g2 · m) = (g1g2) · m and e · m = m, for all g1, g2 ∈ G and m ∈ M. An action is called transitive if and only if whenever m1, m2 ∈ M, there is some g ∈ G so that g · m1 = m2. When G is a topological group and M a topological space, we say that an action is a continuous action if and only if (5.1) is continuous. If G is a Lie group and M is a differentiable manifold, a continuous action is called a smooth action if and only if (5.1) is smooth. Suppose G is a topological group and H a subgroup. Then the coset space may be given the quotient topology. Letting π : G → G/H be the natural quotient map (π(g) = gH), the quotient topology is defined by the property that U ⊂ G/H is open if and only if π−1(U) ⊂ G is open. Note that π is continuous and open. Furthermore, G × G/H → G/H is continuous and for each g ∈ G, left translation

τg : G/H → G/H (xH 7→ gxH) is a homeomorphism. Proposition 5.2. If a continuous action of a locally compact, second countable topological group on a locally compact Hausdorff space M is transitive, then the following hold.

(a) For each p ∈ M, Gp ≡ StabG(p) ≡ {g ∈ G : g · p = p} is a closed subgroup of G.

(b) gGp 7→ g · p is a homeomorphism G/Gp ' M.

For a proof see [6, Ch. II,§3].

Proposition 5.3. For any closed subgroup H of a Lie group G, the topological space G/H has the unique structure of a smooth manifold having the property that G × G/H → G/H, (g, xH) ↦ gxH is smooth. For each X ∈ g a vector field on G/H is defined by

(X̃f)(gH) = (d/dt) f(exp(−tX)gH)|_{t=0}.

This gives an isomorphism T_{gH}(G/H) ≃ g/Ad(g)h, for each g ∈ G.

Local coordinates for G/H may be described as follows. It is a fact about the exponential map that if g = V_1 ⊕ V_2 (as vector spaces) then there are neighborhoods U_i of 0 in V_i so that

(5.4) U_1 ⊕ U_2 → exp(U_1) exp(U_2), (X_1, X_2) ↦ exp(X_1) exp(X_2)

is a diffeomorphism. To construct local coordinates on G/H, choose a vector space complement m to h = Lie(H) in g. Then there are neighborhoods U_1 of 0 ∈ m and

U_2 of 0 ∈ h so that (5.4) is a diffeomorphism. Then a basis {X_1, . . . , X_m} of m determines local coordinates on the neighborhood gπ(exp(U_1)) of gH ∈ G/H by

(5.5) ϕ(x_1, . . . , x_m) = g exp(x_1X_1 + · · · + x_mX_m).

For details, and a proof of the following proposition, see [6, Ch II, §4].

Proposition 5.6. If a smooth action of a Lie group G on a manifold M is tran- sitive, then gGp 7→ g · p is a diffeomorphism G/Gp ' M.

When H is a closed subgroup of a Lie group G, we call G/H a homogeneous space. An example is that real projective space RP(n) is a homogeneous space. The action of GL(n + 1, R) by g · [v] = [gv] is smooth (why?). The stabilizer of the column vector p = [1, 0, 0,..., 0]t is

P = \left\{ \begin{pmatrix} a & b \\ 0 & D \end{pmatrix} : a ∈ R^×, b ∈ R^n (a row vector), D ∈ GL(n, R) \right\}.

The action is transitive, so RP(n) is diffeomorphic to GL(n + 1, R)/P .

5.2. Invariant differential operators. Let M be a smooth manifold of dimension m. Let (U, ϕ) be a coordinate system in a neighborhood of a point p ∈ M. Consider the partial derivatives ∂/∂x_i defined on U. It will be useful for us to use the multi-index notation for higher derivatives. For this let α = (α_1, . . . , α_m) ∈ Z_{≥0}^m and write

∂^α/∂x^α = (∂^{α_1}/∂x_1^{α_1}) · · · (∂^{α_m}/∂x_m^{α_m}).

Then a differential operator on M may be defined to be a linear map from C^∞(M) to itself which, in any local coordinate system, has the form

(5.7) Df = Σ_α a_α ∂^α f/∂x^α,

with a_α ∈ C^∞(M). The space of differential operators on a manifold M is denoted by D(M). The space D(M) is an algebra and (hence) a module over C^∞(M). When G is a Lie group of dimension n and H a closed subgroup of G, a differential operator on M = G/H is said to be G-invariant if ℓ_g(Df) = D(ℓ_g f), where ℓ_g is left translation by g (defined by (ℓ_g f)(x) = f(g^{-1}x)). The algebra of G-invariant differential operators is denoted by D_G(G/H). Let us first consider the case when H = {e}, i.e., M = G. For X ∈ g define right translation by X by the formula

(R(X)f)(x) = (d/dt) f(x exp(tX))|_{t=0}, for f ∈ C^∞(G).

When X ∈ gC the above formula for R(X) does not make sense. In this case right translation may be defined by R(X1 + iX2) = R(X1) + iR(X2), X1,X2 ∈ g. Therefore, for X ∈ gC, R(X) ∈ DG(G). Therefore we may conclude the following.

Corollary 5.8. If Y1,...YN ∈ gC, then R(Y1) ··· R(YN ) is a G-invariant differen- tial operator on G.

We now show that the differential operators R(Y1) ··· R(YN ),Yi ∈ gC, span DG(G). In fact, we show a little more. For any basis {X1,...,Xn} of gC write α α1 αn α R(X ) for R(X1) ··· R(Xn) for any multi-index α. We will show that {R(X ): n α ∈ Z≥0} is a basis of DG(G). The technical fact we use is in the following lemma.

Lemma 5.9. Let M be any manifold with a set of vector fields ξ1, . . . , ξm with the property that for each x ∈ M, ξ1,x, . . . , ξm,x is a basis of the complexified tangent α α1 αm ∞ space Tx(M). Then {ξ ≡ ξ ··· ξ : α ∈ Z≥0} is a basis over C (M) of D(M).

Proof. The proof is in [15, §1.1]. 

α It is immediate from the Lemma that {R(X ): α ∈ Z≥0} is independent. It also follows from the lemma that each differential operator in D(G) is of the form

X α ∞ aαR(X ), aα ∈ C (G).

For D ∈ DG(G), the G-invariance gives

(Df)(g) = (D(`g−1 f)(e) X α = aα(e)(R(X )(`g−1 f)(e) X α = aα(e)(R(X )f)(g).

P α Therefore, D = AαR(X ), Aα = aα(e) ∈ C. We have shown the following.

α Proposition 5.10. {R(X ): α ∈ Z≥0} is a basis (as complex vector space) of DG(G).

5.3. The enveloping algebra. The description of DG(G) given above suggests the definition of the enveloping algebra U(gC). 31 We will give the construction for Lie algebras over C, any base field may be considered in the same manner. Consider the tensor algebra of gC,

T (gC) = C ⊕ gC ⊕ (gC ⊗ gC) ⊕ · · · and the ideal I generated by

{X ⊗ Y − Y ⊗ X − [X,Y ]: X,Y ∈ gC}.

Definition 5.11. The enveloping algebra of gC is U(gC) = T (gC)/I.

Let π : T (gC) → U(gC) be the natural quotient map. Then one typically writes

Y1Y2 ··· YN for π(Y1 ⊗ Y2 ⊗ · · · ⊗ YN ), for Yi ∈ gC.

Theorem 5.12. (Poincare-Birkoff-Witt Theorem, see [7, §17.3]) For any basis

α αi αn n {X1,...,Xn} of gC, {X = X1 ··· Xn : α ∈ Z≥0} is a basis of U(g).

We may now restate what the characterization of D given earlier.

Proposition 5.13. As algebras, DG(G) 'U(gC). EXERCISES

(5.1) Give an example for which the conclusion of Prop. 5.2 fails if G is not second countable. 2n+1 (5.2) Let G = H2n+1 be the Heisenberg group defined as R with multiplication given by (x, y, z) · (x0, y0, z0) = (x + x0, y + y0, z + z0 + x · y0) (where x · y0 is the usual n ∂ ∂ ∂ dot product on R ). Find generators of G(G) in terms of , and D ∂xi ∂yi ∂t (5.3) Prove that there is a one-to-one correspondence between complex representa- tions of a complex Lie algebra and modules over the enveloping algebra. (5.4) Suppose that G is a Lie group with Lie algebra g. Show that the adjoint action of G on g extends to a representation on U(g) (by Ad(g)(Y1Y2 ··· YN ) =

Ad(g)(Y1)Ad(g)(Y2) ··· Ad(g)(YN )). Show that this representation is G-finite in the sense that span{Ad(g)u : g ∈ G} is finite dimensional for any u ∈ U(g). Describe the differential of the Ad-representation on U(g). Show that if G is a reductive group then under the Ad-representation U(g) decomposes into a direct sum of irreducible finite dimensional representations. (Hint: you may wish to consider the filtration Un = span{Y1Y2 ··· Yk : Yi ∈ g, k ≤ n}.) (5.5) Suppose that h is a Lie subalgebra of the complex Lie algebra g and m is a vector space complement of h in g. Fix a basis {Y1,...,Ym} of m. Show that β m U(g) = span{Y : β ∈ Z≥0} ⊕ U(g)h.

32 LECTURE 6. Differential operators between sections of bundles, I

Suppose that E and F are complex vector spaces and C∞(Rn,E) is the space of smooth E-valued function in Rn. Then a linear map D : C∞(Rn,E) → C∞(Rn,F ) is a differential operator if it is of the form X ∂α (6.1) D = a α ∂xα ∞ n where aα ∈ C (R , Hom(E,F )).

Now suppose that M is a smooth manifold and πE : E → M and πF : F → M are smooth vector bundles over M. Then for any p ∈ M, there is a coordinate −1 neighborhood U of p so that we have local trivializations πE (U) ' U × E and −1 πF (U) ' U × F . A linear map D from smooth sections of E to smooth sections of F is a differential operator if in each local trivialization D is in the form of (6.1). P The smallest integer k with αj ≤ k, for all α for which aα 6= 0, is called the order of D.

Example 6.2. Let M be any smooth manifold and E (resp. F) be the bun- dle of p-forms (resp. p + 1-forms) on M. Locally a p-form is expressed as ω = P a dx ∧ . . . dx . Then exterior differentiation is an example of a j1<···

Now let H be a closed subgroup of a Lie group G, and let σ : G → GL(E) be a (smooth) representation. Then a vector bundle over G/H with fiber E over the identity coset may be constructed as follows. Define an equivalence relation on G×E by (gh, v) ∼ (g, σ(h)v), for g ∈ G, h ∈ H and v ∈ E. Then set E = G×E/ ∼. The notation G × E is often used for E. Let π : E → G/H be the map (g, v) 7→ gH. H Proposition 6.3. E has a unique structure of smooth manifold so that π : E → G/H is a smooth vector bundle. The space of smooth sections may be identified with C∞(G/H, E) ≡ {f : G → E | f is smooth and f(gh) = σ(h−1)f(g), for all h ∈ H, g ∈ G}.

Exercise (6.2) asks for a proof of this proposition. A homogeneous vector bundle over G/H is a finite rank bundle π : E → G/H with an action of G satisfying

(1) π(gv) = gπ(v) for all g ∈ G and v ∈ E, and (2) the action of H on π−1(eH) is by linear transformations.

It is a fact that each homogeneous vector bundle is equivalent to G × E for the H representation of H on E ≡ π−1(eH). 33 Example 6.4. Here are a few standard examples. E E g/h Tangent bundle (g/h)∗ Cotangent bundle ∧p(g/h)∗ Bundle of p-forms Example 6.5. Let Z = CP(n), complex projective n-space. Therefore, Z = {complex lines in Cn+1 through the origin}. We will denote the line through v ∈ Cn+1 by [v]. Then Z is a smooth manifold in a natural way. Gl(n + 1, C) acts transitively on Z, so Z ' GL(n + 1, C)/P where n a b o P = , a ∈ C×, b ∈ Cn and d ∈ GL(n, C) 0 d t (the stabilizer of the column vector e1 ' [(1, 0,..., 0) ]). The family of one dimen- sional representations  a b  χ = am m 0 d define the powers of the canonical bundle L1. Let’s check that L1 is the homoge- neous bundle for the character χ1. The canonical bundle is defined as the disjoint union of all lines through the origin of Cn+1. The fiber over a point Z in projective space is the line Z. It is enough to see that L1 is a homogeneous vector bundle and −1 that the action of P on the fiber π ([e1]) = Ce1 is by χ1: a b  a b  ze = aze = χ (ze ). 0 d 1 1 1 0 d 1

Our interest will be in homogeneous vector bundles. The space of sections C∞(G/H, E) is a representation of G. The action is by left translation of sections: −1 (`gf)(x) = f(g x). It is also the case that the space of sections is a representation d of the Lie algebra g of G:(X · f)(x) = dt f(exp(−tX)x)|t=0. So let (σ, E) and (%, F ) be finite dimensional representation of H and let E → G/H and F → G/H be the corresponding homogeneous vector bundles. Then the space of differential operators from E → G/H to F → G/H will be denoted by D(E, F). Such a differential operator D is G-invariant means that D(`gf) = `gD(f) for all g ∈ G and all smooth sections f of E. The space of G-invariant differential operators will be denoted by DG(E, F).

We will identify the space DG(E, F) in terms of the enveloping algebra. The statement is contained in Prop. 6.11. It is a generalization of Prop. 5.13. The statement and proof require a little preparation. Consider the Lie algebra h. Then the representation of h on E gives E a U(h)- module structure defined by Y1 ··· YN v = σ(Y1) ··· σ(YN )v, for Y1,...,YN ∈ g. (See

Exercise 5.3.) Consider the representation of h on HomC(E,F ) as defined in (4.3).

Then HomC(E,F ) is a U(h)-module as follows. There is an antiautomorphism of ◦ N U(h) defined (Y1Y2 ...YN ) = (−1) YN ...Y2Y1, whenever Yj ∈ h, and extending 34 linearly to all of U(h). Then for T ∈ HomC(E,F ), u ∈ U(h) the module action ◦ is given by (u · T )(v) = T (u v). Let J be the submodule of U(g) ⊗ HomC(E,F ) generated by

uh ⊗ T − u ⊗ (T ◦ σ(h◦)), for u ∈ U(g), h ∈ U(h) and T ∈ Hom(E,F ).

Then we define1

U(g) ⊗ HomC(E,F ) = U(g)⊗HomC(E,F )/J . U(h) This is the usual tensor product of the right U(h)-module U(g) with the left U(h)- ◦ module HomC(E,F ) (left action defined by h · T = T ◦ σ(h )). Now consider any finite dimensional vector space F . Let C∞(G, F ) be the space of smooth functions from G to F . We will temporarily use the notation of DG(E,F ) for the vector space of linear maps from C∞(G/H, E) → C∞(G, F ) with the fol- lowing two properties. (1) In any local coordinate system X ∂α (6.6) D = a , α ∂xα α with aα a smooth function with values in HomC(E,F ), and (2) for each g ∈ G,

D(`gf) = `gD(f).

P For each uj ⊗ Tj ∈ U(g)⊗HomC(E,F ), define an element of DG(E,F ) by X X ( uj ⊗ Tj)˜f ≡ Tj(R(uj)f) j

∞ for all f ∈ C (G/H, E). For Y ∈ h, u ∈ U(g) and T ∈ HomC(E,F ),

((uY ⊗ T )˜f)(g) = T ((R(u)R(Y )f)(g)) d = T ( σ(exp(−tY ))(R(u)f)(g))| dt t=0 = (u ⊗ T ◦ dσ(Y ◦))˜f(g).

Therefore J ˜ = 0, so there is a well-defined map U(g) ⊗ HomC(E,F ) into U(h) DG(E,F ).

The following lemma describes DG(E,F ) in terms of the enveloping algebra. P P Lemma 6.7. The map uj ⊗ Tj 7→ ( uj ⊗ Tj)˜ is an isomorphism

U(g) ⊗ HomC(E,F ) → DG(E,F ). U(h)

1The definition here is that of the usual tensor product for modules over an algebra. If M is a right module over an algebra R and N is a left module over R , then M⊗RN is M⊗CN/J, where J is the subspace generated by all mr ⊗ n − m ⊗ rn. If M is also a left module for another algebra S, then M⊗RN is a left S module by s · (m ⊗ n) = (sm) ⊗ n. 35 Proof. We need to see that the map is surjective and injective. For surjectivity, let D : C∞(G/H, E) → C∞(G, F ) be G-invariant and satisfy (6.6). By applying ∂α Lemma 5.9 we may write , in a neighborhood of e, each as a linear combination ∂xα (over C∞) of R(Xgb) for various multi-indices β. Therefore,

X α (Df)(g) = aα(g)(R(X )f)(g) α ∞ with aα ∈ C (G, Hom(E,F )) and g in a neighborhood of e. By G-invariance

Df(g) = (`g−1 D(f))(e)

= D(`g−1 f)(e) X α = aα(e)R(X )(`g−1 f)(e) α X α = aα(e)(R(X )f)(g) α X α ˜ = ((X ⊗ aα(e)) f)(g). α For injectivity we will use two fairly easy facts. The proofs are left as exercises.

Let {Y1,...,Ym} be a basis of m (with m any vector space complement of h in g).

Fact 6.8. If V is any U(h)-module, then

β m U(g) ⊗ V ' span{Y : β ∈ Z≥0} ⊗ V U(h) C β as vector spaces, and if {vj} is a basis of V , then X ⊗vj} is a basis of U(g) ⊗ V . U(h)

(The proof is similar to the proof in Exercise (5.5).)

Fact 6.9. There is a neighborhood U of 0 in m so that exp : U → π(exp(U)) is a diffeomorphism, and

C∞(G/H,E) → C∞(exp(U),E)

f 7→ f|exp(U) is onto.

(The idea of the proof is to begin with a coordinate neighborhood V of 0 in m. Take U to be an open ball about 0 in m (with respect to any metric) contained ∞ on V . Now let F ∈ C (exp(U),E). Then F extends to a smooth function F0 : −1 exp(U) → E (why?). Now set f0(exp(X)h) = σ (h)f0(exp(X)),X ∈ V, h ∈ H. Apply the C∞-Urysohn Lemma to extend to a function f n all of G.) P P Now we return to injectivity of our map uj ⊗ Tj 7→ ( uj ⊗ Tj)˜. Lemma 5.9 says that {R(Y β)} is independent over C∞(exp(U)) (as a set of vector fields on 36 P ∞ exp(U)). Suppose that ( uj ⊗ Tj)˜f = 0, for all f ∈ C (G/H, E). For injectivity P β we need to conclude that β Y ⊗ Tβ = 0 in U(g) ⊗U(h) Hom(E,F ). By Fact 6.9 we know that X β X β Tβ(R(Y )f)(g) = (R(Y )(T ◦ F ))(g) = 0, for all F ∈ C∞(exp(U),E) and all g ∈ exp(U). By the independence of {R(Y β)} ∞ (applied to each coordinate of Tβ ◦ F ), Tβ(F (g)) = 0 for all F ∈ C (exp(U),E), so Tβ = 0 for all gb. This completes the proof. 

The group H acts on U(g) ⊗ HomC(E,F ) by (6.10) h · (u ⊗ T ) = Ad(h)u ⊗ %(h)T σ(h−1), h ∈ H.

For h ∈ H,Y ∈ h, u ∈ U(g) and T ∈ HomC(E,F ), h · (uY ⊗ T − u ⊗ (T ◦ dσ(Y ◦))) ∈ J so that action of H preserves J . Thus, (6.10) is well-defined on U(g) ⊗ HomC(E,F ). U(h) H Denote by {U(g) ⊗ HomC(E,F )} the space of all elements in U(g) ⊗ HomC(E,F ) U(h) U(h) which are fixed by this action of H.

Proposition 6.11. ([11, Prop. 1.2]) For homogeneous vector bundles E and F over G/H H DG(E, F) ' {U(g) ⊗ HomC(E,F )} . U(h)

P ∞ Proof. In view of Lemma 6.7, it is enough to show that ( uj ⊗Tj)˜f ∈ C (G/H, F) ∞ P H for all f ∈ C (G/H, E) if and only if uj ⊗ Tj ∈ {U(g) ⊗ HomC(E,F )} . U(h) ∞ Let f ∈ C (G/H, E). For X ∈ g and T ∈ HomC (E,F ), d (X ⊗ T )˜f(gh) = T ( f(gh exp(tX))| ) dt t=0 d = T ( f(g exp(tAd(h)X)h)| ) dt t=0 = T (σ(h−1)(R(Ad(h)X)f)(g)) = %(h−1)(h · (X ⊗ T ))˜f(g).

It follows that for uj ∈ U(g) and Tj ∈ HomC(E,F ) X  −1 X  (uj ⊗ Tj)˜f (gh) = %(h ) (h · (uj ⊗ Tj))˜f (g). P ∞ ∞ Therefore, (uj ⊗ Tj)˜f ∈ C (G/H, F), for all f ∈ C (G/H, E) if and only if P P P P (uj ⊗ Tj)˜= h · (uj ⊗ Tj)˜, which is equivalent to (uj ⊗ Tj) = h · (uj ⊗ Tj) in U(g) ⊗ HomC(E,F ).  U(h) EXERCISES

(6.2) Fill in the details and prove the statements given in Example 6.5. (6.2) Prove Prop. 6.3. 37 LECTURE 7. Differential operators between sections of bundles, II

There are many interesting special cases of Prop. 6.11. In this lecture we will discuss a few. Suppose that G/H is a reductive homogeneous space. This means that h has an H-invariant complement in g. Let us write this as g = m⊕h, with Ad(h)m = m, for all h ∈ H. Then one may show that U(g) ' S(m) ⊕ U(g)h, as H-representations, where S(m) is the symmetric algebra of m. This decomposition of the enveloping algebra is closely related to the one of Exercise 5.5. See [6] for details. When E = F = C, the trivial H-representations, the submodule J is just U(g)h ⊗ C. Therefore,

Corollary 7.1. When G/H is reductive,

H DG(G/H) ' S(m) .

Example 7.2. The n-sphere is a homogeneous space for SO(n + 1). The group SO(n + 1) acts on Rn+1 by multiplication of a matrix in SO(n + 1) by a column vector. The stabilizer of (0,..., 0, 1)t is n g0 0 o H = : g0 ∈ SO(n) . 0 1 An H-invariant complement of h in g is   x1  x2  n   o  .  m =  .  .    xn −x1 −x2 · · · −xn 0 As an SO(n)-representation, m is equivalent to the standard representation on Rn.

If {X1,...,Xn} is an orthonormal basis of m (with respect to the negative of the H P 2 k trace form), then S(m) ' span{( Xj ) : k = 0, 1, 2,... }. The corresponding P 2 invariant differential operators are the powers of the Laplacian  = R(Yj) .

Continuing with the hypothesis that G/H is a reductive homogeneous space and m is an invariant complement, many first order differential operators may be constructed. The first we consider is exterior differentiation (the d-operator). By (5.3) the tangent space of G/H at the identity may be identified (as an H- representation) with m. For X∗ ∈ m∗, let e(X∗): ∧pm∗ → ∧p+1m∗ be exterior ∗ ∗ ∗ multiplication, i.e., e(X )ω = X ∧ ω. Choose a basis {Xj} of m and let {Xj } be the dual basis. P ∗ p ∗ p+1 ∗ Claim: δ = Xj ⊗ e(Xj ) is an H-invariant in S(m) ⊗ HomC(∧ m , ∧ m ). First we check that δ is independent of the basis (and dual basis) used. Let {Yj} be 38 ∗ P ∗ P ∗ another basis of m with dual basis {Yj }. Write Xj = akjYk and Xj = bljYl . Then X ∗ X ∗ Xj⊗e(Xj ) = akjbljYk ⊗ e(Yl ) j j,k,l (7.3) X X ∗ = ( akjblj)Yk ⊗ e(Yl ). k,l j ∗ ∗ However, the fact that {Xj} and {Xj } are dual bases, gives us δkl = Xk (Xl) = P ∗ P P p bpkYp ( q aqlYq) = p bpkapl. That is, the matrices A = (aij) and B = (bij) t −1 P ∗ satisfy A = B . Now it follows that the last term in (7.3) is Yk ⊗ e(Yk ). Now P ∗ P ∗ P ∗ we check that h · Xj ⊗ e(Xj ) = Xj ⊗ e(Xj ). Note that h · Xj ⊗ e(Xj ) = P ∗ ∗ ∗ j(Ad(h)Xj) ⊗ e(Ad(h)Xj ). Since (Ad(h)Xj )(Ad(h)Xk) = Xj (Xk) = δjk we see ∗ that {Ad(h)Xj} and {Ad(h)Xj } are a pair of dual bases. The claim now follows from independence of basis. It follows that a G-invariant operator d corresponds to δ. The formula is d = P j R(Xj) ⊗ e(Xj). This is exterior differentiation. Many G-invariant differential operators of order one on a symmetric space may be easily constructed. A symmetric space is of the form G/K, where K is a maximal compact subgroup. As discussed in Lecture 1, the Cartan decomposition g = k ⊕ s satisfies [k, s] ⊂ s. Therefore s is a K-invariant complement to k in g. Let E be a vector bundle over G/K. Define t(X): E → s ⊗ E by t(X)v = X ⊗ v. Fix an orthonormal basis {Xj} of s. Then a calculation similar to the calculation P above shows that Xj ⊗ t(Xj) is a K-invariant in S(s) ⊗ HomC(E, s ⊗ E). The P corresponding differential operator is R(Xj) ⊗ t(Xj). One may say more. Suppose the E and F are homogeneous bundles over G/K. Then HomK (s ⊗ E,F ) is isomorphic to a K-subrepresentation of DG(E, F). (This ∗ K ∗ K follows from the fact that s ' s and {s ⊗ HomC(E,F )} '{s ⊗ E ⊗ F } '

HomK (s ⊗ E,F ).) The differential operator corresponding to p ∈ HomK (s ⊗ E,F ) P is p ◦ ( Xj ⊗ t(Xj)). In particular, each irreducible constituent F of s ⊗ E gives a first order differential operator. Each first order differential operator arises this way. Now we consider the homogeneous space G/P¯, where P¯ is a parabolic subgroup of a reductive Lie group G. Then G/P¯ is not a reductive homogeneous space. This case will be of particular interest to us in the remaining lectures. As usual, write P = LN with N the nilradical and L a reductive subgroup. As above, let (σ, E) and (%, F ) be finite dimensional representations of P¯. By Prop. 6.11,

P¯ ∗ P¯ DG(E, F) ' {U(g) ⊗ HomC(E,F )} ' {U(g) ⊗ (E ⊗ F )} . U(p) U(p) In the right hand side E∗ ⊗ F is a U(p)-module by the dual representation σ∗ of p on E∗ and trivial action on F . 39 The map U(g) ⊗ (E∗ ⊗ F ) → (U(g) ⊗ E∗) ⊗ F U(p) U(p) u ⊗ (e∗ ⊗ f) 7→ (u ⊗ e∗) ⊗ f is an isomorphism as P¯-representations. Now, for very general reasons, we have {(U(g) ⊗ E∗) ⊗ F }P¯ U(p) ∗ ∗ (7.4) ' HomP¯ (F , U(g) ⊗ E ) U(p) ∗ ∗ ' HomL,n(F , U(g) ⊗ E ). U(p) It should be noted that N is connected, so in the second isomorphism above, equiv- ariance with respect to L and n is equivalent to equivariance with respect to P¯. If N¯ acts trivially on F then ∗ P¯ ∗ ∗ n (7.5) {(U(g) ⊗ E ) ⊗ F } ' HomL(F , {U(g) ⊗ E } ). U(p) U(p) The following characterization of the invariant differential operators has now been proved.

Proposition 7.6. For a parabolic subgroup P¯ of a reductive group G, and homo- geneous vector bundles E and F defined by representations of P¯ on E and F , ∗ ∗ DG(E, F) ' Homg,L U(g) ⊗U(p) F , U(g) ⊗U(p) E .

Proof. By the above discussion ∗ ∗ DG(E, F) ' HomL,n(F , U(g) ⊗ E ). U(p) Now ∗ ∗ ∗ ∗ HomL,n(F , U(g) ⊗U(p) E ) ' Homp,L(F , U(g) ⊗U(p) E ) ∗ ∗ ' Homg,L(U(g) ⊗U(p) F , U(g) ⊗U(p) E ). The last isomorphism is the general fact for modules over rings (with identity) that

HomR(R ⊗ M,N) ' HomS(M,N). S  Corollary 7.7. If, under the hypothesis of the proposition, N¯ is trivial on F , then ∗ ∗ n DG(E, F) ' HomL F , {U(g) ⊗ E } . U(p)

As we will see in Lecture 11, this proposition and corollary reduce the search for invariant differential operators to an algebraic question.

40 EXERCISES

(7.1) What are the invariant differential operators on SL(2, R)/SO(2)? (7.2) Show that the formula for exterior differentiation found above coincides with the usual one (defined for example in Example 6.2).

(7.3) Determine all G-invariant differential operators from the bundle Lk to Lm on

SL(2, R)/P¯. Here Lm denotes the homogeneous bundle for the character  a 0  χ = am m c d of P¯. (7.4) Verify the isomorphisms (7.4) and (7.5).

41 LECTURE 8. Conformally invariant systems, I

Suppose that g is a real or complex Lie algebra and M is a smooth manifold. Assume that there is a map π : g → D(M) satisfying the following two conditions. (A1) π is a Lie algebra homomorphism, i.e., π([X,Y ]) = [π(X), π(Y )] (the bracket on the righthand side being the commutator of differential operators). (A2) For each X ∈ g, π(X) is a first order differential operator.

By condition (2) each π(X) may be decomposed as π(X) = π0(X) + π1(X), with

π0(X) multiplication by a smooth function on M and π1(X) a vector field on M. Given a vector bundle E → M, we say that E → M is a g-bundle if there is a linear map πE : g → D(E) satisfying (1) πE ([X,Y ]) = [πE (X), πE (Y )], and

(2) [πE (X), f] = π1(X)f,. for all X,Y ∈ g and all f ∈ C∞(M).

Definition 8.1. A conformally invariant system on E (with respect to π and πE ) is a finite set of differential operators D1,...,Dn ∈ D(E) so that (1) {D1,...,Dn} is linearly independent at each point of M, and (2) for each X ∈ g there is an n × n matrix C(X) of smooth functions on M so that in D(E) X [πE (X),Dj] = Cij(X)Di. i ∞ The map C : g → Mn×n(C (M)) is called the structure operator. Two conformally 0 0 invariant systems D1,...,Dn and D1,...,Dn are said to be equivalent if there is ∞ an invertible matrix in A ∈ Mn×n(C (M)) so that

0 X Dj = AijDi. i

A conformally invariant system D1,...,Dn is reducible if there is an equivalent 0 0 0 0 conformally invariant system D1,...,Dn for which D1,...,Dm, m < n, is a con- formally invariant system.

Observe that the common solution space of Dj = 0 is a g-invariant subspace of C∞(M, E). Furthermore, two equivalent conformally invariant systems have the same common solution spaces.

Lemma 8.2. Let C be the structure operator for a conformally invariant system. Then for X,Y ∈ g,

C([X,Y ]) = π1(X)C(Y ) − π1(Y )C(X) + [C(X),C(Y )].

Proof. Let D1,...,Dn be a conformally invariant system with structure operator C. Then 42 X [πE ([X,Y ]),Dj] = Cij([X,Y ])Di. i On the other hand

[πE ([X,Y ]),Dj] = [[πE (X), πE (Y )],Dj], by condition (1) of the definition of g-bundle,

= [πE (X), [πE (Y ),Dj]] − [πE (Y ), [πE (X),Dj]], by the Jacobi identity, X X = [πE (X), Ckj(Y )Dk] − [πE (Y ), Ckj(X)Dk] k k X  = [πE (X),Ckj(Y )]Dk + Ckj(Y )[πE (X),Dk] k X  − [πE (Y ),Ckj(X)]Dk − Ckj(X)[πE (Y ),Dk] k X X = π1(X)Ckj(Y ))Dk + Cik(X)Ckj(Y )Di k i,k X X − π1(Y )Ckj(X))Dk − Cik(Y )Ckj(X)Di, k i,k by condition (2) of the definition of g-bundle, X = π1(X)Cij(Y ) − π1(Y )Cij(X) i X  + (Cik(X)Ckj(Y ) − Cik(Y )Ckj(X)) Di. k The lemma now follows by comparing the two expressions for [πE ([X,Y ]),Dj]. 

The theory of conformally invariant systems is developed in great generality in [1]. However, at this point we will consider a special case of the above formalism which is of particular interest. The situation we will focus on is related to the so-called principal series repre- sentations of a real reductive Lie group. Therefore, we let G be a real reductive Lie group and P a parabolic subgroup of G. Then P has Levi decomposition P = LN and we let P¯ = LN¯ be as in (3.7). We have no need to carefully define the prin- cipal series representations. The representations of interest to us are constructed by ‘inducing’ an irreducible finite dimensional representation (σ, E) of P¯. The irre- ducibility of E guarantees that σ(¯n) = IE for alln ¯ ∈ N¯. There is a representation of G on the space C∞(G/P,¯ E) of smooth sections of the homogeneous vector bundle E → G/P¯ given by (Π(g)ϕ)(x) = ϕ(g−1x)), ϕ ∈ C∞(G/P,¯ E).

We have not said what a representation of a Lie group on an infinite dimensional space is - there are some continuity conditions which are required to hold - however 43 it suffices here to simply note two points. First, Π(g1)Π(g2) = Π(g1g2), and second d Π may be differentiated in the sense that π(X)f = dt Π(exp(tX))f exists and gives a function in C∞(G/P,¯ E). It follows that π([X,Y ]) = [π(X), π(Y )]. Instead of studying C∞(G/P,¯ E) we will use an alternative realization of the representation, which is often referred to as the ‘N-picture’. For this, we apply the Bruhat decomposition (3.11). Thus NP¯ is a dense open subset of G. Then restricting sections ϕ ∈ C∞(G/P¯) to N gives an injection C∞(G/P¯) → C∞(N,E). Two spaces of functions will play a role:

C∞(N,E) = {F : N → E : F is smooth} and ∞ ∞ Cσ (N) = {F : N → E : F = ϕ|N for some ϕ ∈ C (G/P, E)}.

The differential of the action of G on C∞(G/P, E) may be transported to an action ∞ of g on Cσ (N); this is done in the proof of Prop. 8.4. Write the Bruhat decomposition of g ∈ NLN as

g = n(g)p¯(g) ∈ NP with the further decomposition

g = n(g)l(g)n¯(g) ∈ NLN.

At the Lie algebra level write

X = Xn + Xp ∈ n ⊕ p.

X = Xn + Xl + Xn ∈ n ⊕ l ⊕ n.

There is useful observation on the relationship between the above decompositions. d X = exp(tX)| dt t=0 d = n(exp(tX))l(exp(tX)n¯(exp(tX))| dt t=0 d d d = n(exp(tX))| + l(exp(tX))| + n¯(exp(tX))| dt t=0 dt t=0 dt t=0 is the decomposition of X with respect to g = n ⊕ l ⊕ n. Therefore, d X = n(exp(tX))| , n dt t=0 d Xl = l(exp(tX))|t=0, (8.3) dt d X = n¯(exp(tX))| , n dt t=0 d X = p¯(exp(tX))| . p dt t=0 44 Our immediate goal is to explicitly determine the action of the Lie algebra g on ∞ ∞ Cσ (N). For this we define right action of n on C (N,E). When Y ∈ n set d (R(Y )f)(n) = f(n exp(tY ))| , n ∈ N dt t=0 for f ∈ C∞(N,E). The notation R(u), u ∈ U(n) will be used for the natural extension of R to the enveloping algebra.

∞ Proposition 8.4. Let Y ∈ g. Then the action of g on Cσ (N) arising from the left translation on C∞(G/P, E) is given by

−1 −1 (8.5) (πσ(Y )f)(n) = σ((Ad(n )Y )p)f(n) − (R((Ad(n )Y )n)f)(n), for n ∈ N. Two special cases are:

−1 for Y ∈ l, πσ(Y )f = σ(Y )f − R(Ad((·) )Y − Y )f −1 for Y ∈ n, πσ(Y )f = −R(Ad((·) )Y )f.

−1 Proof. Suppose that f = ϕ|N . Then as long as g n ∈ NP , we have −1 −1 −1 −1 (πσ(g)f)(n) = ϕ(g n) = σ(p¯(g n) )f(n(g n)). For g close enough to the identity, by the openness of NP , g−1n ∈ NP . Now, taking g = exp(tY ) and differentiating at t = 0 we get:

(πσ(Y )f)(n) d d = σ(p¯(exp(−tY )n)−1)| f(n) + f(n(exp(−tY )n))| dt t=0 dt t=0 d d = σ(p¯(exp(−tAd(n−1)Y )−1))| f(n) + f(nn(exp(−tAd(n−1)Y ))| dt t=0 dt t=0 −1 −1 = σ((Ad(n )Y )p)f(n) − (R((Ad(n )Y )n)f)(n), by (8.3). This gives (8.5). For the two special formulas, write n = exp(X) ∈ N. Then Ad(n−1)Y = 1 Y − [X,Y ] + 2 [X, [X,Y ]] − · · · . So −1 −1 −1 (Ad(n )Y )p = Y and (Ad(n )Y )n = Ad(n )Y − Y, when Y ∈ l, −1 −1 −1 (Ad(n )Y )p = 0 and (Ad(n )Y )n = Ad(n )Y, when Y ∈ n.

Now the two formulas follow. 

We will often use the formulas of the Proposition when they are evaluated at n = e. Therefore we state this case separately as a corollary.

Corollary 8.6. In the action of g on C∞(N,E)

(πσ(Y )f)(e) = σ(Yp)f(e) − (R(Yn)f)(e), ( −(R(Y )f)(e), if Y ∈ n (πσ(Y )f)(e) = σ(Y )f(e) if Y ∈ p.

45 Remark 8.7. It is an important observation that the formula of the proposition ∞ implies that πσ extends to a representation of g on C (N,E).

Applying the proposition to the trivial one dimensional representation of P¯ shows that there is a map π : N → D(N) satisfying (A1) and (A2):

−1  (π(X)f)(n) = − R((Ad(n )X)n)f (n), n ∈ N,X ∈ g.

The homogeneous bundle E → G/P¯ restricted to N is the trivial bundle N×E → N. We will, by slight abuse of notation, refer to this trivial bundle as E. Then the space of smooth functions C∞(N,E) is the space of smooth sections of E → N. This bundle is a g-bundle under the action of Prop. 8.4, extended to all of C∞(N,E). The second property of the definition of a g-bundle holds since the first term on the right-hand side of (8.5) commutes with smooth function (in D(E)).

Considering g-bundles on E → N defined by πσ we make the following definition.

Definition 8.8. A conformally invariant system D1....,Dn on E → N is called straight if and only if [πσ(X),Dj] = 0 for all X ∈ n. Therefore, being straight is equivalent to the structure operator vanishing on n.

Theorem 8.9. ([1]) Any conformally invariant system on E → N is equivalent to a straight conformally invariant system.

Several comments are in order. If a differential operator D ∈ D(E) commutes with all X ∈ n, then D is an N-invariant differential operator. In this case we know from 6.11 (applied to G = N and H = {e}) that D is a linear combination of (u ⊗ T )˜, for some u ∈ U(n) and T ∈ HomC(E,E). The following proposition will be used later. It shows how certain computations may be reduced to computations at the identity.

Proposition 8.10. Suppose that D1,...,Dn ∈ D(E) and each Dj commutes with πσ(X), X ∈ n. Assume that D1,...,Dn are linearly independent at e and there is a map b : g → gl(n, C) so that

n X ([πσ(X),Dj]f)(e) = (b(X))ij(Dif)(e), i=1

∞ for all X ∈ g and all f ∈ C (N,E). Then D1,...,Dn is a straight conformally invariant system. The structure operator is given by C(X)(n) = b(Ad(n−1)X), for n ∈ N and X ∈ g.

Proof. As noted above, since Dj commutes with πσ(X),X ∈ n, Dj is invariant under left translation `n−1 . Now the independence at an arbitrary n follows from N-invariance. 46 −1 We will use the fact that `n−1 πσ(X)`n = πσ(Ad(n )X), which follows easily from the last formula in (8.4).

([πσ(X),Dj]f)(n) = (`n−1 ([πσ(X),Dj]f))(e) −1 = ([πσ(Ad(n )X),Dj](`n−1 f))(e) X −1 = b(Ad(n X))(Dj(`n−1 f))(e) i X −1 = b(Ad(n X))(Djf)(n). i The conformal invariance, along with the formula for the structure operator, now follows.  EXERCISES

(8.1) If D1,...,Dm is a straight conformally invariant system, show that the struc- ture operator C(X) is a matrix of constants when X ∈ l. (Hint: Consider Lemma 8.2.)

47 LECTURE 9. Prehomogeneous vector spaces and invariant theory

Let G be a complex Lie group and ρ : G → gl(V ) a holomorphic representation (that is, the differential is complex linear). Then (ρ, V ) is a prehomogeneous vector space if there exists an open G-orbit in V . Over the next several lectures show how to use the invariant theory of a prehomogeneous vector space to construct conformally invariant systems of differential equations. Suppose (ρ, V ) is a prehomogeneous vector space. A fact is that there is precisely one open orbit and this open orbit is dense in V . If V = ρ(G)v then we call v a generic element of V . The complement of the dense orbit is called the singular set. The singular set consists of the elements v ∈ V so that dim(StabG(v)) > dim(G)−dim(V ), and v is generic if and only if dim(StabG(v)) = dim(G)−dim(V ). Here are a few simple examples. Let G = GL(n, C) and V = Sym(n, C). Then ρ(g)X = gXgt defines a prehomogeneous vector space. The dense orbit is {ggt : g ∈ G}. This is the orbit of I. The stabilizer of I is {g ∈ G : ggt =

I} = O(n, C), therefore dim(StabG(v)) = dim(G) − dim(V ). It follows that B ∈ Sym(n, C) is generic if and only if the corresponding bilinear form is nondegenerate, i.e., det(B) 6= 0. There are a finite number of orbits of G on V . These orbits are

Om ≡ {B ∈ Sym(n, C) : rank(B) = m}, m = 0, 1, . . . , n.

Another example is the action of G = (GL(1))n ' (C×)n on Cn−1 given by α1 α2 αn−1 (α1, . . . , αn) · (z1, . . . , zn−1) = ( z1, z2,..., zn−1). α2 α3 αn The generic elements are precisely those elements of Cn−1 having no coordinates equal to zero. An important family of examples arises as follows. Suppose P = LN is a para- bolic subgroup of a complex reductive group and suppose that N is abelian. Then the adjoint representation of L on n is a prehomogeneous vector space. In fact, L has only a finite number of orbits on n. Examples of this form are called preho- mogeneous vector spaces of parabolic type. See [10, Ch. X] for a proof of a slightly more general fact due to Vinberg. Often a prehomogeneous vector space has a relatively invariant polynomial, that is, a polynomial f(x) on V satisfying f(ρ(g)x) = χ(g)f(x), for some character χ of G and all g ∈ G. The first two examples given above have relatively invariant polynomials. In the first example det(ρ(g)B) = det(gBgt) = det(g)2 det(B), so f(X) = det(X) is a relatively invariant polynomial and χ(g) = det(g)2 is the corresponding character of G. Note that the singular set is {B ∈ V : f(B) = 0}. Now consider the

m1 mn second example. The characters of G are χ(α1, . . . , αn) = α1 ··· αn , for all n k1 kn−1 (m1, . . . , mn) ∈ Z . Each monomial pk(Z) = z1 ··· zn−1 is a relative invariant. 48 Then

k1 k2−k1 kn−1−kn−2 −kn−1 pk((α1, . . . , αn) · Z) = α1 α2 . . . αn−1 αn pk(Z).

The semigroup of relatively invariant polynomials is generated by {zj : j =

1, . . . , n}. Here the singular set is the union of the irreducible hypersurface {zj = 0}. A theorem of Bernstein [3] states that for any polynomial f(X) on a vector space

V there exists a polynomial differential operator P (s, xj, ∂xj ) on V , polynomial in s, and a polynomial b(s) so that

s s−1 P (s, xj, ∂xj )f(x) = b(s)f(x) . Such polynomials form an ideal in C[s]. One refers to the monic generator as the b-function of f(x). In general, the b-function (and the P (s, xj, ∂xj )) are difficult to find.

One important situation for which P (s, xi, ∂xi ) can be calculated is when f(x) is the relative invariant polynomial for a prehomogeneous vector space for a reductive group G. Suppose that G is a complex reductive group and (ρ, V ) is a prehomo- geneous vector space for G. Assume that f(x) is a relatively invariant polynomial; f(ρ(g)x) = χ(g)f(x).

Proposition 9.1. (See [9, Prop. 2.21]) The dual (ρ∗,V ∗) is a prehomogeneous vector space with a relatively invariant polynomial. Denoting this relative invariant by f ∗(x), we have f ∗(ρ∗(g)y) = χ−1(g)f ∗(y), g ∈ G, y ∈ V ∗ and deg(f ∗(y)) = deg(f(x)).

To find P (s, xi, ∂xi ) we use the following construction. For a finite dimensional vector space W , given a polynomial p(y) ∈ P (W ∗) there is a unique differential hy ,xi hy ,xi operator p(∂x) on W with constant coefficients satisfying p(∂x)e = p(x)e .

To describe this differential operator, choose a basis {ej} of W and a dual basis ∗ ∗ ∗ ∗ P {ej } of W (i.e., a basis of W so that hej , eki = δjk). Write x = xiei and P ∗ ∂ y = yie . Then p(∂x) is obtained from p(y) by replacing each yj by ∂x = . i j ∂xj For (ρ, V ) and G as above we now may state the following proposition.

Proposition 9.2. (See [9, Prop. 2.22]) There is a polynomial b(s) so that ∗ s+1 s f (∂x)f(x) = b(s)f(x) .

In fact, the polynomial b(s) is the Bernstein polynomial.

Example 9.3. The action of GL(p, C)×GL(q, C) on V = Mp×q(C) by ρ((l1, l2))X = −1 l1Xl2 defines a prehomogeneous vector space of parabolic type. Let’s assume that p ≤ q. Then the generic elements are the matrices of rank p. There is a relative invariant if and only if p = q. Let us assume this. Then f(X) = det(X) is a −1 −1 relative invariant since f(l1Xl2 ) = χ(l1, l2)f(X), with χ(l1, l2) = det(l1l2 ). The dual representation is equivalent to ρ1 : GL(p, C) × GL(q, C) → gl(Mp×p), defined 49 by ρ1(l1, l2) = ρ(l2, l1). (Check: If h , i is the trace form, then X → hX, i is a vec- ∗ ∗ −1 −1 tor space isomorphism V → V . Also, (ρ (l1, l2)hX, i)(Z) = hX , ρ(l1 , l2 )Zi = −1 −1 T race(Xl1 Zl2) = T race(l2Xl1 Z) = hρ1(l2, l1)X,Zi.) For the dual prehomoge- neous vector space the relative invariant is also det(Y ). The proposition states that there is a polynomial b(s) so that

s+1 s (9.4) det(∂xij ) det(X) = b(s) det(X) . This may be seen as follows. We claim that

s+1 s Fs(X) = det(∂xij ) det(X) / det(X) is a GL(p, C) × GL(p, C) invariant rational function on V . This holds because

−1 −1 s+1 −1 s Fs(l1Xl ) = det(∂ −1 ) det(l1Xl ) / det(l1Xl ) = fs(X). 2 l2xij l1 2 2

By Exercise 9.2, Fs(X) must be a constant (depending on s). One may check that this constant is polynomial in s. We arrive at (9.4). A (somewhat involved) s+1 s calculation shows that det(∂xij ) det(X) = (s + 1)(s + 2) ... (s + p) det(X) .

Now we turn to invariant theory. By this we simply mean the decomposition of the symmetric algebra S(V ) into the direct sum of irreducible G representations for a representation (ρ, V ) of G. Let b = h + n˜ be a Borel subalgebra of g. Therefore a set of positive roots ∆+ is determined by requiring α ∈ ∆+ if and only if g(α) ⊂ n˜. Then, as discussed in Lecture 4, the highest weight vectors are the vectors in V annihilated be n˜. In particular, the highest weight vectors of the irreducible constituents of S(V ) span S(V )n˜ ≡ {u ∈ S(V ): X · u = 0. for all X ∈ n˜}.

n˜ Note that since X · u1u2 = (X · u1)u2 + u1(X · u2), S(V ) is a subalgebra of S(V ). Since S(V ) ' P (V ∗), as G-representations, it is equivalent to consider the ques- tion of decomposing P (V ∗). When G = SL(2, C) and V = C2 is the ‘standard’ representation, then

n˜ m S(V ) = {v+ : m = 0, 1,... }' C[v+].

Here we are taking v+ to be a highest weight vector in V . The corresponding decomposition of S(V ) is ∞ X S(V ) ' Vm, m=0 with Vm the irreducible representation of SL(2, C) of dimension m + 1. In later lectures we will use the following theorem which describes the decomposi- tion of S(V ) for a family of prehomogeneous vector spaces. Suppose that P = LN is a parabolic subgroup of a complex reductive group G and N is abelian. The decomposition of S(n) may be described in a very nice way. To do this we must introduce the notion of strongly orthogonal roots. Two roots α and β are called 50 strongly orthogonal if and only if α ± β are not roots. (Note that this means that [g(±α), g(±β)] = 0.) Form a family of strongly orthogonal roots as follows. Let ∆+ be a positive system of roots defining a Borel subalgebra containing p. Then as described earlier, p is determined by specifying one simple root β1. Then ∆(l) = span{∆ \{β1}} ∩ ∆ and β1 is the unique simple root in ∆(n). Then let us write the simple roots as ∗ Π = {β1, . . . , βn}. Let {H1,...,Hn} be the basis of h dual to the basis Π of h . The lexicographic order gives the positive system ∆+. We will use this order to choose a family of strongly orthogonal roots. Let γ1 be the greatest root in ∆(n).

Then choose γ2 to be the greatest root in ∆(n) which is strongly orthogonal to γ1.

Continue by choosing γj+1 to be the greatest root in ∆(n) strongly orthogonal to

γ1, . . . , γj. Continue this until there are no roots in ∆(n) strongly orthogonal to

γ1, . . . , γr. r X + Fact:m ˜ γ˜ ≡ mjγj is dominant with respect to ∆ (l) when m1 ≥ · · · ≥ mr. j=1

Theorem 9.5. ([14]) For L, n and the choice of positive systems of roots as above, X S(n) ' Em˜ γ˜.

m1≥···≥mr ≥0

Stated slightly differently, this theorem says that there are ‘fundamental’ invari- n˜ ants u1, . . . , ur in S(n) which freely generate the algebra. So S(n) ' C[u1, . . . , ur], m1 mr as algebras. So, u1 ··· ur are the highest weight vectors of the irreducible con- stituents of S(n).

Example 9.6. Consider the parabolic subgroup P = LN in GL(n, C), n = p + q, defined by the simple root p − p+1 (for the positive system of roots as in (2.8)). Assume p ≤ q. In block form this is the subgroup n a b o P = : a ∈ GL(p, C), d ∈ GL(q, C) and b ∈ M (C) 0 d p×q Then L ' GL(p, C) × GL(q, C), and the adjoint action is equivalent to the action −1 (`1, `2)X = `1X`2 , X ∈ Mp×q(C). The roots in n are i − j for 1 ≤ i ≤ p < j ≤ p + q. The family of strongly orthogonal roots described above is

γ1 = 1 − n,

γ2 = 1 − n−1, . .

γp = p − n−p+1 X Then S(n) ' E(m1,...,mp,0,...,0,−mp,...,−m1).

m1≥···≥mr ≥0 51 Example 9.7. Consider the parabolic subgroup n a b  o P = : a ∈ GL(n, C) and b ∈ Sym(n, C) 0 (at)−1 in Sp(2n, C). Then L ' GL(n, C) and the action on n ' Sym(n, C) is `·X = `X`t,

X ∈ Sym(n, C). The set of strongly orthogonal roots is γi = 2i, i = 1, . . . , n. P Therefore Schmid’s Theorem says that S(n) = E(2m1,2m2,...,2mn), m1 ≥ · · · ≥ mn ≥ 0. It is usually useful to know explicitly the decomposition. For this we will use the L-isomorphism S(n) ' P (n∗). One may easily check that n∗ is Sym(n, C) t −1 −1 with the action ` · X = (` ) X` . Let µj = γ1 + ··· + γj and let fj(X) be the determinant of the principal (upper left) j × j minor of X. Then       a1 a1 a1    ..   ..   ..   .  · fj (X) = fj( .  X  . ) an an an 2 = (a1 ··· aj) fj(X), so fj has weight µj. In Exercise (9.3) you will show that fj(X) is annihilated by n˜, so is a highest weight vector. You will also show that the irreducible subrepresentation isomorphic to Eµj is the span of the determinants of the j × j minors.

EXERCISES

(9.1) Determine all parabolic subalgebras of the complex classical groups for which n is abelian. (9.2) Prove that the only G-invariant rational functions on a prehomogeneous vector space are the constants. (9.3) In the setting of Example 9.7, describe the Lie algebra action of l on poly- nomials. For j = 0, 1 . . . n, show that fj(X) is annihilated by n˜ and the span of the determinants of the j × j minors is an irreducible L-representation of highest weight µj.

52 LECTURE 10. An example of a conformally invariant system

In this lecture we will explicitly write down a conformally invariant system of differential operators. Consider the homogeneous space G0/P 0, P0 = L0N0 and

P 0 = θ(P0) with

G0 = GL(n, R)   `1 0 L0 = { : `1 ∈ GL(p, R), `2 ∈ GL(q, R)} (10.1) 0 `2 IX N = { : X ∈ M (R)}. 0 0 I p×q

In this example N0 is abelian, a property which simplifies matters. We will assume that p ≤ q.

We will write g (resp. n, etc.) for the complexification of g0 (resp. n0, etc.). Also, we take G = GL(n, C) and P = LN the (complex) subgroup with Lie algebra p.

For the positive system of roots of Example 2.8, p is defined by S = {p − p+1} (see 3.5). We will use the notation n˜ for the span of the root spaces g(α) with g(α) ⊂ l and α ∈ ∆+. Note that n˜ is not to be confused with n, and h + n˜ + n is the Borel subalgebra defined by ∆+. As noted in Lecture 9, n is a prehomogeneous vector space under the adjoint action of L. The L-action can be written as follows: L = GL(q, C) × GL(q, C) −1 acting on Mp×q(C) by (`1, `2) · X = `1X`2 . The orbits are given by the rank of X and the closures of the orbits are given by

Om = {X ∈ Mp×q(R) : determinant of all (m + 1) × (m + 1) minors vanish}.

Our first goal is to decompose S(n) ≡ P(n) as a L-representation. Theorem (9.5) tell us that the fundamental n˜-invariants have weights

γ1 = 1 − n = (1, 0,..., 0, −1)

γ1 + γ2 = (1 − n) + (2 − n−1) = (1, 1, 0,..., 0, −1, −1) ... p X γ1 + ... + γp = (i − n−i+1) = (1, 1,..., 1, 0,..., 0, −1,... − 1). 1

The (unique) L-subrepresentations of P(n) having these highest weights are

Vm = span{det(m × m minors)}.

It will be important to introduce a little notation for the minors. Let Y = (yij) ∈

Mp×q(C). For some m = 1, . . . p let (10.2) R ⊂ {1, . . . q},S ⊂ {1, . . . p} with #R = #S = m, 53 m and let ∆R,S(Y ) be the determinant of the matrix obtained by deleting the i-th rows for i∈ / R and the j-th columns for j∈ / S. An easy calculation shows that the m weight of ∆R,S(Y ) is X X (10.3) i − p+j. i∈S j∈R

m Lemma 10.4. Vm = span{∆R,S(Y ): R,S satisfy (10.2)} is an irreducible L- representation of highest weight γ1 + ... + γm.

Proof. For L -invariance of Vm, suppose that ` = (e, `2) with `2 ∈ GL(q, C). Then

`2Y has rows which are linear combinations of the rows of Y . Therefore the de- terminant of an m × m minor of ` · Y is a linear combination of determinants of −1 other m × m minors. A similar argument applies for (`1, e) · Y = Y `1 . A glance at (10.3) shows that the only dominant weight which occurs is γ1 + ... + γm. We conclude that Vm is irreducible. 

∞ ∞ Our conformally invariant system will act on C (N0) = C (N0, C) with the action of g coming from the principal series representation

∞ −1 C (G0/P 0, Cs) = {f : G0 → C : f(g`n) = χs(` )f(g)},

det(`1) s/2 where χs(`1, `2) = | | . det(`2) s Note that dχs(X,Y ) = 2 (T r(X) − T r(Y )). As described in Prop. (8.4), the ∞ action of Z ∈ g on C (N0) is given by

−1 −1 (10.5) (πs(Z)f)(n) = dχs((Ad(n )Z)p)f(n) − (R((Ad(n )Z)n)f)(n), n ∈ N0.

Recall that (·)p denotes the projection to p in the decomposition g = n + (l + n). Similarly for (·)n. Let us make this action explicit for our example. We will use the notation

0 X IX n = exp = ,X ∈ M (R). X 0 0 0 I p×q

The matrix with 1 in the ij-place and 0 elsewhere is denoted by Ei,j. We write P X = (xi,j) (i.e, X = xijEi,j).

Proposition 10.6. For G = GL(n, R), n = p + q and P = LN as in (10.1)

−1 (1) (π(`)f)(n) = χs(`)f(`n` ) 0 E  (2) π( ji = − ∂ 0 0 ∂xji   0 0 P ∂ (3) π( ) = −s xji − k,l xki xjl ∂x . Eij 0 kl

54 Proof. The statement in (1) follows from the definition of the representation π of G, realized on C∞(N). To prove (2) apply (10.5): 0 E  I −X 0 E  (π ji f)(n ) = −(R(Ad ji )f)(n ) 0 0 X 0 I 0 0 X 0 E  = −(R( ji )f)(n ) 0 0 X df = − (n ) dt X+tEji ∂f = −( )(nX ). ∂xji The last identity in the Lemma also follows from (10.5). We need to compute  0 0 the n ⊕ l ⊕ n decomposition of Ad(n−X ) : Eij 0       0 0 0 −XEijX −XEij 0 Ad(n−X ) = + . Eij 0 0 0 Eij EijX    0 0   s Therefore, dχs Ad(n−X ) = 2 (−T r(XEij)−T r(EijX)) = −s xji. Eij 0 l+n We also have,      0 0  −XEij 0 r Ad(n−X ) = −r Eij 0 n Eij EijX X 0 E  = − x x r kl ki jl 0 0 k,l X ∂ = − xki xjl . ∂xkl k,l Now (c) follows. 

The construction of the conformally invariant system from Vm is as follows. There is a natural vector space isomorphism between P(n) and the constant co- efficient differential operators on n. Given P (Y ) ∈ P(n), P (∂x) is the constant coefficient differential operator satisfying hY,Xi hY,Xi (10.7) P (∂x) e = P (Y ) e ,  0 0 0 X where hY,Xi = T r( ) = T r(YX). Explicitly, given a monomial Y 0 0 0 mij ∂ mij P (Y ) = Πi,jy , P (∂X ) = Πi,j( ) . ij ∂xji Theorem 10.8. For m = 1, 2, . . . , p and s = −(m − 1) m {∆RS(∂x): R,S as in (10.2)} ∞ is a conformally invariant system on Cs (n).

The proof of this Theorem will occupy the remainder of this lecture. Our strategy     0 0 m is to write a formula for the commutator πs , ∆RS(∂x) for arbitrary s. Eij 0 55 We will obtain a sum of two terms (see (10.19)). The first term is of the form needed for a conformally invariant system, the second term is the product of a linear function of s and sum of derivatives. The value s = −(m − 1) makes the second term vanish. m Note that [πσ(X), ∆RS(∂x)] = 0, for X ∈ n, as is clear by (2) of Prop. 10.6. The m computation of [πσ(X), ∆RS(∂x)] = 0, for X ∈ l is left as an exercise. One sees m that this commutator lies in spanC{∆RS(∂x)}, regardless of the value of s. ∂ The computation of the commutator takes place in the Weyl algebra C[xij, ]. ∂xij Before beginning the computation we write down a few formulas which will be used. In the Weyl algebra we have: ∂ ∂ (10.9) [ , xij] = 1, [ , xkl] = 0 when (i, j) 6= (k, l). ∂xij ∂xij

(10.10) [u, v1 v2] = [u, v1] v2 + v1 [u, v2].

(10.11) [u, v1 v2 v3] = [u, v1] v2 v3 + v1 [u, v2] v3 + v1 v2 [u, v3].

The next formula simply uses the expansion of a determinant along a row or column. We must take some care in writing this. Write R = {r1, . . . , rm}, S = 0 {s1, . . . , sm}. When j ∈ R (resp. l ∈ S), let j = rj0 (resp. l = sl0 ) define j (resp. l0). Then for j ∈ R,

 m ∆R,S if k = j X j0+l0 m  (−1) xk,l ∆R,S,j,l = 0 if k ∈ R \{j}  m l∈S ±∆R−{j}∪{k},S if k∈ / R. m m−1 m Here ∆R,S,j,k = ∆R−{j},S−{k}, the determinant of the matrix obtained from ∆R,S by omitting the j0-th row and the k0-th column. This, along with the similar expansion along the i-th column, gives

 m ∆R,S(∂X ) if k = j X j0+l0 m  (10.12) (−1) ∆R,S,j,l(∂X ) ∂kl = 0 if k ∈ R \{j} l∈S  m ±∆R−{j}∪{k},S(∂X ) if k∈ / R.

 m ∆R,S(∂X ) if l = i X k0+i0 m  (10.13) (−1) ∆R,S,k,i(∂X ) ∂kl = 0 if k ∈ S \{i} k∈R  m ±∆R,S−{i}∪{l}(∂X ) if l∈ / S. Our final notation is that ( 1 if j ∈ R δR(j) = 0 if j∈ / R.

We similarly define δS. 56 Lemma 10.14.

m i0+j0 m [∆R,S(∂X ), xji] = δR(j) δS(i)(−1) ∆R,S,j,i(∂X ).

Proof. This is clearly zero unless j ∈ R and i ∈ S. When j ∈ R and i ∈ S, (10.12) applies with k = j to give

m X l0+j0  m  [∆R,S(∂X ), xji] = (−1) ∆R,S,j,l(∂X ) ∂jl, xji l∈S j0+l0 m = (−1) ∆R,S,j,l(∂X ), by (10.9).

 Lemma 10.15.

X  m  m ∆R,S(∂X ), xki xjl∂kl = δR(j)δS(i)(1 − m)∆R,S,j,i(∂X ) k,l ! m X  m  + δR(j) xji∆R,S(∂X )) + xki ±∆R−{j}∪{k},S(∂X ) k∈ /R ! m X  m  + δS(i) xji∆R,S(∂X ) + xjl ±∆R,S−{i}∪{l}(∂X ) . l∈ /S

Proof. By Lemma(10.14) and (10.11),

X  m  ∆R,S(∂X ), xki xjl∂kl k,l (10.16) X m m  = xki[∆R,S(∂X ), xjl] ∂kl + [∆R,S(∂X ), xki] xjl ∂kl . k,l The right hand side of (10.16) is

X j0+l0 m δR(j)δS(l) xki(−1) ∆R,S,j,i(∂X ) δkl kl k0+i0 m  + δR(k)δS(i)(−1) ∆R,S,k,i(∂X )xjl ∂kl ! X X j0+l0 m = δR(j) xki (−1) ∆R,S,j,l(∂X ) ∂kl k l∈S ! X X k0+i0 m + (−1) ∆R,S,k,i(∂X ) xjl∂kl . l k∈R

In the first term, by (10.12) only the k = j and k∈ / R terms survive. They give ! m X  m  (10.17) δR(j) xji∆R,S(∂X ) + xki ±∆R−{j}∪{k},S(∂X ) . k∈ /R The second term requires a little more work. By (10.9)

xjl ∂kl = ∂klxjl − δjk. 57 Combining (10.13) with this observation, we can show that the second term equals m δS(i) ∆R,S(∂X )xji+ X m i0+j0 m  ± ∆R,S−{i}∪{l}(∂X )xjl − qδR(j)(−1) ∆R,S,j,l(∂X ) . l∈ /S

Now we must move the xjl to the left of the expression using Lemma(10.14). The second term now is  m j0+i0 m  δS(i) xji∆R,S(∂X ) + δR(j)(−1) ∆R,S,j,i(∂X )

X  m m  + ±xjl∆R,(S−{i}∪{l})(∂X ) ± δR(j)∆R,S−{i}∪{l},j,l(∂X ) l∈ /S i0+j0 m − qδR(j)(−1) ∆R,S,j,i(∂X ).

Since ±∆R,S−{i}∪{l},j,l(∂X ) = ∆R,S,j,i(∂X ), this equals ! m X m δS(i) xji∆R,S(∂X ) + xjl(±)∆R,S−i∪l(∂X ) (10.18) l∈ /S i0+j0 m + δR(j)δS(i)(−1) (1 − m)∆R,S,j,i(∂X ). Combining (10.17) and (10.18) we obtain the formula in the statement of the Lemma. 

We now conclude that m [π(Eij), ∆R,S] i0+j0 m = δR(j) δS(i)(2s − 1 + m)(−1) ∆R,S,j,i(∂X ) ! m X  m  (10.19) + δR(j) xji∆R,S(∂X )) + xki ±∆R−{j}∪{k},S(∂X ) k∈ /R ! m X  m  + δS(i) xji∆R,S(∂X ) + xjl ±∆R,S−{i}∪{l}(∂X ) . l∈ /S This completes the proof of the Theorem.

EXERCISES

m (10.1) Compute [πσ(X), ∆RS(∂x)], for X ∈ l.

58 LECTURE 11. Conformally invariant systems, II

In this lecture the connection between conformally invariant systems and Verma modules will be investigated. It turns out that each conformally invariant sys- tem acting on C∞(N, E) determines a finite dimensional p-subrepresentation of ∗ U(g) ⊗U(p) E . Simple conditions on the conformally invariant system guarantee ∗ n ∗ that (a) F ⊂ {U(g) ⊗U(p) E } , (b) the module U(g) ⊗U(p) E is reducible (c) there is a (nonzero) G-invariant differential intertwining operator between bundles E and F ∗ on G/P¯.

Modules of the form U(g) ⊗U(p) V for finite dimensional irreducible represen- tations of p are called generalized Verma modules. Therefore, by the discussion of Lecture 7, the existence of conformally invariant systems acting on C∞(N, V) ∗ gives information about Homg(U(g) ⊗U(p) F, U(g) ⊗U(p) E ), and in particular, the reducibility of generalized Verma modules. Our first goal is to show how a conformally invariant system on C∞(N, E) gives a ∗ p-submodule of U(g) ⊗U(p) E . We will keep in place our assumptions from Lecture 8 that (σ, E) is a representation of P¯, E is the trivial bundle on N, E is a g-bundle via πσ and all conformally invariant systems are straight. In addition we assume that N¯ acts on E by the identity. We will begin by proving a well-known lemma which states that a Verma module is isomorphic to a space of distributions on N supported at the identity. The E∗- valued distributions on N supported at the identity may be defined as

0 ∗ ∞ De(N, E ) ≡ {Λ:C (N, E) → C : Λ is continuous and Λ(f) = 0 if e∈ / supp(f)}. The continuity of Λ is with respect to the C∞ topology on C∞(N, E). The Lie 0 ∗ algebra g acts on De(N, E ) by ∞ (X · Λ)(f) = −Λ(πσ(X)f),X ∈ g, f ∈ C (N, E).

0 ∗ Note that this makes De(N, E ) a U(g)-module by ◦ (u · Λ)(f) = Λ(πσ(u )f), u ∈ U(g), where u → u◦ is the involution defined in Lecture 6. Note that N is diffeomorphic to Rn (via the exponential map). A standard fact about distributions on Rn is that those supported at 0 are precisely the distributions of the form X ∂αf Λ(f) = aα ∂xα x=0 α for some finite set of aα ∈ C. See, for example, [13, Thm. 6.25]. It follows that 0 ∗ each distribution in De(N, E ) is of the form (11.1) Λ(f) = he∗ , (R(u)f)(e)i, for some u ∈ U(n). 59 Lemma 11.2. The linear map

∗ 0 ∗ φ : U(g) ⊗U(p) E → De(N, E ) determined by

∗ ∗ ◦ φ(u ⊗ e ): f 7→ he , (πσ(u )f)(e)i is a U(g)-module isomorphism.

Proof. It should first be checked that φ is well-defined. For this it is enough to check that for Y ∈ p φ(uY ⊗ e∗) = φ(u ⊗ σ∗(Y )e∗). This is a calculation:

∗ ∗ ◦ φ(uY ⊗ e ) = he , (πσ((uY ) )f)(e)i ∗ ◦ ◦ = he , (πσ(Y )πσ(u )f)(e)i ∗ ◦ = −he , σ(Y )(πσ(u )f)(e)i, by (8.5), with n = e, Y ∈ p ∗ ∗ ◦ = hσ (Y )e , (πσ(u )f)(e)i = φ(u ⊗ σ∗(Y )e∗)(f).

∗ The surjectivity of φ follows from (11.1) along with the fact that U(g)⊗U(p) E ' U(n) ⊗ E∗ as vector spaces and the fact of Exercise (11.1). ∗ ∗ P ∗ For injectivity, suppose uj ⊗ ei is a basis of U(n) ⊗ E and φ( aijuj ⊗ ej ) = 0. P ∗ Then, by Exercise 11.1, ij aijhei , (R(uj)f)(e)i = 0. Therefore, by independence ∗ P ∞ of {ei }, j aij(R(uj)f)(e) = 0, for all i and all f ∈ C (N, E). By N-invariance of P P R(uj), j aijR(uj)f = 0, for all i and f. By (5.13) this implies j aijuj = 0, for all i. By the independence of uj, all aij = 0.

The final item to check is that φ is a U(g)-homomorphism. For u1 ∈ U(g),

∗ ∗ ◦ φ(u1u ⊗ e ) = he , (πσ((u1u) )f)(e)i ∗ ◦ ◦ = he , (πσ(u )πσ(u1)f)(e)i ∗ ◦ = φ(u ⊗ e )(π(u1)f) ∗  = u1 · φ(u ⊗ e ) (f).

 Remark 11.3. This lemma was nearly proved earlier. In Lemma 6.7 we may take F = C, the trivial one dimensional representation. Then

∗ ∗ DG(E,F ) ' (U(g) ⊗U(p) E ) ⊗ C = U(g) ⊗U(p) E .

However DG(E,F ) is the space of linear maps D : C∞(G/P,¯ E∗) → C∞(G) 60 satisfying (6.6). Following such a map by evaluation at e gives a distribution sup- ported at e. Each such distribution is of this form by the standard result about distributions mentioned earlier.

Now suppose that D1,...,Dm is a conformally invariant system of differential operators acting on C∞(N, E). Write X ∞ [πσ(X),Dj ] = Cij(X)Di,Cij(X) ∈ C (N). i ∗ ∗ For each e ∈ E define the order zero distribution Λe∗ by ∗ ∞ Λe∗ (f) = he , f(e)i, f ∈ C (N, E). For D ∈ D(E) define ∗ (DΛe∗ )(f) = he , (Df)(e)i. If Y ∈ p we use this definition and (8.6) to compute  Y ·(DjΛe∗ ) (f) = −DjΛe∗ (πσ(Y )f) ∗ = −he , (Djπσ(Y )f)(e)i ∗  ∗  = he , [πσ(Y ),Dj]f (e)i − he , πσ(Y )Djf (e)i X ∗ ∗ = Cij(Y )(e)he , (Dif)(e)i − he , σ(Y )(Djf)(e)i i X = Cij(Y )(e)(DiΛe∗ )(f) − (DjΛσ∗(Y )e∗ )(f). i Therefore, X (11.4) Y · (DjΛe∗ ) = Cij(Y )(e)DiΛe∗ − DjΛσ∗(Y )e∗ . i This proves the following proposition.

Proposition 11.5. If D1,...,Dm is a conformally invariant system acting on C∞(N, E), then ∗ ∗ (11.6) F ≡ span{DjΛe∗ : e ∈ E , j = 1, . . . , m} 0 ∗ is a p-submodule of De(N, E ).

By applying Lemma 11.2 we get the following corollary.

Corollary 11.7. For the finite dimensional p-module F defined by (11.6), ∗ Homp(F, U(g) ⊗U(p) E ) 6= {0}.

Note that there is no reason for n to act trivially on F .

A simple condition on D1,...,Dm guarantees that the n-action on F is trivial.

Recall from Exercise (3.3) that for each parabolic subalgebra p there is an H0 ∈ h (α) (α) so that (a) α(H0) ∈ Z, (b) g ⊂ l if and only if α(H0) = 0, and (c) g ⊂ n if and only if α(H0) > 0. This allows us to define the notion of homogeneity for a conformally invariant system. A conformally invariant system is called homogeneous 61 when there is a constant d ∈ C so that [πσ(H0),Dj] = dDj for all j = 1, . . . , m (or, equivalently, Cij(H0) = dδij).

Lemma 11.8. If D1,...,Dm is a homogeneous conformally invariant system with structure operator C, then C(Y )(e) = 0 for all Y ∈ n.

Proof. Let Y ∈ n. By Lemma 8.2

C([H0,Y ])(e) = (π1(H0)C(Y ))(e) − (π1(Y )C(H0))(e) + [C(H0),C(Y )](e).

Since H0 ∈ h ⊂ l, (8.5) gives π1(H0) = 0 at e, so the first term above vanishes. The other two terms vanish since C(H0) is a constant multiple of the identity matrix. Now for Y ∈ g(−α) ⊂ n,

0 = C([H0,Y ]) = −α(H0)C(Y )(e),

But α(H0) 6= 0, so C(Y )(e) = 0. By linearity of C, C(Y ) = 0 for all Y ∈ n. 

Theorem 11.9. If D1,...,Dm is a homogeneous conformally invariant system ∞ acting on C (N, E), then for the p-representation F = spanC{DjΛe∗ : j = 1, . . . , m, e∗ ∈ E∗}, ∗ n Homl(F, {U(g) ⊗U(p) E } ) 6= {0}.

Proof. By Corollary 11.7 it is enough to verify that n acts by zero on F . However, this is clear from (11.4) by applying Lemma 11.8 and the fact that the n action on ∗ E (hence on E ) is trivial. 

∗ In order to deduce reducibility of U(g)⊗U(p) E we need a condition for F to con- ∗ tain a constituent not equivalent to E . Assume that D1,...,Dm is a homogeneous conformally invariant system with [πσ(H0),Dj] = dDj for all j. By irreducibility of E, σ(H0) acts by a scalar c on E (by Schur’s Lemma). Hence the action on ∗ E is by −c. Now (11.4) tells us that πσ(H0)DjΛe∗ = (d + c)DjΛe∗ . Therefore if ∗ d 6= 0, F and E have different H0-weights. We may conclude that if d 6= 0, then ∗ U(g) ⊗U(p) E is reducible. In order to apply 7.7 to obtain the existence of G-invariant differential operators between homogeneous bundles on G/P¯, the group P¯ must be considered. Recall that P¯ is not in general connected, so the existence of a p homomorphism F → ∗ n {U(g) ⊗U(p) E } is not sufficient for the existence of an invariant intertwining operator. The adjoint representation of P¯ on U(g) and the dual representation on E∗ give a representation of P¯ on U(g) ⊗ E∗. It is easily checked that this gives a well- ¯ ∗ ¯ 0 ∗ defined representation of P on U(g) ⊗U(p) E . Furthermore, P acts on De(N, E ) by (¯p · Λ)(f) = Λ(`p¯−1 f). Then it is easily seen that the isomorphism of Lemma 11.2 is a P¯-intertwining map. 62 Theorem 11.10. Under the hypothesis of Theorem 11.9, if F is P¯-stable, then ∗ DG(E, F ) 6= {0}.

Proof. The statement follows from the above discussion. 

Given a homogeneous conformally invariant system a formula for the invariant differential operator is easily given. To do this, choose a basis {fj} of the image of ∗ n ∗ ∗ ∗ F in {U(g) ⊗U(p) E } . Let {fj } be the dual basis. Also fix a basis {ek} of E . P ∗ P ∗ ∗ ∗ P¯ Write fj = k ujk ⊗ek, ujk ∈ U(g). Then j fj ⊗fj is in {(U(g)⊗U(p) E )⊗F } . For ϕ ∈ C∞(G/P,¯ E)

X ∗ (11.11) (Dϕ)(g) = hek , (R(ujk)ϕ)(g)ifj . j,k Example 11.12. Let G be GL(p + q, R) as in Lecture 10. Then in the nota- tion of that Lecture, for each m = 1, . . . , p consider the conformally invariant sys- m tem {∆R,S(∂X )}. This is a homogeneous system for E = Cs, s = −(m − 1). The p-module F is irreducible of highest weight γ1 + ··· + γm and is dual to m spanC{∆R,S(Y )}. We may conclude from this that ∗ Homg(U(g) ⊗U(p) F, U(g) ⊗U(p) Cs) 6= {0}, ∗ in particular, U(g) ⊗U(p) Cs is reducible. In this example dim(E) = 1, so we may ∗ omit the basis {ek} from the discussion. Let Λ1 be the distribution Λ1(f) = f(e). Then m F = span{∆R,S(∂X )Λ1}. ∗ m ¯ Therefore, F is equivalent to span{∆R,S} as P representation. ∞ ¯ ∞ Realizing C (G/P, Cs) as Cs (N) (as in (11.11)) the intertwining operator C∞(N) → C∞ (N, F ∗) may be written explicitly as follows. Let {f } be the s σµ µ R,S m ∗ m basis corresponding to ∆R,SΛ1, then the dual basis is fR,S = ∆R,S. Then

X m ∗ (Df)(n) = (∆R,Sf(n))fR,S. R,S

∗ n Now let’s turn things around. Suppose that we have F ⊂ {U(g) ⊗U(p) E } . (So ∗ Homg(U(g) ⊗U(p) F, U(g) ⊗U(p) E ) 6= {0}.) It will be shown in the remainder of this lecture that F determines a conformally invariant system. As noted in Lecture 4 there are several ways to define a representation of p on End(E). We choose to use the action defined by Y · T = −T ◦ σ(Y ), for Y ∈ p, T ∈ End(E). Therefore, End(E) is a U(p)-module under the action u · T = T ◦ σ(u◦) as in the paragraph preceding (6.6). Note that the p-representation End(E) is equivalent to E∗ ⊗ E when the action on the tensor product is on E∗ ∗ ∗ ∗ only; Y · e ⊗ e = (σ (Y )e ) ⊗ e. In what follows the module U(g) ⊗U(p) End(E) is defined with the U(p)-module structure on End(E) described above. It follows ∗ easily that U(g) ⊗U(p) End(E) ' (U(g) ⊗U(p) E ) ⊗ E. 63 Lemma 11.13. Defining

u ⊗ T 7→ Du⊗T with

 ◦  (11.14) Du⊗T f (n) = T (πσ(u )(`n−1 f))(e) gives, by linear extension, an isomorphism

U(g) ⊗U(p) End(E) → DN (E). Moreover, for Y ∈ p    (11.15) [πσ(Y ),Du⊗T ]f (e) = DY u⊗T f (e) + σ(Y ) Du⊗T f (e).

Proof. First it needs to be checked that the map in (11.14) is well-defined on

U(g) ⊗U(p) End(E). For Y ∈ p,  ◦  DuY ⊗T f (e) = T (πσ((uY ) )f)(e) ◦  = −T (πσ(Y )πs(u )f)(e) ◦  = −T σ(Y )(πs(u )f)(e) , by Lemma 8.6  = Du⊗(T ◦σ(Y ))f (e).

In Lecture 8 we have shown that DN (E) 'U(n) ⊗ End(E) (by taking G = N in Prop. 6.11). We have also seen (Prop. 6.11) that U(g) ⊗U(p) End(E) 'U(n) ⊗ ∗ End(E) as vector spaces. Thus, U(g)⊗U(p) E ' DN (E). However the isomorphism of Prop. 6.11 is not given in the form of the statement of the lemma. We must see that ((u ⊗ T )˜f)(e) = (Du⊗T f)(e), for u ∈ U(n). This is left for Exercise (11.1). The proof of (11.15) is a calculation. Let Y ∈ p.  [πσ(Y ),Du⊗T ]f (e)   = πσ(Y )Du⊗T f (e) − Du⊗T πσ(Y )f (e) ◦ = σ(Y )(Du⊗T f)(e) − T (πσ(u )πσ(Y )f)(e) ◦  = σ(Y )(Du⊗T f)(e) + T (πσ((Y u) )f)(e)  = σ(Y )(Du⊗T f)(e) + DY u⊗T f (e).



∗ Suppose F ⊂ U(g) ⊗U(p) E is any p-submodule. Fix a basis f1, . . . , fn of F and let the representation of p be written in terms of matrix coefficients: X Y · fj = aij(Y )fi, bij(Y ) ∈ C. i

Also fix a basis e1, . . . , em of E and write X σ(Y )ek = blk(Y )fl, blk(Y ) ∈ C. l 64 ∗ Consider fj ⊗ ek ∈ (U(g) ⊗U(p) E ) ⊗ E 'U(g) ⊗U(p) End(E) and set Djk ≡

Dfj ⊗ek ∈ DN (E). Then the lemma tells us that for Y ∈ p,  [πσ(Y ),Djk]f (e) = D f(e) + σ(Y )(D f)(e) (11.16) Y ·fj ⊗ek jk X X = aij(Y )(Dikf)(e) + blk(Y )(Djlf)(e). i l ∗ Theorem 11.17. Suppose F ⊂ U(g) ⊗U(p) E is a p-submodule and let fj, ek and Djk be as above. Then {Djk} is a conformally invariant system.

Proof. By the N-invariance of each Djk it suffices to prove independence at e. P ∞ P Suppose cjk(Djkf)(e) = 0, for all f ∈ C (N, E). Then by the lemma, fj ⊗ ek = 0. But {fj ⊗ek} is independent, so cjk = 0, for all j, k. Prop. 8.10 and (11.16) imply {Djk} is a conformally invariant system. 

EXERCISES

◦ ∞ (11.1) Prove that πσ(u ) = R(u) as operators on C (N, E), when u ∈ U(n).

(11.2) Prove that if D1,...,Dm is a (not necessarily homogeneous) conformally invariant system with the property that each Dj annihilates the constants, then ∗ the generalized Verma module U(g) ⊗U(p) E is reducible.

65 LECTURE 12. Conformally Invariant Systems: The case of Abelian nilradicals.

Assume that g0 is a simple real Lie algebra and that g0 contains a parabolic subalgebra p0 = l0 ⊕ n0 with n0 abelian. Let p0 = l0 + n0 be the opposite parabolic.

Denote by g , p, p, l, n and n, the complexifications of g0, p0, p0, l0, n0 and n0. Following the setup for Theorem 9.5, there is a lexicographic order on h∗ de- + fined by a set {H1,...,Hr} with the following properties. (a) Letting ∆ be the corresponding positive system,

X h + g(α) ⊂ p. α∈∆+

(b) β > α for all β ∈ ∆(n) and α ∈ ∆(l). (c) There is a unique simple root β1 in ∆(n). (d) The maximal strongly orthogonal set of roots appearing in Theorem

9.5 is constructed by taking γ1 to be the largest root in ∆(n) (with respect to the lexicographic order). Then inductively, γj+1 is the largest root in ∆(n) strongly orthogonal to γ1, . . . , γj. ∗ Recall that if κ is the killing form and if λ ∈ h , an element Hλ is defined by

κ(Hλ,H) = λ(H),H ∈ h. Define

X h− = CHγi .

Then we have the orthogonal decomposition

+ + h = h− ⊕ h , where h = {H ∈ h : γi(H) = 0 for all i}.

Example 12.1. Let g0 = sl(p + q, R)(p ≤ q, n = p + q), g = sl(p + q, C) and let

   a1    a2   h =   : a ∈ C and Σa = 0 .  ..  i i  .      an 

+ The roots ∆ (g, h) are i − j, 1 ≤ i < j ≤ p + q. Then the parabolic subalgebras with abelian n are of the form

AB  : A ∈ gl(p, C),D ∈ gl(q, C),B ∈ M (C), T r(A) + T r(D) = 0 0 D p×q

with p + q = n. Then ∆(n) = {i − j : i ≤ p < j} and the unique simple root in

∆(n) is β1 = p −p+1. The lexicographic order may be defined by {H1,...,Hn−1}, 66 with p X H1 = Eii i=1 j−1 X Hj = Eii, for 1 < j ≤ p i=1 j X Hj = Eii, for p < j < n. i=1

Then it is easily checked that the maximal set of strongly orthogonal roots is γ1 =

1 − n, γ2 = 2 − n−1, . . . , γp = p − n−p+1. It is also easily checked from the definitions that h− consists of diagonal matrices:

h− = {diag(t1, t2, . . . , tp, 0,..., 0, −tp,..., −t2, −t1)} .

The following lemma has a somewhat long proof and we will not give it. However, it is very easily verified for the above example.

Lemma 12.2. ([12]) Let ρ denote restriction of roots from h to h−. There are two cases:

1 (1) ρ(∆) ∪ {0} = {± 2 (γi ± γj)|h− : 1 ≤ i, j ≤ p}. In this case the non-zero ρ-image of some subsets of ∆ are given by 1 ρ(∆+(l)) = { (γ − γ )| : 1 ≤ i < j ≤ p}, 2 i j h− 1 ρ(∆(n)) = { (γ + γ | : 1 ≤ i, j ≤ p}. 2 i j h− 1 1 (2) ρ(∆) ∪ {0} = {± 2 (γi ± γj)|h− , ± 2 γi|h− : 1 ≤ i, j ≤ p}. In this case the non-zero ρ-image of some subsets of ∆ are given by 1 1 ρ(∆+(l)) = { (γ − γ )| : 1 ≤ i < j ≤ p} ∪ { γ | }, 2 i j h− 2 i h− 1 1 ρ(∆(n)) = { (γ + γ )| : 1 ≤ i, j ≤ p} ∪ { γ | }. 2 i j h− 2 i h− Example 12.3. We return to the case g = sl(p+q, C). Assume that H ∈ h−. Then + γi(H) = 2ti, and for α ∈ ∆ (l), α(H) is either ti −tj(i < j), ti or 0. The possibility of α(h) = 0 occurs only when p < q. If α ∈ ∆(n), then α(H) is ti + tj, i < j or ti.

It will be important for us to define some subalgebras of g in a systematic fashion. This is done as follows.

Definition 12.4. Let p be the number of strongly orthogonal roots γi. p 1 (1) ∆ = {α ∈ ∆ : α|h− 6= ± 2 γi}, for any i. p P (α) 2 (2) g = (h + α∈∆p g )ss, where the subscript means the semisimple part .

2 Recall that a reductive Lie algebra can be written as g = gss ⊕ z. We refer to gss as the P (α) semisimple part of g. It may be easily checked that gss = [g, g]. In the case at hand h+ α∈∆p g is reductive; taking the semisimple part is simply removing the part of h which commutes with the root vectors for roots in ∆p. 67 m p (3) ∆ = {α ∈ ∆ : α|h− ∈ spanR{ρ(γi): i ≤ m}} m P (α) (4) g = (h + α∈∆m g )ss. Lemma 12.5. For m = 1, . . . , p the following hold.

(1) ∆m is a root subsystem of ∆. (2) gm is a simple Lie algebra and pm = gm ∩ p is a parabolic subalgebra with abelian nilradical. The Levi decomposition of pm is lm + nm where lm = gm ∩ l and nm = gm ∩ n.

We will not give a proof of this lemma, however, in our example in sl(p + q, C) the lemma clearly holds since the subalgebras gm are as follows. A 0 B  m   g =  0 0 0  ; A, D ∈ gl(m, C),B,C ∈ Mm×m(C), T r(A + D) = 0 .  C 0 D  Note that each gm ' sl(2m, C) and the parabolic subalgebra pm is the ‘middle parabolic’ subalgebra.

In what follows Eα will be a root vector for α. Also, Hγj = [Eγj ,E−γj ] if the root vectors are normalized properly. We will assume such a normalization. The goal of this lecture is to build conformally invariant systems on appropriate line bundles over G/P . If λ is the fundamental weight for the unique simple root

β1 in ∆(n), then there are one dimensional representations of l with weight −sλ.

Let us denote these representations by C−s. Theorem 11.17 will be used as follows. The invariant theory (in particular, Theorem 9.5) will suggest certain candidates 3 n for irreducible l-representations F which embed into {U(g) ⊗U(p) Cs} . However, as we will see, this can happen only for certain values of s. The remainder of this section contains a computation of these values of s. We give a sketch of a very clever manipulation of root vectors due to Wallach. for more details see [16]. It should be pointed out that the special case of abelian n which we are considering is crucial to the computation. We begin the computation by writing down the candidates for the irreducible n l-modules we hope to embed in {U(g) ⊗U(p) Cs} . P j j As in Lecture 9, set n˜ = α∈∆+(l) g(α). Similarly, we set n˜ = n˜∩l . Then Theo- n˜ rem 9.5 says that S(n) ' C[u1, . . . , up], where each um is the highest weight vector for an irreducible subrepresentation of S(n) of highest weight γ1 + ··· + γm. We will denote this subrepresentation by Fm. The candidates for l-subrepresentations n of {U(g) ⊗U(p) Cs} are Fm ⊗ Cs.

m Proposition 12.6. ([16]) um ∈ U(n ) and is the highest weight vector of a one- m Pm dimensional l -representation with highest weight 1 γi.

3Defining n to act trivially on F gives a p-representation on F . Therefore, such an embedding of F gives a p-homomorphism. Thus Theorem 11.17 applies. 68 Note that since n is abelian U(n) = S(n). Therefore, Fj ⊗ Cs ⊂ U(n) ⊗ Cs =

U(g) ⊗U(p) Cs. We will need to find the ‘special values’ s for which Fj ⊗ Cs ⊂ n {U(g) ⊗U(p) Cs} . Thus, one needs to compute Y v ⊗ 1 = [Y, v] ⊗ 1 ∈ U(g) ⊗U(p) Cs, for all Y ∈ n and all v ∈ Fj. The following lemma shows that this can be accomplished by computing just one bracket.

Lemma 12.7. Let E−γm be a nonzero root vector for −γm. If, for some particular n value of s, [E−γm , um]⊗1 = 0 in U(g)⊗U(p) Cs, then Fm ⊗Cs ⊂ {U(g)⊗U(p) Cs} .

Proof. Any representation of a g on a vector space V defines a U(g)-module struc- ture on V . (See Exercise (5.3).) Therefore the adjoint representation of g on g makes g a U(g)-module. We will temporarily use the notation u · Z for the action of u ∈ U(g) on Z ∈ g in this module. m Recall that um spans a one-dimensional representation of l , therefore the de- m m rived algebra [l , l ] acts by zero on um. In other words, we have the identity m m m m [Y, um] = 0 in U(g) for all Y ∈ [l , l ]. Similarly, [l , l ] annihilates Cs. We will use these two facts a number of times in the following calculations. m m We claim that for any W ∈ g and u ∈ U([l , l ]), [u · W, um] ⊗ 1 = u[W, um] ⊗ 1 m m is an identity in U(g) ⊗U(p) Cs. We prove this for all u = Y1Y2 ...Yk,Yi ∈ [l , l ] by induction on k. First k = 1. So, let Y ∈ [lm, lm].

[Y · W, um] ⊗ 1 = [[Y,W ], um] ⊗ 1

= ([Y, [W, um]] − [W, [Y, um]]) ⊗ 1

(12.8) = [Y, [W, um]] ⊗ 1

= Y [W, um] ⊗ 1 − [W, um]Y ⊗ 1

= Y [W, um] ⊗ 1

0 For u = Y1uY2 ...Yk = Y1u ,

0 [u · W, um] ⊗ 1 = [Y1 · (u W˙ ), um] ⊗ 1 0 = Y1[u · W, um] ⊗ 1, by (12.8), 0 = Y1u [W, um] ⊗ 1, by induction,

= u[W, um] ⊗ 1.

m m m m m m Since n is irreducible for [l , l ], for any Y ∈ n there is some uY ∈ U([l , l ]) so that Y = uY · E−γm . It now follows from the claim that if [E−γm , um] ⊗ 1 =

0, then [Y, um] ⊗ 1 = [uY · E−γm ] ⊗ 1 = uY [E−γm , um] ⊗ 1 = 0. In particular

[E−γ1 , um] ⊗ 1 = 0.

Now we will use the fact that um is a highest weight vector to say that [X, um] =

0, for X ∈ n˜. We now claim that for any u ∈ U(n˜), u[W, um] ⊗ 1 = [u · W, um] ⊗ 1, 69 for any W ∈ n. To see this suppose u = X ∈ n˜. Then

[X · W, um] ⊗ 1 = [[X,W ], um] ⊗ 1

= [X, [W, um]] ⊗ 1 − [W, [X, um]] ⊗ 1

= [X, [W, um]] ⊗ 1

= X[W, um] ⊗ 1 + [W, um]X ⊗ 1

= X[W, um] ⊗ 1. This, along with induction (as in the proof of the first claim) establishes the claim.

Now it follows that since any Y ∈ n is some u · E−γ1 , u ∈ U(n˜) (since E−γ1 is a lowest weight vector in n,[Y, um] ⊗ 1 = 0, for all Y ∈ n. For any ` ∈ L and Y ∈ n, −1 [Y, Ad(`)um] ⊗ 1 = Ad(`)[Ad(` )Y, um] ⊗ 1 = 0. Finally, by irreducibility of Fm, we have that [Y, v] ⊗ 1 = 0 for all Y ∈ n and v ∈ Fm. 

Hence we need to compute [E−γm , um]. Because of the symmetric nature of our definitions, it is enough to compute [E−γp , up]. This computation that can be done entirely in the smaller Lie algebra gp. This computation is in [16].

In the computation, the only properties of up that will be used are

p (1) up is the highest weight of a l -module, Pp (2) the weight of up is 1 γi, (3) n˜ · up = 0,

(4) the multiplicity of Fm in S(n) is one.

In particular, explicit formulas for up are not needed.

Nonetheless it is interesting to see what um is for the sl(p+q, C) example. These are given by minors in the following sense. Let X = (xij) be in ∈ Mn×n(C) ' n. Let ∆m be the determinant of the minor of a matrix X which is obtained by deleting the last p − m rows and the first q − m columns. Then replace each xij by the root vector Ei,p+j. Since all root vectors in n commute, this defines an element in S(n). More explicitly, X σ um = (−1) E1,p+q−m+σ(1)E2,p+q−m+σ(2) ··· Em,p+q−m+σ(m).

σ∈Sm In particular, for sl(2p, C), X σ up = (−1) E1,p+σ(1)E2,p+σ(2) ··· Em,p+σ(m).

σ∈Sp We need to introduce one more definition and a preliminary Lemma.

+ 1 Definition 12.9. Cp = {α ∈ ∆ (l): ρ(α) = 2 (γi − γp)|h− : i < p}.

p−1 Lemma 12.10. (1) [E−γp , v] = 0 for all v ∈ n . p−1 (2) If α ∈ Cp and β ∈ ∆(n ), then α + β is not a root. p (3) If Z ∈ n˜ , then [Z,E−γp ] = 0. 70 p−1 1 Proof. To prove (1), observe that if α ∈ ∆(n ), then ρ(α) = 2 (γi + γj) with 1 i, j ≤ p − 1. On the other hand, if i, j ≤ p − 1, then 2 (γi + γj) − γp is not one of the possible restrictions of roots to h−. The argument that proves (2) is similar: 1 1 ρ(α) = 2 (γi − γp) for some i < p while ρ(β) = 2 (γk + γl) for some k, l ≤ p − 1. There is no root in ∆ with restriction ρ(α) + ρ(β). The proof of (3) is similar. 

We now sketch the computation of

(12.11) [E−γp , up] ⊗ 1 and find the special value of s making (12.11) zero in U(g) ⊗U(p) Cs.

In the remainder of the lecture we briefly describe this computation and find sj.

It will be useful to keep in mind our example and to think of up as the determinant of an appropriate minor. Then, the next Proposition can be interpreted as an expansion of a determinant by the last row and the furthest most left column of the appropriate minor, similar to the computations we did in Lecture 10.

Lemma 12.12. ([16]) X (12.13) up = Eγp up−1 + [Eα,Eγp ][Eβ,Eγp ]Zα,β

α,β∈Cp,α≤β p−1 where Zα,β ∈ U(n )

P Proof. (Sketch) Write up = aα1,...,αp Eα1 ...Eαp with the sum running over P Pp α1, . . . , αp so that αi = 1 γi. Then X up = Eγp v + aα1,...,αp Eα1 ...Eαp .

αi6=γp By Theorem 12.2 v is in U(np−1).

We want to argue that v is a multiple of up−1. It is clear that v has weight Pp−1 p 1 γi. We need to show that [Z, v] = 0 for all Z ∈ n˜ . This characterizes up−1 (up to a scalar multiple) by the multiplicity one of Theorem 9.5. Therefore we compute [E−γp , up]. By Lemma(12.10), we know that [E−γp , v] = 0. Hence,

X p [E−γp , up] = vH−γp + w1 + wαEα, w1, wα ∈ U(n ).

α∈∆l

p Assume that Z ∈ n˜ . Since up is the highest weight vector of a L-module, we have [Z, up] = 0. We also know, by Lemma (12.10), that [Z,E−γp ] = 0. Thus, applying ad(Z) to the previous formula gives

0 = [Z, [E−γp , up]] = [Z, v]H−γp + v[Z,H−γp ] + [Z, w1] X X + [Z, wα]Eα + wα[Z,Eα]

(12.14) α∈∆(l) α∈∆lC X = [Z, v]H−γp + [Z, w1] + vαEα. α∈∆(l 71 In equation (12.14), [Z, v]H−γp ∈ U(n)h,[Z, w1] ∈ U(n) and the last term is in U(n)(n˜ ⊕ n˜). We conclude from the PBW Theorem that each of the three terms in the last line of (12.14) is zero, therefore [Z, v] = 0 for all Z ∈ n˜p. Since the

L-irreducibles in U(n) have multiplicity one, we have v = c up−1. It is a little more delicate to show that c 6= 0 and we will omit the proof.

For the second term in (12.13), note that since the weight must be γ1 + ··· + γp and no Eγp occurs, there must be a pair Eα+γp ,Eβ+γp which does occur. (This follows from the Lemma 12.2 and the fact that γp is simple.) For α + γp and β + γp to be roots, α and β must be in Cp. Requiring α ≤ β is simply to ensure that the same term is not written twice.  Lemma 12.15. ([16]) 1 X [E , u ] = u H + hγ , γ ic u + w0 E −γp p p−1 −γp 4 p p p p−1 α α α∈∆+(l)

0 + γi−γp where wα ∈ U(n) and cp is the cardinality of Cp = {α ∈ ∆ (l): ρ(α) = 2 , i < p}

Proof. (Sketch) Set w = P [E ,E ][E ,E ]Z . By Proposition (12.12), p α≤β∈Cp α γp β γp α,β we need to show that 1 X [E , w ] = hγ , γ ic u + w0 E −γp p 4 p p p p−1 α α α∈∆(l)

Direct computations lead to the following formula: 1 X X (12.16) [E , w ] = − hγ , γ i [E , [E ,E ]]Z + w0 E . −γp p 2 p p β α γp α,β α α α,β∈Cp,α≤β α∈∆(l)

We need to identify P [E , [E ,E ]]Z with −cp u . To do so, α,β∈Cp, α≤β β α γp α,β 2 p−1 we need to establish a relation between wp and up−1. Once again we use the fact that up is a highest weight vector. For δ ∈ Cp,[Eδ, up] = 0. Lemma 12.10 implies that [Eα,Zα,β] = [Eδ, up−1] = 0. Hence, by Lemma 12.12, for δ ∈ Cp

0 = [Eδ, up] = up−1[Eδ,Eγp ] + [Eδ, wp].

This implies that for δ ∈ Cp, X −up−1[Eδ,Eγp ] = [Eδ, [Eα,Eγp ]] [Eβ,Eγp ]Zα,β

α≤β∈Cp X + [Eα,Eγp ][Eδ, [Eβ,Eγp ]]Zα,β.

α≤β∈Cp

Since [Eδ, [Eα,Eγp ]]Zα,β does not contain [Eδ,Eγp ] (by Lemma 12.2), the PBW Theorem (applied to U(n) and a basis of root vectors) tells us that for each δ ∈ Cp, X X (12.17) [Eδ, [Eα,Eγp ]]Zα,δ + [Eδ, [Eβ,Eγp ]]Zα,δ = −up−1.

α≤δ∈Cp δ≤β∈Cp 72 Adding equation (12.17) over δ ∈ Cp gives X X (12.18) Zα,δ[Eδ, [Eα,Eγp ]] + Zδ,β[Eδ, [Eβ,Eγp ]] = −cpup−1

α,δ∈Cp,α≤δ α,δ∈Cp,δ≤β

Now we observe that sum of roots in Cp is not a root, and therefore the Jacobi identity gives [Eα[Eβ,Eγp ]] = [Eβ[Eα,Eγp ]]. Equation (12.18) now implies cp X (12.19) − u = Z [E , [E ,E ]]. 2 p−1 α,δ δ α γp α≤δ∈Cp The lemma follows. 

Theorem 12.20. Let m = 1, . . . , p and s = −cm. Also let Em = Fm ⊗ Cs. Then n Homl(Em, {U(g) ⊗U(p) Cs} ) 6= 0.

Furthermore, Homg(U(g) ⊗U(p) Em, U(g) ⊗U(p) Cs) 6= {0} and for the bundles Em ¯ and C−s on G/P , D(C−s,Em) 6= {0}.

Proof. Observe that this Lemma implies that 1 [E , u ] ⊗ 1 = u H ⊗ 1 + hγ , γ ic u ⊗ 1 = 0 γp p p−1 −γp 4 p p p p−1

hγp,γpi if and only if sλ(Hγp ) = 4 cp. By Lemma(12.2), 1 1 λ(H ) = λ(H ) = hβ , β i = hγ , γ i. γp α 2 1 1 2 p p The last inequality follows from the fact that the simple root β1 in ∆(n) has the same length as each of the strongly orthogonal roots γi.  Corollary 12.21. For our sl(p + q, C) example the special values are s = m − 1.

EXERCISES

(12.1) Do the computation to show that the special value of s corresponding to up in the sl(p, C) example is −(p − 1) by using the formula for up in terms of the determinant. For the sl(p + q, C) example write down the corresponding conformally invariant system. (See Theorem 11.17.) Compare with Lecture 10.

73 LECTURE 13. Conformally Invariant Systems for Heisenberg Type parabolic subgroups, I

Consider a complex simple Lie algebra g other than sl(2, C). It is a fact that each such g contains a parabolic subalgebra p = l ⊕ n with n a Heisenberg Lie algebra. Up to conjugacy there is a unique such parabolic subalgebra.

Definition 13.1. Let V be a complex vector space with a nondegenerate alternat- ing bilinear form ω (called a symplectic form). It follows that V has even dimension, say 2n. Then define a 2n + 1-dimensional Lie algebra by h2n+1 ≡ V ⊕ C with Lie bracket defined by [(v, z), (v0, z0)] = (0, ω(v, v0)). A Lie algebra is a Heisenberg Lie algebra if it is isomorphic to h2n+1 (n ≥ 1).

Exercise: Verify that this bracket operation defines a Lie algebra. Show that the subalgebra   0 x1 . . . xn z    y1     .  n =  .     yn  0  of gl(n + 1, C) is isomorphic to h2n+1. A Lie algebra n is said to be 2-step nilpotent if n is nonabelian and [n[n, n]] = 0.

It may be shown that h2n+1 is the unique (up to isomorphism) Lie algebra which is 2-step nilpotent and has one-dimensional center. We say that a parabolic subalgebra p = l ⊕ n is of Heisenberg type if n is a Heisenberg Lie algebra. In earlier lectures we were successful in using invariant theory to construct con- formally invariant systems and gain information on reducibility of Verma modules when the parabolic subalgebra is of abelian type, that is when n is abelian. The par- abolic subalgebras of Heisenberg type may be considered ‘close’ to those of abelian type since [X[Y,Z]] is always zero. On the other hand it may be considered as the opposite extreme since the center is as small as possible. We will see how methods similar to those used for parabolics of abelian type may be used for the Heisenberg type parabolics.

Now suppose G0 is a real simple Lie group with Lie algebra g0. A parabolic subalgebra p0 of g0 is of Heisenberg type when p = p0 ⊗ C is a subalgebra of g = g0 ⊗ C of Heisenberg type. Not all simple real Lie algebras have parabolic subalgebras of Heisenberg type. For example so(n, 1) does not. Each split4 simple

Lie group does have one. Let us fix a parabolic subalgebra p0 of Heisenberg type and let P0 be the corresponding parabolic subgroup of G0.

4 A real Lie algebra is split means amin is a Cartan subalgebra. A group with such a Lie algebra is called split. 74 Our goal in the final two lectures is to show how to construct conformally in- ∞ variant systems on C (G0/P¯0, Cs), for appropriate homogeneous line bundles Cs. As in the abelian case, this will be possible for certain ‘special values’ of s. In order to study the relevant invariant theory, let G be a complex Lie group with Lie algebra g and let P = LN be the parabolic subgroup with Lie algebra p = p0 ⊗C. Then, by Vinberg’s theorem ([10, Ch. X]), n/[n, n] is a prehomogeneous vector space under the adjoint action of L. By the complete reducibility of the adjoint action of L on n we may write the L-decomposition as n = V + ⊕ z, where z is the one-dimensional center of n. Then V + ' n/[n, n] is the prehomogeneous vector space which will play a key role in our construction of conformally invariant systems. In addition to the decomposition n = V + ⊕ z write n = V − ⊕ z0.

13.1. The invariant theory. A covariant is an irreducible representation (ρ, W ) of L along with a nonzero L-equivariant polynomial map F : V + → W . By a polynomial map we of course mean a map for which each coordinate is polynomial in V +. So a covariant comes from an element of {P (V +) ⊗ W }L. But this is ∗ + HomL(W ,P (V )). There are four natural covariants τ1, . . . , τ4 which will play a role in our construction. In order to define these covariants we need to set up a little notation. Fix a Cartan subalgebra h of g and a positive system of roots ∆+ = ∆+(h, g). Let Π be the corresponding system of simple roots. Denote by γ the highest root. This is, by definition, the highest weight of the adjoint representation. Choose a root vector (γ) Eγ ∈ g normalized so that −κ(Eγ , θ(Eγ )) = 1. By scaling κ if necessary we assume that hγ , γi = 2. It is a fact that the Heisenberg parabolic is the parabolic subalgebra p defined (as in Lecture 3) by S = Π \{α ∈ Π: hα , γi = 0}. This means that X X p = l ⊕ n = (h + g(α)) + ( g(α)). α∈∆,hα ,γi=0 α∈∆,hα ,γi>0

One can also easily see that z = CEγ . Let Hγ (as defined earlier) be given by γ(H) = hH,Hγ i. Note that it follows that α(Hγ ) = hα , γi. In particular, + ad(Hγ )Eγ = 2Eγ . When α ∈ ∆(V ), hα , γi = 1. Therefore, there is a grading of g by eigenvalues of ad(Hγ ). Set g(k) = {X ∈ g : ad(Hγ )(X) = kX}. Then

g = g(−2) ⊕ g(−1) ⊕ g(0) ⊕ g(1) ⊕ g(2),

± (±γ) and [g(j), g(k)] ⊂ g(j+k). Since g(0) = l, g(±1) = V and g(±2) = g , this grading is g = g(−γ) ⊕ V − ⊕ l ⊕ V + ⊕ g(γ).

(γ) × Since g is one dimensional there is a character χ : L → C so that Ad(`)Eγ =

χ(`)Eγ . This character will play an important role in all that follows. Since the 75 Killing form pairs gγ and g−γ it follows that (g(γ))∗ ' g(−γ) and the representation of L on g(−γ) is by χ(`−1).

Example 13.2. Consider g = sl(r + 1, C), r ≥ 2. It is convenient to form the so- called extended Dynkin diagram by attaching −γ by the same rules as were used to form the Dynkin diagram.

α α α α ◦1 ◦2 ... ◦r−1 ◦r PPP nn PPP nnn PPP nnn PPP nnn PP◦nnn −γ

Then γ = α1 + ... + αr = 1 − r+1 and ∆(h, l) = {i − j : 2 ≤ i < j ≤ r}. The prehomogeneous vector space (Ad,V +) is the same as GL(1, C)×GL(r −1, C) acting on Cr−1 × Cr−1 by

t −1 (λ, g)(v1, v2) = (λ(g ) v1, λ det(g)gv2). Here we are thinking of Cr−1 as column vectors. The character χ is χ(λ, g) = λ2 det(g).

Example 13.3. Let g = so(2r, C). The extended Dynkin diagram is α ◦r−1 Ð ÐÐ ÐÐ α1 α2 ÐÐ ◦ ◦ ... ◦>Ð αr−2 >> >> >> ◦ >◦ −γ αr

+ Here γ = α1+2α2+...+2αr−2+αr−1+αr. A representation equivalent to (Ad,V ) is GL(2) × SO(2r − 4) acting naturally on the tensor product C2 ⊗ C2r−4. Our character χ is χ(g1, g2) = det(g1).

Now we are ready to define the four natural covariants. For 1 ≤ k ≤ 4 and + X ∈ V , we define τk(X) ∈ g by 1 τ (X) = ad(X)k(E ). k k! −γ

Since ad(X) increases the Hγ -weight by 1 we have polynomial maps

+ − τ1 : V → V + τ2 : V → l + + τ3 : V → V + (γ) τ4 : V → g

76 Lemma 13.4. For ` ∈ L, X ∈ V + and 1 ≤ k ≤ 4, we have

τk(Ad(`)X) = χ(`) Ad(`) τk(X).

Proof. Note that 1 τ (X) = ad(X)τ (X). k k k−1 Assume that the Lemma holds for k − 1, then 1 τ (Ad(`)X) = ad(Ad(`)X)τ (Ad(`)X) k k k−1 1 = ad(Ad(`)X)χ(`)Ad(`)τ (X)big) k k−1 1 = χ(`)ad(Ad(`)X)Ad(`)τ (X) k k−1 1 = χ(`)Ad(`)ad(X)τ (X) k k−1 = χ(`)Ad(`)τk(X).

To complete the induction argument we check when k = 1.

τ1(Ad(`)X) = [Ad(`)X,E−γ ] −1 = Ad(`)[X, Ad(` )E−γ ] −1 −1 = Ad(`)[X, χ(` ) E−γ ]

= χ(`)Ad(`)[X,E−γ ]

= χ(`)Ad(`)τ1(X).



+ Since gγ is one-dimensional, there is a quartic polynomial ∆ on V such that 2 τ4(X) = ∆(X)Eγ . The lemma implies that, ∆(Ad(`)X) = χ (`)∆(X). Therefore, if ∆ is nonzero then it is a relative invariant for the prehomogeneous vector space V +, associated to the character χ2. Nothing done so far guarantees that the maps introduced are non-zero. Indeed, in type Cr one can check that τ3 = τ4 = 0 while

τ1 and τ2 are non-zero. For each of the other Lie algebras τ1, . . . , τ4 are all nonzero.

We assume that g is not of type Cr and prove that τk 6= 0 for 1 ≤ k ≤ 4. A key observation is that under this assumption there exists a simple root δ ∈ ∆(V +, h) having the property that δ0 = γ−δ ∈ ∆(V +, h) and γ−2δ is not a root. A little tech- nical lemma shows that it is possible to normalize root vectors E±δ,E±(γ−δ),E±γ and vectors Hδ,Hγ−δ in such a way that the linear map satisfying

Hδ → E11 − E22,Eδ → E12,E−δ → E21

Hδ0 → E22 − E33,Eδ0 → E23,E−δ0 → E32

Hγ → E11 − E33,Eα → E13,E−γ → E31 is an isomorphism between the subalgebra of g generated by {E±δ,E±δ0 } and sl(3).

Lemma 13.5. For scalars x and y we have 77 (1) τ1(xEδ + yEδ0 ) = yEδ − xEδ0 , xy  (2) τ2(xEδ + yEδ0 ) = 2 Hδ − Hδ0 , x2y xy2 (3) τ3(xEδ + yEδ0 ) = − 2 Eδ + 2 Eδ0 , x2y2 (4) τ4(xEδ + yEδ0 ) = 4 Eγ .

Proof. In view of the previous observation, it is enough to do the calculations on sl(3).  √ + Proposition 13.6. The point Xo = 2 Eδ + Eδ0 ) ∈ V is a generic point of the prehomogeneous vector space (ad,V +).

Proof. In Type A2 the Proposition is verified directly. Otherwise, one needs to argue that ∆ generates the semigroup of relative invariant polynomials of (Ad,V +). Since ∆(Xo) 6= 0, the Proposition follows. 

EXERCISES

(13.1) Verify the statements made in Examples 13.2 and 13.3. (13.2) For the simple Lie algebras so(2r + 1, C) and sp(2r, C) give the same infor- mation as was given in Example 13.2.

(13.3) Show that for sp(2r, C) both τ3 and τ4 are zero. (13.4) For each of the classical simple Lie algebras, other than sp(2r, C), find the roots δ and δ0

78 LECTURE 14. Conformally Invariant Systems for Heisenberg Type parabolic subgroups, II

This lecture is a continuation of the previous lecture. We will only describe some of the computations. Details may be found in [2] Since n = V + + z is a Heisenberg Lie algebra, the space V + carries a non- degenerate alternating form ω given by

+ [X1,X2] = ω(X1,X2)Eγ , for X1,X2 ∈ V .

+ + The form is indeed non-degenerate since for each Eα ∈ V , Eγ−α ∈ V and

[Eγ−α,Eα] 6= 0. By applying Ad(`) to the above formula we have

(14.1) ω(Ad(`)X1, Ad(`)X2) = χ(`)ω(X1,X2).

Note that there is an involution of the set of roots in V + given by α 7→ γ − α. Set α0 = γ − α, for each α ∈ ∆(V +), so α 7→ α0 is the involution of ∆(V +). In order to + carry out the necessary calculations we fix a basis {Eα : α ∈ ∆(V )} consisting of 5 + root vectors normalized in a special way . The basis {Eα : α ∈ ∆(V )} satisfies

ω(Eα,Eβ0 ) = 0 if and only if α 6= β. When α = β, we set ω(Eα,Eα0 ) = ωα. Therefore,

ω(Eα,Eβ0 ) = ωαδα,β. It follows that any vector Y ∈ V + may be expanded as

X −1 (14.2) Y = ωα ω(Y,Eα0 )Eα. α∈∆(V +) P Check: Write Y = yβEβ. Then

X −1 X X −1 X ωα ω( yβEβ,Eα0 )Eα = ωα yβω(Eβ,Eα0 )Eα = yαEα = Y. α β α,β α

14.1. Embedding of covariants into the enveloping algebra. The covari- ants τ1, . . . , τ4 will be used to find certain l-subrepresentations of U(n). These l-subrepresentations of U(n) will later be used to construct the systems of the dif- ferential operators which will turn out to be conformally invariant systems. Recall that + τk : V → Wk, k = 1,..., 4 are L-equivariant polynomial maps. That is

+ L ∗ + τk ∈ {P (V ) ⊗ Wk} ' HomL(Wk ,P (V )).

5One must be precise about the normalization of root vectors in order to construct the con- formally invariant systems. The details of this normalization is given in [2]. However, for the computations presented in this lecture, these details are not needed 79 From Lecture 13 we know that − W1 = V ⊗ Cχ

W2 = l ⊗ Cχ + W3 = V ⊗ Cχ

W4 = z ⊗ Cχ ' Cχ2 .

Our goal is to get an element of U(n) for each polynomial τk. This is accomplished as follows. The symplectic form ω gives an isomorphism + ∗ + (V ) ' V ⊗ Cχ−1 + + ∗ + by formula (14.1). Now P (V ) ' S((V ) ) ' S(V ⊗ Cχ−1 ). Since Wk occurs as homogeneous polynomials of degree k, we have ∗ k + Wk → S (V ) ⊗ Cχ−k , therefore ∗ k + Wk ⊗ Cχk → S (V ) ,→ U(n). The following table records the subrepresentations of U(n) determined by the four covariants. Here, the Killing form is used to identify certain dual spaces with subspaces of g. ∗ k Wk ⊗ Cχk 1 V + 2 [l, l] ⊗ Cχ − 3 V ⊗ Cχ2 3 Cχ2 In the second entry we have removed the center of l. This is because the center maps to 0, thus does not contribute. In the k = 1 case, the embedding of V + into S(V +) is clear. Up to a multiplica- tive constant this happens in only one way. + When k = 2 it is not so clear how [l, l] ⊗ Cχ embeds into S(V ). In order to get an explicit embedding, we must trace through the isomorphisms which lead us + from τ2 to [l, l] ⊗ Cχ ⊂ S(V ). There are three points which must be clear. The + first is how τ2 gives [l, l] ⊗ Cχ−1 ⊂ P (V ). Since the Killing form identifies l with itself, Z maps to the polynomial (in X)

κ(τ2(X),Z).

The second point is that P (V +) ' S((V +)∗). This is a general fact for any (finite dimensional) vector space V . Given 1 X u = v∗ v∗ . . . v∗ , v∗ ∈ V ∗, n! σ(1) σ(2) σ(m) i σSm there is a polynomial ∗ ∗ ∗ Pu(X) = v1 (X)v2 (X) . . . vm(X). 80 ∗ The linear extension of the map u 7→ Pu is an isomorphism S(V ) → P (V ). + ∗ + The third point to make explicit how (V ) ' V ⊗Cχ−1 . This is accomplished by Y 7→ ω(Y, · ). Now, combining the last two isomorphisms gives and isomorphism

+ + (14.3) S(V ⊗ Cχ−1 ) → P (V ).

Explicitly, if

1 X u = Y ∗ Y ∗ ...Y ∗ ,Y ∗ ∈ (V +)∗, n! σ(1) σ(2) σ(m) i σSm the corresponding polynomial is

(14.4) Pu(X) = ω(Y1,X)ω(Y2,X) . . . ω(Ym,X).

+ Back to τ2. For Z ∈ [l, l] and α ∈ ∆(V ) write

X ad(Z)(Eα) = Mβ,α(Z)Eβ. β∈∆(V +)

The functions Mβ,α(Z) are called matrix coefficients for the representation of [l, l] + + on V . We claim that the map [l, l] ⊗ Cχ−1 → S(V ) determined by τ2 is

1 X −1 (14.5) Z 7→ ω M 0 (Z)(E E + E E ) 2 β α,β α β β α α,β∈∆(V +)

Proof. This is a computation using the remarks above. We show that the polyno- 1 P −1 mial defined by the 2 α,β∈∆(V +) ωβ Mα,β0 (Z)(EαEβ + EβEα) via formula (14.4) 81 is κ(τ2(X),Z). X −1 ωβ Mα,β0 (Z)ω(Eα,X)ω(Eβ,X) α,β X −1 X  = ωβ Mα,β0 (Z)ω(Eα,X) ω(Eβ,X) β α X −1 X = ωβ ω( Mα,β0 (Z)Eα,X)ω(Eβ,X) β α X −1 = ωβ ω([Z,Eβ0 ],X)ω(Eβ,X) β X −1 = ωβ ω(Eβ0 , [X,Z])ω(Eβ,X), β by differentiating (14.1) and using the fact that Z ∈ [l, l], X −1  = ω ωβ ω([Z,X],Eβ0 )Eβ,X β = ω([Z,X],X), by (14.2) = ω(X, [X,Z]) 1 = κ(E , ω(X, [X,Z])E ), assuming the normalization κ(E ,E ) = 2, 2 −γ γ −γ γ 1 = κ(E , [X, [X,Z]]) 2 −γ 1 = κ([X, [X,E ]],Z) 2 −γ = κ(τ2(X),Z). 

Similar computations may be done for τ3 and τ4.

14.2. The conformally invariant systems. Consider the one dimensional rep- resentations χs, s ∈ R, of L. Extend these representations to all of P¯ by making N¯ act trivially. Denote by Cs, or Cχs , the space of this representation. Note that the s differential of χ has weight sγ. We will use the notation Csγ for the representation ∞ space for the Lie algebra representation of weight sγ. Then consider C (N, Cs) and the action of g by πs as in Prop. 8.4. ∗ + Each embedding Wk ⊗ Cχ−k → S(V ) ,→ U(n) gives a family of differential operators by the right action. For certain special values of s these families of operators give conformally invariant systems. We will show in detail how this goes for k = 1. In fact we will do this two ways. Then we will give a short explanation of the situation of k = 2, 3 and 4. Consider the case k = 1. The embedding V + ,→ U(n) gives a family of differential + + operators {R(Y ): Y ∈ V }. We let Ω1 be the system {R(Eα): α ∈ ∆(V )}.

Proposition 14.6. When s = 0, Ω1 is a conformally invariant systems. 82 +  n Proof. By Theorem 11.17 it suffices to show that V ⊗1−sγ ⊂ U(g) ⊗U(p) C−sγ , when s = 0. This is a simple computation using the facts that n acts by 0 on 1−sγ + (by definition) and [l, l] acts by 0 on 1−sγ (since χ is one dimensional). Let Y ∈ V .

Since [E−γ ,Y ] ∈ n we have

E−γ Y ⊗ 1−sγ = [E−γ ,Y ] ⊗ 1−sγ + YE−γ ⊗ 1−sγ = 0. For β ∈ ∆(V +),

E−βY ⊗1−sγ = [E−β,Y ] ⊗ 1−sγ + YE−β ⊗ 1−sγ X = yα[E−β,Eα] ⊗ 1−sγ α X = yα ⊗ [E−β,Eα]1−sγ α

= s yβhγ , βi1 ⊗ 1−sγ .

The last equality holds because [E−β,Eα] ∈ [l, l] unless α = β,[E−β,Eα] = H−α and −sγ(H−α) = shγ , αi. Now recall that the parabolic subalgebra p is defined by

γ, thus hγ , αi > 0 for all α ∈ ∆(n). We may conclude that Y ⊗ 1−sγ is annihilated by n if and only if s = 0. 

+ A corollary to the proof is that Homg(U(g) ⊗U(p) V , U(g) ⊗U(p) C−sγ ) 6= {0}. A proof of the Proposition may also be given using commutators. Formula (11.15) along with Prop. 8.10 will be applied. In using Formula (11.15) we must calculate WY ⊗ 1, in U(g) ⊗U(p) Hom(Cs, Cs) ' (U(g) ⊗U(p) C−s) ⊗ Cs, for w ∈ p and X ∈ V +.

WY ⊗1−sγ = [W, Y ] ⊗ 1−sγ + YW ⊗ 1−sγ

= [W, Y ] ⊗ 1−sγ

= ([W, Y ]p + [W, Y ]n) ⊗ 1−sγ

= −sdχ([W, Y ]l) ⊗ 1−sγ + ([W, Y ]n) ⊗ 1−sγ Formula (11.15) now gives, for W ∈ p,Y ∈ V +,   [πs(W ),R(Y )]f (e) = R([W, Y ]n)f (e) − sdχ([W, Y ]l)f(e) + sdχ(X)(R(Y )f)(e).

When s = 0, Prop. 8.10 may be applied to conclude that Ω1 is a conformally invariant system. This completes the case of k = 1. We will briefly discuss the other cases. For

τ2,[l, l] ⊗ Cχ ,→ U(n). One observes, by looking at each simple Lie algebra, that often [l, l] decomposes into the direct sum of two simple ideals. In these cases one simple ideal is isomorphic to sl(2, C). We will write [l, l] = lsmall ⊕ lbig, with lsmall ' sl(2, C). Then (14.5) gives two systems of differential operators, one by small big small taking Z ∈ l and the other by taking Z ∈ l . Call these two systems Ω2 big and Ω2 . 83 For k = 3, computations similar to those which gave Formula (14.5) give a system 0 Ω3. It turns out that this does not give a conformally invariant systems (for any − value of s). The point is that V ⊗ Cχ2 occurs in U(n) in two different ways. A 0 second term must be added to Ω3 to get a system Ω3. Similarly τ4 gives a system Ω4 (with one operator). The following table gives the special values of s for which the system is conformally invariant.

big small Type Ω1 Ω2 Ω2 Ω3 Ω4 Ar(r ≥ 2) 0 0 (r − 1)/2 none (r − 2)/2 Br(r ≥ 3) 0 r − 5/2 1 none r − 2 Cr(r ≥ 2) 0 −1/2 none none none Dr(r ≥ 5) 0 r − 3 1 none r − 5/2 E6 0 2 none 3 9/2 E7 0 3 none 5 15/2 E8 0 5 none 9 27/2 F4 0 3/2 none 2 3 G2 0 2/3 none 1/3 1/2

EXERCISES

(14.1) Let p be a parabolic subalgebra of Heisenberg type. Use Prop. 8.10 to  + find a formula for [πs(E−γ ),R(Y )]f (n), for W ∈ p, Y ∈ V and arbitrary n ∈ + N. (Hint: Write n = exp(X + zEγ ),X ∈ V . Then you will need to compute −1 [Ad(n )E−γ ,Y ]. Do this by using the fact that Ad(exp(U)) = exp(ad(U)) for any U ∈ g. Since N is nilpotent, the series expansion of exp(ad(U))(Y ) is finite for U ∈ n.)

84 LECTURE 15. Gyoja’s conjecture

There is a conjecture of Gyoja which relates the reducibility of Verma modules to the roots of the b-function of a certain naturally defined polynomial. A version of this conjecture will be stated at the end of this lecture. First, an example is given which motivates the conjecture. Consider the example of sl(2p, C) and the ‘middle’ parabolic, i.e., the maximal parabolic subalgebra for which p − p+1 is the unique simple root in ∆(n). Then the prehomogeneous vector space n is Mp×p(C) with action of L = S(GL(p, C) × −1 GL(p, C)) given by ρ(`1, `2)X = `1X`2 . ∆(X) = det(X) is a relatively invariant polynomial. As mentioned in Lecture 9,

s+1 s ∆(∂X )∆(X) = (s + 1)(s + 2) ... (s + p)∆(X) . Our computations in Lecture 10 (and again in Lecture 12) show that the Verma module U(g) ⊗U(p) Csλ, with λ equal to the fundamental weight for p − p+1, is reducible for s = 0, −1, −2,..., −(p − 1). These values of s are precisely the roots of the b-function shifted by one. It is a fact that the Verma module is reducible if and only if s ∈ −p + Z>0. This may be stated as the Verma module U(g) ⊗U(p) Cs is reducible if and only if b(s + k) = 0 for some k ∈ Z>0. This illustrates the strong connection between reducibility of Verma modules and roots of certain b-functions Assume for the rest of this lecture that p = l + n us a maximal parabolic sub- X algebra. Assume that p ⊃ h + g(α). Let β be the unique simple root in ∆(n), α∈∆+ and let λβ be the corresponding fundamental weight. In order to state Gyoja’s conjecture we need a polynomial to fill the role played by det(X) in the above example. β Denote by V = V the irreducible representation of lowest weight −λβ and let 6 ∗ β ∗ v− be the lowest weight vector. Then the dual representation V = (V ) has ∗ ∗ highest weight λβ. Let v− be the lowest weight vector in V . Define ∗ ∆β(X) = hv− , exp(X)v−i, for X ∈ n.

Lemma 15.1. ∆β(X) is polynomial in X ∈ n.

Proof. Since X ∈ n, X is nilpotent. Therefore, for any representation ρ, there is an k X Xi integer k so that ρ(X)k = 0. Therefore, exp(X)v = v . The lemma now − i! − i=0 follows. 

Here are a few examples of ∆β(X).

6 A lowest weight vector in a representation ρ is a weight vector v− with the property that (−α) + ρ(X)v− = 0 for all X ∈ g , for α ∈ ∆ . 85 Example 15.2. Let g = sl(p + q, C), and assume p ≤ q. Set n = p + q. Take

β = p −p+1, so the parabolic subalgebra p is the same as in earlier examples. Then Pn p n λ = λβ = i=q+1 i and V ' ∧ C with lowest weight vector v− = eq+1 ∧ · · · ∧ en. The dual is V ∗ ' Vq Cn. (This fact is easily checked by giving a nondegenerate pairing Vp Cn × Vq Cn → C. This is accomplished as follows. Since Vn Cn =

C e1 ∧ · · · ∧ en, there is a scalar Cv,w so that

v1 ∧ · · · ∧ vp ∧ w1 ∧ · · · ∧ wq = Cv,we1 ∧ · · · ∧ en.

Then a nondegenerate pairing is defined by hv1 ∧ · · · ∧ vp , w1 ∧ · · · ∧ wqi = Cv,w.) ∗ ∗ The lowest weight vector in V is therefore v− = ep+1 ∧ · · · ∧ en. Now let 0 X ,X ∈ M (C) 0 0 p×q so that 0 X I X exp = p 0 0 0 Iq

Write X = (xij), so IX ∆ (X) = he ∧ · · · ∧ e , e ∧ · · · ∧ e i β p+1 n 0 I q+1 n

= hep+1 ∧ · · · ∧ en , (eq + Xeq−p+1) ∧ · · · ∧ (ep+q + Xeq)i p p X X = hep+1 ∧ · · · ∧ en , ( xj,q−p+1ej) ∧ · · · ∧ ( xj,qej)i j=1 j=1 = (−1)pq det(X˜) where X˜ is the furthest p × p minor to the right in X.

Example 15.3. Suppose that g is a simple complex Lie algebra, other than sl(n, C), and p is a parabolic subalgebra of Heisenberg type. The the fundamental weight λ = γ is the highest root. So V = g, the adjoint representation. Writing an + element of n as X + zEγ (with X ∈ V and z ∈ C), we have

∆β(X + zEγ ) = κ(E−γ , Ad(exp(X + zEγ ))E−γ )

4 k X ad(X + zEγ ) = κ(E , E ) −γ k! −γ k=0 4 2 = κ(E−γ , (ad(X) + z ad(Eγ ))E−γ ) = ∆(X) + z2.

(The last equality assumes some specific normalization of the root vectors E±γ .)

In [5] several conjectures are stated. The following is a version of one of them. Let p be a maximal parabolic subalgebra of a simple complex Lie algebra g. Let β be the unique simple root in ∆(n) and g the corresponding fundamental weight. 86 Then λ extends to a one-dimensional representation of p. Let b(s) be the b-function for ∆β(X).

Conjecture 15.4. The generalized Verma module U(g) ⊗U(p) Csλ is reducible if and only if b(s + m) = 0, for some m = 1, 2, 3 ... .

This conjecture is known to hold for p of abelian type. Considerable progress is made in the case of parabolics of Heisenberg type in [2]. See Section 7 of [2] for a discussion of the b-function.

EXERCISES

(15.1) Consider so(2n, C) defined by the symmetric bilinear form b on C2n with matrix 0 I . I 0

Let p be the parabolic subalgebra for which ∆(n) = {1 ± j : j = 2, . . . , n}.

Compute the polynomial ∆β(X) for the parabolic subalgebra p.

(12.2) For sp(2n, C) and β any simple root, let pβ be the parabolic subalgebra obtained by omitting β. Compute ∆β(X).

87 References [1] L. Barchini, A. C. Kable, and R. Zierau, Conformally invariant systems of differential oper- ators, Preprint. [2] , Conformally invariant systems of differential operators and prehomogeneous vector spaces of heisenberg parabolic type, Preprint. Available at http://www.math.okstate.edu/ zierau/papers.html. [3] I. N. Bernstein, The analytic continuation of generalized functions with respect to a param- eter, Funct. Anal. and Appl. 6 (1972), 26–40. [4] C. W. Curtis, Linear Algebra, an Introductory Approach, Undergraduate Texts in Mathe- matics, Springer-Verlag, New York, 1984. [5] A. Gyoja, Highest weight modules and b-functions of semi-invariants, Publ. RIMS, Kyoto Univ. 30 (1994). [6] S. Helgason, Differential Geometry, Lie Groups, and Symmetric Spaces, Academic Press, New York, 1978. [7] J. E. Humphreys, Introduction to Lie Algebras and Representation Theory, Graduate Texts in Mathematics, vol. 9, Springer-Verlag, Berlin, 1972. [8] N. Jacobson, Basic Algebra. I., W. H. Freeman and Co., San Francisco, 1974. [9] T. Kimura, Introduction to Prehomogeneous Vector Spaces, Translations of Mathematical Monographs, vol. 215, Amer. Math. Soc., Providence, RI, 1998. [10] A. W. Knapp, Lie Groups Beyond an Introduction, Progress in Mathematics, vol. 140, Birkh¨auser, New York, 2002. [11] A. Kor´anyi and H. M. Reimann, Equivariant first order differential operators on boundaries of symmetric spaces, Inventiones Mathematicae 139 (2000), 371–390. [12] C. C. Moore, Compactifications of symmetric spaces ii, the cartan domains, Amer. J. of Math. [13] W. Rudin, Functional Analysis, McGraw-Hill, New York, 1973. [14] W. Schmid, Die randwerte holomorpher Funktionen auf hermitesch symmetrischen R¨aumen, Invent. Math. 9 (1969), 1–18. [15] V. S. Varadarajan, Lie Groups, lie Algebras, and Their Representations, Prentice-Hall, En- gelewood Cliffs, NJ, 1974. [16] N. Wallach, Analytic continuation of the discrete series II, Transactions of the AMS 251 (1979), 19–37.

Oklahoma State University, Mathematics Department, Stillwater, Oklahoma 74078 E-mail address: [email protected]

Oklahoma State University, Mathematics Department, Stillwater, Oklahoma 74078 E-mail address: [email protected]

88