ON INTEGRATION AND VOLUMES OF SUPERMANIFOLDS

A thesis submitted to the University of Manchester for the degree of Doctor of Philosophy in the Faculty of Science and Engineering

2021

Thomas M. Honey
Department of Mathematics

Contents

Abstract

Declaration

Copyright Statement

Acknowledgements

1 Introduction and Review
  1.1 The Usual Case
    1.1.1 The Grassmannian Manifolds
    1.1.2 The Grassmannian as a Homogeneous Space
    1.1.3 The Unitary Group
    1.1.4 The Stiefel Manifolds and the Grassmannian
    1.1.5 Other Homogeneous Spaces
    1.1.6 Principal Bundle Structures
    1.1.7 Volumes of Homogeneous Spaces
    1.1.8 Volume of the Unitary Group
    1.1.9 Volume of the Flag Manifolds
  1.2 The Super Case: A Summary

2 Supermanifolds
  2.1 Smooth Supermanifolds
    2.1.1 Superdomains and Supermanifolds
    2.1.2 Integration on Supermanifolds
  2.2 The Grassmannian
    2.2.1 Complex Supermanifolds
    2.2.2 The Grassmannian Supermanifolds

3 Hermitian Forms and the Kronecker Product
  3.1 Hermitian Forms
    3.1.1 Bilinear Forms
    3.1.2 Superinvolutions
    3.1.3 Sesquilinear Forms
    3.1.4 Hermitian Forms over Supercommutative Rings
    3.1.5 Hermitian Form on hom(U, V)
    3.1.6 Coordinates
  3.2 Linear Superalgebra
    3.2.1 Vectorisation and the Kronecker Product
    3.2.2 Vectorisation
    3.2.3 The Kronecker Product
  3.3 The Super Case
    3.3.1 The Kronecker Product and the Berezinian
    3.3.2 Super Vectorisation and the Trace
  3.4 The Hermitian Form on hom(U, V)

4 The Volume Element
  4.1 Complex Structures and Hermitian Manifolds
    4.1.1 C^{p|q} as a Hermitian Space
    4.1.2 C^{p|q} the Hermitian Supermanifold
  4.2 Grassmannian Supermanifolds as Hermitian Manifolds
    4.2.1 The Usual Case
  4.3 The Super Case and the Volume Element

5 Calculations
  5.1 The Usual Case
  5.2 The Super Case
  5.3 1|0 × (p+1)|q
  5.4 0|1 × p|(q+1)
  5.5 Gr_{r|s}(C^{p|p})
  5.6 r, s > 0, q < r
  5.7 Gr_{1|1}(C^{2|1})
  5.8 Gr_{1|1}(C^{3|1})
  5.9 Gr_{2|0}(C^{3|1})
  5.10 Gr_{2|0}(C^{2|1})
  5.11 Gr_{2|1}(C^{3|1})
  5.12 Gr_{2|0}(C^{3|2})
  5.13 Gr_{1|1}(C^{3|2})
  5.14 Gr_{2|1}(C^{3|2})
  5.15 Gr_{2|2}(C^{3|2})
  5.16 Gr_{3|1}(C^{3|2})
  5.17 Gr_{2|0}(C^{4|2})
  5.18 Gr_{1|1}(C^{4|2})

6 Conclusions and Discussion

Bibliography

A An Introduction to Superalgebra and on Conventions
  A.1 Superalgebra
    A.1.1 Superrings
    A.1.2 Super Vector Spaces
    A.1.3 Modules over Supercommutative Rings
  A.2 Free Finitely Generated Modules
    A.2.1 Duality and the Tensor Product
    A.2.2 The Double Dual
    A.2.3 Trace
    A.2.4 The Berezinian
    A.2.5 Canonical Ideal of a Superring
    A.2.6 The Berezinian Module
  A.3 Coordinates
    A.3.1 The Dual Space
    A.3.2 Scalar Multiplication and Matrices
    A.3.3 The Double Dual
    A.3.4 The Trace
    A.3.5 The Berezinian

Word count 36798

The University of Manchester

Thomas M. Honey
Doctor of Philosophy
On Integration and Volumes of Supermanifolds
January 11, 2021

Abstract

In this thesis we investigate the volumes of certain supermanifolds. The volumes of supermanifolds have been studied before, in particular in [1], and this thesis builds on that work. We develop the necessary tools to study mainly the volume of the complex Grassmannian supermanifolds. In the first two chapters we review the problem and how it has been solved for ordinary Grassmannian manifolds. We contrast that with the super case, then briefly introduce what a supermanifold is and give an exposition on what integration entails in the super case. In the third chapter we develop the tools we need to calculate the volume of the Grassmannian supermanifolds as Hermitian supermanifolds. We develop Hermitian forms in the super case and conclude that the natural Hermitian form on the space of matrices is not positive definite. We then develop the Kronecker product and vectorisation in the super case. With these developed, we show the relation between the Berezinian, or superdeterminant, and the Kronecker product. In the fourth chapter we investigate the volume element of the Grassmannian supermanifolds coming from a natural Hermitian form and apply the results of the previous chapter so that we can calculate it. In the fifth and last chapter of the main part of the thesis we calculate the volume of the Grassmannian supermanifolds for different values of the relevant parameters. In [1] there is a conjectured formula for the volume of the Grassmannian supermanifolds, and we contrast our results with it. We have provided an appendix on superalgebra as a guide to the conventions and notations used in the main text.

Declaration

No portion of the work referred to in the thesis has been submitted in support of an application for another degree or qualification of this or any other university or other institute of learning.

Copyright Statement

i. The author of this thesis (including any appendices and/or schedules to this thesis) owns certain copyright or related rights in it (the “Copyright”) and s/he has given The University of Manchester certain rights to use such Copyright, including for administrative purposes.

ii. Copies of this thesis, either in full or in extracts and whether in hard or electronic copy, may be made only in accordance with the Copyright, Designs and Patents Act 1988 (as amended) and regulations issued under it or, where appropriate, in accordance with licensing agreements which the University has from time to time. This page must form part of any such copies made.

iii. The ownership of certain Copyright, patents, designs, trade marks and other intellectual property (the "Intellectual Property") and any reproductions of copyright works in the thesis, for example graphs and tables ("Reproductions"), which may be described in this thesis, may not be owned by the author and may be owned by third parties. Such Intellectual Property and Reproductions cannot and must not be made available for use without the prior written permission of the owner(s) of the relevant Intellectual Property and/or Reproductions.

iv. Further information on the conditions under which disclosure, publication and commercialisation of this thesis, the Copyright and any Intellectual Property and/or Reproductions described in it may take place is available in the University IP Policy (see http://documents.manchester.ac.uk/DocuInfo.aspx?DocID=487), in any relevant Thesis restriction declarations deposited in the University Library, The University Library's regulations (see http://www.manchester.ac.uk/library/aboutus/regulations) and in The University's Policy on Presentation of Theses.

Acknowledgements

I would first like to thank my supervisors Hovhannes Khudaverdian and Theodore Voronov for their support over the years that it has taken to produce this thesis and for their help in furthering my mathematics education. I would secondly like to thank all the other mathematics teachers who have contributed to my education, including others at the University of Manchester, those at the University of Aberdeen where I completed my undergraduate degree in mathematics, and finally my teachers at school. I would also like to thank my friends, office mates and the mathematics postgraduate students in Manchester in general. They made my time there especially enjoyable. Finally I want to thank my family, who have always been there to support me.

Chapter 1

Introduction and Review


We are primarily concerned with calculating the volumes of certain Hermitian supermanifolds, in particular the Grassmannian supermanifolds Gr_{r|s}(C^{p|q}), the complex supermanifolds of r|s-planes in C^{p|q}. The volume of the simplest of these supermanifolds, the complex projective superspaces CP^{p|q}, was obtained in [1]. A conjecture on the volume of Gr_{r|s}(C^{p|q}) was made in that paper, and the following is an investigation of whether, or when, this conjecture holds. The Gr_{r|s}(C^{p|q}) are the "super" version of the complex manifolds Gr_k(C^n), the spaces of k-dimensional subspaces of C^n. In addition to [1], the question of the volume of symplectic supermanifolds was recently considered in [2]. There a general formula for the volume of symplectic supermanifolds is obtained; however, we do not use the methods derived there. In this chapter we will give an exposition on working with Gr_k(C^n) and how one usually obtains the volume of Gr_k(C^n), which we will call the usual case. Then we will contrast that with what happens in the super case when one wants to consider Gr_{r|s}(C^{p|q}). The exposition on the super case will be brief in order to highlight the contrast. Details and precise definitions on supermanifolds will be given in the following chapter.

1.1 The Usual Case

1.1.1 The Grassmannian Manifolds

As we stated above, before we embark on laying out the situation for the super case, we will give a detailed summary of what we can say about the volumes of the ordinary Grassmannian manifolds.

Definition 1.1.1. Suppose V is a finite-dimensional vector space over a field K, with K = R, C or H, of dimension n over K. Let 0 ≤ k ≤ n. The Grassmannian manifold Gr_k(V) is defined as the space of all vector subspaces of V of dimension k.

In our case K will be C, though the following exposition can be repeated for the real case. The most familiar examples of these are the real and complex projective spaces, in that Gr_1(C^n) ≅ CP^{n-1} for instance. As we will be focussing on the case of complex vector spaces, our Grassmannians will be complex manifolds. We will now recall and detail some methods of working with the Grassmannian manifolds. Points are vector subspaces U of a larger vector space V. We need to represent U by some object that we can work with.

To do that, first let M_k(C^n) be the space of n × k matrices over C. Given an A ∈ M_k(C^n) we can look at the k × k square submatrices of this matrix. We label the rows of A by a_i, and we want to define a specific submatrix, A_I, by a multi-index I. We have that I = {i_1, i_2, ..., i_k} with 1 ≤ i_j < i_l ≤ n for j < l. We then have that the matrix A_I is given by

A_I = \begin{pmatrix} a_{i_1} \\ a_{i_2} \\ \vdots \\ a_{i_k} \end{pmatrix}

so that the k rows of A_I are picked out by the multi-index I. We can thus define

M˚_k(C^n) := {A ∈ M_k(C^n) | det(A_I) ≠ 0 for some I}.

This is, in other words, the space of nondegenerate n × k matrices, i.e. the matrices whose column vectors form a basis of a k-dimensional subspace of C^n. Another name for this manifold is the noncompact Stiefel manifold. An ordered collection of k linearly independent vectors is called a k-frame. The noncompact Stiefel manifold can then be realised as the manifold whose points are k-frames in C^n. We will introduce the more familiar (compact) Stiefel manifolds later.

From this we can represent a subspace by an element A of M˚_k(C^n). There is a right action of GL_k(C) on M˚_k(C^n) given by:

M˚_k(C^n) × GL_k(C) → M˚_k(C^n), (A, g) ↦ Ag.

We have that the column vectors of both A and Ag span the same subspace, and that this group action is proper and free. Hence, by a standard theorem, the quotient is a complex manifold. We have, in fact, obtained the Grassmannian Gr_k(C^n). So we have that

Gr_k(C^n) ≅ M˚_k(C^n)/GL_k(C).

We now have a way of working with the Grassmannian, and this will be the standard way of representing elements of the Grassmannian that we shall use. This way of dealing with it is useful as one can, for instance, define real functions on the Grassmannian by defining functions f : M˚_k(C^n) → R; as long as f(A) = f(Ag) for any g ∈ GL_k(C), this defines a function on the Grassmannian.

Let I be a multi-index as above; we define U_I as:

U_I := {A ∈ Gr_k(C^n) | det(A_I) ≠ 0}. (1.1)

We can now define charts for Gr_k(C^n) and fully exhibit it as a complex manifold, and this gives us the ability to work in local coordinates when we need to. On U_I we have that A_I is invertible, and so it can be regarded as an element of GL_k(C). So we can first define maps

ϕ_I : U_I → M_k(C^n)

by

ϕ_I(A) = A A_I^{-1}.

If we take the case where I = {1, 2, ..., k}, i.e. the "topmost" k × k submatrix, then if we define A_J to be the submatrix given by deleting A_I from A, we have that

ϕ_I \begin{pmatrix} A_I \\ A_J \end{pmatrix} = \begin{pmatrix} I \\ A_J A_I^{-1} \end{pmatrix}. (1.2)

There is subsequently a map φ_I from M_k(C^n) to M_k(C^{n-k}) given by

φ_I \begin{pmatrix} I \\ A_J A_I^{-1} \end{pmatrix} = A_J A_I^{-1}.

Hence we have maps ψ_I = φ_I ∘ ϕ_I : U_I → M_k(C^{n-k}) ≅ C^{k(n-k)}. These maps are homeomorphisms, so the only thing to check is that they are compatible and so form the charts of an atlas. We have that, for I_1 and I_2, the transition map ψ_{I_1} ∘ ψ_{I_2}^{-1} is given by rational expressions in the matrix entries and hence is holomorphic. Therefore the Grassmannians are complex manifolds of dimension k(n − k) over C.
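As a concrete illustration of these charts (an illustrative sketch, not part of the thesis' development), the following NumPy snippet builds the chart value A_J A_I^{-1} for I = {1, ..., k} and checks that it is unchanged when A is replaced by Ag for a random g ∈ GL_k(C), so that it really only depends on the point of the Grassmannian:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2

# A random nondegenerate n x k complex matrix (its columns span a k-plane in C^n).
A = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))

def psi_top(A):
    """Chart psi_I for I = {1,...,k}: return A_J A_I^{-1}, an (n-k) x k matrix."""
    A_I, A_J = A[:k, :], A[k:, :]          # top k x k block and the remaining rows
    return A_J @ np.linalg.inv(A_I)

# A random change of basis g in GL_k(C); A and A g span the same subspace.
g = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))

print(np.allclose(psi_top(A), psi_top(A @ g)))   # True: the chart only sees the subspace
```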

1.1.2 The Grassmannian as a Homogeneous space

In the previous section we gave the Grassmannian as a quotient of a matrix manifold by a Lie group action. This is useful for working with the Grassmannian directly; however, we will need to study the Grassmannians from another perspective, namely as a homogeneous space.

First we give the definition of a homogeneous space; we take the definition from [3], and a more detailed survey is found in [4].

Definition 1.1.2. A smooth (complex) manifold M is a homogeneous space if there is a transitive Lie group action on M. This means:

1. There is a smooth (holomorphic) map

   G × M → M

   such that

   (a) g(hx) = (gh)x for g, h ∈ G, x ∈ M;

   (b) ex = x for all x ∈ M.

2. For all x, y ∈ M there exists g ∈ G such that x = gy.

At any point x ∈ M we can define a subgroup of G, called the isotropy subgroup H_x, with

H_x := {g ∈ G | gx = x}.

If we choose an isotropy subgroup H_x then we have that, by a standard theorem, M is diffeomorphic to the left coset space G/H_x; we will drop the subscript x in most cases. The point x can be seen as a "choice of origin", as M ≅ G/H_x for any x. In the case where the group and subgroup are complex manifolds, the resulting quotient space is also a complex manifold. Any element of the Grassmannian is a subspace U of a larger vector space V; there is a natural group associated to V, namely GL(V), the group of invertible linear transformations of V. We now give the Grassmannian as a homogeneous space for GL(V). We can act on a subspace U on the left by an element g ∈ GL(V). This maps U to another k-dimensional subspace. This action is transitive, and so we have that Gr_k(V) is a homogeneous space for GL(V). If we give V ≅ C^n the standard ordered basis then, selecting the subspace defined by the first k basis vectors, we can identify Gr_k(C^n) with the coset space GL(C^n)/B, where B is the subgroup of 2 × 2 block upper triangular matrices, i.e. any element of B is of the form

\begin{pmatrix} B_1 & B_2 \\ 0 & B_3 \end{pmatrix}

with B_1 and B_3 invertible matrices.

We can also give Gr_k(V) as a homogeneous space for a different group, the unitary group U(n).

1.1.3 The Unitary Group

Hermitian Forms and the Adjoint Map

Let h be a Hermitian form on V. By this we mean a map

h : V × V → C

which is conjugate linear in the first variable and linear in the second variable, and which satisfies

h(u, v) = \overline{h(v, u)} for u, v ∈ V.

We often write Hermitian forms acting on two elements u and v using a bracket, so we have: ⟨u, v⟩ = h(u, v).

We call a complex vector space V with a Hermitian form h a Hermitian space and denote this by (V, h). Now suppose we take A ∈ End(V); the Hermitian form allows us to define another map A†, called the adjoint, which is defined as the map such that ⟨u, A(v)⟩ = ⟨A†(u), v⟩.

Definition 1.1.3. Given any Hermitian form, one can define the unitary group associated to it, U(V, h). This is defined as

U(V, h) := {X ∈ GL(V) | X†X = Id_V},

where Id_V is the identity element of GL(V). In the case where h is the Euclidean Hermitian form and V is identified with C^n, the unitary group is denoted by U(n) and can be given by the following matrix manifold:

U(n) := {X ∈ M_n(C^n) | X*X = I_n},

where X* is the conjugate transpose of X.

1.1.4 The Stiefel Manifolds and the Grassmannian

If there is a Hermitian form on the vector space C^n we can define orthonormal frames and, with these, the Stiefel manifolds. Given a collection of vectors {v_i}, two of them, v_i and v_j, are orthonormal if they satisfy

h(v_i, v_j) = δ_{ij},

where δ_{ij} is the standard Kronecker delta, equal to 1 if i = j and 0 otherwise.

Definition 1.1.4. Let V be a Hermitian space. The Stiefel manifolds associated to this space, V_k(V), are defined as

V_k(V) := {X ∈ M_k(V) | X†X = I_k}.

In other words these are manifolds whose points are orthonormal k-frames in V .

In terms of matrices, and with the standard Euclidean Hermitian form on C^n, we have

V_k(C^n) := {X ∈ M_k(C^n) | X*X = I_k}.

Suppose we now look again at U, a k-dimensional subspace of C^n. This has a basis of k vectors; we can now require our basis to be orthonormal, but there are multiple ways to span the same subspace with orthonormal vectors. The unitary groups can be alternatively characterised as the groups of transformations that take an orthonormal basis to another orthonormal basis. By repeating the same line of reasoning as in (1.1.1) we can now give the Grassmannian as the following quotient manifold

Gr_k(C^n) ≅ V_k(C^n)/U(k).

So we can take an element U to be represented by an orthonormal k-frame, with two representations being equivalent if we can get one from the other by an element of the unitary group U(k).
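The following small NumPy sketch (illustrative only, not part of the thesis' development) represents a point of Gr_k(C^n) by an orthonormal k-frame obtained from a QR decomposition and checks that the orthogonal projector onto the subspace is unchanged under the right U(k) action, so the frame modulo U(k) records only the subspace:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 2

# Orthonormalise a random n x k matrix: X satisfies X*X = I_k, i.e. X lies in V_k(C^n).
A = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
X, _ = np.linalg.qr(A)
print(np.allclose(X.conj().T @ X, np.eye(k)))        # True: X is an orthonormal k-frame

# A random element u of U(k), obtained by orthonormalising a random k x k matrix.
u, _ = np.linalg.qr(rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k)))

# X and Xu are different frames for the same subspace: the projector X X* agrees.
print(np.allclose(X @ X.conj().T, (X @ u) @ (X @ u).conj().T))   # True
```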

We can give the Grassmannian Gr_k(C^n) as a homogeneous space for U(n), or indeed any unitary group. There is a left action

U(n) × Gr_k(C^n) → Gr_k(C^n), Gr_k(C^n) ≅ V_k(C^n)/U(k),

and if we look at the stabiliser of the orthonormal frame

\begin{pmatrix} I_k \\ 0 \end{pmatrix}

as an element of Gr_k(C^n), then we have that

Gr_k(C^n) ≅ U(n) / (U(k) × U(n − k)). (1.3)

Since Gr_k(C^n) is a quotient of two compact manifolds, this shows that it is, in particular, compact.

1.1.5 Other Homogeneous Spaces

Let us introduce other homogeneous spaces that will be relevant. First the flag manifolds, though in truth we have already seen them, as the Grassmannians are examples of them.

Definition 1.1.5. Given a vector space V, a flag is an increasing sequence of subspaces of V. In particular, it is a collection of subspaces {U_i} such that

{0} = U0 ⊂ U1 ⊂ · · · ⊂ Uk−1 ⊂ Uk = V

A flag is called complete if k = n. We subsequently have that when n = k then dim(U_i) = i. Let d_i = dim(U_i) − dim(U_{i−1}); we define the signature of a flag as the k-tuple

(d1, d2, . . . , dk).

Using flags of a specified signature, we can define the manifold

F(d_1, ..., d_k),

which has flags of signature (d_1, ..., d_k) as points.

Example 1.1.6. The Grassmannians Gr_k(C^n) are examples of flag manifolds, as any subspace U defines the flag 0 ⊂ U ⊂ V. Labelling the Grassmannian by its signature we have that

Gr_k(C^n) = F(k, n − k). (1.4)

Flag manifolds are homogeneous spaces for U(n), as one can show that

F(d_1, d_2, ..., d_k) ≅ U(n) / (U(d_1) × U(d_2) × ⋯ × U(d_k)).

To finish, we can note that the Stiefel manifolds above are homogeneous spaces. Repeating the action we have given above on an orthonormal k-frame, we have an action of U(n) on V_k(C^n). If we choose the orthonormal frame

\begin{pmatrix} I_k \\ 0 \end{pmatrix}

then the isotropy subgroup of this is isomorphic to U(n − k). Hence we have that

V_k(C^n) ≅ U(n) / U(n − k).

Through this we can also note that the sphere S^{2n−1} is a homogeneous space for U(n), as V_1(C^n) ≅ S^{2n−1}.

1.1.6 Principal Bundle Structures

To further analyse the Grassmannians we now recall the notion of a principal bundle. A standard reference, and where this definition is from, is [5].

Definition 1.1.7. Let M be a smooth manifold and G a Lie group. A smooth principal bundle over M with group G consists of a manifold P and a right action of G on P such that the following conditions are satisfied:

• The right action of G on P is free, i.e. we have

  P × G → P

  and pg = p implies that g = e, the identity element.

• M is the quotient space of P by the equivalence relation induced by G, so M = P/G, and the canonical projection

  π : P → M

  is smooth.

• P is locally trivial, i.e. for every point x ∈ M there exists a neighbourhood U ∋ x in M such that π^{-1}(U) is diffeomorphic to U × G. This trivialisation ϕ is compatible with the group action in that, if we write ϕ as two maps φ and ψ such that

  ϕ(p) = (φ(p), ψ(p)),

  then ψ(pg) = ψ(p)g.

We denote a principal bundle P with base M and fibre G by P(M, G), and we have that principal bundles are special examples of fibre bundles. We have that M˚_k(C^n) is a principal GL_k(C) bundle with base Gr_k(C^n). We already know that the right action of GL_k(C) on M˚_k(C^n) is free and holomorphic, and the projection is holomorphic. We just need to demonstrate the trivialisations. Let U_I be as in (1.1), an open subset of Gr_k(C^n). π^{-1}(U_I) is given by the same set without the equivalence relation, so

π^{-1}(U_I) := {A ∈ M˚_k(C^n) | det(A_I) ≠ 0}.

We can define

Ψ : π^{-1}(U_I) → U_I × GL_k(C)

by

Ψ(A) = (π(A), A_I),

so we have the required trivialisations. For more detail on this one can look at [6].

We have exhibited M˚_k(C^n) as the principal bundle M˚_k(C^n)(Gr_k(C^n), GL_k(C)); we can also exhibit the Stiefel manifolds in the same manner as the principal bundle V_k(C^n)(Gr_k(C^n), U(k)). We have seen in (1.3) that the Grassmannian can be given as the quotient manifold U(n)/(U(k) × U(n − k)). From this we will now show that U(n) is a principal U(k) × U(n − k) bundle over Gr_k(C^n). First, as a coset space U(n)/(U(k) × U(n − k)), elements of the Grassmannian can be represented by

U \begin{pmatrix} U_1 & 0 \\ 0 & U_2 \end{pmatrix}

where U ∈ U(n) and (U_1, U_2) ∈ U(k) × U(n − k). There is the natural map

π : U(n) → Gr_k(C^n)

which provides the projection to the base, and the right action on U(n) is just the action of U(k) × U(n − k) as a subgroup of U(n). So U(n)(Gr_k(C^n), U(k) × U(n − k)) with π : U(n) → Gr_k(C^n) is a principal bundle. In fact we can repeat this procedure for F(d_1, ..., d_k) to obtain other principal bundles with total space U(n).

1.1.7 Volumes of Homogeneous Spaces

We are interested in finding the volume of the Grassmannian supermanifolds. To see why this is interesting we need to give the story in the usual case. Let

π : E → M be a fibre bundle E(M,F ) where F is the fibre. We are concerned with the volume of manifolds, this requires a distinguished volume form to integrate. For the purposes of this section suppose that distinguished form is given by the one defined by a Rieman- nian metric. We will look at the case of Riemannian submersions. There is an induced map T π : TE → TM which is the differential of the map π. We can define VE the vertical bundle to be the subbundle of TE corresponding to ker T π. We have that

VEz is the subspace of TzE corresponding to those tangent vectors sent to 0 under

T πz : TzE → Tπ(z)M. Now suppose we have a Riemannian metric on E, so that it is the Riemannian manifold (E, g1). We then have that

TE = VE ⊕ HE

with HE = (VE)^⊥, so that HE_z consists of the tangent vectors which are in the orthogonal complement of VE_z. Returning to π : (E, g_1) → M, we have that HE_z is isomorphic to T_{π(z)}M. We can now define the following.

Definition 1.1.8. Let

π : (E, g1) → (M, g2) be a fibre bundle where E and M have Riemannian metrics g1 and g2 defined on them respectively. We have that

Tπ_z : HE_z → T_{π(z)}M

maps HE_z isomorphically onto T_{π(z)}M. We call the fibre bundle π : (E, g_1) → (M, g_2) a Riemannian submersion if this isomorphism is an isometry.

Now suppose we have U ⊆ M such that U is homeomorphic to R^n and U is dense in M. Such a U always exists for a compact Riemannian manifold M. There exists a trivialisation of π^{-1}(U) such that π^{-1}(U) ≅ U × F. With this we state the following crucial corollary from [1] about the factorisation of volume elements.

Corollary 1.1.9. Let π : E → M be a Riemannian submersion. Let z ∈ E and x = π(z) ∈ M. Suppose we label by dV_E(z) the Riemannian volume element for E, by dV_{F_z}(z) the Riemannian volume element on the fibre F_z induced from the Riemannian metric of E, and by dV_M(x) the Riemannian volume element of the base M at x. Then we have the following relation:

dV_E(z) = dV_{F_z}(z) · dV_M(x).

Why is this important? We have seen that we have the principal bundles π : U(n) → F(d_1, ..., d_k). There is a natural Riemannian metric on the unitary group which is bi-invariant under the action of the group. Using this we can induce a Riemannian metric on the flag manifolds so that the principal bundle becomes a Riemannian submersion. The paper [7] gives an exposition of this in the case of the Grassmannians over R and the corresponding orthogonal groups, which can be repeated for the case of complex Grassmannians and the unitary groups. The total space, the fibres and the base are all compact, and all the fibres have the same volume. Hence we get immediately that

Vol(F(d_1, ..., d_k)) = Vol(U(n)) / ∏_{i=1}^{k} Vol(U(d_i)). (1.5)

So, once you know the volume of the unitary group for any n, the volume of any flag manifold can be easily obtained.

1.1.8 Volume of the Unitary Group

The volume of the unitary group U(n) has been calculated many times in various ways. It is given with various normalisations in [8], [9], [10], [11], [12], and in [13], as a sampling of the literature that mentions it. We will use the formula given in [12], which gives the volumes of all Stiefel manifolds over R, C and H; the unitary group U(n) is a special case of a Stiefel manifold over C, so we can use that formula. We use this formula as this normalisation for the volume of the unitary group results in the standard answer for the volume of CP^n when we calculate it using (1.5). It is also the volume for the unitary group that results from studying the Gaussian integral

∫_{M(C^n)} e^{−Tr(X*X)} dX

over the space of complex n × n matrices, analogously to how one obtains the volume of the sphere using a Gaussian integral. The above Gaussian integral can be used to calculate the volume of any Stiefel manifold. We thus have that

Vol(U(n)) = 2^n π^{n(n+1)/2} / ∏_{i=0}^{n−1} i!. (1.6)

The Barnes G-function

We can write the volume of U(n) using the Barnes G-function. The Barnes G-function is related to the multiple Gamma functions and defined using an infinite product. More details can be found in [14]. For our purposes we only need that it satisfies

G(z + 1) = Γ(z)G(z) (1.7) and that

G(1) = G(2) = G(3) = 1.

We have that it is uniquely defined by (1.7) and the condition that d^3 log G(x)/dx^3 ≥ 0 for real x > 0, so we do not need the precise definition; we can express it neatly for positive integers n as

G(n) = ∏_{i=0}^{n−2} i!,

and this is how we shall use it. Returning to the volume of U(n), we then have that

Vol(U(n)) = 2^n π^{n(n+1)/2} / G(n + 1).
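As a quick numerical sanity check (mine, not from the references above), the following Python snippet evaluates (1.6) directly and via the Barnes G-function expression and confirms that the two agree for small n:

```python
from math import factorial, pi, prod

def barnes_g(n):
    """Barnes G at a positive integer: G(n) = 0! 1! ... (n-2)!, so G(1) = G(2) = G(3) = 1."""
    return prod(factorial(i) for i in range(n - 1))

def vol_U(n):
    """Volume of U(n) from (1.6): 2^n pi^{n(n+1)/2} / (0! 1! ... (n-1)!)."""
    return 2**n * pi**(n * (n + 1) / 2) / prod(factorial(i) for i in range(n))

for n in range(1, 6):
    via_G = 2**n * pi**(n * (n + 1) / 2) / barnes_g(n + 1)
    print(n, vol_U(n), abs(vol_U(n) - via_G) < 1e-9 * vol_U(n))   # the last entry is True
```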

1.1.9 Volume of the Flag Manifolds

We can now state the volume of the flag manifold F(d_1, ..., d_k), using (1.5), which was

Vol(F(d_1, ..., d_k)) = Vol(U(n)) / ∏_{i=1}^{k} Vol(U(d_i)),

so that

Vol(F(d_1, ..., d_k)) = π^{∑_{i<j} d_i d_j} ∏_{i=1}^{k} G(d_i + 1) / G(n + 1). (1.8)

For the Grassmannians Gr_k(C^n) we have, as a special case of the above, using (1.4), that

Vol(Gr_k(C^n)) = π^{k(n−k)} G(k + 1) G(n − k + 1) / G(n + 1).

So this is the story in the usual case. Once one obtains the volume of the unitary group, one can calculate the volume of any flag manifold.
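The following sketch (again only illustrative) evaluates the Grassmannian special case numerically and checks that for k = 1 it reproduces the familiar volume of complex projective space, Vol(CP^{n−1}) = π^{n−1}/(n − 1)!:

```python
from math import factorial, pi, prod

def barnes_g(n):
    # Barnes G at a positive integer n: G(n) = 0! 1! ... (n-2)!
    return prod(factorial(i) for i in range(n - 1))

def vol_grassmannian(k, n):
    """Vol(Gr_k(C^n)) = pi^{k(n-k)} G(k+1) G(n-k+1) / G(n+1)."""
    return pi**(k * (n - k)) * barnes_g(k + 1) * barnes_g(n - k + 1) / barnes_g(n + 1)

# k = 1 recovers complex projective space: Vol(CP^{n-1}) = pi^{n-1} / (n-1)!
for n in range(2, 7):
    assert abs(vol_grassmannian(1, n) - pi**(n - 1) / factorial(n - 1)) < 1e-9

print(vol_grassmannian(2, 4))   # Vol(Gr_2(C^4)) = pi^4 * G(3)^2 / G(5) = pi^4 / 12
```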

1.2 The Super Case: A Summary

Many things in the super case turn out to be generalisations of the usual case, so many of the same differential-geometric constructions can be done for supermanifolds. For brevity, we will omit the precise definitions of supermanifolds and their differential geometry at this point; they will be provided in the subsequent chapter. For supermanifolds there is a generalisation of volume which can be calculated using the Berezin integral. With this generalisation of volume, we still have volume elements, and (1.1.9) still holds true in the super case. In [1] it was proved for the super case first, and as a corollary it holds in the usual case. So the above analysis of the Grassmannian manifolds and flag manifolds, if done carefully, would be valid in the super case. However, we now encounter the main problem. The Berezin integral can produce the following:

Theorem 1.2.1 (Berezin, [15], [16]). Let U(p|q) be the unitary supergroup. If p, q > 0 then

Vol(U(p|q)) = 0. (1.9)

If one of p or q is zero, then we have an ordinary unitary group, as U(p) ≅ U(p|0) ≅ U(0|p). We have that the volume vanishes if and only if p and q are both greater than zero. Every supermanifold has an underlying manifold. In the case of the unitary supergroups U(p|q) this manifold is U(p) × U(q), which has non-zero volume. One of the curiosities of the theory of supermanifolds and their differential geometry is that we can obtain that the volume is 0 despite the fact that there is an underlying usual manifold with non-zero volume.

For the Grassmannian supermanifolds Gr_{r|s}(C^{p|q}) we have, analogously to the usual case, that

Gr_{r|s}(C^{p|q}) ≅ U(p|q) / (U(r|s) × U(p − r|q − s)).

So the Grassmannian supermanifolds are still homogeneous spaces for a unitary group in the super case. However, applying (1.5),

Vol(F(d_1, ..., d_k)) = Vol(U(n)) / ∏_{i=1}^{k} Vol(U(d_i)),

which still holds in the super case, we have that in general the volume would be given by

0 / (0 × 0),

which is undefined. We know, however, from [1], that for the complex projective superspaces we can directly calculate that

Vol(CP^{p|q}) = π^p 2^q / (p − q)!.

From this we can conclude that while the naïve generalisation of (1.5) fails, it is still possible to obtain the volume of these supermanifolds if we work more directly. The main part of this thesis is developing the machinery necessary to calculate the volume of the Grassmannian supermanifolds directly, as the indirect method cannot work in the super case. Through this we have calculated the volume of Gr_{r|s}(C^{p|q}) for certain values of the parameters r, s, p, and q. The current results are summarised in the table below.

We contrast these with the conjectured formula for Gr_{r|s}(C^{p|q}) given in [1], which is

Vol(Gr_{r|s}(C^{p|q})) = 2^{rq+sp−2rs} π^{rp+sq−(r²+s²)} G((r − s) + 1) G((p − q) − (r − s) + 1) / G((p − q) + 1).


r|s and p|q          Vol(Gr_{r|s}(C^{p|q}))     Prediction from the formula

1|0 and p+1|q        π^p 2^q / (p−q)!           π^p 2^q / (p−q)!
0|1 and p|q+1        π^q 2^p / (q−p)!           π^p 2^q / (p−q)!
r|s and p|p          0                          2^{p(r+s)−2rs} π^{p(r+s)−(r²+s²)} G((r−s)+1) G(1−(r−s));
                                                only non-zero when r = s
1|1 and 2|1          2π                         2π
2|0 and 2|1          0                          0
1|1 and 3|1          4π^2                       4π^2
2|1 and 3|1          2π^2                       2π^2
2|0 and 3|2          0                          0
1|1 and 3|2          0                          8π^3
2|1 and 3|2          0                          8π^3
2|2 and 3|2          4π^2                       4π^2
3|1 and 3|2          0                          0
2|0 and 4|2          16π^4                      16π^4
1|1 and 4|2          0                          16π^4
r, s > 0, q < r      0                          N/A
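The conjectured formula can be evaluated mechanically for the concrete rows of the table above. The following sketch (illustrative only; it extends the Barnes G-function to integer arguments m ≤ 0 by G(m) = 0, which is what makes several predictions vanish) reproduces the prediction column, e.g. 2π, 8π^3 and 16π^4:

```python
from math import factorial, pi, prod

def barnes_g(m):
    # Barnes G at integers: G(m) = 0! 1! ... (m-2)! for m >= 1, and 0 for m <= 0.
    return prod(factorial(i) for i in range(m - 1)) if m >= 1 else 0

def conjectured_vol(r, s, p, q):
    """Conjectured Vol(Gr_{r|s}(C^{p|q})) from [1]."""
    return (2**(r*q + s*p - 2*r*s) * pi**(r*p + s*q - (r**2 + s**2))
            * barnes_g((r - s) + 1) * barnes_g((p - q) - (r - s) + 1)
            / barnes_g((p - q) + 1))

rows = [(1, 1, 2, 1), (2, 0, 2, 1), (1, 1, 3, 1), (2, 1, 3, 1), (2, 0, 3, 2), (1, 1, 3, 2),
        (2, 1, 3, 2), (2, 2, 3, 2), (3, 1, 3, 2), (2, 0, 4, 2), (1, 1, 4, 2)]
for r, s, p, q in rows:
    print(f"Gr_{r}|{s}(C^{p}|{q}):", conjectured_vol(r, s, p, q))
# e.g. 2*pi for (1,1,2,1), 8*pi^3 for (1,1,3,2), 16*pi^4 for (2,0,4,2), 0 for (2,0,2,1)
```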

The plan for the rest of this thesis is as follows. We shall give the necessary background on supermanifolds in the next chapter. This shall be all the material needed to define Gr_{r|s}(C^{p|q}) as a complex supermanifold, as well as to define the Berezin integral in the required detail. After this we need to develop a lot of linear superalgebra. We will also give some exposition on Hermitian forms in the super case. Both of these together will form a chapter. The chapter after that will be on the derivation of the Hermitian metric on the Grassmannian supermanifolds and the calculation of the associated volume element. The final chapter will be on calculating the volume of the Grassmannian supermanifolds in some cases. In general it has not been possible to calculate it for all possible parameters, but some progress has been made. There is also an appendix covering some theory on superalgebra essential for the text. It focuses on defining superalgebras and their supermodules, and then on the topics of duality and linear superalgebra. We provide details on the Berezinian and the trace, first in abstract terms and then in coordinates. While supercommutative superrings can be treated in much the same way as commutative rings, they ultimately need to be treated first as noncommutative rings. This means that we have to choose certain conventions. For instance, we detail how having functions written on the left of an argument propagates signs throughout the theory.

Chapter 2

Supermanifolds

2.1 Smooth Supermanifolds

We will here give a brief exposition on what a supermanifold is. We will endeavour to include what we think is necessary for the rest of the text, but in the interests of being concise we may not give all of the detail. We use the theory developed by Berezin, Kostant and Leites and give the definition of a supermanifold in terms of it being a locally ringed space. For this there are many references where more information and detail may be found; two early ones are [17] and [18], and there are also [19], [20], [21], [22], and [23]. A reference which covers much of the same theory, but with a focus on integration theory and the theory of differential forms on supermanifolds, is [24]. We will introduce some algebraic preliminaries first; a fuller exposition on the algebra of superrings and the like that we need is given in the appendix.

Definition 2.1.1. A super vector space is a Z_2-graded vector space V = V_0 ⊕ V_1 over a field k of characteristic not equal to 2. We call elements in V_0 even elements and elements in V_1 odd elements.

We will look at super vector spaces over R and C, so from now on, when referring to the field k, it will be one of R or C. An element of a super vector space is homogeneous if it belongs to one of V_0 or V_1. We will write all formulas assuming that elements are homogeneous unless stated otherwise. A homogeneous element a has a parity, denoted in this text by ã, though conventions differ on how to denote it, with some for instance denoting parity by p(a). For an element a we say that it has parity 0 and is called even if a ∈ V_0, and it has parity 1 if a ∈ V_1, in which case it is called odd.

Definition 2.1.2. There is a parity reversion functor Π on Z_2-graded vector spaces, V ↦ Π(V). We have that

Π(V) = Π(V)_0 ⊕ Π(V)_1

with

Π(V)_0 = V_1 and Π(V)_1 = V_0.

The effect of the parity reversion functor is to change the parity of elements so that odd becomes even and the like. The standard super vector space is:

R^p ⊕ ΠR^q, (2.1)

which we will denote as R^{p|q}. We shall call p|q the super dimension of this super vector space.

Definition 2.1.3. A superalgebra over k, A, is a super vector space with a product such that

A_i A_j ⊆ A_{i+j}.

Definition 2.1.4. A superalgebra is supercommutative if

ab − (−1)^{ãb̃} ba = 0 for all a, b ∈ A.

Example 2.1.5. The standard example of a supercommutative algebra is, given a usual vector space V (with no grading on the vector space), the exterior algebra Λ(V). This algebra is Z-graded; however, if the grading is read modulo 2, then one has a supercommutative algebra.

Remark 2.1.6. We have that the definition of supercommutativity gives rise to the "Koszul sign rule". This is given rigorously in the appendix, but can be loosely stated here as follows: when working over a supercommutative ring, switching the order of two elements means that you pick up a sign based on the parities of the elements involved.

We can phrase the exterior algebra in the following manner. We take the free k-algebra of polynomials in ξ^1, ..., ξ^q and impose the relations

ξ^i ξ^j = −ξ^j ξ^i, 1 ≤ i, j ≤ q.

We call such an algebra a Grassmann algebra. This is isomorphic to the exterior algebra, but we prefer the viewpoint of a Grassmann algebra rather than the exterior algebra of a vector space, as it can be said to be the algebra generated by q odd elements. We denote the Grassmann algebra over k with q generators by k[ξ^1, ..., ξ^q]. We note that an arbitrary element a of a Grassmann algebra can be written uniquely as:

a = a_0 + ξ^i a_i + ξ^i ξ^j a_{ji} + ... + ξ^I a_I + ... + ξ^1 ξ^2 ⋯ ξ^q a_{q,q−1,...,2,1}, (2.2)

where I is a multi-index.
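A minimal computational model of such an algebra may help fix the sign conventions. The following Python sketch (not part of the text's formal development) stores monomials ξ^I as index tuples and multiplies them by counting the transpositions needed to sort the indices, which is exactly the rule ξ^iξ^j = −ξ^jξ^i; repeated generators give zero:

```python
from itertools import product

def mul_monomials(I, J):
    """Multiply xi^I * xi^J (I, J tuples of generator indices); return (sign, sorted tuple) or None if zero."""
    indices = list(I) + list(J)
    if len(set(indices)) != len(indices):
        return None                       # a repeated generator: xi^i xi^i = 0
    sign, arr = 1, indices[:]
    for i in range(len(arr)):             # bubble sort, flipping the sign for each transposition
        for j in range(len(arr) - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                sign = -sign
    return sign, tuple(arr)

def mul(a, b):
    """Multiply two Grassmann algebra elements given as dicts {index tuple: coefficient}."""
    out = {}
    for (I, x), (J, y) in product(a.items(), b.items()):
        m = mul_monomials(I, J)
        if m is not None:
            sign, K = m
            out[K] = out.get(K, 0) + sign * x * y
    return {K: c for K, c in out.items() if c != 0}

xi1, xi2 = {(1,): 1}, {(2,): 1}
print(mul(xi1, xi2))   # {(1, 2): 1}
print(mul(xi2, xi1))   # {(1, 2): -1}  -- the Koszul sign
print(mul(xi1, xi1))   # {}            -- odd generators square to zero
```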

Remark 2.1.7. We shall note here that unless stated otherwise we are employing the Einstein summation rule over indices throughout this thesis.

2.1.1 Superdomains and Supermanifolds

We will follow [23] in the definition of a supermanifold. The definition of a supermanifold can be given succinctly in the following statement.

Definition 2.1.8. A (smooth) supermanifold M of dimension p|q is a paracompact, Hausdorff, second countable topological manifold |M| of dimension p, endowed with a sheaf O_M of supercommutative algebras, locally isomorphic to

C^∞(R^p)[ξ^1, ..., ξ^q].

Morphisms between supermanifolds M and N are pairs of maps φ = (|φ|, φ*) with |φ| : |M| → |N| a continuous map and φ* : O_N → O_M a sheaf morphism above |φ|.

That is the formal definition; however, we are going to detail how one works practically with supermanifolds. When working with usual manifolds we can work with coordinates, which are elements of the algebra of functions on some open subset U of M. So for U we can have coordinates x^i : U → R. We can then express functions using these, and can define vector fields and the like locally using these, making sure any construction is invariant with respect to changing coordinates. The theory of supermanifolds can be developed so that we work in much the same way as we do for usual manifolds. If we label the usual coordinates x^i as being even, then the theory of supermanifolds introduces odd coordinates, the ξ^j above. The theory of supermanifolds then concerns how introducing anticommuting, and therefore necessarily nilpotent, variables affects things. It is then in some sense the study of a very mild noncommutative geometry. Working in the supercommutative setting allows most of the main constructions that are phrased in algebraic terms in differential geometry to be stated for smooth supermanifolds. We can still define the cotangent space at a point p if we take m_p, the maximal ideal of elements vanishing at p, and form m_p/m_p^2. In general, if we can phrase a definition in the case of usual manifolds in terms of the algebra of functions, then one can generalise and define the corresponding object in the super case.

We now look at the supermanifold R^{p|q}; this is the local model for smooth supermanifolds. The underlying topological space is R^p. We define the sheaf of supercommutative R-algebras for open U ⊆ R^p by the assignment

U → O_{R^{p|q}}(U) = C^∞(U)[ξ^1, ..., ξ^q].

Remark 2.1.9. R^{p|q} denotes two different objects: one a super vector space and the other a supermanifold. The same notation being used for both objects is common and can be justified, as we will do later.

One can look at open submanifolds U^{p|q} of R^{p|q}, where |U| ⊂ R^p, and call these superdomains. As is standard in the literature, we say submanifold rather than subsupermanifold, as the latter is too cumbersome an expression.

We could use O(U^{p|q}) to stand for the global sections of the sheaf O_{U^{p|q}}, but we will use an alternative notation for the sections of this sheaf, which is C^∞(U^{p|q}). For a superdomain we have a natural set of coordinates. Namely, we can take (x^i, ξ^j) where x^i ∈ C^∞(U). An element f ∈ O(U^{p|q}) can be expanded as in (2.2) as

f = f_0(x) + ξ^i f_i(x) + ... + ξ^I f_I(x), (2.3)

where f_0(x), f_i(x) and the like are elements of C^∞(U). We define the value of f at a point x to be the unique value λ such that f − λ is not invertible. Due to the expansion above we have that, in these specific coordinates, f(x) = f_0(x) = λ. We will refer to elements of C^∞(U^{p|q}) as functions, as is common in the literature and as suggested by the notation; however, unlike the case of smooth manifolds, we have that a function on a supermanifold is not determined by its value at every point. In fact the value of any "odd" function will be 0, as an odd function is nilpotent. The core objects in studying supermanifolds are the algebras of functions.

Remark 2.1.10. Looking at the algebra C^∞(U)[ξ^1, ..., ξ^q], one can see that if one took a usual manifold M, then took a vector bundle V of rank q on M, and then took that vector bundle's natural exterior bundle Λ(V), then the algebra of sections of that exterior bundle would locally be isomorphic to C^∞(U)[ξ^1, ..., ξ^q]. In fact, by a theorem of Batchelor [25], this holds globally in the case of smooth supermanifolds. It does not hold in more restricted settings like the analytic or complex settings. What makes the theory of supermanifolds different from just being a part of the study of vector bundles is that the morphisms between supermanifolds are more general than vector bundle morphisms.

To demonstrate the morphisms between supermanifolds, let U^{p|q} and V^{m|n} be two superdomains with coordinates (x^i, ξ^i) and (y^j, θ^j) respectively. Then a coordinate transformation from V^{m|n} to U^{p|q} is given as:

x^i = x^i(y) + θ^α θ^β x^i_{βα}(y) + ...,

ξ^i = θ^α ξ^i_α(y) + θ^α θ^β θ^γ ξ^i_{γβα}(y) + ....

Remark 2.1.11. In particular we should note that while we can give coordinates on U^{p|q} as (y^i, θ^i) with the y^i ∈ C^∞(U^p), once we make a change of coordinates all we can say about the new x^j is that they are even elements of C^∞(U^{p|q}); in general they will not correspond to an actual function on U.

In general any morphism between supermanifolds at least locally can always be given as a map from one set of coordinates to another. We have the following theorem.

Theorem 2.1.12 ([23]). Let f : M → U^{p|q} be a morphism of supermanifolds. We define coordinates on a supermanifold as the pullbacks of the coordinates on U^{p|q}. There is a bijection between the morphisms f : M → U^{p|q} and the sets of even and odd coordinates (x^i, ξ^j) on M such that the values x^i(m), for m ∈ |M|, belong to |U|.

Stated again: for every morphism f we have that the pullbacks of the coordinates on U^{p|q} are p even and q odd sections of C^∞(M), and given any such collection of p and q sections there is a morphism into U^{p|q} such that the sections are the pullbacks of the coordinates on U^{p|q}. This makes working in coordinates on supermanifolds a viable way to work with them. This theorem also gives that a generic section f of C^∞(M^{p|q}) can be interpreted as a map f : M^{p|q} → R^{1|1}, which we shall use later.

Remark 2.1.13. In proving the theorem above it is used that, given an arbitrary morphism f, expressions like f(x + θ^1θ^2) can be resolved using a Taylor expansion over odd variables. In particular we have that

f(x + θ^1θ^2) = f(x) + f′(x) θ^1θ^2.

In general such expansions require an arbitrary number of derivatives, so that only C^∞ supermanifolds are easy to make sense of.

So far we have only given superdomains; we now give the supersphere.

Example 2.1.14. We can define other supermanifolds via equations, like the supersphere S^{p|2q}, which is defined as a submanifold of R^{p+1|2q} via the equation:

S^{p|2q} := {x ∈ R^{p+1|2q} | ⟨x, x⟩ = 1}

with x = (y^1, ..., y^{p+1}, ξ^1, ..., ξ^{2q})^T = \begin{pmatrix} y \\ ξ \end{pmatrix} and

⟨x, x⟩ = (y^1)^2 + ... + (y^{p+1})^2 + 2ξ^1ξ^2 + ... + 2ξ^{2q−1}ξ^{2q}.

A specific example is S^{2|2}; this is defined as a submanifold of R^{3|2} by the equation

x^2 + y^2 + z^2 + 2ξζ = 1.

We can define stereographic coordinates from the "north pole" here by the map

f : S^{2|2} → R^{2|2},

f(x, y, z, ξ, ζ) = ( x/(1 − z), y/(1 − z), ξ/(1 − z), ζ/(1 − z) ).

The inverse map f^{-1} is given, for u = (u, v, θ, σ), as

f^{-1}(u, v, θ, σ) = ( 2u/(1 + ⟨u, u⟩), 2v/(1 + ⟨u, u⟩), (⟨u, u⟩ − 1)/(1 + ⟨u, u⟩), 2θ/(1 + ⟨u, u⟩), 2σ/(1 + ⟨u, u⟩) ).

We can develop a similar chart for the south pole and we then have that the supersphere is given by an atlas with two charts with the transition maps being smooth.
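Restricted to the underlying sphere S^2 (i.e. with the odd coordinates set to zero), the two maps above are the ordinary stereographic projection and its inverse. The following SymPy sketch (illustrative only) checks this:

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
r = u**2 + v**2

# Inverse stereographic projection onto the underlying sphere x^2 + y^2 + z^2 = 1.
x, y, z = 2*u/(1 + r), 2*v/(1 + r), (r - 1)/(1 + r)
assert sp.simplify(x**2 + y**2 + z**2 - 1) == 0

# Stereographic projection from the north pole recovers (u, v).
assert sp.simplify(x/(1 - z) - u) == 0
assert sp.simplify(y/(1 - z) - v) == 0
print("f and f^{-1} are mutually inverse on the underlying S^2")
```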

The above example helps demonstrate that we can work with supermanifolds in much the same manner as in the usual case. We have a canonical inclusion |M| ↪ M, and this corresponds to a map C^∞(M^{p|q}) → C^∞(|M|). The map of algebras here corresponds to the map R → R/J_R for a supercommutative ring R, as in the appendix. In effect, this map means that when working with coordinates we can always move to coordinates on the underlying manifold by "setting nilpotents to zero". There are some subtleties for all the constructions we generalise to the super case: for instance, a vector field is not determined by its value at every point, and the algebra of differential forms on a supermanifold does not have elements of "top" degree, as the algebra generated by the "odd" differential forms is a symmetric algebra. We will draw attention to where the theory differs from the usual theory when required.

We shall describe working with supermanifolds using the functor of points approach, as this is very useful when describing Lie supergroups and homogeneous spaces. This is an approach taking inspiration from algebraic geometry. In essence, we can study a supermanifold M by looking at the maps from other supermanifolds to it.

Definition 2.1.15. Let M and T be supermanifolds. A T point of M is a morphism T → M. The set of all T -points of M is denoted by M(T ). We have in other words that a T point is an element of

M(T ) = Hom(T,M).

So given a supermanifold M, we can define the functor of points of the supermanifold M to be the functor

M : (smflds)^{op} → (sets), T ↦ M(T), M(φ)(f) = f ∘ φ.

Using a version of Yoneda's Lemma we can then see that maps between supermanifolds are in bijection with the maps between their functors of points, and that two supermanifolds are isomorphic if their functors of points are. This way of looking at supermanifolds can be developed further. We have the following theorem from [23].

Theorem 2.1.16. Consider the functor

F : (smflds) → (salg) (2.4)

that assigns to each supermanifold M its supercommutative algebra of functions O(M), and to each morphism (|φ|, φ*) the superalgebra map φ*. Then this functor is a full and faithful embedding.

This also means that

Proposition 2.1.17.

Hom(M,N) ' Hom(O(N), O(M)).

What this means is that for smooth supermanifolds everything is determined by their algebra of functions. So if we want to define a map between two supermanifolds we only need to define a morphism between their algebras.

We wrote earlier that in a sense R^{p|q} the super vector space and R^{p|q} the supermanifold can be identified; this is done in the following manner, from [23].

Example 2.1.18. We look at R^{p|q} as a supermanifold. The functor of points approach allows us to bring R^{p|q} as a super vector space and as a supermanifold together. Suppose we have a supermanifold T; then the T-points of R^{p|q} are given by

R^{p|q}(T) = Hom(T, R^{p|q}) = Hom(O(R^{p|q}), O(T)).

Using the theorem on coordinates above we then have the following identifications:

R^{p|q}(T) ≅ {(f*(x^1), ..., f*(x^p), f*(ξ^1), ..., f*(ξ^q)) | f : T → R^{p|q}}

= O(T)_0^p ⊕ O(T)_1^q = (O(T) ⊗ R^{p|q})_0.

Here the first R^{p|q} is as a supermanifold and the last R^{p|q} is as a super vector space, R^{p|q} = R^p ⊕ ΠR^q. This series of equivalences means that we can treat a T-point of R^{p|q} as p even sections of O(T) and q odd sections of the same algebra. Hence, while they are different objects, using the construction above we have that they are intimately related, and conflating the two happens widely in the literature. We will take care to make sure we identify which R^{p|q} we are using.

Ultimately, T-points allow one to work with supermanifolds by treating coordinates on a supermanifold M^{p|q} as p even elements and q odd elements of some superalgebra of functions of a supermanifold T. The functor of points approach is useful for Lie supergroups. This is because the statement that a supermanifold G is a Lie supergroup is the same as the statement that its functor of points is group valued, in that for any supermanifold T we have that G(T) is a group as a set.

Example 2.1.19. Take R^{1|1}. We can define its Lie supergroup structure using T-points. The product morphism ∇ : R^{1|1} × R^{1|1} → R^{1|1} is given by:

∇((t_1, θ_1), (t_2, θ_2)) := (t_1 + t_2 + θ_1θ_2, θ_1 + θ_2),

where (t_i, θ_i) represent two T-points for some supermanifold T. Using this we see that the axioms for a group are satisfied, so we have that R^{1|1} is a Lie supergroup.
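Associativity of this product can be verified by a short direct computation with three T-points. The sketch below (illustrative only; it models the anticommuting generators θ_1, θ_2, θ_3 by nilpotent matrices via a Jordan–Wigner-type construction) carries this check out numerically:

```python
import numpy as np

# Matrix model of three anticommuting odd generators:
# each theta_i squares to zero and theta_i theta_j = -theta_j theta_i for i != j.
a = np.array([[0., 1.], [0., 0.]])
Z = np.diag([1., -1.])
I2 = np.eye(2)
theta1 = np.kron(a, np.kron(I2, I2))
theta2 = np.kron(Z, np.kron(a, I2))
theta3 = np.kron(Z, np.kron(Z, a))
one = np.eye(8)

def mul_law(g, h):
    """The R^{1|1} product on T-points: (t1, th1)*(t2, th2) = (t1 + t2 + th1 th2, th1 + th2)."""
    t1, th1 = g
    t2, th2 = h
    return (t1 + t2 + th1 @ th2, th1 + th2)

g1, g2, g3 = (0.3 * one, theta1), (-1.2 * one, theta2), (2.5 * one, theta3)

left = mul_law(mul_law(g1, g2), g3)
right = mul_law(g1, mul_law(g2, g3))
print(np.allclose(left[0], right[0]) and np.allclose(left[1], right[1]))   # True: associative
```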

Another example is that we can take R^{p|q}(T) with the usual addition, and we have the concept of R^{p|q} as a vector supermanifold. We have introduced the functor of points to allow us to do calculations whose results we can then transfer to the picture of a supermanifold as a topological manifold with a sheaf of supercommutative algebras.

2.1.2 Integration on Supermanifolds

There is a notion of integration on supermanifolds, and it is here that some of the more interesting aspects of supermathematics emerge. An integral on a smooth supermanifold is called a Berezin integral after Felix Berezin [16], who was a pioneer in the study of supermanifolds and supermathematics. The equivalent of the determinant in the super case is called the Berezinian in his honour. The main references we shall use for this are [24], [26], and [1]. The last reference, [1], in §5 and §6, does not go into full details about integration theory, but is a concise reference for how to work with tensors on a supermanifold and how to handle integration.

Let f(x, ξ) ∈ C^∞(R^{p|q}). Then we have from (2.3) that we can expand f as

f(x, ξ) = f_0(x) + ξ^i f_i(x) + ... + ξ^I f_I(x),

or, fully expanding the last term,

f_0(x) + ξ^i f_i(x) + ... + ξ^q ⋯ ξ^1 f_{1,2,...,q}(x).

We label the terms in this expansion by degree, which is defined by the number of ξ^i present. f_0(x) has degree 0, the ξ^i f_i(x) are the degree 1 terms, while ξ^1 ξ^2 ⋯ ξ^q f_{q,q−1,...,2,1}(x) is of degree q. In this expansion there is naturally only one term of degree 0 and only one of degree q. The degree q term is the term with highest degree, and that is how it shall be referred to from now on.

Definition 2.1.20. Suppose f(x, ξ) ∈ C^∞(R^{p|q}) is such that in its expansion the f_I are all compactly supported or rapidly decreasing. Then the Berezin integral of f is

∫_{R^{p|q}} [dx^1, dx^2, ..., dx^p | dξ^1, ..., dξ^q] f(x, ξ) := ∫_{R^p} f_{1,...,q}(x) d^p x.

The condition on all of the fI is required for this to apply under a change of variables.
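As a toy illustration of the definition (with odd dimension q = 2 and one even variable, and with the sign and ordering conventions read off the expansion above), the Berezin integral simply picks out the coefficient of the top odd monomial and integrates it over the even variables; in particular, a function with vanishing top coefficient integrates to zero:

```python
import sympy as sp

x = sp.symbols('x', real=True)

# A function on R^{1|2} is stored by the coefficients of its expansion
#   f = f0(x) + xi^1 f1(x) + xi^2 f2(x) + xi^2 xi^1 f12(x).
# The Berezin integral keeps only the top coefficient f12 and integrates it over R.
def berezin_integral(f0, f1, f2, f12):
    return sp.integrate(f12, (x, -sp.oo, sp.oo))

# A Gaussian carried by the top term integrates to sqrt(pi) ...
print(berezin_integral(sp.exp(-x**2), 0, 0, sp.exp(-x**2)))   # sqrt(pi)

# ... while a nonzero function whose top coefficient vanishes integrates to 0.
print(berezin_integral(sp.exp(-x**2), sp.exp(-x**2), 0, 0))   # 0
```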

We now explain the notation. Here d^p x stands for the standard volume form dx^1 ∧ dx^2 ∧ ⋯ ∧ dx^p, which is a section of the line bundle Λ^p(T^∨R^p). The symbol [dx^1, dx^2, ..., dx^p | dξ^1, ..., dξ^q] is not formed from wedge products of the dx^i and dξ^j, which are elements of the algebra of differential forms on M, but is instead a section of the Berezin module Ber(T^∨R^{p|q}). The Berezinian of a module can be built in a couple of equivalent ways; these can be found in detail in [20], [27], and [19]. However, for our purposes we will adopt the position, as in [1] and [26], that the Berezinian of an R-supermodule M, where R is some supercommutative ring, is such that if A : M → M is an even module automorphism and x ∈ Ber(M), then the induced action on Ber(M) is given by

x ↦ x Ber(A),

where Ber(A) is the Berezinian of the map A. If A is represented by the even supermatrix

A = \begin{pmatrix} A_{00} & A_{01} \\ A_{10} & A_{11} \end{pmatrix},

where A_{00} and A_{11} are blocks of even elements of R and A_{01} and A_{10} are blocks of odd elements of R, then we have

Ber(A) = det(A_{00}) det(A_{11} − A_{10}A_{00}^{-1}A_{01})^{-1} = det(A_{00} − A_{01}A_{11}^{-1}A_{10}) det(A_{11})^{-1}.

In the simplified case where A_{01} and A_{10} are zero, we have that

Ber(A) = det(A_{00}) / det(A_{11}).

For more details on supermatrices in particular, and on the Berezinian as both a module and a function on the space of supermatrices, we refer to the appendix. The symbol [dx^1, dx^2, ..., dx^p | dξ^1, ..., dξ^q] is often given as D(x, ξ), and we will occasionally use this notation as it is a concise shorthand. The longer bracket symbol is, however, more useful for calculations, in that if we replace x with xλ then we have that

[dxλ | dξ] = [dx | dξ] λ,

and for the odd coordinates we have

[dx | dξλ] = [dx | dξ] λ^{-1}.

The bracket notation, and the way it transforms, relate to integration as per the following theorem from [1].

Theorem 2.1.21. Suppose we have an invertible change of coordinates (x(y, θ), ξ(y, θ)) on R^{p|q}; then we have that

∫_{R^{p|q}} D(x, ξ) f(x, ξ) = ± ∫_{R^{p|q}} D(y, θ) (D(x, ξ)/D(y, θ)) f(x(y, θ), ξ(y, θ)),

where

D(x, ξ)/D(y, θ) = Ber \begin{pmatrix} ∂x^i/∂y^j & ∂ξ^i/∂y^j \\ ∂x^i/∂θ^j & ∂ξ^i/∂θ^j \end{pmatrix}

and

± = sgn det(∂x^i/∂y^j).

Here we can see how the change of coordinates and the Berezinian are reflected in the bracket notation. If we scale an odd coordinate by multiplying it by λ, then we get that the bracket is multiplied by

Ber \begin{pmatrix} I_p & 0 \\ 0 & diag(1, ..., λ, ..., 1) \end{pmatrix} = λ^{-1}. (2.5)

If we change the even coordinates by a linear map A_{00} and the odd coordinates by a similar map A_{11}, so that we do not mix the coordinates, we have that the bracket would be multiplied by

det(A_{00}) / det(A_{11});

only for transformations that mix odd and even coordinates do we need the full expression of the Berezinian. This is how the Berezinian was first found, in relation to the change of coordinates in the Berezin integral. The Ber function has the following algebraic characterisation, which is elaborated upon in the appendix:

Ber(e^A) = e^{Trs(A)}

for an even supermatrix A, where

Trs \begin{pmatrix} A_{00} & A_{01} \\ A_{10} & A_{11} \end{pmatrix} = Tr(A_{00}) − Tr(A_{11}).
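For an even supermatrix whose odd blocks vanish, this characterisation reduces to det(exp(A_{00}))/det(exp(A_{11})) = exp(Tr(A_{00}) − Tr(A_{11})), which the following NumPy/SciPy sketch (illustrative only) checks numerically:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
p, q = 3, 2

# An even supermatrix with zero odd blocks: A = diag(A00, A11).
A00 = rng.standard_normal((p, p))
A11 = rng.standard_normal((q, q))

# For such A, Ber(exp A) = det(exp A00) / det(exp A11), while Trs(A) = Tr A00 - Tr A11.
ber_exp = np.linalg.det(expm(A00)) / np.linalg.det(expm(A11))
exp_str = np.exp(np.trace(A00) - np.trace(A11))

print(np.isclose(ber_exp, exp_str))   # True: Ber(e^A) = e^{Trs(A)}
```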

Returning to integration on supermanifolds: partitions of unity exist on smooth supermanifolds, so we can define integration on them as in the usual case. We will finish with an example showcasing the differences between integration in the usual case and in the super case.

Example 2.1.22. Let f ∈ C^∞(R^{p|q}) be such that

f(x, ξ) = f_0(x) + ξ^i f_i(x) + ... + ξ^q ξ^{q−1} ⋯ ξ^1 f_{q,q−1,...,2,1}(x)

and that f_{q,q−1,...,2,1}(x) is 0. Then

∫_{R^{p|q}} D(x, ξ) f(x, ξ) = ∫_{R^p} 0 d^p x = 0.

So in the super case we can integrate a nonzero function and arrive at the answer 0. It is because of this phenomenon that Berezin found in [15] that

Vol(U(p|q)) = 0;

elaborating, there is no term of highest degree when integrating some natural volume form, so the volume is 0. An explicit example of this calculation for U(1|1) is found in [1].

2.2 The Grassmannian Supermanifold

2.2.1 Complex Supermanifolds

C^{p|q}, the super vector space

We now need to look at complex supermanifolds. There are two different notions of complex supermanifold in the literature: one in which the odd variables can be decomposed as ξ = η + iζ, so that there is a notion of the conjugate of an odd variable, and the other where no such structure is assumed. We will be using the notion that odd variables can be decomposed into their real and imaginary parts. This approach can be found in [21], as can some of the following material. The other approach is, for example, present in [26]. First we need some linear superalgebra. Let R^{2p|2q} be a super vector space. The standard complex structure J on R^{2p|2q} is given by

J = \begin{pmatrix} J_p & 0 \\ 0 & J_q \end{pmatrix}

where

J_l = diag\left( \begin{pmatrix} 0 & −1 \\ 1 & 0 \end{pmatrix}, ..., \begin{pmatrix} 0 & −1 \\ 1 & 0 \end{pmatrix} \right)

and the block is repeated l times. This amounts to giving coordinates on R^{2p|2q} as (x^1, y^1, ..., η^q, ζ^q)^T, so they come in pairs (x^i, y^i)^T and (η^i, ζ^i)^T. We would just write this as a column, but that is an inefficient use of space, so we will write it as a transposed row; by the transpose here, though, we shall mean a naïve transpose. We have that J^2 = −I, and with this one can complexify R^{2p|2q} to obtain R^{2p|2q} ⊗ C. We can extend J to act on this new space by the identity, so J ⊗ Id. After this we can see that J has two eigenvalues, i and −i. Labelling R^{2p|2q} by V, we can define two eigenspaces

V1,0 := {v ∈ V | J(v) = iv} and V0,1 := {v ∈ V | J(v) = −iv}.

We can define C^{p|q} to be the eigenspace V_{1,0}. We can then change coordinates so that the coordinates (z^i, ξ^i) of V_{1,0} are related to the coordinates of R^{2p|2q} by z^j = x^j + iy^j and ξ^j = η^j + iζ^j, so we have defined C^{p|q}. This is all we need for now; however, we will return to this when discussing Hermitian forms on supermanifolds.

Cp|q, the supermanifold

Let R^{2p|2q} also denote the corresponding supermanifold. Its algebra of functions is C^∞(R^{2p|2q}). We can tensor this with C to form C^∞(R^{2p|2q}) ⊗ C, and do the same with the tangent bundle to form TR^{2p|2q} ⊗ C. We can then introduce a complex structure on the sections of the complexified tangent bundle. Suppose f ∈ C^∞(R^{2p|2q}) ⊗ C; this can be interpreted as a map into R^{1|1} ⊗ C such that

    f(x^i, y^i, η^i, ζ^i) = u(x^i, y^i, η^i, ζ^i) + iv(x^i, y^i, η^i, ζ^i) + α(x^i, y^i, η^i, ζ^i) + iβ(x^i, y^i, η^i, ζ^i).

We can look at the supermatrix of partial derivatives Df; this will form a 2p|2q × 2|2 supermatrix. We then say that f is holomorphic if

    J Df = Df J′,

where J and J′ are the square supermatrices representing the complex structures of the spaces of size 2p|2q and 2|2 respectively. In other words this is the condition that f satisfies the Cauchy-Riemann equations. As in the usual case, if we have that z^j = x^j + iy^j and ξ^j = η^j + iζ^j, then the condition that f is holomorphic reduces to

    ∂f/∂z̄^j = 0    and    ∂f/∂ξ̄^j = 0.

We denote the holomorphic functions by H^{p|q}. We have that

H^{p|q} ≃ H^p ⊗ C[ξ^1, . . . , ξ^q] = H^p[ξ^1, . . . , ξ^q],

where H^p is the algebra of holomorphic functions on C^p and C[ξ^1, . . . , ξ^q] is a Grassmann algebra over C. A complex supermanifold is then a topological space M with a sheaf of algebras such that locally it is isomorphic to (|C^p|, H^{p|q}).

2.2.2 The Grassmannian supermanifolds

We look at the case of the complex Grassmannian supermanifolds. These were first defined in [19] and further study has gone into them since. They were defined using an atlas; in addition to using an atlas we will look at them using homogeneous coordinates. For more detail on treating the Grassmannians as homogeneous spaces we have [23]. Let us start with the complex supermanifold GL_{p|q}(C). Again this notation can refer to two objects. We will start with the simpler one, with GL_{p|q}(C) an ordinary complex manifold, and then move on to GL_{p|q}(C) the complex supermanifold. Using the functor of points these two objects can be regarded as different aspects of the same

object. Let us look at C^{p|q}, the super vector space. Then GL_{p|q}(C) can first be defined as the even invertible endomorphisms of C^{p|q}; even here means those maps of C^{p|q} which preserve the grading. Here we have that

    GL_{p|q}(C) ≃ GL_p(C) × GL_q(C).

The complex supermanifold GL_{p|q}(C) is the pair

    ( GL_{p|q}(C), H^{p²+q²|2pq} ),

where the underlying manifold is

    GL_{p|q}(C) ≃ GL_p(C) × GL_q(C).

This is a supermanifold of dimension p²+q² | 2pq, as its elements are p|q × p|q matrices. We can also look more generally at matrix supermanifolds, so we can have the space of r|s × p|q matrices over C. GL_{p|q}(C) is a Lie supergroup with the group operation being matrix multiplication.

Let M_{r|s}(C^{p|q}) be the supermanifold of p|q × r|s complex matrices, so a complex supermanifold of dimension pr+qs | ps+qr. We can multiply a matrix on the right by an element of GL_{r|s}(C). A matrix M is called nondegenerate if there exists an element g ∈ GL_{r|s}(C) such that Mg contains a submatrix of size r|s × r|s which is the identity matrix. This is similar to what a nondegenerate matrix is in the usual case; however, in the usual case we use the determinant and specify that it has to be nonzero, whereas in the super case the Berezinian is not defined on all matrices, so it cannot be used. Restrict now to the nondegenerate matrices; this gives an open sub-supermanifold. There is an action of the Lie supergroup GL_{r|s}(C) on this supermanifold. Analogously to the usual case this gives the Grassmannian supermanifolds Gr_{r|s}(C^{p|q}). The underlying complex manifold is Gr_r(C^p) × Gr_s(C^q), corresponding to the graded subspaces of dimension r|s of the super vector space C^{p|q}. We can develop charts on these supermanifolds in the following way.

A generic element A of Gr_{r|s}(C^{p|q}) is an equivalence class of supermatrices, where B is equivalent to A if there exists an element g ∈ GL_{r|s}(C) such that B = Ag. It is represented by a p|q × r|s matrix

    A = ( A00  A01 )
        ( A10  A11 )

where A00 and A11 are matrices of even coordinates and A01 and A10 are matrices of odd coordinates. Let I|J be a multi-index selecting r|s out of the p|q rows of the matrix A. This selection forms a matrix A_{I|J}. We have a reduced matrix (A_{I|J})_red coming from setting the nilpotent elements to zero. This matrix being nondegenerate then defines an open set of Gr_r(C^p) × Gr_s(C^q). So we can use these matrices to define open subsets U_{I|J} of Gr_{r|s}(C^{p|q}), with the condition that A_{I|J} is invertible. We can then, as in the usual case, define maps

ϕ : U_{I|J} → C^{pr+qs|ps+qr}

by

    ϕ(A) = A A_{I|J}^{-1}.

For the case of I|J = {1, 2, . . . , r | 1, . . . , s} we have that

    A A_{I|J}^{-1} = ( I_r   0   )
                     ( W00   W01 )
                     ( 0     I_s )
                     ( W10   W11 ),

so that Gr_{r|s}(C^{p|q}) is an r(p−r)+s(q−s) | r(q−s)+s(p−r) dimensional complex supermanifold. With these maps we can develop an atlas for the Grassmannian supermanifolds. This is one way of looking at the Grassmannian supermanifolds. Similarly to the usual case we can define the Grassmannian supermanifolds as homogeneous spaces for the unitary supergroups. U(p|q) is defined as a subgroup of GL_{p|q}(C).

U(p|q) := {U ∈ GL_{p|q}(C) | H^{-1} U^{TC} H U = U† U = I_{p|q}},

where

    H = ( I_p  0    )      and, if   U = ( U00  U01 ),   then   U^{TC} = ( U00^*   −U10^* )
        ( 0    iI_q )                    ( U10  U11 )                    ( U01^*    U11^* ).

This matrix defines the standard Hermitian metric on C^{p|q}, the super vector space. We then have that

    Gr_{r|s}(C^{p|q}) ≃ U(p|q) / ( U(r|s) × U(p−r|q−s) ).

Example 2.2.1. We finish with the well studied Grassmannian supermanifolds, the

complex projective superspaces CP^{p|q} ≃ Gr_{1|0}(C^{p+1|q}). Coordinates are given as

    z = [z^0 : · · · : z^p, ξ^1 : · · · : ξ^q]^T,

where T is just the ordinary transpose. We can define maps to C^{p|q} by

    ϕ_i(z) = ( z^0/z^i, . . . , z^p/z^i, ξ^1/z^i, . . . , ξ^q/z^i )^T.

Here one of the terms will be identically 1, so ϕ_i is really a map to C^{p+1|q}, but we can project down to C^{p|q} so that this becomes a chart. Viewing ϕ_i as a map into C^{p+1|q} is useful, however, and will be used in the calculations later.
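For example, for CP^{1|1} = Gr_{1|0}(C^{2|1}) the homogeneous coordinates are z = [z^0 : z^1, ξ^1]^T, and on the open set where the reduced part of z^0 is invertible the chart is

    ϕ_0(z) = ( 1, z^1/z^0, ξ^1/z^0 )^T,

so, after projecting away the constant entry, (z^1/z^0, ξ^1/z^0) give coordinates identifying this open set with C^{1|1}.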

Chapter 3

Hermitian Forms and the Kronecker Product


We now discuss Hermitian forms for supermodules. Bilinear forms in the super case are considered in [19]. A full exposition of Hermitian forms in the usual case is given in [28]. We will adapt and expand on these works to give an exposition of Hermitian forms in the super case. We detail the abstract case before moving to the case of working in coordinates. The ultimate aim is to derive an induced metric on hom(U, V ) for two Hermitian spaces (U, g) and (V, h) and to show that this induced metric fails to be positive definite even if g and h are. We derive this form since, using it, we can write down an induced Hermitian form on the Grassmannian supermanifolds coming from the Hermitian form on C^{p|q}. To calculate the volume element for our Hermitian supermanifolds we need to calculate the Berezinian of the supermatrix defining the Hermitian form. Our Hermitian form on the Grassmannian supermanifolds will be given as the trace of a composition of supermatrices. We develop the Kronecker product of two supermatrices and then its relation to the Berezinian in order to give an explicit form for the supermatrix which defines a Hermitian form when it is given as a trace, and then compute its Berezinian.

3.1 Hermitian Forms

3.1.1 Bilinear forms

In the following, for bilinear forms, we let M be a right reflexive supermodule over a supercommutative ring R. Reflexive means that the map I_M : M → M^{∨∨} is an isomorphism.

Definition 3.1.1. A bilinear form b on a right R-supermodule M is a map

    b : M × M → R

which is biadditive and such that

    b(mλ, nκ) = (−1)^{(m̃+b̃)λ̃} λ b(m, n) κ.

It is called an even form if ˜b = 0 and odd if ˜b = 1.

We can also identify b with an even or odd map b : M ⊗ M → R which is right linear. We will work with even forms, so from now on by a bilinear form b we shall mean an even form. 46 CHAPTER 3. HERMITIAN FORMS AND THE KRONECKER PRODUCT

Given b we can associate to this bilinear form another bilinear form b_τ, which we will call the opposite form, defined by

    b_τ(m, n) = (−1)^{m̃ñ} b(n, m).

Using the viewpoint that b is a right linear map b : M ⊗ M → R, this form is defined as

    b_τ = b ◦ C_{M⊗M},

where C_{M⊗M} is the braiding isomorphism. We can associate to b a mapping b_l : M → M^∨, called the left adjoint of b, defined by

    b_l(m)(n) = b(m, n),    m, n ∈ M.

We say that b is non-degenerate if this map b_l is an isomorphism. We have that b_τ defines another mapping b_r : M → M^∨:

    b_r(m)(n) = b_τ(m, n) = (−1)^{m̃ñ} b(n, m),

which we call the right adjoint. We have the following:

Lemma 3.1.2.

    b_r = b_l^∨ I_M.    (3.1)

Here b_l^∨ is the dual map to b_l,

    b_l^∨ : M^{∨∨} → M^∨.

Proof. We have on the left hand side that

    b_r(m)(n) = (−1)^{ñm̃} b(n, m),

and on the right hand side that

    b_l^∨ I_M(m)(n) = (−1)^{b̃m̃} I_M(m)(b_l(n))
                    = (−1)^{b̃m̃} (−1)^{m̃(b̃+ñ)} b_l(n)(m)
                    = (−1)^{m̃ñ} b(n, m).

In particular this implies that b_τ is non-degenerate if b is, as I_M is an isomorphism. The adjoint maps give a map from a supermodule M into its dual M^∨; if this map is nondegenerate, i.e. an isomorphism, then we may define a form p on M^∨. We first define this via the maps b_l and b_r. As the form is nondegenerate, b_l^{-1} and b_r^{-1} exist. These are maps from M^∨ to M, however we need a map from M^∨ to M^{∨∨}. We can obtain such maps by composing them with the canonical map I_M. So we define p_r, and we can construct analogously the map p_l, by:

    p_r : M^∨ → M^{∨∨}    (3.2)
    p_r = I_M b_l^{-1}.    (3.3)

We have that:

Lemma 3.1.3.

    p_l = p_r^∨ I_{M^∨}.

Proof. We have, for elements ω, τ ∈ M^∨:

    p_r^∨ I_{M^∨}(ω)(τ) = (−1)^{b̃ω̃} I_{M^∨}(ω)(p_r(τ))
                        = (−1)^{τ̃ω̃} p_r(τ)(ω)
                        = (−1)^{τ̃ω̃} I_M b_l^{-1}(τ)(ω)
                        = (−1)^{τ̃ω̃} I_M(b_l^{-1}(τ))(ω)
                        = (−1)^{b̃ω̃} ω(b_l^{-1}(τ)),

and for p_l we have that

    p_l(ω)(τ) = I_M b_r^{-1}(ω)(τ)
              = I_M (b_l^∨ I_M)^{-1}(ω)(τ)
              = I_M I_M^{-1} (b_l^{-1})^∨(ω)(τ)
              = (b_l^{-1})^∨(ω)(τ)
              = (−1)^{b̃ω̃} ω(b_l^{-1}(τ)).

We can then define p : M^∨ × M^∨ → R, for ω, τ ∈ M^∨, by:

    p(ω, τ) = (−1)^{b̃ω̃} ω(b_l^{-1}(τ)).    (3.4)

So far, we haven’t imposed any symmetry conditions. The bilinear forms that we want to study are the symmetric ones. Symmetry conditions come from (3.1) in that we say that our bilinear form is symmetric if

b_l^∨ I_M = b_l,    (3.5)

which is the same condition as:

b(m, n) = (−1)m˜ n˜b(n, m) (3.6) or that a bilinear form is equal to its opposite form. When talking of a symmetric bilinear form b we can then drop the l subscript in the adjoint and just refer to b where context makes it clear what we are considering. We can summarise this in the following definition:

Definition 3.1.4. A symmetric form b on an R-supermodule M is either a symmetric non-degenerate bilinear form b : M × M → R or an isomorphism b : M → M^∨ such that b = b^∨ I_M.

3.1.2 Superinvolutions

Since we will be working with a Hermitian form on a complex supermanifold we first need to introduce involutions and sesquilinear forms in the proper setting. Adapting the definition that we use in the usual case, we can make the following definition of an involution.

Definition 3.1.5. Given a superring R, not necessarily supercommutative, an invo- lution on R is an antiautomorphism σ of order ≤ 2 such that:

σ(r + s) = σ(r) + σ(s) σ(rs) = (−1)r˜s˜σ(s)σ(r)

Note that σ(1) = 1. We denote a ring R with an involution σ by (R, σ). Not all rings admit an involution; however, every supercommutative ring admits an involution, namely the identity map, since then σ(xy) = xy = (−1)^{x̃ỹ} yx = (−1)^{x̃ỹ} σ(y)σ(x). So we can subsume the bilinear forms above into the theory of sesquilinear forms.

Let R be a supercommutative ring with involution σ_R. Given an R-algebra A, we say that A is an R-algebra with involution if it has an involution σ_A which extends σ_R, so that

    σ_A(ra) = σ_R(r) σ_A(a)

for r ∈ R and a ∈ A. Now we want to see how involutions and modules over a ring R are compatible. Let (R, σ) be a superring with an involution σ (again we stress that we don't require any commutativity constraints on the ring R). From any right module M we can define a left module (M̄, ∗). The elements of M̄ are the same as the elements of M; however, we denote an element in M by m and the same element in M̄ by m̄. In line with how it is presented in the appendix we let ∗ denote the left action of R on M̄, and it acts by the rule that:

r ∗ m¯ = (−1)r˜m˜ mσ(r).

We shall call M the opposite module to M (relative to the involution σ), so given (R, σ) and given any right (left) module N then we can define a new left (right) module N using this construction. In fact is a covariant functor from the category of right (left) R modules to the category of left (right) R modules. In order to demonstrate that this is a functor we need to look at morphisms. If we have a morphism of right R modules A : M → N then we need to define a morphism between the left modules M and N. Given A then we can define a new morphism A¯ : M → N by the rule that for an elementm ¯ of M then:

(m ¯ )A = (−1)m˜ A˜A(m).

Remark 3.1.6. Writing the morphism A on the right is essential here in order to have consistent signs.

We then have that:

(r ∗ m¯ )A¯ = (−1)A˜(˜r+m ˜ )A(mσ(r)(−1)r˜m˜ )

= (−1)A˜m˜ A(m)σ(r)(−1)r˜(A˜+m ˜ )

= r ∗ (m̄)Ā,

so that this morphism is a left module homomorphism as required. The duality functor is a contravariant functor from the category of right (or left) modules to the category of left (right) modules. So, given the right module M, from M^∨ we can construct its opposite module, which we denote M^∗ and call the antidual space. This is a right R module, and the operation of applying ∗ is a contravariant functor from the category of right modules to itself. We shall call this the antiduality functor. The right action on φ̄ ∈ M^∗ is given by:

(φ,¯ r)(m) 7→ (φ¯ ∗ r)(m) = (−1)r˜φ˜σ(r)φ(m)

φ has a bar analogously to the situation with m andm ¯ . Both denote the same map however we impose a bar to signal whether we are treating it as an element of the dual or antidual space. Given a homomorphism A : M → N then we can define a homomorphism A∗ : N ∗ → M ∗ by the usual pullback in that:

A∗(φ¯)(m) = (−1)A˜φ˜(φ)A∨(m).

It interacts with the right action by:

A∗(φ¯ ∗ r)(m) = (σ(r)φ)A∨(m)(−1)r˜φ˜(−1)A˜(˜r+φ˜)

= σ(r)φ(A(m))(−1)r˜(φ˜+A˜)(−1)φ˜A˜

= (−1)r˜(φ˜+A˜)σ(r)A∗(φ¯)(m)

= (A∗(φ¯) ∗ r)(m).

Given M one can also apply these functors in the other order to get the space M¯ ∨. Given an element ψ ∈ M¯ ∨ then we have that:

(r ∗ m¯ )ψ = r(m ¯ )ψ.

We have (m)ψ ∈ R so the product in the last equality is the result of (r, (m ¯ )ψ) being multiplied together in R. The right action of R on M¯ ∨ is given by:

(m ¯ )(ψ, r) 7→ (m ¯ )ψr.

Lemma 3.1.7. Let M be a right module over a ring with involution (R, σ) then the map: ∗ ¯ ∨ DM : M → M given for φ¯ ∈ M ∗ and m¯ ∈ M¯ by

¯ m˜ φ˜ (m ¯ )DM (φ) := (−1) σ(φ(m)) 3.1. HERMITIAN FORMS 51 defines an isomorphism of right modules and hence D gives a natural transformation of the functors ∗ and ¯ ∨

¯ ¯ ∨ Proof. First we show that (m)DM (φ) is an element of M .

¯ m˜ r˜ ¯ (r ∗ m)DM (φ) = ((−1) mσ(r))DM (φ) = (−1)φ˜(˜r+m ˜ )σ(φ((−1)m˜ r˜mσ(r)))

= (−1)φ˜r˜rσ(φ(m)) ¯ = r(m)DM (φ)

So it is an element of M¯ ∨. Now to show that the map is right linear.

¯ m˜ (φ˜+˜r) r˜φ˜ (m)DM (φ ∗ r) = (−1) σ((−1) σ(r)φ(m)) = (−1)φ˜m˜ σ(φ(m))r ¯ = (m)DM (φ)r

The inverse map is given by:

−1 ψ˜m˜ DM (ψ)(m) := (−1) σ((m ¯ )ψ).

We then have that:

−1 ψ˜(m ˜ +˜r) DM (ψ)(mr) = (−1) σ((mr ¯ )ψ) = (−1)ψ˜(m ˜ +˜r)σ((−1)r˜m˜ σ(r)(m ¯ )ψ)

= (−1)ψ˜m˜ σ((m ¯ )ψ)r

−1 = DM (ψ)(m)r and

−1 m˜ (ψ˜+˜r) DM (ψr)(m) = (−1) σ((m ¯ )ψr) = (−1)r˜(ψ˜+m ˜ )(−1)m˜ (ψ˜+˜r)σ(r)σ((m ¯ )ψ)

r˜ψ˜ −1 = (−1) σ(r)DM (ψ)(m)

−1 = ((DM (ψ)) ∗ r)(m). 52 CHAPTER 3. HERMITIAN FORMS AND THE KRONECKER PRODUCT

To show that these are inverse we have that:

−1 ¯ φ˜m˜ ¯ DM (DM (φ))(m) = (−1) σ((m ¯ )DM (φ)) = (−1)φ˜m˜ σ((−1)φ˜m˜ σ(φ(m)))

= φ¯(m) and similarly for the other direction.

We have for a pair of left and right R module the notion of a dual pairing from [29].

Definition 3.1.8. A dual pairing is given by two modules, P, a left R module, and Q, a right R module, together with a map P × Q → R denoted by the pairing ⟨ , ⟩. The map satisfies the following properties:

    ⟨ , ⟩ is biadditive,
    ⟨rp, q⟩ = r⟨p, q⟩,
    ⟨p, qr⟩ = ⟨p, q⟩r,
    ⟨p, Q⟩ = 0 implies p = 0, and ⟨P, q⟩ = 0 implies q = 0.

We have a natural dual pairing in M^∨ × M, in that there is a map from M^∨ × M into R given, for ω ∈ M^∨ and m ∈ M, by ⟨ω, m⟩, which has the required properties for a dual pair ⟨ , ⟩. We note in particular that this is a pairing between the right module M and the left module M^∨. We now want to see how, using the bar functor, we can define another dual pair related to the original one. We can't do this in quite the same manner, so we take M̄ × M^∗, as M̄ is a left module and M^∗ is a right module. Using the involution we can define the pairing between m̄ ∈ M̄ and ω̄ ∈ M^∗ by

    ⟨m̄, ω̄⟩ = (−1)^{ω̃m̃} σ(ω(m)).    (3.7)

One can check and see that this gives a dual pairing. Given M ∗ we can apply ∗ again to come to the module M ∗∗. As in the case of

∨∨ M we have that there is a natural map IM which in this case is defined by:

¯ ¯ φ˜m˜ IM (m)(φ) = ¯ιm(φ) = (−1) σ(φ(m)). 3.1. HERMITIAN FORMS 53

We have that this is a right linear map from M to M ∗∗. The right action on M ∗∗ is given by: (C¯, r)(φ¯) = (C¯ ∗ r)(φ¯) = (−1)r˜C˜σ(r)C(φ¯) for C¯ ∈ M ∗∗. To show that this definition is correct we have:

¯ φ˜(m ˜ +˜r) IM (mr)(φ) : = (−1) σ(φ(mr)) = (−1)φ˜(m ˜ +˜r)σ(φ(m)r)

= (−1)φ˜(m ˜ +˜r)+˜r(φ˜+m ˜ )σ(r)σ(φ(m))

m˜ r˜ ¯ = (−1) σ(r)IM (m)(φ) ¯ = (IM (m) ∗ r)(φ).

For any A : M → N we can define A∗∗ and we have that the following map commutes

M A N (3.8) IM IN ∗∗ M ∗∗ A N ∗∗ i.e. we have that ∗∗ ¯ ¯ A (ιm)(ψ) = ιA(m)(ψ). for ψ¯ ∈ N ∗. We have this as A∗∗ : M ∗∗ → N ∗∗ is defined by A∗∗(C¯)(ψ¯) = (−1)A˜C˜C¯(A∗(ψ¯)) and hence we can see that

∗∗ ¯ m˜ A˜ ∗ ¯ A (ιm)(ψ) = (−1) ι¯m(A (ψ)) = (−1)m˜ A˜+m ˜ (A˜+ψ˜)σ(A∗(ψ¯)(m))

= (−1)ψ˜(A˜+m ˜ )σ(ψ(A(m)))

and

¯ ψ˜(A˜+m ˜ ) ιA(m)(ψ) = (−1) σ(ψ(A(m))) and so prove that the square in (3.8) commutes. As in the case of the usual double dual we have that the I is a natural transformation between the identity functor and the double antidual functor. 54 CHAPTER 3. HERMITIAN FORMS AND THE KRONECKER PRODUCT

Opposite modules and semilinear maps

Before moving on to sesquilinear forms as we will use them later we want to introduce semilinear maps.

Definition 3.1.9. Given a left module M over a ring with involution (R, σ) then an (even) semilinear map f is a map from the left module M to its opposite, the right module M, f : M → M such that

f(mr) = (−1)m˜ r˜r ∗ f(m)

The only example we shall use, is what effectively the identity map,

M → M m 7→ m.¯

And we have that mr 7→ (−1)r˜m˜ r ∗ m¯ =mσ ¯ (r).

Moreover in the case where we use it R will be supercommutative so we can interpret this as a map of right modules.

3.1.3 Sesquilinear forms

Definition 3.1.10. Let M be a module over a ring with involution (R, σ). Then an even sesquilinear form on M is a map

    h : M × M → R

such that

    h(xλ, yκ) = (−1)^{λ̃x̃} σ(λ) h(x, y) κ.

Given a sesquilinear form then one can induce, as in the normal bilinear case, the

∗ map hl : M → M , the left adjoint map, which has the property that hl(x)(y) = h(x, y). A form is nondegenerate if hl is an isomorphism. Given h one can also look at the opposite form hτ

hτ : M × M → R

x˜y˜ hτ (x, y) = (−1) σ(h(y, x)) 3.1. HERMITIAN FORMS 55

We can generate from this the right adjoint of h, hr, and we have that

∗ hr = hl IM which is shown by the following computation

m˜ n˜ hr(m)(n) = (−1) σ(h(n, m)) and

∗ ∗ (hl IM )(m)(n) = hl (¯ım)(n)

= ¯ım(hl(n))

m˜ n˜ m˜ n˜ = (−1) σ(hl(n)(m)) = (−1) σ(h(n, m))

This is also true for odd sesquilinear forms but we won't be considering them in great detail. Now, similarly to the bilinear case above, given a nondegenerate sesquilinear form on a module M we can induce a form on the antidual space M^∗. We have that the map

−1 ∗ ∗∗ µl := IM hr : M → M is the desired one. Given α, β ∈ M ∗ then we have that

−1 α˜β˜ −1 µl(α)(β) = IM hr (α)(β) = (−1) σ(β(hr (α)))

∗ Using that hr = hl IM we then have that this simplifies to:

−1 α(hl (β)).

So, similarly to (3.4), we can define a sesquilinear form

µ : M ∗ × M ∗ → R (3.9)

which will be given by :

−1 µ(α, β) = α(hl (β)). (3.10)

One can show that this form satisfies all the same properties as a sesquilinear form should. One can look at how the left and right adjoint of a sesquilinear form relate to each other as this imposes symmetry conditions on our sesquilinear form. We will focus on the case where

∗ (hl) IM = hl, 56 CHAPTER 3. HERMITIAN FORMS AND THE KRONECKER PRODUCT

∗ or dropping the l, we say that h = h IM and we then say that the form is Hermitian. In terms of the sesquilinear form this is stating that we focus on the case where

x˜y˜ h(x, y) = hτ (x, y) = (−1) σ(h(y, x)).

Definition 3.1.11. Given h : M → M ∗, then we say that this is a Hermitian form if h is symmetric:

x˜y˜ h(x, y) = hτ (x, y) = (−1) σ(h(y, x)) and is an isomorphism. We call the pair (M, h) a Hermitian space.

3.1.4 Hermitian forms over supercommutative rings

Now suppose our ring R is supercommutative as well as having an involution σ : R → R defined on it. Then given any of the modules we have defined earlier we can define an action on the opposite side making them bimodules. Using supercommutativity we can also write all maps on the left of arguments. For example if ψ : M¯ → R is an element of M¯ ∨ then we can now write

ψ(m ¯ ) and we have that ψ(m ¯ ∗ r) = ψ(mσ ¯ (r)) = ψ(m ¯ )r.

Using this we also have that (3.1.7) can now be expressed in simpler terms. We have that ¯ DM (φ)(m ¯ ) = σ(φ(m)) and

−1 DM (ψ)(m) = σ(ψ(m ¯ )).

Now suppose R is a supercommutative algebra over C. This will be our setting when working with the Grassmannian supermanifolds. We’ll start with the simplest, but still important, case where our algebra is C.

Example 3.1.12. A supermodule over C is given by a super vector space Cr|s. Now we require for a Hermitian form on Cr|s that we have, for z, w ∈ Cr|s,

h(z, w) = (−1)z˜w˜h(w, z). 3.1. HERMITIAN FORMS 57

In particular this implies that

z˜ h(z, z) = i λ, λ ∈ R so that when z is even then h(z, z) is real and when it is odd then h(z, z) is purely imaginary. In the case where z and w are of different parities then we have that

h(z, w) = 0 for consistency as there are no odd elements in C as a superring.

Example 3.1.13. Now suppose we look at the supercommutative algebra

A := C[z̄^1, z^1, . . . , z̄^p, z^p ; ξ̄^1, ξ^1, . . . , ξ̄^q, ξ^q],

with the z^j being even, the ξ^j being odd variables, and we have that σ(z^j) = z̄^j and σ(ξ^j) = ξ̄^j. We shall call elements of this algebra polynomials, as the algebra A is the equivalent of an algebra of polynomials in the usual case. A generic element a ∈ A is given as

    a = a_0(z̄, z) + a_i(z̄, z) ξ^i + a_ī(z̄, z) ξ̄^i + · · · + a_I ξ̄^1 ξ^1 · · · ξ̄^q ξ^q,

where the ai are usual polynomials in z andz ¯. We have that for P ∈ M, an A module, then h(P,P ) will be an element of A where the coefficients of the polynomial, i.e. the coefficients of the a0, ai etc, will be real for even P and imaginary for odd P . Here we can allow that h(P,Q) 6= 0 for elements of different parity as A contains odd elements.

Given a ring like the usual ring of real polynomials R[x], such rings often have evaluation homomorphisms associated to them. So given a polynomial P we can substitute an element a ∈ R to obtain P(a) ∈ R, and P(x) ↦ P(a) is a ring homomorphism. We have an evaluation homomorphism for every a ∈ R. Using this one can define positive elements of R[x]: these are the elements Q of R[x] such that Q(a) is positive for all a ∈ R. With that in mind, we now wish to define the notion of a positive definite Hermitian form in the super case.

Definition 3.1.14. Suppose R is a supercommutative algebra over C and M an R module. Let Rred be the reduced ring of R obtained by forming the quotient ring R/JR 58 CHAPTER 3. HERMITIAN FORMS AND THE KRONECKER PRODUCT

where JR is the canonical ideal generated by the nilpotent elements. We also suppose that Rred has evaluation homomorphisms into C. We say that

h : M × M → R is positive definite if, for even m, we have that

h(m, m)red

is a positive element of R_red. For odd n ∈ M we require that

    i^{-1} h(n, n)_red

is a positive element of Rred.

Given an element r in a superring R where Rred can be evaluated then more generally we shall call r positive if rred is.

Tensor Product of Hermitian Forms

We have that given two modules over a supercommutative ring R, U and V , that we can form their tensor product U ⊗ V . Now suppose these are both Hermitian spaces (U, g) and (V, h). Then as g and h are maps from a module to their antidual module then we can tensor these maps to give a form on U ⊗ V given by the rule:

g ⊗ h(u ⊗ v)(w ⊗ t) := (−1)w˜v˜g(u, w)h(v, t).

This is a Hermitian form. If we have two positive definite Hermitian forms then the tensor product can be seen to also be positive definite. We’ll elaborate on the case where u ⊗ v is an even element given by the tensor product of two odd elements as this is the only case where it is not clear. Then we have that

g ⊗ h(u ⊗ v)(u ⊗ v) = −g(u, u)h(v, v)

= −i2λµ for positive λ, µ ∈ A

= λµ which is positive. 3.1. HERMITIAN FORMS 59

3.1.5 Hermitian Form on hom(U, V )

Now we want to look at the case of hom(U, V ) for two supermodules U and V over a supercommutative ring. If we have that (U, g) and (V, h) are two Hermitian spaces then we want to induce a Hermitian form on the module hom(U, V ) which is a map

hom(U, V ) → hom(U, V )∗

How do we do this? We have that

hom(U, V ) ' V ⊗ U ∨, if U and V are finite dimensional, and we use this to induce a form on hom(U, V ). What we need is a map V ⊗ U ∨ → (V ⊗ U ∨)∗.

We have that similarly to the case without involution that

(V ⊗ U ∨)∗ = U ∨∗ ⊗ V ∗.

We will construct a map from V ⊗ U ∨ to V ∗ ⊗ U ∨∗ and then the commutativity isomorphism gives an element in (V ⊗ U ∨)∗. In detail we have

U ∨∗ = (U ∨)∨.

Now g is a map U → U ∗. We need from this using the tools above to derive a map

∨ ∨∗ −1 ∨ U → U . The required map is (g ) ◦ DU ∨ . We need DU ∨ in order to have the image lie in U ∨∗. So we form the map

−1 ∨ ∨ ∗ ∨∗ h ⊗ (g ) ◦ DU ∨ : V ⊗ U → V ⊗ U and we have that our induced Hermitian form on V ⊗ U ∨ is given by

h ⊗ (g^{-1})^∨ ◦ D_{U^∨} (v ⊗ α)(w ⊗ β) = (−1)^{α̃w̃} h(v, w) D_{U^∨}((g^{-1})^∨(α))(β)
                                        = (−1)^{α̃w̃} h(v, w) σ(g(α, β)),

where g is the induced Hermitian form on U^∗.

Remark 3.1.15. Since any element of U ∗ can be interpreted as an element of U ∨ then that g is formally defined on U ∗ presents no problems. 60 CHAPTER 3. HERMITIAN FORMS AND THE KRONECKER PRODUCT

Remark 3.1.16. From this we can also note that the induced Hermitian form on the

dual space U^∨ is

    ⟨α, β⟩ = σ(g(α, β)).    (3.11)

So we have that a Hermitian form z on V ⊗ U^∨ can be given as

    z(v ⊗ α, w ⊗ β) = (−1)^{α̃w̃} h(v, w) σ(g(α, β)).    (3.12)

We shall come back to this, however we will discuss coordinates first.

3.1.6 Coordinates

We now put this all in terms of coordinates. It is useful to spell out how one can switch between coordinates and abstract maps. Working with finitely generated free modules, suppose we have an R module M with basis {e_i}. We shall relate every construction to this basis in what follows.

Conjugate map and the Conjugate Dual map

Suppose we have an (R, σ) module M with basis {e_i}. Now M̄ is the same abelian group as M, and so as a basis for M̄ we can take the same elements as before, however this time we label them {e_ī}, so with a barred index. Using the dual pairing (3.7) we can now define the dual basis {e^j̄} of {e_ī} by

    ⟨e_ī, e^j̄⟩ = (−1)^{ĩj̃} σ(⟨e^j, e_i⟩) = (−1)^{ĩj̃} σ(δ_i^j).    (3.13)

Now supposing that we have a module homomorphism A : M → N, we can work out from this pairing what Ā : M̄ → N̄ looks like in coordinates, keeping track of the left and right module structures. A is represented by a matrix A = (a_i^j), or

    A = ( A00  A01 )
        ( A10  A11 ),

and we recall that A(e_i) = e_j a_i^j. Now if a linear map B were to apply on the right then we would have that

    (e_i)B = b_i^j e_j.

Using (3.13) and that (m̄)Ā = (−1)^{Ãm̃} A(m), if the matrix of Ā is B = (b_ī^j̄), then

    ⟨(e_ī)Ā, e^j̄⟩ = (−1)^{(ĩ+Ã)j̃} σ(⟨e^j, (−1)^{ĩÃ} A(e_i)⟩)
    ⟨b_ī^k̄ ∗ e_k̄, e^j̄⟩ = (−1)^{Ã(ĩ+j̃)+ĩj̃} σ(⟨e^j, e_k a_i^k⟩)
    b_ī^k̄ (−1)^{k̃j̃} σ(δ_k^j) = (−1)^{Ã(ĩ+j̃)+ĩj̃} σ(a_i^k) σ(δ_k^j)
    b_ī^j̄ = (−1)^{(Ã+j̃)(ĩ+j̃)} σ(a_i^j).

So we have that Ā is represented by the matrix

    ( A00^*             (−1)^{Ã+1} A10^* )
    ( (−1)^Ã A01^*       A11^*           ).

We will label this later as A^{TC}. If we have a row vector m (so a row of elements of R representing an element of M̄) then we have that

    (m)Ā = m A^{TC}.

From now on we will focus on the case where R is supercommutative, so that we can always write morphisms on the left. If we are working in a supercommutative setting then we can induce a right module structure on any left module naturally, using that supercommutativity. If R is supercommutative then we can define the reverse pairing of e_ī and e^j̄, which is given by

    ⟨e^j̄, e_ī⟩ = σ(δ_i^j).

If we are now writing Ā on the left, then we have that

    ⟨e^j̄, Ā(e_ī)⟩ = ⟨e^j̄, e_k̄ ∗ b_ī^k̄⟩ = ⟨e^j̄, e_k̄⟩ b_ī^k̄ = b_ī^j̄,
    ⟨e^j̄, Ā(e_ī)⟩ = σ⟨e^j, A(e_i)⟩ = σ(a_i^j),

so that

    b_ī^j̄ = σ(a_i^j),

and so Ā is represented relative to the basis {e_ī} by σ(a_i^j), i.e.

    Ā = ( Ā00  Ā01 )
        ( Ā10  Ā11 ).

In the interest of clarity, this is what Ā looks like when we are representing Ā(m) with m a column vector of elements of R. Using similar methods we can then see that A^∗ : N^∗ → M^∗ is given by the matrix A^{CT}, which is

    A^{CT} = ( A00^*               (−1)^Ã A10^* )
             ( (−1)^{Ã+1} A01^*     A11^*       ),

where, if the original matrix has entries a_i^j, then (a^∗)_i^j = σ(a_j^i). If R is a supercommutative algebra over C then the ∗ on a block is the usual conjugate transpose, and so A^{CT} = A^{ST}. Similarly to the case without conjugation involved we can define

    A^{TC} = ( A00^*             (−1)^{Ã+1} A10^* )
             ( (−1)^Ã A01^*       A11^*           ),

or A^{TC} = A^{TS}. In terms of the entries a_j^i, the entries of A^{CT} are (−1)^{(Ã+j̃)(ĩ+j̃)} σ(a)_i^j and those of A^{TC} are (−1)^{(Ã+ĩ)(ĩ+j̃)} σ(a)_i^j. Let v be a column vector. We then define

    ∗v = σ(^t v),

so that for an even vector v = (v, ξ)^T we have

    ∗v = ( v^∗  −ξ^∗ ),

and in general

    ∗v = ( v^{1∗}  (−1)^{ṽ+1} v^{2∗} )

for a homogeneous vector v = (v^1, v^2)^T. We can summarise the relations to do with the conjugate dual map, as in the appendix, with the following:

(A^{CT})^{TC} = (A^{TC})^{CT} = A,    (3.14)

    ∗(vA) = (−1)^{ṽÃ} A^{CT} ∗v,
    ∗(Aw) = ∗w A^{TC} (−1)^{w̃Ã},
    (CT)^4 = Id,   (CT)^3 = TC,   (CT)^2 ≠ Id,
    (B + C)^{CT} = B^{CT} + C^{CT},
    (BC)^{CT} = (−1)^{B̃C̃} C^{CT} B^{CT},

omitting similar relations dealing with TC.    (3.15)
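Since the various transposes are easy to confuse, the following minimal sketch (Python with numpy, an illustration only) checks the purely block-and-sign relations above for an even supermatrix with numeric blocks; relations whose signs come from genuinely odd entries, such as the product rule, are not visible at this level.

    import numpy as np

    def blocks(M, p, q):
        # split a (p+q) x (p+q) matrix into its four supermatrix blocks
        return M[:p, :p], M[:p, p:], M[p:, :p], M[p:, p:]

    def assemble(A00, A01, A10, A11):
        return np.block([[A00, A01], [A10, A11]])

    def ct(M, p, q):
        # A^{CT} for an even supermatrix: conjugate-transpose the blocks,
        # swap the off-diagonal blocks, with a sign on the lower-left one
        A00, A01, A10, A11 = blocks(M, p, q)
        return assemble(A00.conj().T, A10.conj().T, -A01.conj().T, A11.conj().T)

    def tc(M, p, q):
        # A^{TC}: as above but with the sign on the upper-right block
        A00, A01, A10, A11 = blocks(M, p, q)
        return assemble(A00.conj().T, -A10.conj().T, A01.conj().T, A11.conj().T)

    p, q = 2, 3
    A = np.random.randn(p + q, p + q) + 1j * np.random.randn(p + q, p + q)
    K = np.diag([1.0] * p + [-1.0] * q)

    assert np.allclose(tc(ct(A, p, q), p, q), A)           # (A^{CT})^{TC} = A
    assert np.allclose(ct(tc(A, p, q), p, q), A)           # (A^{TC})^{CT} = A
    assert not np.allclose(ct(ct(A, p, q), p, q), A)       # (CT)^2 != Id
    assert np.allclose(ct(ct(A, p, q), p, q), K @ A @ K)   # (CT)^2 flips the sign of the odd blocks
    A4 = A
    for _ in range(4):
        A4 = ct(A4, p, q)
    assert np.allclose(A4, A)                              # (CT)^4 = Id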

We also have the following relationship for these matrices, as a consequence of (3.8):

    K^{-1} A^{(TC)²} K = A,

where

    K = ( I   0  )
        ( 0  −I ),

and since K is its own inverse we also have that

    K A^{(TC)²} K^{-1} = A.

Before moving on it is worth describing the following. Let ω ∈ M^∗ and m ∈ M. We can obtain ω(m) from these. How does this work in coordinates? We have that

ω = e^ī ∗ ω_ī    and    m = e_j m^j.

Denoting by the same letter ω the column vector with elements ω_ī, and similarly for m, we have that

    ω(m) = ∗ω m.

This will be immediately applicable in the next section.

Sesquilinear forms in coordinates

Given a sesquilinear form h : M × M → R we wish to associate a matrix to it, given a basis {e_i}. We define the entries of the matrix H representing the form by h_{īj} = h(e_i, e_j). We then have, for two elements m and n given by column vectors (or right coordinates), that

    h(m, n) = ∗m H n.

We now want to look at what the map h_l looks like in coordinates. h_l satisfies that for m, n ∈ M we have

    h_l(m)(n) = h(m, n),

so if h_l is represented by a matrix B then

    h_l(m)(n) = ∗(Bm) n = ∗m B^{TC} n,

so that B^{TC} = H and B = H^{CT}.

∗ We have in terms of the left and right adjoint that hr = hl IM . Now in coordinates we have seen that the map IM is given by the matrix ! I 0 K = , 0 −I

∗ where I is the usual identity matrix of the appropriate size. So as hr = hl IM then we have that if hr is represented by a matrix C then

C = H(CT )2 K

(CT )2 or that hr(m) = H Km. Now let h be a Hermitian form so that

∗ h = h IM .

In coordinates this works out to say that our H is of the form

    H = ( H00     H01 )
        ( H01^*   H11 ),

with H00^* = H00 and H11^* = −H11. This is equivalent to the condition that:

HCT = H(CT )2 K (3.16)

∗ This condition is that h = h IM though using the rules for manipulating supermatrices in this case it can be restated as

H = KHCT . (3.17)

This can also be stated as

˜˜ı hij = (−1) hji. (3.18)

For the anti dual space in the event that the sesquilinear form is nondegenerate we have seen from (3.9) that there is a induced form µ on the anti-dual space given by

µ : M ∗ × M ∗ → R

−1 µ(α, β) = α(hl (β)).

Giving α and β in right coordinates then we have that in terms of matrices and coordinates we have that µ(α, β) = ∗α(HCT )−1β. 3.1. HERMITIAN FORMS 65

If our form is Hermitian we then have, using (3.16), that

µ(α, β) = ∗αH−1Kβ.

Now we return to the discussion of positive definiteness of Hermitian forms. We have that a Hermitian form is positive definite if H00_red is a positive definite matrix and i^{-1}H11_red is positive definite, where H_red stands for the matrix with the nilpotents set to 0. If we look at the Hermitian form in the antidual case then we have that this is positive definite if the original form is. Namely the matrix of this form is

    ( (H00 − H01 H11^{-1} H10)^{-1}                 −H00^{-1} H01 (H11 − H10 H00^{-1} H01)^{-1} )
    ( −H11^{-1} H10 (H00 − H01 H11^{-1} H10)^{-1}    (H11 − H10 H00^{-1} H01)^{-1}              ).

The submatrix corresponding to the even-even case is given by

    (H00 − H01 H11^{-1} H10)^{-1};

this is positive definite. We can see this via the Woodbury formula, alternatively called the matrix inversion lemma.

Lemma 3.1.17 (Woodbury matrix identity). For elements A, U, C, and V in a ring R we have that if A and C are invertible then the following holds.

(A + UCV )−1 = A−1(A − U(C−1 + VA−1U)−1V )A−1

Proof. Can be done by a direct check.
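For matrices over R or C, a quick numerical sanity check of the identity as stated above (a sketch only, assuming numpy; the matrices involved are generically invertible) might look like:

    import numpy as np
    from numpy.linalg import inv

    rng = np.random.default_rng(0)
    n, k = 5, 3
    A = rng.standard_normal((n, n)) + n * np.eye(n)   # comfortably invertible
    C = rng.standard_normal((k, k)) + k * np.eye(k)
    U = rng.standard_normal((n, k))
    V = rng.standard_normal((k, n))

    lhs = inv(A + U @ C @ V)
    rhs = inv(A) @ (A - U @ inv(inv(C) + V @ inv(A) @ U) @ V) @ inv(A)
    assert np.allclose(lhs, rhs)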

This is usually stated just for matrices but it holds for arbitrary rings. From this lemma we get that

(H00 − H01 H11^{-1} H10)^{-1} = H00^{-1} + H00^{-1} H01 (H11 − H10 H00^{-1} H01)^{-1} H10 H00^{-1},

so that the matrix is H00^{-1}, which is positive definite if H00 is, plus some nilpotent terms, and hence the whole matrix is positive definite. For the odd-odd block we have that it is given by

    −H11^{-1} − H11^{-1} H10 (H00 − H01 H11^{-1} H10)^{-1} H01 H11^{-1}.

Using that i^{-1}H11 is positive definite, we have that (i^{-1}H11)^{-1} = i^{-1}(−H11^{-1}) is positive definite, so similarly to the even-even matrix we get that the odd-odd matrix in the antidual case is positive definite, as required. Now we return to the case of the induced metric on hom(U, V ).

Theorem 3.1.18. Given two Hermitian spaces (U, g) and (V, h), with positive definite

Hermitian forms, then the induced Hermitian form z (3.12) is not positive definite.

Proof.

    z(v ⊗ α, w ⊗ β) = (−1)^{α̃w̃} h(v, w) σ(g(α, β)).    (3.19)

Suppose we apply z to v ⊗ α where both v and α are odd (so that v ⊗ α is even). We have that

    z(v ⊗ α, v ⊗ α) = −h(v, v) σ(g(α, α))
                     = −(iλ)(−iμ)
                     = i² λμ
                     = −λμ.

Hence, as both λ and μ are positive, z(v ⊗ α, v ⊗ α) is negative.

Remark 3.1.19. This was noticed for super vector spaces in [30].

We will want to work more directly using that elements of hom(U, V ) can be given as matrices. With that in mind we have the following theorem:

Theorem 3.1.20. Let (U, g) and (V, h) be two Hermitian spaces. Let A and B be two elements of hom(U, V ) given as matrices. Then we have the following expression for the induced Hermitian form on hom(U, V ):

    ⟨A, B⟩ = Trs(G^{-1} A^{TC} H B).

j ∨ Proof. Let {fi} be a basis of V and {e } a basis for U then, denoting all the metrics 3.1. HERMITIAN FORMS 67 by brackets, we have that for A, B ∈ V ⊗ U ∨:

i j k l i j k l hA, Bi =hfiaje , fkbl e i = hfiaj ⊗ e , fkbl ⊗ e i

i k j l ˜(˜b+˜l) =hfiaj, fkbl ihe , e i(−1)

j l i k (˜+˜l)(˜+˜a+˜l+˜b) ˜(˜b+˜l) =he , e ihfiaj, fkbl i(−1) (−1) now as h , i is a Hermitian form being sesquilinear in the first argument we get

j l i k (˜+˜l)(˜+˜a+˜l+˜b)+˜(˜b+˜l) ˜ı(˜ı+˜+˜a) =he , e iajhfi, fkibl (−1) (−1) from (3.11 we get

j¯l ˜l i k (˜+˜l)(˜+˜a+˜l+˜b)+˜(˜b+˜l)+˜ı(˜ı+˜+˜a) =g (−1) ajh¯ıkbl (−1) from (3.18) we have

l¯ i k (˜+˜l)(˜+˜a+˜l+˜b)+˜(˜b+˜l)+˜ı(˜ı+˜+˜a)+˜+˜˜l =g ajh¯ıkbl (−1)

l¯ i k (˜+˜l)+(˜+˜l)(˜a+˜b)+˜˜b+˜+˜ı(˜ı+˜+˜a) =g ajh¯ıkbl (−1)

˜l(1+˜a+˜b) l¯ i (˜a+˜ı)(˜ı+˜) k =(−1) g (aj(−1) )h¯ıkbl putting this in terms of the original matrices we get:

=sT r(G−1ATC HB) with ATC = ATS from above

So we have expressed the induced Hermitian form on hom(U, V ) in terms of the trace of matrices.

We can get to this result another way via defining the adjoint of a map A ∈ hom(U, V ).

Definition 3.1.21. Given the spaces U and V with their metrics given by brackets h , iU and h , iV then we can define the adjoint of a map A : U → V by the relationship that for any u ∈ U and v ∈ V then A† : V → U is defined by:

A˜u˜ † hA(u), viV = (−1) hu, A (v)iU .

We then have that for g : U → U ∗ and h : V → V ∗ represented by matrices G and H 68 CHAPTER 3. HERMITIAN FORMS AND THE KRONECKER PRODUCT respectively then

h(A(u))(v) = (−1)A˜u˜g(u)(A†(v))

∗(HCT Au)v = (−1)A˜u˜ ∗(GCT u)A†v

(−1)A˜u˜ ∗uATC Hv = (−1)A˜u˜ ∗uGA†v

so that

ATC H = GA†

A† = G−1ATC H.

We can then using this define the metric on hom(U, V ) by:

† hA, Bi = Trs(A B) which we can see is the same as

−1 TC Trs(G A HB) due to the definition of the adjoint. The adjoint of two maps A and B satisfies the following properties:

(AB)† = (−1)A˜B˜ B†A†

(A†)† = A

(A†)−1 = (A−1)†

(Ar)† = (−1)r˜A˜σ(r)A† for r a scalar.

It is worth explicitly spelling out why (A†)† = A. We have that

(A†)† = (G−1ATC H)†

= H−1(G−1ATC H)TC G

= H−1HTC A(TC)2 (G−1)TC G

we now apply (3.17) so that as HTC = HK

= H−1HKA(TC)2 (GK)−1G

= KA(TC)2 K−1G−1G

= KA(TC)2 K−1 = A. 3.2. LINEAR SUPERALGEBRA 69

So while the super version of transposition is of period 4 we have that the adjoint is an involution as we expect. Another property worth mentioning is that if we scale both H and G by the same constant then the adjoint is unchanged. From this we can easily demonstrate that the induced metric on hom(U, V ) is not positive definite as the following example shows.

Example 3.1.22. Let U = V = C^{p|q}, the super vector space, so that hom(U, V ) = end(C^{p|q}), and let A = Id_{p|q}. The standard metric on C^{p|q} is given by the matrix

    H = ( I_p   0    )
        ( 0     iI_q ).

We then have that

    ⟨I, I⟩_{end(C^{p|q})} = Trs(H^{-1} I_{p|q} H I_{p|q}) = Trs(H^{-1} H) = Trs(I_{p|q}) = p − q.

So for even A it is not the case that ⟨A, A⟩_{end(C^{p|q})} ≥ 0 always holds (take q > p and A the identity), and hence the Hermitian form is not positive definite.

3.2 Linear Superalgebra

We have expressed our Hermitian form as the trace of the product of matrices. For our purposes we need to express this with column vectors so that we can get the supermatrix which defines the Hermitian form. For this purpose we need to develop vectorisation and the Kronecker product in the super case.

3.2.1 Vectorisation and the Kronecker Product

First we give an example.

Example 3.2.1. Let B,A ∈ End(U) with U a module over a supercommutative ring so that End(U) is also a module. We have that

B(Aλ) = B(A)λ 70 CHAPTER 3. HERMITIAN FORMS AND THE KRONECKER PRODUCT

In other words that the composition of A with B on the left is a linear map from End(U) to itself.

More generally we have that, given a specific B ∈ hom(U, V ), this induces an action on the space hom(T, U) for any module T. Namely we can define a map

    B : hom(T, U) → hom(T, V ),    (3.20)

acting on A ∈ hom(T, U), given by the following.

B(A) = BA.

This is right linear. If we have two modules that are finite dimensional then we should be able to express the linear map B in terms of a matrix acting on a column vector constructed from A. The way to do this is via Vectorisation and the Kronecker Product. We will first recall and restate how these work in the usual case before moving on to the super case. More on the Kronecker product for usual matrices can be found in [31].

3.2.2 Vectorisation

Definition 3.2.2. Let A ∈ hom(U, V ). We define the map vec : hom(U, V ) → U^∨ ⊗ V by the composition of

    λ_{U,V}^{-1} : hom(U, V ) → V ⊗ U^∨    (3.21)

with the map

    c_{V⊗U^∨} : V ⊗ U^∨ → U^∨ ⊗ V.

We call this the vectorisation of A. In coordinates, if A = f_i a^i_j e^j as a matrix, then this gives the result

    vec A = e^j ⊗ f_i a^i_j.

Now in terms of matrices, given a usual 2 × 2 matrix B, so that

    B = ( b^1_1  b^1_2 )
        ( b^2_1  b^2_2 ),

we have that vec, which will denote vectorisation in the usual setting, is given by

    vec B = ( b^1_1 )
            ( b^2_1 )
            ( b^1_2 )
            ( b^2_2 ).

In cruder terms, the effect is that for any matrix B with columns bi then vectorisation is stacking them one on top of each other with the first column on top and the last column on the bottom.

3.2.3 The Kronecker Product

Suppose we have u ∈ U and v ∈ V, elements of a module given in right coordinates, so that u = Σ_{i=1}^n e_i u^i and v = Σ_{j=1}^q f_j v^j. We can look at the image of these two elements upon tensoring them together, which is u ⊗ v. If u is the column vector representing u and v the same for v, then the column vector u ⊗ v representing u ⊗ v is given by

    u ⊗ v = ( u^1 v )
            ( u^2 v )
            (   ⋮   )
            ( u^n v ).    (3.22)

Given A : U → S and B : V → T, then in terms of linear maps we can form A ⊗ B, another linear map, which acts as

    (A ⊗ B)(u ⊗ v) = A(u) ⊗ B(v),

and so in right coordinates, if A(u) = (A(u)^1, . . . , A(u)^m)^T, then

    A(u) ⊗ B(v) = ( A(u)^1 B(v) )
                  (      ⋮      )
                  ( A(u)^m B(v) ).

Now A ⊗ B ∈ Hom(U ⊗ V, S ⊗ T ) is a linear map, so there is a matrix C such that

    C ( u^1 v )   ( A(u)^1 B(v) )
      (   ⋮   ) = (      ⋮      )
      ( u^n v )   ( A(u)^m B(v) ).

This matrix is the Kronecker product of A and B.

Definition 3.2.3 (The Kronecker Product). Given A : U → S and B : V → T represented by matrices

    A = ( a^1_1 · · · a^1_n )          B = ( b^1_1 · · · b^1_q )
        (   ⋮    ⋱     ⋮   )    and        (   ⋮    ⋱     ⋮   )
        ( a^m_1 · · · a^m_n )              ( b^p_1 · · · b^p_q ),

the Kronecker product of the two matrices A ⊗ B is given by

    A ⊗ B := ( a^1_1 B  · · ·  a^1_n B )
             (   ⋮        ⋱      ⋮    )
             ( a^m_1 B  · · ·  a^m_n B ),

or, written out explicitly, an mp × nq matrix whose entries are the products a^i_k b^j_l.

Example 3.2.4. To demonstrate the Kronecker Product in action let: ! ! a1 a1 b1 b1 A = 1 2 and B = 1 2 2 2 2 2 a1 a2 b1 b2 and ! ! u1 v1 u = and v = . u2 v2 3.2. LINEAR SUPERALGEBRA 73

We have that:

 1 1 1 1 1 1 1 1  1 1 a1b1 a1b2 a2b1 a2b2 u v  1 2 1 2 1 2 1 2  1 2 a1b1 a1b2 a2b1 a2b2 u v  (A ⊗ B)(u ⊗ v) =     a2b1 a2b1 a2b1 a2b1 u2v1  1 1 1 2 2 1 2 2   2 2 2 2 2 2 2 2 2 2 a1b1 a1b2 a2b1 a2b2 u v

 1 1 1 1 1 1 1 2 1 1 2 1 1 1 2 2 a1b1u v + a1b2u v + a2b1u v + a2b2u v  1 2 1 1 1 2 1 2 1 2 2 1 1 2 2 2 a1b1u v + a1b2u v + a2b1u v + a2b2u v  =   a2b1u1v1 + a2b1u1v2 + a2b1u2v1 + a2b1u2v2  1 1 1 2 2 1 2 2  2 2 1 1 2 2 1 2 2 2 2 1 2 2 2 2 a1b1u v + a1b2u v + a2b1u v + a2b2u v and that: ! ! a1u1 + a1u2 b1v1 + b1v2 A(u) ⊗ B(v) = 1 2 ⊗ 1 2 2 1 2 2 2 1 2 2 a1u + a2u b1v + b2v  1 1 1 2 1 1 1 2  (a1u + a2u )(b1v + b2v )  1 1 1 2 2 1 2 2  (a1u + a2u )(b1v + b2v ) =   (a2u1 + a2u2)(b1v1 + b1v2)  1 2 1 2  2 1 2 2 2 1 2 2 (a1u + a2u )(b1v + b2v )

 1 1 1 1 1 1 1 2 1 1 2 1 1 1 2 2 a1b1u v + a1b2u v + a2b1u v + a2b2u v  1 2 1 1 1 2 1 2 1 2 2 1 1 2 2 2 a1b1u v + a1b2u v + a2b1u v + a2b2u v  =   a2b1u1v1 + a2b1u1v2 + a2b1u2v1 + a2b1u2v2  1 1 1 2 2 1 2 2  2 2 1 1 2 2 1 2 2 2 2 1 2 2 2 2 a1b1u v + a1b2u v + a2b1u v + a2b2u v So the Kronecker product gives what we expect.

Remark 3.2.5. This definition is also compatible with the way that we wrote the column vector u ⊗ v in (3.22).

The Kronecker Product satisfies that for compatible matrices then

(A ⊗ B)(C ⊗ D) = (AC ⊗ BD) in particular we have that

(A ⊗ I)(I ⊗ B) = A ⊗ B.

This relation is true in terms of linear maps as well so the Kronecker Product is again just a translation of abstract map into a statement about matrices representing those linear maps once we have chosen basis for all the modules involved. We have seen that B : hom(T,U) → hom(T,V ) 74 CHAPTER 3. HERMITIAN FORMS AND THE KRONECKER PRODUCT given by B(A) = BA is a right linear map. Given A we can form vec A and we can also form vec BA, the Kronecker product relates these and matrix multiplication through the relation that

vec BA = (I ⊗ B) vec A (3.23)

So the matrix defining the linear map given by multiplying A on the left by B is (I ⊗ B). We can also define a map, using a given A ∈ hom(T,U),

A∨ : hom(U, V ) → hom(T,V ) and this acts by: A∨(B) = BA.

We then have that vec(A∨B) = (AT ⊗ I) vec B. (3.24)

Now suppose we have A, B ∈ End(U) and C ∈ End(V ). We have two natural functions on the space of endomorphisms, the trace and the determinant. Take the trace first, vectorisation allows us to express the trace in another manner. Namely, we have that: Tr(AB) = vec(AT )T vec(B). (3.25)

With the determinant, we have that for B and C then we can form B ⊗ C as an element of End(U ⊗ V ) and we have the relation that if U is m-dimensional and V n-dimensional then we have that:

det(B ⊗ C) = det(B)n det(C)m. (3.26)

It is these two properties that makes vectorisation and the Kronecker product useful for us and we wish to generalise these to the super case.

3.3 The Super Case

We now need to generalise all of the above to the case of free finitely generated supermodules. So let U and V be two such modules and let u ∈ U and v ∈ V. We have that u = e_i u^i and v = f_j v^j. We now look first at u ⊗ v, and we have that

    u ⊗ v = e_i u^i ⊗ f_j v^j = (−1)^{j̃(ĩ+ũ)} e_i ⊗ f_j u^i v^j.

So if

    u = ( u^1 )        v = ( v^1 )
        ( u^2 )   and      ( v^2 ),

where the u^i and the like represent column vectors, then u ⊗ v, the column vector representing u ⊗ v, is

    u ⊗ v = ( u^1 ⊗ v^1              )
            ( (−1)^{1+ũ} u^2 ⊗ v^2   )
            ( (−1)^ũ u^1 ⊗ v^2       )
            ( u^2 ⊗ v^1              ).    (3.27)

Remark 3.3.1. There is a choice in choosing which order to write the uivj, we choose to write it such that if u ⊗ v is an even element then u ⊗ v is given as a homogeneous even vector however there is a further choice in how to arrange things. We prefer our way of writing it but this is a preference rather being the result of anything concrete.

3.3.1 The Kronecker Product and the Berezinian

In [32] the notion of the tensor product of two supermatrices was defined. Using the ordering of u ⊗ v above, our definition of the tensor product, or Kronecker product as we will prefer to call it, will look different, and is more general in that it is defined for supermatrices of any parity, but it agrees with the definition used there. Now using that

    (A ⊗ B)(u ⊗ v) = (−1)^{ũB̃} A(u) ⊗ B(v),

one comes to the following definition of the Kronecker product in the super case:

Definition 3.3.2. Suppose A and B are two supermatrices of the form:

A = ( A00  A01 )        B = ( B00  B01 )
    ( A10  A11 )   and      ( B10  B11 ),

then the Kronecker product of the two supermatrices A and B is

    A ⊗ B =
    ( A00⊗B00              (−1)^b̃ A01⊗B01          A00⊗B01               (−1)^b̃ A01⊗B00         )
    ( (−1)^{ã+1} A10⊗B10   (−1)^{ã+b̃} A11⊗B11      (−1)^{ã+1} A10⊗B11    (−1)^{ã+b̃} A11⊗B10     )
    ( (−1)^ã A00⊗B10       (−1)^{ã+b̃+1} A01⊗B11    (−1)^ã A00⊗B11        (−1)^{ã+b̃+1} A01⊗B10   )
    ( A10⊗B00              (−1)^b̃ A11⊗B01          A10⊗B01               (−1)^b̃ A11⊗B00         ).

If A is of size r|s × p|q and B is of size k|l × m|n then A ⊗ B is a supermatrix of dimensions

    rk + sl | rl + sk  ×  pm + qn | pn + qm.

Remark 3.3.3. As remarked earlier (3.2.5) for the usual case we again have that the definition of the Kronecker product lines up with how we wrote (3.27).

Example 3.3.4. Let us take the simplest non trivial example where A and B are 1|1 × 1|1 square matrices. We then have that if

! ! a1 a1 b1 b1 A = 1 2 and B = 1 2 2 2 2 2 a1 a2 b1 b2 then

 1 1 ˜b 1 1 1 1 ˜b 1 1  a1b1 (−1) a2b2 a1b2 (−1) a2b1  a˜+1 2 2 a˜+˜b 2 2 a˜+1 2 2 a˜+˜b 2 2  (−1) a1b1 (−1) a2b2 (−1) a1b2 (−1) a2b1  A ⊗ B =    (−1)a˜a1b2 (−1)a˜+˜b+1a1b2 (−1)a˜a1b2 (−1)a˜+˜b+1a1b2  1 1 2 2 1 2 2 1 2 1 ˜b 2 1 2 1 ˜b 2 1 a1b1 (−1) a2b2 a1b2 (−1) a2b1

Now suppose we have u and v given by

! ! u1 v1 u = and v = u2 v2 and we have that   u1v1  1+˜u 2 2 (−1) u v  u ⊗ v =    (−1)u˜u1v2    u2v1 3.3. THE SUPER CASE 77

So we have

(A ⊗ B)(u ⊗ v) =

 1 1 ˜b 1 1 1 1 ˜b 1 1   1 1  a1b1 (−1) a2b2 a1b2 (−1) a2b1 u v  a˜+1 2 2 a˜+˜b 2 2 a˜+1 2 2 a˜+˜b 2 2   1+˜u 2 2 (−1) a1b1 (−1) a2b2 (−1) a1b2 (−1) a2b1  (−1) u v       (−1)a˜a1b2 (−1)a˜+˜b+1a1b2 (−1)a˜a1b2 (−1)a˜+˜b+1a1b2  (−1)u˜u1v2   1 1 2 2 1 2 2 1   2 1 ˜b 2 1 2 1 ˜b 2 1 2 1 a1b1 (−1) a2b2 a1b2 (−1) a2b1 u v

 1 1 1 1 ˜b+˜u+1 1 1 2 2 u˜ 1 1 1 2 ˜b 1 1 2 1  a1b1u v + (−1) a2b2u v + (−1) a1b2u v + (−1) a2b1u v  a˜+1 2 2 1 1 a˜+˜b+˜u+1 2 2 2 2 a˜+˜u+1 2 2 1 2 a˜+˜b 2 2 2 1 (−1) a1b1u v + (−1) a2b2u v + (−1) a1b2u v + (−1) a2b1u v  =    (−1)a˜a1b2u1v1 + (−1)a˜+˜b+˜ua1b2u2v2 + (−1)a˜+˜ua1b2u1v2 + (−1)a˜+˜b+1a1b2u2v1   1 1 2 2 1 2 2 1  2 1 1 1 ˜b+˜u+1 2 1 2 2 u˜ 2 1 1 2 ˜b 2 1 2 1 a1b1u v + (−1) a2b2u v + (−1) a1b2u v + (−1) a2b1u v

i j k l We now rearrange the terms so that we have terms of the form aju bl v . (Picking up signs as we go of course)

 u˜˜b 1 1 1 1 1 2 1 2 1 1 1 2 1 2 1 1  (−1) (a1u b1v + a2u b2v + a1u b2v + a2u b1v )  a˜+˜u˜b+˜u+1 2 1 2 1 2 2 2 2 2 1 2 2 2 2 2 1  (−1) (a1u b1v + a2u b2v + a1u b2v + a2u b1v ) =    (−1)a˜+˜u˜b+˜u(a1u1b2v1 + a1u2b2v2 + a1u1b2v2 + a1u2b2v1)   1 1 2 2 1 2 2 1  u˜˜b 2 1 1 1 2 2 1 2 2 1 1 2 2 2 1 1 (−1) (a1u b1v + a2u b2v + a1u b2v + a2u b1v )

 1 1 1 2 1 1 1 2  (a1u + a2u )(b1v + b2v )  a˜+˜u+1 2 1 2 2 2 1 2 2  u˜˜b (−1) (a1u + a2u )(b1v + b2v ) =(−1)    (−1)a˜+˜u(a1u1 + a1u2)(b2v1 + b2v2)   1 2 1 2  2 1 2 2 1 1 1 2 (a1u + a2u )(b1v + b2v ) ! ! a1u1 + a1u2 b1v1 + b1v2 =(−1)u˜˜b 1 2 ⊗ 1 2 2 1 2 2 2 1 2 2 a1u + a2u b1v + b2v =(−1)u˜˜bA(u) ⊗ B(v).

So everything works as it should for the Kronecker product and the tensor product of vectors.

Now suppose A and B are invertible, then the morphism A ⊗ B is invertible and even as well so its Berezinian can be calculated. In order to figure out the relation between the Berezinian’s of the parts to the Berezinian of the whole first we deal with the case of A ⊗ Ip|q.

Lemma 3.3.5. Suppose A is an even invertible supermatrix, then

(p−q) Ber(A ⊗ Ip|q) = Ber(A) . 78 CHAPTER 3. HERMITIAN FORMS AND THE KRONECKER PRODUCT

Proof. We have that   A00 ⊗ Ip 0 0 A01 ⊗ Ip    0 A11 ⊗ Iq −A10 ⊗ Iq 0  Ber(A ⊗ Ip|q) = Ber   .  0 −A ⊗ I A ⊗ I 0   01 q 00 q  A10 ⊗ Ip 0 0 A11 ⊗ Ip

Now we have that ! AB Ber det(A) det(D − CA−1B)−1 = det(A − BD−1C) det(D)−1 CD breaking the calculation up into pieces we first have that ! A00 ⊗ Ip 0 p q det = det(A00 ⊗ Ip) det(A11 ⊗ Iq) = det(A00) det(A11) . 0 A11 ⊗ Iq we then, corresponding to CA−1B have that

! −1 ! ! 0 −A01 ⊗ Iq A00 ⊗ Ip 0 0 A01 ⊗ Ip −1 A10 ⊗ Ip 0 0 A11 ⊗ Iq −A10 ⊗ Iq 0 −1 ! ! 0 A01A11 ⊗ Iq 0 −A01 ⊗ Ip = −1 A10A00 ⊗ Ip 0 −A10 ⊗ Iq 0 −1 ! A01A11 A10 ⊗ Iq 0 = −1 0 A10A00 A01 ⊗ Ip so that D − CA−1B is

−1 ! (A00 − A01A11 A10) ⊗ Iq 0 −1 0 (A11 − A10A00 A01) ⊗ Ip and we have that

−1 −1 −1 −1q −1 −1p det(D − CA B) = det (A00 − A01A11 A10) det (A11 − A10A00 A01)

Hence we have that

p q Ber(A ⊗ Ip|q) = det(A00) det(A11)

−1 −1q −1 −1p det (A00 − A01A11 A10) det (A11 − A10A00 A01)

p −1 −p q −1 −q = det(A00) det(A11 − A10A00 A01) det(A11) det(A00 − A01A11 A10) = Ber(A)p Ber(A)−q

= Ber(A)(p−q) 3.3. THE SUPER CASE 79

We also have that for the case of (Ir|s ⊗ B) the following

Lemma 3.3.6.

(r−s) Ber(Ir|s ⊗ B) = Ber(B)

Proof. A similar calculation to the one above so omitted.

Lemma 3.3.7. Suppose A is an odd invertible supermatrix then:

(p−q) Ber(A ⊗ Ip|q) = Ber(A)

Proof. This time we have that   A00 ⊗ Ip 0 0 A01 ⊗ Ip    0 −A11 ⊗ Iq A10 ⊗ Iq 0  A ⊗ Ip|q =    0 A ⊗ I −A ⊗ I 0   01 q 00 q  A10 ⊗ Ip 0 0 A11 ⊗ Ip due to A being odd.   0 A01 ⊗ Iq −A00 ⊗ Iq 0    A10 ⊗ Ip 0 0 A11 ⊗ Ip  J(A ⊗ Ip|q) =   −A ⊗ I 0 0 −A ⊗ I   00 p 01 p 0 A11 ⊗ Iq −A10 ⊗ Iq 0

So that Ber(A ⊗ Ip|q) due to A ⊗ Ip|q being an odd invertible supermatrix is   0 A01 ⊗ Iq −A00 ⊗ Iq 0    A10 ⊗ Ip 0 0 A11 ⊗ Ip  Ber   −A ⊗ I 0 0 −A ⊗ I   00 p 01 p 0 A11 ⊗ Iq −A10 ⊗ Iq 0 Computing this out we come to

−1 ! −1 ! 0 A01 ⊗ Iq 0 −(A01 − A00A10 A11) ⊗ Ip det det −1 A10 ⊗ Ip −(A10 − A11A01 A00) ⊗ Iq Suppose A is of rank r|r then we have that ! 0 A01 ⊗ Iq r2pq q p det = (−1) det(A01) det(A10) A10 ⊗ Ip and −1 −1 ! 0 −(A01 − A00A10 A11) ⊗ Ip det −1 −(A10 − A11A01 A00) ⊗ Iq

r2pq −1 −1 q −1 −1 p = (−1) det((−(A01 − A00A10 A11)) ) det((−(A10 − A11A01 A00) )) . 80 CHAPTER 3. HERMITIAN FORMS AND THE KRONECKER PRODUCT

So we have that

−1 p −1 −q Ber(A⊗Ip|q) = (det(−A10) det((A01−A00A10 A11))) (det(−A01) det((A10−A11A01 A00)))

Putting this altogether we get that

(p−q) Ber(A ⊗ Ip|q) = Ber(A)

Similarly to the even case we have that:

Lemma 3.3.8.

(r−s) Ber(Ir|s ⊗ B) = Ber(B) . when B is odd.

Once we have these lemmas then we can conclude the following:

Theorem 3.3.9. Suppose A ∈ GL(U), where dim U = p|q and B ∈ GL(V ), where dim V = r|s then the Berezinian of the morphism

A ⊗ B : U ⊗ V → U ⊗ V is given by Ber(A ⊗ B) = Ber(A)(r−s) Ber(B)(p−q).

Proof. Follows immediately from the previous lemmas, that

(A ⊗ B) = (A ⊗ Ir|s)(Ip|q ⊗ B), and that the Berezinian satisfies

Ber(XY ) = Ber(X) Ber(Y ).

This is the super-case version of (3.26), relating the tensor product and the super version of the determinant together.
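As a quick consistency check, take A and B block diagonal, A01 = A10 = 0 and B01 = B10 = 0, with A of size p|q × p|q and B of size r|s × r|s. Then A ⊗ B is block diagonal as well, with even-even part diag(A00 ⊗ B00, A11 ⊗ B11) and odd-odd part diag(A00 ⊗ B11, A11 ⊗ B00), so

    Ber(A ⊗ B) = det(A00 ⊗ B00) det(A11 ⊗ B11) / ( det(A00 ⊗ B11) det(A11 ⊗ B00) )
               = det(A00)^{r−s} det(A11)^{s−r} det(B00)^{p−q} det(B11)^{q−p}
               = Ber(A)^{r−s} Ber(B)^{p−q},

using the classical identity (3.26) for each factor.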

Remark 3.3.10. As an aside we have that

    det(A ⊗ I_n) = det(A)^n

and that

    Ber(B ⊗ I_{r|s}) = Ber(B)^{r−s}.

Since the Berezinian is the analogue of the determinant in the super case, we have the pleasing relationship that

    det(A ⊗ I_n) = det(A)^{Tr(I_n)}    and    Ber(B ⊗ I_{r|s}) = Ber(B)^{Trs(I_{r|s})},

relating the trace, determinant and tensor product (and their super analogues) all together.

Remark 3.3.11. This theorem leads to the conclusion that the map A ⊗ Ip|p has Berezinian 1 and that Ber(A ⊗ B) = 1 when both A and B are of size q|q and p|p respectively. This means that the map

GL(Rp|p) × GL(Rq|q) → GL(R2pq|2pq) given by the tensor product is actually a map to

Sl(R2pq|2pq).

We also note here that the following is true.

Proposition 3.3.12.

Trs(A ⊗ B) = Trs(A) Trs(B)

Proof.

Trs(A ⊗ B) = Tr(A00 ⊗ B00) + (−1)^{ã+b̃} Tr(A11 ⊗ B11)
             + (−1)^{ã+b̃+1} ( (−1)^ã Tr(A00 ⊗ B11) + (−1)^b̃ Tr(A11 ⊗ B00) )
           = Tr(A00) Tr(B00) + (−1)^{ã+b̃} Tr(A11) Tr(B11)
             + (−1)^{b̃+1} Tr(A00) Tr(B11) + (−1)^{ã+1} Tr(A11) Tr(B00)
           = ( Tr(A00) + (−1)^{ã+1} Tr(A11) ) ( Tr(B00) + (−1)^{b̃+1} Tr(B11) )
           = Trs(A) Trs(B).

On the level of morphisms in a braided monoidal category where the trace can be defined this has been proven before, in for instance [33]. However here it does at least show that we have made the right definition for the Kronecker product.

3.3.2 Super Vectorisation and the Trace

Definition 3.3.13. Given a supermatrix

    A = ( A00  A01 )
        ( A10  A11 )

of size r|s × p|q, the vectorisation vecs(A) of A is given by

    vecs(A) = ( vec(A00)             )
              ( (−1)^{1+Ã} vec(A11)  )
              ( vec(A10)             )
              ( (−1)^{1+Ã} vec(A01)  )

and is a column vector of size pr + qs | ps + qr × 1|0.
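For instance, for a 1|1 × 1|1 even supermatrix

    A = ( a  β )
        ( γ  d ),

with a, d even entries and β, γ odd entries, the definition gives

    vecs(A) = ( a, −d, γ, −β )^T,

with the first two entries even and the last two odd.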

This comes from

i j ˜(˜a+˜) j i eiajf = (−1) f ⊗ eiaj. which is in coordinates (3.21) in the super case. With this definition we recover the expressions we want in that for A of size r|s×p|q and B of size m|n × r|s we have that:

(Ip|q ⊗ B) vecs(A) = vecs(BA) and, with a natural sign involved,

ST A˜B˜ (A ⊗ Im|n) vecs(B) = (−1) vecs(BA) emulating (3.23) and (3.24). Now one relationship that the vectorisation enables is (3.25) and we wish to emulate this in the super case. With this in mind we can show the following result.

Proposition 3.3.14.

t ST Trs(AB) = vecs(A )Q vecs(B) 3.4. THE HERMITIAN FORM ON HOM(U, V ) 83 with   Irp 0 0 0    0 −Iqs 0 0  Q =    0 0 I 0   rq  0 0 0 Ips Proof. Let ! ! A A B B A = 00 01 and B = 00 01 , A10 A11 B10 B11 then we have that

1+A˜+B˜ Trs(AB) = Tr(A00B00) + Tr(A01B10) + (−1) (Tr(A10B01) + Tr(A11B11))

T T T T = vec(A00) vec(B00) + vec(A01) vec(B10)

1+A˜+B˜ T T T T + (−1) (vec(A10) vec(B01) + vec(A11) vec(B11))

 T T 1+A˜ T T 1+A˜ T T A˜ T T  = vec(A00) (−1) vec(A11) (−1) vec(A01) (−1) vec(A10)     Irp 0 0 0 vec(B00)    1+B˜   0 −Iqs 0 0  (−1) vec(B11)      0 0 I 0   vec(B )   rq   10  1+B˜ 0 0 0 Ips (−1) vec(B01)

t ST = vecs(A )Q vecs(B)

The concluding line comes from that ! t ! AT (−1)A˜AT u1   AST = 00 10 and = 1T 1+˜u 2T 1+A˜ T T 2 u (−1) u (−1) A01 A11 u

Remark 3.3.15. The presence of the −I in Q is ultimately the consequence of the braiding isomorphism, for any two V and U, between V ⊗ U and U ⊗ V .

3.4 The Hermitian Form on hom(U, V )

We will finish this chapter with an example combining the material on Hermitian forms and the Kronecker product. For two Hermitian spaces (U, g) and (V, h), where U and V are of dimension r|s and p|q respectively we have that the induced Hermitian form on hom(U, V ) is given by

† Trs(A B) 84 CHAPTER 3. HERMITIAN FORMS AND THE KRONECKER PRODUCT for A, B ∈ hom(U, V ). Give the tools above with vectorisation and the Kronecker product we can now find the matrix defining this form. We have that

† −1 TC Trs(A B) = Trs(G A HB)

−1 TC = Trs((G A )(HB))

t −1 TC ST = vecs((G A ) )Q vecs(HB) t ¯ −1 ST = vecs(A(G ) )Q vecs(HB) t −1 (ST )2 ¯ = (((G ) ⊗ Ip|q) vecs A)Q(Ir|s ⊗ H) vecs B

∗ −1 ST = vecs A((G ) ⊗ Ip|q)Q(Ir|s ⊗ H) vecs B,

−1 ST and so the matrix defining this form is ((G ) ⊗ Ip|q)Q(Ir|s ⊗ H).

Example 3.4.1. Suppose that U and V are the super vector spaces Cr|s and Cp|q respectively. The standard matrices defining a Hermitian form on these spaces is ! I 0 G = 0 iI with the I’s being adjusted for the dimension of the super vector spaces. We can now calculate the matrix defining the Hermitian form on hom(Cr|s, Cp|q). We have that   Ir ⊗ Ip 0 0 0  −1  −1 ST  0 i Is ⊗ Iq 0 0  (G ) ⊗ Ip|q =   r|s  0 0 I ⊗ I 0   r q  −1 0 0 0 i Is ⊗ Ip and that   Ir ⊗ Ip 0 0 0    0 Is ⊗ iIq 0 0  Ir|s ⊗ Gp|q =    0 0 I ⊗ iI 0   r q  0 0 0 Is ⊗ Ip

−1 ST Computing the product ((Gr|s) ⊗ Ip|q)Q(Ir|s ⊗ Gp|q) we get   Irp 0 0 0    0 −Isq 0 0     0 0 iI 0   rq  0 0 0 −iIsp

So we again see that the induced form is not positive definite. Chapter 4

The Volume Element

85 86 CHAPTER 4. THE VOLUME ELEMENT 4.1 Complex structures and Hermitian manifolds

4.1.1 Cp|q as a Hermitian space

Let’s look at R2p|2q the super vector space with the complex structure J. We now want to look at compatible bilinear forms with this complex structure J. Let us look at R2|2 as an example with the standard bilinear form given by the matrix   1 0 0 0   0 1 0 0  B =   0 0 0 −1   0 0 1 0

This can be interpreted as being a block diagonal matrix where the even-even matrix is symmetric and the odd-odd matrix is antisymmetric. This defines a bilinear form b on R2|2 such that b(v, w) = (−1)v˜w˜b(w, v).

We can also see from this that a bilinear form on Rp|q is only nondegenerate if q is an even number. From this we can generalise to obtain the standard structure on

Rp|2q with q arbitrary here. Now we want to consider R2p|2q as a space with a complex structure.

We have seen that the standard complex structure on R2|2 is given by   0 −1 0 0   1 0 0 0  J =   0 0 0 −1   0 0 1 0

We say a bilinear form is compatible with the complex structure if, as in the usual case, b(Jv, Jw) = b(v, w).

This in terms of the matrices involved means that

J TSBJ = B or that J †J = I. B above satisfies this condition for our J. We can now complexify

2|2 2|2 R to obtain R ⊗ C. With this we can complexify b to obtain bC. Where before b was, at least with respect to even elements, positive definite, when we complexify this 4.1. COMPLEX STRUCTURES AND HERMITIAN MANIFOLDS 87 is no longer true, even in the usual case. If our form is given by hx, xi = x2 + y2 as in the case of R2 then extending this to R2 ⊗C leads us to z2 +w2 for complex z, w. This is only a real number if z and w are real. We can resolve this by viewing b as a map from R2|2 ⊗C to its dual (R2|2 ⊗C)∨. We then have that there is a natural conjugation on (R2|2 ⊗ C)∨ given by taking the complex conjugate in the second factor. With this one obtains a Hermitian form h on R2|2 ⊗ C given by ¯b. This forces the new form to be sesquilinear in the first variable so that we obtain a Hermitian form. To put all this together we have that: h(v, w) := b(v)(w). (4.1)

We have defined before that Cp|q is defined as the i eigenspace for J. Changing 1|1 basis to coordinates (z, z,¯ ξ, ξ¯) so that R2|2 ⊗ C ' C1|1 ⊕ C , in effect doing a unitary change of coordinates, then we have that B, which is the same matrix which defines h, transforms to   1 0 0 0   1 0 1 0 0    . 2 0 0 i 0    0 0 0 −i

We can then restrict h to C1|1 to obtain the standard Hermitian form (multiplied by a half) on it given by the matrix ! 1 0 H = . 0 i

Remark 4.1.1. Here we have obtained a positive definite Hermitian form on C1|1 on 1|1 C on the −i eigenspace for J we don’t get a positive definite form.

∗ Example 4.1.2. If we have that h : C1|1 → C1|1 as above defined by the matrix ! 1 0 H = 0 i ! ! z w then we have that for z = and w = , two even vectors that 0 0

h(z)(w) = ∗zHw =zw ¯

If we use u, v ∈ C1|1 odd we get that

h(u)(v) = iuv.¯ 88 CHAPTER 4. THE VOLUME ELEMENT

4.1.2 Cp|q the Hermitian supermanifold

We now move onto defining Cp|q as a Hermitian supermanifold. In the usual case you can apply the above to the case of tangent spaces at each point to obtain a Hermitian form on each holomorphic tangent space. In that case a vector field is determined by its value at each point, we then only require that the Hermitian form varies smoothly as we change points. In the super case we have that a vector field isn’t determined by its values at each point of a manifold so we need to take a more general approach to defining a Hermitian supermanifold.

We first look at Cp|q as a Hermitian supermanifold. We have coordinates ! z ξ where z here is a column of p even sections of the algebra of functions and ξ is a column of q odd sections. The standard Hermitian form on this is defined using the (super)matrix ! I 0 H = p . 0 iIq

Which is the same as for Cp|q the super vector space. We will want to write this in local coordinates using zi and ξi. To do so we need to establish some conventions. Let (xi, ηj) be some local coordinates for a smooth supermanifold M. To xi we can associate an element δxi which form a basis of sections of the cotangent bundle T ∨M. In order to fully apply the linear superalgebra established in the previous chapters we will not follow all the conventions established in §5 of [1] though in terms of notation our exposition will be similar. In that article, vector fields correspond to left derivations of the algebra of functions, we consider that sections of the tangent bundle Γ(TM) correspond to right derivations of the algebra of functions. This is not a usual choice and we choose this convention to apply the linear superalgebra that we have developed.

∂ What this means is that if ∂xi = ∂i form a basis of Γ(TM) then we have that

˜ı˜ i i i (−1) h∂j, δx i = hδx , ∂ji := δj.

On the level of tangent vectors at a point then this pairing is 0 unless both tangent vector and covector have the same parity as tangent spaces are modules over the base field which has no odd component. 4.1. COMPLEX STRUCTURES AND HERMITIAN MANIFOLDS 89

Remark 4.1.3. In [34] and other texts the pairing between vector fields and one forms is hX, ωi = (−1)X˜ω˜ ω(X), (4.2) so there is a sign involved. This notation seems to prioritise that one works with left derivations of the algebra of functions. Working with right derivations instead might provide an exposition of the differential geometry of supermathematics where the need for signs in the definition is minimised as much as possible.

We are looking at complex supermanifolds and will be writing things in local holo- morphic coordinates (zi, ξj). This means we have to consider subbundles of the com- plexified cotangent and tangent bundles. We denote the holomorphic tangent bundle

1,0 by T1,0M and the holomorphic cotangent bundle by T M. We also consider the bun- dle T 1,0M, which isn’t strictly speaking the antiholomorphic cotangent bundle as an element of this bundle doesn’t annihilate a holomorphic vector field, we shall call it the opposite holomorphic cotangent bundle as the module Γ(T 1,0M) is the opposite module to Γ(T 1,0M). We then have that the pairing for holomorphic vector fields and forms is:

i i hδz , ∂ji := δj, as expected. However when writing a Hermitian metric in local coordinates it can be written as a line element as

a˜ a b H = (−1) δz hab¯ δz

1,0 1,0 i i.e. as a section of T M ⊗ T M. There is pairing of δz and ∂i, it is the following

i ˜ı˜ i ˜ı˜ i h∂j, δz i := (−1) hδz , ∂ji = (−1) δj.

i j Putting this altogether suppose we have u = ∂iu and v = ∂jv we then have

˜ı(˜ı+˜u) i j H(u, v) = (−1) u h¯ıjv or written using matrices and column vectors that represent u and v

H(u, v) = ∗uHv.

a˜ a b Now as presented, (−1) δz hab¯ δz is not a Hermitian form as defined in the previous chapter. Its takes sections of the holomorphic tangent bundle of a supermanifold 90 CHAPTER 4. THE VOLUME ELEMENT

∞ M, Γ(T1,0M) and outputs elements of C (M) ⊗ C. Γ(T1,0M) is not a module over C∞(M) ⊗ C so we don’t have the requirements to satisfy this being a Hermitian form. However it is still true that on each tangent space, which in the case of Cp|q the supermanifold, can be canonically identified with Cp|q the super vector space. So we make the following definition.

Definition 4.1.4. Let M be a complex supermanifold, then it is a Hermitian super- manifold if it is equipped with a smooth section of T 1,0M ⊗ T 1,0M such that at each point p it induces a positive definite Hermitian form over C on the space T1,0Mp the holomorphic tangent space at p.

We have that Cp|q the supermanifold can be made into a Hermitian supermanifold using the form

a˜ a a˜˜b b (−1) δz (i δab¯ )δz .

4.2 Grassmannian Supermanifolds as Hermitian Man- ifolds

p|q The Grassmannian supermanifolds Grr|s(C ) have homogeneous coordinates given as p|q a r|s × p|q supermatrix. Let us first look at the space Mr|s(C ) which covers the p|q Grassmannian. We have that an element of Z ∈ Mr|s(C ) is given in holomorphic coordinates as ! Z Z Z = 00 01 Z10 Z11 where Zii is a matrix consisting of even elements of the algebra of functions and the other two blocks of Z consist of odd elements. Using the results of the previous chapter we can define a natural Hermitian form on this space is given as

† Trs(X Y ) for X,Y holomorphic vector fields, and where X† := G−1XTC H, and H and G are given by the supermatrix of the standard Hermitian form of the correct size. We are using that X and Y are given as r|s × p|q matrices and so giving 4.2. GRASSMANNIAN SUPERMANIFOLDS AS HERMITIAN MANIFOLDS 91 them in block form we have  ! ! X∗ −iX∗ Y Y Tr (X†Y ) = Tr 00 10 00 01 s s  ∗ ∗  −iX01 X11 Y10 Y11

∗ ∗ ∗ ∗ = Tr(X00Y00 − iX10Y10) − Tr(X11Y11 − iX01Y01).

p|q Using this space we will now define the induced Hermitian form on Grr|s(C ). We’ll give the procedure for the usual case then we move to the supercase by repeating the same steps.

4.2.1 The Usual Case

We first start with some linear algebra. Let U be a k dimensional subspace of an n dimensional Hermitian space V with its Hermitian form given by a bracket h , i. As V is a Hermitian space we can define an orthogonal projection from V to U. By this we means an operator P ∈ End(V ) such that

P 2 = P and hP x, (I − P )yi = 0.

Let Z be a n × k nondegenerate matrix such that the columns of Z are a basis for U.

We can then define a projection PZ as

† −1 † PZ := Z(Z Z) Z .

This is a projection as

2 † −1 † † −1 † † −1 † PZ = Z(Z Z) Z Z(Z Z) Z = Z(Z Z) Z it is an orthogonal projection as

hP x, (I − P )yi = x∗P †(I − P )y = x∗P (I − P )y = 0

† n because P = P . We denote I − PZ as PZ⊥ . PZ = PZg for g ∈ GLk(C ) so it depends only on the subspace U not the element Z that we use to represent it. We have that † PZ and PZ⊥ are self-adjoint, PZ = PZ and that as expected PZ and PZ⊥ sum to the identity. These two operators hence define the decomposition V = U ⊕ U ⊥ where U ⊥ is the subspace orthogonal to U. We have seen that the induced Hermitian form on Hom(U, V ) is given by hX,Y i = Tr(X†Y ). 92 CHAPTER 4. THE VOLUME ELEMENT

The operators PZ and PZ⊥ can also be applied to elements of Hom(U, V ) by sending

X 7→ PZ X. One can then check that we have

Hom(U, V ) = Hom(U, U) ⊕ Hom(U, U ⊥)

as a decomposition into orthogonal subspaces. We thus have that PZ⊥ X belongs to Hom(U, U ⊥) for any X ∈ Hom(U, V ). Before moving on we should note that as we have a Hermitian form on V we that that Hom(U, V/U) is isomorphic to Hom(U, U ⊥). We now want to look at the principal bundles

n n π : Mk(C ) → Grk(C ) and examine the tangent bundles of these spaces. To start with we will look at the k = 1 case so Cn → CPn−1. We will denote Cn by V . Suppose we take a vector v ∈ V/{0}, then this generates a subspace

U = {λv | λ ∈ C}.

We have that Tv(V/{0}) ' Hom(U, V ). Let w ∈ TvV/{0}, we can define an element

φw : U → V as a linear map by v 7→ w. We can then assign a tangent vector at w given a map C ∈ Hom(U, V ) as C(w). Hence we have TvV/{0}' Hom(U, V ). We n can generalise to Mk(C ). Elements Z of this space are n × k matrices which are nondegenerate. In particular this is a subset of k copies of Cn. So a tangent vector W at Z can be given as a collection of k vectors in Cn or an arbitrary n × k matrix with entries in C. We can then define a map in Hom(U, V ) by Z 7→ W . The reverse map is as above where we assign to an arbitrary map C ∈ Hom(U, V ) the tangent vector at Z given by C(Z).

n We can summarise by saying that let U be the following vector bundle over Mk(C )

n U := {(Z, v) ∈ Mk(C ) × V | v ∈ U, the subspace generated by Z}

and let V denote the trivial bundle Mk(V ) × V . We have by the previous discussion that the tangent bundle TMk(V ) is isomorphic to the bundle Hom(U, V). Using this n we can induce a Hermitian form on Mk(C ), given by

hX,Y i = Tr(X†Y ), (4.3) 4.2. GRASSMANNIAN SUPERMANIFOLDS AS HERMITIAN MANIFOLDS 93 for X and Y tangent vectors considered as elements of Hom(U, V ). We can also write this in terms of a line element as

ds2 = Tr(dZ†dZ).

We will modify this expression to write a Hermitian form on the Grassmannian man- ifolds.

Let U be a subspace of V . U is a point in Grk(V ). We have from [35], for just one place where it is shown, that

n TU Grk(C ) ' Hom(U, V/U). (4.4)

n Let V also be the trivial bundle over Grk(C ) and let U be the tautological bundle over the Grassmannian. With these two bundles there is another related vector bundle

V/U over Grk(V ) where the fibre at each point U is the vector space V/U. So the statement (4.4) means that in terms of vector bundles we have that

T Grk(V ) ' Hom(U, V/U),

n as vector bundles over Grk(C ). n The fibre over a point U ∈ Grk(C ) can be given as, if we pick an arbitrary Z such that π(Z) = U,

−1 n n π ([Z]) = {B ∈ Mk(C ) | B = Zg for some g ∈ GLk(C )}

which we’ll also denote GZ and where [Z] stands for Z under the equivalence relation where it is identified with Zg. We have that the action of GLk(C) restricted to this

fibre maps this set to itself. Every element g ∈ GLk(C) defines a map g : GZ →

GZ . If we have that a function is invariant under this map for every [Z] then it n defines a function on the Grassmannian. So this is the statement that π : Mk(C ) → n Grk(C ) is the construction that allows us to work with homogeneous coordinates on the Grassmannian. The map g has a differential T g. As this map is a linear map its differential is represented by the same matrix g. Given the discussion above we have that the tangent spaces at every point of this fibre are isomorphic to the same space Hom(U, V ). So the differential T g maps the pair (Z,X) to (Zg, Xg). A necessary step we need then 94 CHAPTER 4. THE VOLUME ELEMENT to define a Hermitian form on the Grassmannian using homogeneous coordinates and

n tangent vectors in Mk(C ) is for any new Hermitian form to be invariant under T g. This is not sufficient however.

n n The differential of π : Mk(C ) → Grk(C ) sends X ∈ Hom(U, V ) to it’s equivalence class in Hom(U, V/U). The kernel of this map is Hom(U, U) from which we get the

n vertical bundle of Mk(C ). We thus have that

n n T πZ : TZ Mk(C ) → TU Grk(C ) when restricted to Hom(U, U ⊥) is a linear isomorphism. So the horizontal bundle is

Hom(U, U⊥).

We can now put all this preliminary work together. We define a vector bundle morphism

n ⊥ F : TMk(C ) → Hom(U, U ) by

† − 1 F (Z,X) 7→ (Z,PZ⊥ X(Z Z) 2 ).

We can apply this to the Hermitian form (4.3) so we have

hF (X),F (Y )i = Tr((F (X))†F (Y ))

† − 1 † † − 1 = Tr((Z Z) 2 X PZ⊥ PZ⊥ Y (Z Z) 2 ) = Tr((Z†Z)−1X†(I − Z(Z†Z)−1Z†)Y )).

n This is invariant under T g and hence defines a Hermitian form on Grk(C ). Here we † − 1 see the purpose of the (Z Z) 2 term in F . It is a scaling factor that makes the new

Hermitian form derived invariant under the action of GLk(C). We can express the Hermitian form using a line element as

ds2 = Tr((Z†Z)−1dZ†(I − Z(Z†Z)−1Z†)dZ). (4.5)

n n This is the induced Hermitian form on Grk(C ) given a Hermitian form on C . Looking n+1 n at Gr1(C ) ' CP we have this reduces down to the Fubini-Study Hermitian form 4.3. THE SUPER CASE AND THE VOLUME ELEMENT 95 given in homogeneous coordinates:

Tr((Z†Z)−1dZ†(I − Z(Z†Z)−1Z†)dZ) 1  hdz, zihz, dzi = hdz, dzi − hz, zi hz, zi hdz, dzi hdz, zihz, dzi = − . hz, zi hz, zi2

n We now want to express this in local coordinates. Let UI to be the subset of Grk(C ) such that the first k rows form a invertible matrix and let ! k(n−k) Ik kn k(n−k) CI := { ∈ C | W ∈ C } W

There is an inverse map of a chart

−1 k(n−k) ϕ : CI → UI given by ! " # I I ϕ−1 = . W W If we pullback the Hermitian form along this map we get that

2 † −1 † † −1 ds = Tr((Ik + W W ) dW (In−k + WW ) dW ) (4.6)

† −1 † This follows as the term In − Z(Z Z) Z maps to

† −1 † Ik − W (In−k + W W ) W

† −1 which by the Woodbury matrix identity is (In−k + WW ) . The expression in local coordinates (4.6) has been known for a long time, see [36]. We also have that for the usual case the expression in homogeneous coordinates can be implicitly found in [37] (it is not explicitly written out in the text) in that the associated K¨ahler form, ω, of (4.5) can be derived from a K¨ahlerpotential ln(det(Z†Z)) so that: 1 ω = ∂∂¯ ln(det(Z†Z)). 2i

4.3 The Super Case and the Volume Element

Now in treating the super case, we can’t repeat the whole exposition as statements like (4.4) are true at every point of the Grassmannian supermanifold but this doesn’t 96 CHAPTER 4. THE VOLUME ELEMENT extend to a statement about vector bundles. This is, as ever, because the value of a vector field at a point doesn’t determine the vector field in the case of supermanifolds. However we can use the expression (4.5) in the super case. It is invariant under the

p|q action of GLr|s(C ). We can take look at this Hermitian form in local coordinates which will be

2 † −1 † † −1 δs = Trs((Ik + W W ) δW (In−k + WW ) δW ) (4.7) here we have changed notation to be in line with [1]. From that same paper we have that an invariant volume element dV for a Hermitian supermanifold M of dimension n|m with Hermitian form

a˜ a b (−1) δz hab¯ δz is  1 n−m dV := Ber|h | D(z, z¯). (4.8) 2i ab¯ 1 n−m The 2i comes from that  1 n−m [dz¯1, z1, . . . , dz¯n, dzn | dξ¯1, dξ1, . . . , dξ¯m, dξm] 2i . =[dx1, dy1, . . . , dxn, dyn | dθ1, dη1, . . . , dθm, dηm]

We can now apply the linear algebra developed in the previous chapter to find the matrix of the Hermitian form (4.7) and its Berezinian.

Proposition 4.3.1. The matrix defining the Hermitian form (4.7) is

† −1 −1 ST † −1 (((Ir|s + W W ) G ) ⊗ Ip−r|q−s)Q(Ir|s ⊗ H(Ip−r|q−s + WW ) ) where ! ! I 0 I 0 H = p−r ,G = r 0 iIq−s 0 iIs and   Ir(p−r) 0 0 0    0 −Is(q−s) 0 0  Q =    0 0 I 0   r(q−s)  0 0 0 Is(p−r) Proof. We have

2 † −1 † † −1 δs = Trs((Ir|s + W W ) δW (Ip−r|q−s + WW ) δW ), 4.3. THE SUPER CASE AND THE VOLUME ELEMENT 97 expanding out δW † we have that:

2 † −1 −1 TC † −1 δs = Trs((Ir|s + W W ) G δW H(Ip−r|q−s + WW ) δW ).

† −1 † −1 We now relabel (Ir|s +W W ) as A and (Ip−r|q−s +WW ) as B in order to hopefully provide clarity in the following calculation.

−1 TC −1 TC Trs(AG δW HBδW ) = Trs((AG δW )(HBδW ))

t −1 TC ST = (vecs((AG δW ) ))Q vecs(HBδW )

t −1 ST = (vecs(δW (AG ) ))Q vecs(HBδW )

t −1 (ST )2 = (vecs((AG ) ⊗ Ip−r|q−s) vecs(δW ))Q(Ir|s ⊗ HB) vecs(δW )

∗ −1 ST = vecs(δW )((AG ) ⊗ Ip−r|q−s)Q(Ir|s ⊗ HB) vecs(δW )

So we have that

† −1 −1 ST † −1 (((Ir|s + W W ) G ) ⊗ Ip−r|q−s)Q(Ir|s ⊗ H(Ip−r|q−s + WW ) ). is the matrix defining the Hermitian form.

We can now calculate the Berezinian of this matrix

p|q Theorem 4.3.2. The volume element for the Grassmannian supermanifolds Grr|s(C ) is  1 (r−s)((p−q)−(r−s)) [dW ] dV = † (p−q) . 2i Ber (Ir|s + W W ) where we use [dW ] as shorthand notation to stand for the standard volume element of

2 2 Cpr+qs−(r +s )|ps+qr−2rs.

Proof. Let us label

† −1 −1 ST † −1 (((Ir|s + W W ) G ) ⊗ Ip−r|q−s)Q(Ir|s ⊗ H(Ip−r|q−s + WW ) ) by F for convenience. We have

† −1 −1 ST † −1 Ber(F ) = Ber(((Ir|s + W W ) G ) ⊗ Ip−r|q−s)Q(Ir|s ⊗ H(Ip−r|q−s + WW ) )

† −1 −1 ST † −1 = Ber(Q) Ber(((Ir|s + W W ) G ) ⊗ H(Ip−r|q−s + WW ) )

† −1 −1 ST † −1 = Ber(((Ir|s + W W ) G ) ⊗ H(Ip−r|q−s + WW ) ) 98 CHAPTER 4. THE VOLUME ELEMENT

We will not write the absolute value bars here for the next few lines for ease of reading so we have that

† −1 −1 (p−r)−(q−s) † −1 (r−s) = Ber((Ir|s + W W ) G ) Ber(H(Ip−r|q−s + WW ) )

† −1 (p−r)−(q−s) † −1 (r−s) = Ber((Ir|s + W W ) ) Ber((Ip−r|q−s + WW ) ) Ber(G−1)(p−r)−(q−s) Ber(H)(r−s)

now we use from [38] that Ber(I + AB) = Ber(I + BA)

† −1 (p−q) −1 (p−r)−(q−s) (r−s) = Ber ((Ir|s + W W ) ) Ber(G ) Ber(H)

Now reintroducing the absolute value we can eliminate Ber(H) and Ber(G) as the Berezinian of both have absolute value 1. We then calculate the Berezinian of the Hermitian form and hence we have our answer.

Suppose that H was scaled by a factor of R. Then we would have that

Ber(G−1)(p−r)−(q−s) Ber(H)(r−s) = (isR−(r−s))(p−r)−(q−s)(i−(q−s)R(p−r)−(q−s))(r−s)

= i−q.

We have also seen that the adjoint map is invariant with respect to multiplying the Hermitian form by a scalar. We then have the following corollary

Corollary 4.3.3. The volume element for the Grassmannian supermanifolds is invari- ant under the multiplication of the Hermitian form in the ambient space by a scalar R.

This applies in the usual case as well. To use a specific space then suppose we look at C2 and CP1. From the above we get that the volume of the complex projective space is independent of the volume of the sphere in C2. Chapter 5

Calculations

99 100 CHAPTER 5. CALCULATIONS

We now have from the previous chapter that we can calculate the volume of the

p|q Grassmannian supermanifolds Grr|s(C ). The integral we need to solve is Z  1 (r−s)((p−q)−(r−s)) (5.1) 2 2 C(pr+qs−(r +s ))|ps+rq−2rs 2i 1 (p−r)+(q−s) (p−r)+(q−s) ¯(p−r)+1 (p−r)+1 ¯(p−r) (p−r) [dw¯11, dw1, . . . , dw¯(r+s) , dw(r+s) |dξ1 , dξ1 , . . . , dξ(r+s) dξ(r+s) ] † (p−q) Ber(Ir|s + W W ) (5.2) if we write out all the variables. In the calculations below we will again shorten

1 (p−r)+(q−s) (p−r)+(q−s) ¯(p−r)+1 (p−r)+1 ¯(p−r) (p−r) [dw¯11, dw1, . . . , dw¯(r+s) , dw(r+s) |dξ1 , dξ1 , . . . , dξ(r+s) dξ(r+s) ] to [dW ].

We will first as usual look at the usual case. If this is the right volume element then

p it should agree with the usual answer for the volume of Grr(C ) which is G(r + 1)G(p − r + 1) πr(p−r) . G(p + 1) We recall that the Barnes G-function was defined in Chapter 1 in the Introduction and Review. This will also be useful in the super case as many of the integrals in the super case work out to be multiples of the integrals we get in the usual case.

5.1 The Usual Case

The integral at the start reduces to

Z  r(p−r) 1 1 p−r p−r 1 dw¯1 ∧ dw1 ∧ ... ∧ dw¯r ∧ dwr † p . (5.3) Cr(p−r) 2i det(Ir + W W ) We now want to make a change of coordinates. We want to use the singular value decomposition to solve this integral. To do this we need to discard sets of measure 0 from Cr(p−r). First we discard those matrices which aren’t of full rank, and then we further discard those which have two or more singular values being the same. So we integrate over the space of full rank (p − r) × r matrices which have r distinct singular values. We can now use the following theorem which we have modified for our needs from [39] (we have shifted it from a statement about r × n matrices to n × r matrices so from about rows to columns). 5.1. THE USUAL CASE 101

n Theorem 5.1.1. Let W ∈ Mr(C ) and rank W = k ≤ r. There exists a n × r r matrix with orthonormal columns Q, a diagonal matrix Λ ∈ Mr(C ) with non-negative diagonal entries, λ1 ≥ λ2 ≥ ... ≥ λk > λk+1 = ... = λr = 0 and a P in U(r) such

2 † that W = QΛP . Λ is uniquely determined and the λi are the eigenvalues W W . The columns of the matrix P are the eigenvectors of W †W . If W †W has distinct eigenvalues then P is determined up to a left diagonal factor D = diag(eiθ1 , eiθ2 , . . . , eiθr ) with the

θi ∈ R so that if W = QΛP = Q˜ΛP˜ then P˜ = DP . Given P , the matrix Q is uniquely determined if rank W = r.

What this theorem implies is that the open submanifold U of Cr(p−r) consisting of matrices W which are of full rank and such that W †W has distinct eigenvalues is diffeomorphic to

p−r r Vr(C ) × S × F (1,..., 1)

p−r where Vr(C ) is the Stiefel manifold,

1 2 r +r i j S := {(λ , λ , . . . , λ ) ∈ R | λ > λ if i < j}

and F (1,..., 1) is the flag manifold of all complete flags of Cr. So knowing this we want to change coordinates to the case where our coordinates

p−r r are a triple (Q, Λ,P ) ∈ Vr(C ) × S × F (1,..., 1). The Jacobian of this transforma- tion is found in [12]. Where we have if

 1 r(p−r) dV = dw¯1 ∧ dw1 ∧ ... ∧ dw¯p−r ∧ dwp−r 2i 1 1 r r then

r Y Y dV = (λi)2p−4r−1 ((λi)2 − (λj)2)2dS ∧ P †dP ∧ Q†dQ. i=1 i

Here we are using unitary homogeneous coordinates for P as an element of F (1,..., 1). 102 CHAPTER 5. CALCULATIONS

Applying this to our integral (5.3) then we have that Z dV † p Cr(p−r) det(Ir + W W ) Z Qr i 2p−4r+1 Q i 2 j 2 2 † † i=1(λ ) i

p−r We can now integrate over Vr(C ) and F (1,..., 1). From [12] we have that Z r r(p−r) p−r † 2 π G(p − 2r + 1) Vol(Vr(C )) = Q dQ = r(r−1) p−r Vr(C ) π 2 G(p − r + 1) after some rewriting. For the case of F (1,..., 1) we have seen that P †dP is written with P being a unitary matrix under the equivalence that it is the same element of F (1,..., 1) when multiplied on the left by a diagonal unitary matrix or in other words an element of T r the r dimensional torus. If we integrate over P †dP we should obtain the volume of the unitary group divided by the volume of the r dimensional torus which as a product of circles is (2π)r. So we obtain

r r(r−1)  1  Z π 2 Vol(F (1,..., 1)) = P †dP = 2π U(r) G(r + 1) Returning to our integral we now have that it is

r r(p−r) r(r−1) r i 2p−4r−1 2 π G(p − 2r + 1) π 2 Z Y (λ ) Y ((λi)2 − (λj)2)2dS r(r−1) G(r + 1) (1 + (λi)2)p π 2 G(p − r + 1) S i=1 i

Theorem 5.1.2. Let S be the domain

1 2 r +r i j S := {(x , x , . . . , x ) ∈ R | x > x if i < j}.

Suppose F : Rr → R is a symmetric functions so that F (x1, . . . , xr) = F (σ(x1), . . . , σ(xr)) for any permutation σ ∈ Sr. Then we have that Z Z r! F (x1, . . . , xr)dV = F (x1, . . . , xr)dV r S R+ From this we have that (5.5) becomes

Z r i 2p−4r−1 1 Y (λ ) Y i 2 j 2 2 i 2 p ((λ ) − (λ ) ) dV. (5.6) r! + r (1 + (λ ) ) (R ) i=1 i

We can further more make the change of variables so that (λi)2 = xi under this change of variables we get

Z r i p−2r 1 Y (x ) Y i j 2 r i p ((x − x ) dV. (5.7) 2 r! + r (1 + x ) (R ) i=1 i

Concluding this discussion we come to the statement that

r(p−r) Z r i p−2r p π G(p − 2r + 1) Y (x ) Y i j 2 Vol(Gr(C )) = i p ((x − x ) dV. (5.8) G(p − r + 1)G(r + 2) + r (1 + x ) (R ) i=1 i

We can write the term in front in another way which will useful shortly.

πr(p−r)G(p − 2r + 1) πr(p−r) 1 = (5.9) G(p − r + 1)G(r + 2) G(r + 2) (p − r − 1)!(p − r − 2)! ... (p − 2r)!

The volume for the Grassmannian obtained by dividing the volumes of unitary groups is G(r + 1)G(p − r + 1) Vol(G ( p)) = πr(p−r) (5.10) r C G(p + 1) so if our integral gives the volume of the Grassmannian manifolds then we require that

r Z Y (xi)p−2r Y : i j 2 Ir = i p (x − x ) dV (5.11) + r (1 + x ) (R ) i=1 i

G(r + 1)G(r + 2) (p − 1)(p − 2)2 ... (p − (r − 1))r−1(p − r)r(p − (r + 1))r−1 ... (p − (2r − 1))2(p − 2r) (5.12) as the product of (5.9) and (5.12) gives (5.10). 104 CHAPTER 5. CALCULATIONS

In full generality this hasn’t been proven however for r = 1,... 5 it has been checked to be true. For r = 1, 2, 3 it has been checked by hand, it has been checked for r = 4, 5 in Mathematica. In the r = 3 case, for instance, we have that

2 I3 = 6B(1, p − 1)B(3, p − 3)B(5, p − 5) − 6B(1, p − 1)B (4, p − 4) + 12B(2, p − 2)B(3, p − 3)B(4, p − 4) − 6B2(2, p − 2)B(5, p − 5) − 6B3(3, p − 3) where B(x, y) is the Beta function, which we can be expressed in terms of the Gamma function as Γ(x)Γ(y) B(x, y) = . Γ(x + y) This can be simplified to

2!1!2!3! . (p − 3)(p − 4)2(p − 5)3(p − 4)2(p − 5)

The integral Ir can be seen to give, like the case of I3, a sum of terms each of which are r Beta functions multiplied together. Solving Ir for all r is in some sense then a combinatorics problem of obtaining a simplified fraction in the terms we want. So we have that for r = 1, 2, 3, 4, 5 that

Z  r(p−r) 1 1 p−r p−r 1 dw¯1 ∧ dw1 ∧ ... ∧ dw¯r ∧ dwr † p Cr(p−r) 2i det(Ir + W W ) can be calculated to be G(r + 1)G(p − r + 1) πr(p−r) G(p + 1) as required. We have established that the integral gives the volume of the Grass- mannians for certain parameters of r in the usual case, we now move onto the super case.

5.2 The Super Case

Our aim has been to show whether

2 2 G((r − s) + 1)G((p − q) − (r − s) + 1) Vol(Gr ( p|q)) = 2rq+sp−2rsπrp+sq−(r +s ) r|s C G((p − q) + 1) (5.13) holds true for all valid values of r, s, p, q. This is true for some values of the parameters but for many cases where the formula predicts that the volume will be non-zero we 5.3. 1|0 × (P + 1)|Q 105 instead get that the volume is 0. In general solving this integral for all possible parameters has not been possible. We now list the cases where it can be solved and the calculations involved in that.

5.3 1|0 × (p + 1)|q

For this case we have that the integral is Z  1 (p−q) [dw¯1, dw1, . . . , dw¯p, dwp|dξ¯p+1, dξp+1, . . . , dξ¯p+qdξp+q] † p−q+1 . (5.14) Cp|q 2i (1 + w w) This is the same volume element as obtained in [1] and hence we get that 2qπp Vol(Gr ( p|q)) = Vol( p|q) = . 0|1 C CP (p − q)!

5.4 0|1 × p|(q + 1)

We have that the volume element in this case is Z  1 (q−p) [dw¯1, dw1, . . . , dw¯q, dwq|dξ¯q+1, dξq+1, . . . , dξ¯q+pdξq+p] † q−p+1 . Cq|p 2i (1 + w w) ! ξ This is as we have that W = so that we have w ! !   1 0 ξ Ber(I + W †W ) = 1 + (−i) ξ¯ w¯ = 1 +ww ¯ − iξξ.¯ 0 i w

This is the same as (1 + w†w) from (5.14) so that we have that 2pπq Vol(Gr ( p|q+1)) = . 0|1 C (q − p)! p|q In general we have that Grr|s(C ) are isometrically isomorphic to as Hermitian q|p manifolds to Grs|r(C ) as implied from [1] using the parity reversion functor as ΠCp|q = Cq|p. So the above result is not unexpected. From now on because of this we can always work in the case where p ≥ q.

p|p 5.5 Grr|s(C )

The previous results can be inferred from [1], hence the first new result is that

p|p Vol(Grr|s(C )) = 0 106 CHAPTER 5. CALCULATIONS

This is a simple consequence of examining the integrand in (5.1). The integrand is

1 † p−q , Ber(Ir|s + W W ) so if we have that p = q then the integrand is then a constant function which in the Berezin integral evaluates to 0.

5.6 r, s > 0, q < r

We now want to look at the parameters and examine what is required for the integral (5.1) to be non-zero. We have the following theorem

Theorem 5.6.1. If r, s > 0 and q < r then

p|q Vol(Grr|s(C )) = 0

† Proof. Let us label Ir|s + W W as ! X X X = 00 01 X10 X11

We then have that the integrand is

p−q −1 ! 1 det(X11 − X10X00 X01) p−q = . Ber(X) det(X00)

Now we have that

∗ ∗ X11 = Ir + W11W11 − iW01W01

∗ so that the odd variables are in the matrix −iW01W01. There are s(p − r) conjugate ¯i i pairs of odd variables in this matrix so that a generic term in this matrix is −iξjξj. We ¯i label a term of the form cξjξ which is a multiple of a conjugate pair of odd variables ¯i i ¯k k as having degree 1 so that a term of the form cξjξjξl ξl would have degree 2. If we repeat this process for X11 then we have that X11 has r(q − s) odd variables from ∗ the matrix −iW10W10. The matrices X11 and X00 share no odd variables in common. The Berezin integral is only non-zero when there is a term of highest degree in the integrand.

−1 If we look at the terms in (X11 − X10X00 X01) then this is a matrix where in terms ∗ −1 of the variables from W01W01 we only have terms of degree 1. (X11 − X10X00 X01) is 2|1 5.7. GR1|1(C ) 107 a s × s matrix. This implies that the highest degree term we can produce from the odd variables using the determinant is one of degree s. If we then look at (X11 − −1 p−q X10X00 X01) then the highest degree term we can produce in terms of the odd ∗ variables contained in the matrix W01W01 is one of degree s(p − q).

In order for there to be a term of highest degree then we must have that s(p − q) ≥

1 s(p − r). This is as while we can produce a term from (p−q) which is in principle det(X00) of highest degree possible from the odd variables in X00, so one of degree r(q − s), we require that to generate a term of highest degree overall we need that there is a term

−1 (p−q) of degree s(p − r) generated from det(X11 − X10X00 X01) . A necessary condition for this is that s(p − q) ≥ s(p − r). We can then conclude that as long as r, s > 0 then this is the case when q ≤ r. Hence when r, s > 0 and q > r there is no term of highest degree in the integrand and hence the Berezin Integral (5.1) is 0.

Remark 5.6.2. This line of argumentation that the volume of a supermanifold is 0 as there is no term of highest degree in the integrand is how Berezin [15] arrived at the volume of the unitary group U(p|q) being 0 when both p, q > 0.

We can now move to calculations involving specific cases.

2|1 5.7 Gr1|1(C )

2|1 The distinguished volume element for Gr1|1(C ) is given by:

[dW ]   † where W = w ξ Ber(I1|1 + W W ) so we have that this is:

1 !. 1 +ww ¯ wξ¯ Ber −iξw¯ −iξξ¯ 108 CHAPTER 5. CALCULATIONS

Computing this out we have: 1 ! 1 +ww ¯ wξ¯ Ber −iξw¯ −iξξ¯ 1 = (1+ww ¯ ) (1−iξξ¯ +iξw¯ (1+ww ¯ )−1wξ¯ )−1 (1 − iξξ¯ + iξw¯ (1 +ww ¯ )−1wξ¯ ) = (1 +ww ¯ ) 1 +ww ¯ − iξξ¯ − iww¯ ξξ¯ + iww¯ ξξ¯ = (1 +ww ¯ )2 1 iξξ¯ = − 1 +ww ¯ (1 +ww ¯ )2 We have that − iξξ¯ = −i(θ − iη)(θ + iη) = 2θη (5.15) with this we have that given the Berezin Integral Z [dW ] † C1|1 Ber(I1|1 + W W ) the corresponding Riemann integral when converting to integrating over R with z = x + iy is Z 2dx ∧ dy 2 2 2 . R2 (1 + x + y ) with the factor of two coming from (5.15) as we move from complex odd variables to real odd variables. This integral is twice the integral obtained when finding the volume of CP1, it is also the same as the integral for CP1|1. From this we get that the volume of both is 2π.

3|1 5.8 Gr1|1(C )

3|1 The distinguished volume element for Gr1|1(C ) is given by: ! [dW ] w1 ξ1 where W = † 2 2 2 Ber(I1|1 + W W ) w ξ so we have that this is: 1 . !2 1 +w ¯1w1 +w ¯2w2 w¯1ξ1 +w ¯2ξ2 Ber −iξ¯1w1 − iξ¯2w2 1 − iξ¯1ξ1 − iξ¯2ξ2 3|1 5.9. GR2|0(C ) 109

Computing this out we have:

1 !2 1 +w ¯1w1 +w ¯2w2 w¯1ξ1 +w ¯2ξ2 Ber −iξ¯1w1 − iξ¯2w2 1 − iξ¯1ξ1 − iξ¯2ξ2 !2 (1 − iξ¯1ξ1 − iξ¯2ξ2 + i(ξ¯1w1 + ξ¯2w2)(1 +w ¯1w1 +w ¯2w2)−1(w ¯1ξ1 +w ¯2ξ2)) = (1 +w ¯1w1 +w ¯2w2) 1 +w ¯1w1 +w ¯2w2 − i(1 +w ¯1w1 +w ¯2w2)(ξ¯1ξ1 + ξ¯2ξ2) + i(ξ¯1w1 + ξ¯2w2)(w ¯1ξ1 +w ¯2ξ2)2 = (1 +w ¯1w1 +w ¯2w2)4 1 +w ¯1w1 +w ¯2w2 − i(ξ¯1ξ1 + ξ¯2ξ2) − iξ¯2ξ2w¯1w1 − iξ¯1ξ1w¯2w2 + iξ¯1ξ2w1w¯2 + iξ¯2ξ1w2w¯12 = (1 +w ¯1w1 +w ¯2w2)4  2 1 +w ¯1w1 +w ¯2w2 − i (ξ¯1ξ1 + ξ¯2ξ2) + ξ¯2ξ2w¯1w1 + ξ¯1ξ1w¯2w2 − ξ¯1ξ2w1w¯2 − ξ¯2ξ1w2w¯1 = (1 +w ¯1w1 +w ¯2w2)4 1 = (1 +w ¯1w1 +w ¯2w2)2 (ξ¯1ξ1 + ξ¯2ξ2) + ξ¯2ξ2w¯1w1 + ξ¯1ξ1w¯2w2 − ξ¯1ξ2w1w¯2 − ξ¯2ξ1w2w¯1 − 2i (1 +w ¯1w1 +w ¯2w2)3 ξ¯1ξ1ξ¯2ξ2(1 +w ¯1w1 +w ¯2w2) − 2 (1 +w ¯1w1 +w ¯2w2)4

From this we get that the Berezin Integral is equal to the Riemann integral:

Z [dW ] Z 22dx1 ∧ dy1 ∧ dx2 ∧ dy2 † 2 = 2 1 2 1 2 2 2 2 2 3 . C2|2 Ber(I1|1 + W W ) R4 (1 + (x ) + (y ) + (x ) + (y ) ) This can be evaluated by changing to polar coordinates as

Z 3 r dr 3 1 2 2 8 2 3 Vol(S ) = 8 2π = 4π R+ (1 + r ) 4 This agrees with (5.13).

3|1 5.9 Gr2|0(C )

In this case we have that the volume element is given as

[dW ] † 2 det(I2 + W W ) where ! w1 w2 W = ξ1 ξ2 110 CHAPTER 5. CALCULATIONS

The integrand is expanded as

1 .  2 1! ¯1 1 ¯1 2! w¯  1 2 ξ ξ ξ ξ det I2 + w w − i  w¯2 ξ¯2ξ1 ξ¯2ξ2

We will write this as A − iB with

1! ¯1 1 ¯1 2! w¯  1 2 ξ ξ ξ ξ A = I2 + w w and B = . w¯2 ξ¯2ξ1 ξ¯2ξ2

For the case of 2 × 2 matrices we can use the identity that for matrices X and Y with X invertible then

det(X + Y ) = det(X) + det(Y ) + det(X) Tr(X−1Y )

So we have that

det(A − iB) = det(A) − det(B) − i det(A) Tr(A−1B)

We can arrange this as s − t with s = det(A) and t = det(B) + i det(A) Tr(A−1B). We have that given any s − t then its formal inverse is

∞ X (s − t)−1 = s−(i+1)ti. (5.16) i=0

We have arranged it so that t is a nilpotent element as B consists of nilpotent entries. We can compute that t3 = 0. Hence the power series for the formal inverse terminates after only finite terms and hence is the inverse. We have that

(s − t)−1 = s−1 + s−2t + s−3t2

We have that this is

det(A)−1 + det(A)−2 det(B) + i det(A−1) Tr(A−1B) − det(A)−1(Tr(A−1B))2.

We have that det(B) = 2ξ¯1ξ1ξ¯2ξ2 = 2Ξ and that (Tr(A−1B))2 = 2 det(A)−1Ξ 2|1 5.10. GR2|0(C ) 111 so that

(s − t)−1 = det(A)−1 + det(A)−2 det(B) + i det(A−1) Tr(A−1B) − det(A)−1(Tr(A−1B))2 (5.17)

= det(A)−1 + i det(A)−1 Tr(A−1B). (5.18)

If we square this we obtain that

1 (5.19)  2 1! ¯1 1 ¯1 2! w¯  1 2 ξ ξ ξ ξ det I2 + w w − i  w¯2 ξ¯2ξ1 ξ¯2ξ2

= det(A)−2 + 2i det(A)−1 Tr(A−1B) − 2 det(A)−3Ξ. (5.20)

Now we have that   1! −1 w¯  1 2 1 1 2 2 det(A) = det I2 + w w  = 1 +w ¯ w +w ¯ w , w¯2 from this after passing to real rather than complex coordinates we determine that the Riemann integral associated to the Berezin integral is

Z 22dx1 ∧ dy1 ∧ dx2 ∧ dy2 2 1 2 1 2 2 2 2 2 3 (5.21) R4 (1 + (x ) + (y ) + (x ) + (y ) ) which we have just computed above, so the volume is 4π2.

2|1 5.10 Gr2|0(C )

The volume element here is

 1 −2 [dξ¯1, dξ1, dξ¯2, dξ2] 2i  −1 det 1 − iξ¯1ξ1 − iξ¯2ξ2 so the volume is 0 as there is no element of highest degree.

Remark 5.10.1. The volume being 0 is also unsurprising in this case at the underlying manifold in this case is a point. This case is just an exercise in working through the available parameters. 112 CHAPTER 5. CALCULATIONS

3|1 5.11 Gr2|1(C )

In this case we have that the volume element is given as

 1  [dW ] † 2 2i det(I2|1 + W W ) where   W = w1 w2 ξ

The integrand is expanded as

 1  1 . 2i    2 w¯1    2   1 2  Ber I2|1 +  w¯  w w ξ       −iξ¯ 

Using that Ber(I + AB) = Ber(I + BA) we get that this equals

 1  1 . 2i (1 + w1w¯1 + w2w¯2 + iξξ¯)2

Using the formula for the inverse we get that this equals

1 ξξ¯ − 2i . (5.22) (1 +w ¯1w1 +w ¯2w2)2 (1 +w ¯1w1 +w ¯2w2)2

The Riemann integral we then get over R4 is Z 2dx1 ∧ dy1 ∧ dx2 ∧ dy2 2 1 2 1 2 2 2 2 2 3 R4 (1 + (x ) + (y ) + (x ) + (y ) ) so the volume is 2π2.

3|2 5.12 Gr2|0(C )

3|1 For this we repeat the procedure as in the case for Gr2|0(C ) above. We should expect that the answer will be 0 for two reasons. One, from (5.13) the predicted volume is

G(2 + 1)G(1 − 2 + 1) = 24π2 = 0 G(1 + 1)

3|2 as G(0) = 0. Secondly, we have that dim(Gr2|0(C )) = 2|4 over C. From [2] we have the result that for a symplectic supermanifold of dimension p|q (which our hermitian supermanifolds naturally are) then the volume is 0 if q > p. 3|2 5.12. GR2|0(C ) 113

So let us do the calculation. The volume element is

 1 −2 [dW ] † 2i det(I2 + W W ) with   w1 w2  1 1  W = ξ1 ξ2  .  2 2  ξ1 ξ2 Again using the labelling A and B with this time ! ξ¯1ξ1 + ξ¯2ξ2 ξ¯1ξ1 + ξ¯2ξ2 B = 1 1 1 1 1 2 1 2 ¯1 1 ¯2 2 ¯1 1 ¯2 2 ξ2 ξ1 + ξ2 ξ1 ξ2 ξ2 + ξ2 ξ2 we have that s = det(A) and t = det(B) + i det(A) Tr(A−1B) and so with s − t = det(A) − (det(B) + i det(A) Tr(A−1B)). We need to calculate the inverse of this ex- pression. This time as we have 4 of conjugate odd variables we need to produce the term of highest degree which is of degree 4. We have that Tr(A−1B) is of degree 1 and that det(B) is of degree 2, from this we have that t5 = 0 as the minimum degree of a term in t5 is 5. So we have

(s − t)−1 = s−1 + s−2t + s−3t2 + s−4t3 + s−5t4.

We now need to look at ti and we have that

t = det(B) + i det(A) Tr(A−1B)

t2 = det(B)2 + 2i det(A) det(B) Tr(A−1B) − det(A)2 Tr(A−1B)2

t3 = −3 det(B) det(A)2 Tr(A−1B)2 − i det(A)3 Tr(A−1B)3

t4 = det(A)4 Tr(A−1B)4.

From these due to the nature of the Berezin integral we only need to look at the terms of degree 4. If we denote by Ξ again the term of highest degree we have that after some calculations the terms of highest degree are

det(B)2 = 12Ξ 12Ξ det(B) Tr(A−1B)2 = det(A) 24Ξ Tr(A−1B)4 = . det(A)2 114 CHAPTER 5. CALCULATIONS

1 We now can say that the highest degree term in the expansion of † is det(I2+W W ) det(B)2 1 1 + (−3 det(A)2 det(B) Tr(A−1B)2) + det(A)4 Tr(A−1B)4 det(A)3 det(A)4 det(A)5 12Ξ 36Ξ 24Ξ = − + det(A)3 det(A)3 det(A)3 = 0

3|2 Hence we can now say that the volume of Gr2|0(C ) is 0.

3|2 5.13 Gr1|1(C )

This is a complex supermanifold of dimension 3|3 and the projected answer for the volume is G(1)G(2) 23π3 G(2) which is non-zero. It is also the first case where we are explicitly calculating where the denominator in the integrand is the Berezinian proper rather than the determinant. We have that the volume element is [dW ] † Ber(I1|1 + W W ) with   1 1 w1 ξ1  1 1  W = w2 ξ2   1 1 ξ3 w3 We have that ! 1 +w ¯1w1 +w ¯1w1 − iξ¯1ξ1 w¯1ξ1 +w ¯1ξ1 − iξ¯1w1 I + W †W = 1 1 2 2 3 3 1 1 2 2 3 3 1|1 ¯1 1 ¯1 1 1 1 ¯1 1 ¯1 1 1 1 −iξ1 w1 − iξ2 w2 +w ¯3ξ3 1 − iξ1 ξ1 − iξ2 ξ2 +w ¯3w3 so that ¯1 1 ¯1 1 1 1 1 1 1 1 ¯1 1 ¯1 1 ¯1 1 1 1 (−iξ1 w1−iξ2 w2+w ¯3ξ3 )(w ¯1ξ1 +w ¯2ξ2 −iξ3 w3) 1 − iξ1 ξ1 − iξ2 ξ2 +w ¯3w3 − 1+w ¯1w1+w ¯1w1−iξ¯1ξ1 Ber(I + W †W )−1 = 1 1 2 2 3 3 . 1|1 1 1 1 1 ¯1 1 1 +w ¯1w1 +w ¯2w2 − iξ3 ξ3 Using that 1 1 iξ¯1ξ1 = + 3 3 1 1 1 1 ¯1 1 1 1 1 1 1 1 1 1 2 1 +w ¯1w1 +w ¯2w2 − iξ3 ξ3 1 +w ¯1w1 +w ¯2w2 (1 +w ¯1w1 +w ¯2w2) we can see that no term of degree 3 can be generated from this expression. This immediately implies that

3|2 Vol(Gr1|1(C )) = 0. 3|2 5.14. GR2|1(C ) 115

3|2 5.14 Gr2|1(C )

This is again a supermanifold of dimension 3|3. The predicted volume is G(2)G(1) 23π3 G(2) however as the following calculations will demonstrate we have that

3|2 Vol(Gr2|1(C )) = 0.

This occurs again as there is no term of highest degree in the integrand. However the steps leading up to that conclusion are more interesting than before as if the term of highest degree wasn’t zero then the volume would be infinite. Let us proceed with the calculation, the volume element is [dW ] † Ber(I2|1 + W W ) with ! w1 w2 ξ3 W = 1 1 1 2 2 3 ξ1 ξ2 w2 Using that Ber(1 + AB) = Ber(1 + BA) then we get that this calculation is identical

3|2 (up to a sign) to the case of Gr1|1(C ) and hence the volume is 0.

3|2 5.15 Gr2|2(C )

For this case we get that the volume element is [dW ] † Ber(I2|2 + W W ) with   W = w1 w2 ξ1 ξ2

Using again that Ber(I +AB) = Ber(I +BA), we can compute that the highest degree term is −2ξ¯1ξ1ξ¯2ξ2(1 +w ¯1w1 +w ¯2w2)3 and from this converting to real variables and taking an absolute value so our volume is positive we get the Riemann integral to be the same as (5.21: Z 22dx1 ∧ dy1 ∧ dx2 ∧ dy2 2 1 2 1 2 2 2 2 2 3 R4 (1 + (x ) + (y ) + (x ) + (y ) ) so the volume is 4π2 again as the formula predicts. 116 CHAPTER 5. CALCULATIONS

3|2 5.16 Gr3|1(C )

For this case we should get by (5.6.1) that the volume is 0. We will check this though. We get that the volume element is

 1 −2 [dW ] † 2i Ber(I3|1 + W W ) with   W = ξ1 ξ2 ξ3 w .

We will again use Ber(1 + AB) = Ber(1 + BA), there is a subtlety here in that

Ber(I + W †W ) = det(I + WW †)−1.

This is as W here is an odd row vector so I + WW † corresponds to the D block in a supermatrix ! AB . CD We get that the integrand is

 1 −2 (1 + ww¯ − iξ1ξ¯1 − iξ2ξ¯2 − iξ3ξ¯3), 2i so we get that there is no term of highest degree and so the volume is 0.

4|2 5.17 Gr2|0(C )

The volume element is [dW ] † 2 det(I2|0 + W W ) We have that this is

1 (5.23) !2 1 +w ¯1w1 +w ¯2w2 − iξ¯3ξ3 − iξ¯4ξ4 w¯1w1 +w ¯2w2 − iξ¯3ξ3 − iξ¯4ξ4 det 1 1 1 1 1 1 1 1 1 2 1 2 1 2 1 2 1 1 2 2 ¯3 3 ¯4 4 1 1 2 2 ¯3 3 ¯4 4 w¯2w1 +w ¯2w1 − iξ2 ξ1 − iξ2 ξ1 1 +w ¯2w2 +w ¯2w2 − iξ2 ξ2 − iξ2 ξ2 If we label 1 1 2 2 1 1 2 2 ! 1 +w ¯1w1 +w ¯1w1 w¯1w2 +w ¯1w2 1 1 2 2 1 1 2 2 w¯2w1 +w ¯2w1 1 +w ¯2w2 +w ¯2w2 by A and ¯3 3 ¯4 4 ¯3 3 ¯4 4! ξ1 ξ1 + ξ1 ξ1 ξ1 ξ2 + ξ1 ξ2 ¯3 3 ¯4 4 ¯3 3 ¯4 4 ξ2 ξ1 + ξ2 ξ1 ξ2 ξ2 + ξ2 ξ2 4|2 5.17. GR2|0(C ) 117 by B then we can rewrite (5.23) as 1 . det(A − iB)2 We follow (5.12) and repeat many of the same steps to get that with s = det(A) and t = det(B) + i det(A) Tr(A−1B) then det(A − iB)−1 = (s − t)−1 is

s−1 + s−2t + s−3t2 + s−4t3 + s−5t4 with

t2 =(det(A) Tr((A)−1B) − det(B))2

= det(A)2 Tr((A)−1B) − 2 det(B) det(A) Tr((A)−1B) + det(B)2

t3 =(det(A) Tr((A)−1B) − det(B))3

= det(A)3 Tr((A)−1B)3 − 3 det(A)2 Tr((A)−1B)2 det(B)

t4 =(det(A) Tr((A)−1B) − det(B))4

= det(A)4 Tr((A)−1B)4

Squaring this we get

s−2 + 2s−3t + 3s−4t2 + 4s−5t3 + 5s−6t4. (5.24)

We are in particular looking at the coefficient in front of the term

¯3 3 ¯3 3 ¯4 4 ¯4 4 ξ1 ξ1 ξ2 ξ2 ξ1 ξ1 ξ2 ξ2 = Ξ in (5.24). This implies that we only need to look at

3xs−4t2 + 4s−5t3 + 5s−6t4 as every other part of the expansion doesn’t contain Ξ. Using this information leads us to having to having to perform the following calculation:

3 det(A)−4 det(B)2 + 4 det(A)−5(−3 det(A)2 Tr((A)−1B)2 det(B))

+ 5 det(A)−6 det(A)4 Tr((A)−1B)4.

We now need to compute det(B)2, 118 CHAPTER 5. CALCULATIONS

det(A)2 Tr((A)−1B)2 det(B), and det(A)4 Tr((A)−1B)4.

These compute to be: det(B)2 = 12Ξ,

det(A)2 Tr((A)−1B)2 det(B) = 12 det(A)Ξ, and finally that det(A)4 Tr((A)−1B)4 = 24 det(A)2Ξ.

So we have the following:

3 det(A)−4(det(B)2) + 4 det(A)−5(−3 det(A)2 Tr((A)−1B)2 det(B))

+ 5 det(A)−6(det(A)4 Tr((A)−1B)4)

= 3 det(A)−4(12Ξ) + 4 det(A)−5(−3(12 det(A)Ξ)) + 5 det(A)−6(24 det(A)2Ξ)

= det(A)−4(36Ξ − 144Ξ + 120Ξ) 12Ξ = det(A)4

i i i 3 3 3 3 4 4 4 4 We have that if ξj = θj + iηj then Ξ = (2θ1η1)(2θ2η2)(2θ1η1)(2θ2η2) and so Ξ = 4 3 3 3 3 4 4 4 4 2 θ1η1θ2η2θ1η1θ2η2 The Berezin Integral we need to calculate is:

Z [dW ] † 2 . C4|4 det(I2 + W W ) and this can now be rewritten as

Z [dW ] Z  12Ξ  † 2 = ··· + 4 [dW ]. C4|4 det(I2 + W W ) C4|4 det(A)

Substituting in real variables we get that the integral over R8 that gives the volume 4|2 of Gr2|0(C ) is:

Z 4 1 1 2 2 2 dx2 ∧ dy2 ∧ . . . dx2 ∧ dy2 12 4 R8 det(A) 4 † This is 192vol(Gr2(C )) as if A = I + Z Z then Z 1 1 2 2 4 dx2 ∧ dy2 ∧ . . . dx2 ∧ dy2 vol(Gr2(C )) = † 4 R8 det(I + Z Z) 4|2 5.18. GR1|1(C ) 119 so that

 G(3)G(3) vol(G ( 4|2)) = 192(vol(G ( 4))) = 192 π4 = 16π4 2|0 C 2 C G(5) Or written another way we have that it is

G((2 − 0) + 1)G((4 − 2) − (2 − 0) + 1) 24π4 G((4 − 2) + 1) so we again have agreement with the conjectured volume.

4|2 5.18 Gr1|1(C )

The volume element here is [dW ] † 2 Ber(I1|1 + W W ) with  1 1  w1 ξ2  2 2  w1 ξ2  W =   . w3 ξ3   1 2  4 4 ξ1 w2 The integrand here is then 1 !2 a b Ber c d with

1 1 2 2 3 3 ¯4 4 a =1 +w ¯1w1 +w ¯1w1 +w ¯1w1 − iξ1 ξ1 1 1 2 2 3 3 ¯4 4 b =w ¯1ξ2 +w ¯1ξ2 +w ¯1ξ2 − iξ1 w2 ¯1 1 ¯2 2 ¯3 3 4 4 c = − iξ2 w1 − iξ2 w1 − iξ2 w1 +w ¯2ξ1 4 4 ¯1 1 ¯2 2 ¯3 3 d =1 +w ¯2w2 − i(ξ2 ξ2 + ξ2 ξ2 + ξ2 ξ2 ).

We have that 1 = (a−1(d − ca−1b))2 !2 a b Ber c d Both d and cb can only produce a term of highest degree if they are cubed. Since this isn’t the case then the Berezin Integral is 0 and hence the volume is 0. Chapter 6

Conclusions and Discussion

In this thesis we have looked at the Grassmannian supermanifolds as Hermitian man- ifolds and have derived the volume element. This required developing some new lin- ear superalgebra and in particular some properties relating the Berezinian and the Kronecker product together. These were used to calculate the volume element of

p|q Grr|s(C ) for arbitrary r, s, p, and q. For some small values of these parameters the p volume of these supermanifolds was calculated. For the usual case of Grr(C ) we expect that r Z Y (xi)p−2r Y : i j 2 Ir = i p (x − x ) dV (6.1) + r (1 + x ) (R ) i=1 i

G(r + 1)G(r + 2) (p − 1)(p − 2)2 ... (p − (r − 1))r−1(p − r)r(p − (r + 1))r−1 ... (p − (2r − 1))2(p − 2r) (6.2) is true for all values of r rather than up to r = 5. There is a hope of developing a line of argumentation that implies that this is true that because the volume of the Grassmannian should be the same when calculated by dividing the volume of Unitary groups or the direst method but the precise details in how to make this connection eluded the author. In general when working in the super case we wanted to closely follow the notation in the usual case. The work in the appendix, in content, is not new but the form is, to our knowledge. We hope it provides a clear treatment of basic superalgebra. It is a treatment working from the abstract base, in that we are working in a symmetric monoidal category, to what that translates into in terms of coordinates. In particular

120 121 it clarifies, for a supermatrix A, why AST and ATS have the signs that they do and how they relate to moving between right and left coordinates. With signs appearing in supermathematics all the time making sure that the notation that you are working with is consistent is a must. There is a natural notational laxness in the usual case. This is natural as one is working in a commutative setting when looking at a smooth manifold. For instance Hermitian forms are often defined in mathematics as being sesquilinear in the second variable and when working with matrices and vectors we work with column vectors, these two conventions aren’t entirely consistent. If the Hermitian form on a module is sesquilinear in the second variable then one is implicitly assuming that you are working on a module M that is a left module first. If you are writing things in terms of columns of coordinates then one is working with a module M which is a right module first. So when one first tries to move to the super setting these inconsistencies, that didn’t matter in the usual case, truly clash in the super case. So much of the hours that produced this thesis was making a consistent framework in which to work in. For another part where this mattered we have defined that the internal hom(U, V ) is isomorphic to V ⊗ U ∨ with V ⊗ U ∨ being in this order. This ordering matters, and that the map from V ⊗ U ∨ to hom(U, V ) is

(v ⊗ ω)(u) := vω(u), is important. This map is different to the one from [20] and this has made a material difference. In defining vectorisation that the image is U ∨ ⊗ V then becomes impor- tant. With this the definition of the Kronecker product of two supermatrices becomes obvious and then proving things using it is made easier. In terms of further possible work there is the following conjecture.

p|q Conjecture 6.0.1. If 0 < r < p and 0 < s < q then the volume of Grr|s(C ) is 0. In general we expect that this is true for all Flag supermanifolds for similar ranges of parameters.

We make this conjecture for the following reasons. If our volume element is

 1 (r−s)((p−q)−(r−s)) [dW ] † (p−q) 2i Ber(Ir|s + W W ) 122 CHAPTER 6. CONCLUSIONS AND DISCUSSION

! † AB then if Ir|s + W W = we have that CD

 1 (r−s)((p−q)−(r−s)) det(D − CA−1B)p−q . 2i det(A)(p−q)

We can assume that p > q and so we find that when examining D and A that they share no even variables in common, we also have that the even variables in D present in C and B are in numerator. Hence once we expand the integrand over the odd variables we should expect that the coefficients of this expansion are rational functions of the f(x) form g(y) for x the even variables in D and y the even variables of A. The only possibility then for us to produce a convergent Riemann integral associated to the Berezin integral is if the term of highest degree is 0 and hence the volume is 0. As the underlying manifolds for these supermanifolds are compact then this seems to be the only reasonable conjecture to make. But this might fail to be true. In [2] a calculation that would determine the volume of a generic symplectic super- manifold is given. A generic smooth supermanifold can be looked at as if it is some vector bundle over the base manifold with the vector spaces over each point being ex- terior algebras. Considering our smooth symplectic supermanifolds as vector bundles one should be able to obtain that vector bundle’s Euler class and using that one can derive an expression for the volume in the supermathematics sense from knowing the Euler class. Calculating what the volume of the Grassmannian supermanifolds using this approach and contrasting what answers one may be able to achieve from there to our results would be fruitful. For a long time it was thought that the volume of a generic symplectic superman- ifold had to be zero. This was because Darboux coordinates exist on supermanifolds so the Berezin Integral in these coordinates should be 0. This is not the case but one should be able to calculate the volume using Darboux coordinates. The question is whether the partition of unity that one is implicitly using then carries the information of the volume. This would be useful to look at, the author has not made much progress on this front. The other idea is whether the boundary of the domain where Darboux coordinates are defined carries the information of the volume. This would be in analogy with the usual case where one can find a Darboux chart for CPn which maps the unit ball in 123

2n n n πn R into CP . One can observe that the unit ball and CP have the same volume n! the boundary of the unit ball defines its volume. The Berezin Integral over a domain with boundary as one finds in [24] and [26] depends on the algebraic expression that defines the boundary. So in principle the Berezin integral in Darboux coordinates where there is a boundary can be non-zero. It is maybe in this where the information of the volume is found when one has Darboux coordinates. Bibliography

[1] T. T. Voronov, “On volumes of classical supermanifolds,” Sbornik: Mathematics, vol. 207, no. 11, p. 1512, 2016.

[2] D. Stanford and E. Witten, “Jt gravity and the ensembles of random matrix theory,” arXiv preprint arXiv:1907.03363, 2019.

[3] T. Koda, “An introduction to the geometry of homogeneous spaces,” in Proceed- ings of the 13th International Workshop on Differential Geometry and Related Fields, Natl. Inst. Math. Sci.(NIMS), Taejon, pp. 121–144, 2009.

[4] A. Arvanitoge¯orgos, An introduction to Lie groups and the geometry of homoge- neous spaces, vol. 22. American Mathematical Soc., 2003.

[5] S. Kobayashi and K. Nomizu, Foundations of Differential Geometry, Volume 1. A Wiley Publication in Applied Statistics, Wiley, 1996.

[6] J. Ferrer, M. Gar´cia, and F. Puerta, “Differentiable families of subspaces,” Linear algebra and its applications, vol. 199, pp. 229–252, 1994.

[7] A. Edelman, T. A. Arias, and S. T. Smith, “The geometry of algorithms with orthogonality constraints,” SIAM journal on Matrix Analysis and Applications, vol. 20, no. 2, pp. 303–353, 1998.

[8] L. J. Boya, E. Sudarshan, and T. Tilma, “Volumes of compact manifolds,” Reports on Mathematical Physics, vol. 52, no. 3, pp. 401–422, 2003.

[9] M. Marinov, “Invariant volumes of compact groups,” Journal of Physics A: Math- ematical and General, vol. 13, no. 11, p. 3357, 1980.

124 BIBLIOGRAPHY 125

[10] M. Marinov, “Correction toinvariant volumes of compact groups’,” Journal of Physics A: Mathematical and General, vol. 14, no. 2, p. 543, 1981.

[11] Y. Hashimoto, “On Macdonald’s formula for the volume of a compact Lie group,” Commentarii Mathematici Helvetici, vol. 72, no. 4, pp. 660–662, 1997.

[12] J. A. Díaz-García and R. Gutiérrez-Sánchez, “Jacobians of singular matrix transformations: Extensions,” arXiv preprint arXiv:1207.1993, 2012.

[13] L. Hua, Harmonic analysis of functions of several complex variables in the classical domains. American Mathematical Soc., 1963.

[14] V. S. Adamchik, “Contributions to the theory of the Barnes function,” Int. J. Math. Comput. Sci., vol. 9, no. 1, pp. 11–30, 2014.

[15] F. A. Berezin, “Representations of the supergroup U(p, q),” Functional Analysis and Its Applications, vol. 10, no. 3, pp. 221–223, 1976.

[16] F. A. Berezin, Introduction to superanalysis, vol. 9. Springer Science & Business Media, 2013.

[17] D. A. Leites, “Introduction to the theory of supermanifolds,” Russian Mathematical Surveys, vol. 35, no. 1, p. 1, 1980.

[18] B. Kostant, “Graded manifolds, graded Lie theory, and prequantization,” in Differential geometrical methods in mathematical physics, pp. 177–306, Springer, 1977.

[19] Y. I. Manin, Gauge field theory and complex geometry, vol. 289. Springer Science & Business Media, 2013.

[20] P. Deligne and J. Morgan, “Notes on supersymmetry (following Joseph Bernstein),” Quantum fields and strings: a course for mathematicians, pp. 41–97, 1999.

[21] E. Keßler, Supergeometry, Super Riemann Surfaces and the Superconformal Action Functional. Springer, 2019.

[22] V. S. Varadarajan, Supersymmetry for Mathematicians: An Introduction, vol. 11. American Mathematical Soc., 2004.

[23] C. Carmeli, L. Caston, and R. Fioresi, Mathematical foundations of supersymmetry, vol. 15. European Mathematical Society, 2011.

[24] T. Voronov, Geometric integration theory on supermanifolds, vol. 1. CRC Press, 1991.

[25] M. Batchelor, “The structure of supermanifolds,” Transactions of the American Mathematical Society, vol. 253, pp. 329–338, 1979.

[26] E. Witten, “Notes on supermanifolds and integration,” arXiv preprint arXiv:1209.2199, 2012.

[27] Y. Kosmann-Schwarzbach and J. Monterde, “Divergence operators and odd Poisson brackets,” in Annales de l’institut Fourier, vol. 52, pp. 419–456, 2002.

[28] M.-A. Knus, Quadratic and Hermitian forms over rings, vol. 294. Springer Science & Business Media, 2012.

[29] P. Ara, “Morita equivalence for rings with involution,” Algebras and Representation Theory, vol. 2, no. 3, pp. 227–247, 1999.

[30] C. Iuliu-Lazaroiu, D. McNamee, and C. Sämann, “Generalized Berezin–Toeplitz quantization of Kähler supermanifolds,” Journal of High Energy Physics, vol. 2009, no. 05, p. 055, 2009.

[31] H. V. Henderson and S. R. Searle, “The vec-permutation matrix, the vec operator and kronecker products: A review,” Linear and multilinear algebra, vol. 9, no. 4, pp. 271–288, 1981.

[32] Z. Feng, “The weighted super Bergman kernels over the supermatrix spaces,” Mathematical Physics, Analysis and Geometry, vol. 18, no. 1, p. 4, 2015.

[33] C. Kassel, Quantum groups, vol. 155. Springer Science & Business Media, 2012.

[34] P. Deligne, P. Etingof, D. S. Freed, L. C. Jeffrey, D. Kazhdan, J. W. Morgan, D. R. Morrison, and E. Witten, “Quantum fields and strings. A course for mathematicians,” in Material from the Special Year on Quantum Field Theory held at the Institute for Advanced Study, American Mathematical Society, 1999.

[35] C. Voisin, Hodge theory and complex algebraic geometry II, vol. 2. Cambridge University Press, 2003.

[36] Y.-C. Wong, “Differential geometry of Grassmann manifolds,” Proceedings of the National Academy of Sciences of the United States of America, vol. 57, no. 3, p. 589, 1967.

[37] W. Ballmann, Lectures on Kähler manifolds, vol. 2. European Mathematical Society, 2006.

[38] H. M. Khudaverdian and T. T. Voronov, “Berezinians, exterior powers and recurrent sequences,” Letters in Mathematical Physics, vol. 74, no. 2, pp. 201–228, 2005.

[39] R. A. Horn and C. R. Johnson, Matrix analysis. Cambridge university press, 2012.

[40] T. Trif, “Multiple integrals of symmetric functions,” The American mathematical monthly, vol. 104, no. 7, pp. 605–608, 1997.

[41] D. Westra, Superrings and supergroups. PhD thesis, University of Vienna, 2009.

[42] nLab authors, “super vector space.” http://ncatlab.org/nlab/show/super%20vector%20space, June 2020. Revision 22.

Appendix A

An Introduction to Superalgebra and on Conventions


The purpose of this appendix is to provide a summary of the basic algebraic framework of supermathematics that is used in the main part of the thesis. At the core of supermathematics is the Koszul sign rule, or just the sign rule. We will show where this occurs in the foundations and its ramifications throughout the rest of the theory. Having objects obey the Koszul sign rule gives the super version of familiar concepts. Instead of commutativity one has supercommutativity, and the like. We will in this appendix pay attention to the effect this has when one wants to talk about coordinates and free finitely generated supermodules. Supercommutativity, while only a mild form of noncommutativity, means, as we will see, that one must take care over whether one is writing elements in left or right coordinates, and the like.

The prefix super generally means that there is a $\mathbb{Z}_2$ grading present; it originally comes from physics, more specifically from the notion of supersymmetry. We will use the prefix super when introducing things here but may drop it later if the meaning of a statement is clear from context, as it is repetitive to have the word super appear over and over again. We draw most of this material from [19], [20], [17], [22], and [41]; however we will expand the explanation of some things and emphasise the ideas from those texts that are most relevant to the wider text.

A.1 Superalgebra

A.1.1 Superrings

Definition A.1.1. A superring R is a Z2 graded ring. In particular we have that it decomposes as the following direct sum:

R = R0 ⊕ R1 and we have that the product of the ring has the property that:

RiRj ⊆ Ri+j.

We will also require that the characteristic of R is not equal to 2. We call an element r homogeneous if it is a member of exactly one of $R_0$ or $R_1$. For r homogeneous the parity of r, denoted $\tilde{r}$, is defined to be the index of the component in the direct sum to which it belongs.

The parity has two values, either 0 or 1. We note that it is often denoted elsewhere in the literature by p(r). Elements of parity 0 are labelled "even" and elements of parity 1 are labelled "odd". In general an element of R will be a sum of elements in $R_0$ and $R_1$; however, for writing down formulae one works with homogeneous elements, as any other element can be written as a sum of homogeneous elements. If we have the product of two elements rs then the parity of rs is denoted by $\widetilde{rs}$ and we have that $\widetilde{rs} = \tilde{r} + \tilde{s}$.

Definition A.1.2. Let r and s be two elements in R; then the super-commutator [r, s] is the expression:
$$[r, s] = rs - (-1)^{\tilde{r}\tilde{s}} sr.$$

For a superring we can define the supercentre of the ring to be

Z(R) = {x ∈ R | [x, y] = 0 ∀y ∈ R}.

We have then that R is supercommutative if Z(R) = R.

Remark A.1.3. Any ring R can be regarded as a superring by defining $R_0 = R$ and $R_1 = 0$. In particular any commutative ring is supercommutative, but the reverse doesn't always hold.

Example A.1.4. An example of a supercommutative ring that occurs naturally is the exterior algebra $\Lambda^{*}(V)$ of a vector space V. This is naturally $\mathbb{Z}$ graded, but if the grading is taken modulo 2 then this becomes a supercommutative algebra.

Given a graded ring one can speak of graded modules on the left and right so we can give the following definition for a supermodule.

Definition A.1.5. A right supermodule M over a superring R is a $\mathbb{Z}_2$ graded module over R. That is, in addition to the usual axioms, we have that:
$$M_i R_j \subseteq M_{i+j}$$
with the indices being read modulo 2.

Definition A.1.6. A homomorphism between two right supermodules M and N is a map f : M → N such that for r ∈ R

f(mr) = f(m)r

f(m1 + m2) = f(m1) + f(m2).

and that f(Mi) ⊆ Ni.

As we have that the parity of an element is preserved we call these homomorphisms even. In line with sometimes taking a categorical view we shall often drop the prefix homo- from homomorphisms and refer instead to just morphisms. We shall denote the set of parity preserving morphisms between two supermodules as Hom0(M,N).

Remark A.1.7. We have called these even because we will define odd morphisms, which are elements of Hom1(M,N), which reverse the parity.

Definition A.1.8. Given a right supermodule M = M0 ⊕M1 then the parity reversion functor Π applied to M (on the left) is the module ΠM where (ΠM)0 = M1 and

(ΠM)1 = M0.

One can also "apply it on the right" to arrive at the module MΠ. It is called the parity reversion functor because elements of parity 0 become elements of parity 1, and vice versa. We have that

ΠM × R → ΠM

(Πm, r) 7→ Π(mr) which can be written as Π(m)r and that Π2 = Id. Hence we have that ΠM is a right module. The module MΠ is again a right module with the action

$$(m\Pi, r) \mapsto m\Pi * r = (-1)^{\tilde{r}} (mr)\Pi$$
where we use the ∗ symbol here to contrast with the usual concatenation.

Remark A.1.9. With signs showing up often, it is useful to differentiate between the original action, in this case of R on M, which will continue to be written using concatenation, and any new or induced action defined using this original action. Hence in this case we use ∗ above to indicate a new action defined using concatenation but with a sign.

Given a morphism f : M → N we can induce a mapping Πf : ΠM → ΠN which is the same map as f as a mapping of sets. Notationally Πf(Πm) = Π(f(m)), so that: Πf(Π(mr)) = Π(f(mr)) = Π(f(m)r) = Π(f(m))r.

We then have that Π is a covariant functor from the category of right modules to itself. We can also define fΠ : MΠ → NΠ by the rule that fΠ(mΠ) = f(m)Π. This gives a right linear map as required as

$$f\Pi(m\Pi * r) = (-1)^{\tilde{r}} f\Pi((mr)\Pi) = (-1)^{\tilde{r}} f(mr)\Pi = (-1)^{\tilde{r}} \big(f(m)r\big)\Pi = f(m)\Pi * r = f\Pi(m\Pi) * r$$

So that Π applied on the left is also a covariant functor from the category of right modules to itself. We shall mainly look only at the case when Π is applied on the left. This is so that the right module structure remains unchanged when one moves to looking at modules over supercommutative superrings when the presence of Π modifies the induced left module structure.

A.1.2 Super Vector Spaces

As we remarked earlier, any ring can be regarded as a superring, and we will look in detail at the case when our ring is a field k of characteristic 0, which will generally be R or C. We note that one can formulate the below over any field as long as the characteristic is not equal to 2, but this will not be needed.

Definition A.1.10. A super vector space V is a Z2 graded vector space so that:

V = V0 ⊕ V1

If a super vector space V is finite dimensional then we have that $\dim(V_0) = p$ and $\dim(V_1) = q$, and we denote the dimension of the super vector space by p|q. We can regard a super vector space as a supermodule over a field k. We can define the category of super vector spaces (sVect) as the category with objects being super vector spaces and the morphisms between them being linear maps of vector spaces that preserve the grading. The foundations of most of supermathematics start from the tensor product of super vector spaces. We can define a tensor product on super vector spaces using the ordinary tensor product of vector spaces; the new vector space V ⊗ W is again a super vector space. The only thing to specify is the grading, and for V ⊗ W it is:

$$(V \otimes W)_k = \bigoplus_{i+j=k} V_i \otimes W_j.$$
Given two morphisms f : V → S and g : W → T we can form the tensor product of these morphisms, $f \otimes g : V \otimes W \to S \otimes T$, which for $v \otimes w \in V \otimes W$ is given by

(f ⊗ g)(v ⊗ w) = f(v) ⊗ g(w).

The presence of this tensor product turns our category of super vector spaces into a monoidal, or tensor, category. In terms of our category this means that:

(u ⊗ v) ⊗ w 7→ u ⊗ (v ⊗ w), for u ∈ U, v ∈ V, w ∈ W , defines a canonical isomorphism between

(U ⊗ V ) ⊗ W and U ⊗ (V ⊗ W ).

There is a unit object, in this case the field k such that there are isomorphisms l : k ⊗ V → V and r : V ⊗ k → V such that

l(1 ⊗ v) = v and r(v ⊗ 1) = v

To be more specific, the above is actually defining a strict monoidal category. A full exposition of monoidal categories can be found in [33], where they are called tensor categories. Here we are only stating as much as we need. One can look at the tensor product and impose symmetry conditions so that our category becomes a symmetric monoidal category. In general, one can speak of braided monoidal categories; these are monoidal categories where for objects V and W we have the following isomorphism

cV,W : V ⊗ W → W ⊗ V called the braiding isomorphism. In order for this to be symmetric we require that:

$$c_{W,V} \circ c_{V,W} = \mathrm{Id}_{V \otimes W}.$$

It can be shown [42] that there are only two choices for the braiding isomorphism when dealing with Z2 graded vector spaces. The category of super vector spaces is defined to be the monoidal category of Z2 graded vector spaces with the braiding given by:

$$v \otimes w \mapsto (-1)^{\tilde{v}\tilde{w}}\, w \otimes v.$$

Remark A.1.11. In essence it is this isomorphism which gives us the 'Koszul sign rule', the philosophy in supermathematics that when one switches two elements a sign appears based on the parities of the elements. It is through respecting this isomorphism that almost all signs appear in supermathematics.

We can now naturally define what a superalgebra is.

Definition A.1.12. A superalgebra over k is a super vector space A with a morphism:

∇ : A ⊗ A → A

We defined superrings in A.1.1 and most if not all examples of them that we shall use are in fact superalgebras over R or C. A superalgebra is associative if we have that (xy)z = x(yz). We can now give another natural definition of when a superalgebra is supercommutative, this justifies the remark that the braiding isomorphism is the source of the signs.

Definition A.1.13. A superalgebra A is supercommutative if the following diagram commutes,
$$\begin{array}{ccc}
A \otimes A & \xrightarrow{\ c_{A,A}\ } & A \otimes A\\
 & {\scriptstyle \nabla}\searrow & \downarrow{\scriptstyle \nabla}\\
 & & A
\end{array}$$
this is the same as the statement that for two elements a, b, with multiplication given by concatenation, $ab = (-1)^{\tilde{a}\tilde{b}} ba$.

Most algebras that will be dealt with are associative and supercommutative.

Example A.1.14. The standard example of a superalgebra is

$$A = K[x_1, \dots, x_p; \xi_1, \dots, \xi_q] \qquad (A.1)$$
generated by the elements $x_i$ and $\xi_j$ with the relations that:

$$x_i x_j = x_j x_i, \qquad x_i \xi_j = \xi_j x_i, \qquad \text{and} \qquad \xi_i \xi_j = -\xi_j \xi_i.$$

This is a supercommutative algebra.
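To illustrate the sign rule in this example concretely, the following is a minimal Python sketch (our own names and conventions, not part of the thesis) of multiplication of monomials in the odd generators $\xi_1, \dots, \xi_q$: a monomial is stored as a coefficient together with a tuple of generator indices, a repeated generator kills the product since $\xi_i^2 = 0$, and the sign is the parity of the permutation needed to sort the concatenated indices.

def gr_mul(m1, m2):
    """Multiply two Grassmann monomials.

    A monomial is (coeff, indices), with `indices` a strictly increasing tuple of
    generator labels, e.g. (2.0, (1, 3)) stands for 2*xi1*xi3.  Returns the product
    in the same normal form, or (0.0, ()) if it vanishes.
    """
    c1, i1 = m1
    c2, i2 = m2
    merged = list(i1 + i2)
    if len(set(merged)) < len(merged):      # xi_i * xi_i = 0
        return (0.0, ())
    # Count the adjacent transpositions needed to sort the indices; each swap of
    # two odd generators contributes a factor of -1 (the Koszul sign rule).
    sign, arr = 1, merged[:]
    for i in range(len(arr)):
        for j in range(len(arr) - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                sign = -sign
    return (sign * c1 * c2, tuple(arr))

print(gr_mul((1.0, (1, 2)), (1.0, (3,))))   # (1.0, (1, 2, 3))
print(gr_mul((1.0, (3,)), (1.0, (1, 2))))   # (1.0, (1, 2, 3)): xi1*xi2 is even, no sign
print(gr_mul((1.0, (2,)), (1.0, (1,))))     # (-1.0, (1, 2)): odd generators anticommute

This only models the purely odd part of the algebra, but it is exactly where the signs of Example A.1.14 live.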

Example A.1.15. A class of nonassociative superalgebras over k is given by the Lie superalgebras. A Lie superalgebra L is a superalgebra over k whose product, denoted by the bracket [ , ], satisfies the following properties.

$$[a, b] = -(-1)^{\tilde{a}\tilde{b}} [b, a]$$
$$[a, [b, c]] = [[a, b], c] + (-1)^{\tilde{a}\tilde{b}} [b, [a, c]].$$

The first property is the usual skew symmetry with parity taken into consideration. The second property is that the action of an element a on the algebra is a derivation of the bracket. This is a succinct way of writing the Jacobi identity for Lie superalgebras, as the Jacobi identity written in the usual fashion contains more signs.

Example A.1.16. We can also encounter Lie superalgebras over a ring. A typical example of this is the Lie superalgebra of vector fields on a supermanifold.

Having introduced examples of supercommutative algebras we will now look at modules over supercommutative rings.

A.1.3 Modules over supercommutative rings

Suppose M is a right R module and R is supercommutative. A right R-module can canonically be made into a bimodule when R is supercommutative by defining a left action by the rule:

R × M → M

$$(r, m) \mapsto (-1)^{\tilde{r}\tilde{m}}\, mr.$$

Using this rule for the left action we are making sure that our original right action and the new left action commute. We can also view the right and left actions by denoting that an element r acting on the right of an element m is given by the usual concatenation mr, and denoting the action on the left by r ∗ m, differentiating between right and left actions this way. We will in general treat our bimodules as right modules first, then induce a left action. We will continue to do this as it means that functions are naturally written on the left of an argument. Because they are bimodules, we have that

$$f(r * m) = (-1)^{\tilde{r}\tilde{m}} f(mr) = (-1)^{\tilde{m}\tilde{r}} f(m)r = r * f(m).$$

We can look at the parity reversion functor in the context of supercommutative rings. Now to reiterate, we are treating modules over a supercommutative ring first as right modules, then imposing a compatible left module structure on them. So for ΠM its right module structure is the same as M however its left module structure is different as the parity of an element has changed. The left module structure is:

R × ΠM → ΠM

$$(r, \Pi m) \mapsto (-1)^{\tilde{r}(\tilde{m}+1)} (\Pi m) r = (-1)^{\tilde{r}} \Pi(r * m).$$

We have the following for how the induced left action on the parity reversed module works with morphisms.

\begin{align*}
\Pi f(r * \Pi m) &= \Pi f\big((-1)^{\tilde{r}} \Pi(r * m)\big)\\
&= (-1)^{\tilde{r}} \Pi\big(f(r * m)\big)\\
&= (-1)^{\tilde{r}(\tilde{m}+1)} \Pi\big(f(m) r\big)\\
&= r * \Pi(f(m))\\
&= r * \Pi f(\Pi m)
\end{align*}

Remark A.1.17. One can repeat the above for MΠ, though as Π being applied on the right won’t be used we shall not give the detail.

Let M and N be supermodules over a supercommutative ring R. We can form the tensor product module M ⊗ N in a natural way and we have that:

$$r(m \otimes n) = (-1)^{\tilde{r}\tilde{m}}\, mr \otimes n = (-1)^{\tilde{r}\tilde{m}}\, m \otimes rn = (-1)^{\tilde{r}(\tilde{m}+\tilde{n})} (m \otimes n) r$$
for $r \in R$. We can repeat the procedure that we used for super vector spaces to produce the symmetric monoidal category of R-supermodules for a given supercommutative ring R. Given two maps f : M → S and g : N → T we can define the map f ⊗ g : M ⊗ N → S ⊗ T which acts on m ⊗ n by:

(f ⊗ g)(m ⊗ n) = f(m) ⊗ g(n).

So far we have been working with grading preserving maps between modules, as these are the natural maps between graded objects. We can however look at maps which reverse the grading, as noted earlier.

Definition A.1.18. A map f : M → N is called an odd morphism if it is an additive mapping between M and N but we have that $\widetilde{f(m)} = \tilde{m} + 1$, or equivalently that $f(M_i) \subseteq N_{i+1}$.

Looking in detail we have that for an odd morphism g : M → N we have that:

$$g(mr) = g(m)r$$
$$g(r * m) = (-1)^{\tilde{r}\tilde{m}} g(mr) = (-1)^{\tilde{r}\tilde{m}} g(m)r = (-1)^{\tilde{r}\tilde{m}+\tilde{r}(\tilde{m}+1)}\, r * g(m) = (-1)^{\tilde{r}}\, r * g(m).$$

Remark A.1.19. We denote these mappings as odd morphisms; however, strictly speaking, for the examples of super vector spaces, these morphisms are not in the category of super vector spaces. Nonetheless we shall call them morphisms, as there is a way to interpret them as being morphisms using the notion of the internal hom object.

We can assign to a morphism a parity of either 0 or 1 based on whether it is even or odd respectively. Treating both even and odd morphisms together, we have that $f(r * m) = (-1)^{\tilde{r}\tilde{f}}\, r * f(m)$.

We shall denote the set of odd morphisms between M and N by Hom1(M,N). The purpose of introducing odd morphisms is so that given M and N two supermodules we have an internal hom object between them which we will denote by hom(M,N). That is, given two supermodules we can introduce an object which is in our category, so is a supermodule, but behaves as the set of morphisms between two other objects in our category. More detail is given in [34]. hom(M,N) is naturally graded as we have that:

$$\hom(M,N) = \mathrm{Hom}_0(M,N) \oplus \mathrm{Hom}_1(M,N).$$

The internal hom acts as the set of all linear maps between M and N disregarding the grading. That this object acts as the set of morphisms between two supermodules means that we can tensor hom(M,N) and M together as hom(M,N) ⊗ M, and that there is an evaluation map from this pair to N: we can take the pair f ⊗ m, where f ∈ hom(M,N) and m ∈ M, and then apply an evaluation map hom(M,N) ⊗ M → N to arrive at f(m). Let f ∈ hom(M,S) and g ∈ hom(N,T) be two morphisms. Before, we considered just even maps, so we could form f ⊗ g. We need to see how this behaves now that we have odd morphisms. To be consistent we must have that

$$(f \otimes g)(u \otimes v) = (-1)^{\tilde{g}\tilde{u}} f(u) \otimes g(v).$$

This can be restated as there being a right linear morphism

µ : hom(M,S) ⊗ hom(N,T ) → hom(M ⊗ N,S ⊗ T ).

If M and N are free finitely generated supermodules then this map is an isomorphism, which is the case that we will be using; in general it is only an injection. This immediately gives us a useful corollary, namely that

$$\hom(N,S) \simeq S \otimes N^\vee \qquad (A.2)$$
if we set M = T = R, where $N^\vee$ is the dual module to N, i.e. the module hom(N,R), the internal hom from N into R. Before we move forward it is useful to consider the case of a right module M over a noncommutative ring R, where we put all discussion of $\mathbb{Z}_2$ grading and the like aside. We can naturally consider the space of right linear maps from M to R, $\mathrm{Hom}(M,R) = M^\vee$. This is naturally a left module with the left action given by:

$$R \times M^\vee \to M^\vee, \qquad (r, \varphi) \mapsto r\varphi, \qquad (r\varphi)(m) = r\,\varphi(m).$$

We can make ∨ into a contravariant functor from the category of right (left) R-modules to the category of left R-modules: given f : M → N we can define the map $f^\vee : N^\vee \to M^\vee$. This will be a map of left modules so is naturally written on the right of the argument. We then have that $f^\vee$ is defined, for $\varphi \in N^\vee$, by:

$$\big((\varphi)f^\vee\big)(m) = \varphi(f(m)).$$

Remark A.1.20. We are giving the above exposition on duality as the presence of many signs in supermathematics comes from writing maps, between dual spaces for instance, on the left rather than on the right. This is entirely legitimate when working over a supercommutative ring. In working with coordinates we will treat almost everything as a right module, so maps are written on the left. This introduces signs where one might not need them but is necessary when working with matrices in the super case, and ensures that we can freely switch between coordinates and the abstract viewpoint if needed.

A.2 Free Finitely Generated Modules

So far we have been working with general supermodules; now however we will restrict ourselves to free finitely generated supermodules. What this means is that, given a supermodule M over a supercommutative ring R, we have that

M ' Rp|q.

In detail this means that M is generated by p even elements $e_1, \dots, e_p$ and q odd elements $e_{p+1}, \dots, e_{p+q}$. Furthermore, with the understanding that Π acts on the left, we have that $M \simeq R^p \times (\Pi R)^q$.

Π acts on the left in particular so that the right action is unmodified. Given the category of free finitely generated supermodules and a specific supermodule M we want to look at a dual object to M, M ∨, and as mentioned above this will be hom(M,R), the internal hom from M to R, or all maps from M to R. However we want to look at duality from a more general perspective through the following definition from [33].

Definition A.2.1. Given a strict monoidal category C with product ⊗ and a unit object R then we say that C is a monoidal category with left duality if for every object M there exists M ∨, called the left dual, and morphisms

$$\eta_M : R \to M \otimes M^\vee \qquad \text{and} \qquad \epsilon_M : M^\vee \otimes M \to R \qquad (A.3)$$
called the coevaluation and evaluation maps respectively, which make the usual triangle diagrams commute; equivalently, the composites
$$M \simeq R \otimes M \xrightarrow{\ \eta_M \otimes \mathrm{Id}_M\ } M \otimes M^\vee \otimes M \xrightarrow{\ \mathrm{Id}_M \otimes \epsilon_M\ } M \otimes R \simeq M$$
and
$$M^\vee \simeq M^\vee \otimes R \xrightarrow{\ \mathrm{Id}_{M^\vee} \otimes \eta_M\ } M^\vee \otimes M \otimes M^\vee \xrightarrow{\ \epsilon_M \otimes \mathrm{Id}_{M^\vee}\ } R \otimes M^\vee \simeq M^\vee$$
are the identity maps (the unit isomorphisms here are the maps l and r from above). This abstracts what we want from a duality and gives us that (left) duality is a contravariant functor from our category of supermodules to itself, as every module in our category has compatible left and right R actions. As this is a functor, for every f we should be able to define $f^\vee$, and we can, as we have seen already. However we can go further and consider $f \in \hom(M,N)$, the internal hom. Given f we can define $f^\vee : N^\vee \to M^\vee$ by the formula, where we omit the r and l unit morphisms needed,

$$f^\vee = (\epsilon_N \otimes \mathrm{Id}_{M^\vee})(\mathrm{Id}_{N^\vee} \otimes f \otimes \mathrm{Id}_{M^\vee})(\mathrm{Id}_{N^\vee} \otimes \eta_M).$$

This map gives us that $f^\vee$, applied on the right, satisfies $((\varphi)f^\vee)(m) = \varphi(f(m))$ for $\varphi \in N^\vee$. Applied on the left we have that $f^\vee(\varphi)(m) = (-1)^{\tilde{f}\tilde{\varphi}} \varphi(f(m))$. What we have in either case is that the dual map $f^\vee$ is the pullback by f in the sense of precomposition, with a sign depending on whether it is applied on the left or the right. The case where it is applied on the left makes it equivalent to the map which makes the following square commute.

$$\begin{array}{ccc}
N^\vee \otimes M & \xrightarrow{\ f^\vee \otimes \mathrm{Id}_M\ } & M^\vee \otimes M\\
\downarrow{\scriptstyle \mathrm{Id}_{N^\vee} \otimes f} & & \downarrow{\scriptstyle \epsilon_M}\\
N^\vee \otimes N & \xrightarrow{\ \epsilon_N\ } & R
\end{array} \qquad (A.4)$$
and it is in this sense that it shall be used mostly, especially when dealing with coordinates and the like. The notion of dual pairings given by a bracket $\langle\, ,\, \rangle$ will later be useful, so we can also rephrase the dual map $f^\vee$ as the unique map such that

$$(-1)^{\tilde{f}\tilde{\omega}} \langle f^\vee(\omega), m \rangle = \langle \omega, f(m) \rangle \qquad (A.5)$$
where $\omega \in N^\vee$, which is equivalent to the commuting square A.4. We can note here that, as our category is symmetric, a left dual is automatically a right dual, in that we can pair M with $M^\vee$ in the reverse order as $M \otimes M^\vee$. The new evaluation $\epsilon'_M : M \otimes M^\vee \to R$, for instance, is defined so that
$$\epsilon'_M = \epsilon_M \circ c_{M, M^\vee}.$$

Remark A.2.2. Similar to remarks above, the purpose of giving an exposition on the left dual and right dual is to show that we need to take care with whether our duals are on the left or the right, to account for any sign changes.

A.2.1 Duality and the Tensor Product

We need to see how duality interacts with the tensor product. We have from A.2 that the map µ is an isomorphism, and we can then see that if we set S = T = R then we get that $M^\vee \otimes N^\vee \simeq (M \otimes N)^\vee$. This isn't the isomorphism that will be most useful, however, as there is a sign involved. In fact, the more natural isomorphism is

N ∨ ⊗ M ∨ ' (M ⊗ N)∨

This isomorphism is constructed by a map we will denote by ρM,N

$$\rho_{M,N} = (\epsilon_N \otimes \mathrm{Id}_{(M \otimes N)^\vee})(\mathrm{Id}_{N^\vee} \otimes \epsilon_M \otimes \mathrm{Id}_{N \otimes (M \otimes N)^\vee})(\mathrm{Id}_{N^\vee \otimes M^\vee} \otimes \eta_{M \otimes N}).$$

This is an isomorphism as we can construct the morphism

$$\rho_{M,N}^{-1} = (\epsilon_{M \otimes N} \otimes \mathrm{Id}_{N^\vee \otimes M^\vee})(\mathrm{Id}_{(M \otimes N)^\vee \otimes M} \otimes \eta_N \otimes \mathrm{Id}_{M^\vee})(\mathrm{Id}_{(M \otimes N)^\vee} \otimes \eta_M)$$
and one can check that, as the notation suggests, these maps are inverse to each other. We note that the braiding doesn't appear in the construction of these morphisms, so there are no signs involved.

A.2.2 The Double Dual

Given a dual module M ∨ we can apply the duality functor again to obtain the double dual of a module M ∨∨. We have as in the usual case that there is an injection

$$I_M : M \hookrightarrow M^{\vee\vee}$$
with

m 7→ ιm.

Given a right module M then $M^\vee$ is naturally a left module. We then have that $M^{\vee\vee}$ is a right module again. Since $I_M$ is a morphism of right modules it is written on the left of the argument. However we need to take care with how it acts on elements of $M^\vee$. Any element $B \in M^{\vee\vee}$ acts on the right of elements of $M^\vee$, as it is naturally a

left linear map from $M^\vee$ to R. So, the most natural way to write the natural map $I_M$ is that, for elements $m \in M$ and $\varphi \in M^\vee$, the following holds:

(φ)IM (m) = (φ)ιm = φ(m).

Written like this, with $I_M$ on the right of $\varphi$, we have that $I_M$ is a right linear map, and the assignment $M \mapsto M^{\vee\vee}$ defines a functor from the category of supermodules to itself. In fact we have that I is a natural transformation between the identity functor and the double dual functor; in other words, for A : M → N the following square commutes:
$$\begin{array}{ccc}
M & \xrightarrow{\ A\ } & N\\
\downarrow{\scriptstyle I_M} & & \downarrow{\scriptstyle I_N}\\
M^{\vee\vee} & \xrightarrow{\ A^{\vee\vee}\ } & N^{\vee\vee}
\end{array} \qquad (A.6)$$
In terms of working with coordinates it can be useful to write the double dual map $I_M : M \to M^{\vee\vee}$ on the left. This is only possible when working over a supercommutative ring. This results in the map $I_M$ acting as follows:
$$I_M(m)(\varphi) = \iota_m(\varphi) = (-1)^{\tilde{m}\tilde{\varphi}} \varphi(m). \qquad (A.7)$$

Remark A.2.3. In [20] and [17] it is in the latter manner that $I_M$ is given. This is also natural in the sense that $M^{\vee\vee}$ is a right dual to $M^\vee$ written this way.

A.2.3 Trace

We will be using the trace in the main text, so we will now define it without coordinates for the case of finitely generated free supermodules. The definition to be given relies on the fact that we are in a strict monoidal category with duality. Let M be a supermodule over a ring R and let f ∈ hom(M,M) = end(M). The trace is an R-module morphism

$$\mathrm{Tr}_s : \mathrm{end}(M) \to R$$
defined by

$$\mathrm{Tr}_s(f) = \epsilon_M \circ c_{M,M^\vee} \circ (f \otimes \mathrm{Id}_{M^\vee}) \circ \eta_M.$$

Writing that another way, Trs(f) is the following composition evaluated at the identity of R.

$$R \xrightarrow{\ \eta_M\ } M \otimes M^\vee \xrightarrow{\ f \otimes \mathrm{Id}_{M^\vee}\ } M \otimes M^\vee \xrightarrow{\ c_{M,M^\vee}\ } M^\vee \otimes M \xrightarrow{\ \epsilon_M\ } R.$$

The trace as defined here satisfies the usual properties that

$$\mathrm{Tr}_s(fg) = \mathrm{Tr}_s(gf), \qquad \mathrm{Tr}_s(f^\vee) = \mathrm{Tr}_s(f),$$
for f, g such that the compositions $f \circ g$ and $g \circ f$ are endomorphisms of some supermodule.

A.2.4 The Berezinian

(or Superdeterminant)

Now suppose we look at GLp|q(M), the automorphisms of the module M for some module M of dimension p|q over R. We have that once a basis is chosen then

$$GL_{p|q}(M) = GL(R^{p|q}).$$

The Berezinian (or superdeterminant) is defined as a group homomorphism

$$\mathrm{Ber} : GL_{p|q}(M) \to GL_{1|0}(R)$$
which is a generalisation of the determinant. A full exposition is given in [22] and [20]. It is uniquely defined by the conditions that

$$\mathrm{Ber}(XY) = \mathrm{Ber}(X)\,\mathrm{Ber}(Y) \qquad \text{and} \qquad \mathrm{Ber}(e^X) = e^{\mathrm{Tr}_s(X)}$$
in the case where the exponential is well defined. If not, then it satisfies

$$\mathrm{Ber}(I + \epsilon X) = 1 + \epsilon\, \mathrm{Tr}_s(X)$$
when $\epsilon$ satisfies $\epsilon^2 = 0$. How to calculate it will be given once we move to looking at coordinates. The Berezinian first appeared in the works of Felix Berezin, who was a pioneer in supermathematics, hence it being named after him. It was first written about in relation to integration on supermanifolds and the behaviour of an integral upon a change of coordinates, where it plays the role of the determinant of the Jacobian matrix. The Berezinian from this perspective will be expanded upon in the main text.
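The defining relation $\mathrm{Ber}(e^X) = e^{\mathrm{Tr}_s(X)}$ can be checked numerically in the simplest situation, where the odd blocks of X vanish and all entries are ordinary numbers (genuinely odd entries are not modelled); in that case Ber reduces, via the coordinate formula of A.3.5 below, to a ratio of ordinary determinants. The following is a minimal Python sketch, with names of our own choosing, and not part of the thesis's calculations.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
p, q = 3, 2
X00, X11 = rng.standard_normal((p, p)), rng.standard_normal((q, q))

# Block-diagonal X (odd blocks zero): Ber(X) = det(X00)/det(X11) and
# Tr_s(X) = tr(X00) - tr(X11).  The exponential of a block-diagonal matrix is
# block-diagonal, so both sides can be computed with ordinary linear algebra.
ber_exp_X = np.linalg.det(expm(X00)) / np.linalg.det(expm(X11))
exp_str_X = np.exp(np.trace(X00) - np.trace(X11))

print(np.isclose(ber_exp_X, exp_str_X))   # True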

A.2.5 Canonical Ideal of a Superring

Let R be a supercommutative ring. We have that R = R0 ⊕ R1. There is a canonical ideal JR generated by the elements in R1. We can form the ring Rred := R/JR and we call this the reduced ring. There is a canonical map

R → Rred.

In calculations we will work in the case where $R_0$ contains no nilpotent elements, so passing to the reduced ring amounts to setting all the nilpotent elements to 0. We denote the image of an element r of R under the canonical map by $r_{\mathrm{red}}$. More on this and an exposition of superrings can be found in [41].

A.2.6 The Berezinian Module

In the usual case of a commutative ring one can look at a linear map A ∈ End(M), where M is an n dimensional R-module, and look at the induced action

$$\Lambda^n A : \Lambda^n M \to \Lambda^n M.$$
Then we have, for an element $x \in \Lambda^n M$, that
$$(\Lambda^n A)(x) = \det(A)\, x.$$
So the determinant of A comes about naturally as the expression of an induced map on a module related to the module M. We can also look at the determinant in a different manner. To a module M we associate the vector space det(M) such that if $A \in \mathrm{End}(M)$ then the induced action on det(M) is given by the determinant of A. So far this would appear to be semantics, as these two are the same thing. However in the super case there is no top power in an exterior algebra. We can, however, for a supermodule M define the Berezinian module Ber(M), in that given $A \in GL_{p|q}(M)$ the induced action on the module Ber(M) is given by Ber(A).

Remark A.2.4. There is an induced action of A on a module related to M such that

the action is given by the Berezinian. It is the induced action on $\mathrm{Ext}^{p}_{\mathrm{Sym}^{*}(M^\vee)}\!\big(R, \mathrm{Sym}^{*}(M^\vee)\big)$. This module is isomorphic to $\Lambda^{n} M$, the top exterior power, in the usual case. More detail on this can be found in [20]. There is also a "homological interpretation of the Berezinian" as detailed in [19].

A.3 Coordinates

Let us now put most of the above in terms of coordinates. Let M be a finitely generated free module over a supercommutative ring R, so $M = R^{p|q}$. We can also say that M is a finite dimensional supermodule of dimension p|q. Since M is free and finitely generated, this means we have a basis, which we shall label as $\{e_i\}$. We choose to arrange the basis elements so that each $e_i$ is of parity 0 if $1 \le i \le p$ and of parity 1 if $p < i \le p + q$. We can then speak of a parity of position, in that the parity of a basis vector depends on its position. One can write vectors in terms of left or right coordinates. We will favour right coordinates. Suppose we have a (homogeneous)

element $x \in M$. We say that x is given in right coordinates if $x = e_i x^i$, and we can represent x by the column vector
$$\begin{pmatrix} x^1 \\ \vdots \\ x^{p+q} \end{pmatrix}. \qquad (A.8)$$
This is implicitly treating M as a right module first. The decomposition of $R^{p|q}$ into even and odd parts is the following:

$$R^{p|q} = R_0^p \oplus (\Pi R_1)^q \oplus R_1^p \oplus (\Pi R_0)^q.$$

We say that x is an even vector if it is in the even component of $R^{p|q}$; it is odd if it is in the odd component of $R^{p|q}$. In detail this means that $\widetilde{x^i} = \tilde{\imath} + \tilde{x}$, depending on whether x is even or odd. To illustrate this, suppose x is a homogeneous vector; then

$$x = e_i x^i = (-1)^{\tilde{\imath}(\tilde{x}+\tilde{\imath})} x^i e_i,$$

so that x in left coordinates, in terms of the basis $\{e_i\}$, is given by $(-1)^{\tilde{\imath}(\tilde{x}+\tilde{\imath})} x^i e_i$. Representing a generic even vector x by the column
$$x = \begin{pmatrix} v \\ \xi \end{pmatrix}$$
where we use Latin letters for even elements and Greek letters for odd elements, the transition from right to left coordinates is a version of transposition, which we shall denote as ${}^t x$, and we have that
$${}^t x = \begin{pmatrix} v^T & -\xi^T \end{pmatrix}$$
where T denotes the usual transpose. If x were odd, so that
$$x = \begin{pmatrix} \xi \\ v \end{pmatrix}$$
then we would have that
$${}^t x = \begin{pmatrix} \xi^T & v^T \end{pmatrix}.$$

We shall use t for the transition from left coordinates to right coordinates or vice versa, so we have that ${}^t{}^t x = x$. We can combine both transpositions for even and odd vectors by the rule that if
$$x = \begin{pmatrix} x^1 \\ x^2 \end{pmatrix}$$
then we have that
$${}^t x = \begin{pmatrix} (x^1)^T & (-1)^{\tilde{x}+1} (x^2)^T \end{pmatrix}.$$
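The rule above only attaches a sign to the block of coefficients sitting in front of the odd basis vectors, so it can be illustrated with ordinary numeric entries; the following is a minimal Python sketch (our own names, genuinely odd coefficients not modelled), showing that applying t twice returns the original vector.

import numpy as np

def t(block_even, block_odd, parity):
    """Transition between right and left coordinates of a homogeneous vector.

    `parity` is 0 for an even vector, 1 for an odd one; the sign (-1)^(parity+1)
    multiplies the block of coefficients in front of the odd basis vectors.
    """
    return block_even, ((-1) ** (parity + 1)) * block_odd

v, xi = np.array([1.0, 2.0]), np.array([3.0, 4.0])   # an even vector in R^{2|2}
lv, lxi = t(v, xi, parity=0)                         # left coordinates: (v, -xi)
rv, rxi = t(lv, lxi, parity=0)                       # back to right coordinates
print(np.allclose(rv, v) and np.allclose(rxi, xi))   # True: t applied twice is the identity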

Now since we are representing elements of M using right coordinates, the next thing to consider is what form a map f : M → N, either even or odd, takes in terms of coordinates. The map f is represented by a $2 \times 2$ block matrix B, which is in the form
$$B = \begin{pmatrix} B_{00} & B_{01} \\ B_{10} & B_{11} \end{pmatrix}$$
with $B_{00}$ and $B_{11}$ even (odd) and $B_{01}$ and $B_{10}$ odd (even) if f is even or odd respectively. In terms of the basis coordinates, with $B = (b^i_j)$, we have that, if $\{e'_i\}$ is a basis of N,
$$f(e_i) = e'_j b^j_i.$$
This gives a consistent matrix multiplication, in that if $x = e_i x^i$ then, given f : M → N represented by B and g : N → P represented by C, where the basis of P is $\{e''_i\}$, the composition $(g \circ f)(x)$ is given by
$$g(f(x)) = g(f(e_i x^i)) = g(f(e_i) x^i) = g(f(e_i)) x^i = g(e'_j b^j_i) x^i = e''_k c^k_j b^j_i x^i,$$
or, if we have x represented by a column vector $\mathbf{x}$,
$$g(f(x)) = CB\mathbf{x}.$$

A.3.1 The Dual Space

We can define a dual basis {ej} by the rule that

$$e^j(e_i) = \delta^j_i;$$
we can also denote this using a pairing
$$\langle e^j, e_i \rangle = \delta^j_i$$
in analogy to a scalar product. We can also view this as a map $M^\vee \otimes M \to R$, and what we are doing is expressing the map $\epsilon_M$ (from A.3) in terms of coordinates. Hence given $\omega = \omega_i e^i$ and $v = e_i v^i$ then $\langle \omega, v \rangle = \omega_i v^i$, or $\omega(v) = \omega_i v^i$. We can also pair in the reverse order, and then we have that
$$\langle e_i, e^j \rangle = (-1)^{\tilde{\imath}\tilde{\jmath}}\, \delta^j_i.$$

If we decide to write ω in right coordinates then when evaluating a pairing of ω and v then in terms of coordinates we would have that

$$\langle \omega, v \rangle = (-1)^{\tilde{\imath}(\tilde{\imath}+\tilde{\omega})} \omega_i v^i = {}^t\omega\, v.$$

We have seen above that hom(M,N) ' N ⊗ M ∨. This fits in the framework as there being a map

$$\lambda_{M,N} : N \otimes M^\vee \to \hom(M,N),$$

λM,N (n ⊗ ω)(m) = nω(m).

In terms of coordinates we thus have that, given $n = f_i n^i$ (with $\{f_i\}$ being the basis) and $\omega = \omega_j e^j$ (with $\{e^j\}$ a left dual basis, so a basis for $M^\vee$), we have that
$$\lambda_{M,N}(n \otimes \omega) = f_i n^i \omega_j e^j.$$

Remark A.3.1. We are using f here in a different sense than from the previous page just so we aren’t using e for the basis of every space in the following.

With all of these conventions we thus have that given g ∈ hom(M,N) then it can be presented by an element

$$f_i a^i_j e^j,$$
or in other terms by the matrix $A = (a^i_j)$, and given x,
$$A(x) = (f_i a^i_j e^j)(e_k x^k) = f_i a^i_j \langle e^j, e_k \rangle x^k = f_i a^i_k x^k.$$

We then have, for elements $g \in \hom(M,N)$ and $h \in \hom(N,P)$, that if they are represented by matrices $A = (a^i_j)$ and $B = (b^k_l)$ respectively, then:
$$BA = (e''_k b^k_l f^l)(f_i a^i_j e^j) = e''_k b^k_l a^l_j e^j.$$

The space of p|q × r|s supermatrices over a supercommutative ring R is denoted by

$M^{p|q}_{r|s}(R)$. The effect of using supermatrices is that for two modules $M \simeq R^{r|s}$ and $N \simeq R^{p|q}$ we have that $\hom(M,N) \simeq M^{p|q}_{r|s}(R)$. Now given a map $g \in \hom(M,N)$ represented by a matrix $A = (a^i_j)$, we can look at the dual map $g^\vee$, which acts on the right, and we want to see what this is in terms of coordinates. Looking at the diagram A.4 we will construct the coordinate representation of the dual map. Let $\{e_i\}$ be a basis for M and $\{f_j\}$ a basis for N, with dual bases $\{e^i\}$ and $\{f^j\}$ respectively. Suppose that $g^\vee$ is given by the matrix $B = (b^i_j)$ and that we are treating $f^j$ as $f^j 1$, so in right coordinates. Then we have that

\begin{align*}
(g^\vee \otimes \mathrm{Id}_M)(f^i \otimes e_k) &= (\mathrm{Id}_{N^\vee} \otimes g)(f^i \otimes e_k)\\
g^\vee(f^i) \otimes e_k &= (-1)^{\tilde{g}\tilde{\imath}} f^i \otimes g(e_k)\\
e^j b^i_j \otimes e_k &= (-1)^{\tilde{g}\tilde{\imath}} f^i \otimes f_l a^l_k\\
(-1)^{\tilde{\jmath}(\tilde{\imath}+\tilde{\jmath}+\tilde{g})} b^i_j\, e^j \otimes e_k &= (-1)^{\tilde{g}\tilde{\imath}} f^i \otimes f_l a^l_k;
\end{align*}
applying the evaluation on both sides we come to
$$(-1)^{\tilde{k}(\tilde{\imath}+\tilde{k}+\tilde{g})} b^i_k = (-1)^{\tilde{g}\tilde{\imath}} a^i_k,$$
so we have that
$$b^i_k = (-1)^{(\tilde{\imath}+\tilde{k})(\tilde{k}+\tilde{g})} a^i_k.$$

Putting this into a matrix we have that
$$B = \begin{pmatrix} A_{00} & (-1)^{\tilde{g}+1} A_{01} \\ (-1)^{\tilde{g}} A_{10} & A_{11} \end{pmatrix}.$$

This isn't the full story, as given $f^i$ and $e_i$ respectively we have that
$$g(e_i) = f_j a^j_i \qquad \text{and} \qquad g^\vee(f^i) = (-1)^{(\tilde{\imath}+\tilde{k})(\tilde{k}+\tilde{g})} e^k a^i_k.$$
Due to having the index i in different positions, upper and lower respectively, this means that if
$$A = \begin{pmatrix} A_{00} & A_{01} \\ A_{10} & A_{11} \end{pmatrix}$$
then $g^\vee$ acts by the matrix B but transposed. We then have that the dual map is represented by the supertranspose $A^{ST}$, which is given by
$$A^{ST} = \begin{pmatrix} A_{00}^T & (-1)^{\tilde{g}} A_{10}^T \\ (-1)^{\tilde{g}+1} A_{01}^T & A_{11}^T \end{pmatrix}$$
where $A_{mn}^T$ means just taking the usual transpose of the block. To illustrate this, given a covector $\omega = f^i \omega_i$ in right coordinates, so represented by a column vector, we have that $g^\vee(\omega) = A^{ST} \omega$.

To illustrate consistency with the abstract picture laid out earlier, in A.5, we have that

$$g^\vee(f^i) = (-1)^{(\tilde{\imath}+\tilde{k})(\tilde{k}+\tilde{g})} e^k a^i_k = (-1)^{\tilde{g}\tilde{\imath}} a^i_k e^k,$$
so that we have that
$$g^\vee(f^i)(e_l) = (-1)^{\tilde{g}\tilde{\imath}} a^i_l,$$
which is the same as
$$(-1)^{\tilde{g}\tilde{\imath}} f^i(g(e_l)).$$
This matches what we expect from the dual map. Since the dual map is precomposition, given a covector ω written in left coordinates, using its row vector representation we have that
$$g^\vee(\omega) = (-1)^{\tilde{\omega}\tilde{g}} \omega A.$$

So, this demonstrates that we use the supertranspose when dealing with covectors in right coordinates, which we often do. We should also note how supertransposition relates to the switch from left to right coordinates, and some other properties related to it. So, having g from above and the matrix A related to it, we will denote by $A^{TS}$ the following matrix
$$A^{TS} = \begin{pmatrix} A_{00}^T & (-1)^{\tilde{A}+1} A_{10}^T \\ (-1)^{\tilde{A}} A_{01}^T & A_{11}^T \end{pmatrix};$$
this is just the supertranspose with the sign changes switched. This would be the matrix of the dual map in the case that the dual module is dual on the right of the module instead of dual on the left, as in our case. We then have the following relations for a row vector v and a column vector w:

$$(A^{ST})^{TS} = (A^{TS})^{ST} = A \qquad (A.9)$$
$${}^t(vA) = (-1)^{\tilde{v}\tilde{A}} A^{ST}\, {}^t v$$
$${}^t(Aw) = (-1)^{\tilde{w}\tilde{A}}\, {}^t w\, A^{TS}$$

we also have that

$$(ST)^4 = \mathrm{Id}, \qquad (ST)^3 = TS, \qquad (ST)^2 \neq \mathrm{Id},$$
$$(B + C)^{ST} = B^{ST} + C^{ST},$$
$$(BC)^{ST} = (-1)^{\tilde{B}\tilde{C}} C^{ST} B^{ST},$$

and likewise for TS.
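The purely structural identities here, such as $(A^{ST})^{TS} = A$ and $(ST)^4 = \mathrm{Id}$, hold blockwise whatever the entries are, so they can be checked with ordinary numeric blocks; the following is a minimal Python sketch (our own function names, treating only the even case $\tilde{g} = 0$, with genuinely odd entries not modelled). Identities whose proofs use anticommuting entries, such as $(BC)^{ST} = (-1)^{\tilde{B}\tilde{C}} C^{ST} B^{ST}$, are deliberately not tested this way.

import numpy as np

def st(A, p, q):
    """Supertranspose of an even (p|q)x(p|q) supermatrix with numeric blocks."""
    A00, A01 = A[:p, :p], A[:p, p:]
    A10, A11 = A[p:, :p], A[p:, p:]
    return np.block([[A00.T,  A10.T],
                     [-A01.T, A11.T]])

def ts(A, p, q):
    """The variant with the sign changes switched (dualising on the other side)."""
    A00, A01 = A[:p, :p], A[:p, p:]
    A10, A11 = A[p:, :p], A[p:, p:]
    return np.block([[A00.T, -A10.T],
                     [A01.T,  A11.T]])

rng = np.random.default_rng(2)
p, q = 2, 3
A = rng.standard_normal((p + q, p + q))

print(np.allclose(ts(st(A, p, q), p, q), A))   # (A^ST)^TS = A
B = A.copy()
for _ in range(4):
    B = st(B, p, q)
print(np.allclose(B, A))                       # (ST)^4 = Id, while (ST)^2 only negates the off-diagonal blocks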

Example A.3.2. Given all of these properties for supermatrices let us see how we can rephrase some of the above. If we are given ω ∈ M ∨ and v ∈ M in right coordinates then the pairing of them together hω, vi is tωv. Looking at

$$(-1)^{\tilde{g}\tilde{\omega}} \langle g^\vee(\omega), v \rangle = \langle \omega, g(v) \rangle,$$
then if we denote the matrix that represents $g^\vee$ by B we have, looking at the right and left hand sides separately, that
$$(-1)^{\tilde{g}\tilde{\omega}} \langle g^\vee(\omega), v \rangle = (-1)^{\tilde{g}\tilde{\omega}} \langle B\omega, v \rangle = (-1)^{\tilde{g}\tilde{\omega}} \big({}^t(B\omega)\big) v = {}^t\omega\, B^{TS} v$$
and
$$\langle \omega, g(v) \rangle = {}^t\omega\, A v.$$
Ergo, we then find that $B^{TS} = A$, so that $B = A^{ST}$, exactly as we require.

A.3.2 Scalar Multiplication and Matrices

If M and N are modules over a supercommutative ring then hom(M,N) is a module. We have that given A : M → N then there is a left action such that

$$(r, A) \mapsto rA, \qquad (rA)(x) = r\,A(x).$$

This implies that r defines an element in end(N) and so should be represented by a

matrix. Suppose $A = f_i a^i_j e^j$ and $x = e_k x^k$; then we have that $A(x) = f_i a^i_j x^j$. We then have that $r(A(x)) = r f_i a^i_j x^j = (-1)^{\tilde{r}\tilde{\imath}} f_i\, r a^i_j x^j$. Hence, in terms of matrices, left multiplication of A by r corresponds to multiplying on the left by the following matrix
$$\begin{pmatrix} r I_{00} & 0 \\ 0 & (-1)^{\tilde{r}} r I_{11} \end{pmatrix}$$
where $I_{pp}$ is an identity matrix of the right size. Looking at Ar we have the same result, in that multiplication on the right in terms of matrices is given by the same matrix as above.

A.3.3 The Double Dual

The map IM from A.7 in coordinates is given by

$$I_M(e_i) = (-1)^{\tilde{\imath}} e'_i,$$
where $e'_i$ is the corresponding basis element in $M^{\vee\vee}$ and obeys $e'_i(e^j) = \delta^j_i$. We then get that
$$I_M(e_i)(e^j) = (-1)^{\tilde{\imath}} \delta^j_i = (-1)^{\tilde{\imath}\tilde{\jmath}} \delta^j_i.$$
In terms of a matrix we can label this by a matrix K, which is the matrix
$$K = \begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix}.$$

We can then represent commuting diagram A.6 by

$$A = K A^{ST^2} K,$$
as K is its own inverse and $A^{TS^2} = A^{ST^2}$.

A.3.4 The Trace

We will now look at the trace in terms of coordinates. Let A ∈ end(M) and let

it be represented by a matrix $(a^i_j)$. The trace of A, which we will now denote as $\mathrm{Tr}_s$ to differentiate it from the regular trace of a matrix, is the result of the composition given in A.2.3. Now we have that $\eta_M$, in coordinates, is given by $1 \mapsto e_i \otimes e^i$. We have already seen $\epsilon_M$ in terms of coordinates, so, following the steps of the composition that defines the trace as applied to 1, we have that:

\begin{align}
\eta_M(1) &= e_j \otimes e^j & (A.10)\\
(A \otimes \mathrm{Id}_{M^\vee})(e_j \otimes e^j) &= e_i a^i_j \otimes e^j & (A.11)\\
c_{M,M^\vee}(e_i a^i_j \otimes e^j) &= (-1)^{\tilde{\jmath}(\tilde{\jmath}+\tilde{A})} e^j \otimes e_i a^i_j & (A.12)\\
\epsilon_M\big((-1)^{\tilde{\jmath}(\tilde{\jmath}+\tilde{A})} e^j \otimes e_i a^i_j\big) &= (-1)^{\tilde{\jmath}(\tilde{\jmath}+\tilde{A})} a^j_j. & (A.13)
\end{align}

So putting the matrix A in block form
$$A = \begin{pmatrix} A_{00} & A_{01} \\ A_{10} & A_{11} \end{pmatrix}$$
we then have that:
$$\mathrm{Tr}_s(A) = \mathrm{Tr}(A_{00}) + (-1)^{1+\tilde{A}} \mathrm{Tr}(A_{11}) = \mathrm{Tr}(A_{00}) - (-1)^{\tilde{A}} \mathrm{Tr}(A_{11}).$$

This works out, for a matrix $A = (a^i_j)$, so that the trace is given in terms of a summation over an index as:
$$\mathrm{Tr}_s(A) = (-1)^{\tilde{\jmath}(1+\tilde{A})} a^j_j.$$
Given the scalar multiplication above we also have that
\begin{align*}
\mathrm{Tr}_s(rA) &= \mathrm{Tr}_s\!\left( \begin{pmatrix} r I_{00} & 0 \\ 0 & (-1)^{\tilde{r}} r I_{11} \end{pmatrix} \begin{pmatrix} A_{00} & A_{01} \\ A_{10} & A_{11} \end{pmatrix} \right)\\
&= \mathrm{Tr}_s \begin{pmatrix} r A_{00} & r A_{01} \\ (-1)^{\tilde{r}} r A_{10} & (-1)^{\tilde{r}} r A_{11} \end{pmatrix}\\
&= \mathrm{Tr}(r A_{00}) + (-1)^{1+\tilde{A}+\tilde{r}} (-1)^{\tilde{r}} \mathrm{Tr}(r A_{11})\\
&= r\big(\mathrm{Tr}(A_{00}) + (-1)^{1+\tilde{A}} \mathrm{Tr}(A_{11})\big)\\
&= r\, \mathrm{Tr}_s(A).
\end{align*}

We also have similarly that

Trs(Ar) = Trs(A)r.

The trace has the following properties as expressed in coordinates rather than in abstract language.

$$\mathrm{Tr}_s(A) = \mathrm{Tr}_s(A^{ST}) = \mathrm{Tr}_s(A^{TS}),$$
$$\mathrm{Tr}_s([A, B]) = 0.$$
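In coordinates the supertrace is immediate to compute for numeric blocks. The following is a minimal Python sketch (our own names; since genuinely odd entries are not modelled, identities that rely on anticommuting entries are only checked for block-diagonal matrices, where the supercommutator is the ordinary commutator).

import numpy as np

def supertrace(A, p, q):
    """Tr_s(A) = Tr(A00) - Tr(A11) for an even supermatrix with numeric blocks."""
    return np.trace(A[:p, :p]) - np.trace(A[p:, p:])

rng = np.random.default_rng(3)
p, q = 3, 2

def block_diag(M00, M11):
    """Embed two square blocks as a block-diagonal (purely even) supermatrix."""
    return np.block([[M00, np.zeros((p, q))],
                     [np.zeros((q, p)), M11]])

A = block_diag(rng.standard_normal((p, p)), rng.standard_normal((q, q)))
B = block_diag(rng.standard_normal((p, p)), rng.standard_normal((q, q)))

# For block-diagonal matrices the supertrace of a commutator vanishes exactly:
print(np.isclose(supertrace(A @ B - B @ A, p, q), 0.0))   # True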

A.3.5 The Berezinian

Suppose X is an element of $GL(R^{p|q})$. If $p \neq q$ then this means that X is an even supermatrix,
$$X = \begin{pmatrix} X_{00} & X_{01} \\ X_{10} & X_{11} \end{pmatrix},$$
and that both $X_{00}$ and $X_{11}$ are invertible. We have that the Berezinian is given by the following expression:
$$\mathrm{Ber}(X) = \det(X_{00})\, \det(X_{11} - X_{10} X_{00}^{-1} X_{01})^{-1}.$$

Alternatively we have that

$$\mathrm{Ber}(X) = \det(X_{00} - X_{01} X_{11}^{-1} X_{10})\, \det(X_{11})^{-1}.$$

Both of these give the same answer. It should be noted that the Berezinian is a rational function of the matrix coefficients rather than a polynomial, which the usual determinant is.
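Both expressions are straightforward to code when the entries are ordinary numbers; genuinely odd off-diagonal entries would require Grassmann-valued arithmetic, which the following minimal Python sketch (our own function names) does not model. It therefore only checks the two formulae, and the multiplicativity of the Berezinian, in the purely even case where the odd blocks vanish and Ber reduces to $\det(X_{00})/\det(X_{11})$.

import numpy as np

def ber1(X, p, q):
    """Ber(X) = det(X00) det(X11 - X10 X00^{-1} X01)^{-1}, numeric blocks only."""
    X00, X01, X10, X11 = X[:p, :p], X[:p, p:], X[p:, :p], X[p:, p:]
    return np.linalg.det(X00) / np.linalg.det(X11 - X10 @ np.linalg.inv(X00) @ X01)

def ber2(X, p, q):
    """The alternative expression det(X00 - X01 X11^{-1} X10) det(X11)^{-1}."""
    X00, X01, X10, X11 = X[:p, :p], X[:p, p:], X[p:, :p], X[p:, p:]
    return np.linalg.det(X00 - X01 @ np.linalg.inv(X11) @ X10) / np.linalg.det(X11)

rng = np.random.default_rng(4)
p, q = 3, 2

def block_diag(M00, M11):
    return np.block([[M00, np.zeros((p, q))],
                     [np.zeros((q, p)), M11]])

X = block_diag(rng.standard_normal((p, p)), rng.standard_normal((q, q)))
Y = block_diag(rng.standard_normal((p, p)), rng.standard_normal((q, q)))

print(np.isclose(ber1(X, p, q), ber2(X, p, q)))                       # the two expressions agree
print(np.isclose(ber1(X @ Y, p, q), ber1(X, p, q) * ber1(Y, p, q)))   # Ber(XY) = Ber(X)Ber(Y)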

Remark A.3.3. We can actually apply the Berezinian to any element of $\mathrm{end}(R^{p|q})$ for which one of the $X_{ii}$ is invertible, using these formulae, but this won't be used.

For the case p = q, so supermatrices belonging to $GL(R^{p|p})$, the Berezinian of even elements is given by the same formula. However in this case there are now odd supermatrices which are invertible. With X as before, but now odd, we have that $X_{01}$ and $X_{10}$ are square matrices of even elements. If they are invertible then the whole supermatrix is invertible. The formula above doesn't work for these odd supermatrices; however, if we define $\mathrm{Ber}(X) := \mathrm{Ber}(JX)$ where
$$J = \begin{pmatrix} 0 & I_p \\ -I_p & 0 \end{pmatrix}$$
then we have that

\begin{align*}
\mathrm{Ber}(X) &= \det(-X_{10})\, \det(X_{01} - X_{00} X_{10}^{-1} X_{11})^{-1}\\
&= \det(X_{10} - X_{11} X_{01}^{-1} X_{00})\, \det(-X_{01})^{-1}.
\end{align*}

Concentrating on the Berezinian of an even morphism we have that it satisfies the following

$$\mathrm{Ber}(X) = \mathrm{Ber}(X^{ST}),$$
$$\mathrm{Ber}(X^{-1}) = \mathrm{Ber}(X)^{-1}.$$

Furthermore we will need a result implied by the first proposition of [38]. This is that $\mathrm{Ber}(I + AB) = \mathrm{Ber}(I + BA)$, where A and B are supermatrices of the correct sizes so that the expression makes sense. We will prove a further property of the Berezinian in the main text.
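In the purely even case, where A and B have vanishing odd blocks and numeric entries, this identity reduces blockwise to Sylvester's determinant identity $\det(I + MN) = \det(I + NM)$, which is easy to check numerically; a small Python sketch (our own notation, rectangular even blocks only, genuinely odd entries not modelled):

import numpy as np

rng = np.random.default_rng(5)
p, q, r, s = 3, 2, 4, 2   # A : R^{r|s} -> R^{p|q},  B : R^{p|q} -> R^{r|s}
A00, A11 = rng.standard_normal((p, r)), rng.standard_normal((q, s))
B00, B11 = rng.standard_normal((r, p)), rng.standard_normal((s, q))

def ber_block_diag(M00, M11):
    """Berezinian of a block-diagonal even supermatrix: det(M00)/det(M11)."""
    return np.linalg.det(M00) / np.linalg.det(M11)

lhs = ber_block_diag(np.eye(p) + A00 @ B00, np.eye(q) + A11 @ B11)
rhs = ber_block_diag(np.eye(r) + B00 @ A00, np.eye(s) + B11 @ A11)
print(np.isclose(lhs, rhs))   # True: Ber(I + AB) = Ber(I + BA) in the purely even case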