<<

POLAR AND GIVENS DECOMPOSITION AND INVERSION OF THE INDEFINITE

DOUBLE COVERING MAP

by

Samreen Sher Khan

APPROVED BY SUPERVISORY COMMITTEE:

Viswanath Ramakrishna, Chair

M. Ali Hooshyar

Mieczyslaw K. Dabkowski

Vladimir Dragovic Copyright c 2020

Samreen Sher Khan

All rights reserved This thesis is dedicated to My Parents Their strong and gentle souls taught me to trust in God Almighty, believing in myself and hard work My Supervisor who relentlessly encouraged me to strive for the excellence My Husband Who had given me dreams to look forward to and instilled in me the virtues of perseverance and commitment. My children For supporting and believing in me. POLAR AND GIVENS DECOMPOSITION AND INVERSION OF THE INDEFINITE

DOUBLE COVERING MAP

by

SAMREEN SHER KHAN, BS, MS

DISSERTATION

Presented to the Faculty of

The University of Texas at Dallas

in Partial Fulfillment

of the Requirements

for the Degree of

DOCTOR OF PHILOSOPHY IN

MATHEMATICS

THE UNIVERSITY OF TEXAS AT DALLAS

August 2020 ACKNOWLEDGMENTS

Foremost, I would like to express my sincere gratitude to my advisor Dr. Viswanath Ramakrishna, for the continuous support in course work and research, for his immense knowledge, enthusiasm, patience, and motivation. His guidance and support helped me throughout the research time and even while writing this dissertation. I could not have imagined having a better mentor for my PhD study.

Besides my advisor, I would also like to thank the rest of my committee members: Dr. Ali Hooshyar, Dr. Mieczyslaw Dabkowski,and Dr. Vladimir Dragovic, for their encouragement, challenges and insightful comments.

I would also like to thank my fellow graduate students who have helped me through difficult times with study sessions.

Finally, I would like to thank my family: my parents (may God grant them peace in heaven) for supporting me spiritually throughout my life. Special thanks go to my husband and family.

June 2020

v POLAR AND GIVENS DECOMPOSITION AND INVERSION OF THE INDEFINITE

DOUBLE COVERING MAP

Samreen Sher Khan, PhD The University of Texas at Dallas, 2020

Supervising Professor: Viswanath Ramakrishna, Chair

Algorithmic methods for the explicit inversion of the indefinite double covering maps are proposed. These based on either the Givens decomposition or the of the given in the proper, indefinite orthogonal SO+(p, q). As a by product we establish that the pre-image in , of a positive matrix in SO+(p, q), can always be chosen to be itself positive definite. These methods solve the given system by either inspection or by inverting the associated isomorphism and computing certain exponential explicitly. The techniques are illustrated for (p, q) ∈ {(2, 1), (4, 1), (5, 1)}. We also develop explicit matrix form for the covering map for the cases(p,q) in (2,3) and (2,4) . In this work we provide explicit algorithms to invert these double cover maps for specific values of (p, q) (though the general techniques extend to all p, q). More precisely, given the

+ + + double covering map, Φp,q : Spin (p, q) → SO (p, q) and an X ∈ SO (p, q) (henceforth called the target), we provide algorithms to compute the matrices ±Y , in the matrix algebra

that the even sub algebra of Cl(p, q) is isomorphic to, satisfying Φp,q(±Y ) = X.

vi TABLE OF CONTENTS

ACKNOWLEDGMENTS ...... v ABSTRACT ...... vi CHAPTER 1 INTRODUCTION ...... 1 1.1 Relation to Work in the Literature ...... 5 CHAPTER 2 PRELIMINARIES ...... 10 2.1 Notation ...... 10 2.2 Preliminaries from Clifford Algebra ...... 11 2.2.1 Preliminary Observations ...... 11 2.3 Clifford Conjugation, Reversion and Grade Automorphisms ...... 12 2.4 Some Basic Matrix Groups and corresponding Lie Algebras ...... 12

2.5 Quaternionic and θH Matrices ...... 14 2.6 Givens-like Actions ...... 16 2.7 Euler-Rodrigues Formula ...... 16 2.8 Givens decomposition ...... 17 2.9 Polar Decomposition ...... 20 2.10 Logarithms of Special Orthogonal Matrices of Size 4: ...... 20 2.11 Special Bases for Clifford Algebras ...... 21 CHAPTER 3 POSITIVE DEFINITE ELEMENTS IN THE SPIN GROUPS . . . . 24 3.1 Bases for Clifford Algebras ...... 26 CHAPTER 4 POLAR DECOMPOSITION AND THE INVERSION ...... 29 4.1 Polar and Givens Decomposition of SO+(p, q) ...... 29 4.2 Given Decomposition ...... 29 CHAPTER 5 RESULTS ON INVERSIONS ...... 32

5.1 Inversion of Φ2,1 via Polar Decomposition ...... 32 5.2 Preimages of Positive Definite Targets in SO+(2, 1) ...... 33 −1 5.3 Finding Φ2,1(X) when X > 0 by Inspection ...... 34 5.4 Finding the Polar Decomposition in SO+(2, 1) ...... 35

5.5 Inversion of Φ4,1 via the Inversion of Ψ4,1 ...... 39

vii 5.6 Inversion via Givens Factors ...... 41

5.7 Inversion of Φ4,1 via the polar decomposition ...... 44

CHAPTER 6 INVERSION OF φ(1, 5) ...... 49

6.1 Inversion of φ(1, 5) via Polar Decomposition ...... 49 CHAPTER 7 Spin+(2, 3) AND Spin+(2, 4) ...... 75 7.1 Spin+(2, 3) ...... 75 7.2 Spin+(2, 4): ...... 87 CHAPTER 8 CONCLUSION ...... 97 REFERENCES ...... 99 BIOGRAPHICAL SKETCH ...... 101 CURRICULUM VITAE

viii CHAPTER 1

INTRODUCTION

This dissertation is mainly centered on following two topics.

• Inversion of covering map using Polar and Given decomposition.

• We also produced the direct form of the covering map for cases (2,3), (2,4).

In this work we provide explicit algorithms to invert these double cover maps for specific

values of (p, q) (though the general techniques extend to all p, q). More precisely, given the

+ + + double covering map, Φp,q : Spin (p, q) → SO (p, q) and an X ∈ SO (p, q) (henceforth called the target), we provide algorithms to compute the matrices ±Y , in the matrix algebra that the even sub algebra of Cl(p, q) is isomorphic to, satisfying Φp,q(±Y ) = X. One prominent contrast with [1,2] is that we will invert ψp,q which is the linearization of of φp,q. Our interest in finding Y as a matrix, as opposed to an abstract element of Cl(p, q), stems from wishing to use specific matrix theoretic properties of Y (respectively X), together with an explicit matrix form of the direct map Φp,q, to infer similar properties for X (respectively Y ). A classic example of this is the usage of the (equivalently SU(2)) to find axis- representations of SO(3). In the same vein to compute the polar decomposition of a matrix in SO+(3, 2), it is easier to find that of its preimage in the corresponding ,

Sp(4,R) and then project both factors via Φ3,2. It is easy to show that the projected factors constitute the polar decomposition of the original matrix. Since the polar decomposition of a

4×4 symplectic matrix can even be found in closed form, (? ), this is indeed a viable strategy.

Similarly, one can find the polar decomposition of a matrix X in SO+(4, 1) in closed form,

(? ) and this can be then used to find the polar decomposition of Y ∈ Spin+(4, 1). Since

Y is a 2 × 2 matrix with quaternionic entries and there is no method available for its polar decomposition, (17) (barring finding the polar decomposition of an associated 4 × 4 complex

1 matrix), this is indeed useful. Similarly, block properties of the preimage, Y ∈ Spin+(p, q), viewed as a matrix, may provide useful information about X. Such information is typically unavailable out by finding Y only as an abstract element of Cl(p, q). Furthermore, one of the methods to be used in this work, viz., the inversion of the linearization of the covering map, may be used to also compute exponential in the Lie algebras so(p, q).

There is literature on the problem being considered in this work. For the case (p, q) =

(0, 3) [ or (3, 0)] this problem is classical. The case (0, 4) is considered in (? ). The cases

(0, 5) and (0, 6) are treated in (? ). The excellent work of (18) treats the general case with extensions to the Pin groups, under a genericity assumption, but finds the preimage in the abstract Clifford algebra via a formula requiring the computation of several .

We provide a detailed discussion of the relation between present work and (? 18). In (19) an algorithm is proposed, for the inversion of abstract covering map, but which requires the solution of a nonlinear equation in several variables for which no techniques seem to be presented.

The double covers of the definite and indefinite orthogonal groups, SO+(p, q) by the spin groups are venerable objects. They arise in a variety of applications. The covering of SO(3) by SU(2) is central to robotics. The group SO+(2, 1) arises in polarization optics, [4]. The group SO+(3, 2) arises as the dynamical symmetry group of the 2D hydrogen atom and also in the analysis of the DeSitter space time [8,12]. The group SO+(4, 1) arises as the representation of the three dimensional conformal group and thus has applications in robotics and computer vision, [14]. we extend the explicit construction of indefinite spin groups to Clifford algebras with (p, q) ∈ (2, 3), (4, 1), (5, 1). We also invert the covering map for several important cases of (p,q). In fact we have an explicit form for the inversion for all p,q which does not even require the form of the map φ(p, q), this is called Agnostic inversion.

But Agnostic inversion sometimes provide less information than the more concrete inversion and that is why we are doing it here.

2 In this work we try to give an overview on how applications of Clifford algebras mainly in the areas of image and signal processing, computer and robotics, neural computing, con- trol problems and other areas have developed over the past two decades. The massive range of applications established makes a complete overview next to impossible. We there- fore will confine ourselves to reinforce above mentioned application fields. The work is organized as follows. Without any claim for completeness, Chapter 2 briefly recounts the preliminaries and introduction to the terms, concepts and definitions used throughout this document. Chapter 3 focuses on Clifford algebra and an overview of the spin homomor-

+ + phism φp,q : Spin (p, q) → SO (p, q) of classical matrix Lie groups , explicitly formulated for several (p,q) with 3 ≤ p + q ≤ 6. Also the inversion of double covering map from the definite Spin(0,n) to SO(n,R) for n = 5,6 [2] which, among other applications, makes pos- sible a parametrization of SO(5,R) and SO(6,R). Chapter 4 focuses on polar decomposition and inversion of SO+(p, q) which have a great impact on the results proved in this work.

Especially for the polar decomposition of some X ∈ SO+(p, q) , defined as X = VP, for V being orthogonal and P positive definite. It is well known that both factors P and V belong to SO(p, q), and we will show that both factors belong to SO+(p, q) .We will also discuss

Givens decomposition for the inversion. Chapter 5 will focus on Inversion from Spin+(p, q) to SO+(p, q) for some cases like (2,1) and (4,1). Section 6 deals with inversion of (5,1) and formulate explicit results for it. Chapter 7 focuses on developing forward map for the cases (2,3) and (2,4) and also leads to interesting results, in passing mentions aspects of the Clifford algebra. One of the most prominent motivations for working with Clifford al- gebra stems from the fact that Clifford algebras are the natural parents of the spin groups.

The success of spin groups and theory in modeling physical phenomena such as the angular momentum of an electron is evidenced by the pioneering works of Wolfgang Pauli,

Paul Dirac and others in the early twentieth century, and such theories remain important to active physicists today. As mentioned earlier, spin groups are also useful for calculating

3 transformations. For example, a Euclidean spin group Spin(n) is related to the

special SO(n) which can be represented by the set of all n × n orthogonal matrices with unit (sometimes called rotation matrices) and both can be used to model rotations in an n-dimensional Euclidean space.

To elaborate on the relationship between Spin(n) and SO(n), consider that an n×n rota- tion matrix serves as a unique rotation transformation in an n-dimensional Euclidean space; in contrast, there exist two different Clifford numbers in the Clifford algebra representation of Spin(n), either of which can be used in a computation of the same rotation transformation.

As an example of this phenomenon, consider the group Spin(2) and the everyday experience of navigating the surface of Earth. If one is facing north and desires to face east instead, one can rotate either clockwise or counterclockwise to do so; the topology of Spin(2) and Spin(3) allows one to distinguish between equivalent ”clockwise” and ”counterclockwise” rotations of less than a full revolution (quotations emphasizing that the terms clockwise and counter- clockwise are ambiguous and used for their analogy only). The ability to distinguish between two equivalent rotations of less than a full revolution may be desirable in some applications such as robotics and computer animation. As described earlier, the exponential function has special significance in representations of both the SO(n) and Spin(n) groups. A matrix representation of an element in the group SO(n) can be generated by taking the matrix ex- ponential of a real-valued n × n matrix which changes only its sign when operated on by the matrix transpose (or equivalently the Hermitian conjugate); such matrices are called skew- symmetric. Likewise, a Clifford algebra representation of an element in the group Spin(n) can be generated by the exponential of a bi-vector in the algebra Cln. Euclidean bi-vectors also change only their sign when operated on by the Hermitian conjugate. In the special case of a two-dimensional Euclidean space, representations of elements in the groups Spin(2) and

SO(2) can both be generated by the matrix exponential of a 2 × 2 skew-symmetric matrix

4 because the Clifford algebra Cl2 can be represented using 2 × 2 real matrices. The difference between rotations calculated using representations of Spin(2) instead of SO(2) in this case results only from the difference between the actions of these group representations on vectors. Rotation matrices, for example, act on matrix column vectors, while the Clifford spin groups act on Clifford vectors. When Clifford algebras are represented by matrices, Clifford vectors take the form of square matrices instead of column vectors, and the rotation is computed as a matrix similarity transformation as opposed to the action of a column vector on a matrix.

1.1 Relation to Work in the Literature

In this section the relation of the present work with [3, 4(Emily and Adjai)] is discussed. Previously Emily Herzig formulated some Algorithm methods for the explicit inversion of the indefinite double covering map in SO+(p, q) for Spin(0,5) and (0,6). She applied this technique to solve the given system either by inspection or by inverting the associative Lie Algebra for (p,q) ∈ (2, 1), (2, 2), (3, 2) and we extended the work for (2, 1), (4, 1), (5, 1) . The relation between (? ) and the present work is as follows: In (? ) the concern is with inversion for the (0, 5) and (0, 6) cases. Since the polar decom- position of an is trivial, it does not help at all with the task of inversion. On the other hand, in this work it plays a significant role precisely because the polar de- composition for the (p, q), pq 6= 0 case is no longer trivial. Therefore, when the inversion of a positive definite target in SO+(p, q) can be carried out efficiently, it becomes even more useful than when the target is an ordinary or hyperbolic Givens matrix, since the number of Givens factor grows rapidly with n = p + q. Even though Spin+(p, q) is isomorphic to Spin+(q, p), it is not true for Cl(p, q) and Cl(q, p). Therefore we provide double covering maps provided by Spin+(2, 3) and Spin+(2, 4). Therefore We will get new realizations for the groups Spin+(3, 2) and Spin+(4, 2) giving 4∗4 complex matrices. [3] provided algorithms for inversion of the covering maps. In [3], the

5 author used Given decomposition and Gr¨obner methods whereas we use Polar and

Givens and also cover cases not covered in [3] like the case for (1,5).

Next the relation to (18) is discussed. Significant portions of the present work were

completed in late 2016, (? ). As this paper was being written up, we became aware of

the 2019 paper (18). In (18), an elegant solution is provided for inverting the abstract (as

opposed to the matrix) map Φp,q, under a generic condition on X, not required by our work. The solution in (18) is a generalization of a method proposed in (11) for the (3, 1) case. This

formula is as follows. Let X ∈ SO+(p, q) and define the element M of Cl(p, q) via

X −1 M = det(Xα,β)eα(eβ) α,β

Here α = {i1, . . . , ik} and β = {j1, . . . , jk} are subsets of {1, 2, . . . , n} of equal cardinality

(including the empty set); Xα,β is the square sub matrix of X located in rows indexed by

α and and columns indexed by β; eα = ei1 ei2 . . . eik and similarly for eβ. Here el is the lth

−1 basis 1-vector in the abstract Clifford algebra Cl(p, q). (eβ) is the inverse of eβ in Cl(p, q). It is assumed that MM rev 6= 0 in Cl(p, q). This is the aforementioned generality assump- tion. Then Φp,q(±Y ) = X, where M Y = √ MM rev Here M rev is the reversion of M.

The following nuances, besides the generality condition MM rev 6= 0, of this formula need

attention:

(i) The principal burden in implementing the formula in (18) is that one has to compute

all the minors of X. If we ally this formula with one innovation of the current work,

namely decomposing X into hyperbolic and usual Givens rotations, then a significant

reduction in complexity in implementing the formula in (18) can be expected. Indeed,

the number of non-zero minors of a hyperbolic or standard Givens is much lower than

6 that for a general X. However, from many viewpoints, it is still emphatically not true that if X is a Givens rotation then only principal minors are non-zero, and thus still several determinants have to be computed. For instance, consider   a 0 b     2 2 X =  0 1 0  , a − b = 1     b 0 a Thus X is a hyperbolic Givens rotation in SO+(2, 1). Then, for instance, the following non-principal 2 × 2 minors are non-zero: {(1, 2), (2, 3)}, {(2, 3), (1, 2)}. Hence, even when the target X is sparse, such as a Givens matrix, one has to calculate several determinants.

(ii) Furthermore, due to the involvement of several determinants, the formula obtained for Y often is quite elaborate and occludes the “half-angle” nature of the inversion even when X is simple - see Example (1.1) below for an illustration of this issue.

(iii) The formula only finds Y as an element of the abstract Clifford algebra Cl(p, q). Our methods also provide such an inversion, without the need for determinants, but by using Givens decomposition - see Remark (5.6). Of course, by using specific matrices constituting a basis of 1-vectors for Cl(p, q), Y can be recovered as a matrix. The matrix Y thereby obtained, even though an even vector, will live in Cl(p, q) which is typically an algebra of matrices of larger size than the matrix algebra constituting the even sub algebra. This is due to the very nature of the formula. Thus, for instance this formula will yield, for the case (p, q) = (3, 2), Y as a 8 × 8 matrix, even though the covering group consists of symplectic matrices of size 4. To get around this one has to know how to embed the even sub algebra in Cl(p, q), (10). In effect, one has to compute the matrix form of the grade involution. This limitation is thus due to not

having at one’s disposal an explicit matrix form for Φp,q, when using the formula in (18).

7 (iv) Next the matrix form of ±Y very much depends on the basis of 1-vectors. Without

this caveat, one can find different find different matrices, Y , with ±Y projecting to the

same X. This matter is illustrated in Example (1.1).

(v) Other steps in this method such as finding the reversion of M can, in principle, be

performed without having to resort to finding reversion as an explicit automorphism

of the matrix algebra that Cl(p, q) is isomorphic to However, as p + q grows it is more

convenient to work with a concrete matrix form of reversion, such as those in (10)).

Indeed MM rev is proportional to the identity and thus, if a matrix form of M (and

M rev) is available, then one needs to only compute the of MM rev. These issues

are all mitigated when the methods being proposed here are used, since our methods

make systematic use of the structure of the matrix form of the map Φp,q, whereas this is not the case in (18).

Consider Φ1,1. Its inversion, is of course, trivial. However, it illustrates Remark(??) and also the caveats ii) and iv) above about the usage of the formula in (18).

Let us use the basis B1 = {σx, iσy} for the 1-vectors of Cl(1, 1) ' M(2, <). Incidentally, this is the canonical basis that the constructive proof of the isomorphism between Cl(p +

1, q + 1) and M(2, Cl(p, q)) naturally yields. Then Spin+(1, 1) is realized as

  α 0 +   Spin (1, 1) = {  ; α 6= 0} (1.1) 0 1/α

      α 0 α2+1 α2−1 a b   1  α4 α4    The map Φ1,1 sends to 2 . Let X = be a target    α2−1 α2+1    0 1/α α4 α4 b a matrix is SO+(1, 1). Here

a = cosh(x); b = sinh(x)

8 Directly solving for α from the quadratic system obtained by equating the entries of X to   α 0   Φ1,1[ ] one recovers 0 1/α

  ex/2 0   Y = ±   (1.2) 0 e−x/2

The “half-angle” aspect of the covering map is manifest is this formula. Only after some algebra, is this also the solution yielded by the formula of (18). Specifi- cally,   2 + 2a + 2b 0   M = (2 + 2a)1 + 2be2e1 =   0 2 + 2a − 2b since e2e1 = σz if we use B1 as the basis of 1 vectors for Cl(1, 1). Next, a calculation shows MM rev = (8 + 8a)1. So, it follows that:   2+2√ a+2b 0  8+8a  Y =   (1.3) 2+2√ a−2b 0 8+8a

which, after further manipulations, coincides with Equation (1.2). Thus, even in this simple case, it is seen that if one is interested in a symbolic expression for Y as a function of the entries of X, then Equation (7.6) is more complicated than Equation (1.2), even though they are equivalent. Next, we could also have used B2 = {σz, ıσy}, as the basis of one vectors. Naively applying the formula in (18) would then naturally lead to Y being a linear

combination of I2 and σx, which is inconsistent with Equation (1.1). The resolution is that

+ with B2 as the choice of the basis of 1-vectors, Spin (1, 1) is just   a b +   2 2 Spin (1, 1) = {  ; a − b = 1} (1.4) b a

9 CHAPTER 2

PRELIMINARIES

2.1 Notation

We use the following notation throughout

• H is the set of quaternions. Let K be an associative algebra. Then M(n, K) is just the set of n × n matrices with entries in K. For K = C,H we define X∗ as the matrix obtained by performing entry wise complex (resp. quaternionic) conjugation first, and then transposition. For K = C, X¯ is the matrix obtained by performing entry wise complex conjugation.

• The are       0 1 0 −ı 1 0       σx = σ1 =   ; σy = σ2 =   ; σz = σ3 =   1 0 ı 0 0 −1   I 0  p p×q  • Ip,q =  . 0q×p Iq   0n In • J =  . Sp(2n, R) is the standard notation for G . 2n   J2n −In 0n • The notation X ⊕Y refers to a block- with diagonal blocks X and Y .

T • Let M be an n × n real . Then GM = {X : X MX = M}. GM is a .

• We use the standard notation SO(p, q, R) for the determinant 1 matrices in GIp,q . When p or q is zero, this group is connected, but it is not if p > 0 and q > 0. The connected component of the identity in SO(p, q, R), denoted SO+(p, q, R), is also a Lie group.

10 2.2 Preliminaries from Clifford Algebra

2.2.1 Preliminary Observations

We will begin with informal definitions of the notions of one and two-vectors for a Clifford algebra, which is sufficient for the purpose of this work. The texts (14; 16) are excellent sources for more precise definitions in the theory of Clifford algebras. Let p, q be non-negative integers with p + q = n. A collection of matrices

{X1,...,Xp,Xp+1,...,Xp+q}

with entries in R,C or H is a basis of one-vectors for the Clifford algebra Cl(p, q) if

2 1. Xi = I, for i = 1, 2, ..., p, where I is the identity matrix of the appropriate size (this size is typically different from n)

2 2. Xi = −I, for i = p + 1, p + 2, ..., p + q.

3. XiXj = −XjXi, for i 6= j; i, j = 1, 2, ..., n. A one-vector is just a real linear combination

of the Xi’s, i = 1, 2, ..., n. Similarly, a two-vector is a real linear combination of the

matrices XiXj, i < j; i, j = 1, 2, ..., n. Analogously, we can define three, four, ... n- vectors, etc. Cl(p, q) is just a real linear combination of I, one-vectors, ..., n-vectors. Spin+(p, q) is the connected component of the identity of the collection of elements x in Cl(p, q) satisfying the following requirements: i) xgr = x, i.e., x is even; ii) xxcc = 1; and iii) For all one-vectors v in Cl(0, n), xvxcc is also a one-vector. The last condition, in the presence of the first two conditions, is known to be superfluous for p + q ≤ 5, (14; 16).

4. Let n = p + q. Denote by Ip,q = Ip ⊕ (−Iq). Then

T + SO(p, q) = {X ∈ M(n, R): X Ip,qX = Ip,q; det(X) = 1}. SO (p, q) is the connected component of the identity in SO(p, q). Unless pq = 0, SO+(p, q) is a proper subset of SO(p, q). The Lie algebra of SO+(p, q) is denoted by so(p, q).

11 + + + The map Φp,q : Spin (p, q) → SO (p, q) sends x ∈ Spin (p, q) to the matrix of the linear

cc map v → xvx , where v is a 1-vector with respect to a basis {X1,...,Xp,

Xp+1,...,Xp+q} of the space of 1-vectors. It is known that Φp,q is a

+ with {±1}.Ψp,q is the linearization of Φp,q. Thus, Ψp,q sends an element y ∈ spin (p, q)

to the matrix of the v → yv − vy.Ψp,q is a Lie algebra isomorphism from spin+(p, q) to so+(p, q).

2.3 Clifford Conjugation, Reversion and Grade Automorphisms

Three important endomorphisms on a Clifford algebra are defined as follows: Define a

function φcc on the basis elements with φcc(I) = I, φcc(v) = −v for all 1-vectors v, and

φcc(ab) = φcc(b)φcc(a) for any basis vectors a and b. Then extend φcc by linearity to

all of Cl(p, q). φcc is called the Clifford conjugation anti automorphism. Define a func- tion φrev on the basis elements with φrev(I) = I, φrev(v) = v for all 1-vectors v, and

φrev(ab) = φrev(b)φrev(a) for any basis vectors a and b. Then extend φrev by linearity to all of

Cl(p, q). φrev is called the reversion anti automorphism. Define the function φgr = φrev.φcc on Cl(p, q). This function is called the grade automorphism, and satisfies φgr(I) = I,

φgr(v) = −v for all 1-vectors v, φgr(ab) = φgr(a)φgr(b) for any vectors a and b, and extends linearly over Cl(p, q).

2.4 Some Basic Matrix Groups and corresponding Lie Algebras

Following are some basic introduction to the groups and their lie algebras to make it reader friendly.

• General GL(n, F ) = {X ∈ M(n, F )|detX 6= 0}. The corresponding Lie

algebra is gl(n, F ) = M(n, F ), for F in R or C

12 • A useful subset of GL(n,F) is the

SL(n, F ) = {X ∈ GL(n, F )|detX = 1}, SL(n, H) = {X ∈ M(n, H)|det(θH (X) = 1},

where θH is defined in section 2.5. Their corresponding Lie algebras are sl(n, F ) = {X ∈ M(n, F )|T rX = 0} when F = R or C, and sl(n, H) = {X ∈ M(n, F )|Re(T rX) = 0}.

• Orthogonal Group Let O(n, R) = {X ∈ M(n, R)|XT X = I}, O(n,R) arises as the group of matrices that leave invariant the quadratic form defined by the standard in- ner product.That is, if β(v) = vT v, then β(Xv) = β(v) for all v ∈ Rn if and only if vT XT Xv = vT v for all v ∈ Rn, which implies XT X = I. For any X ∈ O(n), detX = ±1.

• the special orthogonal group SO(n, R) = {X ∈ O(n)|detX = 1}. SO(n,R) can be interpreted as the group of all rotations on Rn centered at the origin. The associated Lie algebra is so(n, R) = {X ∈ M(n, R)|X = −XT }

• The complex analogue of the orthogonal group is the , U(n) = {X ∈ M(n, C)|X ∗ X = I}. The associated Lie algebra is u(n) =∈ M(n, C)|X = −X∗}. For any X ∈ U(n), |detX| = 1. The is SU(n) = {X ∈ U(n)|detX = 1}. The associated Lie algebra is su(n) = {X ∈ M(n, C)|X = −X∗, T rX = 0}.   I 0  p  • Let I(p, q) =   for some p, q ∈ 0 with p + q = n. The quadratic form 0 −Iq T with signature (p, q) can be written as β(v) = v Ip,qv, and is left invariant by X such

n T T T n that β(Xv) = β(v) for all v ∈ R . That is, v X Ip,qXv = v Ip,qv for all v ∈ R .

T Thus we define the indefinite orthogonal group O(p, q) = {X ∈ M(n, R)|X Ip,qX =

Ip,q}. The indefinite special orthogonal group is SO(p, q) = {X ∈ O(p, q)|detX =

13 T 1}. The associated Lie algebra is so(p, q) = {X ∈ M(n, R)|X Ip,q = −Ip, qX}. As mentioned previously, when both p and q are nonzero SO(p, q) is separable, so we may further define SO+(p, q) as the connected component of SO(p, q) containing the identity. Definition 7. The complex analogues of the indefinite orthogonal and special

∗ orthogonal groups are U(p, q) = {X ∈ M(n, C)|X Ip,qX = Ip,q} and SU(p, q) =

{X ∈ U(p, q)|detX = 1}. The associated Lie algebras are u(p, q) =∈ M(n, C)|XIp,q =

∗ ∗ −Ip,qX } and su(p, q) = {X ∈ M(n, C)|XIp,q = −Ip,qX , T rX = 0}.

2.5 Quaternionic and θH Matrices

Next, to a matrix with entries will be associated a complex matrix. First, if q ∈ H is a quaternion, it can be written uniquely in the form q = z + wj, for some z, w ∈ C. Note that jη =ηj ¯ , for any η ∈ C. With this at hand, the following construction associating complex matrices to matrices with quaternionic entries is useful:

Let X ∈ M(n, H). By writing each entry xpq of X as

xpq = zpq + wpqj, zpq, wpq ∈ C we can write X uniquely as X = Z +W j with Z,W ∈ M(n, C). Associate to X the following matrix θ (X) ∈ M(2n, C): H   ZW θ (X) =   H   −W¯ Z¯

Viewing an X ∈ M(n, C) as an element of M(n, H) it is immediate that jX = Xj¯ , where X¯ is entrywise complex conjugation of X.   ZW A 2n × 2n complex matrix of the form   is said to be a θ matrix.   H −W¯ Z¯

Next some useful properties of the map θH : M(n, H) → M(2n, C) are collected.

Properties of θH:

14 i) θH is an R-linear map.

ii) θH(XY ) = θH(X)θH(Y )

∗ ∗ iii) θH(X ) = [θH(X)] . Here the ∗ on the left is quaternionic Hermitian conjugation, while that on the right is complex Hermitian conjugation.

iv) θH(In) = I2n

In this remark we will collect some more facts concerning quaternionic matrices.

1. If X,Y ∈ M(n, H) then it is not true that T r(XY ) = T r(YX). However, Re(T r(XY )) = Re(T r(YX)). Therefore, the following version of cyclic invariance of trace holds for quaternionic matrices:

Re[T r(XYZ)] = Re[T r(YZX)] = <([T r(ZXY )]

2. Let X and Y be square quaternionic matrices. Then

T r(X ⊗ Y ) = T r(X)T r(Y ).

Furthermore, if at least one of X and Y is real, then

<(T r(X ⊗ Y )) = <(T r(X))<(T r(Y ))

3. X = Z + W j is Hermitian iff Z is Hermitian (as a complex matrix) and B is skew-

symmetric. This is, of course, equivalent to θH(X) being Hermitian as a complex matrix.

4. If X is a square quaternionic matrix, we define

X2 Exp(X) = I + X + + .... n 2!

Then θH(Exp(X)) = Exp(θH(X)).

15 5. If X is a square quaternionic matrix, it is positive definite if q∗Xq > 0, for all q ∈ H.

This is equivalent to θH(X) being a positive definite complex matrix.

6. Putting the last two items together we see that if X ∈ M(n, H) is Hermitian, then Exp(X) is positive definite.

2.6 Givens-like Actions     c −s a b   2 2   2 2 Define R =   where c + s = 1 and H =   , for a − b = 1 respectively. s c b a It is well known that   c −s T   2 2 • Given a vector (x, y) there is an R =   where c + s = 1 such that s c R(x, y)T = (px2 + y2, 0)   a b T   2 • Similarly given a vector (x, y) , with|x| ≤ |y|, there is an H =  , fora − b a b2 = 1such that H(x, y)T = (px2 + y2, 0) R, H are called standard Givens and hyperbolic Givens respectively.

Embedding R, resp. H as a principal submatrix of the identity matrix In, yields matrices known as standard Givens (respectively, Hyperbolic Givens)[19].

2.7 Euler-Rodrigues Formula

Let X be an n×n matrix in R,C or H. If

X3 = −λ2X

, with λ ∈ R.Then sinλ 1 − cosλ eX = I + X + X2 λ λ2

16 And if X2 = λ2X

, with λ ∈ R. , then sinhλ coshλ − 1 eX = I + R + X2 λ λ2

2.8 Givens decomposition

.     c −s a b   2 2   2 2 Define R =   where c + s = 1 and H =  , for a − b = 1. Then s c b a the following facts are well known:   c −s T   2 2 • Given a vector (x, y) there is a R =   where c + s = 1 such that s c     x px2 + y2     R   =  . y 0     c −s x   2 2   Similarly there is a R =   where c + s = 1 such that R   = s c y   0    . px2 + y2   a b T   2 2 • Next given a vector (x, y) , with | x |≥| y |, there is a H =   with a −b = 1, b a     x px2 − y2     such that H   =   y 0

17 R(andH) are called plane standard Givens and hyperbolic Givens respectively. Embed-

ding R (and H) as a principal sub matrix of the identity matrix In, yields matrices known

as standard Givens (respectively, Hyperbolic Givens).

Example:

Let X ∈ SO+(2, 2). Consider the first column of X,   a      b    + 2 2 2 2 v1 =   Since X ∈ SO (2, 2), a + b − c − d = 1. Therefore there are R1,2,R3,4  c      d   α      0  √ √   2 2 2 2 such that the first column of R1,2,R3,4X =  , where α = a + b and β = c + d .  β      0 2 2 2 2 2 2 Since a + b − c − d = 1 = α − β , it follows that |α| > |β|. Hence there is an H1,3 such     a 1          b   0      that the first column of H1,3R1,2R3,4X v1 =   =    c   0          d 0 + Since H1,3R1,2R3,4X ∈ SO (2, 2) also, it follows that the first row of H1,3R1,2R3,4X is   also 1 0 0 0 .   0      b    Therefore, the second column of the product H1,3R1,2R3,4X is of the form   with  c      d 2 2 2 T T 2 2 2 b − c − d = 1. So there is an R3,4 such that R3,4(c, d) = (γ, 0) , where γ = c + d .

2 2 T As before b − γ = 1, so there is an H2,3 such that H2,3(b, γ) = (1, 0). So it follows

18 that the first and the second column will be equal to first two standard unit vectors. Since   1 0 0 0      0 1 0 0  +   H2,3R3,4H1,3R1,2R3,4X ∈ SO (2, 2), it follows that it equals    0 0 y y   33 34    0 0 y43 y44 + Again the condition H2,3R3,4H1,3R1,2R3,4X ∈ SO (2, 2), ensures that   y y  33 34    must itself be a plane standard Givens rotation. Therefore, pre-multiplying y43 y44 by the corresponding R3,4 we get that

R3,4H2,3R3,4H1,3R1,2R3,4X = I4

Since the inverse of each Ri,j (respectively Hk,l) is itself an Ri,j (respectively Hk,l), it follows that X can be expressed constructively as a product of R3,4, H2,3, H1,3, R1,2. Remark: the following two observations are worth mentioning for the upcoming work:

(i) The above factorization is not the only way to factor an element of SO+(2, 2) into

a product of standard and hyperbolic Givens matrices. By way of illustration, we

use a slightly different factorization in Section ??, which emanates from using an R4,3

instead of an R3,4 in one of the three usages of R3,4 above. This will result, therefore,

in the usage of an H2,4 instead of an H2,3. We note, however, that since R3,4 and an

R4,3 are essentially the same matrix, differing only in the parameter θ which enters in them. Thus, their inversion will require the symbolic solution of the same system of

equations.   p + q   (ii) There are at most   Givens factors in the decomposition of a generic X. 2 However, there are at most 2p + q − 2 distinct such factors. This is pertinent since this

implies that we have to symbolically invert only 2p + q − 2 targets.

19 2.9 Polar Decomposition

We use the standard notation SO(p,q) for the determinant 1 matrices . When p and q are zero, this group is connected, but it is not if p, q ≥ 0The connected component of iden- tity in SO(p,q) is SO+(p, q), also called Lie group. Use will be made of the following theorem:

Theorem Let X ∈ GM , with m real orthogonal, If X = QP is its polar decomposi-

tion, with P positive definite and Q real orthogonal, then P and Q are also in GM . We next collect some definitions and results on real positive definite matrices. Those statements without proof are standard.

2.10 Logarithms of Special Orthogonal Matrices of Size 4:

Let X ∈ SO(4). Then there always exist a pair of unit quaternions u, v such that X = Mu⊗v, (see, for instance, (? )). we consider the following two cases

• Suppose first that neither u nor v belong to the set {±I4}. This means M 6= ±I4. Then one can further find, essentially by inspection of u, v, a real skew-symmetric Y such that Exp(Y ) = X. Specifically, let λ ∈ (0, π) be such that <(u) = cos(λ). Then

λ let p = p1i + p2j + p3k be sin(λ) =(u). Similarly, find q = q1i + q2j + q3k from inspecting v. Then

Y = Y1 + Y2

with   0 −p1 −p2 −p3      p1 0 −p3 p2    Y1 =    p p 0 −p   2 3 1    p3 −p2 p1 0

20 and   0 q1 q2 q3      −q1 0 −q3 q2    Y2 =    −q q 0 −q   2 3 1    −q3 −q2 q1 0

Furthermore, Y1 and Y2 commute.

• Now consider the case when M = ±I4, if M = I4, then M = Exp(0n), while if

M = −I4, then M = Exp(Y ), with Y = Z ⊕ Z   0 π   where Z =  . −π 0

2.11 Special Bases for Clifford Algebras

In this section we show that every Cl(p, q) possesses a basis of 1-vectors satisfying BP1 and BP 2 of Section 5.2. We note that the work, (12), also provides special bases of 1-vectors for real Clifford algebras, but the properties of these special bases are not BP1 nor BP2. We begin by recalling three iterative constructions for Clifford algebras, (14; 16) and show that these constructions inherit BP1 and BP2.

• IC1 If {V1,...,Vp,W1,...,Wq} is a basis of 1-vectors for Cl(p, q) then     0 I 0 I     σz ⊗ Vj,   , σz ⊗ Wk,   I 0 −I 0

is a basis of 1-vectors for Cl(p + 1, q + 1). Here I is the identity element of Cl(p, q) and 0 is the zero element of Cl(p, q).

21  ∗ 0 I   Let X ∈ Cl(p, q). Then note that   [σz ⊗ X] is a 2 × 2 block matrix with I 0  ∗ 0 I   zeroes on its diagonal block. Similarly,   [σz ⊗ X] also has zero diagonal −I 0  ∗   0 I 0 I     blocks matrix. Similarly, the trace and the real part of trace of     I 0 −I 0 is also zero. Finally, if X∗Y has zero trace (respectively zero real part of trace) then

∗ the same holds for (σz ⊗ X) (σz ⊗ Y ). So property BP2 is inherited by the iteration IC1. It is evident that property BP1 is also inherited by the iteration IC1.

Remark: The following result also tells us how to extend reversion and Clifford con- jugation from that for (p,q) to (p + 1,q + 1):The following result also tells us how to extend reversion and Clifford conjugation from that for (p,q) to (p + 1,q + 1):   AB   For A,B,C,D ∈ Cl(p,q), let X =   ∈ Cl(p+1,q+1). Then Clifford conjuga- CD   Drev −Brev cc   tion and reversion on Cl(p + 1,q + 1) is given by: X =   and −Crev Arev   Dcc Bcc rev   X =   respectively. Ccc Acc

• IC2 If {E1,...,Em} is a basis of 1-vectors for Cl(m, 0) then the following set is a basis of 1-vectors for Cl(m + 8, 0):

{I ⊗ V1,...,I ⊗ V8,E1 ⊗ L, . . . , Em ⊗ L}

where

22 – I is the identity on Cl(m, 0).

– {V1,...,V8} is the basis of 1-vectors for Cl(8, 0) used in Theorem 3.1 below.

– L = σx ⊗ σx ⊗ ıσy ⊗ ıσy.

Note that L is a real symmetric matrix. The Vi’s are also real and either symmetric

or antisymmetric. Therefore, by item ii) of Remark (2.5) the reality of L and the Vi ensures that BP2 is inherited by IC2. Since L is real symmetric and the fact that

T Vi = ±Vi we also see that BP1 is also inherited by IC2.

• IC3 If {F1,...,Fm} is a basis of 1-vectors for Cl(0, m) then the following is a basis of 1-vectors for Cl(0, m + 8):

{I ⊗ V1,...,I ⊗ V8,F1 ⊗ K,...,Fm ⊗ K}

where

– I is the identity on Cl(0, m).

– {V1,...,V8} is the basis of 1-vectors for Cl(0, 8) used in Theorem 3.1 below.

– K = iσy ⊗ iσy ⊗ σz ⊗ σz.

As in the previous case K is real symmetric, while each Vi is real and either symmetric or antisymmetric. Therefore, both BP1 and BP2 are inherited by IC3

23 CHAPTER 3

POSITIVE DEFINITE ELEMENTS IN THE SPIN GROUPS

In this chapter, we prove a very useful result which ensures that one pre-image in Spin+(p, q) of a positive definite matrix in SO+(p, q) is also positive definite. In light of Remark (??), we assume in Proposition (3) that the basis, B, of one vectors for Cl(p, q) being used satisfies the following two properties:

• BP1 If V ∈ B, the basis of 1-vectors being used, then

∗ Vi = ±Vi (3.1)

• BP2 The matrices in B are orthogonal with respect to the trace inner product. Specif- ically, if B consists of real or complex matrices then T r(U ∗V ) = 0, for all U, V ∈ B,U 6= V and if B contains quaternionic matrices then Re(T r(U ∗V )) = 0, for all U, V ∈ B,U 6= V

Proposition: Let P ∈ SO+(p, q) be positive definite. Let B be a set of matrices serving as a basis of 1-vectors for Cl(p, q) satisfying BP1 and BP2, stated above. Then there is a

+ unique positive definite Y ∈ Spin (p, q) with Φp,q(Y ) = P . Proof: As shown in (? ), there is a symmetric Q ∈ so+(p, q) such that

Exp(Q) = P . Let Ψp,q be the linearization of Φp,q. We will show that the (unique) preimage

A of Q with respect to Ψp,q is Hermitian. Therefore, from the formula Φp,q[Exp(A)] =

Exp[Ψp,q(A)], it follows that if we assume that Y = Exp(A), then Φp,q(Y ) = P . Since A is Hermitian, it follows that Y = Exp(A) is positive definite. [where, in the event A is quaternionic we invoke the last item of Remark (2.5)]

−1 Let us now show that A = Ψp,q(Q) is Hermitian. First, suppose that B consists of real or complex matrices. Since B satisfies BP1 and BP2 we have:

24 • If A ∈ spin+(p, q), then A∗ ∈ spin+(p, q) also. Indeed the typical element of spin+(p, q)

∗ is a real linear combination of the VkVl, k < l. Since (VkVl) = ±VkVl (using

∗ the fact that the Vis anticommute), it follows that A is also a real linear combination

of the bivectors and is thus in spin+(p, q) also.

∗ ∗ • Ψp,q(A ) = [Ψp,q(A)] . To see this note that the (i, j)th entry of the matrix Ψp,q(A)

equals, due the Vi’s being orthogonal with respect to the trace inner product,

∗ ∗ ∗ [Ψp,q(A)]ij = T r[Vi (AVj − VjA)] = T r[A(VjVi − Vi Vj)]

(where we used cyclic invariance of trace).

th ∗ ∗ ∗ ∗ Similarly the (j, i) entry of Ψp,q(A ) equals T r[A (ViVj − Vj Vi)]. But this equals the

∗ ∗ complex conjugate of T r[(VjVi − Vi Vj)A], which again by cyclic invariance of trace,

equals Ψp,q(A)ij.

If B contains quaternionic matrices then the above argument goes through verbatim if we replace T r by <(T r) in light of item 1) of Remark(2.5).

So Ψp,q being a vector space isomorphism, if Ψp,q(A), being real, is symmetric then, it

follows that A = A∗ and hence A is Hermitian and Y = Exp(A), is positive definite.

The previous proof assumed that there is a basis of 1-vectors, {Vi} for Cl(p, q) with the

properties BP1 and BP2. For all the Clifford algebras discussed in this paper, this is true

by construction. However, for the sake of completeness, we will prove that there is at least

one such basis for all Cl(p, q) in Theorem (3.1). Notwithstanding theorem (3.1), it is worth

emphasizing that for the purpose of inversion, in light of Remark (??), one must verify the

veracity of both BP1 and BP2 for the specific basis of 1-vectors that one chooses to arrive

at the matrix form of Φp,q.

25 3.1 Bases for Clifford Algebras

Here we show that every Cl(p, q) possesses a basis of 1-vectors satisfying BP1 and BP 2 .

We note that the work, (12), also provides special bases of 1-vectors for real Clifford algebras, but the properties of these special bases are somehow different from BP1 and BP2 but still inherited from BP1 and BP2 . We will elaborate these properties of the special bases for the 1-vectors below:

We refer to section 2 on IC1, IC2, IC3 three iterative constructions for Clifford algebras,

(14; 16), defined in previous chapter and show that these constructions inherit BP1 and

BP2.

We now prove the main result:

Theorem: Every real Clifford algebra has a basis of 1-vectors with the properties BP1 and BP2.

Proof: As observed above both BP1 and BP2 are inherited by each of IC1, IC2 and IC3.

Since every Cl(r, s) can be obtained by repeatedly applying IC1 to either some Cl(n, 0) or

Cl(0, n), and every Cl(n, 0) (resp. Cl(0, n)) is obtained by applying IC2 (resp. IC3) to

Cl(m, 0), m = 0,..., 8 (resp. Cl(0, m), m = 0,..., 8) it suffices to verify the theorem for

Cl(m, 0) and Cl(0, m) for m = 0,..., 8.

Let us begin with Cl(m, 0). The following is the list of bases of 1-vectors that will be used for this purpose:

• B0,0 = Φ

• B1,0 = {σx}

26 • B2,0 = {σz, σx}

• B3.0 = {σz, σx, iσy}       0 ı 0 j 0 k       • B4,0 = {  ,   ,   , σz} −ı 0 −j 0 −k 0       0 σ ⊗ i 0 σ ⊗ j 0 σ ⊗ k  2 z   2 z   2 z  • B5,0 = {  ,   ,   , σz ⊗ i 02 σz ⊗ j 02 σz ⊗ k 02     σ 0 0 σ  z 2   2 z    ,  }. 02 −σz σz 02

• B6,0 = {I2 ⊗ σz, I2 ⊗ σx, iI2 ⊗ (iσy), jI2 ⊗ (iσy), kσx ⊗ (iσy), kσz ⊗ (iσy)}.

• B7,0 = {I4 ⊗ σz, I4 ⊗ σx, −iσz ⊗ I2 ⊗ (iσy), iσy ⊗ I2 ⊗ (iσy), −iσx ⊗ σx ⊗ (iσy),

−iσx ⊗ σz ⊗ (iσy), σx ⊗ −iσy ⊗ (iσy)}

• B8,0 = {I8 ⊗ σz, I8 ⊗ σx, −σx ⊗ iσy ⊗ I2 ⊗ (iσy),

−iσy ⊗I2 ⊗I2 ⊗(iσy), −σz ⊗iσy ⊗σz ⊗(iσy), −σz ⊗iσy ⊗σx ⊗(iσy), σz ⊗I2 ⊗iσy ⊗(iσy),

−σx ⊗ σz ⊗ iσy ⊗ (iσy)} = {V1,...,V8}

Next we verify the assertion for Cl(0, m). We work the following sets of 1-vectors for m ≤ 8:

• B0,1 = {i}.

• B0,2 = {i, j}.       i 0 j 0 k 0       • B0,3 = {  ,   ;  } 0 i 0 j 0 k         ı 0 j 0 k 0 0 k         • B0,4 = {  ,   ;   ;  } 0 ı 0 j 0 −k k 0

27 • B0,5 = {iσz ⊗ I2, iσy ⊗ I2, iσx ⊗ σx,

sigmax ⊗ σz, σx ⊗ (iσy)}

• B0,6 = {σz ⊗ (iσy) ⊗ I2, iσy ⊗ I4, σx ⊗ (iσy) ⊗ σx, σx ⊗ (iσy) ⊗ σz,

σx ⊗ I2 ⊗ I2 ⊗ (iσy), σz ⊗ σx ⊗ (iσy)} = {Z1,...,Z6}     Z 0 σ ⊗ σ ⊗ (iσ ) 0  j 8   z z y 8  • B0,7 = {  , j = 1,..., 6} ∪ { }. 08 Zj 08 σz ⊗ σz ⊗ iσy

• B0,8 = {I4 ⊗σz ⊗(iσy),I4 ⊗(iσy)⊗I2,I2 ⊗σz ⊗σz ⊗(iσy),I2 ⊗σz ⊗σx ⊗(ıσy),I2 ⊗(iσy)⊗

σx ⊗ I2,I2 ⊗ (iσy) ⊗ σz ⊗ σx, σx ⊗ (iσy) ⊗ σz ⊗ σz, σz ⊗ (iσy) ⊗ σz ⊗ σz} = {V1,...,V8}.

By construction BP1 and BP2 hold for these bases.

28 CHAPTER 4

POLAR DECOMPOSITION AND THE INVERSION

In this chapter we collect various results on decomposition of SO+(p, q) which will play an important role in the remainder of this work.

4.1 Polar and Givens Decomposition of SO+(p, q)

As we know, if we have a matrix in SO(p, q), its polar factors belong to the group SO(p, q). However if X ∈ SO+(p, q) , we can in fact show that both the factors belong to SO+(p, q).[1]

Constructive Polar Decomposition: Let X ∈ SO+(p, q). The polar decomposition of X is defined as X = VP , where V is real special orthogonal and P is positive definite, Then (see (? )), both factors V,P also belong to SO+(p, q). Furthermore, one can find V,P and the real symmetric Q with Exp(Q) = P mere by inspecting and finding special orthogonal matrices which take the first to a given vector of length one. Also provides an algorithm for computing the polar decomposition in SO+(p, q) which requires the inspection of first (resp. last) rows and columns together with finding two orthogonal matrices whose first columns come from these inspected columns. We will discuss about one of its special case in Algorithm in Section 5.1 below to cater to polar decomposition and exponentiation. For other values of (p, q) these constructive procedures can be extended, except that they involve substantially more matrix maneuvers. We will tacitly assume the contents of this remark in Sections 3 and 5.

4.2 Given Decomposition

In the previous paragraph, we saw that both factors belong to SO+(p, q) we will show in this section, we will show in this section, that all factors in the given decomposition also can

29 + also be chosen to be in SO (p, q). Let us define Hij, for i < j, stands for the n × n matrix which is the identity except in the principal sub-matrix, indexed by rows and columns (i, j), wherein it is a hyperbolic Givens matrix. Similarly, Rij stands for the n × n matrix which is the identity except in the principal sub-matrix, indexed by rows and columns (i, j), wherein it is an ordinary Givens matrix.

Remark: While Hij is defined only if i < j, the matrices Rij make sense for all pairs

(i, j) with i 6= j. Rij is the matrix which zeroes out the jth component of a vector that it premultiplies. Thus, Rij will be different, in general, if i < j from that when i > j.

+ The next result shows that Hij’s and Rij’s belong to SO (p, q).

+ Proposition: Let 1 ≤ i ≤ p and 1 ≤ j ≤ q. Then Hij belongs to SO (p, q). Similarly, if

+ either 1 ≤ i, j ≤ p or 1 ≤ i, j ≤ q, then Rij belongs to SO (p, q).

Proof: Consider the Hij case first. Define

T T Lij = θ(eiej + ejei )

with θ ∈ R. Thus Lij is the symmetric matrix which is zero everywhere, except in the (i, j)th and (j, i)th entries wherein it is θ. Due to the conditions, 1 ≤ i ≤ p and 1 ≤ j ≤ q, it is easy

+ to verify that Lij ∈ so (p, q). A calculation shows that

2 Lij = Dθ

where Dθ is n×n diagonal with zeroes everywhere, except on the ith and jth diagonal entries wherein it is θ2. Therefore,

3 3 T T 2 Lij = θ (eiej + ejei ) = θ Lij

Hence by the Euler-Rodrigues formula

sinh(θ) cosh(θ)−1 2 Exp(Lij) = Idn + θ Lij + θ2 Lij

30 T T cosh(θ)−1 = Idn + sinh(θ)(eiej + ejei ) + θ2 Dθ

= Hij

+ Thus Hij being the exponential of a matrix in the Lie algebra so (p, q), belongs to

+ SO (p, q). The proof for Rij is similar. The relevance of Givens rotations is that any matrix in SO+(p, q) can be decomposed constructively into a product of Givens matrices. It will suffice to illustrate this via an example.

31 CHAPTER 5

RESULTS ON INVERSIONS

5.1 Inversion of Φ2,1 via Polar Decomposition

+ + The inversion of Φ2,1 : Spin (2, 1) → SO (2, 1) was derived in [3,5], showing that it only

+ requires inspection to find the preimage (under Φ2,1) of matrices in SO (2, 1) which are either positive definite or special orthogonal. Since the factors in the polar decomposition of an X ∈ SO+(2, 1) also belong to SO+(2, 1), these methods also simultaneously provides the polar decomposition of the X being inverted, with minimal fuss. Alternatively, one can also directly find the polar decomposition of X, by essentially inspecting the last row and some extra calculations, and use that to invert Φ2,1.

In principle these methods extends to all (p, q) but is limited in that, besides the (2, 1) and

(3, 1) cases (the latter is treated in (? )), finding the pre image by mere inspection seems

difficult. See however, Section [5.5], wherein the (4, 1) case is handled by a combination

of the polar decomposition and inverting the associated Lie algebra isomorphism Ψ4,1 : spin+(4, 1) → so(4, 1).

Let us first provide an explicit matrix form of the map Φ2,1. This follows, after some computations, from the material in (10). Specifically we begin with the following basis of

1-vectors for Cl(2, 1):

B2,1 = {Y1,Y2,Y3} = {σz ⊗ σz, σx ⊗ I2, ıσy ⊗ I2} (5.1)

Thus, Cl(2, 1) is a matrix sub algebra of M(4,R). The even sub algebra is isomorphic to

M(2,R), which can be embedded into the former sub algebra as follows. Specifically, given

32   y y  1 2  Y =   ∈ SL(2,R) in Cl(2, 1) as follows y3 y4

  y1 0 y2 0      0 −y1 0 y2    G =   (5.2)  y 0 y 0   3 4    0 y3 0 −y4   y y  1 2  It can then be shown that the map Φ2,1 sends an element   in SL(2,R) to the y3 y4 following matrix in SO+(2, 1):   1 + 2y2y3 y2y4 − y1y3 − (y1y3 + y2y4)      y y − y y 1 (y2 − y2 − y2 + y2) 1 (y2 + y2 − y2 − y2)  (5.3)  3 4 1 2 2 1 2 3 4 2 1 2 3 4   1 2 2 2 2 1 2 2 2 2  − (y1y2 + y3y4) 2 (y1 − y2 + y3 − y4) 2 (y1 + y2 + y3 + y4)

5.2 Preimages of Positive Definite Targets in SO+(2, 1)

T T From Equation (5.3) , it is easily seen that if Y ∈ SL(2,R), then Φ2,1(Y ) = [Φ2,1(Y )] .

Hence, in view of the fact that Φ2,1 is surjective with ker(Φ2,1) = {±I, we see that if Φ2,1(Y ) is a symmetric matrix in SO+(2, 1) then necessarily Y T = ±Y . Next, a symmetric Y in SL(2,R) cannot have its (1, 1) entry equal to zero. Thus, as det(Y ) > 0, Y is either positive or negative definite. If Y is antisymmetric, it is easily seen from Equation (5.3) that Φ2,1(Y ) is diagonal with two entries negative and one positive - i.e., it is indefinite.

Furthermore, if Y ∈ SL(2,R) is symmetric then one can also deduce directly that Φ2,1(Y ) is positive definite. To that end, note that since det(Φ2,1(Y )) = 1 and quite visibly the (1, 1) entry is positive, it suffices to check that the (1, 2) minor is positive to verify positive definiteness.

33 From equation (5.3), this minor equals

1 (1 + 2y2)(y2 − 2y2 + y2) − y2(y − y )2 2 2 1 2 4 2 1 4

2 1 2 2 2 2 Using y1y4 − y2 = 1 we find that this minor is 2 (1 + 2y2)[(y1 − y4) + 2] − y2(y1 − y4) = 2 2 2y2 + (1/2)(y1 − y4) + 1 > 0. Summarizing the contents of previous two paragraphs we have following theorem. Theorem 5.2: Let X ∈ SO+(2, 1) be symmetric. Then it is either positive definite or indefinite. In the former case X = Φ2,1(±Y ), with Y ∈ SL(2,R) also positive definite. In the latter case X is diagonal and X = Φ2,1(±Y ) with Y ∈ SL(2,R) anti symmetric.

−1 5.3 Finding Φ2,1(X) when X > 0 by Inspection

Suppose that X is positive definite. Let us then address how a positive definite preimage   y y  1 2  Y ∈ SL(2,R) is found by inspection of Equation (5.3). Let Y =  . By looking at y2 y4 q X11−1 the (1, 1) entry of Equation (5.3) we see y2 = ± 2 . We consider two cases here:

• Suppose X11 6= 1 first. Then we find y1, y4 from the equations

X12 y4 − y1 = y2 X13 y4 + y1 = − y2

By Theorem (5.2), one choice of the sign for y2 will lead to a Y which is positive definite.

2 2 • If X11 = 1, then y2 = 0. Now we look at X22 and X23 to find that y1 and y4 may be found by solving the system

2 2 y1 + y4 = 2X22

2 2 y1 − y4 = 2X23

34 2 2 We take the positive square roots of the solutions y1 and y4 to find a positive definite Y projecting to X. This finishes our claim that if X ∈ SO+(2, 1) is positive definite then we

can find by inspection a positive definite Y ∈ SL(2,R) projecting to X under Φ2,1 The above discussion is summarized in the following algorithm:

+ Algorithm 5.3: Let X = (Xij) ∈ SO (2, 1) be positive definite. The following

algorithm finds a positive definite Y ∈ SL(2,R) satisfying Φ2,1(Y ) = X.

q X11−1 X12+X13 1. Suppose X11 6= 1. Let y2 = ± , y1 = − and 2 2y2

X12−X13 y4 = . 2y2   y y  1 2  2. Let Y =  . There are two choices of Y corresponding to the choice of the y2 y4

square root in y2 in Step 1, which are negatives of one another. One of these Y is positive definite. Pick this one.

√ 3. Suppose X11 = 1. Then let y2 = 0, y1 = X22 + X23 and √ y4 = X22 − X23. Then Y = diag(y1, y4) is positive definite and is one pre image of X in SL(2,R).

5.4 Finding the Polar Decomposition in SO+(2, 1)

Let us now address how to find the polar decomposition in SO+(2, 1) using the last algorithm.

Let X = VP be the polar decomposition of an X ∈ SO+(2, 1). Then the orthogonal V and positive definite P are both in SO+(2, 1). To find P we proceed as follows. First find

XT X, which is then a positive definite element of SO+(2, 1). Since its preimage Z can be chosen to be positive definite, we find it using Algorithm (5.3). Once Z has been found, we compute its unique positive definite square root, W . We note in passing that finding Y from

Z can be executed in closed form, without any eigencalculations, (? ).

35 + Since Z ∈ SL(2,R), W is also in SL(2,R). Then let P = Φ2,1(W ) ∈ SO (2, 1). Then,

2 T T T we compute P = P P = [Φ2,1(Y )] Φ2,1(Y ) = Φ2,1(Y )Φ2,1(Y )

T T = Φ2,1(Y Y ) = Φ2,1(Z) = X X So P is the positive definite factor in X = VP . Of course,

−1 −1 V = XP . Next, finding P is easy. One just interchanges y1 and y4 and replaces y2 by

−y2 and y3 by −y3 in the formula Φ2,1(Y ) = P . This completes the determination of the polar decomposition of X.

However, for the purpose of inversion of Φ2,1, it still remains to find

±S ∈ SL(2,R), satisfying Φ2,1(±S) = V . To that end, note first that V is both orthogonal and in SO+(2, 1). Thus, it is in SO(3). Hence it must have the following form V = diag(R, ±1), where R is 2 × 2 orthogonal. However, from Equation (5.3) it is clear that the (3, 3) entry of a matrix in SO+(2, 1) is positive. So, V = diag(R, 1), with R in SO(2). Let c = cos θ, s = sin θ. Then V =   c −s 0      s c 0  ,     0 0 1 As before, simple considerations show that the matrix S ∈ SL(2,R) projecting to R must itself be in SO(2). Finding R’s entries as functions of c and s is easy. First, if θ ∈ (0, 2π),

θ  then sin 2 > 0. Indeed, denoting by c = cos θ  , s = sin θ . we have b 2 b 2   c −s  b b  S = ±   (5.4) bs bc For θ = 0, 2π we get   −1 0   S = ±  (5.5) 0 −1 We summarize all of this in an algorithm. Algorithm 5.3: Given X ∈ SO+(2, 1), the following algorithm computes both its polar decomposition and the Y ∈ SL(2,R) satisfying Φ2,1(±Y ) = X.

• Step 1: Find XᵀX and find Z ∈ SL(2, R) positive definite such that Φ2,1(Z) = XᵀX, using Algorithm (5.3).

• Step 2: Find the unique positive definite square root W of Z. This step can be executed without any diagonalization - see (?).

• Step 3: Find P = Φ2,1(W) using Equation (5.3).

• Step 4: Find P⁻¹ = Φ2,1(W⁻¹), where W⁻¹ is obtained by interchanging w₁₁ and w₂₂ and replacing w₁₂, w₂₁ by −w₁₂, −w₂₁ respectively in the W from Step 2.

• Step 5: Find V = XP⁻¹. Then X = VP is the polar decomposition of X.

• Step 6: Find S ∈ SO(2) satisfying Φ2,1(S) = V using Equation (5.4) or Equation (5.5). Then Y = SW satisfies Φ2,1(±Y) = X.
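Step 2 admits a one-line closed form: by Cayley–Hamilton, a symmetric positive definite Z with det Z = 1 has unique positive definite square root W = (Z + I₂)/√(tr Z + 2). The sketch below strings Steps 1–6 together; it reuses positive_definite_preimage_so21 from the earlier sketch, and the function phi21 — the explicit matrix form of Φ2,1 from Equation (5.3) — is assumed to be supplied by the reader, since that equation is not reproduced here. Either sign of the half-angle rotation in Step 6 is acceptable, as the output is determined only up to ±Y.

```python
import numpy as np

def sqrt_sl2_pd(Z):
    """Closed-form positive definite square root of a symmetric positive
    definite Z in SL(2,R): by Cayley-Hamilton, W = (Z + I) / sqrt(tr Z + 2)."""
    return (Z + np.eye(2)) / np.sqrt(np.trace(Z) + 2.0)

def invert_phi21_via_polar(X, phi21):
    """Sketch of the six-step algorithm above.  `phi21` is an assumed,
    user-supplied function implementing the matrix form of Phi_{2,1}."""
    Z = positive_definite_preimage_so21(X.T @ X)   # Step 1 (Algorithm 5.3)
    W = sqrt_sl2_pd(Z)                             # Step 2
    P = phi21(W)                                   # Step 3
    W_inv = np.array([[W[1, 1], -W[0, 1]],         # Step 4: adjugate, det W = 1
                      [-W[1, 0], W[0, 0]]])
    P_inv = phi21(W_inv)
    V = X @ P_inv                                  # Step 5: X = V P
    # Step 6: V = diag(R, 1) with R in SO(2); S is the half-angle rotation.
    theta = np.arctan2(V[1, 0], V[0, 0])
    half = 0.5 * theta
    S = np.array([[np.cos(half), -np.sin(half)],
                  [np.sin(half),  np.cos(half)]])
    return S @ W                                   # Phi_{2,1}(+-(S W)) = X
```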

We now present a second algorithm which produces the polar decomposition directly from X itself, by inspection and by finding special orthogonal matrices which rotate a given vector into another of the same length. This is a special case of the algorithm for general SO+(n, 1) from (?) mentioned earlier in Remark (4.1).

Algorithm 5.5: Let X ∈ SO+(n − 1, 1). The following algorithm computes the polar decomposition X = VQ and also provides the logarithm, within so+(n − 1, 1), of the positive definite factor Q:

1. Suppose Xₙₙ = 1. Then use Q = Iₙ and V = X itself. The logarithm of Q is, of course, 0ₙ.

2. Let Xₙₙ > 1. Let τ ≥ 0 be uniquely defined by cosh(τ) = Xₙₙ.

3. Let U be a special orthogonal matrix of size n − 1 satisfying
$$U e_1 = \frac{1}{\sinh(\tau)}\begin{pmatrix} X_{n1} \\ X_{n2} \\ \vdots \\ X_{n,n-1} \end{pmatrix}.$$
Here e₁ is the first unit vector in Rⁿ⁻¹. U can be found constructively - see Proposition (??).

4. Let W be a special orthogonal matrix of size n − 1 satisfying
$$W e_1 = \frac{1}{\sinh(\tau)}\begin{pmatrix} X_{1n} \\ X_{2n} \\ \vdots \\ X_{n-1,n} \end{pmatrix}.$$
Define Z = WUᵀ.

5. Write C = (cosh(τ)), C̃ = C ⊕ I₍ₙ₋₂₎, and S₍ₙ₋₁₎ₓ₁ = (sinh(τ), 0, …, 0)ᵀ.

6. Then the polar decomposition of X is X = VQ, where V = Z ⊕ 1 and
$$Q = \begin{pmatrix} U\tilde{C}U^{T} & US \\ S^{T}U^{T} & C \end{pmatrix}.$$

7. Finally the logarithm, within so+(n − 1, 1), of Q is
$$\begin{pmatrix} 0_{(n-1)\times(n-1)} & U\hat{S} \\ \hat{S}^{T}U^{T} & 0 \end{pmatrix}, \qquad \hat{S} = (\tau, 0, \ldots, 0)^{T}.$$
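A minimal numerical sketch of Algorithm 5.5 follows, assuming the metric diag(1, …, 1, −1) so that the last row and column of X carry the boost data. The helper rotate_e1_to builds a special orthogonal matrix sending e₁ to a prescribed unit vector via a Householder reflection with one column flipped (it assumes n − 1 ≥ 2); it is a numerical stand-in for the constructive Proposition cited in Step 3.

```python
import numpy as np

def rotate_e1_to(u, tol=1e-12):
    """Special orthogonal U with U e1 = u (u a unit vector), for len(u) >= 2:
    Householder reflection sending e1 to u, with the last column negated
    so that det = +1."""
    m = len(u)
    e1 = np.zeros(m); e1[0] = 1.0
    v = e1 - u
    if np.linalg.norm(v) < tol:
        return np.eye(m)
    H = np.eye(m) - 2.0 * np.outer(v, v) / (v @ v)   # H e1 = u, det H = -1
    H[:, -1] *= -1.0                                  # fix determinant
    return H

def polar_so_n_minus_1_1(X):
    """Sketch of Algorithm 5.5: returns (V, Q, logQ) with X = V Q."""
    n = X.shape[0]
    if np.isclose(X[-1, -1], 1.0):
        return X.copy(), np.eye(n), np.zeros((n, n))
    tau = np.arccosh(X[-1, -1])
    sh = np.sinh(tau)
    U = rotate_e1_to(X[-1, :-1] / sh)     # Step 3
    W = rotate_e1_to(X[:-1, -1] / sh)     # Step 4
    Z = W @ U.T
    Ctil = np.eye(n - 1); Ctil[0, 0] = np.cosh(tau)
    S = np.zeros((n - 1, 1)); S[0, 0] = sh
    Q = np.block([[U @ Ctil @ U.T, U @ S],
                  [S.T @ U.T, np.array([[np.cosh(tau)]])]])
    V = np.block([[Z, np.zeros((n - 1, 1))],
                  [np.zeros((1, n - 1)), np.ones((1, 1))]])
    Shat = np.zeros((n - 1, 1)); Shat[0, 0] = tau
    logQ = np.block([[np.zeros((n - 1, n - 1)), U @ Shat],
                     [Shat.T @ U.T, np.zeros((1, 1))]])
    return V, Q, logQ
```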

Remark: Algorithm (5.5) extends, with additional matrix manipulations, to general (p, q). Once again for brevity we suppose p ≥ q. Then the main differences are:

• i) In Step 2, one would have to find an SVD of a q × q matrix (viz., the SW q × q block of X).

• ii) In Steps 3 and 4 one would have to find a matrix in SO(p, R) which rotates the first q unit vectors in Rᵖ to a given collection of q orthonormal vectors in Rᵖ (see the sketch following this remark).

If, as is the main interest in this work, one wishes to obtain closed form solutions then one has to limit to low q in i). Modification ii) can be performed in closed form if either q = p, or q = p − 1, or if p ≤ 4 (for the last case one would use quaternion algebra). Remark: Though Algorithm (5.5) applies only to the connected component of the identity, SO+(n − 1, 1, R) [or SO+(1, n − 1, R)], it is easy to extend it to the full group.
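For modification ii), one generic (numerical, not closed form) way to produce a matrix in SO(p, R) carrying the first q unit vectors to q prescribed orthonormal vectors is to complete them to an orthonormal basis and then fix the orientation. The function below is a hypothetical illustration of this idea, not the quaternionic or low-q closed forms referred to in the text; it assumes q < p so that the orientation can be fixed without touching the prescribed columns.

```python
import numpy as np

def rotate_first_q_unit_vectors(cols):
    """Given a p x q array with orthonormal columns, return U in SO(p)
    with U e_i = cols[:, i] for i = 1..q (numerical sketch via QR completion)."""
    p, q = cols.shape
    rng = np.random.default_rng(0)
    M = rng.standard_normal((p, p - q))
    M -= cols @ (cols.T @ M)            # project onto the orthogonal complement
    rest, _ = np.linalg.qr(M)           # orthonormal completion of the basis
    U = np.hstack([cols, rest])
    if np.linalg.det(U) < 0:            # fix orientation on a completed column
        U[:, -1] *= -1.0
    return U
```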

5.5 Inversion of Φ4,1 via the Inversion of Ψ4,1

In this section the map Φ4,1 : Spin+(4, 1) → SO+(4, 1) is inverted by linearizing Φ4,1. We will see that this method works both for the case where the target matrix in SO+(4, 1) is assumed to be given by its Givens factors and for the case wherein we assume that the target matrix is given by its polar decomposition. In particular, we will see that the latter provides a constructive technique to find the polar decomposition of a matrix in Spin+(4, 1). Since this is a group of certain 2 × 2 quaternionic matrices, we thus have a technique to compute the polar decomposition of such quaternionic matrices, without passage to the associated ΘH image in M(4, C) and, in particular, without any eigencalculations. As usual we begin with a basis of 1-vectors for Cl(4, 1) = M(4, C):

V1 = σz ⊗ σx; V2 = σy ⊗ I2; V3 = σz ⊗ σz; V4 = σx ⊗ I2; V5 = −σz ⊗ (ıσy)

As shown in (10), with respect to this basis,

Spin+(4, 1) = {X ∈ M(4, C) ∩ im(ΘH) : X*MX = M}

where

M = (ıσy) ⊕ (−ıσy).

Since these matrices are ΘH matrices it is convenient to identify them with the corresponding matrices in M(2, H). Note, however, that M itself is not a ΘH matrix.

Next, the Lie algebra

spin+(4, 1) = {Λ ∈ M(4, C) ∩ im(ΘH) : Λ*M = −MΛ}.

Since Λ is in the image of ΘH it is of the form $\begin{pmatrix} Z & W \\ -\bar{W} & \bar{Z} \end{pmatrix}$ with Z + Wj ∈ M(2, H). The condition Λ*M = −MΛ forces

$$Z = \begin{pmatrix} a_1 + \imath a_2 & b \\ c & -a_1 + \imath a_2 \end{pmatrix} \tag{5.6}$$

and

$$W = \begin{pmatrix} \alpha_1 + \imath\alpha_2 & \beta_1 + \imath\beta_2 \\ \gamma_1 + \imath\gamma_2 & -\alpha_1 - \imath\alpha_2 \end{pmatrix} \tag{5.7}$$

So

$$\Lambda = \begin{pmatrix} a_1 + \imath a_2 & b & \alpha_1 + \imath\alpha_2 & \beta_1 + \imath\beta_2 \\ c & -a_1 + \imath a_2 & \gamma_1 + \imath\gamma_2 & -\alpha_1 - \imath\alpha_2 \\ -\alpha_1 + \imath\alpha_2 & -\beta_1 + \imath\beta_2 & a_1 - \imath a_2 & b \\ -\gamma_1 + \imath\gamma_2 & \alpha_1 - \imath\alpha_2 & c & -a_1 - \imath a_2 \end{pmatrix}$$

The linearization of Φ4,1 sends Λ ∈ spin+(4, 1) to the matrix of the linear map which sends a one-vector V to the one-vector ΛV − VΛ, with respect to the basis {V1, …, V5} above. We then get:

$$\Psi_{4,1}(\Lambda) = \begin{pmatrix}
0 & \beta_2+\gamma_2 & -b+c & \beta_1+\gamma_1 & -2a_1 \\
-\beta_2-\gamma_2 & 0 & -2\alpha_2 & 2a_2 & -\beta_2+\gamma_2 \\
b-c & 2\alpha_2 & 0 & 2\alpha_1 & b+c \\
-\beta_1-\gamma_1 & -2a_2 & -2\alpha_1 & 0 & -\beta_1+\gamma_1 \\
-2a_1 & -\beta_2+\gamma_2 & b+c & -\beta_1+\gamma_1 & 0
\end{pmatrix} \tag{5.8}$$
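For later use it is convenient to have Equation (5.8) available programmatically; the sketch below simply transcribes the matrix. A useful sanity check, not shown, is that the output lies in so(4, 1): the upper-left 4 × 4 block is skew-symmetric and the last row equals the last column.

```python
import numpy as np

def psi41(a1, a2, b, c, alpha1, alpha2, beta1, beta2, gamma1, gamma2):
    """Assemble the 5x5 real matrix Psi_{4,1}(Lambda) of Equation (5.8)
    from the ten real parameters of Lambda in Equations (5.6)-(5.7)."""
    return np.array([
        [0.0,             beta2 + gamma2,  -b + c,    beta1 + gamma1,  -2*a1],
        [-beta2 - gamma2, 0.0,             -2*alpha2, 2*a2,            -beta2 + gamma2],
        [b - c,           2*alpha2,        0.0,       2*alpha1,        b + c],
        [-beta1 - gamma1, -2*a2,           -2*alpha1, 0.0,             -beta1 + gamma1],
        [-2*a1,           -beta2 + gamma2, b + c,     -beta1 + gamma1, 0.0],
    ])
```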

5.6 Inversion via Givens Factors

Following Example 2.8 every matrix in SO+(4, 1) can be decomposed, non-uniquely, as

X = R14 R13 R12 H15 R24 R23 H25 R34 H35 H45.

We then have the following result.

Proposition: The Y ∈ Spin+(4, 1) ⊆ M(2, H) satisfying Φ4,1(±Y) = X, where X is one of the Givens matrices in the last equation, are as follows:

R14: $Y = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2)\,j \\ -\sin(\theta/2)\,j & \cos(\theta/2) \end{pmatrix}$

R13: $Y = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}$

R1,2: $Y = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2)\,k \\ -\sin(\theta/2)\,k & \cos(\theta/2) \end{pmatrix}$

H1,5: $Y = \begin{pmatrix} e^{-\theta/2} & 0 \\ 0 & e^{\theta/2} \end{pmatrix}$

R4,2: $Y = \begin{pmatrix} \cos(\theta/2) + \sin(\theta/2)\,\imath & 0 \\ 0 & \cos(\theta/2) - \sin(\theta/2)\,\imath \end{pmatrix}$

R3,2: $Y = \begin{pmatrix} \cos(\theta/2) - \sin(\theta/2)\,k & 0 \\ 0 & \cos(\theta/2) + \sin(\theta/2)\,k \end{pmatrix}$

H2,5: $Y = \begin{pmatrix} \cosh(\theta/2) & -\sinh(\theta/2)\,k \\ \sinh(\theta/2)\,k & \cosh(\theta/2) \end{pmatrix}$

H3,5: $Y = \begin{pmatrix} \cosh(\theta/2) & \sinh(\theta/2) \\ \sinh(\theta/2) & \cosh(\theta/2) \end{pmatrix}$

H4,5: $Y = \begin{pmatrix} \cosh(\theta/2) & -\sinh(\theta/2)\,j \\ \sinh(\theta/2)\,j & \cosh(\theta/2) \end{pmatrix}$

Proof: The proof proceeds by expressing each Rij or Hij as the exponential of an Lij ∈ so(4, 1), finding the 2 × 2 quaternionic matrix Kij = Ψ4,1⁻¹(Lij), and then exponentiating Kij explicitly. This last exponential is Y. For brevity only the details for H25 are displayed.

We begin by noting that H25 = Exp[θ(e2 e5ᵀ + e5 e2ᵀ)]. By inspecting Equation (5.8), it is seen that its preimage in spin(4, 1) is
$$\Theta_H\!\left[\frac{\theta}{2}\begin{pmatrix} 0 & -k \\ k & 0 \end{pmatrix}\right].$$
Now
$$\left[\frac{\theta}{2}\begin{pmatrix} 0 & -k \\ k & 0 \end{pmatrix}\right]^{2} = \frac{\theta^{2}}{4}\,I_2.$$
Hence
$$\mathrm{Exp}\!\left[\frac{\theta}{2}\begin{pmatrix} 0 & -k \\ k & 0 \end{pmatrix}\right] = \begin{pmatrix} \cosh(\theta/2) & -\sinh(\theta/2)\,k \\ \sinh(\theta/2)\,k & \cosh(\theta/2) \end{pmatrix}. \qquad \diamond$$
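The closed-form exponentials in the proposition are easy to cross-check through the complex picture: ΘH sends Z + Wj to the block matrix [[Z, W], [−W̄, Z̄]], and since ΘH is an algebra homomorphism, the matrix exponential of the image must equal the image of the closed form. Below is a sketch for the H2,5 entry, with the quaternion k represented as ı·j, so that −(θ/2)k sits in the W block as −(θ/2)ı.

```python
import numpy as np
from scipy.linalg import expm

def theta_H(Z, W):
    """Theta_H: the quaternionic matrix Z + W j (Z, W complex 2x2) realized
    as the 4x4 complex matrix [[Z, W], [-conj(W), conj(Z)]]."""
    return np.block([[Z, W], [-np.conj(W), np.conj(Z)]])

theta = 0.7
# Preimage of H25 in spin(4,1): (theta/2) * [[0, -k], [k, 0]], with k = i*j,
# i.e. Z = 0 and W = (theta/2) * [[0, -1j], [1j, 0]].
Z = np.zeros((2, 2), dtype=complex)
W = (theta / 2) * np.array([[0, -1j], [1j, 0]], dtype=complex)

lhs = expm(theta_H(Z, W))                        # exponential in M(4, C)
Wc = np.sinh(theta / 2) * np.array([[0, -1j], [1j, 0]], dtype=complex)
rhs = theta_H(np.cosh(theta / 2) * np.eye(2, dtype=complex), Wc)
assert np.allclose(lhs, rhs)                     # matches the closed form
```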

Remark: Linearization and Givens in General: The fact that the preimages of the logarithms of the Givens factors in the spin Lie algebra always have a quadratic minimal polynomial holds for the general (p, q) case. This provides us with a method to invert both the abstract Φp,q and the matrix Φp,q without having to find a concrete form of Φp,q. We dub the latter agnostic inversion. Thus, the method of (18), as enhanced by iii) of Remark (1.1), is agnostic inversion. We will now justify our claim and display a second method for agnostic inversion which uses Givens decompositions instead of calculating minors of X.

Specifically, the spin Lie algebra is also the space of bi-vectors. Let Lij = θ(ei ejᵀ + ej eiᵀ) be the logarithm of a hyperbolic Givens Hij. Its pre-image in the space of bi-vectors is (θ/2) Xi Xj, where 1 ≤ i ≤ p and p + 1 ≤ j ≤ p + q. Indeed the abstract Ψp,q sends an element Λ in the space of bi-vectors to the matrix of the linear map which sends a one-vector V to ΛV − VΛ. From the form of Lij it then follows that if Λ is the pre-image of Lij, then Λ commutes with all one-vectors in the basis of one-vectors {X1, …, Xp, Xp+1, …, Xp+q} except Xi and Xj. This observation, plus a few calculations, shows that Λ = (θ/2) Xi Xj. Quite clearly, Λ² is a positive multiple of the identity. So Exp(Λ) = cosh(θ/2) I + sinh(θ/2) Xi Xj is the pre-image of Hij in the spin Lie group. Similar comments apply to Rij (there Λ² is a negative multiple of the identity, and cos and sin appear in place of cosh and sinh). This provides the inversion of the abstract covering map and also the agnostic inversion of Φp,q via iii) of Remark (1.1). For the inversion of the concrete Φp,q via linearization we need, of course, an explicit matrix form of Ψp,q. However, since the embedding of the even subalgebra in Cl(p, q) is an algebra isomorphism onto its image, it is guaranteed that Ψp,q⁻¹(Lij) also satisfies the same quadratic annihilating polynomial and hence its exponential is easily found.

5.7 Inversion of Φ4,1 via the polar decomposition

Let X ∈ SO+(4, 1). Then, in view of Remark (4.1), one can find constructively both its polar decomposition X = VP and the symmetric X̂ ∈ so+(4, 1) such that Exp(X̂) = P. Furthermore, by invoking Remark (??) plus a little work, we can also find a skew-symmetric, 5 × 5, real matrix whose exponential equals V.

We will presently see that it is possible to exponentiate in closed form the pre-image, under Ψ4,1, of a symmetric matrix or a skew-symmetric matrix in so+(4, 1). Therefore, using the polar decomposition to invert Φ4,1 is a viable option.

To that end let X̂ ∈ so+(4, 1) be symmetric. Then its pre-image in spin+(4, 1) is the ΘH image of the following 2 × 2 quaternionic matrix

$$\Lambda = \begin{pmatrix} a_1 & b + \beta_1 j + \beta_2 k \\ b - \beta_1 j - \beta_2 k & -a_1 \end{pmatrix} \tag{5.9}$$

A quick calculation shows Λ² = λ²I2, where

$$\lambda^{2} = a_1^{2} + |q|^{2} \tag{5.10}$$

wherein q is the quaternion b + β1 j + β2 k. Therefore,

$$\mathrm{Exp}(\Lambda) = \cosh(\lambda)\,I_2 + \frac{\sinh(\lambda)}{\lambda}\,\Lambda.$$

Hence, the preimage of P = Exp(X̂) is ΘH[cosh(λ)I2 + (sinh(λ)/λ)Λ]. Note also that

$$\Big[\cosh(\lambda)\,I_2 + \frac{\sinh(\lambda)}{\lambda}\,\Lambda\Big]^{-1} = \cosh(\lambda)\,I_2 - \frac{\sinh(\lambda)}{\lambda}\,\Lambda.$$

Next, V is both special orthogonal and in SO+(4, 1). Therefore, it is of the form W ⊕ 1, where W is 4 × 4 special orthogonal. Hence the matrix Ŷ ∈ so+(4, 1) with

Exp(Ŷ) = V

is of the form Ŷ = Y ⊕ 0₁ₓ₁ with Y a 4 × 4 real antisymmetric matrix. In view of Remark (??),

Y = Y1 + Y2

with [Y1, Y2] = 0. Thus Ŷ = Ŷ1 + Ŷ2, where Ŷl = Yl ⊕ 0₁ₓ₁, l = 1, 2. Clearly Ŷ1 and Ŷ2 also commute. Therefore,
$$\Psi_{4,1}^{-1}(\hat{Y}) = \Psi_{4,1}^{-1}(\hat{Y}_1) + \Psi_{4,1}^{-1}(\hat{Y}_2),$$
and as Ψ4,1 is a Lie algebra isomorphism we find that the two summands on the right hand side of the last equation also commute. Thus,
$$\mathrm{Exp}[\Psi_{4,1}^{-1}(\hat{Y})] = \mathrm{Exp}[\Psi_{4,1}^{-1}(\hat{Y}_1)]\,\mathrm{Exp}[\Psi_{4,1}^{-1}(\hat{Y}_2)].$$

Now, the preimage of V = Exp(Ŷ) under Φ4,1 is ±Exp[Ψ4,1⁻¹(Ŷ)], which is ± the product of the Exp[Ψ4,1⁻¹(Ŷl)], l = 1, 2.

Let us write
$$\Psi_{4,1}^{-1}(\hat{Y}_l) = \Theta_H(Z_l + W_l j), \quad l = 1, 2.$$
Then, evidently,
$$\mathrm{Exp}[\Psi_{4,1}^{-1}(\hat{Y}_l)] = \Theta_H[\mathrm{Exp}(Z_l + W_l j)], \quad l = 1, 2.$$

Now both Zl + Wl j for l = 1, 2 satisfy a cubic polynomial
$$(Z_l + W_l j)^{3} = -\kappa_l^{2}\,(Z_l + W_l j), \quad l = 1, 2,$$
where the κl are real (as will be shown presently). Hence,
$$\mathrm{Exp}(Z_l + W_l j) = I_2 + \frac{\sin(\kappa_l)}{\kappa_l}(Z_l + W_l j) + \frac{1 - \cos(\kappa_l)}{\kappa_l^{2}}(Z_l + W_l j)^{2} \tag{5.11}$$
and hence finding Φ4,1⁻¹(V) (and thus Φ4,1⁻¹(X)) is complete.
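Formula (5.11) is a Rodrigues-type expression and is immediate to implement once Zl + Wl j is in hand. The sketch below works in the complex picture and uses hypothetical values of the parameters a2, α1, α2 entering Z1 + W1 j (whose form is derived just below); the assertions check both the cubic relation and agreement with scipy's general-purpose expm.

```python
import numpy as np
from scipy.linalg import expm

def exp_cubic(N, kappa):
    """Rodrigues-type exponential of Equation (5.11): valid for a matrix N
    (here the Theta_H image of Z_l + W_l j) satisfying N^3 = -kappa^2 N."""
    dim = N.shape[0]
    return (np.eye(dim, dtype=complex)
            + (np.sin(kappa) / kappa) * N
            + ((1.0 - np.cos(kappa)) / kappa**2) * (N @ N))

# Hypothetical parameters a2, alpha1, alpha2; kappa_1 = 2*sqrt(a2^2+alpha1^2+alpha2^2).
a2, alpha1, alpha2 = 0.3, -0.2, 0.5
Z1 = a2 * np.array([[1j, 1], [-1, 1j]], dtype=complex)
W1 = np.array([[alpha1 + 1j*alpha2, -alpha2 + 1j*alpha1],
               [-alpha2 + 1j*alpha1, -alpha1 - 1j*alpha2]], dtype=complex)
N = np.block([[Z1, W1], [-np.conj(W1), np.conj(Z1)]])   # Theta_H(Z1 + W1 j)
kappa = 2.0 * np.sqrt(a2**2 + alpha1**2 + alpha2**2)
assert np.allclose(N @ N @ N, -kappa**2 * N)
assert np.allclose(exp_cubic(N, kappa), expm(N))
```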

We will next show that Z1 + W1 j is indeed annihilated by a real cubic polynomial. Inspecting Equation (2.10) and Equation (5.8), it is evident that we must also impose a1 = 0, β2 = γ2 = α1, b = −c = a2, β1 = γ1 = −α2 in Equation (5.6) and Equation (5.7) to obtain Z1 + W1 j. This then yields
$$Z_1 = a_2\begin{pmatrix} \imath & 1 \\ -1 & \imath \end{pmatrix}
\quad\text{and}\quad
W_1 = \begin{pmatrix} \alpha_1 + \imath\alpha_2 & -\alpha_2 + \imath\alpha_1 \\ -\alpha_2 + \imath\alpha_1 & -\alpha_1 - \imath\alpha_2 \end{pmatrix}.$$
Now
$$(Z_1 + W_1 j)^{2} = (Z_1^{2} - W_1\bar{W}_1) + (Z_1 W_1 + W_1\bar{Z}_1)j.$$
A quick calculation shows Z1 W1 + W1 Z̄1 = 0 and
$$Z_1^{2} = 2a_2^{2}\begin{pmatrix} -1 & \imath \\ -\imath & -1 \end{pmatrix},
\qquad
W_1\bar{W}_1 = 2(\alpha_1^{2} + \alpha_2^{2})\begin{pmatrix} 1 & -\imath \\ \imath & 1 \end{pmatrix}.$$
Thus
$$Z_1^{2} - W_1\bar{W}_1 = 2(a_2^{2} + \alpha_1^{2} + \alpha_2^{2})\begin{pmatrix} -1 & \imath \\ -\imath & -1 \end{pmatrix}.$$
Hence,
$$(Z_1 + W_1 j)^{3} = -4(a_2^{2} + \alpha_1^{2} + \alpha_2^{2})\,(Z_1 + W_1 j).$$
In other words,
$$\kappa_1 = 2(a_2^{2} + \alpha_1^{2} + \alpha_2^{2})^{1/2}.$$

Similarly, Z2 + W2 j is expressible solely in terms of a2, α1, α2 (but, of course, the triple (a2, α1, α2) for Z2 + W2 j is different from that for Z1 + W1 j). Once again (Z2 + W2 j)³ = −κ2²(Z2 + W2 j), with κ2 = 2(a2² + α1² + α2²)^{1/2}.

This completes the inversion of Φ4,1 via the polar decomposition, which we present as an algorithm:

• Let X ∈ SO+(4, 1). Compute, using Remark (4.1), both the polar decomposition X = VP and the "logarithm" Q ∈ so+(4, 1) of P, together with the logarithm Ŷ = (Y1 + Y2) ⊕ 0₁ₓ₁ in so+(4, 1), where Y1, Y2 are the commuting antisymmetric summands described above.

• Find Λ = Ψ4,1⁻¹(Q) and λ as given by Equation (5.9) and Equation (5.10). Then Φ4,1⁻¹(P) = ±ΘH[cosh(λ)I2 + (sinh(λ)/λ)Λ].

• Next find Zi + Wi j ∈ M(2, H) and κi ∈ R for i = 1, 2, from the entries of Y1, Y2.

• Then
$$\Phi_{4,1}^{-1}(V) = \pm\,\Theta_H\Big\{\Big[I_2 + \tfrac{\sin(\kappa_1)}{\kappa_1}(Z_1 + W_1 j) + \tfrac{1 - \cos(\kappa_1)}{\kappa_1^{2}}(Z_1 + W_1 j)^{2}\Big]\Big[I_2 + \tfrac{\sin(\kappa_2)}{\kappa_2}(Z_2 + W_2 j) + \tfrac{1 - \cos(\kappa_2)}{\kappa_2^{2}}(Z_2 + W_2 j)^{2}\Big]\Big\}.$$

As mentioned at the beginning of this section, the above considerations can be used to compute the polar decomposition of a matrix Y in Spin+(4, 1), without computing that of the associated 4 × 4 complex matrix that is its ΘH image. Indeed, all that one has to do is to compute X = Φ4,1(Y) and apply the previous algorithm to X.

CHAPTER 6

INVERSION OF φ(1, 5)

In this chapter we present a technique for inverting φ1,5 : Spin+(1, 5) → SO+(1, 5).

6.1 Inversion of φ(1, 5) via Polar Decomposition

In this section, we show that the polar decomposition is a viable technique for inverting φ1,5. Specifically, we show that all the steps which work for Ψ4,1 go through, except that extra work is required in finding the logarithm of a 5 × 5 orthogonal matrix constructively. Since Cl(0,4) is M(2,H), IC1 shows that Cl(1,5) is M(4,H). As shown in [5], a set of 1-vectors for Cl(0,4) = M(2,H) is given by
$$B_{0,4} = \Big\{ i\sigma_x = \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix};\; j\sigma_x = \begin{pmatrix} 0 & j \\ j & 0 \end{pmatrix};\; k\sigma_x = \begin{pmatrix} 0 & k \\ k & 0 \end{pmatrix};\; J_2 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \Big\}.$$
Furthermore, spin+(1, 5) is isomorphic to

$$\mathfrak{sl}(2,\mathbb{H}) = \{A \in M(2,\mathbb{H}) \mid \mathrm{Re}(\mathrm{Tr}\,A) = 0\}.$$

Specifically, if
$$A = \begin{pmatrix} q_1 & q_2 \\ q_3 & q_4 \end{pmatrix} \tag{6.1}$$
where
$$q_1 = a + bi + cj + dk,\quad q_2 = e + fi + gj + hk,\quad q_3 = m + ni + pj + qk,\quad q_4 = r + si + tj + uk,$$
with a, b, …, u ∈ R, then the condition Re Tr A = 0 gives Re(q1 + q4) = 0, i.e., Re(q1) = −Re(q4).

Let us now embed A into a 4 × 4 matrix, $\tilde{A} = \begin{pmatrix} A & 0 \\ 0 & -\sigma_x A^{*}\sigma_x \end{pmatrix}$, where A* denotes the quaternionic conjugate transpose of A.

Now,
$$-\sigma_x A^{*}\sigma_x = -\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} \bar{q}_1 & \bar{q}_3 \\ \bar{q}_2 & \bar{q}_4 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} -\bar{q}_4 & -\bar{q}_2 \\ -\bar{q}_3 & -\bar{q}_1 \end{pmatrix}.$$

So we get

$$\tilde{A} = \begin{pmatrix} q_1 & q_2 & 0 & 0 \\ q_3 & q_4 & 0 & 0 \\ 0 & 0 & -\bar{q}_4 & -\bar{q}_2 \\ 0 & 0 & -\bar{q}_3 & -\bar{q}_1 \end{pmatrix} \tag{6.2}$$

that is,

$$\tilde{A} = \begin{pmatrix} a+bi+cj+dk & e+fi+gj+hk & 0 & 0 \\ m+ni+pj+qk & r+si+tj+uk & 0 & 0 \\ 0 & 0 & -r+si+tj+uk & -e+fi+gj+hk \\ 0 & 0 & -m+ni+pj+qk & -a+bi+cj+dk \end{pmatrix} \tag{6.3}$$

Then we have the following theorem.

Theorem: ψ1,5 is a Lie algebra isomorphism from sl(2,H) to so(1,5). If A ∈ sl(2,H), with entries as in (6.1), then

$$\psi_{1,5}(A) = \begin{pmatrix}
0 & f-n & g-p & h-q & e+m & a-r \\
f-n & 0 & -(d+u) & c+t & b-s & -(f+n) \\
g-p & d+u & 0 & -(b+s) & c-t & -(g+p) \\
h-q & -(c+t) & b+s & 0 & d-u & -(h+q) \\
e+m & -(b-s) & -(c-t) & -(d-u) & 0 & -(e-m) \\
a-r & f+n & g+p & h+q & e-m & 0
\end{pmatrix} \tag{6.4}$$
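Equation (6.4) is straightforward to transcribe; the sketch below assembles ψ1,5(A) from the sixteen real coordinates of A (the constraint Re Tr A = 0, i.e. a = −r, is assumed but not enforced). A convenient check, not shown, is that the output lies in so(1,5): the first row equals the first column, and the lower-right 5 × 5 block is skew-symmetric.

```python
import numpy as np

def psi15(a, b, c, d, e, f, g, h, m, n, p, q, r, s, t, u):
    """Assemble the 6x6 real matrix psi_{1,5}(A) of Equation (6.4) from the
    real coordinates of A = [[q1, q2], [q3, q4]] in sl(2,H), where
    q1 = a+bi+cj+dk, q2 = e+fi+gj+hk, q3 = m+ni+pj+qk, q4 = r+si+tj+uk."""
    return np.array([
        [0.0,   f-n,    g-p,    h-q,    e+m,    a-r   ],
        [f-n,   0.0,    -(d+u), c+t,    b-s,    -(f+n)],
        [g-p,   d+u,    0.0,    -(b+s), c-t,    -(g+p)],
        [h-q,   -(c+t), b+s,    0.0,    d-u,    -(h+q)],
        [e+m,   -(b-s), -(c-t), -(d-u), 0.0,    -(e-m)],
        [a-r,   f+n,    g+p,    h+q,    e-m,    0.0   ],
    ])
```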

Proof: The proof is provided at the end of the chapter.

Let us examine the pre-image of a symmetric (resp. skew-symmetric) matrix in so(1,5).

Symmetric case: Examining Equation (6.4), we see that for the image to be symmetric the skew-symmetric lower-right 5 × 5 block must vanish; this forces b = s = 0, c = t = 0, d = u = 0, f = −n, g = −p, h = −q and e = m. Therefore q1 = a, q2 = e + fi + gj + hk, q3 = q̄2 and q4 = −a (using Re Tr A = 0), so the preimage of a symmetric matrix takes the form
$$A = \begin{pmatrix} a & q \\ \bar{q} & -a \end{pmatrix}$$
with a real and q an arbitrary quaternion. Therefore A² = (a² + |q|²)I2, so e^A is easy to calculate.

Antisymmetric case: We see that we must have f = n, g = p, h = q, e = −m and a = r = 0. Therefore q1 = bi + cj + dk, q2 = e + fi + gj + hk, q3 = −e + fi + gj + hk, q4 = si + tj + uk, and the matrix
$$A = \begin{pmatrix} q_1 & q_2 \\ q_3 & q_4 \end{pmatrix}$$
satisfies A* = −A. Therefore the θH image of such an A is a matrix in sp(4). The exponential of matrices in sp(4) was thoroughly investigated in [3]. Notice that finding the polar decomposition in SO+(1,5) can be performed constructively just as in the case of SO+(4, 1). The only ingredient for which we do not yet have a constructive procedure is finding the logarithm of the orthogonal factor.
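Returning to the symmetric case above: since A² = (a² + |q|²)I2, the exponential is again of cosh/sinh type. The sketch below verifies this in the complex picture, with each quaternion entry q = z + wj realized as the 2 × 2 complex block [[z, w], [−w̄, z̄]] and with hypothetical values of a and q; this entrywise realization is used only for the numerical check and is not the θH identification employed elsewhere.

```python
import numpy as np
from scipy.linalg import expm

def quat_block(z, w):
    """2x2 complex block representing the quaternion z + w j (z, w complex)."""
    return np.array([[z, w], [-np.conj(w), np.conj(z)]])

# Hypothetical symmetric-case element A = [[a, q], [conj(q), -a]] of sl(2,H),
# realized as a 4x4 complex matrix block by block.
a = 0.4
q_z, q_w = 0.3 + 0.1j, -0.2 + 0.5j                     # q = q_z + q_w j
A = np.block([[a * np.eye(2),                    quat_block(q_z, q_w)],
              [quat_block(np.conj(q_z), -q_w),  -a * np.eye(2)]])

lam = np.sqrt(a**2 + abs(q_z)**2 + abs(q_w)**2)        # lambda^2 = a^2 + |q|^2
closed_form = np.cosh(lam) * np.eye(4) + (np.sinh(lam) / lam) * A
assert np.allclose(expm(A), closed_form)
```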

Proof of the theorem: We will calculate the matrix of the linear map Vi → ÃVi − ViÃ, where the Vi, i = 1, …, 6, are the 1-vectors of Cl(1,5) listed below. To facilitate this calculation we use the fact that V1 to V6 form an orthogonal set with respect to the inner product ⟨X, Y⟩ = Re Tr(X*Y). Here aij, i, j = 1, …, 6, refers to the entries of the matrix of this linear map.

Let us begin with the Pauli matrices σx, σy, σz and the set
$$B_{0,4} = \Big\{ i\sigma_x = \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix};\; j\sigma_x = \begin{pmatrix} 0 & j \\ j & 0 \end{pmatrix};\; k\sigma_x = \begin{pmatrix} 0 & k \\ k & 0 \end{pmatrix};\; J_2 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \Big\},$$
which is a set of 1-vectors for Cl(0, 4) = M(2,H); the only difference from the real case is that the conjugate transpose is the quaternionic one. Clifford conjugation is given by X^{cc} = X* and reversion by X^{rev} = σz X* σz, where X* indicates quaternion conjugation and transposition. Then a set of 1-vectors for Cl(1, 5) is given by
$$B_{1,5} = \{\sigma_x \otimes \sigma_x,\; \sigma_x \otimes i\sigma_z,\; \sigma_x \otimes j\sigma_z,\; \sigma_x \otimes k\sigma_z,\; i\sigma_y \otimes I_2,\; \sigma_x \otimes i\sigma_y\}.$$

Let A ∈ M(2,H), the collection of 2 × 2 matrices with quaternionic entries, with Re Tr A = 0, written as $A = \begin{pmatrix} q_1 & q_2 \\ q_3 & q_4 \end{pmatrix}$, where

q1 = a + bi + cj + dk

q2 = e + fi + gj + hk

52 q3 = m + ni + pj + qk q4 = r + si + tj + uk

The condition ReT rA = 0 gives that Re(q1 + q4) = 0 implies Re(q1) = −Re(q4)   A 0 Let us now embed A into a 4 × 4 matrix, A˜ =   where and A∗ represents  ∗  0 −σxA σx transpose of complex conjugate

      0 −1 q¯1 q¯3 0 1 ∗       −σxA σx =       −1 0 q¯2 q¯4 1 0       −q¯ −q¯ 0 1 −q¯ −q¯  2 4     4 2  =     =   −q¯1 −q¯3 1 0 −q¯3 −q¯1

So we get A˜ =   q1 q2 0 0      q3 q4 0 0     , for q1, q2, q3 and q4 defined as  0 0 −q¯ −q¯   4 2    0 0 −q¯3 −q¯1

q¯1 = a − bi − cj − dk q¯2 = e − fi − gj − hk q¯3 = m − ni − pj − qk q¯4 = r − si − tj − uk

So we get

53   a + bi + cj + dk e + fi + gj + hk 0 0      m + ni + pj + qk r + si + tj + uk 0 0  ˜   A =    0 0 −r + si + tj + uk −e + fi + gj + hk      0 0 −m + ni + pj + qk −a + bi + cj + dk

˜ ˜ Let us define ψA˜ : 1Vs → 1Vs such that ψA˜(Vj) = AVj − VjA, j = 1,2,...,6 and define the inner product as follows:

∗ hψA˜Vj,Vji = ReT r(ψA˜(Vj )Vj)

Let us first calculate the basis(1-vectors) for Cl(1,5)   0 0 0 1      0 0 1 0    V1 = σx ⊗ σx =    0 1 0 0      1 0 0 0   0 0 i 0         0 1 i 0  0 0 0 −i        V2 = σx ⊗ iσz =   ⊗   =   1 0 0 −i  i 0 0 0      0 −i 0 0   0 0 j 0         0 1 j 0  0 0 0 −j        V3 = σx ⊗ jσz =   ⊗   =   1 0 0 −j  j 0 0 0      0 −j 0 0   0 0 k 0         0 1 k 0  0 0 0 −k        V4 = σx ⊗ kσz =   ⊗   =   1 0 0 −k  k 0 0 0      0 −k 0 0

54   0 0 1 0         0 1 1 0  0 0 0 1        V5 = iσy ⊗ I2 =   ⊗   =   −1 0 0 1  −1 0 0 0      0 −1 0 0   0 0 0 1         0 1 0 1  0 0 −1 0        V6 = σx ⊗ iσy =   ⊗   =   1 0 −1 0  0 1 0 0      −1 0 0 0 0 Now we calculate the image under ψ for all Vi s, i =1,2,..,6   ˘ 0 0 V1,3 2a    ˘   0 0 2r V2,4  ˜ ˜   ψA˜(V1) = AV1 − V1A =    V˘ −2r 0 0   3,1    ˘ −2a V4,2 0 0

˘ where V1,3 = (e + m) + (f − n)i + (g − p)j + (h − q)k, ˘ V2,4 = (e + m) + (n − f)i + (p − g)j − (q − h)k, ˘ V3,1 = −(e − m) + (f − n)i + (g − p)j + (h − q)k, ˘ V4,2 = −(e + m) + (n − f)i + (p − g)j + (q − h)k

  ˘ ∗ 0 0 V1,3 −2a      0 0 2r V˘ ∗  ∗  2,4  ψA˜(V1) =    V˘ ∗ 2r 0 0   3,1    ˘ ∗ 2a V4,2 0 0

˘ ∗ For V1,3 = −(e + m) + (f − n)i + (g − p)j + (h − q)k, ˘ ∗ V2,4 = −(e + m) + (f − n)i + (g − p)j + (h − q)k, ˘ ∗ V3,1 = (e + m) − (f − n)i − (g − p)j − (h − q)k, ˘ ∗ V4,2 = −(e + m) + (f − n)i + (g − p)j + (h − q)k

55   −2a 0 0 0      0 −2r 0 0  ∗   hψA˜(V1),V1i = ReT r(ψA˜(V1 )V1) =   = 0  0 0 2a 0      0 0 0 2r

∗ and hψA˜(V1),V2i = ReT r(ψA˜(V1 )V2)

  f − n 0 0 0      0 f − n 0 0    =   = 4(f − n)  0 0 f − n 0      0 0 0 f − n

∗ hψA˜(V1),V3i = ReT r(ψA˜(V1 )V3)

  g − p 0 0 0      0 g − p 0 0    =   = 4(g − p)  0 0 g − p 0      0 0 0 g − p

∗ hψA˜(V1),V4i = ReT r(ψA˜(V1 )V4)

  h − q 0 0 0      0 h − q 0 0    =   = 4(h − q)  0 0 h − q 0      0 0 0 h − q

∗ hψA˜(V1),V5i = ReT r(ψA˜(V1 )V5)

56   e + m 0 0 0      0 e + m 0 0    =   = 4(e + m)  0 0 e + m 0      0 0 0 e + m

∗ hψA˜(V1),V6i = ReT r(ψA˜(V1 )V6)

  2a 0 0 0      0 −2r 0 0    =   = 4(a − r)  0 0 −2r 0      0 0 0 2a

˜ ˜ ψA˜(V2) = AV2 − V2A   ˘ 0 0 U1,3 2f    ˘   0 0 −2n U2,4    =    U˘ = 2f 0 0   3,1    ˘ −2n U2,4 0 0

˘ whereU1,3 = −(b − s) + (a + r)i + (d + u)j − (c + t)k, ˘ U2,4 = −(b − s) − (a + r)i − (d + u)j + (c + t)k, ˘ U3,1 = (b − s) − (a + r)i + (d + u)j − (c + t)k, ˘ U4,2 = (b − s) + (a + r)i − (d + u)j + (c + t)k

  ˘ ∗ 0 0 U1,3k −2n      0 0 2f U˘ ∗  ∗  2,4  ψA˜(V1) =    U˘ ∗ −2n 0 0   3,1    ˘ ∗ 2f U4,2 0 0

˘ ∗ where U1,3 = (b − s) + (a + r)i − (d + u)j + (c + t)k,

57 ˘ ∗ U2,4 = (b − s) − (a + r)i + (d + u)j − (c + t)k, ˘ ∗ U3,1 = −(b − s) − (a + r)i − (d + u)j + (c + t)k, ˘ ∗ U4,2 = −(b − s) + (a + r)i + (d + u)j − (c + t)k

∗ hψA˜(V2),V1i = ReT r(ψA˜(V2 )V1)   −2n 0 0 0      0 2f 0 0    =   = 4(f − n)  0 0 −2n 0      0 0 0 2f

∗ hψA˜(V2),V2i = ReT r(ψA˜(V2 )V2)   −(a + r) 0 0 0      0 −(a + r) 0 0    =   = 0  0 0 (a + r) 0      0 0 0 (a + r)

∗ hψA˜(V2),V3i = ReT r(ψA˜(V2 )V3)

  d + u 0 0 0      0 d + u 0 0    =   = 4(d + u)  0 0 d + u 0      0 0 0 d + u

∗ hψA˜(V2),V4i = ReT r(ψA˜(V2 )V4)

  −(c + t) 0 0 0      0 −(c + t) 0 0    =   = −4(c + t)  0 0 −(c + t) 0      0 0 0 −(c + t)

58 ∗ hψA˜(V2),V5i = ReT r(ψA˜(V2 )V5)

  −(b − s) 0 0 0      0 −(b − s) 0 0    =   = −4(b − s)  0 0 −(b − s) 0      0 0 0 −(b − s)

∗ hψA˜(V2),V6i = ReT r(ψA˜(V2 )V6)

  2n 0 0 0      0 2f 0 0    =   = 4(f + n)  0 0 2n 0      0 0 0 2f

˜ ˜ ψA˜(V3) = AV3 − V3A

  ˘ 0 0 W1,3 2g    ˘   0 0 −2p W2,4    =    W˘ 2g 0 0   3,1    ˘ −2p W4,2 0 0 ˘ ˘ where W1,3 = −(c − t) − (d + u)i + (a + r)j + (b + s)k, W2,4 = −(c − t) + (d + u)i − (a + r)j − ˘ ˘ (b+s)k, W3,1 = (c−t)−(d+u)i−(a+r)j+(b+s)k, W4,2 = (c−t)+(d+u)i+(a+r)j−(b+s)k

  ˘ ∗ 0 0 W1,3 −2p      0 0 2g W˘ ∗  ∗  2,4  ψA˜(V3) =    W˘ ∗ −2p 0 0   3,1    ˘ ∗ 2g W4,2 0 0 ˘ ∗ ˘ ∗ where W1,3 = (c − t) + (d + u)i + (a + r)j − (b + s)k, W2,4 = (c − t) − (d + u)i − (a + r)j + (b +

59 ˘ ∗ ˘ ∗ s)k, W3,1 = −(c−t)+(d+u)i−(a+r)j−(b+s)k, W4,2 = −(c−t)−(d+u)i+(a+r)j+(b+s)k

∗ hψA˜(V3),V1i = ReT r(ψA˜(V3 )V1)

  −2p 0 0 0      0 2g 0 0    =   = 4(g − p)  0 0 −2p 0      0 0 0 2g

∗ hψA˜(V3),V2i = ReT r(ψA˜(V3 )V2)

  −(d + u) 0 0 0      0 −(d + u) 0 0    =   = −4(d + u)  0 0 −(d + u) 0      0 0 0 −(d + u)

∗ hψA˜(V3),V3i = ReT r(ψA˜(V3 )V3)

  −(a + r) 0 0 0      0 −(a + r) 0 0    =   = 0  0 0 −a + r) 0      0 0 0 (a + r)

∗ hψA˜(V3),V4i = ReT r(ψA˜(V3 )V4)

  b + s 0 0 0      0 b + s 0 0    =   = 4(b + s)  0 0 b + s 0      0 0 0 b + s

60 ∗ hψA˜(V3),V5i = ReT r(ψA˜(V3 )V5)

  −(c − t) 0 0 0      0 −(c − t) 0 0    =   = −4(c − t)  0 0 −(c − t) 0      0 0 0 −(c − t)

∗ hψA˜(V3),V6i = ReT r(ψA˜(V3 )V6)

  2g 0 0 0      0 −2p 0 0    =   = 4(g + p)  0 0 −2g 0      0 0 0 2p

˜ ˜ ψA˜(V4) = AV4 − V4A   ˘ 0 0 X1,3 2h    ˘   0 0 −2q X2,4    =    X˘ 2h 0 0   3,1    ˘ −2q X4,2 0 0 ˘ ˘ where X1,3 = −(d − u) + (c + t)i − (b + s)j + (a + r)k, X2,4 = −(d − u) − (c + t)i + (b + s)j − ˘ ˘ (a+r)k, X3,1 = (d−u)+(c+t)i−(b+s)j−(a+r)k, X4,2 = (d−u)−(c+t)i+(b+s)j+(a+r)k

  ˘ ∗ 0 0 X1,3 −2q      0 0 2h X˘ ∗  ∗  2,4  ˘ ∗ ψA˜(V4) =   where X1,3 = (d − u) − (c + t)i + (b + s)j + (a +  X˘ ∗ −2q 0 0   3,1    ˘ ∗ 2h X4,2 0 0 ˘ ∗ ˘ ∗ r)k, X2,4 = (d − u) + (c + t)i − (b + s)j − (a + r)k, X3,1 = −(d − u) − (c + t)i + (b + s)j − ˘ ∗ (a + r)k, X4,2 = −(d − u) + (c + t)i − (b + s)j + (a + r)k

61 ∗ hψA˜(V4),V1i = ReT r(ψA˜(V4 )V1)

  −2q 0 0 0      0 2h 0 0    =   = 4(h − q)  0 0 −2q 0      0 0 0 2h

∗ hψA˜(V4),V2i = ReT r(ψA˜(V4 )V2)

  (c + t) 0 1 0      0 (c + t) 0 1    =   = 4(c + t)  0 0 (c + t) 0      0 0 0 (c + t)

∗ hψA˜(V4),V3i = ReT r(ψA˜(V4 )V3)

  −(b + s) 0 0 0      0 −(b + s) 0 0    =   = −4(b + s)  0 0 −(b + s) 0      0 0 0 −(b + s)

∗ hψA˜(V4),V4i = ReT r(ψA˜(V4 )V4)

  −(a + r) 0 0 0      0 −(a + r) 0 0    =   = 0  0 0 (a + r)) 0      0 0 0 (a + r)

∗ hψA˜(V4),V5i = ReT r(ψA˜(V4 )V5)

62   −(d − u) 0 0 0      0 −(d − u) 0 0    =   = −4(d − u)  0 0 −(d − u) 0      0 0 0 −(d − u)

∗ hψA˜(V4),V6i = ReT r(ψA˜(V4 )V6)

  2q 0 0 0      0 2h 0 0    =   = 4(h + q)  0 0 2q 0      0 0 0 2h

˜ ˜ ψA˜(V5) = AV5 − V5A   ˘ 0 0 Y1,3 2e    ˘   0 0 2m Y2,4    =   ,  Y˘ 2e 0 0   3,1    ˘ 2m Y4,2 0 0 ˘ ˘ ˘ where Y1,3 = (a+r)+(b−s)i+(c−t)j+(d−u)k, Y2,4 = (a+r)−(b−s)i−(c−t)j−(d−u)k, Y3,1 = ˘ (a + r) − (b − s)i − (c − t)j − (d − u)k, Y4,2 = (a + r) − (b − s)i − (c − t)j − (d − u)k

  ˘ ∗ 0 0 Y1,3 2m      0 0 2e Y˘ ∗  ∗  2,4  ψA˜(V5) =   ,  Y˘ ∗ 2m 0 0   3,1    ˘ ∗ 2e Y4,2 0 0 ˘ ∗ ˘ ∗ ˘ ∗ where Y1,3 = (a+r)−(b−s)i−(c−t)j−(d−u)k, Y2,4 = (a+r)+(b−s)i+(c−t)j+(d−u)k, Y3,1 = ˘ ∗ (a + r) − (b − s)i − (c − t)j − (d − u)k, Y4,2 = (a + r) + (b − s)i + (c − t)j + (d − u)k

∗ hψA˜(V5),V1i = ReT r(ψA˜(V5 )V1)

63   2m 0 0 0      0 2e 0 0    =   = 4(e + m)  0 0 2m 0      0 0 0 2e

∗ hψA˜(V5),V2i = ReT r(ψA˜(V5 )V2)

  b − s 0 0 0      0 b − s 0 0    =   = 4(b − s)  0 0 b − s 0      0 0 0 b − s

∗ hψA˜(V5),V3i = ReT r(ψA˜(V5 )V3)

  c − t 0 0 0      0 c − t 0 0    =   = 4(c − t)  0 0 c − t 0      0 0 0 c − t

∗ hψA˜(V5),V4i = ReT r(ψA˜(V5 )V4)

  d − u 0 0 0      0 d − u 0 0    =   = 4(d − u)  0 0 d − u 0      0 0 0 d − u

∗ hψA˜(V5),V5i = ReT r(ψA˜(V5 )V5)

64   −(a + r) 0 0 0      0 −(a + r) 0 0    =   = 0  0 0 (a + r) 0      0 0 0 (a + r)

∗ hψA˜(V5),V6i = ReT r(ψA˜(V5 )V6)

  −2m 0 0 0      0 2e 0 0    =   = 4(e − m)  0 0 −2m 0      0 0 0 2e

˜ ˜ ψA˜(V6) = AV6 − V6A   ˘ 0 0 Z1,3 2a    ˘   0 0 −2r Z2,4    =    Z˘ −2r 0 0   3,1    ˘ 2a Z4,2 0 0 ˘ ˘ where Z1,3 = (e−m)−(f +n)i−(g+p)j −(h+q)k, Z2,4 = −(e−m)+(f +n)i+(g+p)j +(h+ ˘ ˘ q)k, Z3,1 = (e−m)−(f +n)i−(g+p)j−(h+q)k, Z4,2 = −(e−m)+(f +n)i+(g+p)j+(h+q)k

  ˘∗ 0 0 Z1,3 2a      0 0 −2r Z˘∗  ∗  2,4  ψA˜(V6) =  ,  Z˘∗ −2r 0 0   3,1    ˘∗ 2a Z4,2 0 0 ˘∗ ˘∗ where Z1,3 = (e−m)+(f +n)i+(g +p)j +(h+q)k, Z2,4 = (e−m)−(f +n)i−(g +p)j −(h+ ˘∗ ˘∗ q)k, Z3,1 = (e−m)+(f +n)i+(g+p)j+(h+q)k, Z4,2 = −(e−m)−(f +n)i−(g+p)j−(h+q)k

∗ hψA˜(V6),V1i = ReT r(ψA˜(V6 )V1)

65   2a 0 0 0      0 −2r 0 0    =   = 4(a − r)  0 0 −2r 0      0 0 0 2a

∗ hψA˜(V6),V2i = ReT r(ψA˜(V6 )V2)

  −(f + n) 0 0 0      0 −(f + n) 0 0    =   = −4(f + n)  0 0 −(f + n) 0      0 0 0 −(f + n)

∗ hψA˜(V6),V3i = ReT r(ψA˜(V6 )V3)

  −(g + p) 0 0 0      0 −(g + p) 0 0    =   = −4(g + p)  0 0 −(g + p) 0      0 0 0 −(g + p)

∗ hψA˜(V6),V4i = ReT r(ψA˜(V6 )V4)

  −(h + q) 0 0 0      0 −(h + q) 0 0    =   = −4(h + q)  0 0 −(h + q) 0      0 0 0 −(h + q)

∗ hψA˜(V6),V5i = ReT r(ψA˜(V6 )V5)

66   −(e − m) 0 0 0      0 −(e − m) 0 0    =   = −4(e − m)  0 0 −(e − m) 0      0 0 0 −(e − m)

∗ hψA˜(V6),V6i = ReT r(ψA˜(V6 )V6)

  −2a 0 0 0      0 −2r 0 0    =   = 0  0 0 2r 0      0 0 0 2a

So ψA˜(V1) = a11V1 + a21V2 + a31V3 + a41V4 + a51V5 + a61V6

For     0 0 0 1 0 0 0 1          0 0 1 0   0 0 1 0  ∗     hV1,V1i = ReT r[V1 V1] =      0 1 0 0   0 1 0 0          1 0 0 0 1 0 0 0   1 0 0 0      0 1 0 0    = ReT r   = 4  0 0 1 0      0 0 0 1     0 0 −i 0 0 0 i 0          0 0 0 i   0 0 0 −i  ∗     hV2,V2i = ReT r[V2 V2] =      −i 0 0 0   i 0 0 0          0 i 0 0 0 −i 0 0

67   1 0 0 0      0 1 0 0    = ReT r   = 4  0 0 1 0      0 0 0 1     0 0 −j 0 0 0 j 0          0 0 0 j   0 0 0 −j  ∗     hV3,V3i = ReT r[V3 V3] =      −j 0 0 0   j 0 0 0          0 j 0 0 0 −j 0 0   1 0 0 0      0 1 0 0    = ReT r   = 4  0 0 1 0      0 0 0 1     0 0 −k 0 0 0 k 0          0 0 0 k   0 0 0 −k  ∗     hV4,V4i = ReT r[V4 V4] =      −k 0 0 0   k 0 0 0          0 k 0 0 0 −k 0 0   1 0 0 0      0 1 0 0    = ReT r   = 4  0 0 1 0      0 0 0 1     0 0 −1 0 0 0 −1 0          0 0 0 −1   0 0 0 −1  ∗     hV5,V5i = ReT r[V5 V5] =      1 0 0 0   1 0 0 0          0 1 0 0 0 1 0 0

68   1 0 0 0      0 1 0 0    = ReT r   = 4  0 0 1 0      0 0 0 1

∗ hV6,V6i = ReT r[V6 V6]

    0 0 0 −1 0 0 0 −1          0 0 1 0   0 0 1 0      =      0 −1 0 0   0 −1 0 0          1 0 0 0 1 0 0 0   1 0 0 0      0 1 0 0    = ReT r   = 4  0 0 1 0      0 0 0 1

a11 = hψA˜(V1),V1i/hV1,V1i = 0/4 = 0 4(f−n) a21 = hψA˜(V1),V2i/hV2,V2i = 4 = f − n 4(g−p) a31 = hψA˜(V1),V3i/hV3,V3i = 4 = g − p 4(h−q) a41 = hψA˜(V1),V4i/hV4,V4i = 4 = h − q 4(e+m) a51 = hψA˜(V1),V5i/hV5,V5i = 4 = e + m 4(a−r) a61 = hψA˜(V1),V6i/hV6,V6i = 4 = a − r

ψA˜(V2) = a12V1 + a22V2 + a32V3 + a42V4 + a52V5 + a62V6 where

−2(b−s) 4(f−n) a12 = hψA˜(V2),V1i/hV1,V1i = 4 = 4 0 a22 = hψA˜(V2),V2i/hV2,V2i = 4 = 0

69 4(d+u) (d+u) a32 = hψA˜(V2),V3i/hV3,V3i = 4 = 2 −4(c+t) a42 = hψA˜(V2),V4i/hV4,V4i = 4 = −(c + t) −4(b−s) a52 = hψA˜(V2),V5i/hV5,V5i = 4 = −(b − s) 4(n+f) a62 = hψA˜(V2),V6i/hV6,V6i = 4 = n + f

ψA˜(V3) = a11V1 + a21V2 + a31V3 + a41V4 + a51V5 + a61V6 4(g−p) a13 = hψA˜(V3),V1i/hV1,V1i = 4 = g − p −4(d+u) a23 = hψA˜(V3),V2i/hV2,V2i = 4 = −(d + u) 0 a33 = hψA˜(V3),V3i/hV3,V3i = 4 = 0 4(b+s) a43 = hψA˜(V3),V4i/hV4,V4i = 4 = (b + s) −4(c−t) a53 = hψA˜(V3),V5i/hV5,V5i = 4 = −(c − t) 4(g+p) a63 = hψA˜(V3),V6i/hV6,V6i = 4 = g + p

ψA˜(V4) = a14V1 + a24V2 + a34V3 + a44V4 + a54V5 + a64V6

4(h−q) a14 = hψA˜(V4),V1i/hV1,V1i = 4 = h − q 4(c+t) a24 = hψA˜(V4),V2i/hV2,V2i = 4 = c + t −4(b+s) a34 = hψA˜(V4),V3i/hV3,V3i = 4 = −(b + s) 0 a44 = hψA˜(V4),V4i/hV4,V4i = 4 = 0 −4(d−u) a54 = hψA˜(V4),V5i/hV5,V5i = 4 = −(d − u) 4(h+q) a64 = hψA˜(V4),V6i/hV6,V6i = 4 = h + q

ψA˜(V5) = a15V1 + a25V2 + a35V3 + a45V4 + a55V5 + a65V6

4(e+m) a15 = hψA˜(V5),V1i/hV1,V1i = 4 = e + m 4(b−s) a25 = hψA˜(V5),V2i/hV2,V2i = 4 = b − s

70 4(c−t) a35 = hψA˜(V5),V3i/hV3,V3i = 4 = c − t 4(d−u) a45 = hψA˜(V5),V4i/hV4,V4i = 4 = d − u 0 a55 = hψA˜(V5),V5i/hV5,V5i = 4 = 0 4(e−m) a65 = hψA˜(V5),V6i/hV6,V6i = 4 = e − m

ψA˜(V6) = a16V1 + a26V2 + a36V3 + a46V4 + a56V5 + a66V6 4(a−r) a16 = hψA˜(V6),V1i/hV1,V1i = 4 = (a − r) −4(f+n) a26 = hψA˜(V6),V2i/hV2,V2i = 4 = −(f + n) −4(g+p) a36 = hψA˜(V6),V3i/hV3,V3i = 4 = −(g + p) −4(h+q) a46 = hψA˜(V6),V4i/hV4,V4i = 4 = −(h + q) −4(e−m) a56 = hψA˜(V6),V5i/hV5,V5i = 4 = −(e − m) 0 a66 = hψA˜(V6),V6i/hV6,V6i = 4 = 0

Hence we get the result given below   a11 a12 a13 a14 a15 a16      a a a a a a   21 22 23 24 25 26       a31 a32 a33 a34 a35 a36       a41 a42 a34 a44 a45 a46       a a a a a a   51 52 35 54 55 56    a61 a62 a36 a64 a56 a66   0 f − n g − p h − q e + m a − r      f − n 0 −(d + u) c + t b − s −(f + n)         g − p d + u 0 −(b + s) c − t −(g + p)  =      h − q −(c + t) b + s 0 d − u −(h + q)       e + m −(b − s) −(c − t) −(d − u) 0 −(e − m)      a − r f + n g + p h + q e − m 0

71 Notice that, the resultant matrix has three important things worth mentioning to further conclude the result.

• the diagonal entries are all zeroes as was our expectation.

• Entries of first row and first column are exactly the same which will help us conclude the result.In this case, we consider all entries in the first row and first column equal to zero for the resultant 6 × 6 matrix. We obtain f + n = 0, g − p = 0, h − q = 0, e + m = 0, a − r = 0 giving us, f = n, g = p, h = q, e = −m, a = r, the resultant matrix could be simplified based on the findings as   0 0 0 0 0 0      0 0 −(d + u) c + t b − s 0         0 d + u 0 −(b + s) c − t 0       0 −(c + t) b + s 0 d − u 0       0 −(b − s) −(c − t) −(d − u) 0 0      0 2f 2g 2h 2e 0

• The remaining 5 × 5 matrix leaving first row and the first column, is skew- symmetric matrix. we consider all entries equal to zero, for the 5 × 5 matrix given above leaving

72 first row and first column. We obtain the following equations

d + u = 0 gives d = −u

d − u = 0 gives d = u so d = 0

c + t = 0 gives c = −t, c − t = 0 gives c = t gives c = 0

b + s = 0 gives b = −s, b − s = 0 gives b = s gives b = 0

f + n = 0 gives f = −n, g + p = 0 gives g = −p

e − m = 0 gives e = m

h + q = 0 gives h = −q

So we obtain a 2 × 2 quaternionic matrix with entries given below:
q1 = a + bi + cj + dk = a,
q2 = e + fi + gj + hk,
q3 = m + ni + pj + qk = e − fi − gj − hk = the quaternionic conjugate of q2,
q4 = r + si + tj + uk = r.
Writing it in matrix form,
$$q = \begin{pmatrix} q_1 & q_2 \\ q_3 & q_4 \end{pmatrix} = \begin{pmatrix} a & q_2 \\ \bar{q}_2 & r \end{pmatrix} = \begin{pmatrix} a & e+fi+gj+hk \\ e-fi-gj-hk & r \end{pmatrix}.$$
Notice that the diagonal entries of this matrix are real and the off-diagonal entries are quaternionic conjugates of each other.

For the other case (f = n, g = p, h = q, e = −m, a = r) we similarly obtain a 2 × 2 quaternionic matrix with entries
q1 = a + bi + cj + dk, q2 = e + fi + gj + hk, q3 = m + ni + pj + qk = −e + fi + gj + hk, q4 = r + si + tj + uk = a + si + tj + uk.
Writing it in matrix form,
$$q = \begin{pmatrix} a+bi+cj+dk & e+fi+gj+hk \\ -e+fi+gj+hk & a+si+tj+uk \end{pmatrix},$$
so that
$$-q = \begin{pmatrix} -a-bi-cj-dk & -e-fi-gj-hk \\ e-fi-gj-hk & -a-si-tj-uk \end{pmatrix}
\quad\text{and}\quad
q^{cc} = q^{*} = \begin{pmatrix} a-bi-cj-dk & -e-fi-gj-hk \\ e-fi-gj-hk & a-si-tj-uk \end{pmatrix}.$$
Since a = r = 0 in this case, the two coincide: q* = −q. So q is an anti-hermitian quaternionic matrix: its diagonal real parts a and r sum to zero, and its off-diagonal entries are negative quaternionic conjugates of each other. Thus spin+(1, 5) is isomorphic to sl(2,H), and Spin+(1, 5) is isomorphic to SL(2,H), realized inside Cl(1,5) = M(4,H).

Furthermore, the eight elements I2, σx, σy, σz, σxσy, σyσz, σxσz, σxσyσz are linearly independent over R. Taking the span of these eight elements over R gives a representation of Cl(3, 0) as M(2,C); the Pauli matrices are the 1-vectors for this representation, and the even and odd subalgebras of Cl(3, 0) can be constructed explicitly. The algebra Cl(0,2) is isomorphic to the quaternion algebra, and can be defined using the basis mentioned above.

74 CHAPTER 7

Spin+(2, 3) AND Spin+(2, 4)

In this chapter, we study a direct approach to the groups Spin+(2, 3) and Spin+(2, 4). The motivation is that even though Spin+(p, q) is isomorphic to Spin+(q, p), the former lives in Cl(p,q) while the latter lives in Cl(q,p), and the two Clifford algebras are not isomorphic. In this way, we will obtain novel representations of the classical groups which are isomorphic to these spin groups. We denote x^{cc} := φ^{cc}(x), x^{rev} := φ^{rev}(x), and x^{gr} := φ^{gr}(x). As the size of the pair (p,q) increases, the task of finding a convenient set of 1-vectors in order to define Clifford conjugation, reversion and the grade map as explicit matrix automorphisms becomes arduous. However, our task will be simplified by the iterative construction IC1 defined previously in Chapter 2.

7.1 Spin+(2, 3)

Cl(0,1) is C. Then Clifford conjugation is defined as x → x̄, and reversion as x → x. For Cl(1,2) = M(2,C), by IC1, Clifford conjugation is defined as
$$X \to J_2^{-1} X^{T} J_2, \quad \text{where } J_2 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix},$$
and reversion as
$$X \to R_2^{-1} X^{*} R_2, \quad \text{where } R_2 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$
Also, Cl(2,3) = M(4,C); by IC1, Clifford conjugation is defined as
$$X \to K_4^{-1} X^{*} K_4, \quad \text{where } K_4 = \begin{pmatrix} 0 & \sigma_x \\ -\sigma_x & 0 \end{pmatrix}.$$
Note that K4 = −M_{k⊗1}. Reversion is defined as
$$X \to L_4^{-1} X^{T} L_4, \quad \text{where } L_4 = \begin{pmatrix} 0 & J_2 \\ J_2 & 0 \end{pmatrix},$$
and the grade automorphism as
$$X \to G_4^{-1} \bar{X} G_4, \quad \text{where } G_4 = \begin{pmatrix} -\sigma_z & 0 \\ 0 & -\sigma_z \end{pmatrix}.$$
Thus the even subalgebra of Cl(2,3) is
$$\Big\{X \in M(4,\mathbb{C}) \;\Big|\; X = \begin{pmatrix} x_1 & iy_1 & ix_2 & y_2 \\ iz_1 & w_1 & z_2 & iw_2 \\ ix_3 & y_3 & x_4 & iy_4 \\ z_3 & iw_3 & iz_4 & w_4 \end{pmatrix}\Big\} \tag{7.1}$$
with the xl, yl, zl, wl real. Furthermore, we obtain that spin+(2, 3) equals the set of X as defined above which further satisfy
$$X^{*} M_{k\otimes 1} = -M_{k\otimes 1} X, \quad \text{where } M_{k\otimes 1} = \begin{pmatrix} 0 & 0 & 0 & -1 \\ 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}.$$

Examining (7.1), it is tempting to believe that it must be isomorphic to the collection of real matrices given below,
$$Y = \begin{pmatrix} x_1 & y_1 & x_2 & y_2 \\ z_1 & w_1 & z_2 & w_2 \\ x_3 & y_3 & x_4 & y_4 \\ z_3 & w_3 & z_4 & w_4 \end{pmatrix}, \tag{7.2}$$
which further satisfy Yᵀ P Y = P, where P is some version of the symplectic signature matrix. However, a small calculation reveals that the map which sends X as in Equation (7.1) to Y as in Equation (7.2) is not even remotely an algebra isomorphism. So we have to be more innovative. To that end, we begin with the following basis of 1-vectors for Cl(2,3):
$$e_1 = \sigma_z \otimes \sigma_x = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & -1 & 0 \end{pmatrix}, \quad
e_2 = \sigma_x \otimes I_2 = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}, \quad
f_1 = \sigma_z \otimes i\sigma_z = \begin{pmatrix} i & 0 & 0 & 0 \\ 0 & -i & 0 & 0 \\ 0 & 0 & -i & 0 \\ 0 & 0 & 0 & i \end{pmatrix},$$

77   0 1 0 0         1 0 0 −i  −1 0 0 0        f2 = σz ⊗ iσy =   ⊗ i   =  , 0 −1 i 0  0 0 0 −1      0 0 1 0   0 0 1 0         0 1 1 0  0 0 0 1        f3 = iσy ⊗ I2 =   ⊗   =   −1 0 0 1  −1 0 0 0      0 −1 0 0 Using these basis we define the following matrices.

T1 = e1f3 = (σz ⊗ σx)(iσy ⊗ I2)   0 0 0 1      0 0 1 0    =   = σx ⊗ σx  0 1 0 0      1 0 0 0   1 0 0 0      0 1 0 0    T2 = e2.f3 = (σx ⊗ I2)(iσy ⊗ I2) =    0 0 1 0      0 0 0 1   0 0 i 0      0 0 0 −i    T3 = f1.f3 = (σz ⊗ iσz)(iσy ⊗ I2) =    i 0 0 0      0 −i 0 0     0 0 0 1 0 0 0 1          0 0 −1 0   0 0 1 0      T4 = f2f3 =   , T5 = T1.T2 =    0 1 0 0   0 −1 0 0          −1 0 0 0 −1 0 0 0

78     0 −i 0 0 −1 0 0 0          i 0 0 0   0 1 0 0      T6 = T1.T3 =   , T7 = T1.T4 =    0 0 0 −i   0 0 −1 0          0 0 i 0 0 0 0 1     0 0 −i 0 0 0 0 −1          0 0 0 i   0 0 1 0      T8 = T2.T3 =   , T9 = T2T4 =    i 0 0 0   0 1 0 0          0 −i 0 0 −1 0 0 0   0 i 0 0      i 0 0 0    T10 = T3T4 =    0 0 0 i      0 0 i 0

T11 = T1.T2.T3       0 0 0 1 1 0 0 0 0 0 i 0              0 0 1 0   0 1 0 0   0 0 0 −i        =        0 1 0 0   0 0 1 0   i 0 0 0              1 0 0 0 0 0 0 1 0 −i 0 0   0 −i 0 0      i 0 0 0    =    0 0 0 i      0 0 −i 0       0 0 0 1 1 0 0 0 0 0 0 1              0 0 1 0   0 1 0 0   0 0 −1 0        T12 = T1T2T4 =     .    0 1 0 0   0 0 1 0   0 1 0 0              1 0 0 0 0 0 0 1 −1 0 0 0

79   −1 0 0 0      0 1 0 0    =    0 0 1 0      0 0 0 −1       0 0 0 1 0 0 i 0 0 0 0 1              0 0 1 0   0 0 0 −i   0 0 −1 0        T13 = T1T3T4 =        0 1 0 0   i 0 0 0   0 1 0 0              1 0 0 0 0 −i 0 0 −1 0 0 0   0 0 i 0      0 0 0 i    =    i 0 0 0      0 i 0 0       1 0 0 0 0 0 i 0 0 0 0 1              0 1 0 0   0 0 0 −i   0 0 −1 0        T14 = T2T3T4 =        0 0 1 0   i 0 0 0   0 1 0 0              0 0 0 1 0 −i 0 0 −1 0 0 0   0 −i 0 0      −i 0 0 0    =    0 0 0 i      0 0 i 0         0 0 0 1 1 0 0 0 0 0 i 0 0 0 0 1                  0 0 1 0   0 1 0 0   0 0 0 −i   0 0 −1 0          T15 = T1T2T3T4 =          0 1 0 0   0 0 1 0   i 0 0 0   0 1 0 0                  1 0 0 0 0 0 0 1 0 −i 0 0 −1 0 0 0

80   0 0 i 0      0 0 0 i    =    −i 0 0 0      0 −i 0 0   1 0 0 0      0 1 0 0    T16 = I4 =   .  0 0 1 0      0 0 0 1 Consider the real linear combination aT1 + bT2 + cT3 + dT4 + eT5 + fT6 + gT7 + hT8 + jT9 + kT10 + lT11 + mT12 + nT13 + pT14 + rT15   −b − g i(−f + k) i(c − h) a + d + e − j      i(f + k) g − b a − d + e + j i(h − c)    =   +  i(c + h) a + d − e + j b − g i(k − f)      a − d − e − j i(−c − h) i(f + k) b + g       0 −il 0 0 −m 0 0 0 0 0 in 0              il 0 0 0   0 m 0 0   0 0 0 in          +   +   +  0 0 0 il   0 0 m 0   in 0 0 0              0 0 −il 0 0 0 0 −m 0 in 0 0       0 −ip 0 0 0 0 iq 0 r 0 0 0              −ip 0 0 0   0 0 0 iq   0 r 0 0          +   +    0 0 0 ip   −iq 0 0 0   0 0 r 0              0 0 ip 0 0 −iq 0 0 0 0 0 r   −b − g − m + r i(−f + k − l − p) i(c − h + n + q) a + d + e − j      i(f + k + l − p) g − b + m + r a − d + e + j i(h − c + n + q)    =   (7.3)  i(c + h + n − q) a + d − e + j b − g + m + r i(k − f + l + p)      a − d − e − j i(−c − h + n − q) i(f + k − l + p) b + g − m + r

81 Comparing the resulting matrix with X where X is in equation(7.1).We get the following

equations:

x1 = −b − g − m + r, y1 = −f + k − l − p, x2 = c − h + n + q

y2 = a + d + e − j, z1 = f + k + l − p, w1 = g − b + m + r z2 = a − d + e + j, w2 = h − c + n + q, x3 = c + h + n − q y3 = a + d − e + j, x4 = b − g + m + r, y4 = k − f + l + p

z3 = a − d − e − j, w3 = −c − h + n − q, z4 = f + k − l + p, w4 = b + g − m + r

Now since the even sub algebra of Cl(2,3) is Cl(2,2), we define the following real matri-

ces:   0 1 0 0         1 0 0 1  1 0 0 0        S1 = σz ⊗ σx =   ⊗   =  , 0 −1 1 0  0 0 0 −1      0 0 −1 0   0 0 1 0         1 0 1 0  0 0 0 1        S2 = σx ⊗ I2 =   ⊗   =  , 0 1 0 1  1 0 0 0      0 1 0 0   0 1 0 0         1 0 i 0  −1 0 0 0        S3 = σz ⊗ iσy =   ⊗ i   =  , 0 −1 0 −i  0 0 0 −1      0 0 1 0   0 0 1 0         0 1 1 0  0 0 0 1        S4 = iσy ⊗ I2 =   ⊗ i   =  , −1 0 0 1  −1 0 0 0      0 −1 0 0

82       0 1 0 0 0 0 1 0 0 0 0 1              1 0 0 0   0 0 0 1   0 0 1 0        S5 = S1S2 =     =    0 0 0 −1   1 0 0 0   0 −1 0 0              0 0 −1 0 0 1 0 0 −1 0 0 0       0 1 0 0 0 1 0 0 −1 0 0 1              1 0 0 0   −1 0 0 0   0 1 0 0        S6 = S1S3 =     =    0 0 0 −1   0 0 0 −1   0 0 −1 0              0 0 −1 0 0 0 1 0 0 0 0 1       0 1 0 0 0 0 1 0 0 0 0 1              1 0 0 0   0 0 0 1   0 0 1 0        S7 = S1S4 =     =    0 0 0 −1   −1 0 0 0   0 1 0 0              0 0 −1 0 0 −1 0 0 1 0 0 0       0 0 1 0 0 1 0 0 0 0 0 −1              0 0 0 1   −1 0 0 0   0 0 1 0        S8 = S2S3 =     =    1 0 0 0   0 0 0 −1   0 1 0 0              0 1 0 0 0 0 1 0 −1 0 0 0       0 0 1 0 0 0 1 0 −1 0 0 0              0 0 0 1   0 0 0 1   0 −1 0 0        S9 = S2S4 =     =    1 0 0 0   −1 0 0 0   0 0 1 0              0 1 0 0 0 −1 0 0 0 0 0 1       0 1 0 0 0 0 1 0 0 0 0 1              −1 0 0 0   0 0 0 1   0 0 −1 0        S10 = S3S4 =     =    0 0 0 −1   −1 0 0 0   0 1 0 0              0 0 1 0 0 −1 0 0 −1 0 0 0

83       0 1 0 0 0 0 1 0 0 1 0 0              1 0 0 0   0 0 0 1   −1 0 0 0        S11 = S1S2S3 =        0 0 0 −1   1 0 0 0   0 0 0 −1              0 0 −1 0 0 1 0 0 0 0 1 0   0 0 1 0      0 0 0 −1    =    1 0 0 0      0 −1 0 0       0 1 0 0 0 0 1 0 0 0 1 0              1 0 0 0   0 0 0 1   0 0 0 1        S12 = S1S2S4 =        0 0 0 −1   1 0 0 0   −1 0 0 0              0 0 −1 0 0 1 0 0 0 −1 0 0   0 −1 0 0      −1 0 0 0    =    0 0 0 −1      0 0 −1 0       0 1 0 0 0 1 0 0 0 0 1 0              1 0 0 0   −1 0 0 0   0 0 0 1        S13 = S1S3S4 =        0 0 0 −1   0 0 0 −1   −1 0 0 0              0 0 −1 0 0 0 1 0 0 −1 0 0   0 0 −1 0      0 0 0 1    =    1 0 0 0      0 −1 0 0

84       0 0 1 0 0 1 0 0 0 0 1 0              0 0 0 1   −1 0 0 0   0 0 0 1        S14 = S2S3S4 =        1 0 0 0   0 0 0 −1   −1 0 0 0              0 1 0 0 0 0 1 0 0 −1 0 0   0 1 0 0      −1 0 0 0    =    0 0 0 1      0 0 −1 0         0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0                  1 0 0 0   0 0 0 1   −1 0 0 0   0 0 0 1          S15 = S1S2S3S4 =          0 0 0 −1   1 0 0 0   0 0 0 −1   −1 0 0 0                  0 0 −1 0 0 1 0 0 0 0 1 0 0 −1 0 0   0 1 0 0      −1 0 0 0    =    0 0 0 1      0 0 −1 0 and   1 0 0 0      0 1 0 0    S16 = I4 =   .  0 0 1 0      0 0 0 1

Note that S1 to S4 form the basis of 1-vectors for Cl(2,2), S5 to S10 are the basis 2-vectors for Cl(2,2), S11 to S14 are the 3-vectors for Cl(2,2), S15 is the sole 4-vector for Cl(2,2), and S16 is the 0-vector. Notice that the Si are all real matrices, as they should be. We form the linear combination given below:

φ(x)

85 = [aS1 + bS2 + cS3 + dS4 + eS5 + fS6 + gS7 + hS8 + jS9 + kS10 + lS11 + mS12 + nS13 + pS14 +

qS15 + rI4]
$$= \begin{pmatrix}
-f-j-q+r & a+c-m+p & b+d+l-n & e+g-h+k \\
a-c-m-p & f-j+q+r & e+g+h-k & b+d-l+n \\
b-d+l+n & -e+g+h+k & -f+j+q+r & -a-c-m+p \\
-e+g-h-k & b-d-l-n & -a+c-m-p & f+j-q+r
\end{pmatrix} \tag{7.4}$$
So the map which sends the matrix in Equation (7.3) to the real matrix in Equation (7.4) is an algebra isomorphism, as one can easily verify. Now we will identify the spin Lie algebra spin(2,3), which equals the set of X as in Equation (7.1) which further satisfy X*M = −MX, with those Y's as in Equation (7.4) which further satisfy
$$Y^{T} S = -S Y,$$
with S a 4 × 4 real skew-symmetric matrix of the form
$$S = \begin{pmatrix} 0 & a & b & c \\ -a & 0 & d & e \\ -b & -d & 0 & f \\ -c & -e & -f & 0 \end{pmatrix}. \tag{7.5}$$
After some calculations, we see that S can be taken to be the matrix given below:
$$S = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \end{pmatrix}. \tag{7.6}$$

Therefore we get the following theorem.

Theorem: Spin+(2, 3) is the group defined by the matrices in Equation (7.1) satisfying
$$X^{*} M X = M.$$
This is isomorphic to a group of real 4 × 4 matrices via the map φ which sends X to Y; the image of φ is the group of real 4 × 4 matrices Y such that
$$Y^{T} S Y = S,$$
where S is defined in Equation (7.6). This latter group is explicitly conjugate to Sp(4, R). Hence Spin+(2, 3) is also explicitly isomorphic to Sp(4, R).
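The final assertion — that the group {Y : YᵀSY = S}, with S as in Equation (7.6), is explicitly conjugate to Sp(4, R) — amounts to bringing S to the standard symplectic form by a change of basis. The permutation used below is one convenient choice (not necessarily the conjugation intended in the text); the random group element is generated from the associated Lie algebra just to exercise the check.

```python
import numpy as np
from scipy.linalg import expm

S = np.array([[0, 0, 0, 1],
              [0, 0, -1, 0],
              [0, 1, 0, 0],
              [-1, 0, 0, 0]], dtype=float)          # Equation (7.6)
J = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.eye(2), np.zeros((2, 2))]])      # standard symplectic form

# A permutation of coordinates sending S to the standard form: T^T S T = J.
T = np.eye(4)[:, [0, 2, 3, 1]]
assert np.allclose(T.T @ S @ T, J)

# Sample group element: exponentiate A = S^{-1} B with B symmetric,
# which satisfies A^T S = -S A, so exp(A) preserves S.
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4)); B = B + B.T
Y = expm(np.linalg.inv(S) @ B)
assert np.allclose(Y.T @ S @ Y, S)
# Conjugating by T carries it into Sp(4,R) in the standard form.
Ystd = np.linalg.inv(T) @ Y @ T
assert np.allclose(Ystd.T @ J @ Ystd, J)
```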

7.2 Spin+(2, 4)

Starting from Cl(0,2) = H and applying IC1 twice we get Cl(2,4) = M(4,H). Inside Cl(2,4) sits Spin+(2, 4) as a group of quaternionic matrices. Since Spin+(2, 4) is known to be isomorphic to SU(2,2), we would like to find an explicit isomorphism from this group to SU(2,2). This is difficult to do directly, so we work at the level of the Lie algebra spin+(2, 4).

We begin with Cl(0,2) = H. Then Clifford conjugation is defined as
$$x \to \bar{x},$$
and reversion as
$$x \to k^{-1}\bar{x}k = -k\bar{x}k.$$
For Cl(1,3) = M(2,H), by IC1, Clifford conjugation is defined as
$$X \to A_2^{-1} X^{*} A_2, \quad \text{where } A_2 = \begin{pmatrix} 0 & k \\ -k & 0 \end{pmatrix};$$
note that A2⁻¹ = A2 = A2*. Reversion is defined as
$$X \to R_2^{-1} X^{*} R_2, \quad \text{where } R_2 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.$$
Also, Cl(2,4) = M(4,H); by IC1, Clifford conjugation is defined as
$$X \to K_4^{-1} X^{*} K_4, \quad \text{where } K_4 = \begin{pmatrix} 0 & \sigma_x \\ \sigma_x & 0 \end{pmatrix} = -M_{k\otimes 1},$$
and reversion as
$$X \to H_4^{-1} X^{*} H_4, \quad \text{where } H_4 = \begin{pmatrix} 0 & 0 & 0 & k \\ 0 & 0 & -k & 0 \\ 0 & k & 0 & 0 \\ -k & 0 & 0 & 0 \end{pmatrix}.$$
The grade automorphism is defined as X → G4⁻¹ X̄ G4, where
$$G_4 = \begin{pmatrix} -k\sigma_z & 0 \\ 0 & k\sigma_z \end{pmatrix}.$$
Accordingly we write an element of Cl(2,4) = M(4,H) as $X = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$; the elements of interest below are those for which A and D commute with kσz and B and C anticommute with kσz. The Lie algebra spin+(2, 4) is the space of bi-vectors of Cl(2,4), so we calculate that first.

88   0 0 1 0      0 0 0 1    V1 = σx ⊗ I =    1 0 0 0      0 1 0 0   0 0 i 0         i 0 0 1  0 0 0 −i        V2 = iσz ⊗ σx =   ⊗   =   0 −i 1 0  i 0 0 0      0 −i 0 0   0 0 j 0         j 0 0 1  0 0 0 −j        V3 = jσz ⊗ σx =   ⊗   =   0 −j 1 0  j 0 0 0      0 −j 0 0   0 0 k 0         k 0 0 1  0 0 0 −k        V4 = kσz ⊗ σx =   ⊗   =   0 −k 1 0  k 0 0 0      0 −k 0 0   0 1 0 0         1 0 0 1  −1 0 0 0        V5 = σz ⊗ J2 =   ⊗   =   0 −1 −1 0  0 0 0 −1      0 0 1 0   0 0 1 0         0 1 1 0  0 0 0 1        V6 = J2 ⊗ I2 =   ⊗   =   −1 0 0 1  −1 0 0 0      0 −1 0 0

Let V1, V2, V3, V4, V5, V6 be the 1-vectors of Cl(2,4) listed above, so that V1² = V2² = I and V3² = V4² = V5² = V6² = −I. We will calculate a basis, over R, of the even subalgebra of Cl(2,4), consisting of 4 × 4 quaternionic matrices, as follows:

90     0 0 −j 0 0 0 0 −1          0 0 0 j   0 0 1 0      T11 = T2T4 =   , T12 = T2T5 =    j 0 0 0   0 1 0 0          0 −j 0 0 −1 0 0 0     k 0 0 0 0 i 0 0          0 k 0 0   i 0 0 0      T13 = T3T4 =   , T14 = T3T5 =    0 0 k 0   0 0 0 i          0 0 0 k 0 0 i 0     0 j 0 0 0 −i 0 0          j 0 0 0   i 0 0 0      T15 = T4T5 =   , T16 = T1T2T3 =    0 0 0 j   0 0 0 i          0 0 j 0 0 0 −i 0     0 −j 0 0 −1 0 0 0          j 0 0 0   0 1 0 0      T17 = T1T2T4 =  , T18 = T1T2T5 =    0 0 0 j   0 0 1 0          0 0 −j 0 0 0 0 −1     0 0 0 k 0 0 i 0          0 0 k 0   0 0 0 i      T19 = T1T3T4 =  , T20 = T1T2T3 =    0 k 0 0   i 0 0 0          k 0 0 0 0 i 0 0     0 0 j 0 −k 0 0 0          0 0 0 j   0 −k 0 0      T21 = T1T4T5 =  , T22 = T2T3T4 =    j 0 0 0   0 0 k 0          0 j 0 0 0 0 0 k

91     0 −i 0 0 0 −j 0 0          −i 0 0 0   −j 0 0 0      T23 = T2T3T5 =  , T24 = T2T4T5 =    0 0 0 i   0 0 0 j          0 0 i 0 0 0 j 0     0 0 0 k 0 0 0 k          0 0 −k 0   0 0 k 0      T25 = T3T4T5 =  , T26 = T1T2T3T4 =    0 k 0 0   0 −k 0 0          −k 0 0 0 −k 0 0 0     0 0 j 0 0 0 0 −k          0 0 0 j   0 0 k 0      T27 = T1T2T4T5 =  , T28 = T2T3T4T5 =    −j 0 0 0   0 k 0 0          0 −j 0 0 −k 0 0 0     −k 0 0 0 0 0 i 0          0 k 0 0   0 0 0 i      T29 = T1T3T4T5 =  , T30 = T1T2T3T5 =    0 0 k 0   −i 0 0 0          0 0 0 −k 0 −i 0 0     −k 0 0 0 1 0 0 0          0 k 0 0   0 1 0 0      T31 = T1T2T3T4T5 =  , T32 = I =    0 0 k 0   0 0 1 0          0 0 0 −k 0 0 0 1

Now, since the even subalgebra of Cl(2,4) is isomorphic to Cl(2,3), we carry out a parallel calculation with 4 × 4 complex matrices. The corresponding basis of 4 × 4 complex matrices is defined as

92     0 1 0 0 0 0 1 0          1 0 0 0   0 0 0 1      S1 =   ,S2 =    0 0 0 −1   1 0 0 0          0 0 −1 0 0 1 0 0     i 0 0 0 0 1 0 0          0 −i 0 0   −1 0 0 0      S3 =   ,S4 =    0 0 −i 0   0 0 0 −1          0 0 0 i 0 0 1 0     0 0 1 0 0 0 0 1          0 0 0 1   0 0 1 0      S5 =   ,S6 = S1S2 =    −1 0 0 0   0 −1 0 0          0 −1 0 0 −1 0 0 0     0 −i 0 0 −1 0 0 0          i 0 0 0   0 1 0 0      S7 = S1S3 =   ,S8 = S1S4 =    0 0 0 −i   0 0 −1 0          0 0 i 0 0 0 0 1     0 0 0 1 0 0 −i 0          0 0 1 0   0 0 0 i      S9 = S1S5 =   ,S10 = S2S3 =    0 1 0 0   i 0 0 0          1 0 0 0 0 −i 0 0     0 0 0 −1 −1 0 0 0          0 0 1 0   0 −1 0 0      S11 = S2S4 =   ,S12 = S2S5 =    0 1 0 0   0 0 1 0          −1 0 0 0 0 0 0 1

93     0 i 0 0 0 0 i 0          i 0 0 0   0 0 0 −i      S13 = S3S4 =   ,S14 = S3S5 =    0 0 0 i   i 0 0 0          0 0 i 0 0 −i 0 0     0 0 0 1 0 0 0 i          0 0 −1 0   0 0 −i 0      S15 = S4S5 =   ,S16 = S1S2S3 =    0 1 0 0   0 i 0 0          −1 0 0 0 −i 0 0 0     0 0 1 0 0 −1 0 0          0 0 0 −1   −1 0 0 0      S17 = S1S2S4 =   ,S18 = S1S2S5 =    1 0 0 0   0 0 0 −1          0 −1 0 0 0 0 −1 0     i 0 0 0 0 0 0 −1          0 i 0 0   0 0 1 0      S19 = S1S3S4 =   ,S20 = S1S3S5 =    0 0 −i 0   0 1 0 0          0 0 0 −i −1 0 0 0     0 0 −1 0 0 0 0 i          0 0 0 1   0 0 i 0      S21 = S1S4S5 =   ,S22 = S2S3S4 =    1 0 0 0   0 i 0 0          0 −1 0 0 i 0 0 0     i 0 0 0 0 1 0 0          0 −i 0 0   −1 0 0 0      S23 = S2S3S5 =   ,S24 = S2S4S5 =    0 0 i 0   0 0 0 1          0 0 0 −i 0 0 −1 0

94     0 0 0 i 0 0 i 0          0 0 i 0   0 0 0 i      S25 = S3S4S5 =   ,S26 = S1S2S3S4 =    0 −i 0 0   −i 0 0 0          −i 0 0 0 0 −i 0 0     −1 0 0 0 0 −i 0 0          0 1 0 0   −i 0 0 0      S27 = S1S2S4S5 =   ,S28 = S2S3S4S5 =    0 0 1 0   0 0 0 i          0 0 0 −1 0 0 i 0     0 0 i 0 0 −i 0 0          0 0 0 i   i 0 0 0      S29 = S1S3S4S5 =   ,S30 = S1S2S3S5 =    i 0 0 0   0 0 0 i          0 i 0 0 0 0 −i 0     −i 0 0 0 1 0 0 0          0 −i 0 0   0 1 0 0      S31 = S1S2S3S4S5 =   ,S32 =    0 0 −i 0   0 0 1 0          0 0 0 −i 0 0 0 1

We take a real linear combination of the vectors S1 to S32 and obtain a complex 4 × 4 matrix in Cl(2,3). For real constants a, b, c, …, π, δ, we get

aS1 + bS2 + cS3 + dS4 + eS5 + fS6 + gS7 + hS8 + lS9 + mS10 + nS11 + pS12 + qS13 + rS14 +

sS15 + tS16 + uS17 + vS18 + wS19 + xS20 + yS21 + zS22 + αS23 + βS24 + γS25 + σS26 + ηS27 +

µS28 + S29 + λS30 + πS31 + δS32  −h − p − η + δ + i(c + w − π + α)(a + d − v + β) + i(−g + q − µ − λ)    b + e + u − y + i(−m + r + σ + ) f + l − n + s + i(t + z − x + γ)  =   (a − d − v − β) + i(g + q − µ + λ) h − p + η + δ + i(−c + w − π − α)   f + l − n − s + i(−t + z − x + γ) b + e − u + y + i(m − r + σ + )

95  b − e + u + y + i(m + r − σ + ) −f + l − n + s + i(t + z + x − γ)   −h + p + η + δ + i(−c − w − π + α)(−a − d − v + β) + i(−g + q + µ + λ)    −f + l − n − s + i(−t + z − x − γ) b − e − u − y + i(−m − r − σ + )    (−a + d − v − β) + i(g + q + µ + λ) h + p − η + δ + i(c − w − π − α)

The isomorphism sends T1, …, T32 to S1, …, S32 respectively. Therefore we have the following theorem.

Theorem: Spin+(2, 4) consists of the matrices $X = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$ where:

• A and D commute with kσz and B and C anti-commute with kσz;

• det(θH(X)) = 1;

• X K4⁻¹ X* K4 = I.

This group is explicitly isomorphic, via the isomorphism which sends T1, …, T32 to S1, …, S32 respectively, to the group of 4 × 4 complex matrices Y which satisfy
$$Y^{*}(iM_{j\otimes 1})Y = iM_{j\otimes 1}$$
and det Y = 1. Since iM_{j⊗1} is explicitly unitarily similar to I_{2,2}, this group is explicitly unitarily similar to SU(2,2). Hence Spin+(2, 4) is explicitly isomorphic to SU(2,2).

96 CHAPTER 8

CONCLUSION

Explicit algorithms for inverting the double covering maps φp,q, for (p, q) ∈ {(2,1), (2,2), (3,2), (4,1)}, were provided. These methods can be extended to the general (p, q) case, at the cost of more computation. A brief, and necessarily incomplete, comparison of the methods proposed here with the formula in [16] follows. Both our methods and the method in [16] require considerable computation if n = p + q is large. Our methods require that a matrix form of φp,q be available first.

This is, in any case, a necessity if the principal aim of inversion is to relate matrix-theoretic properties of an element of the indefinite orthogonal group to those of its preimage(s) in the spin group. This aim is, in general, impossible to achieve when viewing φp,q only as an abstract map. Next, as n grows, the matrix entries of φp,q(Y) become quadratic expressions in a large number of variables. On the other hand, the formula in [16] requires the computation of a prohibitive number of determinants.

Next, among our methods, it is more direct to use Gröbner bases for inversion, provided the matrix form of φp,q has already been calculated. The systems of equations, and the attendant Gröbner basis calculations, become cumbersome if the polar decomposition is used instead of the Givens decomposition. This is why, in this work, we did not use Gröbner basis techniques. On the other hand, the number of systems to be solved when the Givens decomposition is employed is larger than when the polar decomposition is deployed. Note, however, that the number of such systems to be solved symbolically in the Givens case is typically lower than the number \binom{p+q}{2} of Givens factors, since there is repetition of these factors (albeit with different parameters), as noted in ii) of Remark 21 in Section 2.4.
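For the smallest indefinite case the polynomial-system route can be made concrete with a few lines of sympy. The sketch below uses the textbook action A · S = A S A^T of SL(2, R) on symmetric 2 × 2 matrices to produce a matrix form of the double cover onto SO+(2, 1); this model is an assumption made for illustration and its conventions need not coincide with the matrix form of φ2,1 derived in this work. The two preimages ±A of a target are then recovered by solving the attendant quadratic system.

import sympy as sp

a, b, c, d = sp.symbols('a b c d', real=True)
A = sp.Matrix([[a, b], [c, d]])

# Coordinates (x, y, z) of a symmetric matrix S = [[x + y, z], [z, x - y]];
# det S = x^2 - y^2 - z^2, a quadratic form of signature (1, 2).
def coords(S):
    return sp.Matrix([(S[0, 0] + S[1, 1]) / 2, (S[0, 0] - S[1, 1]) / 2, S[0, 1]])

# SL(2,R) acts on symmetric matrices by S -> A S A^T; the 3x3 matrix of this action
# (entries quadratic in a, b, c, d) is a matrix form of the double cover.
basis = [sp.eye(2), sp.diag(1, -1), sp.Matrix([[0, 1], [1, 0]])]
Phi = sp.Matrix.hstack(*[coords(A * B * A.T) for B in basis])

# Target: the image of a chosen A0 with det A0 = 1.
A0 = sp.Matrix([[2, 1], [1, 1]])
X = Phi.subs(list(zip((a, b, c, d), A0)))

# Inversion: the quadratic system Phi(A) = X together with det A = 1.
eqs = list(Phi - X) + [a * d - b * c - 1]
G = sp.groebner(eqs, a, b, c, d, order='lex')   # a lex Groebner basis triangularizes the system
sols = sp.solve(eqs, [a, b, c, d], dict=True)
print(sols)  # the two preimages: (a, b, c, d) = (2, 1, 1, 1) and (-2, -1, -1, -1)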

Finally, the Lie algebraic methods proposed require first that ψp,q be calculated. This is no harder than finding the entries of φp,q, but it is nevertheless a prerequisite. The inversion of ψp,q is, of course, orders of magnitude simpler than that of φp,q. However, to use it effectively for the inversion of φp,q, one needs to be able to compute exponentials of matrices in the spin Lie algebra easily. This is the basic tradeoff between Gröbner basis methods and the Lie algebraic method proposed here. For n ≤ 6, in most cases, there are explicit formulae for the exponential. To the extent that this can be extended to larger n, the Lie algebraic method becomes more competitive. On the other hand, for any n the exponentiation is always elementary when Givens factors are used, as Remark 32 shows. Lastly, the combination of linearization and Givens factors provides an alternative for the inversion of the abstract covering map, and also for agnostic inversion of the concrete covering map, for any n.
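For the compact analogue Spin(3) = SU(2) → SO(3), where everything is classical, the two Lie algebraic steps (invert the Lie algebra isomorphism, then exponentiate) take only a few lines. The sketch below is meant purely as an illustration of that pattern, with the standard convention dΦ(−(i/2)σ_k) = L_k playing the role of ψ; the indefinite cases treated in this work follow the same pattern with ψp,q in place of this map.

import numpy as np
from scipy.linalg import expm, logm

# Pauli matrices and the so(3) generators L_k, with dPhi(-(i/2) sigma_k) = L_k.
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
L = [np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float),
     np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float),
     np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)]

# Target rotation R in SO(3).
omega_true = np.array([0.3, -1.1, 0.7])
R = expm(sum(w * Lk for w, Lk in zip(omega_true, L)))

# Step 1: invert the Lie algebra isomorphism -- take a real logarithm of R and
# read off its so(3) coordinates.
A = np.real(logm(R))
omega = np.array([A[2, 1], A[0, 2], A[1, 0]])

# Step 2: exponentiate the corresponding su(2) element to get a preimage U (and -U).
U = expm(-0.5j * sum(w * s for w, s in zip(omega, sig)))

# Check: the covering map Phi(U)_{jk} = (1/2) Re tr(sigma_j U sigma_k U^*) equals R.
Phi_U = np.array([[0.5 * np.real(np.trace(sig[j] @ U @ sig[k] @ U.conj().T))
                   for k in range(3)] for j in range(3)])
assert np.allclose(Phi_U, R)
print("Phi(+/-U) = R recovered")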

REFERENCES

[1] F. Adjei, M. K. Dabkowski, Samreen S. Khan, V. Ramakrishna, Inversions of the Indefinite Double Covering Map, to appear in Journal of Combinatorics Information and System Sciences, Volume 45, 2020.

[2] Francis Adjei, Marcus Cisneros, Deep Desai, Samreen S. Khan, Viswanath Ramakrishna, Brandon Whiteley, Algorithms for the Polar Decomposition in Certain Groups and the Quaternion Tensor Square, to appear in Journal of Combinatorics Information and System Sciences, Volume 44, 2020.

[3] Adjei F., Inversion of the Indefinite Double Covering Map, Ph.D. Dissertation, The University of Texas at Dallas, 2017.

[4] E. Herzig, V. Ramakrishna, An Elementary, First Principles Approach to the Indefinite Spin Groups, Advances in Applied Clifford Algebras, 27(2), (2017), 1283–1311; doi:10.1007/s00006-016-0671-0.

[5] Ansari Y., Ramakrishna V., On the Non-compact Portion of Sp(4,R) via Quaternions, J. Phys. A: Math. Theor., 41, 335203, (2008), 1-12.

[6] Constantinescu T., Ramakrishna V., Spears N., Hunt L., Tong J., Panahi I., Kannan G., McFarlan D., Evans G., Christensen M., Composition Methods for Four-Port Couplers in Photonic Integrated Circuitry, J. of Optical Society of America A, 23 (2006), 2919- 2931.

[7] Dabkowski M., Herzig E., Ramakrishna V., Inversion of Double-Covering Map Spin(N) → SO(N,R) for N ≤ 6, J. Geom. Symmetry Phys., 42, (2016), 15-51.

[8] Dabkowski M., Herzig E., Ramakrishna V., Inversion of Double-Covering Map Spin(N) → SO(N,R) for N ≤ 6, Journal of Geometry and Symmetry in Physics, (2016).

[9] Gilmore, R., Lie Groups, Physics and Geometry: An Introduction for Physicists, Engi- neers and Chemists, Cambridge University Press, 2008.

[10] Herzig E., Ramakrishna V., An Elementary, First Principles Approach to the Indefinite Spin Groups, Advances in Applied Clifford Algebras, 27(2), (2017), 1283–1311.

[11] Hestenes D., Space-Time Algebra, Gordon and Breach, New York, 1966.

[12] Hile G. N., Lounesto P., Matrix Representations of Clifford Algebras, Linear Algebra and Its Applications, 128, (1990), 51-63.

[13] Kibler, M., On the Use of the Group SO(4,2) in Atomic and Molecular Physics, 102 (2004) 1221-1251.

[14] Lounesto P., Clifford Algebras and Spinors, Cambridge University Press, 2001.

[15] Perwass C., Geometric Algebra with Applications in Engineering, Springer-Verlag (2010).

[16] Porteous I., Clifford Algebras and the Classical Groups, Cambridge University Press, 1995.

[17] Rodman L., Topics in Quaternionic Linear Algebra, Princeton University Press, 2015.

[18] Shirokov D. S., Calculation of Elements of Spin Groups Using Method of Averaging in Clifford’s Geometric Algebra, Adv. Appl. Clifford Algebras, (2019), 29-50.

[19] Shirokov D. S., Calculation of Elements of Spin Groups Using Generalized Pauli’s The- orem, Adv. Appl. Clifford Algebras, (2015), 227-244.

BIOGRAPHICAL SKETCH

Samreen Sher Khan was born in Sahiwal and raised in Lahore, Pakistan. She is a PhD candidate in Mathematics at The University of Texas at Dallas. She is working on POLAR AND GIVENS DECOMPOSITION AND INVERSION OF INDEFINITE DOUBLE COVERING MAP under the supervision of Dr. Viswanath Ramakrishna.

She received her Bachelor of Science in Mathematics from Bahaudin Zakaria University Multan, Pakistan. She served as an undergraduate Mathematics teacher at Shiblee College. She continued her teaching career at the largest and most renowned school system, Beaconhouse Educational Services, in Lahore, Pakistan, as an O and A-level Mathematics teacher. Along with teaching, she served as the school coordinator at Beaconhouse.

She is one of the pioneers of introducing technological development in teaching and learning in Pakistan. She received certifications for professional teaching from St. Mark and John University, UK. She earned her diploma in information, communication and technology (DTWICT) with distinction from the University of Plymouth, England. She completed her master's degree in Educational Leadership and Management from Beaconhouse National University, Lahore, Pakistan in 2015.

The same year, she joined the PhD program at The University of Texas at Dallas, USA. She has been working as a teaching assistant at the Department of Mathematical Sciences since

Fall 2015. She completed her MS in Mathematics in 2018 from The University of Texas at Dallas and is currently pursuing a certification in Data Science along with her Doctoral Degree.

CURRICULUM VITAE

Samreen S Khan June, 2020

Contact Information:

Department of Natural Science and Mathematics
Email: [email protected]

The University of Texas at Dallas
800 W. Campbell Rd.
Richardson, TX 75080-3021, U.S.A.

Educational History:

BS, Mathematical Sciences, Bahaudin Zakaria University, Pakistan, 1998
MS, Educational Leadership and Management, Beaconhouse National University Lahore, PK, 2015

MS, Mathematics, University of Texas at Dallas, 2018
Graduate Certificate in Data Science, University of Texas at Dallas, 2020
PhD, Mathematics, University of Texas at Dallas, 2020

POLAR AND GIVENS DECOMPOSITION AND INVERSION OF INDEFINITE DOUBLE COVERING MAP
PhD Dissertation
Department of Natural Science and Mathematics
The University of Texas at Dallas
Advisor: Dr. Viswanath Ramakrishna

Employment History:

University of Texas at Dallas, Richardson, TX
Teaching Assistant, 2015–present

Beaconhouse Education System
Lecturer Mathematics / School Coordinator, 2004–2015
Taught O and A-level Mathematics (D-series and C-series) for classes with high enrollment; worked on curriculum development and paper setting; served as School Coordinator.

Shiblee College, Faisalabad, Pakistan
Lecturer Mathematics, 2000–2004

Professional Recognitions and Honors:

• Distinction in BS Mathematics, Pakistan

• Presented research work at the TWIMS 2020 conference at Texas A&M University, College Station. Title: ‘Polar Decomposition in certain Lie Groups’

• Presented a paper on “The Split Bregman Method for L1-Regularized Problems”; member of the Image and Signal club in the Department of Mathematical Sciences, University of Texas at Dallas, USA.

• Judged the undergraduate poster session at the AMS (American Mathematical Society) Conference 2017, held in Atlanta, Georgia, USA, and Science Fair Projects in Plano District High Schools from 2018 to present.

• Teacher Trainer - Trained new staff teachers in key areas of lesson planning, classroom management and teaching strategies, as well as dealing with students of various abilities.

• ETAC Ambassador (Emerging Technology Across the Curriculum)

Professional Certifications and Trainings:

St. Mark and John University, UK: Diploma in Information Communication Technology (DTWICT) and Certificate in Professional Education
University of Texas at Dallas, USA: Graduate Certification in Data Science; Graduate Teaching Certification
Beaconhouse National University, PK: Summer School on Teaching Strategies

Publications:

• Inversions of the Indefinite Double Covering Map, to appear in Journal of Combinatorics Information and System Sciences, Vol. 45. F. Adjei, M. K. Dabkowski, Samreen S. Khan, V. Ramakrishna.

• Algorithms for the Polar Decomposition in Certain Groups and the Quaternion Tensor Square, to appear in Journal of Combinatorics Information and System Sciences, Vol. 44. Francis Adjei, Marcus Cisneros, Deep Desai, Samreen S. Khan, Viswanath Ramakrishna, Brandon Whiteley.

• Samreen S. Khan, The Role of Organizational Leadership on Use of Learning Management System (LMS) Integrated with Classroom Education from the User's Perspective, M.Phil. Thesis, Beaconhouse National University, Lahore, Pakistan, July 2015.

Professional Memberships:

American Mathematical Society (AMS)
Society for Industrial and Applied Mathematics (SIAM)
Mathematical Association of America (MAA)
Canadian Mathematical Society (CMS)