
EXPONENTIALS OF REAL SKEW-SYMMETRIC MATRICES

IN TERMS OF THEIR EIGENVALUES

A Thesis

Presented to the

Faculty of

California State Polytechnic University, Pomona

In Partial Fulfillment

Of the Requirements for the Degree

Master of Science

In

Mathematics

By

Diego Gerardo Andreé Avalos Gálvez

2018

SIGNATURE PAGE

THESIS: EXPONENTIALS OF REAL SKEW-SYMMETRIC MATRICES IN TERMS OF THEIR EIGENVALUES

AUTHOR: Diego Gerardo Andreé Avalos Gálvez

DATE SUBMITTED: Spring 2018

Dr. Hubertus von Bremen
Thesis Committee Chair
Department of Mathematics & Statistics

Dr. Randall Swift Mathematics & Statistics

Dr. Jennifer Switkes Mathematics & Statistics

ACKNOWLEDGMENTS

I would like to thank every math instructor I have had, because they all have kindly shared their mathematical knowledge with me, put up with my carefree attitude, and helped shape the person that I am now. To my primary and secondary school math teachers, my community college instructors, and my graduate school professors: sincerely, thank you all. Especially Dr. von Bremen, who stands out for being not only a thesis adviser, but a life adviser, a motivator, and a true human being. Furthermore, I would like to thank Dr. Swift and Dr. Switkes for kindly accepting the invitation to be part of my thesis committee on such short notice, and for their willingness to share their expertise in this thesis. I also would like to thank all my family in Peru, who, even when I lose contact with them for long periods of time, have always been proud of me: the first mathematician of the family. Most importantly, I want to express my deepest gratitude and admiration to my very first mentor: my mother, who never stopped believing in me and is always pushing me to achieve greater things, even during my most stubborn moments. I am truly the luckiest son in the world.

ABSTRACT

The eigenvalues of an n × n real nonzero skew-symmetric matrix S are purely imaginary or zero. Let the list of distinct purely imaginary eigenvalues of S be ±θ_1 i, ..., ±θ_p i, where each θ_j > 0. Using the method of Gallier and Xu [2], we demonstrate algebraically that the exponential e^S can be expressed in terms of the powers I_n, S, ..., S^{n−1}, with coefficients given in terms of the distinct values θ_j. Furthermore, the formulas for e^S (in terms of the θ_j) depend solely on the number of distinct eigenpairs ±θ_j i of S and on whether zero is an eigenvalue of S; they are independent of the algebraic multiplicities. Only the formulas for the θ_j (in terms of the entries of S) depend on the multiplicities of the θ_j i in the characteristic polynomial of S. This allows us to determine that e^S has n − 1 different cases if n is even, and (n − 1)/2 cases if n is odd. In this thesis, we calculate all the closed form formulas of e^S for 2 ≤ n ≤ 9, because the eigenvalues of S can be obtained in terms of its entries up to the case n = 9 using the linear, quadratic, cubic, and quartic formulas. Nevertheless, the theory allows us to calculate the closed formula of e^S for any n, provided the eigenvalues of S are known. Lastly, we implement the formulas obtained in this thesis in our Matlab function skewexpm and compare the orthogonality errors produced by our formulas on randomly generated skew-symmetric matrices to those obtained by applying Matlab's expm. It turns out that our formulas give a smaller error than expm over 97% of the time up to size n = 5, over 92% of the time up to size n = 7, and over 60% of the time for sizes n = 8 and 9 (see Table 5.1). Finally, if we allow the entries of a skew-symmetric matrix to range from −10^{15} to 10^{15}, our closed formulas can be relied upon to give a far smaller, acceptable error compared to expm, as our example with n = 9 illustrates in Figure 5.9.

Contents

1 Introduction

2 Preliminary Concepts

3 The Theory

4 Computing the Formulas of e^S

4.1 S ∈ M_1
4.2 S ∈ M_2
4.3 S ∈ M_3
4.4 S ∈ M_4
4.5 S ∈ M_5
4.6 S ∈ M_6
4.7 S ∈ M_7
4.8 S ∈ M_8
4.9 S ∈ M_9
4.10 Special Cases for S ∈ M_n, n ≥ 10

5 Computer Results and Conclusions

Bibliography

A Roots of Polynomials

A.1 The Linear Case and the Quadratic Formula
A.2 The Cubic Formula
A.3 The Quartic Formula

B Matlab Codes

B.1 The Skewexpm Function
B.2 The Skewsymgenerator Function

List of Tables

5.1 Expected Performance Rate of skewexpm against expm

List of Figures

5.1 Case 4.2 (n = 2) Closed Formula Error vs. Matlab's expm
5.2 Case 4.3 (n = 3) Closed Formula Error vs. Matlab's expm
5.3 Case 4.4.3 (n = 4) Closed Formula Error vs. Matlab's expm
5.4 Case 4.5.2 (n = 5) Closed Formula Error vs. Matlab's expm
5.5 Case 4.6.5 (n = 6) Closed Formula Error vs. Matlab's expm
5.6 Case 4.7.3 (n = 7) Closed Formula Error vs. Matlab's expm
5.7 Case 4.8.7 (n = 8) Closed Formula Error vs. Matlab's expm
5.8 Case 4.9.4 (n = 9) Closed Formula Error vs. Matlab's expm
5.9 Case 4.9.4 (n = 9) Closed Formula Error vs. Matlab's expm with Matrices of Large Entries

Chapter 1

Introduction

The problem of numerically computing the exponential of a square, complex-valued matrix A is difficult if we seek to keep the error small, and the method of choice for controlling this error usually depends on the properties of A. To appreciate the difficulty, note that Moler and Van Loan [7] refer to all existing methods of computing e^A as dubious. For a general A, Moler and Van Loan summarize five classes of methods for computing its matrix exponential: series, ODE, polynomial, matrix decomposition, and splitting methods. Higham [5] studies two such methods, which are often used together in computer algorithms for the matrix exponential (e.g., Matlab's expm and Mathematica's MatrixExp): scale and square, and Padé approximation. The algorithm consists of choosing a positive integer s so that ||2^{-s}A|| is appropriately bounded, and then using an [m/m] Padé approximant of e^X, namely a rational function R(X) = Q(X)^{-1}P(X), where Q(X) and P(X) are two polynomials of degree m such that Q(X) = P(−X). Finally, the matrix exponential is approximated as e^A ≈ [R(2^{-s}A)]^{2^s}, and the power of 2^s is dealt with by repeated squaring. However, error problems may arise when using this method. For example, Golub and Van Loan [3] explain that some matrices may grow exponentially before decaying during the squaring process of R(2^{-s}A). Furthermore, Al-Mohy and Higham [1] address the fact that if the integer s is not chosen carefully and ends up being too large, a phenomenon called overscaling can cause a significant loss of accuracy. They then present a new version of the scale and square method, which imposes stricter error bounds by choosing a minimal integer s and taking m ≤ 13. This improved method is how matrix exponentials are computed in the newer versions of Matlab.
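The following MATLAB function is a minimal sketch of the scale-and-square/Padé idea just described; it is our own illustration, not the tuned algorithm of Al-Mohy and Higham. It uses a fixed [6/6] Padé approximant, a crude choice of s, and the standard closed form of the Padé coefficients of e^X; the function name is hypothetical.

% Sketch only: fixed [m/m] = [6/6] Pade approximant, crude scaling choice.
function E = pade_expm_sketch(A)
    m = 6;
    s = max(0, ceil(log2(norm(A, 1))));   % scale so that ||A/2^s|| <= 1
    As = A / 2^s;
    n = size(A, 1);
    P = zeros(n); Q = zeros(n); Apow = eye(n);
    for j = 0:m
        % Pade numerator coefficient c_j = (2m-j)! m! / ((2m)! j! (m-j)!)
        c = factorial(2*m - j) * factorial(m) / ...
            (factorial(2*m) * factorial(j) * factorial(m - j));
        P = P + c * Apow;                 % P(X) = sum_j c_j X^j
        Q = Q + c * (-1)^j * Apow;        % Q(X) = P(-X)
        Apow = Apow * As;
    end
    E = Q \ P;                            % R(As) = Q(As)^{-1} P(As)
    for k = 1:s                           % undo the scaling by squaring
        E = E * E;
    end
end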

In this thesis, we present yet another dubious method to calculate the matrix exponential of one heavily studied class of matrices, the real skew-symmetric matrices S, by obtaining closed formulas for e^S in terms of the eigenvalues and powers of S, and we then compare our method with Matlab's expm. One application where accurate computation of exponentials of skew-symmetric matrices is required is the computation of Lyapunov characteristic exponents of continuous dynamical systems using the e^S-method described by von Bremen [11], where real skew-symmetric matrices are used to characterize orthogonal matrices Q in SO(n) with fewer than n^2 parameters. In his paper, von Bremen presents closed formulas of e^S for S of sizes 4 and 5 by using functions of matrices. Piamonte [9] obtains formulas similar to the ones contained in this paper, up to size 7, also by using functions of matrices, but with some missing cases and one overlapping case. Moreover, Politi [10] and Oshinuga [8] independently calculated the formulas of e^S when S is of size 4. We demonstrate that such formulas of e^S can be calculated for any real skew-symmetric matrix S of arbitrary size n, as long as the eigenvalues are known; to find the closed form formulas, one only needs to solve a linear system of matrix equations. The method we use is from Gallier and Xu [2], who show that e^S has a Rodrigues-like formula (see Section 4.3 for Rodrigues' formula) which depends on the number of distinct nonzero eigenvalues of S. We show that the nonzero eigenvalues of a real skew-symmetric matrix are purely imaginary, so the characteristic polynomial P(X) of S has a coefficient of zero for each odd power of X when S has even size, and for each even power of X when S has odd size. Because the roots of a general polynomial equation of degree five or higher cannot be expressed in terms of its coefficients, we can obtain the eigenvalues of S in terms of the coefficients of its characteristic polynomial (and, thus, in terms of the entries of S) only up to real skew-symmetric matrices of size 9. Accordingly, we derive e^S explicitly for skew-symmetric matrices up to size 9, but show that such formulas can be derived for real skew-symmetric matrices of arbitrary size.

In Chapter 2, we present a list of basic mathematical notions that will aid us in developing the theory of this thesis. The theory itself is presented in Chapter 3, where our ultimate goal is to prove that the eigenvalues of any skew-symmetric matrix S ∈ M_n (the space of n × n real matrices) are either purely imaginary or zero, and that if we denote the distinct nonzero eigenvalues by ±θ_1 i, ..., ±θ_p i, with θ_j > 0 for j = 1, ..., p, then there exist p skew-symmetric matrices S_1, ..., S_p ∈ M_n with the following four properties:

$$S = \theta_1 S_1 + \cdots + \theta_p S_p,$$
$$S_j S_k = S_k S_j = 0_n \quad \text{for } j \neq k,$$
$$S_j^3 = -S_j \quad \text{for each } j = 1, \ldots, p,$$

and

$$e^S = I_n + \sum_{j=1}^{p} \left[\sin\theta_j\, S_j + (1 - \cos\theta_j)\, S_j^2\right].$$

In Chapter 4, we obtain the formulas of e^S by solving for each S_j and S_j^2 algebraically in terms of the powers I_n, S, ..., S^{n−1} and the θ_j. This allows us to calculate closed, finite formulas of e^S for any skew-symmetric matrix S. We obtain the formulas of e^S, for S of size less than or equal to nine, in terms of powers of S and the coefficients of the characteristic polynomial of S with the aid of the linear, quadratic, cubic, and quartic formulas. For the formulas of the roots of all polynomials up to degree four in terms of their coefficients, refer to Appendix A.

Finally, we present our Matlab results in Chapter 5. We randomly generate one thousand real skew-symmetric matrices S with distinct eigenvalues, using our function skewsymgenerator, for each of the cases 2 ≤ n ≤ 9. For each n, we implement the formulas obtained in this thesis in our function skewexpm, and then compute and plot the orthogonality errors obtained from using both our function and Matlab's expm on the generated matrices in Figures 5.1 to 5.8. We summarize our results in Table 5.1 and briefly state our conclusions, namely, that our closed formulas are expected to perform better than Matlab for skew-symmetric matrices of size up to 7, with a significant decrease in performance for the two larger sizes. Finally, we show empirical evidence that our closed formulas give a much more acceptable error than expm when used on skew-symmetric matrices with very large entries. For the code of the functions implemented in this thesis, refer to Appendix B.

Chapter 2

Preliminary Concepts

In this chapter, we present the background needed for Chapter 3, where the theory of our method is fully developed in a purely algebraic manner. The notions presented in Chapters 2 and 3 of this thesis are partly from Horn and Johnson [6], unless specified otherwise.

Let M_n represent the noncommutative ring of all real n × n matrices, with zero 0_n and identity I_n, and let the standard ordered basis of the vector space R^n be {e_1, ..., e_n}, with zero vector 0. We will sometimes use the symbol ∗ inside a matrix to represent some of its entries, and isolated 0s to represent an area of the matrix which is completely filled with zeros.

Definition 2.1. If A ∈ M_n, we define the exponential of A as

$$e^A = I_n + \sum_{k=1}^{\infty} \frac{1}{k!} A^k. \tag{2.1}$$

It is important to note that the series defining the exponential of a matrix always converges. Furthermore, e^{A+B} = e^A e^B for any commuting pair A, B ∈ M_n.

For any matrix A, we denote its transpose by A^T. Since vectors are matrices, we can take transposes of vectors as well. Observe that for x, y ∈ R^n, the binary operation x^T y induces an inner product on R^n with vector norm ||x|| = √(x^T x). We define unit vectors as those with a norm of 1.
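As a quick illustration of Definition 2.1 (not part of the thesis's method), the partial sums of the series converge rapidly to expm(A) for a small test matrix:

% Partial sums of Eq. (2.1) versus Matlab's expm, for a skew-symmetric A.
A = [0 1; -1 0];           % e^A is then a plane rotation
E = eye(2); term = eye(2);
for k = 1:25
    term = term * A / k;   % accumulates A^k / k!
    E = E + term;
end
norm(E - expm(A))          % negligible once enough terms are summed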

Definition 2.2. A matrix S ∈ M_n is symmetric if S^T = S and skew-symmetric if S^T = −S.

For A ∈ M_n, we denote the trace of A by tr A and the determinant of A by det A. Moreover, A is nonsingular (or invertible) if and only if det A ≠ 0; consequently, a product of invertible matrices is also invertible. If A is invertible, we define A^{-1}, the unique inverse of A, as the matrix such that AA^{-1} = A^{-1}A = I_n.

The direct sum A = ⊕_{j=1}^{m} A_j ∈ M_n signifies that A is block diagonal of the form

$$A = \begin{bmatrix} A_1 & & 0 \\ & \ddots & \\ 0 & & A_m \end{bmatrix}$$

for some smaller square matrices A_j. If all the A_j are scalars and m = n, then A is a diagonal matrix whose main diagonal consists of the entries A_j. If some of the A_j are nondiagonal matrices, then A is called quasidiagonal. Also, a matrix that has zeroes below its main diagonal is called upper-triangular (a block upper-triangular matrix is defined analogously).

We now introduce a class of matrices that we will be using throughout this paper.

Definition 2.3. A matrix Q ∈ M_n is orthogonal if Q^{-1} = Q^T.

Observe that Q = ⊕_{j=1}^{m} Q_j is orthogonal if and only if each Q_j is orthogonal, and that the columns of an orthogonal Q ∈ M_n form an orthonormal basis of R^n. Moreover, orthogonal matrices have the important property det Q = ±1.

Definition 2.4. The characteristic polynomial (or characteristic equation) of a matrix A ∈ M_n is the polynomial in X with real coefficients given by

$$P(X) = \det(XI_n - A). \tag{2.2}$$

Observe that the coefficients of P(X) are real because A is real-valued, and the leading coefficient of P(X) is always 1. We denote a root of P(X) by λ ∈ C (or θ ∈ C), and each root of P(X) is defined to be an eigenvalue of A. Any nonzero vector x ∈ C^n such that Ax = λx is an eigenvector of A associated with the eigenvalue λ. The algebraic multiplicity of an eigenvalue λ is its multiplicity as a root of P(X); we will refer to the algebraic multiplicity of an eigenvalue simply as its multiplicity. From now on, we reserve the letter i to denote the imaginary unit √−1, and the conjugate of a matrix A is denoted by Ā. Since the coefficients of P(X) are real, if λ ∈ C is an eigenvalue of A, so is λ̄. An eigenpair of a matrix is a set of two nonreal eigenvalues a ± bi, with a, b ∈ R. Note that if the eigenvector x ∈ C^n is associated with the eigenvalue λ, then x̄ is an eigenvector associated with the eigenvalue λ̄.

We say A ∈ M_n is similar to B ∈ M_n if there exists a nonsingular P ∈ M_n such that A = PBP^{-1}. In particular, two similar matrices have the same set of eigenvalues.

We now present three theorems which play an important role in our theory.

Theorem 2.5 (Cayley-Hamilton). Let A ∈ M_n be a real matrix with characteristic polynomial

$$P(X) = X^n + a_{n-1}X^{n-1} + \cdots + a_1 X + a_0, \tag{2.3}$$

where the a_j are real scalars. Then

$$A^n + a_{n-1}A^{n-1} + \cdots + a_1 A + a_0 I_n = 0_n. \tag{2.4}$$

Furthermore, a_0 = (−1)^n det A, so A is nonsingular if and only if a_0 ≠ 0.
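Theorem 2.5 is easy to check numerically in base MATLAB: poly(A) returns the coefficients of the characteristic polynomial, and polyvalm evaluates a polynomial at a matrix argument. This check is our own illustration:

% A satisfies its own characteristic polynomial, up to rounding error.
A = [1 2; 3 4];
p = poly(A);                   % [1, a_{n-1}, ..., a_1, a_0]
norm(polyvalm(p, A))           % ~0, by Cayley-Hamilton
p(end) - (-1)^2 * det(A)       % ~0: a_0 = (-1)^n det A with n = 2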

Theorem 2.6 (QR Factorization). Let A ∈ Mn be given. Then there exists an orthogonal matrix Q ∈ Mn and an upper-triangular matrix R ∈ Mn such that A = QR.

The following theorem is taken from Hall [4]; it gives us a way to measure absolute error when computing the formulas of e^S in Matlab.

Theorem 2.7. Let S ∈ M_n be a real skew-symmetric matrix. Then e^S is a real orthogonal matrix such that det e^S = 1.

Thus, in Chapter 5, we will use ||I_n − Q̃^T Q̃|| as our measure of orthogonality error, where Q̃ is the numerically computed e^S and || · || is the Frobenius norm.
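In MATLAB, this error measure is a one-liner; the helper name below is ours:

% Orthogonality error of a numerically computed exponential Qt of S.
orth_err = @(Qt) norm(eye(size(Qt)) - Qt' * Qt, 'fro');

S = [0 2; -2 0];
orth_err(expm(S))   % small, since expm(S) is orthogonal up to rounding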

Chapter 3

The Theory

We start building our theory by considering the following well-known factorization.

Theorem 3.1 (Schur). Let A ∈ M_n. Then there exists an orthogonal matrix Q ∈ M_n such that Q^T AQ is an upper-quasitriangular matrix in M_n, that is, it has the form

$$Q^T A Q = \begin{bmatrix} A_1 & & & \ast \\ & A_2 & & \\ & & \ddots & \\ 0 & & & A_m \end{bmatrix} \tag{3.1}$$

where each A_j is a matrix of size 1 × 1 or 2 × 2. Furthermore, each 1 × 1 diagonal block displays a real eigenvalue of A, and each 2 × 2 diagonal block has a conjugate pair of nonreal eigenvalues of A. The order of the A_j can be manipulated as desired.

Proof. We prove the claim by constructing the orthogonal matrix Q above through a process called deflation. Let A ∈ M_n be given. Suppose first that A has a real eigenvalue λ, so we make A_1 in Eq. (3.1) the 1 × 1 matrix [λ]. Let u_1 ∈ R^n be a unit eigenvector of λ, that is, Au_1 = λu_1, and let u_2, ..., u_n ∈ R^n be such that U_1 = [u_1 u_2 ··· u_n] ∈ M_n is an orthogonal matrix. Since the columns of U_1 form an orthonormal basis of R^n, we have u_1^T u_j = 0 for any j ≠ 1. Consider

$$U_1^T A U_1 = \begin{bmatrix} u_1^T \\ u_2^T \\ \vdots \\ u_n^T \end{bmatrix} \begin{bmatrix} \lambda u_1 & Au_2 & \cdots & Au_n \end{bmatrix} = \begin{bmatrix} \lambda u_1^T u_1 & u_1^T Au_2 & \cdots & u_1^T Au_n \\ \lambda u_2^T u_1 & u_2^T Au_2 & \cdots & u_2^T Au_n \\ \vdots & \vdots & & \vdots \\ \lambda u_n^T u_1 & u_n^T Au_2 & \cdots & u_n^T Au_n \end{bmatrix} = \begin{bmatrix} A_1 & \ast \\ 0 & \hat{A} \end{bmatrix}.$$

We have deflated A to the matrix above, where Â ∈ M_{n−1}.

Now suppose A has an eigenpair λ = a + bi, λ̄ with a, b ∈ R, b > 0, and instead we make A_1 in Eq. (3.1) a 2 × 2 matrix whose eigenvalues are a ± bi. Observe that we can write an eigenvector associated with λ as x = u + vi, where u, v ∈ R^n, so that λ̄ has the eigenvector x̄ = u − vi associated with it. Substituting into Ax = λx, we obtain Au = au − bv and Av = av + bu. Set

$$B = \begin{bmatrix} a & b \\ -b & a \end{bmatrix}$$

and note that A[u\ v] = [u\ v]B. Since λ and λ̄ are distinct, x and x̄ must be linearly independent; that is, the only solution to k_1 x + k_2 x̄ = 0 is the trivial one, so the only solution to (k_1 + k_2)u + i(k_1 − k_2)v = 0 is also trivial. Thus, u and v are linearly independent, which means we can choose additional columns W so that W_1 = [u\ v\ W] ∈ M_n is nonsingular. Observe that since W_1 \begin{bmatrix} I_2 \\ 0 \end{bmatrix} = [u\ v], we have W_1^{-1}[u\ v] = \begin{bmatrix} I_2 \\ 0 \end{bmatrix}. Consider

$$W_1^{-1} A W_1 = W_1^{-1} \begin{bmatrix} A[u\ v] & AW \end{bmatrix} = W_1^{-1} \begin{bmatrix} [u\ v]B & AW \end{bmatrix} = \begin{bmatrix} B & \ast \\ 0 & \hat{A} \end{bmatrix}$$

where Â ∈ M_{n−2}. This deflation produces a real 2 × 2 matrix B with eigenvalues λ and λ̄.

Let us say that the next eigenvalue of A we want to place in Eq. (3.1) is µ ∈ R, so A_2 = [µ]. Take Â from either of the two cases above, and use the same deflation method as in the case where λ is real. This yields

$$V_1^T \hat{A} V_1 = \begin{bmatrix} \mu & \ast \\ 0 & A' \end{bmatrix}$$

for some orthogonal matrix V_1 in M_{n−1} (or M_{n−2} if A_1 is 2 × 2) and some A′ in M_{n−2} (or M_{n−3} if A_1 is 2 × 2). If A_1 ∈ M_1, let U_2 = I_1 ⊕ V_1, which is orthogonal (if A_1 ∈ M_2, let W_2 = I_2 ⊕ V_1, which makes W_2 invertible); hence we obtain either

$$(U_1U_2)^T A (U_1U_2) = U_2^T (U_1^T A U_1) U_2 = \begin{bmatrix} \lambda & \ast & \ast \\ 0 & \mu & \ast \\ 0 & 0 & A' \end{bmatrix}$$

or

$$(W_1W_2)^{-1} A (W_1W_2) = W_2^{-1} (W_1^{-1} A W_1) W_2 = \begin{bmatrix} B & \ast & \ast \\ 0 & \mu & \ast \\ 0 & 0 & A' \end{bmatrix}.$$

If instead we want to place another eigenpair in A_2, we can do so by deflating Â in a similar manner as for the first eigenpair.

Continuing to deflate in this manner, it takes only finitely many steps to obtain a product of nonsingular and orthogonal matrices (of the form W_1, W_2 and U_1, U_2, respectively), which we denote by T ∈ M_n, such that

$$T^{-1} A T = \begin{bmatrix} B_1 & & & \ast \\ & B_2 & & \\ & & \ddots & \\ 0 & & & B_m \end{bmatrix}$$

where each B_j is a 1 × 1 block displaying a real eigenvalue of A, or a 2 × 2 block displaying the real and imaginary parts of a nonreal eigenpair a ± bi as

$$\begin{bmatrix} a & b \\ -b & a \end{bmatrix}.$$

Use Theorem 2.6 to factor T as T = QR, and partition the upper-triangular matrix R so that the blocks R_j are smaller upper-triangular matrices along the diagonal of R, conformally with the B_j in T^{-1}AT. Then

$$Q^T A Q = R(T^{-1}AT)R^{-1} = \begin{bmatrix} R_1B_1R_1^{-1} & & & \ast \\ & R_2B_2R_2^{-1} & & \\ & & \ddots & \\ 0 & & & R_mB_mR_m^{-1} \end{bmatrix} = \begin{bmatrix} A_1 & & & \ast \\ & A_2 & & \\ & & \ddots & \\ 0 & & & A_m \end{bmatrix}$$

where each A_j is either 1 × 1, displaying a real eigenvalue of A, or 2 × 2 with an eigenpair of A, because A_j and B_j are similar for each j = 1, ..., m. Note that the order in which we place the eigenvalues of A in the A_j can be chosen as we fancy.

Definition 3.2. A matrix A ∈ M_n is normal if it commutes with its transpose, that is, AA^T = A^T A.

For instance, symmetric and skew-symmetric matrices are normal. Observe that if a direct sum A = ⊕_{j=1}^{m} A_j is normal, then it commutes with A^T = ⊕_{j=1}^{m} A_j^T, so each A_j is normal because it commutes with A_j^T. The converse is also true: if each A_j is normal, then A = ⊕_{j=1}^{m} A_j is normal. Furthermore, if A ∈ M_n is normal and there exist an orthogonal matrix Q ∈ M_n and B ∈ M_n such that A = QBQ^T, then B is also normal, since AA^T = QBQ^T QB^T Q^T = QBB^T Q^T and A^T A = QB^T Q^T QBQ^T = QB^T BQ^T, so that BB^T = B^T B.

Lemma 3.3. A block upper-triangular matrix is normal if and only if each of its off-diagonal blocks is zero and each of its diagonal blocks is normal.

Proof. Let A ∈ M_n be normal and block upper-triangular, and partition it as

$$A = \begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix}.$$

The equality AA^T = A^T A then yields

$$\begin{bmatrix} A_{11}A_{11}^T + A_{12}A_{12}^T & \ast \\ \ast & \ast \end{bmatrix} = \begin{bmatrix} A_{11}^T A_{11} & \ast \\ \ast & \ast \end{bmatrix},$$

so in particular A_{11}A_{11}^T + A_{12}A_{12}^T = A_{11}^T A_{11}. Since A_{11}A_{11}^T and A_{11}^T A_{11} have the same trace, taking the trace of this equality yields tr(A_{12}A_{12}^T) = 0. We have shown that the sum of the squared norms of the columns of A_{12} equals zero, which implies that A_{12} is a zero matrix of the appropriate size. Next, we can partition A_{22} in the same manner we partitioned A, and continue this process until all the off-diagonal blocks of A are shown to be zero. Consequently, each diagonal block of A must be normal.

Conversely, if all the off-diagonal blocks of A ∈ M_n are zero and its diagonal blocks are normal, we can write A = ⊕_{j=1}^{m} A_j where each A_j is normal. Thus, A is normal.

Let us now present a theorem about the nature of the eigenvalues of symmetric and skew-symmetric matrices.

Theorem 3.4. The eigenvalues of a symmetric matrix are real, and the eigenvalues of a skew-symmetric matrix are purely imaginary or zero.

Proof. Let A ∈ M_n and let x ∈ C^n be an eigenvector associated with the eigenvalue λ ∈ C of A. Thus we have the equations Ax = λx and Ax̄ = λ̄x̄. If we left-multiply the first equation by x̄^T and the second by x^T, we obtain the identities

$$\bar{x}^T A x = \lambda\, \bar{x}^T x \quad \text{and} \quad x^T A \bar{x} = \bar{\lambda}\, x^T \bar{x}.$$

Observe that x̄^T x = x^T x̄ > 0. Suppose A is symmetric; then (x̄^T Ax)^T = x^T A^T x̄ = x^T Ax̄, so subtracting the identities above gives λ − λ̄ = 0, which implies λ ∈ R. Suppose now that A is skew-symmetric; then (x̄^T Ax)^T = x^T A^T x̄ = −x^T Ax̄, so adding the identities above gives λ + λ̄ = 0, which implies that λ is either zero or purely imaginary.

Lemma 3.5. Suppose that

$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \in M_2$$

is normal and has a conjugate pair of nonreal eigenvalues. Then c = −b ≠ 0 and d = a. Consequently, the eigenvalues of A are a ± bi.

Proof. Since A is normal, AA^T = A^T A is equivalent to

$$\begin{bmatrix} a^2 + b^2 & ac + bd \\ ac + bd & c^2 + d^2 \end{bmatrix} = \begin{bmatrix} a^2 + c^2 & ab + cd \\ ab + cd & b^2 + d^2 \end{bmatrix}.$$

From the first entry of the equation above, we obtain c^2 = b^2. Assume c = b; then A is symmetric, so it has real eigenvalues by Theorem 3.4. Therefore, we must have c = −b instead, and b ≠ 0 because A would otherwise be diagonal. Additionally, −ab + bd = ab − bd yields a = d. Finally, observe that A has the form

$$A = \begin{bmatrix} a & b \\ -b & a \end{bmatrix},$$

so its eigenvalues are a ± bi.

Theorem 3.6. Let A ∈ M_n be normal. Then there is an orthogonal matrix Q ∈ M_n such that Q^T AQ is a real quasidiagonal matrix of the form

$$Q^T A Q = \bigoplus_{j=1}^{m} A_j \tag{3.2}$$

where each A_j is either 1 × 1 or 2 × 2, with the following properties: each 1 × 1 matrix in the direct sum displays a real eigenvalue of A, and each 2 × 2 matrix has the form

$$\begin{bmatrix} a & b \\ -b & a \end{bmatrix} \tag{3.3}$$

with b > 0, and has a pair of nonreal eigenvalues of A of the form a ± bi. The order of the A_j can be manipulated as desired.

Proof. Theorem 3.1 ensures the existence of an orthogonal Q ∈ M_n such that Q^T AQ is upper-quasitriangular of the form in Eq. (3.1), where each 1 × 1 block A_j displays a real eigenvalue of A, and each 2 × 2 block has an eigenpair of A as its eigenvalues. Moreover, Q^T AQ is normal because A is normal, so Q^T AQ is actually block diagonal by Lemma 3.3 and has the form in Eq. (3.2). Hence, each A_j is normal. In particular, each 2 × 2 block A_j is normal and has a conjugate pair of nonreal eigenvalues, so it has the form in Eq. (3.3) by Lemma 3.5. We can force b > 0 by applying a similarity via the orthogonal matrix

$$\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$$

where necessary. Theorem 3.1 allows us to choose the order of the A_j as we desire.

15 From now on, our theory will focus on skew-symmetric matrices.

Corollary 3.7. Let S ∈ M_n, n ≥ 2, be a nonzero skew-symmetric matrix with distinct purely imaginary eigenvalues ±θ_1 i, ..., ±θ_p i, where each θ_j > 0, j = 1, ..., p. Then there exist an orthogonal matrix Q ∈ M_n and a positive integer m such that S = QEQ^T, where E ∈ M_n is a block diagonal matrix of the form

$$E = \begin{bmatrix} E_1 & & & \\ & \ddots & & \\ & & E_m & \\ & & & 0_{n-2m} \end{bmatrix} \tag{3.4}$$

and each block E_k ∈ M_2, k = 1, ..., m, has the form

$$E_k = \theta_j \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \tag{3.5}$$

for some θ_j, j = 1, ..., p. Furthermore, the number of repetitions of a particular block E_k in Eq. (3.4) equals the multiplicity of the corresponding eigenvalue θ_j i in the characteristic polynomial of S.

Proof. Suppose S ∈ M_n, n ≥ 2, is a nonzero skew-symmetric matrix; by Theorem 3.4 the eigenvalues of S are purely imaginary or zero. Denote the 2p (p ≥ 1) distinct purely imaginary eigenvalues by ±θ_1 i, ..., ±θ_p i with θ_j > 0, j = 1, ..., p, so that S has a total of m ≥ p eigenpairs counting multiplicities; thus the multiplicities of all the purely imaginary eigenvalues of S add up to 2m, and zero has multiplicity n − 2m. Therefore, Theorem 3.6 states that there is an orthogonal Q ∈ M_n such that

$$Q^T S Q = E_1 \oplus \cdots \oplus E_m \oplus \underbrace{0 \oplus \cdots \oplus 0}_{n-2m \text{ zeroes}} = E_1 \oplus \cdots \oplus E_m \oplus 0_{n-2m},$$

where each E_k, k = 1, ..., m, has the form

$$E_k = \begin{bmatrix} 0 & \theta_j \\ -\theta_j & 0 \end{bmatrix}$$

for some θ_j, j = 1, ..., p. Moreover, each particular block E_k appears in the direct sum above exactly as many times as the multiplicity of its corresponding θ_j i. Finally, let E = Q^T SQ.
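Corollary 3.7 can be illustrated numerically: for a normal matrix, MATLAB's real Schur form is (up to rounding) block diagonal, and for skew-symmetric S the blocks are exactly those of Eq. (3.4). This check is our own, not the thesis's code:

% Real Schur form of a random skew-symmetric matrix exposes the blocks of E.
A = randn(5); S = A - A';       % a random 5x5 skew-symmetric matrix
[Q, E] = schur(S, 'real');      % S = Q*E*Q' with Q orthogonal
E(abs(E) < 1e-12) = 0           % display: only 2x2 blocks [0 th; -th 0] remain
norm(S - Q*E*Q')                % ~0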

Proof of the Main Theorem

We can now state and prove the theorem that will allow us to compute closed formulas of e^S, for skew-symmetric matrices S ∈ M_n, in terms of the eigenvalues of S and powers of S. The result below is from Gallier and Xu [2].

Theorem 3.8. Let S ∈ M_n, n ≥ 2, be a nonzero skew-symmetric matrix. If ±θ_1 i, ..., ±θ_p i are the distinct eigenvalues of S, where θ_j > 0 for each j ∈ {1, ..., p}, then there exist p skew-symmetric matrices S_1, ..., S_p ∈ M_n such that

$$S = \theta_1 S_1 + \cdots + \theta_p S_p, \tag{3.6}$$

$$S_j S_k = S_k S_j = 0_n \quad \text{for } j \neq k, \tag{3.7}$$

and

$$S_j^3 = -S_j \quad \text{for all } j = 1, \ldots, p. \tag{3.8}$$

Furthermore,

$$e^S = I_n + \sum_{j=1}^{p} \left[\sin\theta_j\, S_j + (1 - \cos\theta_j)\, S_j^2\right]. \tag{3.9}$$

Proof. Suppose S ∈ M_n, n ≥ 2, is a nonzero skew-symmetric matrix, and let ±θ_1 i, ..., ±θ_p i be its distinct eigenvalues. By Corollary 3.7, there exist E ∈ M_n with the form in Eq. (3.4) and an orthogonal matrix Q ∈ M_n such that S = QEQ^T. Fix j ∈ {1, ..., p}, and let B_j ∈ M_n be the matrix obtained by zeroing out all the diagonal blocks of (1/θ_j)E which are not of the specific form

$$P = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}.$$

Hence, each B_j has the form in Eq. (3.4), but each of its inner blocks E_k is either zero or has the form of P defined above. This yields E = θ_1 B_1 + ··· + θ_p B_p. Define S_j = QB_jQ^T for each j ∈ {1, ..., p}; this proves Eq. (3.6). Let j, k ∈ {1, ..., p} be distinct. Since B_j and B_k have their nonzero diagonal blocks in different positions, it follows that B_jB_k = B_kB_j = 0_n, which is equivalent to Eq. (3.7). To verify Eq. (3.8), it suffices to observe that P^3 = −P.

We now prove Eq. (3.9). Consider any fixed S_j; Eq. (3.8) gives

$$S_j^3 = -S_j, \quad S_j^4 = -S_j^2, \quad S_j^5 = S_j, \quad S_j^6 = S_j^2,$$

which generalizes to

$$S_j^{4k-1} = -S_j, \quad S_j^{4k} = -S_j^2, \quad S_j^{4k+1} = S_j, \quad S_j^{4k+2} = S_j^2, \qquad k \geq 1.$$

Therefore, substituting the above into Eq. (2.1), we have

$$e^{\theta_j S_j} = I_n + \sum_{k=1}^{\infty} \frac{\theta_j^k}{k!} S_j^k = I_n + \sum_{k=0}^{\infty} \frac{\theta_j^{2k+1}}{(2k+1)!} S_j^{2k+1} + \sum_{k=1}^{\infty} \frac{\theta_j^{2k}}{(2k)!} S_j^{2k}$$
$$= I_n + \sum_{k=0}^{\infty} \frac{(-1)^k \theta_j^{2k+1}}{(2k+1)!} S_j + \sum_{k=1}^{\infty} \frac{(-1)^{k+1} \theta_j^{2k}}{(2k)!} S_j^2 = I_n + \sin\theta_j\, S_j + (1 - \cos\theta_j)\, S_j^2.$$

Finally, Eq. (3.6) and Eq. (3.7) give

$$e^S = \exp(\theta_1 S_1 + \cdots + \theta_p S_p) = \prod_{j=1}^{p} \left[I_n + \sin\theta_j\, S_j + (1 - \cos\theta_j)\, S_j^2\right] = I_n + \sum_{j=1}^{p} \left[\sin\theta_j\, S_j + (1 - \cos\theta_j)\, S_j^2\right].$$

It is important to observe that the way we obtain the formulas of e^S depends only on the number p of distinct eigenpairs of S, because of the nature of Equations (3.6) to (3.9); the multiplicities of the imaginary eigenvalues of S play no role in the computations. The reason lies in the extension of Eq. (3.6) with the aid of Eq. (3.7), namely

$$S = \theta_1 S_1 + \cdots + \theta_p S_p,$$
$$S^2 = \theta_1^2 S_1^2 + \cdots + \theta_p^2 S_p^2,$$
$$\vdots$$
$$S^{2p} = \theta_1^{2p} S_1^{2p} + \cdots + \theta_p^{2p} S_p^{2p},$$

where each S_j^k reduces to one of ±S_j, ±S_j^2 by Eq. (3.8). This leaves us with one system of p equations (from the odd powers) in the p unknowns S_1, ..., S_p, and another system of p equations (from the even powers) in the p unknowns S_1^2, ..., S_p^2. Once we solve for each S_j and S_j^2 in terms of powers of S with exponents less than n, with the aid of the Cayley-Hamilton theorem, we substitute them directly into Eq. (3.9) to obtain e^S. Keep in mind that each S_j is unique. Additionally, when S is nonsingular (so that n = 2m in Corollary 3.7), we obtain

$$I_n = -S_1^2 - \cdots - S_p^2 \tag{3.10}$$

from the fact that P^2 = −I_2, where P is the matrix from the proof of Theorem 3.8.

We are now ready to calculate the desired exponentials of nonzero skew-symmetric matrices of sizes 2 ≤ n ≤ 9 in terms of themselves and their eigenvalues using Theorem 3.8.

Since, by Theorem 3.4, the characteristic polynomial of a skew-symmetric matrix contains only even powers of X when n is even (P(X) = X^n + a_{n−2}X^{n−2} + ··· + a_2 X^2 + a_0) and only odd powers when n is odd, when we examine e^S for S ∈ M_n we recover some previous cases. If n is even, the number of cases of the formula of e^S is determined by how many distinct nonzero eigenvalues S has and by whether 0 is an eigenvalue; we obtain the repeated previous cases 2, 3, ..., n − 1, plus a new case n. If n is odd, then S is never invertible, so we obtain the repeated previous odd cases 3, 5, ..., n − 2, plus a new case n. In each case, we represent the θ_j in terms of the coefficients of the characteristic polynomial of S using the formulas for roots of polynomials up to degree four in Appendix A. The coefficients of the characteristic polynomial of a matrix S are, by definition, sums, differences, and products of the entries of S, and can always be computed numerically with negligible or no error (e.g., by using Matlab's charpoly function).

Chapter 4

Computing the Formulas of e^S

We now derive the formulas of e^S for S ∈ M_n, where n ≤ 9, in Sections 4.1 to 4.9, and include a discussion of how to extend our theory to calculate the closed formulas for n ≥ 10 in Section 4.10. The formulas in Sections 4.1 to 4.7 are similar to the formulas Piamonte [9] derives, but in fully simplified form and with every single case covered. Von Bremen [11] obtains formulas similar to ours for the cases of Sections 4.4 and 4.5. Furthermore, in Section 4.4 we arrive at the same formulas that Politi [10] and Oshinuga [8] obtain.

4.1 S ∈ M_1

This is the trivial case, as the only skew-symmetric matrix in M_1 is S = 0. Hence, e^S = I_1 = 1. From now on, we will only consider nonzero skew-symmetric matrices, because e^{0_n} = I_n for any n.

4.2 S ∈ M_2

The characteristic equation of S has the form P(X) = X^2 + a. The eigenvalues of S are ±θi, θ > 0, so Eq. (3.6) gives S = θS_1 for some skew-symmetric matrix S_1. Thus S_1 = (1/θ)S, and S_1^2 = −I_2 by Eq. (3.10). Hence, by Theorem 3.8,

$$e^S = I_2 + \sin\theta\, S_1 + (1 - \cos\theta)\, S_1^2 = I_2 + \frac{\sin\theta}{\theta} S + (\cos\theta - 1) I_2 = \cos\theta\, I_2 + \frac{\sin\theta}{\theta} S.$$

Here, θ = √a.

4.3 S ∈ M_3

The characteristic equation of S has the form P(X) = X(X^2 + a). The eigenvalues of S are ±θi, θ > 0, and zero. Theorem 3.8 gives S = θS_1 for some skew-symmetric matrix S_1. Thus S_1 = (1/θ)S and S_1^2 = (1/θ^2)S^2. Hence,

$$e^S = I_3 + \sin\theta\, S_1 + (1 - \cos\theta)\, S_1^2 = I_3 + \frac{\sin\theta}{\theta} S + \frac{1 - \cos\theta}{\theta^2} S^2.$$

Here, θ = √a. The formula above is known as Rodrigues' formula.
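Rodrigues' formula is easy to check against expm; the snippet below is our own illustration, reading the coefficient a off poly(S) = [1 0 a 0]:

% Rodrigues' formula vs. expm for a random 3x3 skew-symmetric matrix.
A = randn(3); S = A - A';
p = poly(S); a = p(3);         % P(X) = X^3 + aX, so a is the third entry
theta = sqrt(a);
R = eye(3) + (sin(theta)/theta)*S + ((1-cos(theta))/theta^2)*S^2;
norm(R - expm(S))              % ~0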

4.4 S ∈ M_4

The characteristic equation of S has the form P(X) = X^4 + aX^2 + b.

Case 4.4.1

If S has two distinct nonzero eigenvalues ±θi, θ > 0, each of multiplicity two, then this case is similar to Case 4.2, namely

$$e^S = \cos\theta\, I_4 + \frac{\sin\theta}{\theta} S.$$

Here, θ = √(a/2).

Case 4.4.2

If S has two distinct nonzero eigenvalues ±θi, θ > 0, each of multiplicity one, and zero has multiplicity two, then this case is similar to Case 4.3, namely

$$e^S = I_4 + \frac{\sin\theta}{\theta} S + \frac{1 - \cos\theta}{\theta^2} S^2.$$

Here, θ = √a.

Case 4.4.3

The eigenvalues of S are ±θ_1 i, ±θ_2 i, θ_2 > θ_1 > 0, so the characteristic polynomial of S is

$$P(X) = (X^2 + \theta_1^2)(X^2 + \theta_2^2) = X^4 + (\theta_1^2 + \theta_2^2)X^2 + \theta_1^2\theta_2^2.$$

Thus, Cayley-Hamilton implies S^4 = −(θ_1^2 + θ_2^2)S^2 − θ_1^2 θ_2^2 I_4. Theorem 3.8 allows us to use the formula S = θ_1 S_1 + θ_2 S_2 for some skew-symmetric matrices S_1 and S_2, and to extend it to

$$S = \theta_1 S_1 + \theta_2 S_2, \tag{4.4.1}$$
$$S^2 = \theta_1^2 S_1^2 + \theta_2^2 S_2^2, \tag{4.4.2}$$
$$S^3 = \theta_1^3 S_1^3 + \theta_2^3 S_2^3 = -\theta_1^3 S_1 - \theta_2^3 S_2, \tag{4.4.3}$$
$$S^4 = -\theta_1^4 S_1^2 - \theta_2^4 S_2^2. \tag{4.4.4}$$

We use Eq. (4.4.1) and Eq. (4.4.3) to solve for S_1 and S_2, obtaining

$$S_1 = \frac{\theta_2^2 S + S^3}{\theta_1(\theta_2^2 - \theta_1^2)}, \tag{4.4.5}$$
$$S_2 = \frac{\theta_1^2 S + S^3}{\theta_2(\theta_1^2 - \theta_2^2)}. \tag{4.4.6}$$

Similarly, we use Eq. (4.4.2) and Eq. (4.4.4) to get S_1^2 and S_2^2:

$$S_1^2 = \frac{\theta_2^2 S^2 + S^4}{\theta_1^2(\theta_2^2 - \theta_1^2)}, \tag{4.4.7}$$
$$S_2^2 = \frac{\theta_1^2 S^2 + S^4}{\theta_2^2(\theta_1^2 - \theta_2^2)}. \tag{4.4.8}$$

But since n = 4 is even and S is invertible, Cayley-Hamilton gives

$$S_1^2 = \frac{\theta_2^2 S^2 + S^4}{\theta_1^2(\theta_2^2 - \theta_1^2)} = \frac{-\theta_1^2\theta_2^2 I_4 - \theta_1^2 S^2}{\theta_1^2(\theta_2^2 - \theta_1^2)} = \frac{-\theta_2^2 I_4 - S^2}{\theta_2^2 - \theta_1^2},$$
$$S_2^2 = \frac{\theta_1^2 S^2 + S^4}{\theta_2^2(\theta_1^2 - \theta_2^2)} = \frac{-\theta_1^2\theta_2^2 I_4 - \theta_2^2 S^2}{\theta_2^2(\theta_1^2 - \theta_2^2)} = \frac{-\theta_1^2 I_4 - S^2}{\theta_1^2 - \theta_2^2}.$$

Putting it all together in

$$e^S = I_4 + \sin\theta_1\, S_1 + (1 - \cos\theta_1)\, S_1^2 + \sin\theta_2\, S_2 + (1 - \cos\theta_2)\, S_2^2,$$

we get

$$e^S = \frac{\theta_2^2\cos\theta_1 - \theta_1^2\cos\theta_2}{\theta_2^2 - \theta_1^2} I_4 + \frac{\theta_2^3\sin\theta_1 - \theta_1^3\sin\theta_2}{\theta_1\theta_2(\theta_2^2 - \theta_1^2)} S + \frac{\cos\theta_1 - \cos\theta_2}{\theta_2^2 - \theta_1^2} S^2 + \frac{\theta_2\sin\theta_1 - \theta_1\sin\theta_2}{\theta_1\theta_2(\theta_2^2 - \theta_1^2)} S^3.$$

Here,

$$\theta_1 = \sqrt{\frac{a - \sqrt{a^2 - 4b}}{2}} \quad \text{and} \quad \theta_2 = \sqrt{\frac{a + \sqrt{a^2 - 4b}}{2}}.$$
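The closed formula of Case 4.4.3 transcribes directly into MATLAB. The snippet below is our own check (a random 4 × 4 skew-symmetric matrix generically falls into this case), with a and b read off poly(S) = [1 0 a 0 b]:

% Case 4.4.3 closed formula vs. expm, generic random S.
A = randn(4); S = A - A';
p = poly(S); a = p(3); b = p(5);
t1 = sqrt((a - sqrt(a^2 - 4*b))/2);
t2 = sqrt((a + sqrt(a^2 - 4*b))/2);
d  = t2^2 - t1^2;
E  = (t2^2*cos(t1) - t1^2*cos(t2))/d * eye(4) ...
   + (t2^3*sin(t1) - t1^3*sin(t2))/(t1*t2*d) * S ...
   + (cos(t1) - cos(t2))/d * S^2 ...
   + (t2*sin(t1) - t1*sin(t2))/(t1*t2*d) * S^3;
norm(E - expm(S))   % ~0 for generic S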

4.5 S ∈ M_5

The characteristic equation of S has the form P(X) = X(X^4 + aX^2 + b).

Case 4.5.1

If S has two distinct nonzero eigenvalues ±θi, θ > 0, and zero, each with nonzero multiplicity, then this case is similar to Case 4.3, namely

$$e^S = I_5 + \frac{\sin\theta}{\theta} S + \frac{1 - \cos\theta}{\theta^2} S^2.$$

Here, θ = √(a/m), where m ∈ {1, 2} is the multiplicity of θi.

Case 4.5.2

The eigenvalues of S are ±θ_1 i, ±θ_2 i, θ_2 > θ_1 > 0, and zero. Since S is not invertible, we substitute Equations (4.4.5) to (4.4.8) from Case 4.4.3 into

$$e^S = I_5 + \sin\theta_1\, S_1 + (1 - \cos\theta_1)\, S_1^2 + \sin\theta_2\, S_2 + (1 - \cos\theta_2)\, S_2^2$$

and get

$$e^S = I_5 + \frac{\theta_2^3\sin\theta_1 - \theta_1^3\sin\theta_2}{\theta_1\theta_2(\theta_2^2 - \theta_1^2)} S + \frac{\theta_2^4(1 - \cos\theta_1) - \theta_1^4(1 - \cos\theta_2)}{\theta_1^2\theta_2^2(\theta_2^2 - \theta_1^2)} S^2 + \frac{\theta_2\sin\theta_1 - \theta_1\sin\theta_2}{\theta_1\theta_2(\theta_2^2 - \theta_1^2)} S^3 + \frac{\theta_2^2(1 - \cos\theta_1) - \theta_1^2(1 - \cos\theta_2)}{\theta_1^2\theta_2^2(\theta_2^2 - \theta_1^2)} S^4.$$

Here,

$$\theta_1 = \sqrt{\frac{a - \sqrt{a^2 - 4b}}{2}} \quad \text{and} \quad \theta_2 = \sqrt{\frac{a + \sqrt{a^2 - 4b}}{2}}.$$

4.6 S ∈ M_6

The characteristic equation of S has the form P(X) = X^6 + aX^4 + bX^2 + c.

Case 4.6.1

If S has two distinct nonzero eigenvalues ±θi, θ > 0, each of multiplicity three, then this case is similar to Case 4.2, namely

$$e^S = \cos\theta\, I_6 + \frac{\sin\theta}{\theta} S.$$

Here, θ = √(a/3).

Case 4.6.2

If S has two distinct nonzero eigenvalues ±θi, θ > 0, and zero, each with nonzero multiplicity, then this case is similar to Case 4.3, namely

$$e^S = I_6 + \frac{\sin\theta}{\theta} S + \frac{1 - \cos\theta}{\theta^2} S^2.$$

Here, θ = √(a/m), where m ∈ {1, 2} is the multiplicity of θi.

Case 4.6.3

If S has exactly four distinct nonzero eigenvalues ±θ_1 i, ±θ_2 i, θ_2 > θ_1 > 0, each with nonzero multiplicity, then this case is similar to Case 4.4.3: e^S is given by the formula of that case with I_6 in place of I_4. Here,

$$\theta_1 = \sqrt{\frac{a - m_2\sqrt{a^2 - 3b}}{3}} \quad \text{and} \quad \theta_2 = \sqrt{\frac{a + m_1\sqrt{a^2 - 3b}}{3}},$$

where m_j is the multiplicity of θ_j i, with m_1 ∈ {1, 2} and m_1 + m_2 = 3.

Case 4.6.4

The eigenvalues of S are ±θ_1 i, ±θ_2 i, θ_2 > θ_1 > 0, and zero (of multiplicity two). Hence, this case is similar to Case 4.5.2: e^S is given by the formula of that case with I_6 in place of I_5. Here,

$$\theta_1 = \sqrt{\frac{a - \sqrt{a^2 - 4b}}{2}} \quad \text{and} \quad \theta_2 = \sqrt{\frac{a + \sqrt{a^2 - 4b}}{2}}.$$

Case 4.6.5

The eigenvalues of S are ±θ_1 i, ±θ_2 i, ±θ_3 i, θ_3 > θ_2 > θ_1 > 0, so the characteristic polynomial of S is

$$P(X) = (X^2 + \theta_1^2)(X^2 + \theta_2^2)(X^2 + \theta_3^2) = X^6 + (\theta_1^2 + \theta_2^2 + \theta_3^2)X^4 + (\theta_1^2\theta_2^2 + \theta_1^2\theta_3^2 + \theta_2^2\theta_3^2)X^2 + \theta_1^2\theta_2^2\theta_3^2.$$

Thus, Cayley-Hamilton implies

$$S^6 = -(\theta_1^2 + \theta_2^2 + \theta_3^2)S^4 - (\theta_1^2\theta_2^2 + \theta_1^2\theta_3^2 + \theta_2^2\theta_3^2)S^2 - \theta_1^2\theta_2^2\theta_3^2 I_6.$$

Theorem 3.8 allows us to use the formula S = θ_1 S_1 + θ_2 S_2 + θ_3 S_3 for some skew-symmetric matrices S_1, S_2, and S_3, and to extend it to

$$S = \theta_1 S_1 + \theta_2 S_2 + \theta_3 S_3, \tag{4.6.1}$$
$$S^2 = \theta_1^2 S_1^2 + \theta_2^2 S_2^2 + \theta_3^2 S_3^2, \tag{4.6.2}$$
$$S^3 = -\theta_1^3 S_1 - \theta_2^3 S_2 - \theta_3^3 S_3, \tag{4.6.3}$$
$$S^4 = -\theta_1^4 S_1^2 - \theta_2^4 S_2^2 - \theta_3^4 S_3^2, \tag{4.6.4}$$
$$S^5 = \theta_1^5 S_1 + \theta_2^5 S_2 + \theta_3^5 S_3, \tag{4.6.5}$$
$$S^6 = \theta_1^6 S_1^2 + \theta_2^6 S_2^2 + \theta_3^6 S_3^2. \tag{4.6.6}$$

We use Equations (4.6.1), (4.6.3), and (4.6.5) to solve for S_1, S_2, and S_3:

$$S_1 = \frac{\theta_2^2\theta_3^2 S + (\theta_2^2 + \theta_3^2)S^3 + S^5}{\theta_1(\theta_1^2 - \theta_2^2)(\theta_1^2 - \theta_3^2)}, \tag{4.6.7}$$
$$S_2 = \frac{\theta_1^2\theta_3^2 S + (\theta_1^2 + \theta_3^2)S^3 + S^5}{\theta_2(\theta_2^2 - \theta_1^2)(\theta_2^2 - \theta_3^2)}, \tag{4.6.8}$$
$$S_3 = \frac{\theta_1^2\theta_2^2 S + (\theta_1^2 + \theta_2^2)S^3 + S^5}{\theta_3(\theta_3^2 - \theta_1^2)(\theta_3^2 - \theta_2^2)}. \tag{4.6.9}$$

Similarly, we use Eq. (4.6.2), (4.6.4), and (4.6.6) to get S_1^2, S_2^2, and S_3^2:

$$S_1^2 = \frac{\theta_2^2\theta_3^2 S^2 + (\theta_2^2 + \theta_3^2)S^4 + S^6}{\theta_1^2(\theta_1^2 - \theta_2^2)(\theta_1^2 - \theta_3^2)}, \tag{4.6.10}$$
$$S_2^2 = \frac{\theta_1^2\theta_3^2 S^2 + (\theta_1^2 + \theta_3^2)S^4 + S^6}{\theta_2^2(\theta_2^2 - \theta_1^2)(\theta_2^2 - \theta_3^2)}, \tag{4.6.11}$$
$$S_3^2 = \frac{\theta_1^2\theta_2^2 S^2 + (\theta_1^2 + \theta_2^2)S^4 + S^6}{\theta_3^2(\theta_3^2 - \theta_1^2)(\theta_3^2 - \theta_2^2)}. \tag{4.6.12}$$

But since n = 6 is even and S is invertible, Cayley-Hamilton gives

$$S_1^2 = \frac{-\theta_2^2\theta_3^2 I_6 - (\theta_2^2 + \theta_3^2)S^2 - S^4}{(\theta_1^2 - \theta_2^2)(\theta_1^2 - \theta_3^2)}, \quad S_2^2 = \frac{-\theta_1^2\theta_3^2 I_6 - (\theta_1^2 + \theta_3^2)S^2 - S^4}{(\theta_2^2 - \theta_1^2)(\theta_2^2 - \theta_3^2)}, \quad S_3^2 = \frac{-\theta_1^2\theta_2^2 I_6 - (\theta_1^2 + \theta_2^2)S^2 - S^4}{(\theta_3^2 - \theta_1^2)(\theta_3^2 - \theta_2^2)}.$$

Putting it all together in

$$e^S = I_6 + \sum_{j=1}^{3}\left[\sin\theta_j\, S_j + (1 - \cos\theta_j)\, S_j^2\right],$$

we get

$$e^S = \frac{\theta_2^2\theta_3^2(\theta_3^2 - \theta_2^2)\cos\theta_1 - \theta_1^2\theta_3^2(\theta_3^2 - \theta_1^2)\cos\theta_2 + \theta_1^2\theta_2^2(\theta_2^2 - \theta_1^2)\cos\theta_3}{(\theta_3^2 - \theta_1^2)(\theta_3^2 - \theta_2^2)(\theta_2^2 - \theta_1^2)} I_6$$
$$+ \frac{\theta_2^3\theta_3^3(\theta_3^2 - \theta_2^2)\sin\theta_1 - \theta_1^3\theta_3^3(\theta_3^2 - \theta_1^2)\sin\theta_2 + \theta_1^3\theta_2^3(\theta_2^2 - \theta_1^2)\sin\theta_3}{\theta_1\theta_2\theta_3(\theta_3^2 - \theta_1^2)(\theta_3^2 - \theta_2^2)(\theta_2^2 - \theta_1^2)} S$$
$$+ \frac{(\theta_3^4 - \theta_2^4)\cos\theta_1 - (\theta_3^4 - \theta_1^4)\cos\theta_2 + (\theta_2^4 - \theta_1^4)\cos\theta_3}{(\theta_3^2 - \theta_1^2)(\theta_3^2 - \theta_2^2)(\theta_2^2 - \theta_1^2)} S^2$$
$$+ \frac{\theta_2\theta_3(\theta_3^4 - \theta_2^4)\sin\theta_1 - \theta_1\theta_3(\theta_3^4 - \theta_1^4)\sin\theta_2 + \theta_1\theta_2(\theta_2^4 - \theta_1^4)\sin\theta_3}{\theta_1\theta_2\theta_3(\theta_3^2 - \theta_1^2)(\theta_3^2 - \theta_2^2)(\theta_2^2 - \theta_1^2)} S^3$$
$$+ \frac{(\theta_3^2 - \theta_2^2)\cos\theta_1 - (\theta_3^2 - \theta_1^2)\cos\theta_2 + (\theta_2^2 - \theta_1^2)\cos\theta_3}{(\theta_3^2 - \theta_1^2)(\theta_3^2 - \theta_2^2)(\theta_2^2 - \theta_1^2)} S^4$$
$$+ \frac{\theta_2\theta_3(\theta_3^2 - \theta_2^2)\sin\theta_1 - \theta_1\theta_3(\theta_3^2 - \theta_1^2)\sin\theta_2 + \theta_1\theta_2(\theta_2^2 - \theta_1^2)\sin\theta_3}{\theta_1\theta_2\theta_3(\theta_3^2 - \theta_1^2)(\theta_3^2 - \theta_2^2)(\theta_2^2 - \theta_1^2)} S^5.$$

Here,

$$\theta_1 = \sqrt{\tfrac{1}{3}\left[a - \sqrt{q}\left(\cos(\varphi/3) + \sqrt{3}\sin(\varphi/3)\right)\right]}, \quad \theta_2 = \sqrt{\tfrac{1}{3}\left[a - \sqrt{q}\left(\cos(\varphi/3) - \sqrt{3}\sin(\varphi/3)\right)\right]}, \quad \theta_3 = \sqrt{\tfrac{1}{3}\left[a + 2\sqrt{q}\cos(\varphi/3)\right]},$$

where φ = arccos(p/(2q^{3/2})), p = 2a^3 − 9ab + 27c, and q = a^2 − 3b.
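The trigonometric cubic solution above transcribes directly to MATLAB; the check below, our own illustration, compares the three θ_j against the eigenvalues returned by eig for a generic random case:

% theta_1 < theta_2 < theta_3 from the coefficients of P(X) = X^6+aX^4+bX^2+c.
A = randn(6); S = A - A';
p0 = poly(S); a = p0(3); b = p0(5); c = p0(7);
q   = a^2 - 3*b;
p   = 2*a^3 - 9*a*b + 27*c;
phi = acos(p / (2*q^(3/2)));
t1 = sqrt((a - sqrt(q)*(cos(phi/3) + sqrt(3)*sin(phi/3)))/3);
t2 = sqrt((a - sqrt(q)*(cos(phi/3) - sqrt(3)*sin(phi/3)))/3);
t3 = sqrt((a + 2*sqrt(q)*cos(phi/3))/3);
ev = sort(abs(imag(eig(S)))); ev = ev(2:2:end)';  % each theta appears twice
norm(sort([t1 t2 t3]) - ev)                        % ~0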

4.7 S ∈ M_7

The characteristic equation of S has the form P(X) = X(X^6 + aX^4 + bX^2 + c).

Case 4.7.1

If S has two distinct nonzero eigenvalues ±θi, θ > 0, and zero, each with nonzero multiplicity, then this case is similar to Case 4.3, namely

$$e^S = I_7 + \frac{\sin\theta}{\theta} S + \frac{1 - \cos\theta}{\theta^2} S^2.$$

Here, θ = √(a/m), where m ∈ {1, 2, 3} is the multiplicity of θi.

Case 4.7.2

The eigenvalues of S are ±θ_1 i, ±θ_2 i, θ_2 > θ_1 > 0, and zero, each with nonzero multiplicity. Hence, this case is similar to Case 4.5.2: e^S is given by the formula of that case with I_7 in place of I_5. Here, if the multiplicity of zero is three, then

$$\theta_1 = \sqrt{\frac{a - \sqrt{a^2 - 4b}}{2}} \quad \text{and} \quad \theta_2 = \sqrt{\frac{a + \sqrt{a^2 - 4b}}{2}}.$$

Otherwise, if the multiplicity of zero is one, then

$$\theta_1 = \sqrt{\frac{a - m_2\sqrt{a^2 - 3b}}{3}} \quad \text{and} \quad \theta_2 = \sqrt{\frac{a + m_1\sqrt{a^2 - 3b}}{3}},$$

where m_j is the multiplicity of θ_j i, with m_1 ∈ {1, 2} and m_1 + m_2 = 3.

Case 4.7.3

The eigenvalues of S are ±θ_1 i, ±θ_2 i, ±θ_3 i, θ_3 > θ_2 > θ_1 > 0, and zero. Since S is not invertible, we substitute Equations (4.6.7) to (4.6.12) from Case 4.6.5 into

$$e^S = I_7 + \sum_{j=1}^{3}\left[\sin\theta_j\, S_j + (1 - \cos\theta_j)\, S_j^2\right]$$

and get

$$e^S = I_7 + \frac{\theta_2^3\theta_3^3(\theta_3^2 - \theta_2^2)\sin\theta_1 - \theta_1^3\theta_3^3(\theta_3^2 - \theta_1^2)\sin\theta_2 + \theta_1^3\theta_2^3(\theta_2^2 - \theta_1^2)\sin\theta_3}{\theta_1\theta_2\theta_3(\theta_3^2 - \theta_1^2)(\theta_3^2 - \theta_2^2)(\theta_2^2 - \theta_1^2)} S$$
$$+ \frac{\theta_2^4\theta_3^4(\theta_3^2 - \theta_2^2)(1 - \cos\theta_1) - \theta_1^4\theta_3^4(\theta_3^2 - \theta_1^2)(1 - \cos\theta_2) + \theta_1^4\theta_2^4(\theta_2^2 - \theta_1^2)(1 - \cos\theta_3)}{\theta_1^2\theta_2^2\theta_3^2(\theta_3^2 - \theta_1^2)(\theta_3^2 - \theta_2^2)(\theta_2^2 - \theta_1^2)} S^2$$
$$+ \frac{\theta_2\theta_3(\theta_3^4 - \theta_2^4)\sin\theta_1 - \theta_1\theta_3(\theta_3^4 - \theta_1^4)\sin\theta_2 + \theta_1\theta_2(\theta_2^4 - \theta_1^4)\sin\theta_3}{\theta_1\theta_2\theta_3(\theta_3^2 - \theta_1^2)(\theta_3^2 - \theta_2^2)(\theta_2^2 - \theta_1^2)} S^3$$
$$+ \frac{\theta_2^2\theta_3^2(\theta_3^4 - \theta_2^4)(1 - \cos\theta_1) - \theta_1^2\theta_3^2(\theta_3^4 - \theta_1^4)(1 - \cos\theta_2) + \theta_1^2\theta_2^2(\theta_2^4 - \theta_1^4)(1 - \cos\theta_3)}{\theta_1^2\theta_2^2\theta_3^2(\theta_3^2 - \theta_1^2)(\theta_3^2 - \theta_2^2)(\theta_2^2 - \theta_1^2)} S^4$$
$$+ \frac{\theta_2\theta_3(\theta_3^2 - \theta_2^2)\sin\theta_1 - \theta_1\theta_3(\theta_3^2 - \theta_1^2)\sin\theta_2 + \theta_1\theta_2(\theta_2^2 - \theta_1^2)\sin\theta_3}{\theta_1\theta_2\theta_3(\theta_3^2 - \theta_1^2)(\theta_3^2 - \theta_2^2)(\theta_2^2 - \theta_1^2)} S^5$$
$$+ \frac{\theta_2^2\theta_3^2(\theta_3^2 - \theta_2^2)(1 - \cos\theta_1) - \theta_1^2\theta_3^2(\theta_3^2 - \theta_1^2)(1 - \cos\theta_2) + \theta_1^2\theta_2^2(\theta_2^2 - \theta_1^2)(1 - \cos\theta_3)}{\theta_1^2\theta_2^2\theta_3^2(\theta_3^2 - \theta_1^2)(\theta_3^2 - \theta_2^2)(\theta_2^2 - \theta_1^2)} S^6.$$

Here,

$$\theta_1 = \sqrt{\tfrac{1}{3}\left[a - \sqrt{q}\left(\cos(\varphi/3) + \sqrt{3}\sin(\varphi/3)\right)\right]}, \quad \theta_2 = \sqrt{\tfrac{1}{3}\left[a - \sqrt{q}\left(\cos(\varphi/3) - \sqrt{3}\sin(\varphi/3)\right)\right]}, \quad \theta_3 = \sqrt{\tfrac{1}{3}\left[a + 2\sqrt{q}\cos(\varphi/3)\right]},$$

where φ = arccos(p/(2q^{3/2})), p = 2a^3 − 9ab + 27c, and q = a^2 − 3b.

4.8 S ∈ M_8

The characteristic equation of S has the form P(X) = X^8 + aX^6 + bX^4 + cX^2 + d.

Case 4.8.1

If S has two distinct nonzero eigenvalues ±θi, θ > 0, each of multiplicity four, then this case is similar to Case 4.2, namely

$$e^S = \cos\theta\, I_8 + \frac{\sin\theta}{\theta} S.$$

Here, θ = √(a/4).

Case 4.8.2

If S has two distinct nonzero eigenvalues ±θi, θ > 0, and zero, each with nonzero multiplicity, then this case is similar to Case 4.3, namely

$$e^S = I_8 + \frac{\sin\theta}{\theta} S + \frac{1 - \cos\theta}{\theta^2} S^2.$$

Here, θ = √(a/m), where m ∈ {1, 2, 3} is the multiplicity of θi.

Case 4.8.3

If S has exactly four distinct nonzero eigenvalues ±θ_1 i, ±θ_2 i, θ_2 > θ_1 > 0, each with nonzero multiplicity, then this case is similar to Case 4.4.3: e^S is given by the formula of that case with I_8 in place of I_4. Here, if θ_1 i and θ_2 i each have multiplicity two, then

$$\theta_1 = \tfrac{1}{2}\sqrt{a - \sqrt{3a^2 - 8b}} \quad \text{and} \quad \theta_2 = \tfrac{1}{2}\sqrt{a + \sqrt{3a^2 - 8b}}.$$

Otherwise,

$$\theta_1 = \tfrac{1}{2}\sqrt{a - \tfrac{1}{m_1}\sqrt{9a^2 - 24b}} \quad \text{and} \quad \theta_2 = \tfrac{1}{2}\sqrt{a + \tfrac{1}{m_2}\sqrt{9a^2 - 24b}},$$

where m_j is the multiplicity of θ_j i, with m_1 ∈ {1, 3} and m_1 + m_2 = 4.

Case 4.8.4

The eigenvalues of S are ±θ_1 i, ±θ_2 i, θ_2 > θ_1 > 0, and zero, each with nonzero multiplicity. Hence, this case is similar to Case 4.5.2: e^S is given by the formula of that case with I_8 in place of I_5. Here, if the multiplicity of zero is four, then

$$\theta_1 = \sqrt{\frac{a - \sqrt{a^2 - 4b}}{2}} \quad \text{and} \quad \theta_2 = \sqrt{\frac{a + \sqrt{a^2 - 4b}}{2}}.$$

Otherwise, if the multiplicity of zero is two, then

$$\theta_1 = \sqrt{\frac{a - m_2\sqrt{a^2 - 3b}}{3}} \quad \text{and} \quad \theta_2 = \sqrt{\frac{a + m_1\sqrt{a^2 - 3b}}{3}},$$

where m_j is the multiplicity of θ_j i, with m_1 ∈ {1, 2} and m_1 + m_2 = 3.

Case 4.8.5

If S has exactly six distinct nonzero eigenvalues ±θ_1 i, ±θ_2 i, ±θ_3 i, θ_3 > θ_2 > θ_1 > 0, each with nonzero multiplicity, then this case is similar to Case 4.6.5: e^S is given by the formula of that case with I_8 in place of I_6. Here, one θ_j i has multiplicity two, and the other two, call them θ_k i and θ_l i (θ_k < θ_l), have multiplicity one. Then

$$\theta_j = \sqrt{\frac{a}{4} + \frac{\sqrt{q}}{3}\,\Phi_j},$$

where

$$\Phi_j = \begin{cases} -\cos(\varphi/3) - \sqrt{3}\sin(\varphi/3), & j = 1, \\ -\cos(\varphi/3) + \sqrt{3}\sin(\varphi/3), & j = 2, \\ 2\cos(\varphi/3), & j = 3, \end{cases}$$

φ = arccos(−p/(2q^{3/2})), p = −(27/32)(a^3 − 4ab + 8c), and q = (9a^2/16) − (3b/2). Furthermore,

$$\theta_k = \sqrt{\frac{\alpha - \sqrt{\alpha^2 - 4\beta}}{2}} \quad \text{and} \quad \theta_l = \sqrt{\frac{\alpha + \sqrt{\alpha^2 - 4\beta}}{2}},$$

where α = a − 2θ_j^2 and β = b − 2aθ_j^2 + 3θ_j^4.

Case 4.8.6

The eigenvalues of S are ±θ_1 i, ±θ_2 i, ±θ_3 i, θ_3 > θ_2 > θ_1 > 0, and zero, each with nonzero multiplicity. Hence, this case is similar to Case 4.7.3: e^S is given by the formula of that case with I_8 in place of I_7. Here,

$$\theta_1 = \sqrt{\tfrac{1}{3}\left[a - \sqrt{q}\left(\cos(\varphi/3) + \sqrt{3}\sin(\varphi/3)\right)\right]}, \quad \theta_2 = \sqrt{\tfrac{1}{3}\left[a - \sqrt{q}\left(\cos(\varphi/3) - \sqrt{3}\sin(\varphi/3)\right)\right]}, \quad \theta_3 = \sqrt{\tfrac{1}{3}\left[a + 2\sqrt{q}\cos(\varphi/3)\right]},$$

where φ = arccos(p/(2q^{3/2})), p = 2a^3 − 9ab + 27c, and q = a^2 − 3b.

Case 4.8.7

The eigenvalues of S are ±θ_1 i, ±θ_2 i, ±θ_3 i, ±θ_4 i, θ_4 > θ_3 > θ_2 > θ_1 > 0, so the characteristic polynomial of S is

$$P(X) = (X^2+\theta_1^2)(X^2+\theta_2^2)(X^2+\theta_3^2)(X^2+\theta_4^2)$$
$$= X^8 + (\theta_1^2+\theta_2^2+\theta_3^2+\theta_4^2)X^6 + (\theta_1^2\theta_2^2+\theta_1^2\theta_3^2+\theta_1^2\theta_4^2+\theta_2^2\theta_3^2+\theta_2^2\theta_4^2+\theta_3^2\theta_4^2)X^4 + (\theta_1^2\theta_2^2\theta_3^2+\theta_1^2\theta_2^2\theta_4^2+\theta_1^2\theta_3^2\theta_4^2+\theta_2^2\theta_3^2\theta_4^2)X^2 + \theta_1^2\theta_2^2\theta_3^2\theta_4^2.$$

Thus, Cayley-Hamilton implies

$$S^8 = -(\theta_1^2+\theta_2^2+\theta_3^2+\theta_4^2)S^6 - (\theta_1^2\theta_2^2+\theta_1^2\theta_3^2+\theta_1^2\theta_4^2+\theta_2^2\theta_3^2+\theta_2^2\theta_4^2+\theta_3^2\theta_4^2)S^4 - (\theta_1^2\theta_2^2\theta_3^2+\theta_1^2\theta_2^2\theta_4^2+\theta_1^2\theta_3^2\theta_4^2+\theta_2^2\theta_3^2\theta_4^2)S^2 - \theta_1^2\theta_2^2\theta_3^2\theta_4^2 I_8.$$

Theorem 3.8 allows us to use the formula S = θ_1 S_1 + θ_2 S_2 + θ_3 S_3 + θ_4 S_4 for some skew-symmetric matrices S_1, S_2, S_3, S_4, and to extend it to

$$S = \theta_1 S_1 + \theta_2 S_2 + \theta_3 S_3 + \theta_4 S_4, \tag{4.8.1}$$
$$S^2 = \theta_1^2 S_1^2 + \theta_2^2 S_2^2 + \theta_3^2 S_3^2 + \theta_4^2 S_4^2, \tag{4.8.2}$$
$$S^3 = -\theta_1^3 S_1 - \theta_2^3 S_2 - \theta_3^3 S_3 - \theta_4^3 S_4, \tag{4.8.3}$$
$$S^4 = -\theta_1^4 S_1^2 - \theta_2^4 S_2^2 - \theta_3^4 S_3^2 - \theta_4^4 S_4^2, \tag{4.8.4}$$
$$S^5 = \theta_1^5 S_1 + \theta_2^5 S_2 + \theta_3^5 S_3 + \theta_4^5 S_4, \tag{4.8.5}$$
$$S^6 = \theta_1^6 S_1^2 + \theta_2^6 S_2^2 + \theta_3^6 S_3^2 + \theta_4^6 S_4^2, \tag{4.8.6}$$
$$S^7 = -\theta_1^7 S_1 - \theta_2^7 S_2 - \theta_3^7 S_3 - \theta_4^7 S_4, \tag{4.8.7}$$
$$S^8 = -\theta_1^8 S_1^2 - \theta_2^8 S_2^2 - \theta_3^8 S_3^2 - \theta_4^8 S_4^2. \tag{4.8.8}$$

We use Equations (4.8.1), (4.8.3), (4.8.5), and (4.8.7) to solve for S_1, S_2, S_3, and S_4:

$$S_1 = -\frac{\theta_2^2\theta_3^2\theta_4^2 S + (\theta_2^2\theta_3^2+\theta_2^2\theta_4^2+\theta_3^2\theta_4^2)S^3 + (\theta_2^2+\theta_3^2+\theta_4^2)S^5 + S^7}{\theta_1(\theta_1^2-\theta_2^2)(\theta_1^2-\theta_3^2)(\theta_1^2-\theta_4^2)}, \tag{4.8.9}$$
$$S_2 = -\frac{\theta_1^2\theta_3^2\theta_4^2 S + (\theta_1^2\theta_3^2+\theta_1^2\theta_4^2+\theta_3^2\theta_4^2)S^3 + (\theta_1^2+\theta_3^2+\theta_4^2)S^5 + S^7}{\theta_2(\theta_2^2-\theta_1^2)(\theta_2^2-\theta_3^2)(\theta_2^2-\theta_4^2)}, \tag{4.8.10}$$
$$S_3 = -\frac{\theta_1^2\theta_2^2\theta_4^2 S + (\theta_1^2\theta_2^2+\theta_1^2\theta_4^2+\theta_2^2\theta_4^2)S^3 + (\theta_1^2+\theta_2^2+\theta_4^2)S^5 + S^7}{\theta_3(\theta_3^2-\theta_1^2)(\theta_3^2-\theta_2^2)(\theta_3^2-\theta_4^2)}, \tag{4.8.11}$$
$$S_4 = -\frac{\theta_1^2\theta_2^2\theta_3^2 S + (\theta_1^2\theta_2^2+\theta_1^2\theta_3^2+\theta_2^2\theta_3^2)S^3 + (\theta_1^2+\theta_2^2+\theta_3^2)S^5 + S^7}{\theta_4(\theta_4^2-\theta_1^2)(\theta_4^2-\theta_2^2)(\theta_4^2-\theta_3^2)}. \tag{4.8.12}$$

Similarly, we use Eq. (4.8.2), (4.8.4), (4.8.6), and (4.8.8) to get S_1^2, S_2^2, S_3^2, and S_4^2:

$$S_1^2 = -\frac{\theta_2^2\theta_3^2\theta_4^2 S^2 + (\theta_2^2\theta_3^2+\theta_2^2\theta_4^2+\theta_3^2\theta_4^2)S^4 + (\theta_2^2+\theta_3^2+\theta_4^2)S^6 + S^8}{\theta_1^2(\theta_1^2-\theta_2^2)(\theta_1^2-\theta_3^2)(\theta_1^2-\theta_4^2)}, \tag{4.8.13}$$
$$S_2^2 = -\frac{\theta_1^2\theta_3^2\theta_4^2 S^2 + (\theta_1^2\theta_3^2+\theta_1^2\theta_4^2+\theta_3^2\theta_4^2)S^4 + (\theta_1^2+\theta_3^2+\theta_4^2)S^6 + S^8}{\theta_2^2(\theta_2^2-\theta_1^2)(\theta_2^2-\theta_3^2)(\theta_2^2-\theta_4^2)}, \tag{4.8.14}$$
$$S_3^2 = -\frac{\theta_1^2\theta_2^2\theta_4^2 S^2 + (\theta_1^2\theta_2^2+\theta_1^2\theta_4^2+\theta_2^2\theta_4^2)S^4 + (\theta_1^2+\theta_2^2+\theta_4^2)S^6 + S^8}{\theta_3^2(\theta_3^2-\theta_1^2)(\theta_3^2-\theta_2^2)(\theta_3^2-\theta_4^2)}, \tag{4.8.15}$$
$$S_4^2 = -\frac{\theta_1^2\theta_2^2\theta_3^2 S^2 + (\theta_1^2\theta_2^2+\theta_1^2\theta_3^2+\theta_2^2\theta_3^2)S^4 + (\theta_1^2+\theta_2^2+\theta_3^2)S^6 + S^8}{\theta_4^2(\theta_4^2-\theta_1^2)(\theta_4^2-\theta_2^2)(\theta_4^2-\theta_3^2)}. \tag{4.8.16}$$

But since n = 8 is even and S is invertible, Cayley-Hamilton gives

$$S_1^2 = \frac{\theta_2^2\theta_3^2\theta_4^2 I_8 + (\theta_2^2\theta_3^2+\theta_2^2\theta_4^2+\theta_3^2\theta_4^2)S^2 + (\theta_2^2+\theta_3^2+\theta_4^2)S^4 + S^6}{(\theta_1^2-\theta_2^2)(\theta_1^2-\theta_3^2)(\theta_1^2-\theta_4^2)},$$
$$S_2^2 = \frac{\theta_1^2\theta_3^2\theta_4^2 I_8 + (\theta_1^2\theta_3^2+\theta_1^2\theta_4^2+\theta_3^2\theta_4^2)S^2 + (\theta_1^2+\theta_3^2+\theta_4^2)S^4 + S^6}{(\theta_2^2-\theta_1^2)(\theta_2^2-\theta_3^2)(\theta_2^2-\theta_4^2)},$$
$$S_3^2 = \frac{\theta_1^2\theta_2^2\theta_4^2 I_8 + (\theta_1^2\theta_2^2+\theta_1^2\theta_4^2+\theta_2^2\theta_4^2)S^2 + (\theta_1^2+\theta_2^2+\theta_4^2)S^4 + S^6}{(\theta_3^2-\theta_1^2)(\theta_3^2-\theta_2^2)(\theta_3^2-\theta_4^2)},$$
$$S_4^2 = \frac{\theta_1^2\theta_2^2\theta_3^2 I_8 + (\theta_1^2\theta_2^2+\theta_1^2\theta_3^2+\theta_2^2\theta_3^2)S^2 + (\theta_1^2+\theta_2^2+\theta_3^2)S^4 + S^6}{(\theta_4^2-\theta_1^2)(\theta_4^2-\theta_2^2)(\theta_4^2-\theta_3^2)}.$$

To compress the result, set T_{jk} = θ_j^2 − θ_k^2 for j, k ∈ {1, 2, 3, 4}; for each j = 1, 2, 3, 4, let U_j = θ_a^2θ_b^2 + θ_a^2θ_c^2 + θ_b^2θ_c^2, where a, b, c ∈ {1, 2, 3, 4}\{j} are distinct; and let V_{abc} = θ_a^2 + θ_b^2 + θ_c^2 for a, b, c ∈ {1, 2, 3, 4}. Putting it all together in

$$e^S = I_8 + \sum_{j=1}^{4}\left[\sin\theta_j\, S_j + (1 - \cos\theta_j)\, S_j^2\right],$$

we get

$$e^S = \frac{\theta_2^2\theta_3^2\theta_4^2 T_{32}T_{42}T_{43}\cos\theta_1 - \theta_1^2\theta_3^2\theta_4^2 T_{31}T_{41}T_{43}\cos\theta_2 + \theta_1^2\theta_2^2\theta_4^2 T_{21}T_{41}T_{42}\cos\theta_3 - \theta_1^2\theta_2^2\theta_3^2 T_{21}T_{31}T_{32}\cos\theta_4}{T_{41}T_{42}T_{43}T_{31}T_{32}T_{21}}\, I_8$$
$$+ \frac{\theta_2^3\theta_3^3\theta_4^3 T_{32}T_{42}T_{43}\sin\theta_1 - \theta_1^3\theta_3^3\theta_4^3 T_{31}T_{41}T_{43}\sin\theta_2 + \theta_1^3\theta_2^3\theta_4^3 T_{21}T_{41}T_{42}\sin\theta_3 - \theta_1^3\theta_2^3\theta_3^3 T_{21}T_{31}T_{32}\sin\theta_4}{\theta_1\theta_2\theta_3\theta_4\, T_{41}T_{42}T_{43}T_{31}T_{32}T_{21}}\, S$$
$$+ \frac{U_1 T_{32}T_{42}T_{43}\cos\theta_1 - U_2 T_{31}T_{41}T_{43}\cos\theta_2 + U_3 T_{21}T_{41}T_{42}\cos\theta_3 - U_4 T_{21}T_{31}T_{32}\cos\theta_4}{T_{41}T_{42}T_{43}T_{31}T_{32}T_{21}}\, S^2$$
$$+ \frac{\theta_2\theta_3\theta_4 U_1 T_{32}T_{42}T_{43}\sin\theta_1 - \theta_1\theta_3\theta_4 U_2 T_{31}T_{41}T_{43}\sin\theta_2 + \theta_1\theta_2\theta_4 U_3 T_{21}T_{41}T_{42}\sin\theta_3 - \theta_1\theta_2\theta_3 U_4 T_{21}T_{31}T_{32}\sin\theta_4}{\theta_1\theta_2\theta_3\theta_4\, T_{41}T_{42}T_{43}T_{31}T_{32}T_{21}}\, S^3$$
$$+ \frac{V_{234} T_{32}T_{42}T_{43}\cos\theta_1 - V_{134} T_{31}T_{41}T_{43}\cos\theta_2 + V_{124} T_{21}T_{41}T_{42}\cos\theta_3 - V_{123} T_{21}T_{31}T_{32}\cos\theta_4}{T_{41}T_{42}T_{43}T_{31}T_{32}T_{21}}\, S^4$$
$$+ \frac{\theta_2\theta_3\theta_4 V_{234} T_{32}T_{42}T_{43}\sin\theta_1 - \theta_1\theta_3\theta_4 V_{134} T_{31}T_{41}T_{43}\sin\theta_2 + \theta_1\theta_2\theta_4 V_{124} T_{21}T_{41}T_{42}\sin\theta_3 - \theta_1\theta_2\theta_3 V_{123} T_{21}T_{31}T_{32}\sin\theta_4}{\theta_1\theta_2\theta_3\theta_4\, T_{41}T_{42}T_{43}T_{31}T_{32}T_{21}}\, S^5$$
$$+ \frac{T_{32}T_{42}T_{43}\cos\theta_1 - T_{31}T_{41}T_{43}\cos\theta_2 + T_{21}T_{41}T_{42}\cos\theta_3 - T_{21}T_{31}T_{32}\cos\theta_4}{T_{41}T_{42}T_{43}T_{31}T_{32}T_{21}}\, S^6$$
$$+ \frac{\theta_2\theta_3\theta_4 T_{32}T_{42}T_{43}\sin\theta_1 - \theta_1\theta_3\theta_4 T_{31}T_{41}T_{43}\sin\theta_2 + \theta_1\theta_2\theta_4 T_{21}T_{41}T_{42}\sin\theta_3 - \theta_1\theta_2\theta_3 T_{21}T_{31}T_{32}\sin\theta_4}{\theta_1\theta_2\theta_3\theta_4\, T_{41}T_{42}T_{43}T_{31}T_{32}T_{21}}\, S^7.$$

Here,

$$\theta_1 = \tfrac{1}{2}\sqrt{a - 2R - 2D}, \quad \theta_2 = \tfrac{1}{2}\sqrt{a - 2R + 2D}, \quad \theta_3 = \tfrac{1}{2}\sqrt{a + 2R - 2E}, \quad \theta_4 = \tfrac{1}{2}\sqrt{a + 2R + 2E},$$

where

$$R = \tfrac{1}{2}\sqrt{a^2 - 4b + 4y}, \quad D = \tfrac{1}{2}\sqrt{3a^2 - 4R^2 - 8b + (4ab - 8c - a^3)/R}, \quad E = \tfrac{1}{2}\sqrt{3a^2 - 4R^2 - 8b - (4ab - 8c - a^3)/R},$$

and

$$y = \tfrac{1}{3}\left[b + 2\sqrt{q}\cos(\varphi/3)\right],$$

with φ = arccos(−p/(2q^{3/2})), p = −2b^3 + 9abc + 72bd − 27c^2 − 27a^2 d, and q = b^2 − 3ac + 12d.

4.9 S ∈ M_9

The characteristic equation of S has the form P(X) = X(X^8 + aX^6 + bX^4 + cX^2 + d).

Case 4.9.1

If S has two distinct nonzero eigenvalues ±θi, θ > 0, and zero, each with nonzero multiplicity, then this case is similar to Case 4.3, namely

$$e^S = I_9 + \frac{\sin\theta}{\theta} S + \frac{1 - \cos\theta}{\theta^2} S^2.$$

Here, θ = √(a/m), where m ∈ {1, 2, 3, 4} is the multiplicity of θi.

Case 4.9.2

The eigenvalues of S are ±θ_1 i, ±θ_2 i, θ_2 > θ_1 > 0, and zero, each with nonzero multiplicity. Hence, this case is similar to Case 4.5.2: e^S is given by the formula of that case with I_9 in place of I_5. Here, if the multiplicity of zero is five, then

$$\theta_1 = \sqrt{\frac{a - \sqrt{a^2 - 4b}}{2}} \quad \text{and} \quad \theta_2 = \sqrt{\frac{a + \sqrt{a^2 - 4b}}{2}}.$$

If the multiplicity of zero is three, then

$$\theta_1 = \sqrt{\frac{a - m_2\sqrt{a^2 - 3b}}{3}} \quad \text{and} \quad \theta_2 = \sqrt{\frac{a + m_1\sqrt{a^2 - 3b}}{3}},$$

where m_j is the multiplicity of θ_j i, with m_1 ∈ {1, 2} and m_1 + m_2 = 3.

If the multiplicity of zero is one, and θ_1 i and θ_2 i each have multiplicity two, then

$$\theta_1 = \tfrac{1}{2}\sqrt{a - \sqrt{3a^2 - 8b}} \quad \text{and} \quad \theta_2 = \tfrac{1}{2}\sqrt{a + \sqrt{3a^2 - 8b}}.$$

Otherwise, if the multiplicity of zero is one and the multiplicities of θ_1 i and θ_2 i are unequal, then

$$\theta_1 = \tfrac{1}{2}\sqrt{a - \tfrac{1}{m_1}\sqrt{9a^2 - 24b}} \quad \text{and} \quad \theta_2 = \tfrac{1}{2}\sqrt{a + \tfrac{1}{m_2}\sqrt{9a^2 - 24b}},$$

where m_j is the multiplicity of θ_j i, with m_1 ∈ {1, 3} and m_1 + m_2 = 4.

Case 4.9.3

The eigenvalues of S are ±θ_1 i, ±θ_2 i, ±θ_3 i, θ_3 > θ_2 > θ_1 > 0, and zero, each with nonzero multiplicity. Hence, this case is similar to Case 4.7.3: e^S is given by the formula of that case with I_9 in place of I_7. Here, if the multiplicity of zero is three, then

$$\theta_1 = \sqrt{\tfrac{1}{3}\left[a - \sqrt{q}\left(\cos(\varphi/3) + \sqrt{3}\sin(\varphi/3)\right)\right]}, \quad \theta_2 = \sqrt{\tfrac{1}{3}\left[a - \sqrt{q}\left(\cos(\varphi/3) - \sqrt{3}\sin(\varphi/3)\right)\right]}, \quad \theta_3 = \sqrt{\tfrac{1}{3}\left[a + 2\sqrt{q}\cos(\varphi/3)\right]},$$

where φ = arccos(p/(2q^{3/2})), p = 2a^3 − 9ab + 27c, and q = a^2 − 3b.

Otherwise, if the multiplicity of zero is one, then one θ_j i has multiplicity two, and the other two, call them θ_k i and θ_l i (θ_k < θ_l), have multiplicity one. Then

$$\theta_j = \sqrt{\frac{a}{4} + \frac{\sqrt{q}}{3}\,\Phi_j},$$

where

$$\Phi_j = \begin{cases} -\cos(\varphi/3) - \sqrt{3}\sin(\varphi/3), & j = 1, \\ -\cos(\varphi/3) + \sqrt{3}\sin(\varphi/3), & j = 2, \\ 2\cos(\varphi/3), & j = 3, \end{cases}$$

φ = arccos(−p/(2q^{3/2})), p = −(27/32)(a^3 − 4ab + 8c), and q = (9a^2/16) − (3b/2). Furthermore,

$$\theta_k = \sqrt{\frac{\alpha - \sqrt{\alpha^2 - 4\beta}}{2}} \quad \text{and} \quad \theta_l = \sqrt{\frac{\alpha + \sqrt{\alpha^2 - 4\beta}}{2}},$$

where α = a − 2θ_j^2 and β = b − 2aθ_j^2 + 3θ_j^4.

Case 4.9.4

The eigenvalues of S are ±θ_1 i, ±θ_2 i, ±θ_3 i, ±θ_4 i, θ_4 > θ_3 > θ_2 > θ_1 > 0, and zero. Since S is not invertible, we substitute Equations (4.8.9) to (4.8.16) from Case 4.8.7 into

$$e^S = I_9 + \sum_{j=1}^{4}\left[\sin\theta_j\, S_j + (1 - \cos\theta_j)\, S_j^2\right].$$

With T_{jk} = θ_j^2 − θ_k^2, U_j and V_{abc} as in Case 4.8.7, and Π_j = θ_aθ_bθ_c for a, b, c ∈ {1, 2, 3, 4}\{j} distinct, we get

$$e^S = I_9 + \frac{\Pi_1^3 T_{32}T_{42}T_{43}\sin\theta_1 - \Pi_2^3 T_{31}T_{41}T_{43}\sin\theta_2 + \Pi_3^3 T_{21}T_{41}T_{42}\sin\theta_3 - \Pi_4^3 T_{21}T_{31}T_{32}\sin\theta_4}{\theta_1\theta_2\theta_3\theta_4\, T_{41}T_{42}T_{43}T_{31}T_{32}T_{21}}\, S$$
$$+ \frac{\Pi_1^4 T_{32}T_{42}T_{43}(1-\cos\theta_1) - \Pi_2^4 T_{31}T_{41}T_{43}(1-\cos\theta_2) + \Pi_3^4 T_{21}T_{41}T_{42}(1-\cos\theta_3) - \Pi_4^4 T_{21}T_{31}T_{32}(1-\cos\theta_4)}{\theta_1^2\theta_2^2\theta_3^2\theta_4^2\, T_{41}T_{42}T_{43}T_{31}T_{32}T_{21}}\, S^2$$
$$+ \frac{\Pi_1 U_1 T_{32}T_{42}T_{43}\sin\theta_1 - \Pi_2 U_2 T_{31}T_{41}T_{43}\sin\theta_2 + \Pi_3 U_3 T_{21}T_{41}T_{42}\sin\theta_3 - \Pi_4 U_4 T_{21}T_{31}T_{32}\sin\theta_4}{\theta_1\theta_2\theta_3\theta_4\, T_{41}T_{42}T_{43}T_{31}T_{32}T_{21}}\, S^3$$
$$+ \frac{\Pi_1^2 U_1 T_{32}T_{42}T_{43}(1-\cos\theta_1) - \Pi_2^2 U_2 T_{31}T_{41}T_{43}(1-\cos\theta_2) + \Pi_3^2 U_3 T_{21}T_{41}T_{42}(1-\cos\theta_3) - \Pi_4^2 U_4 T_{21}T_{31}T_{32}(1-\cos\theta_4)}{\theta_1^2\theta_2^2\theta_3^2\theta_4^2\, T_{41}T_{42}T_{43}T_{31}T_{32}T_{21}}\, S^4$$
$$+ \frac{\Pi_1 V_{234} T_{32}T_{42}T_{43}\sin\theta_1 - \Pi_2 V_{134} T_{31}T_{41}T_{43}\sin\theta_2 + \Pi_3 V_{124} T_{21}T_{41}T_{42}\sin\theta_3 - \Pi_4 V_{123} T_{21}T_{31}T_{32}\sin\theta_4}{\theta_1\theta_2\theta_3\theta_4\, T_{41}T_{42}T_{43}T_{31}T_{32}T_{21}}\, S^5$$
$$+ \frac{\Pi_1^2 V_{234} T_{32}T_{42}T_{43}(1-\cos\theta_1) - \Pi_2^2 V_{134} T_{31}T_{41}T_{43}(1-\cos\theta_2) + \Pi_3^2 V_{124} T_{21}T_{41}T_{42}(1-\cos\theta_3) - \Pi_4^2 V_{123} T_{21}T_{31}T_{32}(1-\cos\theta_4)}{\theta_1^2\theta_2^2\theta_3^2\theta_4^2\, T_{41}T_{42}T_{43}T_{31}T_{32}T_{21}}\, S^6$$
$$+ \frac{\Pi_1 T_{32}T_{42}T_{43}\sin\theta_1 - \Pi_2 T_{31}T_{41}T_{43}\sin\theta_2 + \Pi_3 T_{21}T_{41}T_{42}\sin\theta_3 - \Pi_4 T_{21}T_{31}T_{32}\sin\theta_4}{\theta_1\theta_2\theta_3\theta_4\, T_{41}T_{42}T_{43}T_{31}T_{32}T_{21}}\, S^7$$
$$+ \frac{\Pi_1^2 T_{32}T_{42}T_{43}(1-\cos\theta_1) - \Pi_2^2 T_{31}T_{41}T_{43}(1-\cos\theta_2) + \Pi_3^2 T_{21}T_{41}T_{42}(1-\cos\theta_3) - \Pi_4^2 T_{21}T_{31}T_{32}(1-\cos\theta_4)}{\theta_1^2\theta_2^2\theta_3^2\theta_4^2\, T_{41}T_{42}T_{43}T_{31}T_{32}T_{21}}\, S^8.$$

Here, θ_1, θ_2, θ_3, and θ_4 are given by the same quartic formulas (R, D, E, y, φ, p, q) as in Case 4.8.7.

4.10 Special Cases for S ∈ Mn, n ≥ 10

The formulas in Sections 4.1 to 4.9 can be extended to some real skew-symmetric matrices S ∈ Mn of arbitrary size n; only the formulas for the θ_j's change. For example, if S has only one eigenpair ±θi and is nonsingular (this forces n to be even), then the formula for e^S is the same as in Case 4.2, but θ = \sqrt{2a/n}, where a is the coefficient of X^{n-2} in the characteristic polynomial of S. On the other hand, if S is not invertible but has only one eigenpair ±θi, then e^S has the same form as Rodrigues' Formula (Case 4.3), where θ = \sqrt{2a/(n-m)} and m is the multiplicity of zero in the characteristic polynomial of S. We can obtain the closed formulas of e^S in terms of the appropriate θ_j's for j ∈ {1,...,p} by simply solving a system of p linear equations, as sketched below. However, if n ≥ 10 and p ≥ 5, we cannot obtain the θ_j's in terms of the entries of S, since no general formula in radicals exists for the roots of polynomials of degree five or higher; thus, the θ_j's could only be obtained through numerical methods for solving polynomials.
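To make the linear-system remark concrete, the following minimal Matlab sketch (our illustration, not part of the thesis code) computes e^S from known distinct θ_j's stored in a vector th. It uses the relations S^{2m-1} = \sum_j (-1)^{m-1}\theta_j^{2m-1} S_j and S^{2m} = \sum_j (-1)^{m-1}\theta_j^{2m} S_j^2 from the method of Gallier and Xu [2], so the coefficients of the powers of S follow from two p × p linear solves; the names wodd and weven are ours:

% Sketch of the linear-system approach for arbitrary n (illustrative).
% Assumes S and the vector th of distinct positive theta_j's are in the
% workspace; zero eigenvalues need no special handling here, since they
% contribute only to the identity term.
p = numel(th);
A = zeros(p); B = zeros(p);
for m = 1:p
    for j = 1:p
        A(m,j) = (-1)^(m-1)*th(j)^(2*m-1);   % weights of S_j in S^(2m-1)
        B(m,j) = (-1)^(m-1)*th(j)^(2*m);     % weights of S_j^2 in S^(2m)
    end
end
wodd  = A.' \ sin(th(:));        % coefficients of S, S^3, ..., S^(2p-1)
weven = B.' \ (1 - cos(th(:)));  % coefficients of S^2, S^4, ..., S^(2p)
eS = eye(size(S,1));
for m = 1:p
    eS = eS + wodd(m)*S^(2*m-1) + weven(m)*S^(2*m);
end

For p = 1 this reproduces Rodrigues' Formula, since the two 1 × 1 solves give sinθ/θ and (1 − cosθ)/θ².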

Chapter 5

Computer Results and Conclusions

In this chapter, we will test the effectiveness of our formulas in Cases 4.2 to 4.9 by implementing them in our Matlab function skewexpm and comparing them to expm. For each case, we randomly generate one thousand skew-symmetric matrices using our function skewsymgenerator and compute the exponential of each of them via our skewexpm function (see Appendix B for the codes). The entries of the generated matrices are uniformly distributed integers on [−1000,1000], so we make the reasonable assumption that the matrices have distinct eigenvalues and use the appropriate formula for each matrix.

For example, in Figure 5.1 we compare the two methods by examining the orthogonality errors ||I_n − (e^S)^T e^S|| of the generated matrices S ∈ M2, as discussed in Chapter 2. This is an interesting case because it shows that both expm and our formulas produce exactly the same error every time. Figure 5.2 shows a big improvement for the case n = 3, since our closed formula always has a smaller error than expm. Figures 5.3 to 5.6 show that skewexpm still does a better job at computing the matrix exponentials than Matlab for the cases n = 4, 5, 6, and 7, but with a few outliers where Matlab's function has a smaller error. On the other hand, in Figures 5.7 and 5.8, there is a significant number of randomly generated skew-symmetric matrices where expm outperforms our closed formulas for the cases n = 8 and 9.
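The per-size experiment is a simple driver loop; the following sketch is our illustration rather than thesis code (the choice of the matrix 2-norm in norm and the names errA, errB are our assumptions; the thesis does not pin down the norm):

% Illustrative driver for one size n (uses the functions of Appendix B).
n = 5; trials = 1000;
errA = zeros(trials,1); errB = zeros(trials,1);
for t = 1:trials
    S = skewsymgenerator(n, 1000);    % integer entries on [-1000, 1000]
    EA = expm(S);     errA(t) = norm(eye(n) - EA'*EA);
    EB = skewexpm(S); errB(t) = norm(eye(n) - EB'*EB);
end
semilogy(1:trials, errA, '.', 1:trials, errB, '.');
legend('Matlab''s expm', 'Closed Formula');
p = mean(errB < errA)   % fraction of wins for the closed formula (Table 5.1)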

[Plot: orthogonality errors on a logarithmic scale (about 10^−21 to 10^−15) for Matlab's expm and the Closed Formula; horizontal axis: Generated Matrices S of size n = 2.]

Figure 5.1: Case 4.2 (n = 2) Closed Formula Error vs. Matlab's expm

Orthogonality error using our closed formulas implemented on skewexpm vs. Matlab's expm on one thousand randomly generated matrices S of size 2. The entries of the matrices are uniformly distributed integers on [−1000,1000]. The errors for both methods are exactly the same for this case.

[Plot: orthogonality errors on a logarithmic scale (about 10^−17 to 10^−12) for Matlab's expm and the Closed Formula; horizontal axis: Generated Matrices S of size n = 3.]

Figure 5.2: Case 4.3 (n = 3) Closed Formula Error vs. Matlab's expm

Orthogonality error using our closed formulas implemented on skewexpm vs. Matlab's expm on one thousand randomly generated matrices S of size 3. The entries of the matrices are uniformly distributed integers on [−1000,1000]. The errors of our closed formulas are always smaller than the errors from expm.

[Plot: orthogonality errors on a logarithmic scale (about 10^−16 to 10^−10) for Matlab's expm and the Closed Formula; horizontal axis: Generated Matrices S of size n = 4.]

Figure 5.3: Case 4.4.3 (n = 4) Closed Formula Error vs. Matlab's expm

Orthogonality error using our closed formulas implemented on skewexpm vs. Matlab's expm on one thousand randomly generated matrices S of size 4. The entries of the matrices are uniformly distributed integers on [−1000,1000]. The errors of our closed formulas are smaller than those from expm for 976 matrices.

[Plot: orthogonality errors on a logarithmic scale (about 10^−16 to 10^−12) for Matlab's expm and the Closed Formula; horizontal axis: Generated Matrices S of size n = 5.]

Figure 5.4: Case 4.5.2 (n = 5) Closed Formula Error vs. Matlab's expm

Orthogonality error using our closed formulas implemented on skewexpm vs. Matlab's expm on one thousand randomly generated matrices S of size 5. The entries of the matrices are uniformly distributed integers on [−1000,1000]. The errors of our closed formulas are smaller than those from expm for 999 matrices, which is remarkable.

[Plot: orthogonality errors on a logarithmic scale (about 10^−16 to 10^−9) for Matlab's expm and the Closed Formula; horizontal axis: Generated Matrices S of size n = 6.]

Figure 5.5: Case 4.6.5 (n = 6) Closed Formula Error vs. Matlab's expm

Orthogonality error using our closed formulas implemented on skewexpm vs. Matlab's expm on one thousand randomly generated matrices S of size 6. The entries of the matrices are uniformly distributed integers on [−1000,1000]. The errors of our closed formulas are smaller than those from expm for 929 matrices.

[Plot: orthogonality errors on a logarithmic scale (about 10^−16 to 10^−11) for Matlab's expm and the Closed Formula; horizontal axis: Generated Matrices S of size n = 7.]

Figure 5.6: Case 4.7.3 (n = 7) Closed Formula Error vs. Matlab's expm

Orthogonality error using our closed formulas implemented on skewexpm vs. Matlab's expm on one thousand randomly generated matrices S of size 7. The entries of the matrices are uniformly distributed integers on [−1000,1000]. The errors of our closed formulas are smaller than those from expm for 961 matrices.

[Plot: orthogonality errors on a logarithmic scale (about 10^−15 to 10^−9) for Matlab's expm and the Closed Formula; horizontal axis: Generated Matrices S of size n = 8.]

Figure 5.7: Case 4.8.7 (n = 8) Closed Formula Error vs. Matlab's expm

Orthogonality error using our closed formulas implemented on skewexpm vs. Matlab's expm on one thousand randomly generated matrices S of size 8. The entries of the matrices are uniformly distributed integers on [−1000,1000]. Here, we encounter a significant drop in performance of our closed formulas, as the errors of skewexpm are smaller than those from expm for only 744 matrices.

[Plot: orthogonality errors on a logarithmic scale (about 10^−15 to 10^−10) for Matlab's expm and the Closed Formula; horizontal axis: Generated Matrices S of size n = 9.]

Figure 5.8: Case 4.9.4 (n = 9) Closed Formula Error vs. Matlab's expm

Orthogonality error using our closed formulas implemented on skewexpm vs. Matlab's expm on one thousand randomly generated matrices S of size 9. The entries of the matrices are uniformly distributed integers on [−1000,1000]. We observe yet another significant drop in performance of our formulas because the errors of skewexpm are smaller than those from expm for just 606 matrices.

We will now examine the rate at which skewexpm is expected to outperform expm. Table 5.1 lists the values p, which represent the fraction of the one thousand randomly generated skew-symmetric matrices S for which the error ||I_n − (e^S)^T e^S|| is smaller when e^S is computed using our closed formulas in skewexpm. Observe that the table shows we can rely on skewexpm most of the time when n is of size up to 7, and that a significant drop in performance of our implemented closed formulas occurs at n = 8 and 9.

Table 5.1: Expected Performance Rate p of skewexpm against expm when using one thousand generated skew-symmetric matrices with integer entries uniformly distributed on [−1000,1000] for each case 2 ≤ n ≤ 9.

    Matrix Size n    Expected Performance Rate p
    2                1.000
    3                1.000
    4                0.976
    5                0.999
    6                0.929
    7                0.961
    8                0.744
    9                0.606

As a final remark, our formulas can be shown to always give a much smaller error than Matlab's expm if we generate matrices with very large entries. The skewsymgenerator function relies on Matlab's function randi, which does not accept powers of 10 larger than 10^15 as input. For instance, we push the size of the entries of S to the limit by generating skew-symmetric matrices of size 9 with uniformly distributed integer entries on [−10^15, 10^15], and obtain the errors illustrated in Figure 5.9. The results are dramatic: the orthogonality errors of skewexpm do not go higher than about 10^−10, whereas the errors obtained by expm do not go lower than 10^−2.
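This experiment amounts to changing one argument in the driver sketched at the beginning of this chapter; for instance (illustrative):

% Large-entry variant of the experiment (illustrative).
S = skewsymgenerator(9, 10^15);     % integer entries on [-1e15, 1e15]
E = skewexpm(S);
err = norm(eye(9) - E'*E)           % stays below about 1e-10 in our runs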

[Plot: orthogonality errors on a logarithmic scale (about 10^−15 to 10^5) for Matlab's expm and the Closed Formula; horizontal axis: Generated Matrices S of size n = 9.]

Figure 5.9: Case 4.9.4 (n = 9) Closed Formula Error vs. Matlab's expm with Matrices of Large Entries

Orthogonality error using our closed formulas implemented on skewexpm vs. Matlab's expm on one thousand randomly generated matrices S of size 9. The entries of the matrices are uniformly distributed integers on [−10^15, 10^15]. Our closed formula gives far more acceptable errors than expm because the orthogonality errors of skewexpm do not go higher than about 10^−10, whereas the errors obtained by expm do not go lower than 10^−2.

Conclusions

Gallier and Xu [2] established the method for finding the closed formulas presented in this thesis. What we did that is new is to use their method to find the actual formulas of e^S in terms of the θ_j's for real skew-symmetric matrices up to size 9, and to find each θ_j in terms of the entries of S. Furthermore, we implemented the formulas in Matlab and tested their effectiveness against expm, which uses the new scaling and squaring algorithm of Al-Mohy and Higham [1], on randomly generated skew-symmetric matrices with entries on [−1000,1000]. We find that our closed formulas produce a smaller orthogonality error often enough that we can rely on them over 92% of the time for matrices of size up to 7, but their effectiveness decreases significantly for the cases n = 8 and 9 (see Table 5.1). It is still open to further investigation why our closed formulas have a bigger error than expm for some of the generated matrices. For instance, we initially suspected that this occurs when the ratio of two different θ_j's of a particular matrix S is relatively large, so that the closed formula of e^S (with distinct eigenvalues) approaches the formula where not all eigenvalues are distinct. However, if we consider the matrix with the worst error from Figure 5.4, which illustrates the case where only one out of the thousand matrices had a worse error than expm, we obtain the outlier matrix

[    0    418   −747    390     42
  −418      0    627   −942    271
   747   −627      0    482    −64
  −390    942   −482      0     74
   −42   −271     64    −74      0 ]

where θ1 ≈ 235 and θ2 ≈ 1560, so the ratio θ1/θ2 ≈ 0.151 is not as extreme as we expected. Hence, there is still room for improvement and polishing of our closed formulas after more analysis is done.
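The quoted values are easy to reproduce from the spectrum; for instance (illustrative snippet, with S_out denoting the outlier matrix above):

% Quick numerical check of the outlier's thetas (illustrative).
th = sort(abs(imag(eig(S_out))));   % 0 plus the two conjugate pairs
th = th(th > 1e-6)                  % expect roughly 235 and 1560, each twice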

On the positive side, if we generate skew-symmetric matrices with integer entries that range from −10^15 to 10^15, we have empirically shown that we can expect our closed formulas in skewexpm to give a far better and acceptable orthogonality error than expm, as illustrated in Figure 5.9.

Bibliography

[1] A. Al-Mohy and N. Higham. A New Scaling and Squaring Algorithm for the Matrix Exponential. SIAM J. Matrix Anal. Appl., 31(3) (2009), pp. 970-989.

[2] J. Gallier and D. Xu. Computing Exponentials of Skew-Symmetric Matrices and Logarithms of Orthogonal Matrices. International Journal of Robotics and Automation, Vol. 17, No. 4, 2002.

[3] G. Golub and C. Van Loan. Matrix Computations, 4th Edition. Johns Hopkins University Press, 2013.

[4] B. Hall. Lie Groups, Lie Algebras, and Representations, 2nd Edition. Springer International Publishing, 2015.

[5] N. Higham. The Scaling and Squaring Method for the Matrix Exponential Revisited. SIAM J. Matrix Anal. Appl., 26(4) (2005), pp. 1179-1193.

[6] R. Horn and C. Johnson. Matrix Analysis, 2nd Edition. Cambridge University Press, 2013.

[7] C. Moler and C. Van Loan. Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later. SIAM Review, Vol. 45, No. 1 (Mar., 2003), pp. 3-49.

[8] A. Oshinuga. Lyapunov Exponents of Continuous Dynamical Systems. California State Polytechnic University, Pomona, 2010.

[9] R. Piamonte. Exponentials of Skew-Symmetric Matrices. California State Polytechnic University, Pomona, 2007.

[10] T. Politi. A Formula for the Exponential of a Real Skew-Symmetric Matrix of Order 4. BIT Numerical Mathematics, Vol. 41, Issue 4, pp. 842-845, 2001.

[11] H. von Bremen. Implementation of Approach to Compute the Lyapunov Characteristic Exponents for Continuous Dynamical Systems to Higher Dimensions. Journal of The Franklin Institute, Vol. 347, Issue 1 (Feb., 2010), pp. 315-338.

[12] E. Weisstein. Quartic Equation. A Wolfram Web Resource.

Appendix A

Roots of Polynomials

Here, we present the roots of general monic polynomials up to degree four.

A.1 The Linear Case and the Quadratic Formula

If X + a = 0, then the root is X = -a. Furthermore, if X^2 + aX + b = 0, then

X = \frac{-a \pm \sqrt{a^2 - 4b}}{2}.

A.2 The Cubic Formula

If X^3 + aX^2 + bX + c = 0, then, after slightly modifying the cubic formula in Piamonte [9], we obtain

X = \frac{1}{3}\left( -a + \zeta\,\sqrt[3]{\frac{-p + \sqrt{p^2 - 4q^3}}{2}} + \bar{\zeta}\,\sqrt[3]{\frac{-p - \sqrt{p^2 - 4q^3}}{2}} \right),

where p = 2a^3 - 9ab + 27c, q = a^2 - 3b, and \zeta is a cube root of unity; in other words, \zeta = e^{2\pi M i/3} for M \in \{0, 1, 2\}.
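As a quick numerical sanity check of this formula (our snippet, not part of the thesis; the sample coefficients are chosen so the cubic has three real roots, for which the principal cube roots pair up correctly):

% Check the cubic formula against roots() for a cubic with three real roots.
a = 2; b = -5; c = 1;
p = 2*a^3 - 9*a*b + 27*c;  q = a^2 - 3*b;
X = zeros(3,1);
for M = 0:2
    z = exp(2i*pi*M/3);    % the cube roots of unity
    X(M+1) = (-a + z*((-p + sqrt(p^2 - 4*q^3))/2)^(1/3) ...
                 + conj(z)*((-p - sqrt(p^2 - 4*q^3))/2)^(1/3))/3;
end
sort(real(X)), sort(roots([1 a b c]))   % the two lists should agree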

A.3 The Quartic Formula

The following result is taken directly from Weisstein [12]. If X^4 + aX^3 + bX^2 + cX + d = 0, then

X = \begin{cases} \tfrac{1}{4}\left(-a + 2R \pm 2D\right) \\ \tfrac{1}{4}\left(-a - 2R \pm 2E\right), \end{cases}

where

R = \tfrac{1}{2}\sqrt{a^2 - 4b + 4y},

D = \begin{cases} \tfrac{1}{2}\sqrt{3a^2 - 4R^2 - 8b + (4ab - 8c - a^3)/R}, & R \neq 0 \\ \tfrac{1}{2}\sqrt{3a^2 - 8b + 8\sqrt{y^2 - 4d}}, & R = 0, \end{cases}

E = \begin{cases} \tfrac{1}{2}\sqrt{3a^2 - 4R^2 - 8b - (4ab - 8c - a^3)/R}, & R \neq 0 \\ \tfrac{1}{2}\sqrt{3a^2 - 8b - 8\sqrt{y^2 - 4d}}, & R = 0, \end{cases}

and y is any real solution to the cubic equation

Y^3 - bY^2 + (ac - 4d)Y + (4bd - c^2 - a^2 d) = 0.
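Similarly, the quartic formula can be checked numerically (our snippet; the sample quartic X^4 + X^3 - 7X^2 - X + 6 factors as (X-1)(X+1)(X-2)(X+3), and R ≠ 0 for it, so the first branches of D and E apply):

% Check the quartic formula against roots() for a sample quartic.
a = 1; b = -7; c = -1; d = 6;
y = max(real(roots([1, -b, a*c - 4*d, 4*b*d - c^2 - a^2*d])));  % resolvent root
R = 0.5*sqrt(a^2 - 4*b + 4*y);
D = 0.5*sqrt(3*a^2 - 4*R^2 - 8*b + (4*a*b - 8*c - a^3)/R);
E = 0.5*sqrt(3*a^2 - 4*R^2 - 8*b - (4*a*b - 8*c - a^3)/R);
X = 0.25*[-a + 2*R + 2*D; -a + 2*R - 2*D; -a - 2*R + 2*E; -a - 2*R - 2*E];
sort(X), sort(roots([1 a b c d]))   % both should give -3, -1, 1, 2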

Appendix B

Matlab Codes

B.1 The Skewexpm Function

function eS = skewexpm(S)
% SKEWEXPM(S) computes the exponential of a real skew-symmetric matrix
% up to size 9 using the formulas from Diego Avalos's thesis.
% See also expm.
[n, m] = size(S);
if n ~= m
    error('Input must be a square real skew-symmetric matrix.');
elseif ~isequal(S + S', zeros(n))   % require S' = -S
    error('Input must be a real skew-symmetric matrix.');
elseif n > 9
    error('Input must be a real skew-symmetric matrix of size 9 or less.');
end
coeff = charpoly(S);
if isequal(S, zeros(n))
    eS = eye(n);
elseif n == 2
    a = coeff(3);
    th = sqrt(a);
    c0 = cos(th); c1 = sin(th)/th;
    eS = c0*eye(2) + c1*S;
elseif n == 3
    a = coeff(3);
    th = sqrt(a);
    c1 = sin(th)/th; c2 = (1-cos(th))/th^2;
    eS = eye(3) + c1*S + c2*S^2;
elseif n == 4
    a = coeff(3); b = coeff(5);
    k = unique(abs(roots([1, a, b])));
    if length(k) == 1                % one repeated eigenpair
        th = sqrt(a/2);
        c0 = cos(th); c1 = sin(th)/th;
        eS = c0*eye(4) + c1*S;
    elseif length(k) == 2 && b == 0  % one eigenpair plus zero
        th = sqrt(a);
        c1 = sin(th)/th; c2 = (1-cos(th))/th^2;
        eS = eye(4) + c1*S + c2*S^2;
    else                             % two distinct eigenpairs
        th1 = sqrt((a-sqrt(a^2-4*b))/2); th2 = sqrt((a+sqrt(a^2-4*b))/2);
        c0 = (th2^2*cos(th1)-th1^2*cos(th2))/(th2^2-th1^2);
        c1 = th2^2*sin(th1)/(th1*(th2^2-th1^2)) ...
            -th1^2*sin(th2)/(th2*(th2^2-th1^2));
        c2 = (cos(th1)-cos(th2))/(th2^2-th1^2);
        c3 = sin(th1)/(th1*(th2^2-th1^2))-sin(th2)/(th2*(th2^2-th1^2));
        eS = c0*eye(4) + c1*S + c2*S^2 + c3*S^3;
    end
elseif n == 5
    a = coeff(3); b = coeff(5);
    k = unique(abs(roots([1, a, b])));
    k = k(k~=0);
    e = imag(eig(S));
    if length(k) == 1                % one eigenpair (and zero)
        m = length(e(e~=0))/2;       % multiplicity of the eigenpair
        th = sqrt(a/m);
        c1 = sin(th)/th; c2 = (1-cos(th))/th^2;
        eS = eye(5) + c1*S + c2*S^2;
    else                             % two distinct eigenpairs and zero
        th1 = sqrt((a-sqrt(a^2-4*b))/2); th2 = sqrt((a+sqrt(a^2-4*b))/2);
        c1 = (th2^3*sin(th1)-th1^3*sin(th2))/(th1*th2*(th2^2-th1^2));
        c2 = (th2^4*(1-cos(th1))-th1^4*(1-cos(th2))) ...
            /(th1^2*th2^2*(th2^2-th1^2));
        c3 = (th2*sin(th1)-th1*sin(th2))/(th1*th2*(th2^2-th1^2));
        c4 = (th2^2*(1-cos(th1))-th1^2*(1-cos(th2))) ...
            /(th1^2*th2^2*(th2^2-th1^2));
        eS = eye(5) + c1*S + c2*S^2 + c3*S^3 + c4*S^4;
    end
elseif n == 6
    a = coeff(3); b = coeff(5); c = coeff(7);
    % This code is complete for all subcases where there are distinct
    % eigenvalues. The remaining subcases for size 6 and above are to be
    % included. Also, some codes may be simplified further to avoid
    % cancellation error.
    p = 2*a^3-9*a*b+27*c; q = a^2-3*b; r = sqrt(q);
    phi = acos(0.5*p/q^1.5);
    th1 = sqrt((1/3)*(a-r*cos(phi/3)-sqrt(3)*r*sin(phi/3)));
    th2 = sqrt((1/3)*(a-r*cos(phi/3)+sqrt(3)*r*sin(phi/3)));
    th3 = sqrt((1/3)*(a+2*r*cos(phi/3)));
    c0 = th2^2*th3^2*(th3^2-th2^2)*cos(th1) ...
        -th1^2*th3^2*(th3^2-th1^2)*cos(th2) ...
        +th1^2*th2^2*(th2^2-th1^2)*cos(th3);
    c0 = c0/((th3^2-th1^2)*(th3^2-th2^2)*(th2^2-th1^2));
    c1 = th2^3*th3^3*(th3^2-th2^2)*sin(th1) ...
        -th1^3*th3^3*(th3^2-th1^2)*sin(th2) ...
        +th1^3*th2^3*(th2^2-th1^2)*sin(th3);
    c1 = c1/(th1*th2*th3*(th3^2-th1^2)*(th3^2-th2^2)*(th2^2-th1^2));
    c2 = (th3^4-th2^4)*cos(th1)-(th3^4-th1^4)*cos(th2) ...
        +(th2^4-th1^4)*cos(th3);
    c2 = c2/((th3^2-th1^2)*(th3^2-th2^2)*(th2^2-th1^2));
    c3 = th2*th3*(th3^4-th2^4)*sin(th1) ...
        -th1*th3*(th3^4-th1^4)*sin(th2) ...
        +th1*th2*(th2^4-th1^4)*sin(th3);
    c3 = c3/(th1*th2*th3*(th3^2-th1^2)*(th3^2-th2^2)*(th2^2-th1^2));
    c4 = (th3^2-th2^2)*cos(th1)-(th3^2-th1^2)*cos(th2) ...
        +(th2^2-th1^2)*cos(th3);
    c4 = c4/((th3^2-th1^2)*(th3^2-th2^2)*(th2^2-th1^2));
    c5 = th2*th3*(th3^2-th2^2)*sin(th1) ...
        -th1*th3*(th3^2-th1^2)*sin(th2) ...
        +th1*th2*(th2^2-th1^2)*sin(th3);
    c5 = c5/(th1*th2*th3*(th3^2-th1^2)*(th3^2-th2^2)*(th2^2-th1^2));
    eS = c0*eye(6) + c1*S + c2*S^2 + c3*S^3 + c4*S^4 + c5*S^5;
elseif n == 7
    a = coeff(3); b = coeff(5); c = coeff(7);
    p = 2*a^3-9*a*b+27*c; q = a^2-3*b; r = sqrt(q);
    phi = acos(0.5*p/q^1.5);
    th1 = sqrt((1/3)*(a-r*cos(phi/3)-sqrt(3)*r*sin(phi/3)));
    th2 = sqrt((1/3)*(a-r*cos(phi/3)+sqrt(3)*r*sin(phi/3)));
    th3 = sqrt((1/3)*(a+2*r*cos(phi/3)));
    c1 = th2^3*th3^3*(th3^2-th2^2)*sin(th1) ...
        -th1^3*th3^3*(th3^2-th1^2)*sin(th2) ...
        +th1^3*th2^3*(th2^2-th1^2)*sin(th3);
    c1 = c1/(th1*th2*th3*(th3^2-th1^2)*(th3^2-th2^2)*(th2^2-th1^2));
    c2 = th2^4*th3^4*(th3^2-th2^2)*(1-cos(th1)) ...
        -th1^4*th3^4*(th3^2-th1^2)*(1-cos(th2)) ...
        +th1^4*th2^4*(th2^2-th1^2)*(1-cos(th3));
    c2 = c2/(th1^2*th2^2*th3^2*(th3^2-th1^2)*(th3^2-th2^2)*(th2^2-th1^2));
    c3 = th2*th3*(th3^4-th2^4)*sin(th1) ...
        -th1*th3*(th3^4-th1^4)*sin(th2) ...
        +th1*th2*(th2^4-th1^4)*sin(th3);
    c3 = c3/(th1*th2*th3*(th3^2-th1^2)*(th3^2-th2^2)*(th2^2-th1^2));
    c4 = th2^2*th3^2*(th3^4-th2^4)*(1-cos(th1)) ...
        -th1^2*th3^2*(th3^4-th1^4)*(1-cos(th2)) ...
        +th1^2*th2^2*(th2^4-th1^4)*(1-cos(th3));
    c4 = c4/(th1^2*th2^2*th3^2*(th3^2-th1^2)*(th3^2-th2^2)*(th2^2-th1^2));
    c5 = th2*th3*(th3^2-th2^2)*sin(th1) ...
        -th1*th3*(th3^2-th1^2)*sin(th2) ...
        +th1*th2*(th2^2-th1^2)*sin(th3);
    c5 = c5/(th1*th2*th3*(th3^2-th1^2)*(th3^2-th2^2)*(th2^2-th1^2));
    c6 = th2^2*th3^2*(th3^2-th2^2)*(1-cos(th1)) ...
        -th1^2*th3^2*(th3^2-th1^2)*(1-cos(th2)) ...
        +th1^2*th2^2*(th2^2-th1^2)*(1-cos(th3));
    c6 = c6/(th1^2*th2^2*th3^2*(th3^2-th1^2)*(th3^2-th2^2)*(th2^2-th1^2));
    eS = eye(7) + c1*S + c2*S^2 + c3*S^3 + c4*S^4 + c5*S^5 + c6*S^6;
elseif n == 8
    a = coeff(3); b = coeff(5); c = coeff(7); d = coeff(9);
    p = -2*b^3+9*a*b*c+72*b*d-27*c^2-27*a^2*d; q = b^2-3*a*c+12*d;
    phi = acos(-0.5*p/q^1.5);
    y = (1/3)*(b+2*sqrt(q)*cos(phi/3));
    R = 0.5*sqrt(a^2-4*b+4*y);
    D = 0.5*sqrt(3*a^2-4*R^2-8*b+(4*a*b-8*c-a^3)/R);
    E = 0.5*sqrt(3*a^2-4*R^2-8*b-(4*a*b-8*c-a^3)/R);
    th1 = 0.5*sqrt(a-2*R-2*D); th2 = 0.5*sqrt(a-2*R+2*D);
    th3 = 0.5*sqrt(a+2*R-2*E); th4 = 0.5*sqrt(a+2*R+2*E);
    c0 = P(1)^2/(T(4,1)*T(3,1)*T(2,1))*cos(th1) ...
        -P(2)^2/(T(4,2)*T(3,2)*T(2,1))*cos(th2) ...
        +P(3)^2/(T(4,3)*T(3,2)*T(3,1))*cos(th3) ...
        -P(4)^2/(T(4,1)*T(4,3)*T(4,2))*cos(th4);
    c1 = P(1)^2/(th1*T(4,1)*T(3,1)*T(2,1))*sin(th1) ...
        -P(2)^2/(th2*T(4,2)*T(3,2)*T(2,1))*sin(th2) ...
        +P(3)^2/(th3*T(4,3)*T(3,2)*T(3,1))*sin(th3) ...
        -P(4)^2/(th4*T(4,1)*T(4,3)*T(4,2))*sin(th4);
    c2 = U(1)/(T(4,1)*T(3,1)*T(2,1))*cos(th1) ...
        -U(2)/(T(4,2)*T(3,2)*T(2,1))*cos(th2) ...
        +U(3)/(T(4,3)*T(3,2)*T(3,1))*cos(th3) ...
        -U(4)/(T(4,1)*T(4,3)*T(4,2))*cos(th4);
    c3 = U(1)/(th1*T(4,1)*T(3,1)*T(2,1))*sin(th1) ...
        -U(2)/(th2*T(4,2)*T(3,2)*T(2,1))*sin(th2) ...
        +U(3)/(th3*T(4,3)*T(3,2)*T(3,1))*sin(th3) ...
        -U(4)/(th4*T(4,1)*T(4,3)*T(4,2))*sin(th4);
    c4 = V(2,3,4)/(T(4,1)*T(3,1)*T(2,1))*cos(th1) ...
        -V(1,3,4)/(T(4,2)*T(3,2)*T(2,1))*cos(th2) ...
        +V(1,2,4)/(T(4,3)*T(3,2)*T(3,1))*cos(th3) ...
        -V(1,2,3)/(T(4,1)*T(4,3)*T(4,2))*cos(th4);
    c5 = V(2,3,4)/(th1*T(4,1)*T(3,1)*T(2,1))*sin(th1) ...
        -V(1,3,4)/(th2*T(4,2)*T(3,2)*T(2,1))*sin(th2) ...
        +V(1,2,4)/(th3*T(4,3)*T(3,2)*T(3,1))*sin(th3) ...
        -V(1,2,3)/(th4*T(4,1)*T(4,3)*T(4,2))*sin(th4);
    c6 = cos(th1)/(T(4,1)*T(3,1)*T(2,1)) ...
        -cos(th2)/(T(4,2)*T(3,2)*T(2,1)) ...
        +cos(th3)/(T(4,3)*T(3,2)*T(3,1)) ...
        -cos(th4)/(T(4,1)*T(4,3)*T(4,2));
    c7 = sin(th1)/(th1*T(4,1)*T(3,1)*T(2,1)) ...
        -sin(th2)/(th2*T(4,2)*T(3,2)*T(2,1)) ...
        +sin(th3)/(th3*T(4,3)*T(3,2)*T(3,1)) ...
        -sin(th4)/(th4*T(4,1)*T(4,3)*T(4,2));
    eS = c0*eye(8)+c1*S+c2*S^2+c3*S^3+c4*S^4+c5*S^5+c6*S^6+c7*S^7;
elseif n == 9
    a = coeff(3); b = coeff(5); c = coeff(7); d = coeff(9);
    p = -2*b^3+9*a*b*c+72*b*d-27*c^2-27*a^2*d; q = b^2-3*a*c+12*d;
    phi = acos(-0.5*p/q^1.5);
    y = (1/3)*(b+2*sqrt(q)*cos(phi/3));
    R = 0.5*sqrt(a^2-4*b+4*y);
    D = 0.5*sqrt(3*a^2-4*R^2-8*b+(4*a*b-8*c-a^3)/R);
    E = 0.5*sqrt(3*a^2-4*R^2-8*b-(4*a*b-8*c-a^3)/R);
    th1 = 0.5*sqrt(a-2*R-2*D); th2 = 0.5*sqrt(a-2*R+2*D);
    th3 = 0.5*sqrt(a+2*R-2*E); th4 = 0.5*sqrt(a+2*R+2*E);
    c1 = P(1)^3*T(3,2)*T(4,2)*T(4,3)*sin(th1) ...
        -P(2)^3*T(3,1)*T(4,1)*T(4,3)*sin(th2) ...
        +P(3)^3*T(2,1)*T(4,1)*T(4,2)*sin(th3) ...
        -P(4)^3*T(2,1)*T(3,1)*T(3,2)*sin(th4);
    c1 = c1/(th1*th2*th3*th4*T(4,1)*T(4,2)*T(4,3)*T(3,1)*T(3,2)*T(2,1));
    c2 = P(1)^4*T(3,2)*T(4,2)*T(4,3)*(1-cos(th1)) ...
        -P(2)^4*T(3,1)*T(4,1)*T(4,3)*(1-cos(th2)) ...
        +P(3)^4*T(2,1)*T(4,1)*T(4,2)*(1-cos(th3)) ...
        -P(4)^4*T(2,1)*T(3,1)*T(3,2)*(1-cos(th4));
    c2 = c2/(th1^2*th2^2*th3^2*th4^2 ...
        *T(4,1)*T(4,2)*T(4,3)*T(3,1)*T(3,2)*T(2,1));
    c3 = P(1)*U(1)*T(3,2)*T(4,2)*T(4,3)*sin(th1) ...
        -P(2)*U(2)*T(3,1)*T(4,1)*T(4,3)*sin(th2) ...
        +P(3)*U(3)*T(2,1)*T(4,1)*T(4,2)*sin(th3) ...
        -P(4)*U(4)*T(2,1)*T(3,1)*T(3,2)*sin(th4);
    c3 = c3/(th1*th2*th3*th4*T(4,1)*T(4,2)*T(4,3)*T(3,1)*T(3,2)*T(2,1));
    c4 = P(1)^2*U(1)*T(3,2)*T(4,2)*T(4,3)*(1-cos(th1)) ...
        -P(2)^2*U(2)*T(3,1)*T(4,1)*T(4,3)*(1-cos(th2)) ...
        +P(3)^2*U(3)*T(2,1)*T(4,1)*T(4,2)*(1-cos(th3)) ...
        -P(4)^2*U(4)*T(2,1)*T(3,1)*T(3,2)*(1-cos(th4));
    c4 = c4/(th1^2*th2^2*th3^2*th4^2 ...
        *T(4,1)*T(4,2)*T(4,3)*T(3,1)*T(3,2)*T(2,1));
    c5 = P(1)*V(2,3,4)*T(3,2)*T(4,2)*T(4,3)*sin(th1) ...
        -P(2)*V(1,3,4)*T(3,1)*T(4,1)*T(4,3)*sin(th2) ...
        +P(3)*V(1,2,4)*T(2,1)*T(4,1)*T(4,2)*sin(th3) ...
        -P(4)*V(1,2,3)*T(2,1)*T(3,1)*T(3,2)*sin(th4);
    c5 = c5/(th1*th2*th3*th4*T(4,1)*T(4,2)*T(4,3)*T(3,1)*T(3,2)*T(2,1));
    c6 = P(1)^2*V(2,3,4)*T(3,2)*T(4,2)*T(4,3)*(1-cos(th1)) ...
        -P(2)^2*V(1,3,4)*T(3,1)*T(4,1)*T(4,3)*(1-cos(th2)) ...
        +P(3)^2*V(1,2,4)*T(2,1)*T(4,1)*T(4,2)*(1-cos(th3)) ...
        -P(4)^2*V(1,2,3)*T(2,1)*T(3,1)*T(3,2)*(1-cos(th4));
    c6 = c6/(th1^2*th2^2*th3^2*th4^2 ...
        *T(4,1)*T(4,2)*T(4,3)*T(3,1)*T(3,2)*T(2,1));
    c7 = P(1)*T(3,2)*T(4,2)*T(4,3)*sin(th1) ...
        -P(2)*T(3,1)*T(4,1)*T(4,3)*sin(th2) ...
        +P(3)*T(2,1)*T(4,1)*T(4,2)*sin(th3) ...
        -P(4)*T(2,1)*T(3,1)*T(3,2)*sin(th4);
    c7 = c7/(th1*th2*th3*th4*T(4,1)*T(4,2)*T(4,3)*T(3,1)*T(3,2)*T(2,1));
    c8 = P(1)^2*T(3,2)*T(4,2)*T(4,3)*(1-cos(th1)) ...
        -P(2)^2*T(3,1)*T(4,1)*T(4,3)*(1-cos(th2)) ...
        +P(3)^2*T(2,1)*T(4,1)*T(4,2)*(1-cos(th3)) ...
        -P(4)^2*T(2,1)*T(3,1)*T(3,2)*(1-cos(th4));
    c8 = c8/(th1^2*th2^2*th3^2*th4^2 ...
        *T(4,1)*T(4,2)*T(4,3)*T(3,1)*T(3,2)*T(2,1));
    eS = eye(9) + c1*S + c2*S^2 + c3*S^3 + c4*S^4 + c5*S^5 + c6*S^6 ...
        + c7*S^7 + c8*S^8;
end
% Nested helper functions; they share th1,...,th4 with the parent workspace
% and are only called in the n = 8 and n = 9 branches.
function t = T(j,k)    % T(j,k) = theta_j^2 - theta_k^2
    th = [th1,th2,th3,th4];
    t = th(j)^2-th(k)^2;
end
function u = U(j)      % pairwise products of the squared thetas, omitting j
    th = [th1,th2,th3,th4];
    nth = th(th~=th(j));
    u = nth(1)^2*nth(2)^2+nth(1)^2*nth(3)^2+nth(2)^2*nth(3)^2;
end
function v = V(a,b,c)  % V(a,b,c) = theta_a^2 + theta_b^2 + theta_c^2
    th = [th1,th2,th3,th4];
    v = th(a)^2+th(b)^2+th(c)^2;
end
function p = P(j)      % product of the thetas, omitting j
    th = [th1,th2,th3,th4];
    nth = th(th~=th(j));
    p = nth(1)*nth(2)*nth(3);
end
end
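A typical call, together with the orthogonality check used in Chapter 5, looks as follows (illustrative usage; skewsymgenerator is listed in Section B.2 below):

% Example usage (illustrative only):
S  = skewsymgenerator(5, 1000);   % random 5x5 skew-symmetric test matrix
eS = skewexpm(S);
orthoErr = norm(eye(5) - eS'*eS)  % should be near machine precision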

B.2 The Skewsymgenerator Function

function S = skewsymgenerator(n,size)
% SKEWSYMGENERATOR(n,size) randomly generates an n-by-n skew-symmetric
% matrix with integer entries that range between -size and +size.
% See also randi.
N = n*(n-1)/2;                      % number of strictly upper entries
rng('shuffle');
r = randi([-size, size], 1, N);
S = zeros(n,n);
for rnum = 1:n
    rsub = 0;
    for jj = (n-1):-1:(n-rnum)      % offset of row rnum inside r
        rsub = rsub + jj;
    end
    S(rnum,(rnum+1):n) = r((rsub-jj+1):rsub);
end
S = S - S';                         % fill the lower triangle with -S'
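As an aside, the same family of test matrices can be generated more compactly with triu; the following hypothetical alternative (our sketch, not the code used for the experiments in Chapter 5) fills the strict upper triangle in one call:

% Hypothetical compact variant (illustrative only):
function S = skewsymgenerator2(n, sz)
U = triu(randi([-sz, sz], n), 1);   % random strictly upper-triangular part
S = U - U';                         % skew-symmetric by construction
end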