U.U.D.M. Project Report 2011:1

Generation of the classical groups SO(4) and SO(8) by means of unit quaternions and unit octonions

Karin Nilsson

Degree project in mathematics, 15 credits. Supervisor and examiner: Ernst Dieterich

January 2011

Department of Mathematics Uppsala University

Generation of the classical groups SO(4) and SO(8) by means of unit quaternions and unit octonions

Karin Nilsson January 10, 2011

Abstract The unit circle in the complex plane, viewed as a group, is well-known to be isomorphic to the matrix group SO(2). We explain how this isomorphism can be viewed as a special case of a more general context relating the unit sphere in a unital absolute valued algebra A to the group SO(A). In case A is the quaternion algebra, this yields an explicitly described set of matrices generating SO(4), such that every matrix in SO(4) has length at most two. In case A is the octonion algebra, it yields an explicitly described set of matrices generating SO(8), such that every matrix in SO(8) has length at most seven.

Acknowledgements I would like to thank my supervisor Ernst Dieterich for introducing me to the subject as well as helping me through it, dedicating a lot more time than I could ever ask for. I have learnt a lot and I am grateful for it. I would also like to thank my closest friends and family for always motivating me.

Contents

1 Introduction
2 A connection between O(n) and the Euclidean space E
3 A description of the group O(E)
4 The algebras R, C, H and O
4.1 The algebra R
4.2 The algebra C
4.3 The algebra H
4.4 The algebra O
4.4.1 Multiplication laws in O
4.5 Two important results
5 The mappings L and R
5.1 A description of S(A)
6 Properties of the mappings L and R
6.1 Case A = C
6.2 Case A = H
6.3 Case A = O
7 The case Im(A)
8 Description of O(A)
8.1 The case A = R
8.2 The case A = C
8.3 The case A = H
8.4 The case A = Im(H)
8.5 The case A = O
8.5.1 Isotopies and companions
8.6 The case A = Im(O)

1 Introduction

The goal of this thesis is to describe O(n) and SO(n) for n = 1, 2, 3, 4, 7 and 8 by using properties of the algebras R, C, H and O. The orthogonal group O(n) is the group of real n × n matrices A such that A⁻¹ = Aᵀ. As we will see, it is possible to view O(n) as the group of all length-preserving linear operators on R^n, i.e. the group O(E) for a Euclidean space E with dim(E) = n. The special orthogonal group SO(n) is the subgroup of O(n) of all n × n real matrices with determinant 1. We will see that SO(n) is isomorphic to the group of all length-preserving linear operators on R^n with determinant 1, i.e. the group SO(E) for an n-dimensional Euclidean vector space E. Soon we will notice that O(E) is generated by reflections, and as it turns out these reflections can, for all of the n above, be described solely using left multiplication by a unit, La : E → E (x ↦ ax where ‖a‖ = 1), right multiplication by a unit, Ra : E → E (x ↦ xa where ‖a‖ = 1), and the conjugation κ : E → E (x ↦ x̄ = 2⟨x, 1⟩1 − x). On the other hand, by a result of Erik Darpö and Ernst Dieterich [4], we know that La, Ra ∈ SO(n) for n ∈ {2, 4, 8}. We will arrive at a constructive description of O(E) and SO(E).

2 A connection between O(n) and the Euclidean space E

Let E be a Euclidean vector space, i.e. a finite-dimensional vector space endowed with a scalar product ⟨ , ⟩. We have the following definition.

Definition 2.1.
L(E) = {f : E → E | f is linear}
GL(E) = {f ∈ L(E) | f is invertible}
O(E) = {f ∈ L(E) | ‖f(x)‖ = ‖x‖ for all x ∈ E}
SO(E) = {f ∈ L(E) | ‖f(x)‖ = ‖x‖ for all x ∈ E and det(f) = 1}

It can be proved that GL(E), O(E) and SO(E) are groups under composition and that SO(E) < O(E) < GL(E).

Let V be a finite-dimensional vector space, let f ∈ L(V) and let e = (e1, ..., en) be a basis of V. Then we define the matrix [f]e = [[f(e1)]e ... [f(en)]e] ∈ R^{n×n} as the matrix representation of f with respect to e, where the k-th column is the coordinate column of f(ek) in the basis e. That is, if f(ek) = a1k e1 + ... + ank en, we define [f(ek)]e = (a1k, ..., ank)ᵀ, where aij ∈ R for all i, j ∈ {1, ..., n}.
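As an aside (not part of the original thesis), the definition above can be illustrated with a minimal Python/NumPy sketch: it computes [f]e column by column for a given linear operator and basis. The operator and basis used in the example are hypothetical choices of mine.

```python
import numpy as np

def matrix_of_operator(f, basis):
    """Return [f]_e: the k-th column holds the coordinates of f(e_k) in the basis e."""
    E = np.column_stack(basis)                                # columns are the basis vectors e_1, ..., e_n
    columns = [np.linalg.solve(E, f(e_k)) for e_k in basis]   # coordinates of f(e_k) in the basis e
    return np.column_stack(columns)

# Example: f is rotation by 90 degrees in R^2, expressed in a non-standard basis.
f = lambda x: np.array([-x[1], x[0]])
e = [np.array([1.0, 1.0]), np.array([0.0, 2.0])]
print(matrix_of_operator(f, e))
```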

Proposition 2.2. For V a Euclidean space and e = (e1, ..., en) an ON-basis of V, the mapping ϕ : L(V) → R^{n×n}, ϕ(f) = [f]e, is bijective.

Proof. For a chosen basis, an operator f ∈ L(V) is uniquely determined by f(e1), ..., f(en), and so [f]e is uniquely determined for each f ∈ L(V). Conversely, to every matrix A ∈ R^{n×n} corresponds a uniquely determined linear operator, namely the one mapping each ek to the vector whose coordinates in e are given by the k-th column of A. So ϕ is bijective.

Definition 2.3 (The determinant of a linear operator). Let a = (a1, ..., an) be a basis for a finite-dimensional vector space V, let f ∈ L(V) and define the determinant det(f) of f as det(f) := det([f]a).

The determinant is independent of the choice of basis. To see this, let a, b be two bases for V. Then there exists an invertible T ∈ R^{n×n} such that [f]a = T[f]bT⁻¹, and det([f]a) = det(T[f]bT⁻¹) = det(T)det([f]b)det(T⁻¹) = det(TT⁻¹)det([f]b) = det([f]b).
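A quick numerical sanity check of this basis-independence (my own illustration, with random hypothetical matrices standing in for [f]b and the change of basis T):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))        # [f]_b, the matrix of f in some basis b
T = rng.standard_normal((4, 4))        # change-of-basis matrix (invertible with probability 1)
A_new = T @ A @ np.linalg.inv(T)       # [f]_a = T [f]_b T^{-1}
print(np.isclose(np.linalg.det(A), np.linalg.det(A_new)))   # True
```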

The next proposition enables us to study O(V) and SO(V) instead of directly looking at O(n) and SO(n). But first we need a lemma.

Lemma 2.4. Let V be a Euclidean space, let f ∈ L(V), let e = (e1, ..., en) be an ON-basis for V and let A = ϕ(f) (where ϕ is the mapping defined in proposition 2.2). Then ⟨f(ei), f(ej)⟩ = (AᵀA)ij.

Proof. ⟨f(ei), f(ej)⟩ = ⟨Σμ Aμi eμ, Σν Aνj eν⟩ = Σμ,ν AμiAνj⟨eμ, eν⟩ = Σμ,ν AμiAνj δμν = Σμ AμiAμj = Σμ (Aᵀ)iμAμj = (AᵀA)ij.

Proposition 2.5. The mapping ϕ : L(V) → R^{n×n}, ϕ(f) = [f]e induces the following isomorphisms of groups:
(i) GL(V) ≅ GL(n)
(ii) O(V) ≅ O(n)
(iii) SO(V) ≅ SO(n)

Proof. We will prove that the intended mappings are well defined in both directions. Then, as ϕ is bijective, so are ϕ|GL(V) (for case (i)), ϕ|O(V) (for case (ii)) and ϕ|SO(V) (for case (iii)).
(i) (⇒) Let f ∈ GL(V). Then there exists a g ∈ GL(V) such that fg = gf = I. But [f]e[g]e = [fg]e = In = [gf]e = [g]e[f]e, i.e. [f]e ∈ GL(n).
(⇐) Let A ∈ GL(n). Then there exists a B ∈ GL(n) such that AB = BA = In. By proposition 2.2 there exist unique linear operators f, g, h1 and h2 such that [f]e = A, [g]e = B, [h1]e = AB and [h2]e = BA. Then fg = h1 = I = h2 = gf, and therefore f ∈ GL(V).

(ii) (⇒) Let f ∈ O(V) and let e = (e1, ..., en) be an ON-basis for V. Then ⟨f(ei), f(ej)⟩ = ⟨ei, ej⟩ = δij, where the first equality holds because f is orthogonal and the second because e is an ON-basis. Now by lemma 2.4, ⟨f(ei), f(ej)⟩ = ([f]eᵀ[f]e)ij, so that δij = ([f]eᵀ[f]e)ij, i.e. [f]eᵀ[f]e = In and so [f]e ∈ O(n).
(⇐) Let A ∈ O(n) and f ∈ L(V) such that A = ϕ(f). Then ⟨ei, ej⟩ = δij = (AᵀA)ij = ⟨f(ei), f(ej)⟩, where the first equality holds because e is an ON-basis, the second because A is orthogonal and the last comes from lemma 2.4. By bilinearity this means that ⟨f(x), f(y)⟩ = ⟨x, y⟩ for all x, y ∈ V, which is equivalent to ‖f(x)‖ = ‖x‖ for all x ∈ V, so that f ∈ O(V).

(iii) Above we defined det(f) := det([f]a) for any basis a. This means that det(f) = 1 if and only if det([f]e) = 1.

3 A description of the group O(E)

Let V be a Euclidean space.

Lemma 3.1. Let f ∈ O(V ). Then det(f) ∈ {1, −1}

Proof. As f ∈ O(V), we have that [f]a[f]aᵀ = In for an ON-basis a = (a1, ..., an) of V. Then 1 = det(In) = det([f]a[f]aᵀ) = det([f]a)det([f]aᵀ) = det([f]a)², i.e. det(f) = det([f]a) ∈ {1, −1}.

The following proposition describes all elements in O(n) and SO(n) in terms of reflections. So, by proposition 2.5, it is also a description of O(V) and SO(V).

Proposition 3.2. Every element f ∈ O(n) that pointwise fixes a k-dimensional subspace can be written as a product of at most n − k reflections. (Here, for 0 ≠ a ∈ R^n, sa denotes the reflection in the hyperplane orthogonal to a; an explicit formula for sa is recalled after corollary 3.4.)

Proof. By downward induction on the dimension k = n, ..., 0 of the subspace fixed by f. If k = n then f = 1_{R^n}, i.e. f is a product of n − k = 0 reflections. Let 0 ≤ k < n and assume that the proposition holds for all elements of O(n) which pointwise fix a subspace of dimension ≥ k + 1. Take f ∈ O(n) such that f(u) = u for all u ∈ U, where U is a k-dimensional subspace of R^n. If f = 1_{R^n} then the proposition holds, so assume f ≠ 1_{R^n}. Then there exists a v ∈ R^n such that f(v) = w ≠ v. Now we need to establish two facts.

The first fact is that s_{v−w}(w) = v. To see this, notice that

⟨v + w, v − w⟩ = ⟨v, v⟩ − ⟨v, w⟩ + ⟨w, v⟩ − ⟨w, w⟩ = ‖v‖² − ‖w‖² = 0,

since ‖w‖ = ‖f(v)‖ = ‖v‖. Hence v + w ⊥ v − w, so v + w is fixed by s_{v−w}. Now v = ½(v − w) + ½(v + w) and w = −½(v − w) + ½(v + w), so s_{v−w}(w) = −(−½)(v − w) + ½(v + w) = v.

The second fact is that if u ∈ U then s_{v−w}(u) = u. Note that

⟨u, v − w⟩ = ⟨u, v⟩ − ⟨u, w⟩ = ⟨f(u), f(v)⟩ − ⟨u, w⟩ = ⟨u, w⟩ − ⟨u, w⟩ = 0,

i.e. u ⊥ v − w, where we used that f is orthogonal and that f(u) = u, f(v) = w. This implies that u is fixed by s_{v−w}, and so s_{v−w}(u) = u for all u ∈ U.

By the two facts we have that s_{v−w}f(x) = x for all x ∈ U + Rv: indeed s_{v−w}f(u) = s_{v−w}(u) = u for u ∈ U and s_{v−w}f(v) = s_{v−w}(w) = v. Since f(v) ≠ v we have v ∉ U, so dim(U + Rv) = dim(U) + dim(Rv) = k + 1. By the induction hypothesis s_{v−w}f = s_l · ... · s_1, a product of l ≤ n − (k + 1) reflections. This implies that f = s_{v−w}s_l · ... · s_1, where the number of reflections is ≤ n − (k + 1) + 1 = n − k.

Lemma 3.3. Every element f ∈ O(n) is a product of at most n reflections.

Proof. Every f ∈ O(n) pointwise fixes the 0-dimensional subspace {0}, and so the lemma follows from proposition 3.2 with k = 0.

Corollary 3.4. The elements of SO(n) are generated by an even number of reflections.

Proof. Let f ∈ SO(n) and write f = s_l · ... · s_1 as a product of reflections (lemma 3.3). The determinant of a reflection is −1, so 1 = det(f) = det(s_l) · ... · det(s_1) = (−1)^l, which implies that l is even.

The reflection in the hyperplane orthogonal to Ra is described by sa : V → V, x ↦ x − 2(⟨x, a⟩/‖a‖²)a. So from the previous results we have that if f ∈ SO(V) then f = s_{a_{2k}} · ... · s_{a_1} where 2k ≤ dim(V). But how do we describe O⁻(V), the set of all f ∈ O(V) with det(f) = −1?
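For illustration (not part of the thesis), here is a small Python/NumPy sketch of the reflection formula above: it builds the matrix of sa and checks that it is orthogonal, has determinant −1, and sends a to −a. The vector a is an arbitrary hypothetical choice.

```python
import numpy as np

def reflection_matrix(a):
    """Matrix of s_a(x) = x - 2<x,a>/||a||^2 a, the reflection in the hyperplane orthogonal to a."""
    a = np.asarray(a, dtype=float)
    return np.eye(a.size) - 2.0 * np.outer(a, a) / (a @ a)

a = np.array([1.0, 2.0, 2.0])
S = reflection_matrix(a)
print(np.allclose(S @ S.T, np.eye(3)))      # orthogonal
print(np.isclose(np.linalg.det(S), -1.0))   # determinant -1
print(np.allclose(S @ a, -a))               # a is sent to -a, the hyperplane a^T is fixed
```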

Corollary 3.5. If n ∈ {2, 4, 8} then O⁻(n) = SO(n)κ.

Proof. For n ∈ {2, 4, 8} the conjugation κ(x) = 2⟨x, 1⟩1 − x fixes 1 and negates the (n − 1)-dimensional subspace 1⊥, so det(κ) = (−1)^{n−1} = −1. Let f ∈ O⁻(n). Then f = f(κκ) = (fκ)κ where det(fκ) = det(f)det(κ) = (−1)(−1) = 1, so fκ ∈ SO(n). Hence O⁻(n) ⊂ SO(n)κ. Conversely, let g ∈ SO(n). Then det(gκ) = det(g)det(κ) = 1 · (−1) = −1, so that gκ ∈ O⁻(n). Hence SO(n)κ ⊂ O⁻(n).

The preceding corollary tells us that in order to describe O(n) we can focus on describing SO(n).

4 The algebras R, C, H and O

Definition 4.1 (Algebra). An algebra A is a vector space A equipped with a bilinear multiplication A × A → A, (x, y) ↦ xy.

Definition 4.2 (Absolute valued algebra). An absolute valued algebra is a non-zero real algebra A with a norm ‖·‖ : A → R≥0 such that ‖xy‖ = ‖x‖‖y‖ for all x, y ∈ A.

Definition 4.3 (Division algebra). A division algebra is an algebra A such that the mappings La : A → A, x ↦ ax, and Ra : A → A, x ↦ xa, are bijective for all a ∈ A\{0}.

Proposition 4.4. If A is a finite dimensional absolute valued algebra then A is a division algebra.

Proof. Let a ∈ A\{0} and suppose that La(x1) = La(x2). Then La(x1 − x2) = 0, so 0 = ‖a(x1 − x2)‖ = ‖a‖‖x1 − x2‖, and since ‖a‖ ≠ 0 we get x1 = x2. Hence La is injective (and similarly Ra is injective). Now, because La and Ra are linear operators on a finite-dimensional vector space, injectivity implies that they are also surjective.

In the following sections we will show that R, C, H and O are finite-dimensional absolute valued algebras. They are then, by the preceding proposition, division algebras as well.

4.1 The algebra R

Together with the ordinary multiplication of real numbers, R is a one-dimensional real algebra.

Lemma 4.5. For all x, y ∈ R we have that ‖xy‖ = ‖x‖ · ‖y‖.

Proof. ‖xy‖ = |x| · ‖y‖ = |x| · |y| · ‖1‖ = |x| · |y| = ‖x‖ · ‖y‖.

4.2 The algebra C

A complex number is of the form a + bi, for a, b ∈ R, where we define multiplication so that i² = −1 and so that the distributive law holds. With the ordinary addition of complex numbers, C is a 2-dimensional Euclidean vector space over R. It is easy to verify, by distributivity, that the multiplication is bilinear, commutative and associative, so that C is a commutative and associative real algebra, endowed with the standard scalar product ⟨a + bi, c + di⟩ = ac + bd.

Lemma 4.6. For all x, y ∈ C we have that ‖xy‖ = ‖x‖ · ‖y‖.

Proof. Let x, y ∈ C. Then x = r1e^{iθ1} and y = r2e^{iθ2} for some r1, r2, θ1, θ2 ∈ R, so that ‖xy‖ = ‖r1e^{iθ1}r2e^{iθ2}‖ = |r1r2| · ‖e^{i(θ1+θ2)}‖ = |r1| · |r2| = |r1| · ‖e^{iθ1}‖ · |r2| · ‖e^{iθ2}‖ = ‖r1e^{iθ1}‖ · ‖r2e^{iθ2}‖ = ‖x‖ · ‖y‖.

4.3 The algebra H

The elements of H are of the form x0 + x1i + x2j + x3k (where the xi are real numbers). With addition defined componentwise, H is a real vector space with basis (1, i, j, k). The multiplication in H is determined by the distributive law together with the following multiplication of the basis elements.

· | 1  i  j  k
1 | 1  i  j  k
i | i  −1  k  −j
j | j  −k  −1  i
k | k  j  −i  −1

As ij = k ≠ −k = ji we conclude that the quaternions are not commutative. However, for example by tedious verification directly from the definition of the multiplication, one checks that the multiplication is associative. As bilinearity follows directly from the distributivity in H, we have that H is an associative algebra, endowed with the standard scalar product ⟨x0 + x1i + x2j + x3k, y0 + y1i + y2j + y3k⟩ = x0y0 + x1y1 + x2y2 + x3y3.
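As a small illustration (my own, not part of the thesis), the table above can be encoded directly in Python and used to multiply arbitrary quaternions given as coefficient vectors over (1, i, j, k). The sketch below also checks numerically that ij = k = −ji and that the norm is multiplicative (lemma 4.8 below).

```python
import numpy as np

# Basis order: (1, i, j, k). TABLE[p][q] = (sign, index) with e_p e_q = sign * e_index.
TABLE = [
    [(1, 0), (1, 1), (1, 2), (1, 3)],
    [(1, 1), (-1, 0), (1, 3), (-1, 2)],
    [(1, 2), (-1, 3), (-1, 0), (1, 1)],
    [(1, 3), (1, 2), (-1, 1), (-1, 0)],
]

def qmul(x, y):
    """Multiply quaternions given as length-4 coefficient arrays over (1, i, j, k)."""
    z = np.zeros(4)
    for p in range(4):
        for q in range(4):
            sign, idx = TABLE[p][q]
            z[idx] += sign * x[p] * y[q]
    return z

one, i, j, k = np.eye(4)
print(qmul(i, j), qmul(j, i))          # [0 0 0 1] and [0 0 0 -1]: ij = k, ji = -k
x, y = np.array([1.0, 2.0, 3.0, 4.0]), np.array([0.5, -1.0, 2.0, 0.0])
print(np.isclose(np.linalg.norm(qmul(x, y)), np.linalg.norm(x) * np.linalg.norm(y)))  # True
```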

Lemma 4.7. Let x, y ∈ H. Then \overline{xy} = ȳx̄.

Proof. After checking that it holds for all the basis elements in H, the lemma follows from the fact that the mapping (x, y) ↦ \overline{xy} − ȳx̄, where x, y ∈ H, is bilinear.

The following lemma follows from the preceding lemma and the fact that ⟨x, x⟩1H = x̄x for all x ∈ H (which is easily verified from the definition of quaternion multiplication).

Lemma 4.8. For all x, y ∈ H we have that ‖xy‖ = ‖x‖ · ‖y‖.

Proof. ‖xy‖²1H = ⟨xy, xy⟩1H = \overline{(xy)}(xy) = (ȳx̄)(xy) = ȳ(x̄x)y = ⟨x, x⟩ȳy = ⟨x, x⟩⟨y, y⟩1H = ‖x‖²‖y‖²1H.

4.4 The algebra O

An octonion is of the form x∞ + x0i0 + x1i1 + x2i2 + x3i3 + x4i4 + x5i5 + x6i6 (where the coefficients are real numbers). Defining addition componentwise, O is an 8-dimensional Euclidean vector space with the basis (1, i0, ..., i6). We define the multiplication in O by demanding the distributive law along with the following multiplication of basis elements:

in² = −1
in+1 in+2 = in+4 = −in+2 in+1
in+2 in+4 = in+1 = −in+4 in+2
in+4 in+1 = in+2 = −in+1 in+4

where the subscripts run modulo seven. Since i0i1 = i3 ≠ −i3 = i1i0, O is not commutative, and since (i0i1)i2 = i3i2 = −i5 ≠ i5 = i0i4 = i0(i1i2), O is not associative. That the multiplication in O is bilinear follows directly from the distributivity. To sum up, O is a non-associative, non-commutative 8-dimensional real algebra endowed with the standard scalar product ⟨x∞ + x0i0 + ... + x6i6, y∞ + y0i0 + ... + y6i6⟩ = x∞y∞ + x0y0 + ... + x6y6.

Lemma 4.9. For all x, y ∈ O we have that ‖xy‖ = ‖x‖ · ‖y‖.

Proof. Can be verified by direct multiplication.
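Such a direct verification is tedious by hand but easy to automate. The following Python sketch (mine, not part of the thesis) builds the multiplication of basis elements from the index rules above and checks non-commutativity, non-associativity and the multiplicativity of the norm on a couple of sample elements; the helper names omul and PROD are my own.

```python
import numpy as np

# Octonion basis order: (1, i0, i1, ..., i6) -> coefficient indices 0..7.
# Products of distinct imaginary units, generated from the three rules above
# (indices mod 7), extended by anticommutativity.
PROD = {}
for n in range(7):
    for a, b, c in [((n + 1) % 7, (n + 2) % 7, (n + 4) % 7),
                    ((n + 2) % 7, (n + 4) % 7, (n + 1) % 7),
                    ((n + 4) % 7, (n + 1) % 7, (n + 2) % 7)]:
        PROD[(a, b)] = (1, c)
        PROD[(b, a)] = (-1, c)

def omul(x, y):
    """Multiply octonions given as length-8 coefficient arrays over (1, i0, ..., i6)."""
    z = np.zeros(8)
    for p in range(8):
        for q in range(8):
            if x[p] == 0 or y[q] == 0:
                continue
            if p == 0:                       # 1 * i_q
                sign, idx = 1, q
            elif q == 0:                     # i_p * 1
                sign, idx = 1, p
            elif p == q:                     # i_n^2 = -1
                sign, idx = -1, 0
            else:                            # distinct imaginary units
                sign, idx = PROD[(p - 1, q - 1)]
                idx += 1
            z[idx] += sign * x[p] * y[q]
    return z

e = np.eye(8)                                # e[0] = 1, e[m] = i_{m-1}
i0, i1, i2 = e[1], e[2], e[3]
print(omul(i0, i1))                                                   # i3, while i1*i0 = -i3
print(np.allclose(omul(omul(i0, i1), i2), -omul(i0, omul(i1, i2))))   # (i0 i1) i2 = -i0 (i1 i2)
x, y = np.arange(1.0, 9.0), np.linspace(-1.0, 2.5, 8)
print(np.isclose(np.linalg.norm(omul(x, y)), np.linalg.norm(x) * np.linalg.norm(y)))  # lemma 4.9
```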

4.4.1 Multiplication laws in O

In this section we will derive some arithmetic rules for O that will be used in later sections.

Corollary 4.10. For all x, y ∈ O we have that ⟨xy, xy⟩ = ⟨x, x⟩⟨y, y⟩.

Proof. Using lemma 4.9 we derive the following: ⟨xy, xy⟩ = ‖xy‖² = (‖x‖ · ‖y‖)² = ‖x‖² · ‖y‖² = ⟨x, x⟩⟨y, y⟩.

Lemma 4.11 (The scaling laws). If x, y, z ∈ O then ⟨xy, xz⟩ = ⟨x, x⟩⟨y, z⟩ and ⟨xy, zy⟩ = ⟨x, z⟩⟨y, y⟩.

Proof. The first equality can be derived from corollary 4.10 by replacing y with y + z, expanding both sides, cancelling some terms and then dividing by 2. The second equality follows similarly from corollary 4.10 by replacing x with x + z.

Lemma 4.12 (The exchange law). ⟨xy, uz⟩ = 2⟨x, u⟩⟨y, z⟩ − ⟨xz, uy⟩ for all x, y, u, z ∈ O.

Proof. If we replace x in the first equality of lemma 4.11 with x + u and then expand both sides we have that

⟨xy, xz⟩ + ⟨xy, uz⟩ + ⟨uy, xz⟩ + ⟨uy, uz⟩ = (⟨x, x⟩ + 2⟨x, u⟩ + ⟨u, u⟩)⟨y, z⟩

By lemma 4.11 the right hand side equals

⟨xy, xz⟩ + 2⟨x, u⟩⟨y, z⟩ + ⟨uy, uz⟩,

so that we can cancel some terms and rearrange to arrive at the desired expression.

Lemma 4.13 (Braid laws). If x, y, z ∈ O then ⟨xy, z⟩ = ⟨y, x̄z⟩ and ⟨xy, z⟩ = ⟨x, zȳ⟩.

Proof. If we replace u in lemma 4.12 with the unit 1 ∈ O we get ⟨xy, z⟩ = 2⟨x, 1⟩⟨y, z⟩ − ⟨xz, y⟩ = ⟨y, 2⟨x, 1⟩z⟩ − ⟨y, xz⟩ = ⟨y, (2⟨x, 1⟩1 − x)z⟩ = ⟨y, x̄z⟩.

The second equality arises from replacing z with 1 and u with z in lemma 4.12 and then expanding the right hand side.

Lemma 4.14. x̄x = xx̄ = ⟨x, x⟩1 and ⟨x̄, x̄⟩ = ⟨x, x⟩ for all x ∈ O.

Proof. Let x, t ∈ O be arbitrary. Then, by lemmata 4.13 and 4.11, we have that

⟨x̄x, t⟩ = ⟨x, xt⟩ = ⟨x1, xt⟩ = ⟨x, x⟩⟨1, t⟩ = ⟨⟨x, x⟩1, t⟩
⟨xx̄, t⟩ = ⟨x, tx⟩ = ⟨1x, tx⟩ = ⟨1, t⟩⟨x, x⟩ = ⟨⟨x, x⟩1, t⟩

So x̄x = ⟨x, x⟩1 = xx̄. Finally, by lemma 4.13 and the first part, we have

⟨x̄, x̄⟩ = ⟨x̄1, x̄⟩ = ⟨1, xx̄⟩ = ⟨1, ⟨x, x⟩1⟩ = ⟨x, x⟩⟨1, 1⟩ = ⟨x, x⟩.

Lemma 4.15. For each x ∈ O\{0} the inverse is x⁻¹ = (1/‖x‖²)x̄.

Proof. By lemma 4.14 we have that xx̄ = x̄x = ⟨x, x⟩1, and because ⟨x, x⟩ = ‖x‖² this implies that x · (1/‖x‖²)x̄ = (1/‖x‖²)x̄ · x = 1.

Lemma 4.16 (Product conjugation). \overline{xy} = ȳx̄ for all x, y ∈ O.

Proof. The equality follows from applying lemma 4.13 a couple of times, as follows:
⟨ȳx̄, t⟩ = ⟨x̄, yt⟩ = ⟨x̄t̄, y⟩ = ⟨t̄, xy⟩ = ⟨t̄, (xy)1⟩ = ⟨\overline{xy}t̄, 1⟩ = ⟨\overline{xy}, t⟩ for all t ∈ O.

Lemma 4.17 (The Moufang laws). Let x, y, z ∈ O. Then (xy)(zx) = (x(yz))x = x((yz)x).

Proof.

⟨(xy)(zx), t⟩ = ⟨xy, t(x̄z̄)⟩
= 2⟨x, t⟩⟨y, x̄z̄⟩ − ⟨x(x̄z̄), ty⟩
= 2⟨x, t⟩⟨yz, x̄⟩ − ⟨x̄z̄, x̄(ty)⟩
= 2⟨yz, x̄⟩⟨x, t⟩ − ⟨x̄, x̄⟩⟨z̄ȳ, t⟩
= 2⟨x, \overline{yz}⟩⟨x, t⟩ − ⟨x, x⟩⟨\overline{yz}, t⟩
= ⟨2⟨x, \overline{yz}⟩x − ⟨x, x⟩\overline{yz}, t⟩ for all t ∈ O,

where we use the lemmata 4.13, 4.16, 4.12, 4.11 and 4.14 freely, together with the observation that ⟨u, v̄⟩ = ⟨ū, v⟩ for all u, v ∈ O (which follows directly from v̄ = 2⟨v, 1⟩1 − v). By this we have that (xy)(zx) = 2⟨x, \overline{yz}⟩x − ⟨x, x⟩\overline{yz} depends on x and yz only, which means that we can replace y and z with any two elements that have the same product, and so

(xy)(zx) = (x(yz))(1x) = (x(yz))x and (xy)(zx) = (x1)((yz)x) = x((yz)x).

Lemma 4.18. (yx)y = y(xy) for all x, y ∈ O.

Proof. Follows from replacing y in lemma 4.17 with 1 and renaming the variables: (x1)(zx) = (x(1z))x gives x(zx) = (xz)x for all x, z ∈ O, i.e. (yx)y = y(xy).

Lemma 4.19 (Inverse laws). x̄(xy) = ⟨x, x⟩y = (yx)x̄ for all x, y ∈ O.

Proof. ⟨x̄(xy), t⟩ = ⟨xy, xt⟩ = ⟨x, x⟩⟨y, t⟩ = ⟨⟨x, x⟩y, t⟩ holds for all t ∈ O, which proves the first equality; the second equality is proved in the same way.

4.5 Two important results

Notice that in the previous sections we have seen that R, C, H and O are all finite-dimensional real vector spaces, and that we have also defined a scalar product on each of them, so they are indeed Euclidean spaces. We have the following proposition.

Proposition 4.20. R, C, H and O are finite-dimensional absolute valued algebras with unit, and R, C and H are moreover associative.

Proof. Let A ∈ {R, C, H, O}. Then A is a real n-dimensional vector space where n ∈ {1, 2, 4, 8}, and as an immediate consequence of the distributive law the corresponding multiplications are bilinear, which proves that A is a real algebra. In the previous sections we also concluded that, among these algebras, only O is non-associative. With this, the proposition follows from the lemmas 4.5, 4.6, 4.8 and 4.9, which verify that the norm is multiplicative for each algebra.

The next proposition is a classical result due to Albert. See [3] for a proof.

Proposition 4.21.
(i) If A is an absolute valued algebra with a unit, then A is finite-dimensional and isomorphic to R, C, H or O.
(ii) If A moreover is associative, then A is isomorphic to R, C or H.

5 The mappings L and R

Let A be an absolute valued algebra with unit. Then we have, from proposition 4.21, that A ∈ {R, C, H, O} (up to isomorphism).

Lemma 5.1. Let A be a finite-dimensional absolute valued algebra and let a ∈ S(A), i.e. ‖a‖ = 1. Then La, Ra ∈ O(A).

Proof. As A is an algebra, the multiplication is bilinear, and so La and Ra are linear mappings. Furthermore, because A is a finite-dimensional absolute valued algebra, proposition 4.4 implies that A is a division algebra and so La and Ra are bijective and consequently invertible. Finally, as ‖a‖ = 1 we have that

‖La(x)‖ = ‖ax‖ = ‖a‖‖x‖ = ‖x‖ = ‖x‖‖a‖ = ‖xa‖ = ‖Ra(x)‖,

so that La and Ra are orthogonal.

The next proposition comes from [4].

Proposition 5.2. Let A be a division algebra over R with 1 < dim(A) < ∞ and let a, b ∈ A\{0}. Then det(La) has the same sign as det(Lb), and det(Ra) has the same sign as det(Rb).

Corollary 5.3. Let A be a finite-dimensional absolute valued algebra with unity such that 1 < dim(A), and let a ∈ S(A). Then both La and Ra belong to SO(A).

Proof. First recall from lemma 5.1 that under the stated circumstances we have that La, Ra ∈ O(A). To complete the proof we need to show that det(La) = det(Ra) = 1. Because La, Ra ∈ O(A) we have, from lemma 3.1, that det(Ra), det(La) ∈ {1, −1}. Now 1A ∈ S(A), so, according to proposition 5.2, det(La) has the same sign as det(L_{1A}) = det(idA) = 1 and det(Ra) has the same sign as det(R_{1A}) = 1. This means that det(Ra) = det(La) = 1 and so La, Ra ∈ SO(A).

Corollary 5.3 implies that the following mappings are well defined for A ∈ {C, H, O}:

L : S(A) → SO(A), a ↦ La
R : S(A) → SO(A), a ↦ Ra

In particular im(L), im(R) ⊂ SO(A) for A ∈ {C, H, O}.

5.1 A description of S(A)

Let S(E) denote the set of all elements with norm 1 in a Euclidean space E, and let A ∈ {C, H, O}. In this section we will look closer at S(A). With the information obtained we will be able to describe the mappings L and R defined in section 5 (see the next section).

Lemma 5.4. S(A) is closed under the multiplication induced by A.

Proof. Let x, y ∈ S(A). Then we have that ‖xy‖ = ‖x‖‖y‖ = 1 · 1 = 1, so that xy ∈ S(A).

Lemma 5.5. S(A) has a unity element.

Proof. Let 1A be the unit in A. Now we have that ‖1A‖ = ‖1A1A‖ = ‖1A‖‖1A‖. Now 1A ≠ 0 so we can divide by ‖1A‖ and arrive at the conclusion that ‖1A‖ = 1, so that 1A ∈ S(A). Now x1A = x = 1Ax holds for all x ∈ A, and because S(A) ⊂ A it holds as well for all x ∈ S(A).

Lemma 5.6. If x ∈ S(A), then x⁻¹ = x̄ ∈ S(A).

Proof. A has a standard basis 1, i0, ..., il where l = 0 when A = C, l = 2 when A = H and l = 6 when A = O. Each x ∈ A can then be represented as a linear combination x = α1 + β0i0 + ... + βlil = α1 + v with α, β0, ..., βl ∈ R. Then we have that

v² = (β0i0 + ... + βlil)²
= Σ_{n=0}^{l} βn²in² + Σ_{n<m} βnβm(inim + imin)
= −Σ_{n=0}^{l} βn² · 1 + 0
= −‖v‖²1

where we used that iλiµ + iµiλ = 0 for λ ≠ µ (which follows from \overline{xy} = ȳx̄, since iλiµ is purely imaginary when λ ≠ µ) and that iλ² = −1 for all λ.

Now, if we let x ∈ S(A):

xx̄ = (α1 + v)(α1 − v) = α²1 − αv + αv − v² = α²1 + ‖v‖²1 = (α² + ‖v‖²)1 = ‖x‖²1 = 1
x̄x = (α1 − v)(α1 + v) = α²1 + αv − αv − v² = (α² + ‖v‖²)1 = ‖x‖²1 = 1

So x⁻¹ = x̄. To complete the proof we need to show that x̄ ∈ S(A) for all x ∈ S(A):

‖x̄‖² = α² + (−β0)² + ... + (−βl)² = α² + β0² + ... + βl² = ‖x‖² = 1

So, by lemma 5.6, each x ∈ S(A) has an inverse in S(A). By lemmata 5.4, 5.5 and 5.6 we know that S(A) is closed under multiplication as well as under taking inverses, and that it has a unit. So S(A) is a group if the multiplication is associative. As mentioned earlier, C and H are associative, but since for example (i0i1)i2 = i3i2 = −i5 ≠ i5 = i0i4 = i0(i1i2), O is not associative.

Definition 5.7 (Inverse loop). An inverse loop is a set L together with a multiplication L × L → L, (x, y) ↦ xy, such that:
(L1) There is an element 1 ∈ L such that 1x = x = x1 for all x ∈ L.
(L2) For all x ∈ L there is an element x⁻¹ ∈ L such that xx⁻¹ = 1 = x⁻¹x.

(L3) For all elements x, y ∈ L we have that x⁻¹(xy) = (x⁻¹x)y and (yx)x⁻¹ = y(xx⁻¹).

Corollary 5.8. The unit in an inverse loop is unique.

Proof. Let L be an inverse loop and let 1L and eL be two units in L. Then 1L = 1LeL = eL.

Corollary 5.9. The inverse of any element x in an inverse loop is unique.

Proof. Let L be an inverse loop and let y and z be two inverses of x ∈ L. Then y = y1 = y(xz) = (yx)z = 1z = z, where the third equality follows from (L3) in the definition of an inverse loop, applied with the inverse z of x.

Lemma 5.10. O\{0} is an inverse loop.

Proof. We need to verify (L1), (L2) and (L3).

(L1) Follows from the definition of octonion multiplication, i.e. we have an element 1O ∈ O such that 1Oin = in1O = in for each basis element in ∈ O. That 1O is a unit for an arbitrary element x ∈ O follows from the distributivity of octonion multiplication.
(L2) Let x ∈ O\{0}. By lemma 4.15 we know that x⁻¹ = (1/‖x‖²)x̄, which belongs to O\{0}.
(L3) Let x ∈ O\{0}. Then, as x̄(xy) = ⟨x, x⟩y = (yx)x̄ by lemma 4.19, we have that
x⁻¹(xy) = (1/‖x‖²)x̄(xy) = (1/‖x‖²)⟨x, x⟩y = y = (x⁻¹x)y
(yx)x⁻¹ = (yx)x̄(1/‖x‖²) = (1/‖x‖²)⟨x, x⟩y = y = y(xx⁻¹)

Proposition 5.11.
(i) S(C) is an abelian group.
(ii) S(H) is a non-abelian group.
(iii) S(O) is an inverse loop, but not a group.

Proof. Let A ∈ {C, H, O}. By lemmata 5.4, 5.5 and 5.6 we know that S(A) is closed under multiplication as well as under taking inverses, and that it has a unit.
(i) C is commutative and associative, which means that S(C) is as well, and so S(C) is an abelian group.
(ii) H is associative, so S(H) is also associative. However ij = k ≠ −k = ji, so S(H) is not commutative, which means that S(H) is a group, but not abelian.
(iii) As the multiplication is not associative on S(O) (for example (i0i1)i2 ≠ i0(i1i2) with i0, i1, i2 ∈ S(O)), S(O) cannot be a group. By lemmata 5.5 and 5.6 we have that (L1) and (L2) are satisfied. Finally, (L3) follows from lemmata 5.6, 4.19 and 4.14 as follows: for x ∈ S(O),
x⁻¹(xy) = x̄(xy) = ⟨x, x⟩y = (x̄x)y = (x⁻¹x)y
(yx)x⁻¹ = (yx)x̄ = ⟨x, x⟩y = y(xx̄) = y(xx⁻¹)

6 Properties of the mappings L and R

Recall that L and R are well defined mappings. In this section we will look closer at L and R when A ∈ {C, H, O}.

Lemma 6.1. Both L and R are injective.

Proof. Let a, b ∈ S(A) with L(a) = L(b), i.e. La = Lb. Then a = a1 = La(1) = Lb(1) = b1 = b. Similarly, if R(a) = R(b) then a = 1a = Ra(1) = Rb(1) = 1b = b. So a = b in both cases.

6.1 Case A = C

Lemma 6.2. Denote by ρα the rotation in R² about the origin with the angle α. Then SO(C) = {ρα | 0 ≤ α < 2π}.

Proof. First note that all rotations in R² are length-preserving (‖ρα(x)‖ = ‖x‖ for all x ∈ C) and hence orthogonal with determinant 1, so that {ρα | 0 ≤ α < 2π} ⊂ SO(C).
Let f ∈ SO(C). Then ‖f(x)‖ = ‖x‖ for all x ∈ C, so in particular ‖f(1C)‖ = ‖1C‖ = 1, i.e. f(1C) = a ∈ S(C). Write a = cos θ + i sin θ with 0 ≤ θ < 2π. Furthermore, as ⟨f(x), f(y)⟩ = ⟨x, y⟩ for all x, y ∈ C, we have that ⟨a, f(i)⟩ = ⟨f(1), f(i)⟩ = ⟨1, i⟩ = 0, i.e. f(i) ⊥ a. Since ‖f(i)‖ = 1, this leaves the two possibilities f(i) = f1(i) = ia and f(i) = f2(i) = −ia. Expressing f1 and f2 as matrices in the basis e = (1, i) we get

[f1]e = A1(θ) = [ cos θ  −sin θ ; sin θ  cos θ ]  and  [f2]e = A2(θ) = [ cos θ  sin θ ; sin θ  −cos θ ].

From this we have that det(f1) = det(A1(θ)) = cos²(θ) + sin²(θ) = 1 and det(f2) = det(A2(θ)) = −cos²(θ) − sin²(θ) = −1, so that only f1 ∈ SO(C). Finally f = f1 = ρθ describes a rotation in R², so that SO(C) ⊂ {ρα | 0 ≤ α < 2π}.

Proposition 6.3. If A = C, then L = R is a group isomorphism.

Proof. By earlier results we know that L and R are well defined injective mappings. We need to show that they are homomorphisms, that they are surjective, and that L = R. Let x ∈ C and let a, b ∈ S(C). Then L(ab) = Lab, R(ab) = Rab, and

Lab(x) = (ab)x = a(bx) = LaLb(x)
Rab(x) = x(ab) = x(ba) = (xb)a = RaRb(x)

This implies that L and R are homomorphisms. Now La(x) = ax = xa = Ra(x) for all a ∈ S(C) and all x ∈ C, so L = R. In order to prove that L and R are surjective, notice that the rotation ρα equals La for a = cos(α) + i sin(α) ∈ S(C), so the surjectivity follows from lemma 6.2.
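To make the identification concrete, here is a small Python sketch (my own illustration) that computes the matrix of La : C → C in the real basis (1, i) for a = cos α + i sin α and confirms that it equals the rotation matrix of ρα and has determinant 1.

```python
import numpy as np

def left_mult_matrix(a):
    """Matrix of L_a : C -> C, x -> a*x, in the real basis (1, i)."""
    return np.column_stack([
        [(a * 1).real, (a * 1).imag],    # image of 1
        [(a * 1j).real, (a * 1j).imag],  # image of i
    ])

alpha = 0.7
a = complex(np.cos(alpha), np.sin(alpha))
La = left_mult_matrix(a)
rot = np.array([[np.cos(alpha), -np.sin(alpha)],
                [np.sin(alpha),  np.cos(alpha)]])
print(np.allclose(La, rot), np.isclose(np.linalg.det(La), 1.0))   # True True
```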

6.2 Case A = H

Definition 6.4 (Antihomomorphism of groups). Let (G, ◦) and (H, ∗) be two groups, where '◦' and '∗' denote the multiplications in G and H respectively. Then a mapping ϕ : G → H is called an antihomomorphism if

ϕ(x ◦ y) = ϕ(y) ∗ ϕ(x) for all x, y ∈ G.

Proposition 6.5. If A = H, then L is an injective homomorphism of groups and R is an injective antihomomorphism of groups.

Proof. We already know that L and R are well defined injective mappings of groups. Now we need to prove that L is a homomorphism and that R is an antihomomorphism. Let x ∈ H and let a, b ∈ S(H). Then L(ab) = Lab and R(ab) = Rab.

Lab(x) = (ab)x = a(bx) = LaLb(x) Rab(x) = x(ab) = x(ba) = RbRa(x)

This implies that L is a homomorphism and that R is an antihomomorphism.

As the image of a group under a group homomorphism (or antihomomorphism) is a group, we conclude the following.

Corollary 6.6. If A = H then im(L) and im(R) are subgroups of O(H).

6.3 Case A = O

As O is not associative we cannot conclude that L and R are homomorphisms.

Proposition 6.7. If A = O and a, b ∈ S(O) with b = ā, then LaLb = Lab and RbRa = Rab.

Proof. Let a ∈ S(O) and x ∈ O. Then the proposition follows from lemmata 4.19, 4.14 and 5.6:

LaLā(x) = Laā(x), since a(āx) = ⟨ā, ā⟩x = ⟨a, a⟩x = (aā)x,
RāRa(x) = Raā(x), since (xa)ā = ⟨a, a⟩x = x(aā).

7 The case Im(A)

Let A ∈ {R, C, H, O} and define the set Im(A) = {x ∈ A : x² ∈ R}\(R\{0}) = 1A⊥ of purely imaginary elements of A. We let (i0, i1, ..., in), with i0 = 1A, denote the standard basis of A, so that (i1, ..., in) is a basis of Im(A). Let O1(A) denote all f ∈ O(A) such that f(1A) = 1A and consider the mapping ψ : O1(A) → L(Im(A)), ψ(f) = fι,

where ψ assigns to each f ∈ O1(A) the linear operator ψ(f) = fι on Im(A) determined by fι(i1) = f(i1), ..., fι(in) = f(in).

Im(A) is a subspace of A and consequently, together with the scalar product inherited from A, Im(A) is also a Euclidean vector space. This means that we can conclude the following:

Lemma 7.1. Let A ∈ {R, C, H, O}. Then O(Im(A)) is a group.

We will also be able to conclude that if dim(A) = n + 1 then O(Im(A)) ≅ O(n). So we can describe O(3) in terms of O(Im(H)) and O(7) in terms of O(Im(O)).

Lemma 7.2. O1(A) is a subgroup of O(A).

Proof. By definition of O1(A), O1(A) ⊆ O(A). The identity operator fixes 1A, so it belongs to O1(A), and if f, g ∈ O1(A) then fg(1A) = f(1A) = 1A, so O1(A) is closed under composition. Now let f ∈ O1(A) and let f⁻¹ be the inverse of f. Then 1A = f⁻¹ ∘ f(1A) = f⁻¹(1A), i.e. f⁻¹ ∈ O1(A).

Lemma 7.3. ψ(f) ∈ O(Im(A)) for all f ∈ O1(A).

Proof. Let ψ(f) = fι for some f ∈ O1(A). We need to show that fι is invertible and that ‖fι(x)‖ = ‖x‖ for all x ∈ Im(A). First note that f is invertible, as it belongs to O1(A) ⊆ O(A). Let g ∈ O1(A) be the inverse of f (lemma 7.2) and let ψ(g) = gι. We have that:

fι ◦ gι(x1i1 + ... + xnin) = x1fι ◦ gι(i1) + ... + xnfι ◦ gι(in) = x1f ◦ g(i1) + ... + xnf ◦ g(in) = x1i1 + ... + xnin

So that gι is the inverse of fι. Furthermore, as f is orthogonal we can derive the following:

‖fι(x1i1 + ... + xnin)‖ = ‖x1f(i1) + ... + xnf(in)‖ = ‖f(x1i1 + ... + xnin)‖ = ‖x1i1 + ... + xnin‖

By the preceding lemma we have that

ψ : O1(A) → O(Im(A)).

Note that if x ∈ 1A⊥ then ⟨f(x), 1A⟩ = ⟨f(x), f(1A)⟩ = ⟨x, 1A⟩ = 0, so that f(x) ∈ 1A⊥. This means that each f ∈ O1(A) restricts to a well-defined operator on Im(A). By the next proposition we can say even more.

Proposition 7.4. ψ : O1(A) → O(Im(A)) is a bijective homomorphism of groups.

Proof. We already know, by lemmata 7.1, 7.2 and 7.3, that ψ is a well-defined mapping between groups, so we need to show that ψ is bijective and that ψ is a homomorphism (i.e. that ψ(fg) = ψ(f)ψ(g) for all f, g ∈ O1(A)).
Injectivity: Let f, g ∈ O1(A) be such that ψ(f) = fι ≠ gι = ψ(g), i.e. for some x ∈ Im(A) we have that fι(x) ≠ gι(x). This implies that:

g(x) = g(0i0 + x1i1 + ... + xnin) = x1g(i1) + ... + xng(in)
= x1gι(i1) + ... + xngι(in) = gι(x1i1 + ... + xnin)
≠ fι(x1i1 + ... + xnin) = x1fι(i1) + ... + xnfι(in)
= x1f(i1) + ... + xnf(in) = f(0i0 + x1i1 + ... + xnin) = f(x)

I.e. g(x) ≠ f(x) for some x ∈ 1A⊥ ⊂ A, so that g ≠ f, which means that ψ is injective.

Surjectivity: Let f ∈ O(Im(A)) and let g ∈ L(A) be such that g(1A) = 1A, g(i1) = f(i1), ..., g(in) = f(in). We need to prove that g ∈ O1(A), i.e. that g is orthogonal and invertible. Let x ∈ A. Then there are a unique α ∈ R and a unique x0 ∈ Im(A) such that x = α1A + x0. Notice that g(x) = g(α1A + x0) = αg(1A) + g(x0) = α1A + f(x0). We will prove that ‖g(x)‖ = ‖x‖ using the fact that 1A is orthogonal to both x0 and f(x0):

‖g(x)‖² = ‖α1A + f(x0)‖² = ⟨α1A + f(x0), α1A + f(x0)⟩ = ⟨α1A, α1A⟩ + 2⟨α1A, f(x0)⟩ + ⟨f(x0), f(x0)⟩ = ⟨α1A, α1A⟩ + ⟨x0, x0⟩ = ⟨α1A, α1A⟩ + 2⟨α1A, x0⟩ + ⟨x0, x0⟩ = ⟨α1A + x0, α1A + x0⟩ = ‖α1A + x0‖² = ‖x‖²

So g is orthogonal. Now let h ∈ L(A) be such that h(1A) = 1A, h(i1) = f⁻¹(i1), ..., h(in) = f⁻¹(in). Then h is an inverse of g, so g ∈ O1(A) and ψ(g) = f.

Homomorphism: Let f, g ∈ O1(A), ψ(f) = fι and ψ(g) = gι. We need to show that ψ(f) ∘ ψ(g) = ψ(f ∘ g). First notice that if fι ∘ gι(ij) = f ∘ g(ij) for all ij ∈ {i1, ..., in}, then it follows by linearity that fι ∘ gι(x) = f ∘ g(x) = (f ∘ g)ι(x) for all x ∈ Im(A). Let ij be any basis vector in Im(A) and let g(ij) = a1ji1 + ... + anjin for some a1j, ..., anj ∈ R. Then

fι ◦ gι(ij) = fι(g(ij)) = fι(a1ji1 + ... + anjin) = a1jfι(i1) + ... + anjfι(in) = a1jf(i1) + ... + anjf(in) = f(a1ji1 + ... + anjin) = f(g(ij)) = f ◦ g(ij)

Let SO1(A) denote all f ∈ SO(A) such that f(1A) = 1A and let O1⁻(A) = {f ∈ O⁻(A) | f(1A) = 1A}. In the next proposition we will arrive at a description of SO(Im(A)).

Proposition 7.5. SO1(A) ≅ SO(Im(A))

Proof. We will prove that ψ0 := ψ|SO1(A) is an isomorphism between SO1(A) and SO(Im(A)), i.e. that ψ0 : SO1(A) → SO(Im(A)) is a well defined bijective homomorphism.
We will prove that ψ0 and (ψ0)⁻¹ are well defined. Then, as SO(Im(A)) and SO1(A) are both closed under composition and as ψ is a homomorphism, ψ0 will be a homomorphism as well. Furthermore, as ψ is a bijection, ψ0 will be a bijection as well.
ψ0 well defined: Let f ∈ SO1(A) and let e = (1A, i1, ..., in) be the standard ON-basis of A. Since f(1A) = 1A and f preserves 1A⊥, the matrix of f has the block form

[f]e = [ 1  0 ; 0  B ],  where B is the matrix of ψ(f) in the basis (i1, ..., in).

As 1 = det(f) = 1 · det(B) = det(B) we have that ψ0(f) ∈ SO(Im(A)).
(ψ0)⁻¹ well defined: Let f ∈ SO(Im(A)), i.e. det([f]) = 1 where [f] is the matrix of f in the basis (i1, ..., in). As ψ is surjective, we know that there is g ∈ O1(A) such that ψ(g) = f. But then

[g]e = [ 1  0 ; 0  [f] ]

and so det(g) = 1 · det(f) = 1, so that (ψ0)⁻¹(f) = g ∈ SO1(A).

This means that SO(Im(A)) can be described in terms of SO(A) when A ∈ {R, C, H, O}.

8 Description of O(A)

In this section we will describe O(A) for A ∈ {R, C, Im(H), H, Im(O), O} in terms of the functions La, Ra and κ, where a denotes a unit element.

8.1 The case A = R

Proposition 8.1. Let 1 be the unity in R.
(i) SO(R) = {L1 = R1}
(ii) O⁻(R) = {L−1 = R−1}

Proof. O(1) = {A ∈ R^{1×1} | AAᵀ = I1} = {[a] | a ∈ R and a² = 1} = {[a] | a ∈ S(R)} = {[1], [−1]}, so that O(R) = {L1 = R1, L−1 = R−1}. Furthermore we know that SO(R) < O(R) consists of the elements with determinant 1. So we have that SO(R) = {L1 = R1} and consequently O⁻(R) = {L−1 = R−1}.

8.2 The case A = C

Proposition 8.2.
(i) SO(C) = {La = Ra | a ∈ S(C)}
(ii) O⁻(C) = {Laκ = Raκ | a ∈ S(C)}

Proof. (i) By proposition 6.3 we have isomorphisms L : S(C) → SO(C) and R : S(C) → SO(C). This means that {La | a ∈ S(C)} = im(L) = SO(C) = im(R) = {Ra | a ∈ S(C)}. Furthermore, as C is commutative, we have that La = Ra for all a ∈ C.
(ii) By corollary 3.5 we have that O⁻(C) = SO(C)κ, so the statement follows from (i).

8.3 The case A = H

By proposition 3.2 and corollary 3.4 we can describe every element of SO(H) as a product of an even number of reflections in H. We will use this fact to describe SO(H), but first we need to describe an arbitrary reflection in H.

Proposition 8.3. If sa : H → H is the reflection in the hyperplane orthogonal to a ∈ S(H), then sa(x) = −ax̄a for all x ∈ H.

Proof. Consider the map x ↦ −ax̄a and let u ∈ {1, i, j, k}. Then ua ↦ −a\overline{(ua)}a = −a(āū)a = −(aā)(ūa) = −1H(ūa) = −ūa (where we use that aā = 1H by lemma 5.6, that \overline{ua} = āū by lemma 4.7, and the associativity of H). Hence a ↦ −a, ia ↦ ia, ja ↦ ja and ka ↦ ka. Since (a, ia, ja, ka) is an ON-basis of H with ia, ja, ka ⊥ a (by the scaling laws), the map fixes the hyperplane orthogonal to a pointwise and sends a to −a, which is what we wanted.
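Here is a quick numerical check of the formula sa(x) = −ax̄a (my own sketch; the qmul helper encodes the quaternion multiplication table from section 4.3, and the random test data is hypothetical):

```python
import numpy as np

TABLE = [[(1, 0), (1, 1), (1, 2), (1, 3)], [(1, 1), (-1, 0), (1, 3), (-1, 2)],
         [(1, 2), (-1, 3), (-1, 0), (1, 1)], [(1, 3), (1, 2), (-1, 1), (-1, 0)]]

def qmul(x, y):
    """Quaternion product in coefficients over the basis (1, i, j, k)."""
    z = np.zeros(4)
    for p in range(4):
        for q in range(4):
            s, idx = TABLE[p][q]
            z[idx] += s * x[p] * y[q]
    return z

def conj(x):                         # x0 + x1 i + x2 j + x3 k -> x0 - x1 i - x2 j - x3 k
    return x * np.array([1.0, -1.0, -1.0, -1.0])

rng = np.random.default_rng(1)
a = rng.standard_normal(4); a /= np.linalg.norm(a)     # a in S(H)
x = rng.standard_normal(4)

lhs = -qmul(qmul(a, conj(x)), a)                       # -a x̄ a (H is associative)
rhs = x - 2 * (x @ a) * a                              # s_a(x) = x - 2<x,a> a, since ||a|| = 1
print(np.allclose(lhs, rhs))                           # True
```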

Lemma 8.4 (Cayley).

(i) Every f ∈ SO(H) satisfies f = LaRb = RbLa with a, b ∈ S(H).
(ii) If f ∈ O⁻(H) then f = LaRbκ = RbLaκ with a, b ∈ S(H).

Proof.

(i) From proposition 3.2 and corollary 3.4 we have that f ∈ SO(H) is a product of 0, 2 or 4 reflections; padding with a pair sasa = id if necessary we may write f = sa4 · sa3 · sa2 · sa1 with a1, ..., a4 ∈ S(H). By proposition 8.3,

f(x) = −a4 \overline{(−a3 \overline{(−a2 \overline{(−a1 x̄ a1)} a2)} a3)} a4.

Now, because H is associative and \overline{uv} = v̄ū, \bar{x̄} = x for all u, v, x ∈ H, this simplifies to f(x) = (a4ā3a2ā1) x (ā1a2ā3a4). Let a = a4ā3a2ā1 and b = ā1a2ā3a4. By lemma 5.6 we know that ā1, ā3 ∈ S(H) so, since (by lemma 5.4) S(H) is closed under multiplication, we have that a, b ∈ S(H) and f = LaRb = RbLa.
(ii) Follows directly from (i) and corollary 3.5.

Proposition 8.5.

(i) SO(H) = {LaRb = RbLa | a, b ∈ S(H)}
(ii) O⁻(H) = {LaRbκ = RbLaκ | a, b ∈ S(H)}

Proof.

(i) By lemma 8.4 we have that SO(H) ⊂ {LaRb = RbLa | a, b ∈ S(H)}. Now, as SO(H) is closed under multiplication and as La,Ra ∈ SO(H) for all a ∈ S(H), we know that {LaRb = RbLa | a, b ∈ S(H)} ⊂ SO(H). − (ii) By corollary 3.5 we have that O (H) = SO(H)κ, so the statement follows from (i).
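As a numerical illustration of proposition 8.5(i) (not part of the thesis), the sketch below builds the 4×4 matrix of x ↦ axb = LaRb(x) for randomly chosen unit quaternions a, b and checks that it lies in SO(4); the helper names are my own.

```python
import numpy as np

TABLE = [[(1, 0), (1, 1), (1, 2), (1, 3)], [(1, 1), (-1, 0), (1, 3), (-1, 2)],
         [(1, 2), (-1, 3), (-1, 0), (1, 1)], [(1, 3), (1, 2), (-1, 1), (-1, 0)]]

def qmul(x, y):
    """Quaternion product in coefficients over the basis (1, i, j, k)."""
    z = np.zeros(4)
    for p in range(4):
        for q in range(4):
            s, idx = TABLE[p][q]
            z[idx] += s * x[p] * y[q]
    return z

def LR_matrix(a, b):
    """4x4 matrix of the map x -> a x b = L_a R_b (x) in the basis (1, i, j, k)."""
    return np.column_stack([qmul(qmul(a, e), b) for e in np.eye(4)])

rng = np.random.default_rng(2)
a = rng.standard_normal(4); a /= np.linalg.norm(a)     # a in S(H)
b = rng.standard_normal(4); b /= np.linalg.norm(b)     # b in S(H)

M = LR_matrix(a, b)
print(np.allclose(M.T @ M, np.eye(4)))      # orthogonal
print(np.isclose(np.linalg.det(M), 1.0))    # determinant 1, so M lies in SO(4)
```

Of course, the numerical check only confirms the inclusion {LaRb} ⊂ SO(4) on samples; the reverse inclusion is exactly what lemma 8.4 provides.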

8.4 The case A = Im(H)

Recall that:

O1(H) = {f ∈ O(H) | f(1H) = 1H}

SO1(H) = {f ∈ SO(H) | f(1H) = 1H}
O1⁻(H) = {f ∈ O⁻(H) | f(1H) = 1H} = SO1(H)κ

Proposition 8.6.

(i) SO1(H) = {LaRā = RāLa | a ∈ S(H)}.
(ii) O1⁻(H) = {LaRāκ = RāLaκ | a ∈ S(H)}.

Proof.

(i) From proposition 8.5 we have that f ∈ SO(H) if and only if f = LaRb = RbLa for some a, b ∈ S(H). Now if f ∈ SO1(H) we must have 1H = f(1H) = a1Hb = ab, so that b = a⁻¹, and by lemma 5.6 a⁻¹ = ā.
Conversely, let f = LaRā = RāLa for some a ∈ S(H). By lemma 5.6 we know that ā ∈ S(H), so proposition 8.5 implies that f ∈ SO(H). Finally, as LaRā(1H) = a1Hā = aā = 1H, we can conclude that f ∈ SO1(H).
(ii) Follows from (i) and the identity O1⁻(H) = SO1(H)κ.

8.5 The case A = O

As we have mentioned earlier, in section 3, a reflection in R^8 is described by sa(x) = x − 2(⟨x, a⟩/‖a‖²)a for some a ∈ R^8\{0}. As O is the same vector space as R^8, we can use this description for a reflection in O as well. We will now describe SO(O) by means of reflections. First we need the following result.

Proposition 8.7. Let a ∈ S(O) and let Ba : O → O denote the map Ba(x) = axa (= (ax)a = a(xa), by lemma 4.18). Then sas1 = Ba and s1sa = Bā.

Proof. Let a ∈ S(O) and x ∈ O. Then

sas1(x) = sa(x − 2(⟨x, 1⟩/‖1‖²)1) = sa(x − 2⟨x, 1⟩1) = −sa(2⟨x, 1⟩1 − x) = −sa(x̄)
= −(x̄ − 2(⟨x̄, a⟩/‖a‖²)a) = 2⟨x̄, a⟩a − x̄ = 2⟨x̄, 1a⟩a − x̄ = 2⟨x̄ā, 1⟩a − x̄ = 2⟨\overline{ax}, 1⟩a − x̄
= 2⟨ax, 1⟩a − x̄ = (2⟨ax, 1⟩1 − ax + ax)a − x̄ = (\overline{ax} + ax)a − x̄ = (x̄ā + ax)a − x̄
= (x̄ā)a + (ax)a − x̄ = x̄(āa) + (ax)a − x̄ = x̄ + (ax)a − x̄ = (ax)a = axa = Ba(x)

where we have used lemmata 4.13, 4.16 and 4.18 freely, together with the fact that ⟨w̄, 1⟩ = ⟨w, 1⟩ for all w ∈ O (which follows from w̄ = 2⟨w, 1⟩1 − w). Notice that (x̄ā)a = x̄(āa) because O is (by lemma 5.10) an inverse loop.

Now (s1sa)(sas1) = 1O, i.e. s1sa = (sas1)⁻¹ = (Ba)⁻¹, so we need to show that (Ba)⁻¹ = Bā:

Ba⁻¹Ba(x) = a⁻¹(a(xa))a⁻¹ = (a⁻¹(a(xa)))a⁻¹ = ((a⁻¹a)(xa))a⁻¹ = (xa)a⁻¹ = x(aa⁻¹) = x1 = x

This means that Ba⁻¹ = (Ba)⁻¹, and by lemma 5.6 we know that a⁻¹ = ā, so that s1sa = Ba⁻¹ = Bā.

8.5.1 Isotopies and companions

Definition 8.8 (Isotopy). An isotopy of an inverse loop L is a triple (α, β | γ) of bijections α, β, γ : L → L such that α(x)β(y) = γ(xy) for all x, y ∈ L.

Let (α, β | γ) be an isotopy of an inverse loop L and denote the unit in L by 1L. As we have α(x)β(y) = γ(xy) for all x, y ∈ L, we have in particular that

α(z)β(1L) = γ(z1L) = γ(z) = γ(1Lz) = α(1L)β(z), ∀z ∈ L

Let a = (α(1L))⁻¹ and b = (β(1L))⁻¹. Then we have that

α(z) = γ(z)b = Rbγ(z),  β(z) = aγ(z) = Laγ(z).

I.e. γ(xy) = (γ(x)b)(aγ(y)) for some a, b ∈ L. Notice that, for a given bijection γ, there exist α, β such that (α, β | γ) is an isotopy if and only if such a and b exist. We have the following definition.

Definition 8.9 (Companions). Let (α, β | γ) be an isotopy of an inverse loop and let a, b ∈ L such that γ(xy) = (γ(x)b)(aγ(y)) for all x, y ∈ L. Then a and b are called a pair of companions for γ.

Lemma 8.10. If x(ry) = (xr)y for all x, y ∈ O, for some r ∈ O, then r is real.

Proof. Let r = r∞ + r0i0 + ... + r6i6 ∈ O be such that x(ry) = (xr)y for all x, y ∈ O. As

(in+1in)in+2 = −in+3in+2 = in+5 and in+1(inin+2) = in+1in+6 = −in+5

holds for all in ∈ {i0, ..., i6}, taking x = in+1 and y = in+2 we derive (by the distributive law) that r0 = ... = r6 = 0, i.e. r = r∞ is real.

The following lemma is proved in [1].

Lemma 8.11. Let (α, β | γ) be an isotopy of O\{0} and let a, b be a pair of companions for γ. Then any other pair of companions for γ is of the form r⁻¹a and br, where r is a non-zero real number.

Lemma 8.12. If γ ∈ SO(O) such that (α, β | γ) is an isotopy for some α, β ∈ SO(O), then α and β are uniquely determined up to sign, the only other pair being −α and −β.

Proof. Let (α, β | γ) be an isotopy with α, β, γ ∈ SO(O), i.e. α(x)β(y) = γ(xy) for all x, y ∈ O\{0}, and let a, b ∈ O\{0} be a pair of companions for γ. Recall that this means that α = Rbγ and β = Laγ. As α, β ∈ SO(O) we know that

1 = det(α) = det(Rbγ) = det(Rb)det(γ) = det(Rb)
1 = det(β) = det(Laγ) = det(La)det(γ) = det(La)

By lemma 8.11 a and b are unique up to scalar multiplication, i.e. all pairs of companions for γ are of the form br, r⁻¹a, where r is a non-zero real number. The corresponding maps are Rbrγ = rα and Lr⁻¹aγ = r⁻¹β, and if these are to lie in SO(O) we must have

1 = det(Rbrγ) = r⁸det(Rb)det(γ) = r⁸
1 = det(Lr⁻¹aγ) = r⁻⁸det(La)det(γ) = r⁻⁸

which implies that r ∈ {1, −1}.

It can be shown that the spin group Spin(8) consists of all isotopies (α, β | γ) such that γ ∈ SO(O). The following lemma is proved in [1].

Lemma 8.13. If τ : Spin(8) → Spin(8) is such that τ((α, β | γ)) = (β, γ | α) for all (α, β | γ) ∈ Spin(8), then τ is an outer automorphism.

Lemma 8.14. Assume that every f ∈ SO(O) can be written as f = Ban · ... · Ba1 for some a1, ..., an ∈ S(O). Then

f = Rbn · ... · Rb1 = Lcn · ... · Lc1 for some b1, ..., bn, c1, ..., cn ∈ S(O).

Proof. Let f ∈ SO(O). According to the assumption this means that f =

Ban · ... · Ba1 for some a1, ..., an ∈ S(O). If we let x, y ∈ O we now have that

f(xy) = Ban · ... · Ba1(xy) = an(...(a2(a1(xy)a1)a2)...)an.

By the Moufang laws (lemma 4.17) this means that

f(xy) = an(...(a2((a1x)(ya1))a2)...)an
= an(...(a3((a2(a1x))((ya1)a2))a3)...)an        (1)
= ...
= Lan · ... · La1(x) · Ran · ... · Ra1(y)

We let α = Lan · ... · La1 and β = Ran · ... · Ra1. Recall that La, Ra ∈ SO(O) if a ∈ S(O), so that α, β ∈ SO(O). This means that f(xy) = α(x)β(y) for all x, y ∈ O, with α, β ∈ SO(O). In other words, (α, β | f) is a member of Spin(8). By lemma 8.13 this implies that τ((α, β | f)) = (β, f | α) ∈ Spin(8), so that β(x)f(y) = α(xy) for all x, y ∈ O\{0}. As α ∈ SO(O), the assumption gives that α = Bbn · ... · Bb1 with b1, ..., bn ∈ S(O). By the same reasoning as in equation (1) we have that α(xy) = Lbn · ... · Lb1(x) · Rbn · ... · Rb1(y) for all x, y ∈ O\{0}. By lemma 8.12 we can derive that β = ±Lbn · ... · Lb1 and f = ±Rbn · ... · Rb1 (with the same sign). A possible sign can be absorbed into bn, since −Rbn = R−bn and −bn ∈ S(O), so f = Rbn · ... · Rb1 for suitable b1, ..., bn ∈ S(O).

That f = Lcn · ... · Lc1 for some c1, ..., cn ∈ S(O) follows in the same manner by studying the element τ((β, f | α)) = (f, α | β) ∈ Spin(8).

Proposition 8.15.

(i) If f ∈ SO(O) then f = La7 · ... · La1 = Rb7 · ... · Rb1 for some a1, ..., a7, b1, ..., b7 ∈ S(O).
(ii) If f ∈ O⁻(O) then f = La7 · ... · La1κ = Rb7 · ... · Rb1κ for some a1, ..., a7, b1, ..., b7 ∈ S(O).

Proof.
(i) Let f ∈ SO(O). Then (by corollary 3.5) we know that fκ ∈ O⁻(O). Furthermore κ = −s1, where s1 is the reflection in the hyperplane orthogonal to 1, so −fs1 = fκ ∈ O⁻(O). If fs1 were in SO(O) then, since dim(O) = 8 is even, det(−fs1) = (−1)⁸det(fs1) = 1 would give −fs1 ∈ SO(O) as well. Since −fs1 ∈ O⁻(O), this is not the case, so fs1 ∈ O⁻(O). By proposition 3.2, fs1 is a product of n ≤ 8 reflections sai. But as det(fs1) = det(sa_n) · ... · det(sa_1) = (−1)^n = −1, n is odd, and (padding with pairs sasa = id if necessary) we can write fs1 = sa7 · ... · sa1 for some a1, ..., a7 ∈ S(O). Using the fact that two reflections in the same direction compose to the identity, we have

f = sa7 · ... · sa1 · s1 = (sa7s1)(s1sa6)(sa5s1)(s1sa4)(sa3s1)(s1sa2)(sa1s1).

Using proposition 8.7 we arrive at

f = Ba7 Bā6 Ba5 Bā4 Ba3 Bā2 Ba1.

By lemma 5.6 we know that ā6, ā4, ā2 ∈ S(O), so that f = Bb7 · ... · Bb1 for some b1, ..., b7 ∈ S(O). As f ∈ SO(O) was arbitrary, lemma 8.14 concludes the proof.
(ii) Follows from (i) and corollary 3.5.

Proposition 8.16.

(i) SO(O) = {La7 · ... · La1 = Rb7 · ... · Rb1 | a1, ..., a7, b1, ..., b7 ∈ S(O)}
(ii) O⁻(O) = {La7 · ... · La1κ = Rb7 · ... · Rb1κ | a1, ..., a7, b1, ..., b7 ∈ S(O)}

Proof. (i) By proposition 8.15 we have that

SO(O) ⊂ {La7 ...La1 = Rb7 ...Rb1 | a1, ..., a7, b1, ..., b7 ∈ S(O)}. Since SO(O) is closed under composition and since La,Ra ∈ SO(O) for all a ∈ S(O) (by corollary 5.3) we know that

{La7 · ... · La1 = Rb7 · ... · Rb1 | a1, ..., a7, b1, ..., b7 ∈ S(O)} ⊂ SO(O).
(ii) Follows from (i) and corollary 3.5.
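To illustrate the inclusion just used (my own sketch, not part of the thesis), the code below builds the 8×8 matrix of La for randomly chosen unit octonions, checks that each such matrix lies in SO(8), and checks that a product of seven of them lies in SO(8) as well. It reuses the octonion multiplication helper sketched in section 4.4; of course it only exhibits elements of SO(8) of this form, while proposition 8.16 says that every element arises this way.

```python
import numpy as np

PROD = {}
for n in range(7):
    for a, b, c in [((n + 1) % 7, (n + 2) % 7, (n + 4) % 7),
                    ((n + 2) % 7, (n + 4) % 7, (n + 1) % 7),
                    ((n + 4) % 7, (n + 1) % 7, (n + 2) % 7)]:
        PROD[(a, b)] = (1, c)
        PROD[(b, a)] = (-1, c)

def omul(x, y):
    """Octonion product in coefficients over the basis (1, i0, ..., i6)."""
    z = np.zeros(8)
    for p in range(8):
        for q in range(8):
            if x[p] == 0 or y[q] == 0:
                continue
            if p == 0:
                sign, idx = 1, q
            elif q == 0:
                sign, idx = 1, p
            elif p == q:
                sign, idx = -1, 0
            else:
                sign, idx = PROD[(p - 1, q - 1)]
                idx += 1
            z[idx] += sign * x[p] * y[q]
    return z

def L_matrix(a):
    """8x8 matrix of L_a : O -> O, x -> a x, in the standard basis."""
    return np.column_stack([omul(a, e) for e in np.eye(8)])

rng = np.random.default_rng(3)
units = [v / np.linalg.norm(v) for v in rng.standard_normal((7, 8))]   # a_1, ..., a_7 in S(O)

for a in units:
    M = L_matrix(a)
    assert np.allclose(M.T @ M, np.eye(8)) and np.isclose(np.linalg.det(M), 1.0)   # L_a in SO(8)

F = np.linalg.multi_dot([L_matrix(a) for a in reversed(units)])   # matrix of L_{a_7} ... L_{a_1}
print(np.allclose(F.T @ F, np.eye(8)), np.isclose(np.linalg.det(F), 1.0))          # True True
```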

8.6 The case A = Im(O)

Recall that:

O1(O) = {f ∈ O(O) | f(1O) = 1O}

SO1(O) = {f ∈ SO(O) | f(1O) = 1O}
O1⁻(O) = {f ∈ O⁻(O) | f(1O) = 1O}

Proposition 8.17.

(i) SO1(O) = {La7 · ... · La1 = Rb7 · ... · Rb1 | a1, ..., a7, b1, ..., b7 ∈ S(O), a7 = (a6(...a3(a2a1)...))⁻¹ and b7 = ((...(b1b2)b3...)b6)⁻¹}
(ii) O1⁻(O) = {La7 · ... · La1κ = Rb7 · ... · Rb1κ | a1, ..., a7, b1, ..., b7 ∈ S(O) as in (i)}.

Proof.

La7 ...La1 = Rb7 ...Rb1 for some a1, ..., a7, b1, ..., b7 ∈ S(O). Now if f ∈ SO1(O) we must have

1O = f(1O) = La7 ...La1 (1O) = a7(...a2(a11O)...) = a7((...a3(a2a1)...) and

1O = f(1O) = Rb7 ...Rb1 (1O) = (...(1Ob1)b2...)b7 = (...(b1b2)b3...)b7

−1 −1 so that a7 = ((a6(...a3(a2a1)...)) and b7 = ((...(b1b2)b3...)b6) .

Now let f = La7 ...La1 = Rb7 ...Rb1 for some a1, ..., a7, b1, ..., b7 ∈ S(O) −1 −1 such that a7 = ((a6(...a3(a2a1)...)) and b7 = ((...(b1b2)b3...)b6) }. By proposition 8.16(i) we know that f ∈ SO(O). Furthermore as

−1 f(1O) = La7 ...La1 (1O) = a7((...a3(a2a1)...) = a7(a7) = 1O and −1 f(1O) = Rb7 ...Rb1 (1O) = (...(b1b2)b3...)b7 = (b7) b7 = 1O

we know that f ∈ SO1(O). (ii) Follows from (i) and proposition 7.5.

References

[1] John H. Conway and Derek A. Smith, On quaternions and octonions: Their geometry, arithmetic and symmetry, A K Peters, Ltd., 2003.
[2] H.-D. Ebbinghaus et al., Numbers, Springer-Verlag, 1990.
[3] A. A. Albert, Absolute valued algebras, Ann. of Math. (2), 48:495–501, 1947.
[4] Erik Darpö and Ernst Dieterich, The double sign of a real division algebra of finite dimension greater than one, to appear.
