U.U.D.M. Project Report 2013:11

The (1, 2, 4, 8)-Theorem for Composition Algebras

Rasmus Précenth

Degree project in mathematics, 15 credits. Supervisor and examiner: Ernst Dieterich. June 2013

Department of Mathematics, Uppsala University

The (1, 2, 4, 8)-Theorem for Composition Algebras

Rasmus Précenth

June 2, 2013

Contents

1 Introduction
  1.1 Hurwitz Problem
  1.2 The (1,2,4,8)-Theorem for Real Division Algebras

2 Preliminaries
  2.1 Quadratic Forms and Bilinear Forms
  2.2 Algebras

3 Composition Algebras
  3.1 Definition and Basic Properties
  3.2 Purely Imaginary Elements and Conjugation
  3.3 Associativity Properties
  3.4 Doubling

4 The (1, 2, 4, 8)-Theorem
  4.1 Proving Hurwitz Theorem

Appendices

Appendix A Hurwitz Problem

1 Introduction

1.1 Hurwitz Problem

The topic of quadratic forms comes up in various parts of mathematics. The so-called Hurwitz Problem, named after the German mathematician Adolf Hurwitz, asks for what values of n the following identity holds

(x1² + x2² + ··· + xn²)(y1² + y2² + ··· + yn²) = z1² + z2² + ··· + zn²   (1.1)

where all xi, yi ∈ R and each zi is a linear combination of {xiyj | 1 ≤ i, j ≤ n}. The values of n for which this is true are n = 1, 2, 4 and 8. This statement is called Hurwitz Theorem. This thesis will show that these are in fact the only valid values of n, no matter which field we are considering; a theorem proved using the framework of composition algebras by Nathan Jacobson for fields of characteristic not two (Jacobson 1958 [3]). Tonny Albert Springer completed this theorem by proving it for characteristic two as well (Springer 1963 [4]). The identities for 1 and 2 are relatively simple and are

x1²y1² = (x1y1)²   (1.2)

and

(x1² + x2²)(y1² + y2²) = (x1y1 − x2y2)² + (x1y2 + x2y1)².   (1.3)

The ones for 4 and 8 are highly non-trivial. They are included in Appendix A.
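The two-square identity (1.3) is easy to check by machine. The following Python snippet is a small illustrative sketch (not part of the original derivation) that verifies it on random integers.

    import random

    def check_two_square(trials=1000):
        # Check (x1^2 + x2^2)(y1^2 + y2^2) = (x1*y1 - x2*y2)^2 + (x1*y2 + x2*y1)^2
        for _ in range(trials):
            x1, x2, y1, y2 = (random.randint(-10**6, 10**6) for _ in range(4))
            lhs = (x1**2 + x2**2) * (y1**2 + y2**2)
            rhs = (x1*y1 - x2*y2)**2 + (x1*y2 + x2*y1)**2
            if lhs != rhs:
                return False
        return True

    print(check_two_square())   # prints True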

1.2 The (1,2,4,8)-Theorem for Real Division Algebras

A division algebra1 is a non-zero algebra A in which the linear operators La and Ra are invertible for each a ∈ A \ {0}. If the base field is R then the algebra is called real. Well-known examples of real division algebras are:

• The algebra of real numbers R

• The algebra of complex numbers C

• The algebra of quaternions H

• The algebra of octonions O

These have dimensions 1, 2, 4 and 8 respectively. In 1877 the German mathematician Ferdinand Georg Frobenius proved the following theorem for finite-dimensional real division algebras (Frobenius 1878 [2]).

Theorem 1.1 (Theorem of Frobenius). There are, up to isomorphism, only three associative finite-dimensional real division algebras. They are R, C and H, with H being the only non-commutative algebra among them.

1See section 2.2 for the definition of an algebra.

Max Zorn, most famous for his lemma concerning partial orders, later proved a theorem analogous to the theorem of Frobenius (Zorn 1931 [6]). In 1931 he proved the following.

Theorem 1.2. There are, up to isomorphism, only four alternative2 finite-dimensional real division algebras. They are R, C, H and O, with O being the only non-associative algebra among them.

This thesis will show that there is a similar theorem for composition algebras, where there is no restriction on the field.

2 Preliminaries

2.1 Quadratic Forms and Bilinear Forms

Definition Given a field k of any characteristic and a vector space V over said field, a quadratic form on V is a map N : V → k satisfying

N(λx) = λ²N(x)   ∀λ ∈ k, x ∈ V   (2.1)

and that the associated map h , iN : V × V → k given by

hx, yiN = N(x + y) − N(x) − N(y)   (2.2)

is bilinear. It follows easily that N(0) = 0 and that h , iN is symmetric. For this reason the rest of this thesis will only consider symmetric bilinear forms; if nothing else is specified we assume that bilinear forms are symmetric. For readability the subscript N will be omitted when it is clear from the context which quadratic form the bilinear form refers to.

Example If V = Rn×1 then all quadratic forms on V are described by symmetric matrices via the equation

N(x) = xT Ax. (2.3)

Moreover, A is uniquely determined by N.

Proof. Given a symmetric matrix A ∈ Rn×n define the map N : V −→ k as in (2.3). The first property holds because

N(λx) = (λx)T A(λx) = λ2xT Ax = λ2N(x) and the second since

hx, yi = N(x + y) − N(x) − N(y) = (x + y)T A(x + y) − xT Ax − yT Ay = xT Ax + xT Ay + yT Ax + yT Ay − xT Ax − yT Ay = xT Ay + yT Ax = xT (2A)y.

2See section 2.2 for the definition of alternative.

The last equality holds since A is symmetric.

xT Ay = (xT Ay)T = yT AT (xT )T = yT Ax

This shows that h , i is bilinear which proves that N is a quadratic form. The converse is shown using the fact that all symmetric bilinear forms h , i : Rn×1 × Rn×1 → R are given by a symmetric matrix A in the following way hx, yi = xT Ay.

The quadratic form is recovered from the bilinear form via the formula N(x) = (1/2)hx, xi since

hx, xi = N(2x) − 2N(x) = 4N(x) − 2N(x) = 2N(x)   (2.4)

and hence N(x) = (1/2)xT Ax = xT Bx where B = (1/2)A is symmetric. Finally assume that for some quadratic form N on V we have

N(x) = xT Ax = xT Bx

for symmetric matrices A and B. It follows that 0 = xT Ax − xT Bx = xT (A − B)x for all x which, since A − B is symmetric, implies that A − B = 0 and hence A = B.
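The correspondence above is easy to experiment with numerically. The following NumPy sketch (an illustration only; the symmetric matrix is a random choice) evaluates N(x) = xT Ax, forms h , iN by the polarization formula (2.2), checks that hx, yi = xT(2A)y and recovers A as half the Gram matrix of the standard basis.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    A = rng.standard_normal((n, n))
    A = (A + A.T) / 2                      # make A symmetric

    def N(x):
        return x @ A @ x                   # quadratic form N(x) = x^T A x

    def bil(x, y):
        return N(x + y) - N(x) - N(y)      # polarization, equation (2.2)

    x, y = rng.standard_normal(n), rng.standard_normal(n)
    assert np.isclose(bil(x, y), x @ (2 * A) @ y)

    # The Gram matrix of < , >_N in the standard basis is 2A, so A is recovered as half of it
    E = np.eye(n)
    gram = np.array([[bil(E[i], E[j]) for j in range(n)] for i in range(n)])
    assert np.allclose(gram / 2, A)
    print("A recovered from N")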

The equation in (2.4) is true for all quadratic forms on all vector spaces and gives a way to recover the quadratic form from the associated bilinear form provided that char k 6= 2.

Proposition 2.1. If char k 6= 2 all quadratic forms N on V can be recovered from the bilinear form h , iN via the formula

N(x) = (1/2)hx, xiN.   (2.5)

However, if char k = 2 we have hx, xiN = 0 for all x ∈ V .

Proof.

hx, xiN = N(2x) − 2N(x) = 4N(x) − 2N(x) = 2N(x)   (2.6)

If char k 6= 2 we get N(x) = (1/2)hx, xiN and if char k = 2 we have hx, xiN = 2N(x) = 0.

We now move on to properties of symmetric bilinear forms.

Definition Two vectors x, y ∈ V are said to be orthogonal with respect to a bilinear form h , i on V if hx, yi = 0. We denote this with x ⊥ y (or y ⊥ x since hx, yi = hy, xi). Two subsets U1, U2 ⊆ V are said to be orthogonal if all their vectors are orthogonal, i.e.

U1 ⊥ U2 ⇔ x ⊥ y ∀x ∈ U1, y ∈ U2.

The notation x ⊥ U is short for {x} ⊥ U. The orthogonal complement U ⊥ of a subset U ⊆ V is the set consisting of all vectors orthogonal to all vectors in U, i.e U ⊥ = {x ∈ V | x ⊥ U} or equivalently, the largest subset U ⊥ of V satisfying U ⊥ ⊥ U.
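Concretely, for a form hx, yi = xT Gy on Rn the orthogonal complement of a span is a null-space computation. The NumPy sketch below is an illustration only; the form G and the spanning vectors are arbitrary choices made for this example.

    import numpy as np

    # A non-degenerate (indefinite) symmetric bilinear form <x, y> = x^T G x on R^4
    G = np.diag([1.0, 1.0, 1.0, -1.0])

    # U spanned by the columns of B
    B = np.column_stack([[1.0, 0.0, 0.0, 1.0],
                         [0.0, 1.0, 0.0, 0.0]])

    # x lies in U^perp exactly when B^T G x = 0, so U^perp is the null space of B^T G
    M = B.T @ G
    _, s, Vt = np.linalg.svd(M)
    rank = int((s > 1e-12).sum())
    U_perp = Vt[rank:].T                        # columns spanning U^perp

    print(U_perp.shape[1])                      # 2, so dim U + dim U^perp = dim V here
    print(np.allclose(B.T @ G @ U_perp, 0))     # every column is orthogonal to U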

Proposition 2.2. Given a bilinear form h , i : V ×V −→ k and a subset U ⊆ V , then U ⊥ is a subspace of V and U ⊆ (U ⊥)⊥.

Proof. (SS1) h0, xi = h0 + 0, xi = h0, xi + h0, xi. This shows that h0, xi = 0 for all x ∈ U so 0 must belong to U ⊥. (SS2) Assume that x and y lie in U ⊥. Then hx, ui = hy, ui = 0 for all u ∈ U. It follows that hx + y, ui = hx, ui + hy, ui = 0 + 0 = 0 for all u ∈ U. Hence x + y ∈ U ⊥.

(SS3) Assume that x ∈ U ⊥. Then, since hx, ui = 0 for all u ∈ U, we have hλx, ui = λhx, ui = λ0 = 0 for all u ∈ U and this shows the final subspace axiom, namely λx ∈ U ⊥. This proves that U ⊥ is a subspace of V . Now assume that x ∈ U, then x ⊥ U ⊥ by definition. But that is also the definition of an element in (U ⊥)⊥, hence x ∈ (U ⊥)⊥. The converse (U ⊥)⊥ ⊆ U is not true for all bilinear forms and subspaces U. For a subspace to behave that way we need a property on both the subspace and the form to make sure it behaves nicely. Definition A bilinear form h , i is called non-degenerate if the only vector in V orthogonal to all other vectors is 0, i.e V ⊥ = {0} or equivalently

hx, yi = 0, ∀y ∈ V =⇒ x = 0. (2.7)

A quadratic form N on V is said to be non-degenerate if its associated bilinear form h , iN is non-degenerate. If the restriction of h , i to a subspace U of V is non-degenerate we call that subspace non-singular. The notations h , i|U×U and ⊥U will denote the restricted form and the restricted orthogonality relation, respectively. Proposition 2.3. If the form h , i is non-degenerate and ha, yi = hb, yi for all y ∈ V then a = b.

Proof. Using (2.7) we get

ha, yi = hb, yi ∀y ∈ V ⇐⇒ ha − b, yi = 0 ∀y ∈ V =⇒ a − b = 0 ⇐⇒ a = b

The last proposition will be very useful later when proving identities in compo- sition algebras. By proving that hx, zi = hy, zi holds for all z one can show that x = y. One thing useful for proving things for bilinear forms is the following equiv- alence.

Proposition 2.4. If h , i is a bilinear form on a vectorspace V and U is a subspace of V , then the following are equivalent. (i) U is non-singular.

(ii) U ∩ U ⊥ = {0}.

Proof. (i) ⇒ (ii) If U is non-singular, then h , i|U×U is non-degenerate, which is the same as U ⊥U = {0}. On the other hand U ⊥U = {x ∈ U | x ⊥ U} = U ∩ U ⊥. So U ∩ U ⊥ = {0}. (ii) ⇒ (i) Assume U ∩ U ⊥ = {0}. Let x ∈ U and assume

hx, yi = 0, ∀y ∈ U.

If we can show x = 0, then we are done. The above property is the same as x ⊥ U, so x ∈ U ⊥. That means that x ∈ U ∩ U ⊥ = {0} so x must be 0.

Lemma 2.5. Let V be a finite-dimensional vector space and h , i a non-degenerate bilinear form on V . Then the linear map λ : V −→ V ∗, v 7→ λv = hv, ·i is an isomorphism.

Proof. Let V be a vector space over k and dim V = n for some n ∈ N. Also let h , i : V × V −→ k be a non-degenerate bilinear form. Now let λ : V −→ V ∗, v 7→ λv = hv, ·i. We see that λ is linear since λv+w = hv + w, ·i = hv, ·i + hw, ·i = λv + λw and λcv = hcv, ·i = chv, ·i = cλv for all v, w ∈ V and c ∈ k. Now look at the kernel of λ,

ker λ = {v ∈ V | λv = 0}

= {v ∈ V | λv(w) = 0, ∀w ∈ V } = {v ∈ V | hv, wi = 0, ∀w ∈ V } = {0}.

So λ is injective. It is also surjective since dim V ∗ = dim V holds when V is finite-dimensional. Hence λ is an isomorphism. Theorem 2.6. If V is a vector space over k, h , i a bilinear form on V and U ⊆ V a finite-dimensional non-singular subspace, then

V = U ⊕ U ⊥.

If moreover h , i is non-degenerate, then U ⊥ is non-singular.

Proof. Assume that U ⊆ V is non-singular and that dim U < ∞. Then U ∩ U ⊥ = {0} from Proposition 2.4 which makes the sum U ⊕ U ⊥ direct. It remains to show that U + U ⊥ = V . Let w ∈ V . Since U is finite-dimensional and the restriction h , i|U×U is non-degenerate, the map µ : U −→ U ∗, v 7→ µv = hv, ·i|U×U is an isomorphism by Lemma 2.5. Let λ : V −→ V ∗, v 7→ λv = hv, ·i. The form λw|U belongs to U ∗ so there must be a vector u ∈ U such that µu = λw|U which is the same as hu, vi = hw, vi, ∀v ∈ U. Subtracting hw, vi yields

hu − w, vi = 0, ∀v ∈ U.

Let u0 = w − u. Then u0 = −(u − w) ⊥ U so u0 ∈ U ⊥ and

w = u + u0

which concludes the proof of the first statement. Now assume that h , i is non-degenerate and let x ∈ U ⊥ ∩ (U ⊥)⊥; by Proposition 2.4 it suffices to prove that x = 0. Let y ∈ V . Then we can write y = u + u0 where u ∈ U and u0 ∈ U ⊥ since V = U ⊕ U ⊥ by the first statement of this theorem. By our assumption x ∈ U ⊥ and x ∈ (U ⊥)⊥ so hx, ui = 0 and hx, u0i = 0 respectively. Adding these we get hx, ui + hx, u0i = hx, u + u0i = 0. This means that we have hx, yi = 0 for all y ∈ V which implies that x = 0 since h , i is non-degenerate.

Corollary 2.7. If h , i : V × V −→ k is a bilinear form on a vector space V and U ⊆ V a finite-dimensional non-singular subspace, then

dim V = dim U + dim U ⊥ (2.8)

Proof. Follows from Theorem 2.6. Corollary 2.8. If h , i : V × V −→ k is a non-degenerate bilinear form and U a finite-dimensional non-singular subspace of V , then

(U ⊥)⊥ = U (2.9)

Proof. We already know from Proposition 2.2 that U ⊆ (U ⊥)⊥ so we only need to prove the inclusion (U ⊥)⊥ ⊆ U. Let x ∈ (U ⊥)⊥. From Theorem 2.6 we know that V = U ⊕ U ⊥ so we can write x = u + u0 where u ∈ U and u0 ∈ U ⊥. But u ∈ (U ⊥)⊥ since u ∈ U which means that x − u ∈ (U ⊥)⊥. On the other hand x − u = u0 ∈ U ⊥ so x − u ∈ U ⊥ ∩ (U ⊥)⊥ and hence x − u = 0. Moving the u to the other side gives x = u so x ∈ U.
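Theorem 2.6 is constructive: the U-component u of a vector w is obtained by solving hu, vii = hw, vii against a basis (vi) of U, which is possible precisely because the Gram matrix of a non-singular subspace is invertible. A NumPy sketch of this decomposition (the form and the vectors are again arbitrary illustrative choices):

    import numpy as np

    G = np.diag([1.0, 1.0, 1.0, -1.0])          # non-degenerate symmetric form on R^4
    bil = lambda x, y: x @ G @ y

    B = np.column_stack([[1.0, 0.0, 0.0, 0.0],  # basis of a non-singular subspace U
                         [0.0, 0.0, 1.0, 2.0]])
    w = np.array([2.0, -1.0, 3.0, 5.0])

    # Solve <u, v_i> = <w, v_i> for u = B c, i.e. (B^T G B) c = B^T G w
    gram = B.T @ G @ B                          # invertible since U is non-singular
    c = np.linalg.solve(gram, B.T @ G @ w)
    u = B @ c
    u_perp = w - u                              # then u' = w - u lies in U^perp

    print(np.allclose(B.T @ G @ u_perp, 0))     # True: w = u + u' with u in U, u' in U^perp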

2.2 Algebras

Definition An algebra A over a field k is a vector space over k together with a bilinear multiplication A × A −→ A, (x, y) 7→ xy. It follows that the maps La : A −→ A, x 7→ ax and Ra : A −→ A, x 7→ xa are linear for all a ∈ A.

Reminder A multiplication A × A −→ A, (x, y) 7→ xy is bilinear if the following axioms are satisfied.

1. x(y + z) = xy + xz for all x, y, z ∈ A.
2. (x + y)z = xz + yz for all x, y, z ∈ A.
3. λ(xy) = (λx)y = x(λy) for all x, y ∈ A and λ ∈ k.

They are called right additivity, left additivity and homogeneity, respectively.

Definition The dimension of an algebra A is the dimension of its underlying vector space.

Here follows a list of various types of algebras.

Division Algebra A non-zero algebra in which the maps La and Ra are bijective for all a 6= 0. Division by non-zero elements is possible in division algebras, hence its name. A commutative and associative division algebra with unity forms a field together with vector addition and multiplication.

Quadratic Algebra An algebra with unity e in which the square of an element x is contained in the span of e and x, i.e.

x2 = αe + βx for some α, β ∈ k (2.10)

Alternative Algebra An algebra A is called alternative if for all x, y ∈ A

x(xy) = (xx)y
x(yy) = (xy)y
x(yx) = (xy)x.

In other words, the subalgebra generated by x and y is associative.

3 Composition Algebras

In this section we will speak of the algebraic objects known as composition algebras. These were introduced by Nathan Jacobson as a means to prove Hurwitz Theorem for any field of characteristic not two. The presentation here follows closely that in [5].

3.1 Definition and Basic Properties

Definition A composition algebra C over a field k is a pair (C,N) where C is a non-zero algebra with identity e and N : C −→ k a non-degenerate quadratic form that satisfies

N(xy) = N(x)N(y)   ∀x, y ∈ C.   (3.1)

The equation (3.1) is what relates composition algebras to quadratic forms.

Composition algebras are sometimes called normed algebras. The quadratic form N is called the norm and the associated bilinear form h , iN is called the inner product. Note that it is not necessarily an inner product in the usual sense; it doesn’t have to be positive definite.

Definition A composition subalgebra D of a composition algebra C is a non- singular subspace D of C that is closed under multiplication and contains the identity e.

Examples The set of real numbers R and the set of complex numbers C are real composition algebras with the quadratic forms NR(x) = x² and NC(x + iy) = x² + y² respectively. A slightly more advanced example is the algebra Z2 ⊕ Z2 over Z2 with multiplication (x1, y1)(x2, y2) = (x1x2, y1y2), quadratic form N((x, y)) = xy and identity e = (1, 1).
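The Z2 ⊕ Z2 example is small enough to check exhaustively. The following Python sketch (an illustration only) verifies the multiplicativity (3.1) of N for all sixteen pairs and the non-degeneracy of the associated bilinear form.

    from itertools import product

    F2 = (0, 1)
    elements = list(product(F2, F2))            # the four elements of Z2 + Z2

    mul = lambda a, b: ((a[0]*b[0]) % 2, (a[1]*b[1]) % 2)   # componentwise product
    N   = lambda a: (a[0]*a[1]) % 2                          # N((x, y)) = xy
    add = lambda a, b: ((a[0]+b[0]) % 2, (a[1]+b[1]) % 2)
    bil = lambda a, b: (N(add(a, b)) - N(a) - N(b)) % 2      # <a, b> = N(a+b) - N(a) - N(b)

    # (3.1): the norm is multiplicative
    assert all(N(mul(a, b)) == (N(a)*N(b)) % 2 for a in elements for b in elements)

    # non-degeneracy: only 0 is orthogonal to everything
    radical = [a for a in elements if all(bil(a, b) == 0 for b in elements)]
    assert radical == [(0, 0)]
    print("Z2 + Z2 is a composition algebra")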

Proposition 3.1. Let C be any composition algebra. Then the identity e sat- isfies N(e) = 1. (3.2)

Proof. N(e) = N(ee) = N(e)N(e) =⇒ N(e) = 1 or N(e) = 0. Assume that N(e) = 0. Then N(x) = N(xe) = N(x)N(e) = 0 for all x ∈ C so

he, xi = 0 ∀x ∈ C which, since h , i is non-degenerate, implies that e = 0. But a composition algebra is by definition non-zero so we have a contradiction. Hence N(e) = 1.

Definition An element x of C is said to have an inverse if there is some y ∈ C such that xy = yx = e.

Remark Nothing is yet said about the uniqueness of inverses. Until the uniqueness is proven we have to include the possibility of multiple inverses.

Proposition 3.2. If x is an element of a composition algebra and x has an inverse x−1 then N(x−1) = N(x)−1. In particular, N(x) and N(x−1) are both non-zero. Proof. Assume x is an invertible element in a composition algebra C. Then the following holds 1 = N(e) = N(xx−1) = N(x)N(x−1). Hence, N(x−1) = N(x)−1. We will see later that the condition N(x) 6= 0 is in fact sufficient for x to have an inverse and that the inverse is, in fact, unique. We now continue with some more technical identities that will prove very useful. Proposition 3.3. In every composition algebra C the following identities hold for all x, x1, x2, y, y1, y2 ∈ C

hx1y, x2yi = hx1, x2iN(y) (3.3)

hxy1, xy2i = N(x)hy1, y2i (3.4)

hx1y1, x2y2i + hx1y2, x2y1i = hx1, x2ihy1, y2i. (3.5)

Proof. Regarding (3.3) we have for all x1, x2, y ∈ C

hx1y, x2yi = N(x1y + x2y) − N(x1y) − N(x2y)

= N((x1 + x2)y) − N(x1y) − N(x2y)

= N(x1 + x2)N(y) − N(x1)N(y) − N(x2)N(y)

= (N(x1 + x2) − N(x1) − N(x2))N(y)

= hx1, x2iN(y).

The identity (3.4) is proved in the same way as (3.3). The last one is proved by using (3.3). On one hand we have hx1(y1 + y2), x2(y1 + y2)i = hx1y1 + x1y2, x2y1 + x2y2i

= hx1y1, x2y1 + x2y2i + hx1y2, x2y1 + x2y2i

= hx1y1, x2y1i + hx1y1, x2y2i + hx1y2, x2y1i +

hx1y2, x2y2i

= hx1y1, x2y2i + hx1y2, x2y1i + hx1, x2iN(y1) +

hx1, x2iN(y2) and on the other we have

hx1, x2iN(y1 + y2) = hx1, x2i(hy1, y2i + N(y1) + N(y2))

= hx1, x2ihy1, y2i + hx1, x2i(N(y1) + N(y2)).

By (3.3), the starting terms of both chains coincide. Subtracting hx1, x2i(N(y1)+ N(y2)) from the end terms we obtain (3.5). Corollary 3.4. Let C be a composition algebra and let x, y ∈ C with hx, yi = 0. Then for all x1, y1 ∈ C,

hx1x, y1yi = −hx1y, y1xi. (3.6)

Proof. Follows directly from (3.5). Proposition 3.5. Let C be a composition algebra and x an element in C. Then the following identity holds

x2 − hx, eix + N(x)e = 0. (3.7)

Proof. Let x ∈ C. Then create the inner product of the left hand side of (3.7) with an arbitrary y.

hx2 − hx, eix + N(x)e, yi = hx2, yi − hx, eihx, yi + N(x)he, yi = hx2, yi − hx, eihx, yi + hxe, xyi = 0

The equalities are valid since N(x)he, yi = hxe, xyi by (3.4) and hx2, yi + hxe, xyi = hx, eihx, yi by (3.5). Using Proposition 2.3 we obtain (3.7). Corollary 3.6. If (C,M) and (C,N) are composition algebras, then M = N. Proof. For every x ∈ C we have by (3.7)

x² = hx, eiM x − M(x)e
x² = hx, eiN x − N(x)e.

So hx, eiM x − M(x)e = hx, eiN x − N(x)e. (3.8) Now we have two cases, e and x are either linearly independent or linearly dependent.

First case If e and x are linearly independent then −M(x) = −N(x) by (3.8) so M(x) = N(x).

Second case If e and x are linearly dependent then x = λe for some scalar λ. Thus we obtain

M(x) = λ2M(e) = λ2 = λ2N(e) = N(x).

Both cases lead to M(x) = N(x) which concludes the proof.

A weaker consequence of the identity (3.7) is the following.

Corollary 3.7. Every composition algebra is a quadratic algebra.

Proof. Let C be a composition algebra. Then for all x ∈ C, x2 = hx, eix − N(x)e by (3.7). So x2 ∈ span{e, x}.

This gives us some geometric understanding of the multiplication in a composition algebra.
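For C = C the identity (3.7) reads x² − 2 Re(x)·x + |x|² = 0, since hx, ei = 2 Re(x) and N(x) = |x|². A quick numerical sanity check in Python (an illustration only):

    import random

    def check_37(trials=1000):
        for _ in range(trials):
            x = complex(random.uniform(-5, 5), random.uniform(-5, 5))
            # In C: <x, e> = N(x + 1) - N(x) - N(1) = 2 Re(x) and N(x) = |x|^2
            val = x*x - 2*x.real*x + abs(x)**2
            if abs(val) > 1e-9:
                return False
        return True

    print(check_37())   # True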

3.2 Purely Imaginary Elements and Conjugation

We will now generalize the notion of imaginary complex numbers that shows up in many parts of mathematics.

Definition The set of all purely imaginary elements of a composition algebra C over k is the set

Im C = {x ∈ C | x² ∈ ke, x ∉ ke \ {0}}.   (3.9)

Proposition 3.8. Let C be a composition algebra over a field k with char k 6= 2. Then Im C is a non-singular subspace of C, Im C = (ke)⊥ and

C = Im C ⊕ ke.

Proof. To prove Proposition 3.8 it is enough to show that ke is a non-singular subspace and that Im C = (ke)⊥. The rest will follow from Theorem 2.6 since dim ke = 1. Assume that ke is singular. Then x ∈ (ke)⊥ ∩ ke for some non-zero x ∈ C. So x = λe, λ 6= 0 and hx, ei = 0. But this is a contradiction since 0 = hλe, ei = λhe, ei = λ(N(2e) − 2N(e)) = 2λ when char k 6= 2. Hence ke is non-singular. To show Im C = (ke)⊥ we first show Im C ⊆ (ke)⊥. Let x ∈ Im (C) \{0} then e and x are linearly independent and x2 = λe for some λ. The identity (3.7) on the other hand gives us x2 = hx, eix − N(x)e. Combining these two we get N(x) = −λ and hx, ei = 0. So x ∈ (ke)⊥. Now assume that x ∈ (ke)⊥. Then hx, ei = 0 and from (3.7) we have, once again, x2 = hx, eix − N(x)e = −N(x)e. Also, since x ⊥ ke, we have x∈ / ke \{0} because ke is non-singular. Hence x ∈ Im C. That concludes the proof that Im C = (ke)⊥. By Theorem 2.6, Im C is a non-singular subspace of C and C = Im C ⊕ ke, a direct sum decomposition.

The above proposition has a generalization called Frobenius Lemma ([1, p. 227]) that states that all real quadratic algebras A can be written as A = Re ⊕ Im A. That proof can easily be extended to all quadratic algebras over a field with characteristic not equal to two. Hopefully, Proposition 3.8 gives the reader some more understanding of what Im C looks like. For example, Im R = {0}, R = R ⊕ {0} and Im C = Ri, C = Ri ⊕ R. In Z2 ⊕ Z2, however, we have Im (Z2 ⊕ Z2) = {0} and Z2e = {0, (1, 1)}. It follows that Z2 ⊕ Z2 6= Im (Z2 ⊕ Z2) ⊕ Z2e.

Definition Let C be a composition k-algebra. The conjugation map ¯ : C −→ C, x 7→ x̄ is defined by

x̄ = hx, eie − x.   (3.10)

This can also be seen as −s(x) where s(x) is the reflection of x in (ke)⊥ (or Im C if char k 6= 2). For C = C the definition coincides with the normal understanding of conjugates. In R we have x = x and in C we have x + iy = x−iy. Computing the conjugate for each element in Z2 ⊕ Z2 one gets 0 = 0, e = e, (0, 1) = (1, 0) and (1, 0) = (0, 1). Many of the usual properties of conjugates hold with the above definition of conjugation for every composition algebra. Proposition 3.9. Let C be a composition algebra. Then conjugation is a linear map and x = x. Proof. The proof that conjugation is linear follows from the definition since h , i is a bilinear form;

x + y = hx + y, eie − (x + y) = hx, eie − x + hy, eie − y = x + y, and similarily for multiplication with scalars. The last part follows from the computation

x = hx, eie − x = hhx, eie − x, eie − hx, eie + x = hx, eihe, eie − hx, eie − hx, eie + x = 2hx, eie − 2hx, eie + x = x. since he, ei = 2N(e) = 2 by Proposition 2.1 and 3.1. This is a good start. But one might ask what other properties hold. For example, in C (with the ordinary conjugate) we have xx = |x|2, xy = xy and |x| = |x|. It turns out that these also hold in general composition algebras, with slight modifications. Proposition 3.10. In every composition algebra C the following holds for all x, y ∈ C. (i) xx = xx = N(x)e (ii) xy = yx (iii) N(x) = N(x)

(iv) hx, yi = hx, yi

Proof. (i) Follows from (3.7) in the following way

N(x)e = hx, eix − x2 = (hx, eie − x)x = xx.

The other way round, N(x)e = xx is done in the same way. Remember that λx = (λx)e = x(λe) for all scalars λ. (ii) We first observe that

xy = hy, eix + hx, eiy − hx, yie − yx (3.11)

since

xy = (x + y)2 − x2 − y2 − yx = hx + y, ei(x + y) − N(x + y)e − hx, eix + N(x)e − hy, eiy + N(y)e − yx = hx, eiy + hy, eix − (N(x + y) − N(x) − N(y))e − yx = hy, eix + hx, eiy − hx, yie − yx

by (3.7). By the definition of conjugates, (3.11) and (3.5) we have

yx = (hy, eie − y)(hx, eie − x) = hx, eihy, eie − hx, eiy − hy, eix + yx = hx, eihy, eie − hx, yie − xy = hxy, eie − xy = xy.

(iii) From the previous proposition and part (i);

N(x)e = xx = xx = N(x)e

so N(x) = N(x). (iv) Using part (iii) and the linearity of conjugation

hx, yi = N(x + y) − N(x) − N(y) = N(x + y) − N(x) − N(y) = N(x + y) − N(x) − N(y) = hx, yi.

We saw earlier in Proposition 3.2 that if x has an inverse x−1 then N(x−1) = N(x)−1. We will now show something stronger using the newly established properties of conjugation.

Proposition 3.11. In every composition algebra C and for every x ∈ C the following are equivalent

(i) x has an inverse. (ii) N(x) 6= 0. In that case, N(x)−1x is inverse to x. Proof. (i) ⇒ (ii) See Proposition 3.2.

(ii) ⇒ (i) Assume N(x) 6= 0. Let y = N(x)−1x, then

xy = x(N(x)−1x) = N(x)−1(xx) = N(x)−1N(x)e = e

The equality yx = e is similar which concludes that xy = yx = e.
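In the quaternions these statements can be tested directly. The sketch below (an illustration only) uses the quaternion product given by the n = 4 formulas of Appendix A; there the conjugate hx, eie − x is simply (x1, −x2, −x3, −x4) and the inverse is N(x)−1 times the conjugate.

    import numpy as np

    def qmul(x, y):
        # quaternion product, using the z-formulas for n = 4 from Appendix A
        x1, x2, x3, x4 = x
        y1, y2, y3, y4 = y
        return np.array([x1*y1 - x2*y2 - x3*y3 - x4*y4,
                         x1*y2 + x2*y1 + x3*y4 - x4*y3,
                         x1*y3 - x2*y4 + x3*y1 + x4*y2,
                         x1*y4 + x2*y3 - x3*y2 + x4*y1])

    N    = lambda x: float(np.dot(x, x))                     # the norm form
    conj = lambda x: np.array([x[0], -x[1], -x[2], -x[3]])   # <x, e>e - x
    e    = np.array([1.0, 0.0, 0.0, 0.0])

    rng = np.random.default_rng(1)
    x, y = rng.standard_normal(4), rng.standard_normal(4)

    assert np.allclose(qmul(x, conj(x)), N(x) * e)                 # x times its conjugate is N(x)e
    assert np.allclose(conj(qmul(x, y)), qmul(conj(y), conj(x)))   # conjugate of xy is conj(y) conj(x)
    assert np.isclose(N(qmul(x, y)), N(x) * N(y))                  # the norm is multiplicative
    x_inv = conj(x) / N(x)
    assert np.allclose(qmul(x, x_inv), e) and np.allclose(qmul(x_inv, x), e)
    print("quaternion checks passed")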

3.3 Associativity Properties

So far we have not established any facts about associativity in composition algebras. It turns out that they need not be associative but they satisfy some other, although weaker, properties. Unfortunately, these properties are not obvious and need some technical propositions first.

Proposition 3.12. Let C be a composition algebra and let x, y, z ∈ C, then

hxy, zi = hy, xzi (3.12) hxy, zi = hx, zyi (3.13) hxy, zi = hyz, xi (3.14)

Proof.

hy, xzi = hy, (hx, eie − x)zi = hy, hx, eizi − hy, xzi = hx, eihy, zi − hy, xzi = hxy, zi + hxz, yi − hy, xzi = hxy, zi

The only non-trivial step follows from (3.5);

hx, eihy, zi = hxy, zi + hxz, yi

To prove (3.13) and (3.14) we use part (ii) and (iv) of Proposition 3.10 and (3.12).

hxy, zi = hy, xzi = hy, zxi = hzy, xi = hx, zyi

proves (3.13) and

hxy, zi = hz, xyi = hz, xyi = hz, yxi = hyz, xi proves (3.14)

Remark Recall that the adjoint of a linear operator f on an inner product space V is the unique linear operator f ∗ on V such that hf(x), yi = hx, f ∗(y)i for all x, y ∈ V . Using that definition we can define the notion of adjoints in composition algebras, which gives us a nice way of interpreting (3.12) and (3.13);

∗ Lx = Lx ∗ Ry = Ry since hLx(y), zi = hxy, zi = hy, xzi = hy, Lx(z)i and similarily for Ry.

We can now prove a slight generalization to the identity xx = N(x)e. Proposition 3.13. If C is a composition algebra and x, y ∈ C then the following holds

x(xy) = N(x)y (3.15) (xy)y = N(y)x (3.16)

Proof. Using Proposition 2.3 we can show that if hx(xy), zi = hN(x)y, zi for all z ∈ C then (3.15) will hold. That is done using (3.12), (3.4) and part (iii) of Proposition 3.10 in the following way:

hx(xy), zi = hxy, xzi = N(x)hy, zi = N(x)hy, zi = hN(x)y, zi.

Now using part (ii) of Proposition 3.10 and conjugating both sides in (3.15) we get x(xy) = (xy)x = (yx)x and N(x)y = N(x)y. Combining the above expressions proves (3.16). We are now ready to state our first associativity related result. Corollary 3.14. For all x and y in a composition algebra C the following holds

x(xy) = (xx)y (3.17) x(yy) = (xy)y (3.18)

Proof. Follows immediately from (3.15), (3.16) and Proposition 3.10 (i). Corollary 3.15. Let C be a composition algebra. Then every element x ∈ C that satisfies N(x) 6= 0 has a unique inverse N(x)−1x. Proof. Let x ∈ C and N(x) 6= 0. Then x has an inverse y = N(x)−1x by Proposition 3.11. Assume that there is another element z such that xz = zx = e. Then

y = y(xz) = N(x)−1x(xz) = N(x)−1(xx)z = N(x)−1N(x)z = z by Corollary 3.14. Proposition 3.16. In every composition algebra C the Moufang identities hold for all x, y, a ∈ C.

(ax)(ya) = a((xy)a) (3.19) a(x(ay)) = (a(xa))y (3.20) x(a(ya)) = ((xa)y)a (3.21)

Proof. Once again we use Proposition 2.3 to show an identity. The reader should at this point be familiar with the tools used so we won’t write every identity and proposition used, which will increase readability. We will point out, however that (3.5) is used twice.

h(ax)(ya), zi = hya, (ax)zi = hya, (xa)zi = hy, xaiha, zi − hyz, (xa)ai = hxy, aiha, zi − hyz, N(a)xi = hxy, aiha, zi − N(a)hyz, xi = hxy, aiha, zi − N(a)hxy, zi = hxy, aiha, zi − N(a)h(xy)z, ei = hxy, aiha, zi − h(xy)z, aai = h(xy)a, azi = ha((xy)a), zi

This proves (3.19). To prove (3.20) we use a similar chain of equalities where

we use (3.19) once.

ha(x(ay)), zi = hx(ay), azi = hx, (az)(ay)i = hx, (az)(ay)i = hx, (ay)(za)i = hx, a((yz)a)i = hx, a((yz)a)i = hx, ((yz)a)ai = hx, (a(zy))ai = hxa, a(zy)i = ha(xa), zyi = h(a(xa))y, zi

To prove the last equality we start with the second and conjugate it. The left hand side becomes

a(x(ay)) = (x(ay))a = ((ya)x)a and the right hand side, with the help of (3.19) since (ax)a = (ax)(ea) = a((xe)a) = a(xa),

(a(xa))y = y(a(xa)) = y((ax)a) = y(a(xa)).

Substituting y, a and x for x0, a0 and y0 respectively (for readability) and com- bining the above equalities we get

x0(a0(y0a0)) = ((x0a0)y0)a0 which proves (3.21). Proposition 3.17. Let C be a composition algebra. Then C is alternative. Proof. To prove that C is alternative we have to prove the following equalities

x(yx) = (xy)x (3.22) (xx)y = x(xy) (3.23) (xy)y = x(yy). (3.24)

The first follows from (3.19)

(xy)x = (xy)(ex) = x((ye)x) = x(yx).

The second follows if we start with (3.17) and expand both sides, starting with the left

x(xy) = x((hx, eie − x)y) = x(hx, eiy − xy) = hx, eixy − x(xy).

The right hand side is

(xx)y = (x(hx, eie − x))y = (hx, eix − xx)y = hx, eixy − (xx)y.

Combining the two chains proves (3.23). Proving (3.24) is similar. Summarizing everything done so far we get the following theorem.

Theorem 3.18. Let C be a composition algebra. Then C is a quadratic, alternative algebra that satisfies the Moufang identities.

Even if this says something about the elements it still doesn’t say anything about the structure of the algebra. We are still missing one vital part; a construction method called doubling that will prove very useful.

3.4 Doubling

Lemma 3.19. Let C be a composition algebra. If D ( C is a finite-dimensional proper non-singular subspace of C, then there is an element a ∈ D⊥ \{0} such that N(a) 6= 0.

Proof. Assume that N(x) = 0 for all x ∈ D⊥. Theorem 2.6 tells us that C = D ⊕D⊥ where D⊥ is non-singular. We also know that D 6= C so D⊥ 6= {0} Let a be any non-zero element in D⊥. Then,

ha, xi = N(a + x) − N(a) − N(x) = 0 − 0 − 0 = 0, ∀x ∈ D⊥ and since D⊥ is non-singular, a = 0 which is a contradiction.

If D1 ( C is a finite-dimensional proper composition subalgebra, then the lemma makes sure that we can create the following subalgebra of C

D2 = D1 ⊕ D1a where D1 is a proper composition subalgebra of C and a is a non-zero element in D⊥ such that N(a) 6= 0.

Proposition 3.20 (The doubling construction). If D1 is a finite-dimensional proper composition subalgebra of a composition algebra C, a ∈ D⊥ \{0} and N(a) 6= 0 then the subspace

D2 = D1 + D1a (3.25) of C is a composition subalgebra of C with dim D2 = 2 dim D1 and the sum being a direct sum decomposition of D2. Multiplication in D2 then is

(x + ya)(z + wa) = (xz − N(a)wy) + (wx + yz)a. (3.26)

Proof. There is quite a lot of information in this proposition but the proof essentially boils down to the following two parts.

(i) By showing D1a ⊆ D1⊥ we will show that the sum is direct.

(ii) By showing (3.26) we will show that D2 is closed under multiplication and non-singular.

Proof of (i). We know that a ∈ D1⊥ so we need to show that xa ⊥ D1 for all x ∈ D1. Therefore, let x ∈ D1 and look at the inner product hxa, yi where y ∈ D1. Identity (3.12) gives us hxa, yi = ha, xyi. Since D1 is a composition algebra it must be closed under conjugation and multiplication so xy ∈ D1. So, since a ⊥ xy we must have xa ⊥ y. This concludes the proof that D1a ⊆ D1⊥. Since dim D1 < ∞ we can decompose C into D1 ⊕ D1⊥ which is a direct sum by Theorem 2.6. In particular, so is also D1 ⊕ D1a.

(x + ya)(z + wa) = xz + x(wa) + (ya)z + (ya)(wa).

If we can prove the following equations we will have shown (3.26).

x(wa) = (wx)a (3.27) (ya)z = (yz)a (3.28) (ya)(wa) = −N(a)wy (3.29)

Note that v = −v for all v ∈ C with v ⊥ e. In particular

va = −va (3.30) for all v ∈ D1. The following will prove (3.27);

hx(wa), vi = hwa, xvi = hwa, xvi = h−wa, vxi = −hwa, vxi = hwx, vai = hwx, (hv, eie − v)ai = hv, eihwx, ai − hwx, vai = hv, ei0 − hwx, vai = −h(wx)a, vi = h(wx)a, vi since it holds for arbitrary v. That −hwa, vxi = hwx, vai follows from (3.6). Similarily we prove (3.28);

h(ya)u, zi = hya, zui = −hyu, zai = −h(yu)a, zi = h(yu)a, zi.

Here we once again used (3.6) to get hya, zui = −hyu, zai.

To prove the final equality we observe that from (3.30) we get

va = va = −va = −av = av (3.31) for all v ∈ D1 so we have

(ya)(wa) = (ay)(wa) = a((yw)a) = a(a(yw)) = a(a(wy)) = (aa)(wy) = −(aa)(wy) = −N(a)(wy).

This concludes the proof of (3.26) which shows that D2 is closed under multiplication. The only thing left is to show that D2 is non-singular. Looking at the norm N on D2 we see that

N(x + ya) = hx, yai + N(x) + N(ya) = N(x) + N(y)N(a).

Looking at the inner product we similarily get for x, y, v, w ∈ D1

hx + ya, v + wai = hx, vi + hx, wai + hya, vi + hya, wai = hx, vi + hya, wai = hx, vi + hy, wiN(a)

Let x + ya ∈ D2 and assume that hx + ya, v + wai = 0 for all v, w ∈ D1. Then hx, vi + hy, wiN(a) = 0 for all v, w ∈ D1. In particular we have w = 0 which implies that hx, vi = 0 for all v ∈ D1. Since D1 is non-singular we have x = 0. With a similar argument we also have hy, wiN(a) = 0 for all w ∈ D1 which (since N(a) 6= 0) implies that y = 0. Together these two cases gives us x + ya = 0 so D2 is non-singular.

To summarize the proof so far we know that D2 = D1 ⊕ D1a is a direct sum decomposition, that D2 is a non-singular subspace of C and that it is closed under multiplication. Hence D2 is a composition subalgebra. We now conclude the proof by showing that dim D2 = 2 dim D1. Let Ra : D1 −→ D1a be the map x 7→ xa. It is a linear map since multiplication is bilinear. Moreover a has an inverse a−1 since N(a) 6= 0, so Ra has inverse Ra−1 (right multiplication by a−1), which proves that Ra is bijective and hence an isomorphism. It then follows that dim D1a = dim D1. From the direct sum we get dim D2 = dim D1 + dim D1a = 2 dim D1 which completes the proof.

Proposition 3.20 is a very powerful tool. It gives us a constructive method of creating new composition subalgebras from already existing ones. However, it does not say anything about the properties of the new ones except that they have double dimension.

Proposition 3.21. Let C be a composition algebra and D a finite-dimensional proper composition subalgebra. Then D is associative.

Proof. Let a ∈ D⊥ \ {0} with N(a) 6= 0. The existence of such an a is guaranteed by Lemma 3.19. Using the equality N((x + ya)(z + wa)) = N(x + ya)N(z + wa) for x, y, z, w ∈ D we have

N((x + ya)(z + wa)) = N((xz − N(a)wy) + (wx + yz)a) = N(xz − N(a)wy) + N(a)N(wx + yz) = hxz, −N(a)wyi + N(xz) + N(−N(a)wy) + N(a)(hwx, yzi + N(wx) + N(yz)) and

N(x + ya)N(z + wa) = (N(x) + N(y)N(a))(N(z) + N(w)N(a)) = N(x)N(z) + N(x)N(w)N(a) + N(y)N(z)N(a) + N(y)N(w)N(a)N(a) = N(xz) + N(wx)N(a) + N(yz)N(a) + N(a)2N(wy) by Proposition 3.20. Combining the above we get

hwx, yzi = hxz, wyi.

Since we, from (3.13) and Proposition 3.10, have

hwx, yzi = hyz, wxi = h(yz)x, wi = hx(zy), wi and

hxz, wyi = h(xz)y, wi we then have hx(zy), wi = h(xz)y, wi. If we do a substitution for readability we see that

hx0(y0z0), w0i = h(x0y0)z0, w0i holds for all x0, y0, z0, w0 ∈ D so x0(y0z0) = (x0y0)z0 for all x0, y0, z0 ∈ D which shows that D is associative.

Proposition 3.22. Let C be a composition algebra and D1 a finite-dimensional ⊥ proper composition subalgebra. Also let a ∈ D1 with N(a) 6= 0 and let D2 = D1 ⊕ D1a. Then

D2 is associative ⇐⇒ D1 is associative and commutative.

Proof. D2 is a composition subalgebra by Proposition 3.20 with multiplication as in (3.26).

“⇒” If D2 is associative then so is every subalgebra of D2. In particular, D1 is associative. Let x, y ∈ D1. Then (xy)a = x(ya) and from (3.27) x(ya) = (yx)a. Combining these we get

xy = xyaa−1 = ((xy)a)a−1 = ((yx)a)a−1 = yxaa−1 = yx.

22 “⇐” Assume that D1 is both commutative and associative. Then, for all 3 x1, x2, x3, y1, y2, y3 ∈ D1 we get  (z1z2)z3 = (x1x2)x3 − N(a) (y2y1)x3 + y3(y2x1) + y3(y1x1) +  y3(x1x2) − N(a)y3(y2y1) + (y2x1)x3 + (y1x2)x3 a  z1(z2z3) = x1(x2x3) − N(a) x1(y3y2) + (y3x2)y1 + (y2x3)y1 +  (y3x2)x1 + (y2x3)x1 + y1(x3x2) − N(a)y1(y2y3) a

where zi = xi + yia, i = 1, 2, 3.

Since D1 is associative and commutative we have that x1(x2x3) = (x1x2)x3 and that terms containing a in the above are equal. To show that (z1z2)z3 = z1(z2z3) we are left with proving that

N(a)(y2y1)x3 = N(a)x1(y3y2)

N(a)y3(y2x1) = N(a)y3(y1x1)

N(a)(y3x2)y1 = N(a)(y2x3)y1.

Using (3.28) we see that for x, y, z ∈ D1

N(a)xyz = aaxyz = az((ya)x) = az((yx)a) = N(a)xyz

and similarily for y and z. So

N(a)xyz = N(a)xyz = N(a)xyz = N(a)xyz

which proves the three equalities above since D1 is both commutative and associative.
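The doubling process can also be simulated numerically. The sketch below is an illustration only: it uses one standard sign convention of the Cayley–Dickson doubling (an analogue of (3.26) with N(a) = 1 and conjugation (x, y) 7→ (x̄, −y)); the exact convention is an assumption made for this example, not taken from the thesis. Starting from R it produces algebras of dimensions 2, 4, 8 and 16 and reports, for random elements, whether the norm is multiplicative and whether commutativity, associativity and the alternative law hold.

    import numpy as np

    def conj(x):
        # Cayley-Dickson conjugate: (a, b)bar = (abar, -b); on R it is the identity
        if len(x) == 1:
            return x.copy()
        n = len(x) // 2
        return np.concatenate([conj(x[:n]), -x[n:]])

    def mul(x, y):
        # Cayley-Dickson product (a, b)(c, d) = (ac - conj(d) b, d a + b conj(c))
        if len(x) == 1:
            return x * y
        n = len(x) // 2
        a, b, c, d = x[:n], x[n:], y[:n], y[n:]
        return np.concatenate([mul(a, c) - mul(conj(d), b),
                               mul(d, a) + mul(b, conj(c))])

    N = lambda x: float(np.dot(x, x))

    rng = np.random.default_rng(2)
    for dim in (1, 2, 4, 8, 16):
        x, y, z = (rng.standard_normal(dim) for _ in range(3))
        norm_ok  = np.isclose(N(mul(x, y)), N(x) * N(y))
        comm_ok  = np.allclose(mul(x, y), mul(y, x))
        assoc_ok = np.allclose(mul(mul(x, y), z), mul(x, mul(y, z)))
        alt_ok   = np.allclose(mul(x, mul(x, y)), mul(mul(x, x), y))
        print(dim, norm_ok, comm_ok, assoc_ok, alt_ok)

Over R this reproduces the pattern of the table in Section 4 (Figure 1): the norm stays multiplicative up to dimension 8, commutativity is lost at dimension 4, associativity at dimension 8, and at dimension 16 both the alternative law and the norm identity (3.1) fail, which is exactly why the doubling chain must stop at 8.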

4 The (1, 2, 4, 8)-Theorem

With the doubling method to back us up we are now ready to take on the big theorem of this paper.

Theorem 4.1 (Theorem 3.18 revisited). Let C be a composition algebra over a field k. Then C is a quadratic, alternative algebra that satisfies the Moufang identities. Moreover, it must also satisfy one of the following:

dim C   k              commutative   associative   alternative
1       char(k) 6= 2   Yes           Yes           Yes
2       Any            Yes           Yes           Yes
4       Any            No            Yes           Yes
8       Any            No            No            Yes

Figure 1: Characteristics of composition algebras

3We omit most of the multiplications for readability since they are not the most important part.

Proof. Let C be a composition algebra. Then C is quadratic by Corollary 3.7, alternative by Proposition 3.17 and it satisfies the Moufang identities by Proposition 3.16.

1. Let’s start with any composition algebra C. Then, if char k 6= 2, we can find a one-dimensional composition subalgebra D1 = ke. It is non-singular since hλe, µei = λµhe, ei = 2λµ 6= 0 if λ, µ 6= 0. However if char k = 2 then hλe, µei = 0 for all λ, µ ∈ k so ke is singular. Hence if char k = 2 then dim C > 1.

2. If dim C > 1 then, in the case of char k 6= 2, Lemma 3.19 and Proposition 3.20 give us a two-dimensional subalgebra D2 = D1 ⊕ D1a for some element a ∈ C. Since D1 is both associative and commutative, D2 must be associative by Proposition 3.22. A simple computation shows that it is also commutative. If char k = 2 we instead use the following lemma.

Lemma 4.2. If C is a composition k-algebra with char k = 2, then there is a ∈ C such that he, ai 6= 0. Also, D = ke ⊕ ka is a two-dimensional associative and commutative composition subalgebra.

Proof of Lemma 4.2. Assume that he, ai = 0 for all a; then e = 0 since h , i is non-degenerate on C. So he, ai 6= 0 for some a ∈ C. Note that a ∉ ke (otherwise he, ai = he, λei = 0) so D must be two-dimensional. It is non-singular since if hλe + µa, xi = 0 for all x ∈ D then in particular hλe + µa, ei = 0 and hλe + µa, ai = 0. These in turn give µ = 0 and λ = 0 respectively since he, ei = 0 and ha, ai = 0 so λe + µa = 0. It is closed under multiplication as the following computation shows

xy = (αe + βa)(γe + δa) = αγe + (αδ + βγ)a + βδa2 = αγe + (αδ + βγ)a + βδ(ha, eia − N(a)e) = (αγ − βδN(a))e + (αδ + βγ + βδha, ei)a.

That it is associative and commutative follows from two similar computations.

By this lemma we create, in the case char k = 2, D2 = ke ⊕ ka0 for some a0 ∈ C with he, a0i 6= 0. So now we have, no matter what the characteristic, a two-dimensional composition subalgebra D2 ⊆ C.

3. If we assume that dim C > 2 then we can perform the doubling another time and get a four-dimensional subalgebra D3 = D2 ⊕ D2b. It is also associative by Prop. 3.22 but not commutative as the following shows. Let y = a or y = a0 depending on the characteristic. If y = a then by the choice of a we have ha, ei = 0, so y = −y 6= y. If y = a0 then assume that y = y. We then have y = hy, eie − y which in turn yields 2y = hy, eie. Since char k = 2 this means hy, eie = 0, so hy, ei = 0 which contradicts the choice of a0. In both cases we have y 6= y. We will now show that D2b is non-singular. Assume that for some xb ∈ D2b we have hxb, zbi = 0 for all zb ∈ D2b. Then hxb, zbi = hx, ziN(b) = 0 which implies that hx, zi = 0 for all z ∈ D2. Since D2 is non-singular we must have x = 0 so xb = 0 and D2b is non-singular. Using the same method as in the proof of Lemma 3.19 we find

an element x ∈ D2b such that N(x) 6= 0. We’re now finally able to show that D3 is not commutative. Since x ∈ D2b we also have x ∈ D2⊥ so xy = −xy by (3.30). But xy = yx = −yx since x ⊥ e. Combining the equalities and remembering that we have y 6= y we get

xy = yx 6= yx.

4. If we assume that dim C > 4 we can perform the doubling one final time to obtain an eight-dimensional subalgebra D4 = D3 ⊕ D3c. It is not commutative since D3 ⊂ D4. It is not associative, by Proposition 3.22. Moreover, it is not a proper composition subalgebra of C by Proposition 3.21. So C = D4 and the proof is complete.
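Lemma 4.2 can be made concrete in the characteristic-two example Z2 ⊕ Z2 of Section 3.1: with a = (0, 1) one has he, ai = 1 6= 0 and N(a) = 0, and D = ke ⊕ ka is the whole algebra. The following Python sketch (an illustration only) checks the multiplication formula from the proof of the lemma against the componentwise product.

    from itertools import product

    e, a = (1, 1), (0, 1)
    add  = lambda u, v: ((u[0]+v[0]) % 2, (u[1]+v[1]) % 2)
    scal = lambda s, u: ((s*u[0]) % 2, (s*u[1]) % 2)
    mul  = lambda u, v: ((u[0]*v[0]) % 2, (u[1]*v[1]) % 2)
    N    = lambda u: (u[0]*u[1]) % 2
    bil  = lambda u, v: (N(add(u, v)) - N(u) - N(v)) % 2

    assert bil(e, a) == 1 and N(a) == 0          # <e, a> != 0 as in Lemma 4.2

    for alpha, beta, gamma, delta in product((0, 1), repeat=4):
        x = add(scal(alpha, e), scal(beta, a))   # x = alpha*e + beta*a
        y = add(scal(gamma, e), scal(delta, a))  # y = gamma*e + delta*a
        # formula from the proof: xy = (ag - bd N(a)) e + (ad + bg + bd <a,e>) a
        s = (alpha*gamma - beta*delta*N(a)) % 2
        t = (alpha*delta + beta*gamma + beta*delta*bil(a, e)) % 2
        assert mul(x, y) == add(scal(s, e), scal(t, a))
    print("Lemma 4.2 formula verified in Z2 + Z2")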

4.1 Proving Hurwitz Theorem

With all the theory developed in the framework of composition algebras it is now trivial to prove Hurwitz Theorem for all fields with characteristic not equal to two4.

Theorem 4.3 (Hurwitz Theorem). Let k be a field with char k 6= 2. The only values of n ∈ N \ {0} for which (1.1) holds, with xi, yj ∈ k and each zi = zi(x, y) bilinear, are n ∈ {1, 2, 4, 8}.

Proof. Assume that (1.1) holds for some bilinear map z = z(x, y). Let N(x) = x1² + ··· + xn². Then N is clearly a non-degenerate quadratic form. Moreover, N(x)N(y) = N(z(x, y)) by (1.1). The only thing missing before (kn, N) is a composition algebra is the existence of an identity element e.

Lemma 4.4. Let N : A −→ k be a non-degenerate quadratic form and A × A −→ A, (x, y) 7→ xy a bilinear map such that

N(xy) = N(x)N(y) ∀x, y ∈ A.

Then there is a map ∗ : A × A −→ A, (x, y) 7→ x ∗ y such that

N(x ∗ y) = N(x)N(y) ∀x, y ∈ A and there is an element e ∈ A such that

e ∗ x = x ∗ e = x ∀x ∈ A.

Proof of Lemma 4.4. Let v ∈ A such that N(v) 6= 0. It is guaranteed to exist since N is non-degenerate. Let u = N(v)−1v2. Then N(u) = 1 and N(ux) = N(xu) = N(x) for all x ∈ A. We also have hux, uyi = N(ux + uy) − N(ux) − N(uy) = N(x + y) − N(x) − N(y) = hx, yi so

hLu(x),Lu(y)i = hx, yi (4.1)

4The case char k = 2 is uninteresting since (x1 + ··· + xn)² = x1² + ··· + xn² in that case.

and similarly for Ru. Let Lu∗ be the adjoint5 of Lu. Then for all x, y ∈ A

hx, yi = hLu(x), Lu(y)i = hx, Lu∗(Lu(y))i.

This means that y = Lu∗(Lu(y)) for all y ∈ A. So Lu∗ = Lu−1. Note that N(Lu−1(x)) = N(uLu−1(x)) = N(x). Once again, a similar argument holds for Ru. Now define the map ∗ : A × A −→ A as

x ∗ y = Ru−1(x)Lu−1(y).

Then N(x ∗ y) = N(Ru−1(x))N(Lu−1(y)) = N(x)N(y). We also have

u² ∗ x = Ru−1(u²)Lu−1(x) = uLu−1(x) = x
x ∗ u² = Ru−1(x)Lu−1(u²) = Ru−1(x)u = x

which means that u² is an identity element relative to ∗.

Lemma 4.4 gives us a way to construct a new bilinear map z0 such that (kn, N) is a composition algebra with the bilinear multiplication being z0. Theorem 4.1 then implies that n must be 1, 2, 4 or 8 and we are done.

5We’ll leave it to the reader to prove that the adjoint is well-defined and unique on a finite-dimensional vector space together with a non-degenerate bilinear form.

References

[1] H.-D. Ebbinghaus. Numbers, chapter 8. Springer Verlag, 1991.

[2] F. G. Frobenius. Über lineare Substitutionen und bilineare Formen. Journal für die reine und angewandte Mathematik, 1878.

[3] N. Jacobson. Composition algebras and their automorphisms. Rendiconti del Circolo Matematico di Palermo, 1958.

[4] T. A. Springer. Oktaven, Jordan-Algebren und Ausnahmegruppen. Math. Institut der Universität Göttingen, 1963.

[5] T. A. Springer and F. D. Veldkamp. Octonions, Jordan Algebras, and Exceptional Groups. Springer Verlag, 2000.

[6] M. Zorn. Theorie der alternativen Ringe. Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg, 1931.

Appendix A Hurwitz Problem

The identity for n = 4 is

(x1² + x2² + x3² + x4²)(y1² + y2² + y3² + y4²) = z1² + z2² + z3² + z4²   (A.1)

where

z1 = x1y1 − x2y2 − x3y3 − x4y4

z2 = x1y2 + x2y1 + x3y4 − x4y3

z3 = x1y3 − x2y4 + x3y1 + x4y2

z4 = x1y4 + x2y3 − x3y2 + x4y1.
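The identity (A.1) is exactly the rule for multiplying two quaternions, and it is easily verified by machine; the following Python sketch (an illustration only) checks it on random integers.

    import random

    def check_four_square(trials=200):
        for _ in range(trials):
            x = [random.randint(-50, 50) for _ in range(4)]
            y = [random.randint(-50, 50) for _ in range(4)]
            # the z-formulas above, written with 0-based indices
            z = [x[0]*y[0] - x[1]*y[1] - x[2]*y[2] - x[3]*y[3],
                 x[0]*y[1] + x[1]*y[0] + x[2]*y[3] - x[3]*y[2],
                 x[0]*y[2] - x[1]*y[3] + x[2]*y[0] + x[3]*y[1],
                 x[0]*y[3] + x[1]*y[2] - x[2]*y[1] + x[3]*y[0]]
            if sum(v*v for v in x) * sum(v*v for v in y) != sum(v*v for v in z):
                return False
        return True

    print(check_four_square())   # True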

The identity for n = 8 is

(x1² + ··· + x8²)(y1² + ··· + y8²) = z1² + ··· + z8²   (A.2)

where

z1 = x1y1 − x2y2 − x3y3 − x4y4 − x5y5 − x6y6 − x7y7 − x8y8

z2 = x1y2 + x2y1 + x3y4 − x4y3 + x5y6 + x6y5 + x7y8 − x8y7

z3 = x1y3 − x2y4 + x3y1 + x4y2 + x5y7 − x6y8 + x7y5 + x8y6

z4 = x1y4 + x2y3 − x3y2 + x4y1 + x5y8 + x6y7 − x7y6 + x8y5

z5 = x1y5 − x2y6 − x3y7 − x4y8 + x5y1 + x6y2 + x7y3 + x8y4

z6 = x1y6 + x2y5 + x3y8 − x4y7 − x5y2 + x6y1 − x7y4 + x8y3

z7 = x1y7 − x2y8 + x3y5 + x4y6 − x5y3 + x6y4 + x7y1 − x8y2

z8 = x1y8 + x2y7 − x3y6 + x4y5 − x5y4 − x6y3 + x7y2 + x8y1.
