
U.U.D.M. Project Report 2020:17

K-Theory and An-Spaces

William Hedlund

Master's thesis in mathematics, 30 credits. Supervisor: Thomas Kragh. Examiner: Denis Gaidashev. June 2020

Department of Mathematics, Uppsala University

June 18, 2020

Abstract

We define the reduced and unreduced K-theory rings and prove the Bott periodicity theorem. With this we construct the exact six-term loop in K-theory induced by a pair (X, A) of spaces. This loop is applied along with Adams operations, which we construct, to prove Adams' theorem on the Hopf invariant, implying that the only spheres with H-space structure are S^0, S^1, S^3, and S^7. We define A_n-spaces and the Stasheff polytopes, and use singular chains to construct an A_n-algebra structure on the singular chain complex of such a space.

Sammanfattning

Vi definierar de reducerade och oreducerade K-teoriringarna och bevisar Botts periodicitetssats. Därmed konstruerar vi den exakta sextermiga kretsen i K-teori som induceras av ett par (X, A) av rum. Denna krets används tillsammans med Adamsoperationer, som vi konstruerar, för att bevisa Adams sats om Hopfinvarianten, som medför att de enda sfärer som bär H-rumsstruktur är S^0, S^1, S^3, och S^7. Vi definierar A_n-rum och Stasheffpolytoperna, och använder singulära kedjor för att konstruera en A_n-algebrastruktur på det singulära kedjekomplexet av ett sådant rum.

1 Introduction

The notion of an H-space generalizes that of a topological group, by dropping the requirements of inverses and associativity, according to this definition:

Definition 1.1. An H-space is a topological space X with a map µ: X × X → X, which we call the multiplication, such that µ(x, e) = µ(e, x) = x for every x ∈ X, for some element e ∈ X which we call the identity of the multiplication.

We can examine how invariants in algebraic topology can tell us which spaces can be equipped with such a multiplication. It is a famous result that the only spheres which admit H-space structure are S^0, S^1, S^3, and S^7. We reproduce the proof of this statement using K-theory, in proposition 2.13 and theorem 2.5. The main part of this thesis is dedicated to constructing enough machinery of K-theory for this proof. In the last section we introduce A_n-spaces, which interpolate, for n from 2 to ∞, between H-spaces and spaces with strictly associative multiplication.

For the section on K-theory our reference [1] is Hatcher's Vector Bundles and K-Theory, whose exposition we generally follow throughout. For the section on A_n-spaces our references are the two parts of Stasheff's Homotopy Associativity of H-spaces, from which we take our notation and quote some facts about the Stasheff polytopes.

2 K-Theory

K-theory is an algebraic object associated to a topological space, which is constructed from vector bundles over that space. A map f : X → Y pulls back vector bundles over Y to ones over X in a way which makes K-theory a contravariant functor. We will be able to understand the K-theory of spheres quite explicitly, and this will enable us to answer which spheres can be H-spaces. Since K-theory is constructed from vector bundles, we define those and state some useful properties.

2.1 Some Properties of Vector Bundles

Definition 2.1. A vector bundle over a space X is a space E and a surjection p: E → X, along with a complex vector space structure on p^{-1}(x) for each x ∈ X, such that there is an open cover {U_α} of X with a homeomorphism h_α: p^{-1}(U_α) → U_α × C^n for each α, with n possibly depending on α, restricting to a vector space isomorphism p^{-1}(x) → {x} × C^n for each x ∈ U_α.

X is called the base space of the bundle, and E the total space. The space p^{-1}(x) is called the fibre over x, and the maps h_α are called local trivializations. All the fibres over each connected component must have the same dimension, but if the base space is disconnected our definition allows the fibre dimensions to vary over the components. Often the vector bundle is denoted just by its total space, i.e. we talk about the vector bundle E in the notation above.

The notions of direct sum and tensor product of vector spaces carry over to bundles. For vector bundles E_1 and E_2 the direct sum E_1 ⊕ E_2 and tensor product E_1 ⊗ E_2 have fibres which are the direct sums/tensor products of the fibres of E_1 and E_2. The same distributivity holds: E_1 ⊗ (E_2 ⊕ E_3) = E_1 ⊗ E_2 ⊕ E_1 ⊗ E_3.

Given a bundle E → Y and a map f: X → Y, there is a pullback bundle f^*E, whose fibre over x ∈ X is the fibre of E over f(x). This pullback respects composition of maps, and the operations on vector bundles: (f ◦ g)^*E = g^*(f^*E), f^*(E_1 ⊕ E_2) = f^*E_1 ⊕ f^*E_2, f^*(E_1 ⊗ E_2) = f^*E_1 ⊗ f^*E_2.

An isomorphism between vector bundles E_1 and E_2 over the same base space is a homeomorphism E_1 → E_2 which maps p_1^{-1}(x) to p_2^{-1}(x) by a vector space isomorphism for each x in the base space. E_1 ≅ E_2 means that the bundles are isomorphic. We will extend this notion to bundles over homeomorphic spaces: if f: X → Y is a homeomorphism, then an isomorphism between E_1 → X and E_2 → Y (over the map f) is a homeomorphism between total spaces which maps the fibre over x ∈ X to the fibre over f(x) by an isomorphism. Equivalently, E_1 → X and E_2 → Y are considered isomorphic if E_1 ≅ f^*E_2.

If two maps f and g are homotopic, then they pull back bundles in the same way: f^*E ≅ g^*E for every bundle E. A trivial bundle over X is a bundle isomorphic to the product X × C^n. If the base space X is compact and Hausdorff we have the following fact: for every bundle E → X there is a bundle E′ → X such that E ⊕ E′ is a trivial bundle. We shall always work with compact Hausdorff spaces, in large part in order to apply this fact. The pullback of a trivial bundle is trivial. If the base space X is contractible, every bundle over it is trivial.
This is because the identity of X can be factored up to homotopy through the one-point space, over which every bundle is trivial (inspecting the definition reveals that a bundle over a point is just a vector space collapsed to that point).

For a subspace A ⊂ X, we can consider the restriction of a bundle E to A, E|_A = ι_A^*E. For the sphere S^n, the two hemispheres D^n_+ and D^n_-, which intersect in the equator S^{n-1}, are contractible spaces, so the restrictions of bundles to either one are trivial. On the intersection D^n_+ ∩ D^n_- = S^{n-1} we then have the composite

S^{n-1} × C^k → p^{-1}(S^{n-1}) → S^{n-1} × C^k

of h_+^{-1} followed by h_-. At each x ∈ S^{n-1} this map is an isomorphism of C^k, and we can consider it as giving a map S^{n-1} → GL_k(C). This map is called the clutching function, or transition function, of the bundle. Conversely, one can construct a bundle over S^n given a clutching function on the equator, by gluing together two copies of D^n × C^k along the two ∂D^n × C^k in the way given by the clutching function.

We have the following fact on clutching functions: two isomorphic bundles give homotopic clutching functions, and conversely, two homotopic clutching functions give isomorphic bundles. So we can classify bundles over spheres by homotopy classes of functions S^{n-1} → GL_k(C).
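For line bundles over S^2 this classification is very concrete: a clutching function is a map S^1 → GL_1(C) = C^*, and its homotopy class is just a winding number. The following numerical illustration is our own addition, not part of the text:

```python
import numpy as np

def winding_number(g, samples=1000):
    """Winding number of a nowhere-zero function g on the unit circle,
    i.e. the homotopy class of a clutching function S^1 -> GL_1(C) = C*."""
    t = np.linspace(0, 2 * np.pi, samples, endpoint=False)
    values = g(np.exp(1j * t))
    # accumulate the total change of argument in small steps around the circle
    dargs = np.angle(values[1:] / values[:-1])
    dargs = np.append(dargs, np.angle(values[0] / values[-1]))
    return round(dargs.sum() / (2 * np.pi))

assert winding_number(lambda z: z) == 1               # clutching function of H
assert winding_number(lambda z: z**3) == 3
assert winding_number(lambda z: np.ones_like(z)) == 0  # trivial bundle
```

Two line bundles over S^2 are isomorphic exactly when these integers agree, which is the n = 2, k = 1 case of the classification above.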

2.2 The K-Rings

K-theory is a functor which maps a topological space to a set of equivalence classes of complex vector bundles over it, given a group structure via direct sum of vector bundles. Note that we are considering complex K-theory, so “vector bundle” always means “complex vector bundle”. Also it is advantageous to assume only that the dimension of a bundle is locally constant, so the fibre dimensions can vary if the base space is disconnected. We assume the base spaces to be compact Hausdorff, to have all technical properties available. We let ε^n → X denote the trivial vector bundle of dimension n.

Definition 2.2. Two vector bundles E_1 and E_2 are stably isomorphic, denoted E_1 ≈_s E_2, if E_1 ⊕ ε^n ≅ E_2 ⊕ ε^n for some number n.

This is an equivalence relation; the only nontrivial thing is to check transitivity: assuming E_1 ⊕ ε^n ≅ E_2 ⊕ ε^n and E_2 ⊕ ε^m ≅ E_3 ⊕ ε^m, we have E_1 ⊕ ε^{n+m} ≅ E_2 ⊕ ε^{n+m} ≅ E_3 ⊕ ε^{m+n}.

Our aim is to make a group from these equivalence classes, with direct sum as group operation. Since stable isomorphism requires dimensions to agree, the identity would clearly have to be the trivial bundle of dimension zero, but then we cannot have inverses since adding a bundle cannot decrease the dimension. Though there are no inverses, there is a cancellation property which we can use to construct a group of formal differences of vector bundles: define E_1 − E_1′ to be equivalent to E_2 − E_2′ if E_1 ⊕ E_2′ ≈_s E_2 ⊕ E_1′. We only need to show that this relation is indeed transitive. Suppose that E_1 ⊕ E_2′ ≈_s E_2 ⊕ E_1′ and E_2 ⊕ E_3′ ≈_s E_3 ⊕ E_2′. Then E_1 ⊕ E_3′ ⊕ E_2′ ≈_s E_2 ⊕ E_3′ ⊕ E_1′ ≈_s E_3 ⊕ E_2′ ⊕ E_1′, so E_1 ⊕ E_3′ ⊕ E_2′ ⊕ ε^n ≅ E_3 ⊕ E_2′ ⊕ E_1′ ⊕ ε^n. Since we are working over a compact Hausdorff space there is a bundle E″ such that E_2′ ⊕ E″ ≅ ε^k. Adding this bundle to either side we get E_1 ⊕ E_3′ ⊕ ε^{n+k} ≅ E_3 ⊕ E_1′ ⊕ ε^{n+k}, so we have E_1 ⊕ E_3′ ≈_s E_3 ⊕ E_1′, proving transitivity.

Proposition 2.1. The set of equivalence classes under this relation forms an Abelian group, with addition (E_1 − E_1′) + (E_2 − E_2′) = (E_1 ⊕ E_2) − (E_1′ ⊕ E_2′).

Proof. Let us first prove that the given operation is well-defined. If (E_2 − E_2′) = (E_3 − E_3′), i.e. E_2 ⊕ E_3′ ≈_s E_3 ⊕ E_2′, we have E_1 ⊕ E_2 ⊕ E_1′ ⊕ E_3′ ≈_s E_1 ⊕ E_3 ⊕ E_1′ ⊕ E_2′, so that (E_1 ⊕ E_2) − (E_1′ ⊕ E_2′) = (E_1 ⊕ E_3) − (E_1′ ⊕ E_3′). For any bundle E, the class of E − E gives a zero element for this operation. Thus an inverse to E − E′ is given by E′ − E.

The group here defined, for a base space X, is called the unreduced K-theory, or simply the K-theory, of X, and is denoted K(X).

Note that any element of K(X) has a representative of the form E − ε^n, since with a suitable E″ satisfying E′ ⊕ E″ ≅ ε^n we have E − E′ = (E − E′) + (E″ − E″) = (E ⊕ E″) − ε^n. In K(X) we take E written on its own to mean E − ε^0, which is consistent with E − E′ = (E − ε^0) + (ε^0 − E′) = E ⊕ ε^0 − E′ ⊕ ε^0 = E − E′, so there is no confusion. The tensor product of vector bundles gives a product in K(X), defined by the formula

(E_1 − E_1′)(E_2 − E_2′) = E_1 ⊗ E_2 − E_1 ⊗ E_2′ − E_1′ ⊗ E_2 + E_1′ ⊗ E_2′.
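Over a one-point base space a bundle is determined by its dimension, so this formal-difference (Grothendieck) construction can be carried out completely explicitly: K(point) is built from the monoid of dimensions exactly as above. A minimal sketch in Python (the function names are ours, for illustration only):

```python
# A formal difference E - E' over a point is a pair (n, m) of fibre dimensions,
# and (n, m) ~ (n2, m2) iff n + m2 == n2 + m  (cancellation over a point).

def equivalent(d1, d2):
    """(n, m) ~ (n2, m2) iff n + m2 == n2 + m."""
    (n, m), (n2, m2) = d1, d2
    return n + m2 == n2 + m

def add(d1, d2):
    """(E1 - E1') + (E2 - E2') = (E1 + E2) - (E1' + E2')."""
    return (d1[0] + d2[0], d1[1] + d2[1])

def mul(d1, d2):
    """(E1 - E1')(E2 - E2') = E1 E2 - E1 E2' - E1' E2 + E1' E2'."""
    (n, m), (p, q) = d1, d2
    return (n * p + m * q, n * q + m * p)

def virtual_dim(d):
    """The isomorphism K(point) -> Z: the difference of the two dimensions."""
    return d[0] - d[1]

assert equivalent((3, 1), (5, 3))                 # both represent the class "2"
assert equivalent(add((3, 1), (1, 3)), (0, 0))    # (m, n) is the inverse of (n, m)
assert virtual_dim(mul((2, 1), (3, 2))) == 1      # multiplicative on virtual dims
```

The last assertion reflects that virtual_dim is a ring isomorphism K(point) → Z, which is the group the text constructs when X is a single point.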

Proposition 2.2. This operation is well-defined and makes K(X) a commutative ring with identity ε^1.

Proof. Suppose E_2 − E_2′ = E_3 − E_3′, so that E_2 ⊕ E_3′ ≈_s E_3 ⊕ E_2′. Then four of the terms of (E_1 − E_1′)(E_2 − E_2′) − (E_1 − E_1′)(E_3 − E_3′) are

E_1 ⊗ E_2 − E_1 ⊗ E_2′ − E_1 ⊗ E_3 + E_1 ⊗ E_3′ = E_1 ⊗ (E_2 ⊕ E_3′) − E_1 ⊗ (E_2′ ⊕ E_3) = 0,

and similarly for the terms with E_1′, so the result is independent of the choice of representative; since this operation, like the tensor product, is commutative, the same holds in the other factor. Associativity and distributivity of this multiplication follow from those properties of the tensor product. ε^1 is the multiplicative identity since ε^1 ⊗ E ≅ E, and ε^0 ⊗ E ≅ ε^0. We may notate ε^n simply as n, and note that nE is then indeed the sum of n copies of E.

A map f: X → Y induces a pullback f^*: K(Y) → K(X), by taking (E − E′) to (f^*E − f^*E′). Pullbacks respect direct sum, and so respect stable isomorphism, so this is well-defined. We also have (f ◦ g)^* = g^*f^* in K-theory, since the same holds on vector bundles. Since the pullback respects direct sums and tensor products, it is in fact a ring homomorphism. These facts together say that K-theory is a contravariant functor from compact Hausdorff spaces to rings.

If we consider in particular the inclusion of a point x_0 of X, we get a map K(X) → K(x_0) = {ε^n − ε^m} ≅ Z. Since every bundle over a point is trivial, this map just picks out the difference of the bundles' dimensions over the point x_0.

Definition 2.3. The kernel of this restriction homomorphism is an ideal of K(X), denoted by K̃(X) and called the reduced K-theory of X.

Proposition 2.3. There is a homomorphism K(X) → K̃(X), giving a splitting of abelian groups K(X) ≅ K̃(X) ⊕ Z.

Proof. Let r: X → x_0 be the retraction to x_0 and ι: x_0 → X the inclusion. The map a ↦ a − r^*(ι^*a) is a group endomorphism of K(X), and if a ∈ K̃(X) we have ι^*a = 0, so a ↦ a. Thus this map splits the inclusion of the kernel K̃(X) of the restriction map, so we have a splitting as indicated.

Explicitly, K̃(X) consists of those differences E − E′ where both terms have the same dimension over x_0. Now given x_0 ∈ X and y_0 ∈ Y, a map f: X → Y takes K̃(Y) into K̃(X) under pullback iff the connected component of x_0 is mapped into that of y_0. In particular, if we restrict to the category of pointed compact Hausdorff spaces and pointed maps, the reduced K-theory is also a contravariant functor to rings.

2.3 The External Product

Definition 2.4. The external product µ: K(X) ⊗ K(Y) → K(X × Y) is defined by

µ(a ⊗ b) = p_X^*(a) p_Y^*(b),     (2.1)

p denoting projection onto either factor. We also use the notation a ∗ b for the external product.

Proposition 2.4. The external product is a ring homomorphism.

Proof. Using functoriality and commutativity,

µ((a ⊗ b)(c ⊗ d)) = µ(ac ⊗ bd) = p_X^*(ac) p_Y^*(bd) = p_X^*(a) p_X^*(c) p_Y^*(b) p_Y^*(d)
= p_X^*(a) p_Y^*(b) p_X^*(c) p_Y^*(d) = µ(a ⊗ b) µ(c ⊗ d).

We now consider a special line bundle over the sphere S^2, which will be useful in establishing the “Fundamental Product Theorem”, allowing for several important calculations of K-theory. First, we consider the sphere S^2 as the complex projective line CP^1 via the following map:

(z : w) ↦ (2zw̄, |z|^2 − |w|^2) / (|z|^2 + |w|^2) ∈ S^2 ⊂ C × R.

The inverse is given by (z, x) ↦ (z : 1 − x) away from the point (0, 1), and by (z, x) ↦ (1 + x : z̄) away from the point (0, −1) (these maps agree on their common domain). The canonical line bundle over CP^1, which we denote H, has total space
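As a sanity check on these formulas (including the complex conjugates, which are easy to lose track of), one can verify numerically that the maps are mutually inverse. The script below is our own illustration, not part of the text:

```python
# Check that (z : w) -> (2*z*conj(w), |z|^2 - |w|^2) / (|z|^2 + |w|^2)
# and (c, x) -> (c : 1 - x) are mutually inverse (away from (0, 1)).

def to_sphere(z, w):
    """Map a point (z : w) of CP^1 to (c, x) in S^2 inside C x R."""
    n = abs(z) ** 2 + abs(w) ** 2
    return 2 * z * w.conjugate() / n, (abs(z) ** 2 - abs(w) ** 2) / n

def proportional(p, q):
    """(a : b) = (z : w) in CP^1 iff a*w == b*z."""
    return abs(p[0] * q[1] - p[1] * q[0]) < 1e-12

z, w = 0.3 + 0.7j, -1.1 + 0.2j
c, x = to_sphere(z, w)
assert abs(abs(c) ** 2 + x ** 2 - 1) < 1e-12         # lands on the unit sphere
assert proportional((c, 1 - x), (z, w))              # first chart of the inverse
assert proportional((1 + x, c.conjugate()), (z, w))  # second chart
```

The two inverse charts agree where both are defined, since both recover the homogeneous coordinates (z : w) up to a common scalar.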

H = {(z, ℓ) ∈ C^2 × CP^1 | z ∈ ℓ},

with projection π: (z, ℓ) ↦ ℓ. Pick some ε > 0 and let U_1 = {(1 : z) | |z| < 1 + ε}, and U_2 = {(z : 1) | |z| < 1 + ε}. The intersection U_1 ∩ U_2 is the annulus {(1 : z) | 1/(1 + ε) < |z| < 1 + ε}. Define trivializations

φ_i: π^{-1}(U_i) → U_i × C,    ((w_1, w_2), ℓ) ↦ (ℓ, w_i).

These are trivializations since each line in U_i projects surjectively onto the i'th coordinate axis; an inverse to φ_1 is given by ((1 : z), w) ↦ ((w, zw), (1 : z)), and similarly for φ_2. This gives

φ_2 ◦ φ_1^{-1}: ((1 : z), w) ↦ ((w, zw), (1 : z)) = ((w, zw), (1/z : 1)) ↦ ((1/z : 1), zw),

so on the annulus we have the transition function g_H(1 : z) = z.

Proposition 2.5. The canonical line bundle H satisfies the relation H^2 + 1 = 2H in K(S^2).

Proof. This will follow from the relation (H ⊗ H) ⊕ ε^1 ≅ H ⊕ H. The transition function on the annulus for (H ⊗ H) ⊕ ε^1 is (g_H ⊗ g_H) ⊕ 1, and that of H ⊕ H is g_H ⊕ g_H, i.e.

(1 : z) ↦ [ z^2  0 ; 0  1 ]    and    (1 : z) ↦ [ z  0 ; 0  z ].    (2.2)

Now if we construct a homotopy between these two maps the bundles are isomorphic, by the facts on clutching functions given in 2.1. Let α(t) be a path in GL_2(C) from the identity to the matrix [ 0  1 ; 1  0 ], and define g_t(1 : z) = (z ⊕ 1) α(t) (z ⊕ 1) α(t)^{-1}; this is a homotopy between the two transition functions.
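To make the homotopy concrete one can take, say, the path α(t) = [cos(πt/2), i·sin(πt/2); i·sin(πt/2), cos(πt/2)]; this particular choice is ours (it ends at i times the swap matrix, which is harmless since α only enters by conjugation). The following numerical check of the endpoint identities and of invertibility along the way is our own addition:

```python
import cmath
import numpy as np

def alpha(t):
    """A path in GL_2(C) from the identity (t=0) to i times the swap matrix (t=1);
    the scalar i is harmless since alpha only enters by conjugation."""
    c, s = np.cos(np.pi * t / 2), 1j * np.sin(np.pi * t / 2)
    return np.array([[c, s], [s, c]])

def g(t, z):
    """The homotopy of clutching functions g_t(1:z) = (z+1) a(t) (z+1) a(t)^{-1}."""
    d = np.diag([z, 1.0])
    a = alpha(t)
    return d @ a @ d @ np.linalg.inv(a)

z = cmath.exp(0.7j)  # a point on the unit circle, inside the annulus
assert np.allclose(g(0.0, z), np.diag([z**2, 1.0]))  # clutching fn of (H (x) H) + 1
assert np.allclose(g(1.0, z), np.diag([z, z]))       # clutching fn of H + H
# g_t stays invertible in between, so this is a homotopy through clutching functions
assert all(abs(np.linalg.det(g(t, z))) > 1e-9 for t in np.linspace(0, 1, 11))
```

The determinant of g_t(1 : z) is z^2 for every t, which is why invertibility never fails on the annulus.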

This relation means that the ring homomorphism Z[H] → K(S^2) sending the generator to the canonical line bundle factors through the quotient Z[H]/(H − 1)^2. Then for any space X we have a ring homomorphism

K(X) ⊗ Z[H]/(H − 1)^2 → K(X) ⊗ K(S^2) → K(X × S^2),     (2.3)

the second map being the external product. We now cite the following “Fundamental Product Theorem”, theorem 2.2 of [1]:

Theorem 2.1. This composition is a ring isomorphism K(X) ⊗ Z[H]/(H − 1)^2 ≅ K(X × S^2) for every compact Hausdorff space X.

Taking X to be a point, it follows that K(S^2) ≅ Z[H]/(H − 1)^2.
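The ring Z[H]/(H − 1)^2 is small enough to compute in directly: writing elements as a + bH and reducing with H^2 = 2H − 1, multiplication becomes an explicit operation on pairs (a, b). A short sketch, our own illustration:

```python
# Elements of Z[H]/(H - 1)^2 written as a + b*H, stored as pairs (a, b).
# The relation (H - 1)^2 = 0 is equivalent to H^2 = 2H - 1.

def mul(x, y):
    """(a + bH)(c + dH) = ac + (ad + bc)H + bd*H^2 = (ac - bd) + (ad + bc + 2bd)H."""
    (a, b), (c, d) = x, y
    return (a * c - b * d, a * d + b * c + 2 * b * d)

ONE = (1, 0)
H = (0, 1)
H_minus_1 = (-1, 1)

assert mul(H, ONE) == H
assert mul(H, H) == (-1, 2)                 # H^2 = 2H - 1
assert mul(H_minus_1, H_minus_1) == (0, 0)  # (H - 1)^2 = 0, cf. Proposition 2.5
assert mul(mul(H, H), H) == (-2, 3)         # H^3 = 1 + 3(H - 1)
```

The last line illustrates the general pattern H^n = 1 + n(H − 1) in this quotient ring, which is the K-theoretic shadow of tensor powers of the canonical line bundle.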

2.4 The Sequence Associated to a Pair (X, A)

Now we look at what K-theory does to the sequence of maps induced by a closed subspace A ⊂ X. To have the technical properties which are desired, we want to consider spaces which are compact Hausdorff, and subspaces which are closed, whereupon they inherit those properties of their ambient space. Also note that all operations like suspensions and quotients keep us in this situation.

Proposition 2.6. If X is compact Hausdorff and A ⊂ X closed, the sequence in reduced K-theory induced by the inclusion ι: A ↪ X and the quotient q: X → X/A, i.e.

K̃(X/A) --q^*--> K̃(X) --ι^*--> K̃(A),

is exact.

Proof. There is only one place at which to show exactness. The inclusion im q^* ⊂ ker ι^* means ι^*q^* = 0, and this follows since qι collapses A to a point, so factors through a space with one point, whose reduced K-theory is trivial. The other inclusion ker ι^* ⊂ im q^* means, considering elements of the form E − ε^n, that every vector bundle whose restriction to A is stably trivial comes from pulling back some bundle over X/A.

Suppose then that p: E → X is stably trivial over A. We may add zero in the form ε^k − ε^k to assume that E is trivial over A in the expression E − ε^n. Now we construct a bundle over X/A which will pull back to E. Let h: p^{-1}(A) → A × C^n be a trivialization, and E/h the

quotient space of E under the identification h^{-1}(x, v) ∼ h^{-1}(y, v) for x, y ∈ A. Then the map qp factors through the quotient E → E/h, inducing a projection E/h → X/A.

We show first that this is a vector bundle. Every point except for A/A has some neighbourhood over which E/h is the same as E, giving a local trivialization of E/h. It remains to trivialize over some neighbourhood of A/A. The key to this is to prove that E is in fact trivial over some neighbourhood of A. Since E is trivial over A, we have a set of sections s_i: A → E, together giving a basis in each fibre over A. We can cover A with open (in X) sets U_j, over each of which E is trivial. Restricting s_i and composing with a trivialization gives a map A ∩ U_j → E|_{U_j} → U_j × C^n. Looking only at the latter factor of this composition, the coordinate expression of the section, we get a C^n-valued map which by the Tietze extension theorem extends to a map U_j → C^n. Using this as a coordinate expression (in the given trivialization over U_j) gives a section s_ij: U_j → E which agrees with s_i on A ∩ U_j. Let {φ_j, φ} be a partition of unity subordinate to the cover {U_j, X − A}. Then the sum σ_i = Σ_j φ_j s_ij is a section defined on all of X which agrees with s_i on A. By continuity, the σ_i must give a basis for each fibre in some neighbourhood of A: in each U_j we can consider the σ_i together as giving some matrix-valued function whose determinant must be nonzero in some open set containing A ∩ U_j. This local frame gives a trivialization of E over some neighbourhood U of A, which agrees with h on p^{-1}(A), since the sections σ_i used to trivialize agreed with the sections s_i coming from h. Call this trivialization h̃: p^{-1}(U) → U × C^n. The composition

p^{-1}(U) --h̃--> U × C^n --q--> U/A × C^n

respects the equivalence relation used to define E/h, so factors through a map

(p^{-1}(U)/∼) = p^{-1}(U/A) → U/A × C^n.

This is a homeomorphism whose inverse is induced by U × C^n --h̃^{-1}--> p^{-1}(U) → p^{-1}(U)/∼, and gives a trivialization of E/h over U/A.

It remains to show that E ≅ q^*(E/h). Calling the quotient map Q: E → E/h and the projection P: E/h → X/A, we have PQ = qp, so the map

E → q^*(E/h),    e ↦ (p(e), Q(e)),

is a vector bundle homomorphism. The restriction to each fibre of E is an isomorphism, since Q identifies fibres but leaves each fibre uncollapsed. Thus we have an isomorphism E ≅ q^*(E/h).

In the special case where A is contractible, taking the quotient does not change anything in K-theory:

Proposition 2.7. If A ⊂ X is contractible, the quotient map q: X → X/A induces a bijection q^*: Vect^n(X/A) → Vect^n(X) for every n.

This means that the quotient map induces isomorphism in both reduced and unreduced K-theory.

Proof. Since A is contractible, every vector bundle p: E → X is trivial over A, and we can apply the construction in the last proof to get a vector bundle E/h → X/A. We will use this construction as an inverse to q^*. To make this a map Vect^n(X) → Vect^n(X/A), we must show that the construction is, up to isomorphism, independent of the choice of trivialization h. We aim to show that any two such trivializations h_0 and h_1 are homotopic.

The trivializations differ by a transition function g: A → GL_n(C). Since A is contractible, this map is homotopic to a map with constant value α. The trivializations αh_0 and h_0 give the same equivalence relation on E, and so the same bundle over X/A. Therefore we can replace h_0 by αh_0 and proceed as though α is the identity. Since the transition function between h_1 and h_0 is now homotopic to the identity, we have a homotopy H between them. It is a homotopy through trivializations since at each time we are simply multiplying h_0 by some GL_n(C)-valued function.

Using H: p^{-1}(A) × I → A × C^n, we get a map (p × 1)^{-1}(A × I) = p^{-1}(A) × I → A × I × C^n which trivializes E × I --(p,1)--> X × I over A × I. This induces a bundle (E × I)/H → (X/A) × I, whose restrictions to either end of (X/A) × I are E/h_0 and E/h_1, so those bundles are isomorphic. Therefore the map E ↦ E/h is well-defined.

We saw in the preceding proof that q^*(E/h) ≅ E. For the converse, consider the trivialization h of q^*(E) over A given by taking some identification ψ of the fibre of E over A/A with C^n and letting h(x, e) = (x, ψ(e)). Then the map q^*(E)/h → E taking [x, e] to e is well-defined, and an isomorphism on each fibre, so we have the sought isomorphism q^*(E)/h ≅ E.

Now we note that the quotient map X → X/A may be factored through the collapsing of a contractible subspace as X ↪ X ∪ CA → X/A, where the latter map collapses the (contractible) cone on A.
Since the latter map induces isomorphism on K-theory, the sequence K̃(X ∪ CA) → K̃(X) → K̃(A) is exact. But we can extend this sequence to the left via the quotient map X ∪ CA → (X ∪ CA)/X = SA. Since every two steps in the sequence come (up to isomorphism) from an inclusion and a quotient, this longer sequence

K̃(SA) → K̃(X/A) → K̃(X) → K̃(A)

is also exact. We generalize this in the following proposition:

Proposition 2.8. For X a compact Hausdorff space, and A ⊂ X a closed subspace, there is an exact sequence of reduced K-theory

... → K̃(S^2A) → K̃(S(X/A)) → K̃(SX) → K̃(SA) → K̃(X/A) → K̃(X) → K̃(A),

where at each place we have the suspension of the space three steps previous. The map K̃(S^nX) → K̃(S^nA) is (S^nι)^*, the pullback of the n'th suspension of the inclusion A ↪ X, and the map K̃(S^n(X/A)) → K̃(S^nX) is (S^nq)^*, the pullback of the n'th suspension of the quotient X → X/A.

8 Before the proof we introduce a necessary tool, the reduced suspension of a space:

Definition 2.5. The reduced suspension of a space X with basepoint x0, denoted ΣX, is obtained from the suspension SX by collapsing the interval over the basepoint, {x0} × I, to a point.

Equivalently, ΣX is obtained from X × I by collapsing the subspace {x_0} × I ∪ X × {0, 1} to a point, which map factors through the quotient X × I → SX. A pointed map f: X → Y induces a map Σf: ΣX → ΣY, mapping [x, t] ↦ [f(x), t]; this respects equivalence classes precisely because f is a pointed map. From this expression one sees that Σ(g ◦ f) = Σg ◦ Σf, so reduced suspension is a functor. The diagram

SX --Sf--> SY
 |          |
 v          v
ΣX --Σf--> ΣY

is commutative: both compositions take [x, t] to [f(x), t] ∈ ΣY. Since SX → ΣX collapses a contractible subspace, it induces isomorphism on K-theory, and by the diagram above this isomorphism transforms (Σf)^* into (Sf)^*.

Reduced suspension respects subspaces and quotients, in the sense that if A --ι--> X --q--> X/A is the inclusion of a subspace and the collapsing of it to a point, applying Σ gives a sequence ΣA --Σι--> ΣX --Σq--> Σ(X/A), which is also an inclusion and a quotient. Σι: [a, t] ↦ [ι(a), t] is the inclusion, noting that implicitly we take A to contain the basepoint of X. We have a homeomorphism Σ(X/A) ≅ ΣX/ΣA, since both are obtained from X × I by collapsing the subspace A × I ∪ X × {0, 1} to a point, and composing Σq with this homeomorphism gives the quotient ΣX → ΣX/ΣA. With this tool we go ahead to prove the proposition:

Proof. We have a set of short sequences A ↪ X → X/A, ΣA ↪ ΣX → Σ(X/A), etc., each sequence being the reduced suspension of the preceding one. Via the homeomorphism Σ(X/A) ≅ ΣX/ΣA we know that each of these sequences takes the form of inclusion and collapse of a subspace, so they give exact sequences in K-theory. To connect the K-theory of these sequences we consider the diagram

X --> X ∪ CA --> (X ∪ CA) ∪ CX --> SA --> SX
|       |                          |      |
v       v                          v      v
X -->  X/A                        ΣA -->  ΣX

(the vertical maps being the identity on X and the quotient maps elsewhere).

Applying K-theory, we get a map K̃(ΣA) → K̃(X/A) via three pullbacks and the inverse of the pullback of X ∪ CA → X/A: this map connects the exact sequences together. Since each sequence in the set is the reduced suspension of the preceding one, this construction connects all their K-theory into a long sequence. To show that this long sequence is exact

we only need to consider exactness at K̃(X/A) and K̃(ΣA), where this connecting map is involved, i.e. of the sequence K̃(ΣX) → K̃(ΣA) → K̃(X/A) → K̃(X). But by construction this is isomorphic to K̃(SX) → K̃(SA) → K̃(X ∪ CA) → K̃(X), coming from the top row of the diagram above. The first two maps there are of the form considered just above the proposition, and since the third map collapses a contractible subspace and induces isomorphism, we have exactness at K̃(X ∪ CA) ≅ K̃(X/A). Now we consider the diagram

X ∪ CA --> (X ∪ CA) ∪ CX --> ((X ∪ CA) ∪ CX) ∪ C(X ∪ CA)
                |                          |
                v                          v
                SA           -->           SX

We know that the upper row induces an exact sequence in K-theory, and we are interested in the lower path. Since the vertical maps induce isomorphism, it will suffice to show that the induced diagram in K-theory is commutative. Both maps (X ∪ CA) ∪ CX → SX factor through the map which takes X to the equator of SX, includes CA in the top half of SX, and includes CX as the bottom half of SX. Then to get the lower path in the square we compose with the map SX → SX which collapses the bottom cone to a point, and to get the upper path with the map which collapses the upper cone to a point. These cones being contractible, both compositions induce the same map on K-theory, so the induced diagram is indeed commutative, giving exactness at K̃(SA) ≅ K̃(ΣA).

Now since reduced and unreduced suspension have the same K-theory, we get the long exact sequence in the proposition. By construction two out of three maps come from iterated reduced suspension, and thus going to unreduced suspension they are transformed to the iterated unreduced suspension.

First we apply this to the inclusion A ↪ A ∨ B, where A ∨ B is the wedge sum, constructed by identifying two basepoints in A and B. Collapsing all of A to the common basepoint of the wedge sum leaves only B, i.e. (A ∨ B)/A ≅ B, and we have an exact sequence

K̃(B) --q_B^*--> K̃(A ∨ B) --ι_A^*--> K̃(A).

We might equally well consider the sequence B --ι_B--> A ∨ B --q_A--> A, and note that q_Aι_A = 1_A and q_Bι_B = 1_B, so that the above is in fact a split short exact sequence, and we have K̃(A ∨ B) ≅ K̃(A) ⊕ K̃(B), with the isomorphism given by pulling back through quotients/inclusions.

Next we consider the pair (X × Y, X ∨ Y), X ∨ Y being included in the product as the subspace X × {y_0} ∪ {x_0} × Y. Then the quotient (X × Y)/(X ∨ Y) is by definition the smash product X ∧ Y. We will use the exact sequence of this pair and the external product of unreduced K-theory to define a reduced external product, taking values in the K-theory of the smash product rather than that of the ordinary product. We have the exact sequence

K̃(S(X × Y)) → K̃(S(X ∨ Y)) → K̃(X ∧ Y) → K̃(X × Y) → K̃(X ∨ Y).

We showed already that the last term is K̃(X) ⊕ K̃(Y), and shall prove something similar for the second term. The suspension does not satisfy S(X ∨ Y) = SX ∨ SY (as can be seen by considering the case where Y is a point). However, Σ(X ∨ Y) = ΣX ∨ ΣY, since both spaces are constructed from (X ⊔ Y) × I by collapsing the subspace {x_0} × I ∪ {y_0} × I ∪ (X ⊔ Y) × {0, 1} to a point, the two constructions corresponding to different orders of collapse. Let us show that this isomorphism K̃(S(X ∨ Y)) ≅ K̃(SX) ⊕ K̃(SY) is in fact realized via the pullbacks of the inclusions SX ↪ S(X ∨ Y) ↩ SY. The factor of the isomorphism which maps onto K̃(SX) is the composition

K̃(S(X ∨ Y)) --(q^*)^{-1}--> K̃(Σ(X ∨ Y)) --ι^*--> K̃(ΣX) --q^*--> K̃(SX).

To show that this is the same as the pullback of the inclusion SX ↪ S(X ∨ Y), consider the diagram SX ↪ S(X ∨ Y) → Σ(X ∨ Y) ↩ ΣX ← SX, and note that the composition of the first two maps is the same as that of the last two. Taking the pullbacks of the maps in this diagram then shows that the pullback of the inclusion equals the composition in the previous diagram. The sequence is then isomorphic to

Ke(S(X × Y )) → Ke(SX) ⊕ Ke(SY ) → Ke(X ∧ Y ) → Ke(X × Y ) → Ke(X) ⊕ Ke(Y ). (2.4)

The last map is a surjection, split by the map p_X^* + p_Y^*: K̃(X) ⊕ K̃(Y) → K̃(X × Y). To see this simply note that the composed inclusion X ↪ X ∨ Y ↪ X × Y is a right inverse for p_X, and similarly for Y. The first map is also a split surjection, via the map (Sp_X)^* + (Sp_Y)^*: we have (Sι_X)^*(Sp_X)^* = (S(p_X ι_X))^* = (S1_X)^* = 1_{K̃(SX)}, and similarly for SY. Thus the second map is zero, and the last three terms form a split short exact sequence, giving K̃(X × Y) ≅ K̃(X ∧ Y) ⊕ K̃(X) ⊕ K̃(Y).

2.5 The Reduced External Product

Now we consider the product of a ∈ K̃(X) and b ∈ K̃(Y), regarding the reduced groups as the kernels of the restrictions to the basepoints x_0 and y_0 of X and Y. Then the product a ∗ b restricts to zero over the basepoint (x_0, y_0), so we have a ∗ b ∈ K̃(X × Y). In the above decomposition of K̃(X × Y), we get the component of a ∗ b in K̃(X) by pulling back through the inclusion of X as X × {y_0}. However we have

ι_X^*(a ∗ b) = ι_X^*(p_X^*(a) p_Y^*(b)) = (p_X ι_X)^*(a) (p_Y ι_X)^*(b) = 0,

since p_Y ι_X is a constant map, so (p_Y ι_X)^* is the zero map. Similarly, the component in K̃(Y) is zero, so we have a ∗ b ∈ K̃(X ∧ Y). This defines the reduced external product K̃(X) ⊗ K̃(Y) → K̃(X ∧ Y). In fact the external product respects the decompositions

K(X) ⊗ K(Y) = (K̃(X) ⊕ Z) ⊗ (K̃(Y) ⊕ Z) = K̃(X) ⊗ K̃(Y) ⊕ K̃(X) ⊕ K̃(Y) ⊕ Z,

and K(X × Y) = K̃(X × Y) ⊕ Z = K̃(X ∧ Y) ⊕ K̃(X) ⊕ K̃(Y) ⊕ Z, in the sense that the product restricted to the components K̃(X), K̃(Y), or Z is the identity onto that same component of K(X × Y). This is because an element a ⊗ (ε^n − ε^m) ∈ K̃(X) ⊗ Z is mapped to p_X^*(a) p_Y^*(ε^n − ε^m) = p_X^*(a)(ε^n − ε^m), which is a multiple of p_X^*(a), and so lies in the component K̃(X) of K(X × Y). Similarly the component K̃(Y) ⊗ Z is mapped into K̃(Y), and likewise for the last Z-component.

Two pointed maps f: X → X′ and g: Y → Y′ induce a map f ∧ g fitting into a commutative diagram:

X × Y --f × g--> X′ × Y′
  |q_1              |q_2
  v                 v
X ∧ Y --f ∧ g--> X′ ∧ Y′

For a ∈ K̃(X′), b ∈ K̃(Y′), the reduced product a ∗ b pulls back through q_2 to p_{X′}^*(a) p_{Y′}^*(b), which f × g then pulls back to f^*(a) ∗ g^*(b). By commutativity this means that (f ∧ g)^*(a ∗ b) pulls back through q_1 to f^*(a) ∗ g^*(b), i.e. that (f ∧ g)^*(a ∗ b) = f^*(a) ∗ g^*(b).

Now it follows from the product theorem that the reduced product K̃(X) ⊗ K̃(S^2) → K̃(X ∧ S^2) is an isomorphism, since this is one component of an isomorphism. We observe that the smash product S^n ∧ X is the reduced suspension Σ^nX: the latter can be formed from I^n × X by collapsing the subspace ∂I^n × X ∪ I^n × {x_0} to a point, and this can be done by first collapsing ∂I^n in the first factor, leaving us with S^n × X, and the rest of the collapsing is then the quotient map S^n × X → S^n ∧ X. Since ΣX is obtained from SX by collapsing an interval to a point, it follows inductively that Σ^nX is obtained from S^nX by collapsing an n-disc to a point, so the quotient S^nX → Σ^nX induces isomorphism on K-theory. With this and the product theorem, we can prove the Bott periodicity theorem:

Theorem 2.2. The homomorphism

β: K̃(X) → K̃(X ∧ S^2) ≅ K̃(S^2X),    a ↦ (H − 1) ∗ a,

is an isomorphism for every compact Hausdorff space X.

Proof. The map factors as K̃(X) → K̃(X) ⊗ K̃(S^2) → K̃(X ∧ S^2), the first map being a ↦ a ⊗ (H − 1), which is an isomorphism since K̃(S^2) ≅ Z is generated by H − 1. The second map is the reduced external product, which as we observed is an isomorphism, by the product theorem for the unreduced case.

From this the following corollary follows, giving the K-theory of the spheres:

Proposition 2.9. Ke(S^{2n+1}) = 0, and Ke(S^{2n}) ≅ Z, generated by (H − 1) ∗ ... ∗ (H − 1) (viewing S^{2n} as the smash product of n copies of S^2).

Proof. All complex vector bundles over S^1 are trivial, since there is only one homotopy class of transition functions for such bundles, which are defined on the union of two arcs of S^1 and valued in the path-connected group GL_n(C). By Bott periodicity the reduced K-theory remains trivial when the sphere's dimension is increased by two. For the even-dimensional spheres, we had Ke(S^2) ≅ Z, generated by H − 1, and the theorem says that external multiplication with H − 1 is an isomorphism, which we iterate to get the K-theory of the even-dimensional spheres.

Bott periodicity can be applied to the long exact sequence we derived for the K-theory of iterated suspensions of a pair (X, A):

··· → Ke(S^2 X) → Ke(S^2 A) → Ke(S(X/A)) → Ke(SX) → Ke(SA) → Ke(X/A) → Ke(X) → Ke(A).

The Bott periodicity isomorphism β: Ke(A) → Ke(S^2 A) lets us connect the end of the sequence to an earlier step and form a loop from Ke(S(X/A)) to Ke(A). To see that this loop is exact we need to check exactness at two points, where the connecting map β is involved. First, the part Ke(A) → Ke(S(X/A)) → Ke(SX) is exact, since the kernel of the second map is the image of the map Ke(S^2 A) → Ke(S(X/A)), and the first map is just that map precomposed with an isomorphism. Next consider the part Ke(X) → Ke(A) → Ke(S(X/A)). The kernel of the second map is the β-preimage of the kernel of Ke(S^2 A) → Ke(S(X/A)), which is the image of Ke(S^2 X) → Ke(S^2 A). We then wish to show that the maps Ke(S^2 X) → Ke(S^2 A) and β ◦ ι∗: Ke(X) → Ke(S^2 A) have the same image, i.e. that

    Ke(S^2 X) --(S^2 ι)∗--> Ke(S^2 A)
        ^                       ^
        |β                      |β
      Ke(X) -------ι∗------->  Ke(A)

commutes. Now for ι × 1: A × S^2 → X × S^2, the induced map ι ∧ 1: A ∧ S^2 → X ∧ S^2 is the inclusion of this subspace, i.e. it is identified with S^2 ι. Thus in the diagram above we have (S^2 ι)∗(a ∗ (H − 1)) = (ι ∧ 1)∗(a ∗ (H − 1)) = ι∗(a) ∗ (H − 1), showing commutativity. Then we have a six-term exact loop

    Ke(X/A) ---> Ke(X) ---> Ke(A)
       ^                      |
       |                      v
    Ke(SA) <-- Ke(SX) <-- Ke(S(X/A))

2.6 A Relative Product

For A, B ⊂ X we define a relative multiplication Ke(X/A) ⊗ Ke(X/B) → Ke(X/(A ∪ B)) using the relative diagonal map ∆̃: X/(A ∪ B) → X/A ∧ X/B induced by the diagonal on X

(which respects the equivalence relations involved). This relative multiplication then takes a ⊗ b to ∆̃∗(a ∗ b).

Proposition 2.10. This relative product has the following properties:

1. In the case A = B = x0, it reduces to the ordinary product in Ke(X).

2. If f: X → Y is a map taking A, B ⊂ X into U, V ⊂ Y, respectively, then, letting f denote any of the three maps induced between quotient spaces, we have f∗(ab) = f∗(a)f∗(b) for a ∈ Ke(Y/U), b ∈ Ke(Y/V).

3. If X and Y are contractible spaces, with A ⊂ X and B ⊂ Y, then the relative product

Ke((X×Y )/(A×Y ))⊗Ke((X×Y )/(X×B)) → Ke((X×Y )/(A×Y ∪X×B)) = Ke(X/A∧Y/B)

satisfies the formula ab = ι1∗(a) ∗ ι2∗(b), where ι1: X/A → (X × Y )/(A × Y ) is induced by the inclusion of X as X × {y0}, and similarly for ι2.

Proof.

1. The two diagonal maps and the quotient give the following commutative diagram in K-theory:

            Ke(X ∧ X)
            /        \
          q∗          ∆̃∗
          v            v
      Ke(X × X) --∆∗--> Ke(X)

   Now since the external product of a and b pulls back to p1∗(a)p2∗(b) ∈ Ke(X × X), and pi ◦ ∆ = 1X, it pulls back further to ab in Ke(X).

2. Consider the diagram

      X/(A ∪ B) ----f----> Y/(U ∪ V )
          |                    |
          v                    v
      X/A ∧ X/B --(f ∧ f)--> Y/U ∧ Y/V

which commutes since either composition is the map induced by (f, f): X → Y × Y. Since we have (f ∧ f)∗(a ∗ b) = f∗(a) ∗ f∗(b), commutativity of this diagram shows that the relative product is respected by maps as indicated.

3. Consider the diagram

      (X×Y )/(A×Y ) × (X×Y )/(X×B) ---> (X×Y )/(A×Y ) ∧ (X×Y )/(X×B)
               ^                                    ^
               | ι1 × ι2                            |
           X/A × Y/B           --->             X/A ∧ Y/B

   where the horizontal maps are the quotient maps onto the smash products. The right vertical map is equivalent to the relative diagonal on (X × Y )/(A × Y ∪ X × B). The map ι1 × ι2 is induced by the map X × Y → X × Y × X × Y, (x, y) ↦ (x, y0, x0, y), which respects the relations involved, and likewise induces the upper left composition in the diagram. The lower right composition is induced by the map (x, y) ↦ (x, y, x, y). These two maps are homotopic in a way which respects the relations, making the diagram homotopy commutative. To construct the homotopy, let r_t^X and r_t^Y be deformations of the identities to the basepoints x0 and y0. Then f_t(x, y) = (x, r_t^Y(y), r_t^X(x), y) is a homotopy between the two maps. To see that this respects the relations, note that if x ∈ A or y ∈ B, the first or last coordinate of the value will put it in the subspace A × Y × X × Y ∪ X × Y × X × B, which is collapsed to a point.

   Now take a ∈ Ke((X × Y )/(A × Y )) and b ∈ Ke((X × Y )/(X × B)). The reduced external product a ∗ b at the top right is pulled downwards to the relative product ab. It is also by definition pulled to the left to the external product p1∗(a)p2∗(b), which is then pulled down to (ι1 p1)∗(a)(ι2 p2)∗(b) = p1∗(ι1∗a)p2∗(ι2∗b). By homotopy commutativity, the relative product ab is pulled to the left to this same element, meaning that it is the reduced external product: ab = ι1∗(a) ∗ ι2∗(b).

We get the following statement on products in spaces which have finite contractible covers, giving as a special case the product structure in Ke(S^n):

Proposition 2.11. If X can be covered with a finite set of closed contractible subspaces A1, ..., An, all n-ary products in Ke(X) are zero.

Proof. The quotient maps qi: X → X/Ai all induce isomorphisms on K-theory, so any product a1 ... an can be written as q1∗(a1) ... qn∗(an). Denote the quotient maps q_{i1,...,ik}: X → X/(A_{i1} ∪ ... ∪ A_{ik}), and note that they are all induced by the identity on X. Therefore by property (2) we can replace q1∗(a1)q2∗(a2) by q_{1,2}∗(a1 a2), and so on until we have q_{1,...,n}∗(a1 ... an), the argument denoting the relative product over X/(A1 ∪ ... ∪ An). But this space is a point, so that argument is necessarily zero. Therefore a1 ... an = 0.

It follows that all products in Ke(SX) are trivial, since SX is the union of two cones, which are contractible. In particular the product is trivial over any sphere of positive dimension.

2.7 H-Spaces and the Hopf Invariant

Now we come to the matter of determining which spheres can be given the structure of an H-space. The first step will be to rule out spheres of even dimension, except the rather trivial case of S^0. We observe the following in order to calculate K(S^{2k} × S^{2ℓ}): The external product Ke(S^{2k}) ⊗ Ke(X) → Ke(S^{2k} ∧ X) is an isomorphism, since it is obtained by iterating external product with H − 1, the generator of Ke(S^2), which is an isomorphism by Bott periodicity. Then the external product K(S^{2k}) ⊗ K(X) → K(S^{2k} × X) is also an isomorphism. This is because we can split K(S^{2k}) ⊗ K(X) ≅ Ke(S^{2k}) ⊗ Ke(X) ⊕ Ke(S^{2k}) ⊕ Ke(X) ⊕ Z, where the external product is simply the identity on the last three terms of the decomposition, and is as just stated an isomorphism on the first term.

Now since K(S^{2k}) ≅ Ke(S^{2k}) ⊕ Z, where the generator of Ke(S^{2k}) squares to zero, we have K(S^{2k}) ≅ Z[α]/(α^2), with α corresponding to the k'th external power of the generator H − 1 of Ke(S^2). This in turn implies that K(S^{2k} × S^{2ℓ}) ≅ Z[α]/(α^2) ⊗ Z[β]/(β^2) ≅ Z[α, β]/(α^2, β^2), with α and β in the latter ring being the pullbacks of the k'th and ℓ'th external powers of the generator H − 1. We apply these arguments:

Proposition 2.12. The even-dimensional sphere S^{2k} is not an H-space if k > 0.

Proof. Let µ: S^{2k} × S^{2k} → S^{2k} be the multiplication of the H-space. We then have, up to isomorphisms, µ∗: Z[γ]/(γ^2) → Z[α, β]/(α^2, β^2). Let i be the inclusion of S^{2k} as the first factor of S^{2k} × S^{2k}, i.e. as S^{2k} × {e}. Then µ ◦ i = 1_{S^{2k}}. Now, α and β being the pullbacks of the generator of Ke(S^{2k}) through the two factors' projections, i∗ takes α to γ and β to zero. Thus since i∗ ◦ µ∗ is the identity on Ke(S^{2k}), the coefficient of α in µ∗(γ) is 1. By the same reasoning for the second factor, the coefficient of β is also 1. Therefore we have µ∗(γ) = α + β + mαβ for some integer m. However, we then have

0 = µ∗(0) = µ∗(γ^2) = µ∗(γ)^2 = (α + β + mαβ)^2 = 2αβ ≠ 0,

which is a contradiction.

Now for the harder part of determining which odd-dimensional spheres can be H-spaces. Given an H-structure on a sphere we will construct a certain map f and consider its mapping cone Cf. The K-theory of this cone fits into a short exact sequence which lets us determine certain elements α and β in Ke(Cf). Studying the relations between these elements in the light of Adams operations, to be constructed presently, will allow us to determine which spheres can have H-structure.

To an H-space structure µ: S^{2n−1} × S^{2n−1} → S^{2n−1} on an odd-dimensional sphere we associate a map f: S^{4n−1} → S^{2n} as follows: consider S^{4n−1} to be the boundary of the disc D^{2n} × D^{2n}, and S^{2n} as decomposed into the upper and lower hemispheres D₊^{2n} and D₋^{2n}. Then on the part ∂D^{2n} × D^{2n} of the boundary we define f(x, y) = |y|µ(x, y/|y|) ∈ D₊^{2n}, and on the part D^{2n} × ∂D^{2n} we define f(x, y) = |x|µ(x/|x|, y) ∈ D₋^{2n}. Note that the formulae agree on the common points of the two boundary parts, and give a map which extends continuously to where either x or y is zero.
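Returning briefly to the proof of proposition 2.12: the arithmetic in the truncated polynomial ring Z[α, β]/(α^2, β^2) is mechanical enough to check by machine. The following Python sketch (our own encoding, not from the text) models the quotient ring with dictionaries over the four surviving monomials and confirms that (α + β + mαβ)^2 = 2αβ for every integer m:

```python
# Model of Z[a, b]/(a^2, b^2): an element is a dict mapping the
# surviving monomials 1, a, b, ab (as exponent pairs) to coefficients.
MONOMIALS = [(0, 0), (1, 0), (0, 1), (1, 1)]

def elem(one=0, a=0, b=0, ab=0):
    return {(0, 0): one, (1, 0): a, (0, 1): b, (1, 1): ab}

def mul(u, v):
    """Multiply two elements, discarding any monomial containing a^2 or b^2."""
    out = {m: 0 for m in MONOMIALS}
    for (i, j), c in u.items():
        for (k, l), d in v.items():
            if i + k <= 1 and j + l <= 1:  # the relations a^2 = b^2 = 0
                out[(i + k, j + l)] += c * d
    return out

# mu*(gamma) = a + b + m*ab: its square is 2ab, never zero.
for m in range(-5, 6):
    x = elem(a=1, b=1, ab=m)
    assert mul(x, x) == elem(ab=2)
```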

Now we consider the mapping cone Cf of f, which is constructed from D^{4n} ⊔ S^{2n} by identifying each x ∈ ∂D^{4n} with f(x). S^{2n} is a subspace of Cf, and collapsing it to a point collapses the boundary of D^{4n} to a point, leaving the sphere S^{4n}. Thus associated to the pair (Cf, S^{2n}) we have a sequence in K-theory, which is in fact a short exact sequence, because the suspensions of S^{2n} and Cf/S^{2n} are both odd-dimensional spheres with trivial K-theory:

0 → Ke(S^{4n}) → Ke(Cf) → Ke(S^{2n}) → 0.

Let α ∈ Ke(Cf) be the image of the generator (H − 1)^{∗2n} of Ke(S^{4n}), and β ∈ Ke(Cf) an element which maps to the generator (H − 1)^{∗n} of Ke(S^{2n}). Since products are trivial over spheres, we see that β^2 maps to zero, meaning that it is in the image of the previous map, i.e. proportional to α: β^2 = hα. The constant h is called the Hopf invariant of the map f. It does not depend on the choice of β, because any other choice must be β + mα for some integer m, and then we have (β + mα)^2 = β^2 + 2mαβ, since α^2 = 0, so we just need to show that αβ = 0. Since α maps to zero, so does αβ, so αβ = kα for some integer k. Then we have kαβ = αβ^2 = α(hα) = hα^2 = 0. Thus either k = 0, in which case αβ = kα = 0, or k ≠ 0 and again αβ = 0, since αβ lies in the image of Ke(S^{4n}), which is isomorphic to Z and hence torsion-free.

Now the following statement on the Hopf invariant is the restriction which we will use to determine which spheres can be H-spaces:

Proposition 2.13. For f constructed from an H-space structure as above, the Hopf invariant is ±1.

Proof. Let e denote the identity of the multiplication, and Φ: D^{4n} ↪ D^{4n} ⊔ S^{2n} → Cf the characteristic map of the 4n-cell of Cf. Consider the following diagram:

    Ke(Cf) ⊗ Ke(Cf) ---------------------------------------------------------------> Ke(Cf)
         ^ ≅                                                                             ^
    Ke(Cf/D₊^{2n}) ⊗ Ke(Cf/D₋^{2n}) -------------------------------------------> Ke(Cf/S^{2n})
         | Φ∗ ⊗ Φ∗                                                                       | Φ∗, ≅
         v                                                                               v
    Ke((D^{2n}×D^{2n})/(∂D^{2n}×D^{2n})) ⊗ Ke((D^{2n}×D^{2n})/(D^{2n}×∂D^{2n})) ---> Ke((D^{2n}×D^{2n})/∂(D^{2n}×D^{2n}))
         | ≅                                                                             ^ ≅
         v                                                                              /
    Ke(D^{2n}/∂D^{2n}) ⊗ Ke(D^{2n}/∂D^{2n}) -------------(reduced external product)----

The horizontal maps are relative products. The first two vertical maps come from the quotient maps. The middle vertical maps are the pullbacks along the maps induced by Φ. The bottom vertical map comes from the inclusions of D^{2n} as the first and second factor of D^{2n} × D^{2n}. The diagonal map is the reduced external product, considering D^{2n}/∂D^{2n} = S^{2n} and (D^{2n} × D^{2n})/∂(D^{2n} × D^{2n}) = S^{2n} ∧ S^{2n}.

We show commutativity using the properties of the relative product proved above. The upper square commutes because all the quotient maps are induced by the same map Cf → Cf, namely the identity, wherefore they respect the relative product. In the second square, the vertical maps are all induced by the same map Φ, so the relative product is respected again. In the bottom triangle, since D^{2n} is contractible, the relative product can be obtained by pulling back and taking the external product.

The top left vertical map is an isomorphism since both collapsed subspaces are contractible. The middle right map is an isomorphism because the map induced by Φ is just the identity of S^{4n}. The bottom vertical map is an isomorphism because (D^{2n} × D^{2n})/(∂D^{2n} × D^{2n}) deformation retracts to the included subspace (D^{2n} × {e})/(∂D^{2n} × {e}), simply by deforming the right factor to e with the left factor fixed (and symmetrically for the other factor). The diagonal is an isomorphism because it is an iteration of the Bott periodicity isomorphism.

Since e is the identity of the multiplication, Φ restricted to (D^{2n} × {e})/(∂D^{2n} × {e}) is the inclusion of D^{2n}/∂D^{2n} as D₋^{2n}/∂D₊^{2n} ⊂ Cf/D₊^{2n}. Now we have a commutative diagram

       S^{2n} ------------> Cf
         |                   |
         v                   v
    D^{2n}/∂D^{2n} --Φ--> Cf/D₊^{2n}

where the left hand map collapses the hemisphere D₊^{2n} to a point, which induces an isomorphism in K-theory. Since β restricts to a generator of Ke(S^{2n}), Φ pulls the element of Ke(Cf/D₊^{2n}) which is pulled back to β back to a generator of Ke(D^{2n}/∂D^{2n}). By this and the entirely symmetrical argument for the other factor, with Cf/D₋^{2n}, we see that the element β ⊗ β in the top left is mapped to a generator of the bottom group, which is then mapped to a generator of Ke(Cf/S^{2n}). By definition this element is mapped to ±α ∈ Ke(Cf). Since this path through the diagram coincides with the product, we have shown β^2 = ±α.

2.8 Adams Operations and Adams' Theorem

In order to use the Hopf invariant as a restriction which tells us which spheres are H-spaces, we will develop certain ring endomorphisms of K-theory, called Adams operations. The Adams operations are related to the ordinary product in K-theory, and can be calculated exactly for the K-theory of spheres; these two virtues will allow us to apply the operations in a way which gives the desired restriction on the sphere's dimension. Their properties are given in the following theorem:

Theorem 2.3. For every compact Hausdorff space X there is a set of ring endomorphisms ψ^k: K(X) → K(X), indexed by the natural numbers, with the following properties:

1. ψ^k f∗ = f∗ ψ^k for any map f of such spaces.

2. ψ^k(L) = L^k if L is a line bundle.

3. ψ^k ◦ ψ^ℓ = ψ^{kℓ}.

4. ψ^p(α) ≡ α^p mod p for every prime p.

The last property means that ψ^p(α) − α^p ∈ pK(X). The second property says that, looking at line bundles only, the operations simply raise elements to the corresponding power. This in general is not an endomorphism of the ring, of course, so we need to do some work to extend this to the rest of the ring. The last property then says that the prime Adams operations approximate the powers in the ring up to the subgroup pK(X). We will use these desired properties as a notion of what to aim at in our construction of the operations. In the more general case of a bundle E = L1 ⊕ ... ⊕ Ln, decomposed as a sum of line bundles, the properties give ψ^k(E) = L1^k + ... + Ln^k. We will construct operations which satisfy the properties on sums of line bundles, and then use the splitting principle to reduce the general case to that one. The construction will use the exterior powers λ^k of vector bundles, which have the following properties:

1. λ^k(E1 ⊕ E2) ≅ ⊕_i (λ^i(E1) ⊗ λ^{k−i}(E2)).

2. λ^0(E) = 1, meaning the trivial line bundle.

3. λ^1(E) = E.

4. λ^k(E) = 0 if k is greater than the maximum dimension of the fibres of E.

Now the idea will be to define ψ^k(E) as some polynomial s_k applied to the first through k'th exterior powers of E: ψ^k(E) = s_k(λ^1(E), ..., λ^k(E)). To determine what this polynomial should be, let us assume E = L1 ⊕ ... ⊕ Ln, a sum of line bundles. We can then describe each λ^j(E) as a polynomial in the Li, by an inductive argument.

First, we have λ^1(E) = E = L1 + ... + Ln. Then, since exterior powers higher than 1 vanish for line bundles, property (1) gives that for a line bundle L_{n+1}

λ^k(E ⊕ L_{n+1}) = λ^k(E) ⊗ 1 ⊕ λ^{k−1}(E) ⊗ L_{n+1}.

Supposing that λ^k(L1 ⊕ ... ⊕ Ln) = σ^n_k(L1, ..., Ln) for some set of polynomials σ^n_k, this gives a recurrence relation

σ^{n+1}_k(L1, ..., L_{n+1}) = σ^n_k(L1, ..., Ln) + L_{n+1} σ^n_{k−1}(L1, ..., Ln).

But this is precisely the recurrence relation of the elementary symmetric polynomials:

σ^n_k(x1, ..., xn) = Σ_{1 ≤ j1 < ... < jk ≤ n} x_{j1} ... x_{jk}.

In the recurrence relation above the first term corresponds to the terms of σ^{n+1}_k with j_k < n + 1, and the second term accounts for the terms where j_k = n + 1. Thus the exterior powers of a sum of line bundles are given by these elementary symmetric polynomials applied to its summands. Now the existence of an appropriate polynomial s_k follows from

the fundamental theorem of symmetric polynomials: since the polynomial L1^k + ... + Ln^k is symmetric in its arguments, there is a unique polynomial s_k such that

s_k(σ^n_1(L1, ..., Ln), ..., σ^n_n(L1, ..., Ln)) = L1^k + ... + Ln^k.

These particular polynomials are called Newton polynomials. The polynomial is independent of n, which justifies simply denoting it by s_k, since

L1^k + ... + L_{n−1}^k = s_k(σ^n_1(L1, ..., L_{n−1}, 0), ..., σ^n_n(L1, ..., L_{n−1}, 0)) = s_k(σ^{n−1}_1(L1, ..., L_{n−1}), ..., σ^{n−1}_{n−1}(L1, ..., L_{n−1})),

with the convention σ^n_j = 0 for j > n. Since s_k(σ^n_1(L1, ..., Ln), ..., σ^n_n(L1, ..., Ln)) is of degree k, it depends only on the first k symmetric functions, so we consider it a polynomial in k variables, independently of n. We prove the following important fact about these polynomials, which will let us prove property (4) of the Adams operations:

Proposition 2.14. For p prime, the polynomial P(y1, ..., yp) = y1^p − s_p(y1, ..., yp) has a factor p.

Proof. The first step is to note that this polynomial applied to (σ^p_1(x1, ..., xp), ..., σ^p_p(x1, ..., xp)) is precisely the mixed part of (x1 + ... + xp)^p. The terms of that part have coefficients given according to the multinomial theorem as p!/(k1! ... kp!), where the kj are the exponents of the x's, and add up to p. Since p is prime and we are considering the mixed terms, these multinomial coefficients all have a factor p.

Thus the composed polynomial P(σ^p_1(x1, ..., xp), ..., σ^p_p(x1, ..., xp)) has a factor p. Then write P(σ^p_1(x1, ..., xp), ..., σ^p_p(x1, ..., xp)) = pQ(x1, ..., xp). The polynomial Q is symmetric, so there is a unique P′ such that P′(σ^p_1(x1, ..., xp), ..., σ^p_p(x1, ..., xp)) = Q(x1, ..., xp). Then by uniqueness we must have pP′ = P.
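Both the defining property of the Newton polynomials and the divisibility of proposition 2.14 can be confirmed numerically. The sketch below (our own code and naming) builds the elementary symmetric values with the recurrence from the text, evaluates s_k via Newton's identities, and checks that s_k recovers the power sums and that y1^p − s_p(y1, ..., yp) is divisible by p for arbitrary integer arguments:

```python
import math
import random
from itertools import combinations

def elementary(xs):
    """Elementary symmetric values sigma_1..sigma_n of xs, built with the
    recurrence sigma_k(xs + [x]) = sigma_k(xs) + x * sigma_{k-1}(xs)."""
    sig = [1] + [0] * len(xs)          # sig[k] holds sigma_k; sigma_0 = 1
    for n, x in enumerate(xs, start=1):
        for k in range(n, 0, -1):      # descend so sig[k-1] is still the old value
            sig[k] += x * sig[k - 1]
    return sig[1:]

def newton_sk(k, sig):
    """The Newton polynomial s_k evaluated at (sigma_1, ..., sigma_k), computed
    with Newton's identities: p_m = sum_{j<m} (-1)^(j-1) sigma_j p_{m-j}
                                    + (-1)^(m-1) m sigma_m."""
    ps = []
    for m in range(1, k + 1):
        val = (-1) ** (m - 1) * m * sig[m - 1]
        for j in range(1, m):
            val += (-1) ** (j - 1) * sig[j - 1] * ps[m - j - 1]
        ps.append(val)
    return ps[-1]

random.seed(1)
for _ in range(50):
    xs = [random.randint(-4, 4) for _ in range(5)]
    sig = elementary(xs)
    for k in range(1, 6):
        # the recurrence agrees with the direct definition of sigma_k ...
        assert sig[k - 1] == sum(math.prod(c) for c in combinations(xs, k))
        # ... and s_k(sigma_1,...,sigma_k) recovers the power sum
        assert newton_sk(k, sig) == sum(x ** k for x in xs)

# Proposition 2.14: y1^p - s_p is divisible by p as a polynomial, so it
# vanishes mod p on arbitrary integer inputs (not only on sigma-values).
for p in (2, 3, 5, 7):
    for _ in range(50):
        ys = [random.randint(-10, 10) for _ in range(p)]
        assert (ys[0] ** p - newton_sk(p, ys)) % p == 0
```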

Finally then we define ψ^k(E) = s_k(λ^1(E), ..., λ^k(E)). To prove that this satisfies all the properties we use the following result, which we quote from [1]:

Theorem 2.4 (The Splitting Principle). Let E → X be a vector bundle over a compact Hausdorff space. Then there is a compact Hausdorff space F (E) with a map p: F (E) → X such that the pullback p∗ : K(X) → K(F (E)) is injective and p∗(E) is a sum of line bundles.

Now let us prove that all the properties of the Adams operations hold:

Proof. We first note that exterior power is a natural operation, i.e. commutes with pullbacks: λ^j(f∗(E)) = f∗(λ^j(E)). This confirms property (1). Now, to show additivity of the operations, consider ψ^k(E1 ⊕ E2). Take the map p1: F(E1) → X from the splitting principle, and we have p1∗(E1 ⊕ E2) = p1∗(E1) ⊕ p1∗(E2) = L1 ⊕ ... ⊕ Ln ⊕ p1∗(E2). Next take the map p2: F(p1∗(E2)) → F(E1), and we have (p1 ◦ p2)∗(E1 ⊕ E2) = p2∗(L1) ⊕ ... ⊕ p2∗(Ln) ⊕ L1′ ⊕ ... ⊕ Lm′. By property (1), ψ^k(E1 ⊕ E2) pulls back to p2∗(L1)^k + ... + p2∗(Ln)^k + L1′^k + ... + Lm′^k, and ψ^k(E1) + ψ^k(E2) pulls back first to L1^k + ... + Ln^k + ψ^k(p1∗(E2)), and then to p2∗(L1)^k + ... + p2∗(Ln)^k + L1′^k + ... + Lm′^k. By injectivity of both pullbacks we then have additivity of ψ^k.

Next, to show multiplicativity, suppose E and E′ are the sums of line bundles Li and Lj′, respectively. Then E ⊗ E′ is the sum of the line bundles Li ⊗ Lj′, and we calculate

ψ^k(E ⊗ E′) = Σ_{i,j} ψ^k(Li ⊗ Lj′) = Σ_{i,j} (Li ⊗ Lj′)^k = Σ_{i,j} Li^k ⊗ Lj′^k = (Σ_i Li^k)(Σ_j Lj′^k) = ψ^k(E)ψ^k(E′).

Property (2) holds by construction. For property (3) we apply ψ^k ◦ ψ^ℓ to E, which we split over F(E):

p∗(ψ^k(ψ^ℓ(E))) = ψ^k(ψ^ℓ(L1 + ... + Ln)) = ψ^k(ψ^ℓ(L1)) + ... + ψ^k(ψ^ℓ(Ln)) = L1^{kℓ} + ... + Ln^{kℓ} = ψ^{kℓ}(L1 + ... + Ln) = ψ^{kℓ}(p∗(E)) = p∗(ψ^{kℓ}(E)),

and property (3) follows from injectivity of p∗. For property (4), note that E^p − ψ^p(E) = λ^1(E)^p − s_p(λ^1(E), ..., λ^p(E)), and apply proposition 2.14.

By property (1), ψ^k(a) restricts to zero in K(x0) if a does, so the operations restrict to endomorphisms of Ke(X). They respect the reduced external product, i.e. ψ^k(a ∗ b) = ψ^k(a) ∗ ψ^k(b). This follows from the following calculation, with q: X × Y → X ∧ Y the quotient map:

q∗(ψ^k(a ∗ b)) = ψ^k(q∗(a ∗ b)) = ψ^k(pX∗(a)pY∗(b)) = ψ^k(pX∗(a))ψ^k(pY∗(b)) = pX∗(ψ^k(a))pY∗(ψ^k(b)),

which means that ψ^k(a ∗ b) = ψ^k(a) ∗ ψ^k(b). Now we can determine the Adams operations exactly in Ke(S^{2n}).

Proposition 2.15. ψ^k: Ke(S^{2n}) → Ke(S^{2n}) is multiplication by k^n.

Proof. We prove first the case n = 1. Let α = H − 1, with H the canonical line bundle over CP^1, be a generator of Ke(S^2). Then ψ^k(α) = ψ^k(H − 1) = ψ^k(H) − ψ^k(1) = H^k − 1, since H is a line bundle. Now we can rewrite this as ψ^k(α) = (α + 1)^k − 1 = 1 + kα − 1 = kα, since α ∈ Ke(S^2), where all products are trivial. Now we proceed by induction, assuming the proposition holds for n. The external product Ke(S^2) ⊗ Ke(S^{2n}) → Ke(S^{2n+2}) is an isomorphism, so picking generators α and β we have ψ^k(α ∗ β) = ψ^k(α) ∗ ψ^k(β) = kα ∗ k^n β = k^{n+1}(α ∗ β), and since α ∗ β generates Ke(S^{2n+2}) the proof is complete.

Now for the theorem of Adams which will tell us which spheres can be H-spaces. We recall the space Cf constructed in the definition of the Hopf invariant, and the elements α, β ∈ Ke(Cf).

Theorem 2.5 (Adams' Theorem). A map f: S^{4n−1} → S^{2n} of odd Hopf invariant can only exist if n is 1, 2, or 4.

The map f we constructed had Hopf invariant ±1, so the theorem proves that the only odd-dimensional spheres which are H-spaces are S^1, S^3, and S^7 (and in even dimension we of course have only S^0).

Proof. We examine the Adams operations on the elements α and β. Since α is the image of a generator of Ke(S^{4n}), we have ψ^k(α) = k^{2n}α by naturality. β maps to a generator of Ke(S^{2n}), so ψ^k(β) maps to k^n times that generator, so ψ^k(β) = k^n β modulo something in the kernel, i.e. ψ^k(β) = k^n β + µ_k α for some integer µ_k.

Property (4) gives that 2^n β + µ_2 α = ψ^2(β) ≡ β^2 = hα mod 2, with h the Hopf invariant. In the case where the Hopf invariant is odd, this means that µ_2 is odd. Now we note that ψ^k ψ^ℓ = ψ^{kℓ} = ψ^ℓ ψ^k, so that the expression

ψ^k(ψ^ℓ(β)) = ψ^k(ℓ^n β + µ_ℓ α) = k^n ℓ^n β + (k^{2n} µ_ℓ + ℓ^n µ_k)α

is invariant under exchange of k and ℓ. We subtract the expression with these symbols exchanged and note that we can cancel α since it generates a subgroup isomorphic to Z. This gives the equation (k^{2n} − k^n)µ_ℓ = (ℓ^{2n} − ℓ^n)µ_k. Now take k = 2 and ℓ = 3 and we have (2^{2n} − 2^n)µ_3 = (3^{2n} − 3^n)µ_2, or 2^n(2^n − 1)µ_3 = 3^n(3^n − 1)µ_2. If the Hopf invariant is odd, µ_2 and 3^n are odd, so 2^n divides 3^n − 1. Now the proof will be completed by the following lemma:

Lemma 2.1. If 2^n divides 3^n − 1 then n is 1, 2, or 4.

Proof. We take out the even part of n and write n = 2^ℓ m with m odd. Now we determine the highest power of 2 dividing 3^n − 1 in terms of ℓ, by induction; we will find that this is 2 for ℓ = 0 and 2^{2+ℓ} for ℓ > 0.

For ℓ = 0, 3^n − 1 = 3^m − 1 ≡ 2 mod 4: since 3 ≡ −1 mod 4, with m odd we have 3^m ≡ −1 ≡ 3 mod 4. Thus the highest power of 2 dividing 3^n − 1 is 2 for ℓ = 0. For ℓ = 1 we have 3^n − 1 = 3^{2m} − 1 = (3^m − 1)(3^m + 1). We just showed that the highest power of 2 dividing the first factor is 2. Moreover 3^m + 1 ≡ 4 mod 8: since 3^2 ≡ 1 mod 8, with m odd we have 3^m ≡ 3 mod 8. Therefore the highest power of 2 dividing the second factor is 4, and the highest one dividing 3^n − 1 is 8 = 2^{2+1}. Now for the inductive step, with ℓ ≥ 1: going from ℓ to ℓ + 1 means going from n to 2n, so we write 3^{2n} − 1 = (3^n − 1)(3^n + 1). Since 3^2 ≡ 1 mod 4 and n is even here, we have 3^n + 1 ≡ 2 mod 4, so the highest power of 2 dividing the second factor is 2. Therefore the highest power of 2 dividing 3^{2n} − 1 is twice the highest one dividing 3^n − 1, which by the inductive assumption gives 2^{2+(ℓ+1)}.

Now if 2^n divides 3^n − 1, we have n ≤ 2 + ℓ, meaning that 2^ℓ ≤ 2^ℓ m = n ≤ ℓ + 2. This inequality means that ℓ ≤ 2 and thus that n ≤ 4. It is a trivial matter to check the remaining cases n = 1, ..., 4 and see that the ones which work are precisely 1, 2, and 4.
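Both the 2-adic valuation computed in this proof and the conclusion of lemma 2.1 are easy to confirm by brute force; a quick check (our own code, not from the text):

```python
def v2(x):
    """The exponent of the highest power of 2 dividing x > 0."""
    v = 0
    while x % 2 == 0:
        x //= 2
        v += 1
    return v

for n in range(1, 200):
    # the valuation found in the proof:
    # v2(3^n - 1) = 1 for n odd, and 2 + v2(n) for n even
    expected = 1 if n % 2 else 2 + v2(n)
    assert v2(3 ** n - 1) == expected

# 2^n divides 3^n - 1 exactly for n = 1, 2, 4
solutions = [n for n in range(1, 200) if (3 ** n - 1) % 2 ** n == 0]
assert solutions == [1, 2, 4]
```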

3 An-Spaces

3.1 The Stasheff Polytopes and An-Spaces

To generalize the notion of an H-space, which has a multiplication X × X → X with identity, we introduce the notion of An-spaces, for n ≥ 2. An A2-space is simply an H-space. An A3-space is an H-space along with a homotopy between the two ways of multiplying three elements, i.e. (x, y, z) ↦ (xy)z and (x, y, z) ↦ x(yz). Thus an A3-structure on an H-space X is a map I × X^3 → X, whose restrictions to {0} × X^3 and {1} × X^3 are (x, y, z) ↦ (xy)z and (x, y, z) ↦ x(yz), respectively. In the general picture, we have n-ary maps X^n → X, parametrized by the Stasheff polytopes, or associahedra, to be described below.

In [2] Stasheff constructs polytopes Ki, for i ≥ 2, such that Ki is homeomorphic to I^{i−2}. We think of Ki as parametrizing the ways in which i factors can be multiplied together; for instance K3 is the interval [0, 1], so 0 can correspond to (xy)z and 1 to x(yz), with the points in between interpolating between these two "pure" bracketings of the symbols. Each Ki has a boundary Li, such that Ki is the cone CLi on its boundary. Li is made up of facets, each of which is homeomorphic to a product Kr × Ks, with r + s = i + 1. These facets are indexed by the meaningful ways in which a set of parentheses can be inserted into the string x1 ... xi. Inserting parentheses around s symbols, beginning with xj, as

x1 ... x_{j−1} (xj ... x_{j+s−1}) x_{j+s} ... xi,

corresponds to a facet Kr × Ks, and the map which includes this facet into Li is ∂j(r, s): Kr × Ks → Li. The component in Ks then parametrizes how the factors in the parenthesis are multiplied together, while the component in Kr does the same for the outer factors, with the parenthesis regarded as a single factor.

Proposition 3.1. We have the following relations:

(a) ∂j(r, s + t − 1) ◦ (1 × ∂k(s, t)) = ∂_{j+k−1}(r + s − 1, t) ◦ (∂j(r, s) × 1)

(b) If k < j or k > j + s − 1,

∂_{j+s−1}(r + s − 1, t) ◦ (∂k(r, s) × 1) = ∂k(r + t − 1, s) ◦ (∂j(r, t) × 1) ◦ (1 × T)

In [2] these relations are taken as defining the polytopes inductively, with K2 a point. In the proof of proposition 3 Stasheff constructs these polytopes explicitly as subsets Kn ⊂ I^{n−2}. To understand what these relations say, consider for instance the insertion of two sets of parentheses as

x1 ... x_{j−1}(xj ... x_{j+k−2}(x_{j+k−1} ... x_{j+k+t−2})x_{j+k+t−1} ... x_{j+s+t−2})x_{j+s+t−1} ... xi.

Relation (a) then says that the order of insertion of these parentheses is irrelevant. That proposition also states that there are degeneracy maps sj: Ki → K_{i−1}, defined for 1 ≤ j ≤ i, for which the following relations hold:

Proposition 3.2.

1. sj sk = sk s_{j+1} for k ≤ j.

2. sj ∂k(r, s) = ∂_{k−1}(r − 1, s)(sj × 1) for j < k and r > 2.

3. sj ∂k(r, s) = ∂k(r, s − 1)(1 × s_{j−k+1}) for s > 2 and k ≤ j < k + s; sj ∂k(i − 1, 2) = π1 for 1 < j = k < i and 1 < j = k + 1 ≤ i; s1 ∂2(2, i − 1) = π2 and si ∂1(2, i − 1) = π2, πm being projection onto the m'th factor.

4. sj ∂k(r, s) = ∂k(r − 1, s)(s_{j−s+1} × 1) for k + s ≤ j.

These degeneracy maps correspond to deleting an identity factor in a string, as in

x1 ... x_{j−1} e x_{j+1} ... xi → x1 ... x_{j−1} x_{j+1} ... xi.

The degeneracy map then describes how to multiply the remaining i − 1 factors, given a parameter in Ki describing how to multiply the original i factors. Then for instance the first property says that in the expression x1 ... x_{k−1} e x_{k+1} ... xj e x_{j+2} ... xi the parameter describing how to multiply the i − 2 factors in the reduced string is independent of the order in which the identity factors are deleted. Now we take this as the definition of an An-space, or An-structure; by theorem 5 of [2] this is consistent with the terminology used there:

Definition 3.1. A space X with basepoint e is an An-space, or admits an An-structure, if there are maps µi: Ki × X^i → X for 2 ≤ i ≤ n, such that the following hold:

1. µ2(∗, e, x) = µ2(∗, x, e) = x for every x ∈ X, where ∗ is the single point of K2.

2. For ρ ∈ Kr, σ ∈ Ks, with r + s = i + 1, we have

µi(∂k(r, s)(ρ, σ), x1, ..., xi) = µr(ρ, x1, ..., x_{k−1}, µs(σ, xk, ..., x_{k+s−1}), x_{k+s}, ..., xi).

Put another way,

µi ◦ (∂k(r, s) × 1_{X^i}) = µr ◦ (1_{Kr} × 1_X^{k−1} × µs × 1_X^{r−k}) ◦ (1_{Kr} × π_{k−1}),

where π_{k−1} is the map which transposes the Ks-factor past k − 1 X-factors.

3. For τ ∈ Ki and i > 2, we have

µi(τ, x1, ..., x_{j−1}, e, x_{j+1}, ..., xi) = µ_{i−1}(sj(τ), x1, ..., x_{j−1}, x_{j+1}, ..., xi).
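As a combinatorial sanity check on these parameter spaces (our own code, not from [2]): the vertices of Ki correspond to the complete bracketings of i factors, counted by the Catalan number C_{i−1}, and the facets ∂j(r, s)(Kr × Ks) are indexed by the pairs (j, s) with 2 ≤ s ≤ i − 1 and 1 ≤ j ≤ i − s + 1, giving i(i − 1)/2 − 1 facets:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def bracketings(i):
    """Number of complete (binary) bracketings of i ordered factors."""
    if i == 1:
        return 1
    # split at the outermost multiplication: (first k factors)(last i-k factors)
    return sum(bracketings(k) * bracketings(i - k) for k in range(1, i))

def facets(i):
    """Facets of K_i: one for each insertion of a parenthesis around
    s consecutive factors starting at position j."""
    return sum(1 for s in range(2, i) for j in range(1, i - s + 2))

for i in range(2, 12):
    assert bracketings(i) == comb(2 * (i - 1), i - 1) // i  # Catalan C_{i-1}
    assert facets(i) == i * (i - 1) // 2 - 1
```

For i = 3 this gives 2 bracketings and 2 facets (the endpoints of the interval K3), and for i = 4 it gives 5 and 5 (the vertices and edges of the pentagon K4).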

Given an An-space X we get some algebraic structure by applying the singular chain complex functor C∗: we have induced maps

(µi)∗: C∗(Ki × X^i) → C∗(X).

We want to use this to obtain some structure on C∗(X) itself. To this end we shall presently define a cross product ×: C∗(X) ⊗ C∗(Y ) → C∗(X × Y ) of singular chains, whereupon the composition

C∗(Ki) ⊗ C∗(X)^{⊗i} --×--> C∗(Ki × X^i) --(µi)∗--> C∗(X)

gives a way for the elements of C∗(Ki) to act by i-ary operations on C∗(X). We would however like to reduce from this huge set of operations to something smaller; in fact we can reduce to only having one i-ary operation for each i ≤ n. We shall do this at the end of this section by constructing a singular chain γi ∈ C∗(Ki) for each i ≤ n, and defining

mi(σ1, ..., σi) = (µi)∗(γi × σ1 × ... × σi).

3.2 Simplicial constructions

∆^n denotes the n-simplex, defined as the convex hull of the standard basis {e0, ..., en} in R^{n+1}. This is the same as the set of points whose coordinates are nonnegative and sum to 1. Any map from the vertices of ∆^n to those of ∆^m determines by linear extension a map ∆^n → ∆^m. For 0 ≤ k ≤ n, we have face maps fk: ∆^{n−1} → ∆^n, determined by mapping the i'th vertex to the i'th vertex if i < k, and to the (i + 1)'th vertex if i ≥ k.

We wish to construct the chains γi inductively, using the fact that Ki is the cone on its boundary Li. To this end we seek to extend the (i − 3)-simplices of Li to (i − 2)-simplices of Ki; define the cone simplex Cσ: ∆^{n+1} → CX for a singular simplex σ: ∆^n → X simply by applying the cone functor C, and identifying C∆^n ≅ ∆^{n+1} via

C∆^n → ∆^{n+1}; [x, t] ↦ (t, (1 − t)x) ∈ R^{n+2}.

We look at what the functor C does with the face inclusion f_j^n: ∆^n → ∆^{n+1}:

Cf_j^n: ∆^{n+1} ≅ C∆^n → C∆^{n+1} ≅ ∆^{n+2},

(x0, x1, ..., x_{n+1}) ~ [(x1, ..., x_{n+1})/(1 − x0), x0] ↦ [f_j^n((x1, ..., x_{n+1})/(1 − x0)), x0] ~ (x0, ..., xj, 0, x_{j+1}, ..., x_{n+1}),

and this is precisely the inclusion of the (j + 1)'th face of ∆^{n+2}, so we have the relation, letting the identification C∆^n ≅ ∆^{n+1} be implicit, Cf_j^n = f_{j+1}^{n+1}.
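The relation Cf_j^n = f_{j+1}^{n+1} can be checked on vertices, where both sides are determined: the face map fj sends vertex i to i if i < j and to i + 1 otherwise, and the cone adds a new vertex 0 (the apex) and shifts the remaining vertices up by one. A sketch, with our own naming:

```python
def face(j, n):
    """Vertex map of f_j^n: Delta^n -> Delta^{n+1}, skipping vertex j."""
    return [i if i < j else i + 1 for i in range(n + 1)]

def cone(vertex_map):
    """Vertex map of the cone of a map: the apex (new vertex 0) goes to
    the apex, and every old vertex i goes to (old image of i) + 1."""
    return [0] + [v + 1 for v in vertex_map]

# Cf_j^n = f_{j+1}^{n+1} on vertices, for all faces of simplices up to dim 6
for n in range(0, 6):
    for j in range(0, n + 2):
        assert cone(face(j, n)) == face(j + 1, n + 1)
```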

Proposition 3.3. The cone operator satisfies the relation ∂C + C∂ = ι∗ on chains of positive degree, where ι: X ↪ CX is the inclusion at the base of the cone.

Proof. It suffices to prove this for a singular simplex σ: ∆^n → X with n ≥ 1. By definition ∂Cσ is the alternating sum over the face inclusions f_j^n: ∆^n → ∆^{n+1}:

∂Cσ = Σ_{j=0}^{n+1} (−1)^j Cσ ◦ f_j^n,

while

C∂σ = C(Σ_{j=0}^{n} (−1)^j σ ◦ f_j^{n−1}).

Using linearity, and the functoriality of the cone construction, the latter sum can be written as

C∂σ = Σ_{j=0}^{n} (−1)^j Cσ ◦ Cf_j^{n−1} = Σ_{j=0}^{n} (−1)^j Cσ ◦ f_{j+1}^n = −Σ_{j=1}^{n+1} (−1)^j Cσ ◦ f_j^n.

Thus most of the terms cancel and we have ∂Cσ + C∂σ = Cσ ◦ f_0^n. But

(Cσ ◦ f_0^n)(x0, ..., xn) = Cσ(0, x0, ..., xn) = [σ(x0, ..., xn), 0] = (ι ◦ σ)(x0, ..., xn).
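Proposition 3.3 is a purely formal identity, so it can be tested on abstract simplices: represent a simplex by its vertex tuple, a chain by a dictionary of tuples with integer coefficients, the boundary by alternating vertex deletions, and the cone by prepending an apex. A sketch in that spirit (our own encoding, checking positive degrees):

```python
from collections import defaultdict

APEX = 'c'  # the cone point

def boundary(chain):
    """Formal boundary: delete each vertex in turn, with alternating signs."""
    out = defaultdict(int)
    for simplex, coeff in chain.items():
        for j in range(len(simplex)):
            out[simplex[:j] + simplex[j + 1:]] += (-1) ** j * coeff
    return {s: c for s, c in out.items() if c}

def cone(chain):
    """Formal cone: prepend the apex (vertex 0 of the cone) to every simplex."""
    return {(APEX,) + s: c for s, c in chain.items()}

def add(u, v):
    out = defaultdict(int, u)
    for s, c in v.items():
        out[s] += c
    return {s: c for s, c in out.items() if c}

# For simplices of positive dimension, dC(s) + Cd(s) = s  (i.e. iota_* s).
for sigma in [(0, 1), (0, 1, 2), (0, 1, 2, 3)]:
    chain = {sigma: 1}
    assert add(boundary(cone(chain)), cone(boundary(chain))) == chain
```

(For a 0-simplex the left hand side instead produces the simplex minus the cone point, which is why the proposition is stated for positive degrees.)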

Next we construct a cross product for singular chains: given singular simplices σ: ∆^p → X and τ: ∆^q → Y, the product map (σ, τ): ∆^p × ∆^q → X × Y is not a singular chain, since the product of the simplices is not a simplex (except when either is a point). We can however represent this map by a singular chain, essentially by giving a simplicial subdivision of ∆^p × ∆^q and defining σ × τ to be the sum of the restrictions of (σ, τ) to all the subsimplices, with appropriate signs. This is done by defining a set of maps ∆^{p+q} → ∆^p and ∆^{p+q} → ∆^q, whose products then give inclusions of subsimplices ∆^{p+q} ↪ ∆^p × ∆^q. These maps are labelled by shuffles:

Definition 3.2. A (p, q)-shuffle is a partition of the set {1, …, p + q} into a pair of disjoint ordered subsets (µ, ν), of sizes p and q. The sign of a shuffle (µ, ν) is the sign of the permutation (µ_1, …, µ_p, ν_1, …, ν_q). Denote the set of (p, q)-shuffles Sh(p, q). We also generalize this to (p, q, r)-shuffles (µ, ν, λ), whose sign is that of the permutation (µ_1, …, µ_p, ν_1, …, ν_q, λ_1, …, λ_r). Of course this generalizes in the natural way to partitions into any number of sets.

We want to consider ordered collections of integers, with some integers perhaps appearing multiple times. Call these collections multisets; they can be thought of as nondecreasing maps from a finite well-ordered set to the integers. A multiset µ of integers of size p, ordered nondecreasingly, defines a map ∆^{p+q} → ∆^p by the following map of the vertices: a vertex whose index i satisfies µ_j ≤ i < µ_{j+1} is mapped to the vertex with index j, with the conventions µ_0 = 0 and µ_{p+1} = p + q + 1 to cover all cases (µ being nondecreasing means that this condition holds for exactly one j). Note that for a multiset µ with repeated entries this map may miss some vertices. Call this map η^µ : ∆^{p+q} → ∆^p. For each (p, q)-shuffle (µ, ν) we then have a map (η^µ, η^ν) : ∆^{p+q} → ∆^p × ∆^q.

Definition 3.3. The Eilenberg–Zilber map EZ : C∗(X) ⊗ C∗(Y) → C∗(X × Y) is defined on each generator σ ⊗ τ ∈ C_p(X) ⊗ C_q(Y) by

EZ(σ ⊗ τ) = Σ_{(µ,ν)∈Sh(p,q)} sgn(µ, ν) (σ, τ) ∘ (η^µ, η^ν).

We use the shorthand σ × τ = EZ(σ ⊗ τ), and call this the cross product of σ and τ.
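The shuffles, their signs, and the vertex maps η^µ can be implemented directly from definitions 3.2 and 3.3; the sketch below (function names perm_sign, shuffles, eta are ad hoc) lists the signed subsimplices that make up σ × τ for small p and q.

```python
from itertools import combinations
from math import comb

def perm_sign(perm):
    """Sign of a permutation, given as a sequence of distinct integers."""
    sign = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                sign = -sign
    return sign

def shuffles(p, q):
    """All (p, q)-shuffles (mu, nu) of {1, ..., p+q}, with their signs."""
    universe = range(1, p + q + 1)
    for mu in combinations(universe, p):
        nu = tuple(k for k in universe if k not in mu)
        yield mu, nu, perm_sign(mu + nu)

def eta(mu, n):
    """Vertex images of eta^mu: Delta^n -> Delta^{len(mu)}.

    Vertex i maps to the unique j with mu_j <= i < mu_{j+1},
    using the conventions mu_0 = 0 and mu_{end} = n + 1."""
    ext = (0,) + tuple(mu) + (n + 1,)
    return tuple(
        next(j for j in range(len(ext) - 1) if ext[j] <= i < ext[j + 1])
        for i in range(n + 1)
    )

# The terms of EZ on a p-simplex sigma and a q-simplex tau are the maps
# (sigma, tau) o (eta^mu, eta^nu), one per shuffle, with the shuffle's sign:
p, q = 2, 1
terms = [(s, eta(mu, p + q), eta(nu, p + q)) for mu, nu, s in shuffles(p, q)]
assert len(terms) == comb(p + q, p)  # one subsimplex of Delta^p x Delta^q per shuffle
for sign, em, en in terms:
    print(sign, em, en)
```

For p = q = 1 this reproduces the familiar decomposition of the square into two triangles with opposite signs.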

Now we wish to prove four properties of this map which will be necessary in its application:

Proposition 3.4. The cross product σ × τ satisfies the following:

1. It is a chain map C∗(X) ⊗ C∗(Y ) → C∗(X × Y ), i.e.

∂(σ × τ) = EZ(∂σ ⊗ τ + (−1)^{|σ|} σ ⊗ ∂τ) = ∂σ × τ + (−1)^{|σ|} σ × ∂τ.

2. It is associative, i.e. (σ × τ) × ρ = σ × (τ × ρ).

3. For a pair of maps f : X → X′ and g : Y → Y′, the product map's pushforward is

(f × g)∗(σ × τ) = (f∗σ) × (g∗τ).

4. The map T : X × Y → Y × X which transposes the factors has pushforward

T∗(σ × τ) = (−1)^{|σ||τ|} τ × σ.
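Independently of the proof below, the combinatorial cancellation behind property 1 can be machine-checked in low degrees. In this sketch (ad-hoc helper names, not notation from the text) an affine simplex of ∆^p × ∆^q is represented by its pair of vertex-image tuples, the boundary deletes the k'th vertex with sign (−1)^k, and the two sides of the chain map identity are compared as formal sums.

```python
from itertools import combinations
from collections import Counter

def perm_sign(perm):
    sign = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                sign = -sign
    return sign

def shuffles(p, q):
    universe = range(1, p + q + 1)
    for mu in combinations(universe, p):
        nu = tuple(k for k in universe if k not in mu)
        yield mu, nu, perm_sign(mu + nu)

def eta(mu, n):
    ext = (0,) + tuple(mu) + (n + 1,)
    return tuple(next(j for j in range(len(ext) - 1) if ext[j] <= i < ext[j + 1])
                 for i in range(n + 1))

def compose(outer, inner):
    # Vertex-map composition: (outer o inner)(i) = outer[inner[i]].
    return tuple(outer[i] for i in inner)

def face(k, n):
    # Vertex images of f_k: Delta^{n-1} -> Delta^n (the face skipping vertex k).
    return tuple(i if i < k else i + 1 for i in range(n))

def ez(p, q, left=None, right=None):
    # EZ applied to affine simplices given by vertex-image tuples
    # (defaults: the identity p- and q-simplices), as a formal sum.
    if left is None:
        left = tuple(range(p + 1))
    if right is None:
        right = tuple(range(q + 1))
    out = Counter()
    for mu, nu, s in shuffles(p, q):
        out[(compose(left, eta(mu, p + q)), compose(right, eta(nu, p + q)))] += s
    return out

def boundary(chain):
    # Singular boundary: alternating sum of vertex deletions, in both factors.
    out = Counter()
    for (a, b), c in chain.items():
        for k in range(len(a)):
            out[(a[:k] + a[k + 1:], b[:k] + b[k + 1:])] += c * (-1) ** k
    return out

def clean(chain):
    return Counter({key: c for key, c in chain.items() if c != 0})

# Check d(sigma x tau) = d(sigma) x tau + (-1)^p sigma x d(tau) for (p, q) = (2, 2).
p, q = 2, 2
lhs = clean(boundary(ez(p, q)))
rhs = Counter()
for k in range(p + 1):
    for key, c in ez(p - 1, q, left=face(k, p)).items():
        rhs[key] += (-1) ** k * c
for k in range(q + 1):
    for key, c in ez(p, q - 1, right=face(k, q)).items():
        rhs[key] += (-1) ** (p + k) * c
assert lhs == clean(rhs)
print("chain map identity verified for (p, q) =", (p, q))
```

Because affine simplices are determined by their vertex images, the formal-sum comparison here matches equality of singular chains.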

Before going into the proof of this, let us prove a few relations between the maps η^µ and the simplicial face maps f_k. For convenience fix the integers p and q throughout. Let the notation µ↓k mean the ordered multiset obtained from µ by subtracting 1 from each entry greater than k. Note that this may create extra copies of k, in the case where one or more entries of µ are k + 1. Let µ↑k mean the ordered multiset obtained from µ by doubling the k'th entry (which in the case k = 0 means inserting a zero as the first entry, and in the case k = |µ| + 1 means inserting p + q as the last entry). We then have the following relations:

η^µ ∘ f_k = η^{µ↓k} : ∆^{p+q−1} → ∆^{p+q} → ∆^p,        f_k ∘ η^µ = η^{µ↑k} : ∆^{p+q−1} → ∆^{p−1} → ∆^p.
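Alongside the pencil-and-paper proof below, the two displayed relations can be checked mechanically on vertex maps. Here ↓k and ↑k are implemented as the operations down and up (ad-hoc names), and both identities are tested for every k on sample shuffle components.

```python
def eta(mu, n):
    """Vertex images of eta^mu: Delta^n -> Delta^{len(mu)},
    with the conventions mu_0 = 0 and mu_{end} = n + 1."""
    ext = (0,) + tuple(mu) + (n + 1,)
    return tuple(next(j for j in range(len(ext) - 1) if ext[j] <= i < ext[j + 1])
                 for i in range(n + 1))

def face(k, n):
    """Vertex images of f_k: Delta^{n-1} -> Delta^n."""
    return tuple(i if i < k else i + 1 for i in range(n))

def compose(outer, inner):
    return tuple(outer[i] for i in inner)

def down(mu, k):
    """mu 'down' k: subtract 1 from every entry greater than k."""
    return tuple(m - 1 if m > k else m for m in mu)

def up(mu, k, top):
    """mu 'up' k: double the k'th entry (1-indexed);
    k = 0 prepends 0, k = len(mu) + 1 appends top (= p + q)."""
    if k == 0:
        return (0,) + tuple(mu)
    if k == len(mu) + 1:
        return tuple(mu) + (top,)
    return tuple(mu[:k]) + (mu[k - 1],) + tuple(mu[k:])

p, q = 2, 2
mu = (1, 3)          # a component of a (p, q)-shuffle
for k in range(p + q + 1):
    # eta^mu o f_k = eta^{mu down k} on Delta^{p+q-1}
    assert compose(eta(mu, p + q), face(k, p + q)) == eta(down(mu, k), p + q - 1)

nu = (2,)            # a component of a (p - 1, q)-shuffle
for k in range(len(nu) + 2):
    # f_k o eta^nu = eta^{nu up k} on Delta^{p+q-1}
    assert compose(face(k, p), eta(nu, p + q - 1)) == eta(up(nu, k, p + q), p + q - 1)
print("both relations hold on all vertices")
```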

We prove the first one first, by looking at the vertices. Label the vertices of the three simplices, from left to right, {u_i}, {v_i}, and {w_i}, respectively. For i < k, f_k(u_i) = v_i, and η^µ(v_i) = w_j with µ_j ≤ i < µ_{j+1} ⇔ (µ↓k)_j ≤ i < (µ↓k)_{j+1}, this equivalence holding since µ and µ↓k agree on entries ≤ k. Now for i ≥ k, f_k(u_i) = v_{i+1}, and η^µ(v_{i+1}) = w_j, with µ_j ≤ i + 1 < µ_{j+1} ⇔ µ_j − 1 ≤ i < µ_{j+1} − 1. This inequality holds for i, j iff (µ↓k)_j ≤ i < (µ↓k)_{j+1} holds, since µ_{j+1} − 1 = (µ↓k)_{j+1}, and either µ_j − 1 = (µ↓k)_j, or µ_j ≤ k, in which case both µ_j − 1 ≤ i and (µ↓k)_j = µ_j ≤ i hold, as i ≥ k. Thus the first relation holds on all vertices.

For the second one, if i < µ_k, we have η^µ(u_i) = v_j and f_k(v_j) = w_j, with µ_j ≤ i < µ_{j+1} ⇔ (µ↑k)_j ≤ i < (µ↑k)_{j+1}, this equivalence holding since µ and µ↑k agree on entries with index ≤ k. If i ≥ µ_k, we have η^µ(u_i) = v_j and f_k(v_j) = w_{j+1}, with µ_j ≤ i < µ_{j+1} ⇔ (µ↑k)_{j+1} ≤ i < (µ↑k)_{j+2}, this equivalence holding since (µ↑k)_{j+1} = µ_j for j ≥ k. Thus the second relation holds on all vertices.

Now we go ahead and prove proposition 3.4.

Proof. We prove the above for singular simplices, and it will hold for all chains by linearity.

1. In ∂(σ × τ) certain terms will cancel and the others make up the desired RHS. By definition the boundary of the product is given by the signed sum over the precompositions with the face maps:

∂(σ × τ) = Σ_{(µ,ν)∈Sh(p,q)} Σ_{k=0}^{p+q} (−1)^k sgn(µ, ν) (σ, τ) ∘ (η^µ, η^ν) ∘ f_k = Σ_{(µ,ν)∈Sh(p,q)} Σ_{k=0}^{p+q} (−1)^k sgn(µ, ν) (σ, τ) ∘ (η^{µ↓k}, η^{ν↓k}).

For 0 < k < p + q, note that k is distinguished in (µ↓k, ν↓k), as the only integer appearing twice. Thus the term corresponding to (µ, ν, k) can only be cancelled by a term corresponding to (α, β, k), for some shuffle (α, β) such that (α↓k, β↓k) = (µ↓k, ν↓k). This is possible iff k and k + 1 are separated in the partition (µ, ν). In that case transposing them gives another shuffle (α, β), and this transposition makes no difference after applying ↓k. The sign of (α, β) is the negative of that of (µ, ν), since the corresponding permutations differ by one transposition. Therefore the corresponding terms cancel in ∂(σ × τ). Now what remains are the terms coming from shuffles where either µ or ν contains both k and k + 1, and the terms with k = 0 or k = p + q. Let us look at the terms where k, k + 1 ∈ µ, or k = 0 with 1 ∈ µ, or k = p + q ∈ µ:

Σ_{(µ,ν),k} (−1)^k sgn(µ, ν) (σ, τ) ∘ (η^{µ↓k}, η^{ν↓k}).    (3.1)

This will correspond to the term ∂σ × τ:

Σ_{(α,β)∈Sh(p−1,q)} Σ_{k=0}^{p} (−1)^k sgn(α, β) (σ ∘ f_k, τ) ∘ (η^α, η^β) = Σ_{(α,β)∈Sh(p−1,q)} Σ_{k=0}^{p} (−1)^k sgn(α, β) (σ, τ) ∘ (η^{α↑k}, η^β).    (3.2)

Now each (α↑k, β) can be obtained from a (p, q)-shuffle (µ, ν): simply take (α↑k, β) and add 1 to all entries strictly greater than α_k, as well as to the second copy of α_k. Then (α↑k, β) = (µ↓α_k, ν↓α_k). Note conversely that each (µ↓k, ν↓k) in (3.1) is uniquely obtained from a (p − 1, q)-shuffle (α, β): simply take (µ↓k, ν↓k) and remove the entry k from µ↓k. Then (µ↓k, ν↓k) = (α↑ℓ, β), where ℓ is the index of k in µ.

Thus the terms in (3.1) and (3.2) match up one-to-one, and to show equality we only need to show that the signs agree. The term (−1)^k sgn(α, β) (σ, τ) ∘ (η^{α↑k}, η^β) corresponds to the term (−1)^{α_k} sgn(µ, ν) (σ, τ) ∘ (η^{µ↓α_k}, η^{ν↓α_k}), with µ and ν formed as described above. sgn(α, β) is the sign of the permutation of (1, …, p + q) taking it to (α_1, …, α_{p−1}, β_1, …, β_q, p + q), while sgn(µ, ν) is the sign of the permutation

(α_1, …, α_k, α_k + 1, α_{k+1} + 1, …, α_{p−1} + 1, β_1, …, β_ℓ, β_{ℓ+1} + 1, …, β_q + 1).

If we precompose the first permutation with the stepwise descending cycle ((p + q) (p + q − 1) ⋯ α_k), we get the permutation

(α_1, …, α_{k−1}, α_k + 1, α_{k+1} + 1, …, α_{p−1} + 1, β_1, …, β_ℓ, β_{ℓ+1} + 1, …, β_q + 1, α_k).

To get our desired permutation we simply transpose α_k past q + (p − 1 − (k − 1)) = p + q − k entries. The cycle has length p + q − (α_k − 1), so contributes a sign (−1)^{p+q−α_k}. The transpositions contribute a sign (−1)^{p+q−k}. Thus in sum the signs of the two permutations differ by

sgn(µ, ν) = (−1)^{2p+2q−α_k−k} sgn(α, β) = (−1)^{k−α_k} sgn(α, β).

Inserting this into our comparison of terms we see that the signs agree, so each term of (3.1) equals one of (3.2), and vice versa. The above holds symmetrically for the term (−1)^p σ × ∂τ, with the sign difference coming from the fact that β_k in the end is transposed only past q − 1 − (k − 1) = q − k entries, so we get a total sign difference (−1)^{p+q−β_k+q−k} = (−1)^{|σ|+k−β_k}.

2. Let σ, τ, ρ be p-, q-, and r-simplices, respectively. Then (σ × τ) × ρ is

Σ_{(µ,ν)∈Sh(p,q)} sgn(µ, ν) ((σ ∘ η^µ, τ ∘ η^ν) × ρ) = Σ_{(µ,ν)∈Sh(p,q), (θ,λ)∈Sh(p+q,r)} sgn(µ, ν) sgn(θ, λ) ((σ ∘ η^µ, τ ∘ η^ν), ρ) ∘ (η^θ, η^λ).

We compute the composition η^µ ∘ η^θ. We have η^θ(u_i) = v_j, with θ_j ≤ i < θ_{j+1}, and η^µ(v_j) = w_k, with µ_k ≤ j < µ_{k+1}. But if we define (µθ)_k = θ_{µ_k}, we have η^{µθ}(u_i) = w_k, with θ_{µ_k} ≤ i < θ_{µ_{k+1}}. With j = µ_k we then have θ_j ≤ i < θ_{j+1} and µ_k ≤ j < µ_{k+1}, so η^µ ∘ η^θ = η^{µθ}. Since (µ, ν) and (θ, λ) are shuffles, (µθ, νθ, λ) is a (p, q, r)-shuffle, and every such shuffle can be obtained in this way by an appropriate choice of µ, ν, θ. Note that there are no double-countings of (p, q, r)-shuffles in the above sum, since λ determines θ, and then for each (µ, ν) we have distinct (µθ, νθ, λ). Now for the sign, note that sgn(µ, ν) is the sign of the permutation of (1, …, p + q + r) taking it to (µ_1, …, µ_p, ν_1, …, ν_q, p + q + 1, …, p + q + r). Applying this after the permutation (θ_1, …, θ_{p+q}, λ_1, …, λ_r) gives the permutation

(θ_{µ_1}, …, θ_{µ_p}, θ_{ν_1}, …, θ_{ν_q}, λ_1, …, λ_r),

whose sign is sgn(µθ, νθ, λ). Thus we have shown that (σ × τ) × ρ can be written as the following sum over (p, q, r)-shuffles:

Σ_{(α,β,λ)∈Sh(p,q,r)} sgn(α, β, λ) (σ ∘ η^α, τ ∘ η^β, ρ ∘ η^λ).

Now all the above holds symmetrically, expressing σ × (τ × ρ) as the same sum; the one difference of note is that we use the permutation (1, …, r, µ_1 + r, …, µ_p + r, ν_1 + r, …, ν_q + r), whose sign is also sgn(µ, ν).

3. This is immediate from the following calculation:

(f × g)∗(σ × τ) = Σ_{(µ,ν)} sgn(µ, ν) (f × g) ∘ (σ, τ) ∘ (η^µ, η^ν)

= Σ_{(µ,ν)} sgn(µ, ν) (f ∘ σ, g ∘ τ) ∘ (η^µ, η^ν) = EZ(f∗σ ⊗ g∗τ).

4. T∗(σ × τ) = Σ_{(µ,ν)} sgn(µ, ν) T ∘ (σ, τ) ∘ (η^µ, η^ν) = Σ_{(µ,ν)} sgn(µ, ν) (τ, σ) ∘ (η^ν, η^µ).

Now the permutation (µ, ν) is obtained from (ν, µ) by moving q entries past p entries, meaning that the sign difference is (−1)^{pq}. Thus

T∗(σ × τ) = (−1)^{pq} Σ_{(ν,µ)∈Sh(q,p)} sgn(ν, µ) (τ, σ) ∘ (η^ν, η^µ) = (−1)^{|σ||τ|} EZ(τ ⊗ σ).

These properties imply in particular that we can unambiguously write a product of chains σ_1 × ⋯ × σ_n without inserting parentheses, and that

∂(σ_1 × ⋯ × σ_n) = Σ_{j=1}^{n} (−1)^{|σ_1|+⋯+|σ_{j−1}|} σ_1 × ⋯ × ∂σ_j × ⋯ × σ_n.

Having defined this cross product, we are ready to use it to construct our singular representations of the Stasheff polytopes.

3.3 Singular Representations of the Associahedra

Denote by F_i the set of face-inclusions ∂_j(r, s) : K_r × K_s → L_i ⊂ K_i. Our inductive construction of the γ_i starts by letting γ_2 be the only map ∆^0 → K_2. Then, having constructed γ_2 through γ_{i−1}, let λ_i ∈ C∗(L_i) be the singular chain defined by

λ_i = Σ_{∂_j(r,s)∈F_i} (−1)^{j(s+1)+i} ∂_j(r, s)∗(γ_r × γ_s).    (3.3)

This chain represents the boundary of K_i by including the cross products of the chains representing the factors of its facets, with signs chosen so that this chain is closed, and so as to agree with the sign conventions of an A∞-algebra. We simply let γ_i be the cone Cλ_i.

Proposition 3.5. ∂λi = 0 for every i ≥ 3.

Proof. Since each facet of a facet in L_i is a common facet of exactly two facets of L_i, the terms in the boundary of each facet will cancel with terms in the boundaries of neighbouring facets. The base case of the induction argument holds since λ_3 is a sum of pushforwards of the product γ_2 × γ_2 of closed chains. Now for the inductive step, ∂λ_ℓ = 0 for ℓ < i implies that ∂γ_ℓ = ∂Cλ_ℓ = ι∗λ_ℓ; we simplify notation and omit the inclusion ι∗.

Let us look at the boundary of the term (−1)^{k(s+1)+i} ∂_k(r + t − 1, s)∗(γ_{r+t−1} × γ_s), noting that every term takes this form for appropriately chosen indices. Consider the part of the boundary where the differential acts on the left γ-factor. Either r + t − 1 is 2, in which case that part is zero (γ_2 being closed); otherwise ∂γ_{r+t−1} = λ_{r+t−1}, and we pick out the part coming from the term (−1)^{j(t+1)+r+t−1} ∂_j(r, t)∗(γ_r × γ_t):

(−1)^ρ ∂_k(r + t − 1, s)∗(∂_j(r, t)∗(γ_r × γ_t) × γ_s) = (−1)^ρ (∂_k(r + t − 1, s) ∘ (∂_j(r, t) × 1))∗(γ_r × γ_t × γ_s),

with ρ = k(s + 1) + i + j(t + 1) + r + t − 1. Now we have two cases:

• j ≤ k ≤ j + s − 1: In this case

∂_k(r + t − 1, s) ∘ (∂_j(r, t) × 1) = ∂_j(r, t + s − 1) ∘ (1 × ∂_{k−j+1}(t, s)), so the term considered above can be written as

(−1)^ρ (∂_j(r, t + s − 1) ∘ (1 × ∂_{k−j+1}(t, s)))∗(γ_r × γ_t × γ_s).    (3.4)

A term like this, up to sign, can only come from the boundary of the term

(−1)^{j(t+s)+i} ∂_j(r, t + s − 1)∗(γ_r × γ_{t+s−1}), and from a term in the part of the boundary where the differential acts on the factor γ_{t+s−1}. This term is

(−1)^σ (∂_j(r, t + s − 1) ∘ (1 × ∂_{k−j+1}(t, s)))∗(γ_r × γ_t × γ_s),    (3.5)

with σ = j(t + s) + i + (k − j + 1)(s + 1) + t + s − 1 + r. (Note the sign (−1)^r from moving the differential past γ_r.) Now canceling terms shows that σ = ρ + 1 (mod 2), so the terms in (3.4) and (3.5) cancel.

• ¬(j ≤ k ≤ j + s − 1): In this case

∂_k(r + t − 1, s) ∘ (∂_j(r, t) × 1) = ∂_{j+s−1}(r + s − 1, t) ∘ (∂_k(r, s) × 1) ∘ (1 × T), so the term considered above can be written as

(−1)^{ρ+st} (∂_{j+s−1}(r + s − 1, t) ∘ (∂_k(r, s) × 1))∗(γ_r × γ_s × γ_t),    (3.6)

since T∗(γ_t × γ_s) = (−1)^{st} γ_s × γ_t. A term like this, up to sign, can only come from the boundary of the term (−1)^{(j+s−1)(t+1)+i} ∂_{j+s−1}(r + s − 1, t)∗(γ_{r+s−1} × γ_t), and from a term in the part of the boundary where the differential acts on the factor γ_{r+s−1}. This term is

(−1)^σ (∂_{j+s−1}(r + s − 1, t) ∘ (∂_k(r, s) × 1))∗(γ_r × γ_s × γ_t),    (3.7)

with σ = (j + s − 1)(t + 1) + i + k(s + 1) + r + s − 1. Now canceling terms shows that σ = ρ + st + 1 (mod 2), so the terms in (3.6) and (3.7) cancel.

The conclusion is that every term in the boundary of λi where the differential has acted on the left γ-factor will cancel with a unique term of ∂λi. Because of the relation in proposition (3.1) (a), we see also that each term where the differential has acted on the right γ-factor cancels with one of the other terms. Thus the boundary of λi is zero.

Now we have constructed chains γ_i representing all the K_i, with ∂γ_i = λ_i. We can view these chains, which represent K_n and all its facets and subfacets, as making up a subcomplex of C∗(K_n). We define this inductively: A_2 is ⟨γ_2⟩ ≅ Z, and A_i is the sum of the images of all the maps ∂_j(r, s)∗ ∘ ×: A_r ⊗ A_s → C∗(K_i), along with the subgroup generated by γ_i. Finally we can use these chain representatives to construct n-ary operations on the chain complex C∗(X), which give the structure of an A∞-algebra.

Theorem 3.1. Given an A_n-structure on X, the maps on C∗(X)

m_k(a_1, …, a_k) = (µ_k)∗(γ_k × a_1 × ⋯ × a_k),    m_1 = ∂    (3.8)

satisfy the A_n-relations

Σ_{r+s=k+1, r,s≥1} Σ_{j=1}^{k−s+1} (−1)^ρ m_r(a_1, …, a_{j−1}, m_s(a_j, …, a_{j+s−1}), …, a_k) = 0    (3.9)

for all k ≤ n, where ρ = j(s + 1) − 1 + s(|a_1| + ⋯ + |a_{j−1}|). Cf. definition 2.1 of [3] for these relations, where more consequences of the theorem above, which is theorem 2.3 of that article, are given.

Proof. The term with r = 1, s = k is

(−1)^k ∂m_k(a_1, …, a_k) = (−1)^k (µ_k)∗∂(γ_k × a_1 × ⋯ × a_k).

We express this as the negative of the rest of the sum (3.9) by applying the differential to each factor of the cross product in turn. We deal first with the case when the differential acts on one of the a_j: these terms make up the sum

(−1)^k (µ_k)∗ Σ_{j=1}^{k} (−1)^{k+|a_1|+⋯+|a_{j−1}|} (γ_k × a_1 × ⋯ × ∂a_j × ⋯ × a_k) = Σ_{j=1}^{k} (−1)^{|a_1|+⋯+|a_{j−1}|} m_k(a_1, …, m_1(a_j), …, a_k).

This is precisely the negative of the part of (3.9) where r = k, s = 1. Next we look at the terms where the differential acts on γ_k. Since ∂γ_k = λ_k, this is the sum

(−1)^k Σ_{∂_j(r,s)∈F_k} (−1)^{j(s+1)+k} (µ_k)∗(∂_j(r, s)∗(γ_r × γ_s) × a_1 × ⋯ × a_k) = Σ_{∂_j(r,s)∈F_k} (−1)^{j(s+1)} (µ_k ∘ (∂_j(r, s) × 1_{X^k}))∗(γ_r × γ_s × a_1 × ⋯ × a_k).

Now because of the A_n-structure, the map µ_k ∘ (∂_j(r, s) × 1_{X^k}) is equal to

µ_r ∘ (1_{K_r} × 1_{X^{j−1}} × µ_s × 1_{X^{r−j}}) ∘ (1_{K_r} × π_{j−1}),

where π_{j−1} permutes the K_s-factor past j − 1 X-factors. As a sequence of transpositions, this induces a pushforward which permutes cross product factors with a sign:

(π_{j−1})∗(γ_s × a_1 × ⋯ × a_k) = (−1)^{s(|a_1|+⋯+|a_{j−1}|)} (a_1 × ⋯ × a_{j−1} × γ_s × a_j × ⋯ × a_k).

Therefore the sum becomes

Σ_{∂_j(r,s)∈F_k} (−1)^{j(s+1)+s(|a_1|+⋯+|a_{j−1}|)} (µ_r)∗(γ_r × a_1 × ⋯ × a_{j−1} × (µ_s)∗(γ_s × a_j × ⋯ × a_{j+s−1}) × ⋯ × a_k)

= Σ_{∂_j(r,s)∈F_k} (−1)^{j(s+1)+s(|a_1|+⋯+|a_{j−1}|)} m_r(a_1, …, a_{j−1}, m_s(a_j, …, a_{j+s−1}), …, a_k),

and this is precisely the negative of the rest of the terms in (3.9), noting that the indices j, r, s for the face inclusions have the same constraints as in that sum.
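For orientation, the first three instances of relations (3.9), unwound with the stated ρ, recover the familiar statements that m_1 squares to zero, that m_2 is a chain map, and that m_2 is associative up to the chain homotopy m_3:

```latex
\begin{align*}
k = 1:\quad & m_1 m_1(a_1) = 0,\\
k = 2:\quad & m_1 m_2(a_1, a_2) = m_2(m_1 a_1, a_2) + (-1)^{|a_1|} m_2(a_1, m_1 a_2),\\
k = 3:\quad & m_2(m_2(a_1, a_2), a_3) - m_2(a_1, m_2(a_2, a_3)) \\
            & \quad = m_1 m_3(a_1, a_2, a_3) + m_3(m_1 a_1, a_2, a_3) \\
            & \qquad + (-1)^{|a_1|} m_3(a_1, m_1 a_2, a_3)
              + (-1)^{|a_1| + |a_2|} m_3(a_1, a_2, m_1 a_3).
\end{align*}
```

These agree with the sign conventions of definition 2.1 of [3].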

References

[1] Allen Hatcher. Vector Bundles and K-Theory. https://pi.math.cornell.edu/~hatcher/VBKT/VB.pdf.

[2] J.D. Stasheff. Homotopy Associativity of H-spaces I. Transactions of the American Mathematical Society, 108:275–292, August 1963.

[3] J.D. Stasheff. Homotopy Associativity of H-spaces II. Transactions of the American Mathematical Society, 108:293–312, August 1963.
