THE WEDGE SUM AND THE SMASH PRODUCT IN TYPE THEORY

MASTER THESIS, MATHEMATICAL INSTITUTE, LMU MUNICH

ANDREAS FRANZ

THESIS SUPERVISOR: DR. IOSIF PETRAKIS

Abstract. Martin-Löf’s intensional type theory (ITT) can be extended with higher inductive types (HITs), which generalize inductive types since, in addition to point constructors, constructors that output (higher) paths are allowed. We focus on a specific class of HITs, the so-called pushout types, which define important types from the geometric point of view. In order to show that pushouts are invariant under homotopy, and thus feasible in the framework of homotopy type theory, we develop a tool-set to induce the right morphisms between them. We elaborate two fundamental examples of pushout types, the wedge sum and the smash product of pointed types. Using our tool-set we show that these two pushouts correspond to the coproduct and the tensor product, respectively, in the subcategory of pointed types and base-point preserving functions.

Contents
1. Introduction
2. Intensional type theory
2.1. Function types
2.2. Dependent function types
2.3. Inductive types
2.3.1. Identity types
2.3.2. Empty type
2.3.3. Unit type

Date: August 25, 2017.

2.3.4. Pair types
2.3.5. Dependent pair types
2.3.6. Coproduct types
2.4. Propositions as types
3. The homotopy interpretation
3.1. Basic behavior of functions and identifications
3.2. Homotopies and equivalences
4. Pushouts
4.1. Basics
4.2. Pushout functions
4.3. Homotopy invariance of pushout types
5. The wedge sum
6. The smash product
7. Conclusion
References

1. Introduction

In this thesis we study an extension of intensional type theory (ITT) by higher inductive types (HITs). ITT was originally proposed by Per Martin-Löf in the 1970s [9] as a formal system for doing constructive mathematics, e.g. in the sense of Bishop [2]. In Section 2 we give an elementary introduction to this theory. In particular, we include several standard examples of inductive types. Roughly speaking, an inductive type is defined by giving its constructors (constructors can be seen as functions that generate terms of the type being defined) and specifying an induction principle, which expresses that the inductive type is “freely generated” by these constructors. The identity type of two terms x, y of type A, written x =_A y, is an example of an inductive type that will be of particular interest to us. It has just one constructor refl_a : a =_A a, expressing that every term is equal to itself. Similar to sets, types can have inhabitants, and one might think of identity types x =_A y as sets with at most one element, depending on whether x equals y or not. The idea that any two inhabitants of some identity type are equal is also called the principle of uniqueness of identity proofs (UIP), and ITT becomes extensional when we add such a principle. This approach has some disadvantages; in particular, type checking is no longer algorithmically decidable, whereas in ITT it is. The first hint that it is reasonable to refute UIP was provided by Hofmann and Streicher [7], who gave a model of ITT in which the interpretation of identity types has indeed multiple elements. The novel point of view taken in homotopy type theory is to think of types A not as sets, but as spaces, and in Section 3 we present basic results that support this nomenclature. Under the types as spaces interpretation the terms a : A become points, and the inhabitants of the identity types x =_A y become paths between x and y in the space A.
What is more, this interpretation can be extended in a natural way when forming identity types recursively, i.e. from x, y : A we can form the type x =_A y of identifications or paths between x and y, from p, q : x =_A y we can form the type p =_{x=_Ay} q of identifications between those identifications, or homotopies, and so on. Under the types as spaces interpretation it is not desirable anymore to assume that identity types are inhabited by a single term, since there may be several paths between two points that are not homotopic to each other. In Section 4 we present pushout types, a specific class of higher inductive types (HITs) that generalize inductive types, since constructors that generate identifications (which may now be unequal to reflexivity) are allowed. Pushout is a type former, where from three types A, B and C together with two functions f : C → A and g : C → B we may form the pushout type A ⊔_C B. The reason we restrict ourselves to pushout types is that, at the present moment, a general theory of higher inductive types is still missing. Still, pushout types provide a general enough framework to formally represent, and thus reason about, a large class of spaces in type theory that have nontrivial homotopy, such as the circle or the interval, without having to consider point-set topology first. Additionally, they are interesting also from a purely type-theoretic point of view. As an example we include the proof of function extensionality (see Lemma 4.6). In Section 4.3 we show that the pushout of types is invariant under homotopy, in the sense that if we have equivalences α : A → A′, β : B → B′ and γ⁻¹ : C′ → C, the pushouts A ⊔_C B of (A, B, C, f, g) and A′ ⊔_{C′} B′ of (A′, B′, C′, α∘f∘γ⁻¹, β∘g∘γ⁻¹) are equivalent. To this end we develop a tool-set to induce the right morphisms between pushout types. In the last two sections of this work we restrict our attention to pointed types and base-point preserving maps, i.e. every type A comes together with a specified base-point ∗_A : A, and every map f : A → B comes together with a proof p_f : ∗_B =_B f(∗_A). We present several examples of important pushouts in this setting, in particular the appropriate notions for a (binary) coproduct, called the “wedge sum”, and a tensor product, called the “smash product”, of pointed types. We note that related work has been done by Guillaume Brunerie [3], Evan Cavallo [5] and Robert Graham [6], and parts of their work have been formalized in Agda (see [4]).

Acknowledgements. I want to thank my thesis supervisor Dr. Iosif Petrakis, as well as Dr. Chuangjie Xu, for their guidance and support throughout this project. They suggested countless improvements to this work and greatly helped me to develop my understanding of homotopy type theory. Without the many hours they took to read and discuss my ideas, I would not have come as far. I also want to express my appreciation to Prof. Schuster, who gave me the opportunity to work on homotopy type theory during a research internship at the University of Verona and invited me to a conference, where I presented some of my results. None of this would have been possible without my supervisor Dr. Petrakis, who introduced us and could not have been more encouraging to me.

2. Intensional type theory

Before we present the basic types of ITT, we briefly discuss some of the underlying ideas of the theory. We refer to chapter one of the HoTT book [10] for a more detailed description. In ITT there is a primitive notion of collection, called a type, and we call the elements of types terms, inhabitants or points. Membership of a term a in a type A, written a : A, and judgmental equality between two terms a : A and a′ : A, written a ≡ a′, are the two basic judgments of type theory. We note that judgments bind more loosely than anything else. The basic rules of ITT stipulate certain constructions on types and terms that we are allowed to perform. We will make use of a universe type, denoted by U, whose inhabitants are themselves types. For consistency reasons U cannot inhabit itself, so formally we have to work inside an infinite hierarchy of universes U₀ : U₁ : U₂ : ..., but since we will never leave the scope of a particular universe, we will simply omit the subscript. It is important to understand that the rules of a specific type theory are not formulated inside some meta-theory, but instead have to be specified separately whenever we introduce a new type; thus ITT is really a collection of concrete types, each coming with its own set of rules. The general pattern to define a new type is as follows. We need to give:
(i) A formation rule, explaining how to form new types, that is, how to infer judgments of the form A : U that A is a type. We can for example introduce the type of functions A → B : U if we are given two types A : U and B : U.
(ii) Introduction rules, explaining how to introduce new terms of a type. The introduction rules for the type of natural numbers N say that 0 : N, i.e. zero is a natural number, and, given some number n : N, also its successor is a natural number, that is, succ(n) : N.
(iii) Elimination rules, also called induction principles, explaining how to use inhabitants of a type. For example, the elimination rule for function types expresses that we can apply a function f : A → B to an inhabitant a of A in the domain of the function, and this yields an inhabitant f(a) of B in the codomain.
(iv) Computation rules, explaining the procedural behavior of eliminators acting on points. For instance, if we are given some b : B, we can introduce the constant function b_A : A → B with the obvious computation rule b_A(a) ≡ b.
(v) Optionally, a uniqueness principle for maps into or out of the type.
We can interpret first-order logic directly in ITT by assigning the quantifiers and logical connectives appropriate formation rules in such a way that the introduction and elimination rules correspond precisely to the rules of logical inference. In other words, propositions are special types, and proving a proposition means to construct an inhabitant of the corresponding type (we may even speak of a proof-term in this situation). For instance, logical implication is naturally interpreted in function types, and proving an implication is identified with the act of constructing a function of appropriate type. This is also called the propositions as types interpretation or Curry-Howard isomorphism (see [8] for Howard's original paper). To give a complete presentation of this interpretation, but also for convenience, we introduce types for each of the logical connectives, even though some could be introduced as special cases of others.

Lastly we note that in the setting of type theory, terms cannot be introduced in isolation from their type; the type of a term is part of its intrinsic nature. Consequently it does not make sense to ask whether a term a has type A. This sort of question would be sensible in the setting of set theory, where a ∈ A is a proposition which has to be proven inside the superstructure of first-order logic. Under the propositions as types interpretation, a : A is not a proposition but expresses the judgment that a is a proof of the proposition A.

2.1. Function types. Given two types A and B, we may form the type of functions A → B. When forming function types, we associate to the right, i.e. A → B → C is to be read as A → (B → C). The most basic way to construct elements of a function type is by λ-abstraction. Given some term Φ: B, possibly involving a free occurrence of the variable x : A, we can set:

λx.Φ: A → B

The computation rule, also called β-reduction, for this function says that in order to evaluate it at some given point a : A, we need to substitute a for x in the expression Φ. We write this as:

(λx.Φ)(a) :≡ Φ[x/a]

If not specified otherwise by using brackets, the scope of a λ-abstraction is the entire expression after “λx.”. Alternatively, we can name a function by giving a defining equation. For instance, in the above context, we can introduce a function f : A → B by

f(x) :≡ Φ, which evaluates for a given a : A as:

f(a) :≡ Φ[x/a]

Of course it should not matter whether we define a function by λ-abstraction or by giving it a name, and therefore we add a rule:

f ≡ λx.f(x)

This rule is also called η-conversion and it expresses that functions are determined by their values, i.e. it is a uniqueness principle for functions. Under the propositions as types interpretation functions correspond to proofs of implications. We note that indeed we can read the introduction and elimination rules as “given that B is true, then A → B is true” and “if A → B is true and A is true, then B is true”.
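As an aside, the rules above can be sketched in Lean 4 (an assumed formal counterpart, not the notation of this thesis), where both β-reduction and η-conversion hold judgmentally:

```lean
-- β-reduction: applying a λ-abstraction substitutes the argument
-- for the bound variable.
example : (fun x : Nat => x + 1) 3 = 4 := rfl

-- η-conversion: a named function equals the λ-abstraction
-- of its application, i.e. functions are determined by their values.
def f (x : Nat) : Nat := x + 1

example : f = fun x => f x := rfl
```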

Remark 2.1. Sometimes we will also use the notation x ↦ Φ instead of λx.Φ, to denote a function of type A → B, where x : A and the expression Φ : B possibly involves a free occurrence of the variable x.

2.2. Dependent function types. Dependent functions are a generalization of ordinary functions for which the type of the codomain may vary depending on points in the domain. We model this by using type families P over A, which are inhabitants of the type A → U, that is, P represents a function that gives a type P(a) : U for every given a : A. Given a type A : U and a type family P : A → U we can form the type of dependent functions Π_{(x:A)} P(x). If not specified otherwise by using brackets, the scope of such an expression is everything after “Π_{(x:A)}”. Assume we are given some term Φ : P(x), possibly involving a free occurrence of the variable x. As in the non-dependent case, to introduce a dependent function, we can use λ-abstraction

λx.Φ : Π_{(x:A)} P(x)

or specify a name F by giving a defining equation

F(x) :≡ Φ.

As above we formulate a uniqueness principle

F ≡ λx.F(x) and an obvious computation rule

F(a) :≡ Φ[x/a].

Under the propositions as types interpretation dependent functions correspond to universal quantification, and applying a dependent function to an inhabitant a of A gives a proof F(a) : P(a) that a has property P. We note that the type of functions A → B is just a special case, where the type family P is constantly B. An important class of dependent functions are polymorphic functions, which take as one of their inputs a type. This can be seen as a generic way to define functions. A basic example is given by the class of identity functions

id : Π_{(A:U)} A → A,

which is defined by λA.λx.x.

We write id_A : A → A for id(A), the identity function on A. Another example are constant functions, where we assume we are given types A, B : U and some fixed b : B. We can then define:

λA.λB.λx.b : Π_{(A,B:U)} A → B

We also write b_A for the function (λA.λB.λx.b)(A)(B).

2.3. Inductive types. The idea of inductive types is to specify a set of constructors, that is, functions whose codomain is allowed to be the type under definition, together with an induction principle stating that, in order to show a property for all elements of the inductive type, it suffices to establish the property for all those elements that can be reached by applying the constructors repeatedly. We say that the type is freely generated by its constructors. Recall that in ITT, predicates on inhabitants x of A are modeled by type families P : A → U, and to prove that P holds for all x : A means to construct an inhabitant of Π_{(x:A)} P(x). We mentioned that if the type family is constant, i.e. P(x) ≡ B for some type B, the type Π_{(x:A)} P(x) is the type A → B of ordinary functions, and then we speak of recursion instead of induction.
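The polymorphic identity and constant functions above can be sketched in Lean, where taking a type as an explicit input is ordinary dependent function abstraction:

```lean
-- Polymorphic identity: a function taking a type as one of its inputs.
def id' : (A : Type) → A → A := fun _ x => x

-- Polymorphic constant function, yielding b_A for a fixed b : B.
def const' : (A B : Type) → B → A → B := fun _ _ b _ => b

example : id' Nat 5 = 5 := rfl
example : const' Nat Bool true = fun _ => true := rfl
```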

Example 2.2. A standard example of an inductive type is the type N of natural numbers. It is generated by two constructors

0 : N and succ : N → N.

In order to establish some property P : N → U for all n : N, we need to give a proof

p₀ : P(0)

that 0 has property P, and additionally provide an inhabitant

pₛ : Π_{(n:N)} P(n) → P(succ(n)),

and then, from a proof q : P(n) that n has property P, we get a proof pₛ(n)(q) : P(succ(n)) that succ(n) has property P.
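Both the recursion principle (P constant) and the full induction principle can be illustrated in Lean:

```lean
-- Recursion (the type family P is constantly ℕ): doubling,
-- defined from the two constructors 0 and succ.
def double : Nat → Nat
  | 0 => 0
  | n + 1 => double n + 2

example : double 3 = 6 := rfl

-- Induction (P a genuine family): from p₀ : P 0 and
-- pₛ : ∀ n, P n → P (succ n) we obtain ∀ n, P n.
example (P : Nat → Prop) (p0 : P 0)
    (ps : ∀ n, P n → P (n + 1)) : ∀ n, P n := by
  intro n
  induction n with
  | zero => exact p0
  | succ k ih => exact ps k ih
```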

2.3.1. Identity types. One example of an inductive type that is particularly important to us is the type of identifications.

Definition 2.3 (Identity types). For every type A : U and every two terms x, y : A we can form the type of identifications x =_A y. For every x : A we have a canonical proof

refl_x : x =_A x

that x is equal to itself. The induction principle says that if we are given a type family

C : Π_{(x,y:A)} (x =_A y) → U

and a function

c : Π_{(x:A)} C(x, x, refl_x),

then there is a dependent function

F : Π_{(x,y:A)} Π_{(p:x=_Ay)} C(x, y, p),

such that

F(x, x, refl_x) :≡ c(x)

for every x : A.

Remark 2.4. Often the type A : U in x =_A y will be clear from the context, and in this case we may also write x = y for the type of identifications between x and y.

Let us take a closer look at the induction principle. Put in simpler terms it says that, in order to establish some property C(x, y, p) for x, y : A and p : x =A y, it suffices to consider the case where x ≡ y and p ≡ reflx. Therefore, in order to establish a property C for all triplets (x, y, p), it suffices to show it on the triplets of the form (x, x, reflx). Using the induction principle, we now present several properties of identity types. We will do the proofs in this section in more detail and later on switch to terminology as noted above.
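In Lean this induction principle is exactly the eliminator of the identity type, invoked by the `cases` tactic (a sketch; note that Lean's identity type lives in Prop, unlike in HoTT):

```lean
-- To establish C x y p for all x, y and p : x = y, it suffices
-- to give c x : C x x rfl — the triplets (x, x, refl_x).
example {A : Type} (C : (x y : A) → x = y → Prop)
    (c : ∀ x, C x x rfl) (x y : A) (p : x = y) : C x y p := by
  cases p      -- reduces the goal to the case y ≡ x, p ≡ refl
  exact c x
```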

Lemma 2.5. Identity is a symmetric relation, i.e. for every type A and every x, y : A there is a function

(x =_A y) → (y =_A x),

denoted by λp.p⁻¹, such that for every x : A

refl_x⁻¹ ≡ refl_x.

We call p⁻¹ the inverse of p.

Proof. We consider the type family

C : Π_{(x,y:A)} (x =_A y) → U,

defined by the equation

C(x, y, p) :≡ (y =_A x).

Then, for every x : A, we have that

C(x, x, refl_x) ≡ (x =_A x)

and therefore

refl_x : C(x, x, refl_x).

We conclude that

λx.refl_x : Π_{(x:A)} C(x, x, refl_x).

By the induction principle for identity types, there is a dependent function

λx.λy.λp.p⁻¹ : Π_{(x,y:A)} Π_{(p:x=_Ay)} C(x, y, p),

such that

refl_x⁻¹ ≡ refl_x. □

Lemma 2.6. Identity is a transitive relation, i.e. for every type A and every x, y, z : A there is a function

(x =_A y) → (y =_A z) → (x =_A z),

denoted by λp.λq.p ∙ q, such that, for every x : A,

refl_x ∙ refl_x ≡ refl_x.

We call p ∙ q the concatenation of p and q.

Proof. We consider the type family

C : Π_{(x,y:A)} (x =_A y) → U,

defined by the equation

C(x, y, p) :≡ Π_{(z:A)} (y =_A z) → (x =_A z).

We aim to inhabit the type

(1)  Π_{(x:A)} C(x, x, refl_x).

Since for every x : A we have that

C(x, x, refl_x) ≡ Π_{(z:A)} (x =_A z) → (x =_A z),

this means to inhabit

Π_{(x,z:A)} (x =_A z) → (x =_A z).

Again, we can use the induction principle for identity types, where we consider the type family

D : Π_{(x,z:A)} (x =_A z) → U,

defined by the equation

D(x, z, q) :≡ (x =_A z).

Then, for every x : A, we have that

D(x, x, refl_x) ≡ (x =_A x)

and therefore

λx.refl_x : Π_{(x:A)} D(x, x, refl_x).

By the induction principle for identity types, there is a dependent function

F : Π_{(x,z:A)} Π_{(q:x=_Az)} D(x, z, q),

such that

F(x, x, refl_x) ≡ refl_x.

As noted above, we have

Π_{(x:A)} C(x, x, refl_x) ≡ Π_{(x:A)} Π_{(z:A)} (x =_A z) → (x =_A z) ≡ Π_{(x,z:A)} Π_{(q:x=_Az)} D(x, z, q),

therefore F inhabits (1). By the induction principle for identity types, there is a function

G : Π_{(x,y:A)} Π_{(p:x=_Ay)} C(x, y, p),

such that

G(x, x, refl_x) ≡ F(x).

Lastly, we note that

H :≡ λx.λy.λz.λp.λq.G(x, y, p)(z, q)

has type

Π_{(x,y,z:A)} Π_{(p:x=_Ay)} Π_{(q:y=_Az)} x =_A z

and therefore, given x, y, z : A, p : x =_A y and q : y =_A z, we suppress x, y and z and define (p ∙ q) :≡ H(x, y, z, p, q), and then

(refl_x ∙ refl_x) ≡ H(x, x, x, refl_x, refl_x) ≡ G(x, x, refl_x)(x, refl_x) ≡ F(x, x, refl_x) ≡ refl_x.
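The two constructions just given, inversion and concatenation, can be condensed in Lean, where both compute on refl exactly as in the lemmas:

```lean
-- Inversion (Lemma 2.5) and concatenation (Lemma 2.6),
-- each defined by path induction (`cases`).
def inv {A : Type} {x y : A} (p : x = y) : y = x := by
  cases p; rfl

def concat {A : Type} {x y z : A} (p : x = y) (q : y = z) : x = z := by
  cases p; exact q

-- The computation rules on refl hold judgmentally.
example {A : Type} (x : A) : inv (rfl : x = x) = rfl := rfl
example {A : Type} (x : A) : concat (rfl : x = x) rfl = rfl := rfl
```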

□

Lastly we prove that equal terms can be substituted for one another: if x = y, then any property P : A → U that we can establish for x must hold also for y.

Lemma 2.7 (Indiscernability of identicals). Assume we are given a type family P : A → U, two terms x, y : A and a path p : x =_A y. Then there is a function transport^P(p) : P(x) → P(y), such that for every x : A we have

transport^P(refl_x) ≡ id_{P(x)}.

Proof. We consider the type family

C : Π_{(x,y:A)} (x =_A y) → U,

defined by the equation

C(x, y, p) :≡ P (x) → P (y).

Then we have that

C(x, x, refl_x) ≡ P(x) → P(x)

and we can inhabit this type by id_{P(x)}. But then

λx.id_{P(x)} : Π_{(x:A)} C(x, x, refl_x)

and by the induction principle for identity types, there is a dependent function

F : Π_{(x,y:A)} Π_{(p:x=_Ay)} C(x, y, p),

such that

F(x, x, refl_x) ≡ id_{P(x)}.

Therefore, for any given x, y : A and p : x =_A y, we suppress x and y and define transport^P(p) :≡ F(x, y, p). Then it holds that

transport^P(refl_x) ≡ F(x, x, refl_x) ≡ id_{P(x)}.

□

The following lemma covers the case where the type family P in Lemma 2.7 is constant.
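As an aside, the transport function of Lemma 2.7 can be replayed in Lean, where the built-in `▸` operator plays the same role:

```lean
-- transport^P(p) : P x → P y, defined from a path p : x = y.
def transport {A : Type} (P : A → Type) {x y : A}
    (p : x = y) (u : P x) : P y :=
  p ▸ u

-- The computation rule: transport along refl is the identity.
example {A : Type} (P : A → Type) (x : A) :
    transport P (rfl : x = x) = id := rfl
```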

Lemma 2.8. If P : A → U is a constant type family, i.e. P(x) :≡ B for some fixed B : U, then for every x, y : A, p : x = y and fixed b : B there is a path

transportconst^B_p(b) : transport^P(p, b) = b,

such that

transportconst^B_{refl_x}(b) ≡ refl_b.

Proof. We consider the type family

C : Π_{(x,y:A)} (x =_A y) → U,

defined by the equation

C(x, y, p) :≡ transport^P(p, b) = b.

Since by lemma 2.7

transport^P(refl_x) ≡ id_{P(x)} ≡ id_B, we have for every x : A that

C(x, x, refl_x) ≡ (b =_B b)

and therefore

refl_b : C(x, x, refl_x).

We conclude that

λx.refl_b : Π_{(x:A)} C(x, x, refl_x).

By the induction principle for identity types, there is a dependent function

F : Π_{(x,y:A)} Π_{(p:x=_Ay)} C(x, y, p),

such that

F(x, x, refl_x) ≡ refl_b.

For any given x, y : A and p : x =_A y, we suppress x and y and define

transportconst^B_p(b) :≡ F(x, y, p)

and then

transportconst^B_{refl_x}(b) ≡ F(x, x, refl_x) ≡ refl_b. □

2.3.2. Empty type. The type 0 with no inhabitants is defined by no constructors. Assume we are given a type family P : 0 → U, then there is a dependent function

F : Π_{(x:0)} P(x),

with no computation rules. Under the propositions as types interpretation the empty type corresponds to falsity.

2.3.3. Unit type. The unit type 1 is the inductive type freely generated by just one point constructor:

⋆ : 1

Assume we are given a type family P : 1 → U together with a term t : P(⋆), then there is a dependent function

F : Π_{(x:1)} P(x),

such that F(⋆) ≡ t.

The elimination rule for identity types allows us to formulate a propositional uniqueness principle for the unit type.

Lemma 2.9. The unit type has only one inhabitant, that is:

Π_{(x:1)} x =_1 ⋆

Proof. We consider the type family

P : 1 → U, defined by the equation

P(x) :≡ (x =_1 ⋆)

and aim to inhabit the type Π_{(x:1)} P(x). By the induction principle of the unit type, it suffices to give a proof for

P(⋆) ≡ (⋆ = ⋆)

and in this case we can use refl_⋆. □

2.3.4. Pair types. We note that the following results will be generalized in the next section, where we introduce dependent pairs; we primarily present the simpler versions here for the reader not yet familiar with the subject. Assume we are given two types A, B : U, then we can form the type of pairs or product type A × B, which can be interpreted as logical conjunction. The pair type is freely generated by one constructor:

(_,_): A → B → A × B

This means that for every inhabitant x : A and every inhabitant y : B, we can introduce an inhabitant

(x, y): A × B.

If we are given a type family P : A × B → U together with a dependent function

G : Π_{(x:A)} Π_{(y:B)} P(x, y),

we can introduce a dependent function

F : Π_{(z:A×B)} P(z),

with the obvious computation rule:

F(x, y) :≡ G(x)(y)

The recursion principle for pair types gives the projection functions, since we can define λx.λy.x : A → B → A, which gives a function

π₁ : A × B → A,

such that for every x : A and y : B

π₁((x, y)) ≡ x,

and λx.λy.y : A → B → B, which gives a function

π₂ : A × B → B,

such that for every x : A and y : B

π₂((x, y)) ≡ y.

Using the projection functions we can express a propositional uniqueness principle, as we did for the unit type.
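In Lean the pairing constructor and projections satisfy exactly these computation rules:

```lean
-- Projections obtained from the recursion principle of the product.
example (x : Nat) (y : Bool) : ((x, y) : Nat × Bool).1 = x := rfl
example (x : Nat) (y : Bool) : ((x, y) : Nat × Bool).2 = y := rfl

-- The propositional uniqueness principle, which in Lean even holds
-- judgmentally, thanks to η for structures.
example (z : Nat × Bool) : z = (z.1, z.2) := rfl
```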

Lemma 2.10. Assume we are given two types A, B : U. The inhabitants of the pair type A × B are the pairs of terms in A and B, that is:

Π_{(z:A×B)} z = (π₁(z), π₂(z))

Proof. We consider the type family

P : A × B → U, defined by the equation

P(z) :≡ z = (π₁(z), π₂(z)),

and we aim to inhabit the type Π_{(z:A×B)} P(z). Since by the computation rules of the projection functions we have for every x : A and y : B that π₁((x, y)) ≡ x and π₂((x, y)) ≡ y, we get that

P((x, y)) ≡ ((x, y) = (x, y)).

Therefore we can define a dependent function

G : Π_{(x:A)} Π_{(y:B)} P(x, y),

by the defining equation

G(x)(y) :≡ refl_(x,y).

Then, by the induction principle for pairs, there is a dependent function

F : Π_{(z:A×B)} z = (π₁(z), π₂(z)),

such that

F((x, y)) ≡ refl_(x,y). □

Remark 2.11. Assume we are given two functions f : A → A′ and g : B → B′. We can define

λx.λy.(f(x), g(y)) : A → B → A′ × B′

and then, by the recursion principle of the product, there is a function

(f, g) : A × B → A′ × B′,

such that (f, g)(x, y) ≡ (f(x), g(y)).

2.3.5. Dependent pair types. Similar to the case of dependent functions, we can generalize the notion of a product in such a way that the type of the second coordinate may vary along the points in the first coordinate. Assume we are given a type A : U together with a type family P : A → U, then we can introduce the dependent pair type Σ_{(x:A)} P(x). If not specified otherwise by using brackets, the scope of such an expression is everything after “Σ_{(x:A)}”. The dependent pair types are freely generated by one constructor:

(_,_) : Π_{(x:A)} (P(x) → Σ_{(x:A)} P(x)).

This means that for every inhabitant a : A and every proof t : P(a), we can introduce an inhabitant

(a, t) : Σ_{(x:A)} P(x).

If we are given a type A : U, a type family

P : A → U, a type family

C : (Σ_{(x:A)} P(x)) → U

and a dependent function

G : Π_{(x:A)} Π_{(t:P(x))} C(x, t),

then there is a dependent function

F : Π_{(s:Σ_{(x:A)} P(x))} C(s),

such that for every x : A and t : P(x)

F(x, t) ≡ G(x)(t).

We note that pair types are just a special case of dependent pair types, where the type family P is constant, that is, if P ≡ λx.B for some fixed B : U, we have that

Σ_{(x:A)} P(x) ≡ A × B.

We can generalize the projection functions to dependent pairs, but since the situation is more involved, we state this in a lemma.

Lemma 2.12. Assume we are given a type A : U together with a type family P : A → U. Then there is a function

π₁ : (Σ_{(x:A)} P(x)) → A

and there is a dependent function

π₂ : Π_{(s:Σ_{(x:A)} P(x))} P(π₁(s)),

such that for every x : A and t : P(x) we have that π₁((x, t)) ≡ x and π₂((x, t)) ≡ t.

Proof. To define the first projection, we consider the constant type family

C : (Σ_{(x:A)} P(x)) → U,

defined by the equation C(s) :≡ A, and we aim to inhabit

Π_{(s:Σ_{(x:A)} P(x))} C(s) ≡ (Σ_{(x:A)} P(x)) → A.

To this end we define an inhabitant

G : Π_{(x:A)} Π_{(t:P(x))} A

by the defining equation G(x, t) :≡ x, and then, by the recursion principle for dependent pairs, there is a function

π₁ : (Σ_{(x:A)} P(x)) → A,

such that π₁((x, t)) :≡ x.

For the second projection we note that the type of the codomain may now vary, i.e. the second projection has to be a dependent function of type:

π₂ : Π_{(s:Σ_{(x:A)} P(x))} P(π₁(s)).

We use the induction principle for dependent pairs, where we consider the type family

C : (Σ_{(x:A)} P(x)) → U,

defined by the equation

C(s) :≡ P(π₁(s)),

and aim to inhabit the type Π_{(s:Σ_{(x:A)} P(x))} C(s). We can define a function

G : Π_{(x:A)} Π_{(t:P(x))} C(x, t)

by the defining equation G(x)(t) :≡ t, and then, by the induction principle for dependent pairs, there is a dependent function

π₂ : Π_{(s:Σ_{(x:A)} P(x))} P(π₁(s)),

such that π₂((x, t)) ≡ t. □

Remark 2.13. If we are given a dependent function F : Π_{(s:Σ_{(x:A)} P(x))} C(s), we may write F(x, t) instead of F((x, t)) for simplicity.
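Dependent pairs and their projections, as in Lemma 2.12, look as follows in Lean (Σ-types are built in):

```lean
-- First projection of a dependent pair.
example (P : Nat → Type) (a : Nat) (t : P a) :
    (⟨a, t⟩ : Σ x, P x).1 = a := rfl

-- Second projection; its type P ((a, t).1) is judgmentally P a.
example (P : Nat → Type) (a : Nat) (t : P a) :
    (⟨a, t⟩ : Σ x, P x).2 = t := rfl
```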

Under the propositions as types interpretation, dependent pairs correspond to existential quantification. We note that our notion of existence is indeed constructive, since for every s : Σ_{(x:A)} P(x) the projection to the first coordinate π₁(s) : A gives an inhabitant of A, and the fact that this inhabitant has property P is proven by π₂(s) : P(π₁(s)). As we did before for pair types, we can prove that the inhabitants of the dependent pair types are indeed the pairs of terms x of A and t of P(x).

Lemma 2.14. Assume we are given a type A : U together with a type family P : A → U, then we have:

Π_{(s:Σ_{(x:A)} P(x))} s = (π₁(s), π₂(s))

Proof. Since by definition of the projection functions

π₁((x, t)) ≡ x

and

π₂((x, t)) ≡ t,

we can define a dependent function

G : Π_{(x:A)} Π_{(t:P(x))} (x, t) = (π₁((x, t)), π₂((x, t))),

by the defining equation

G(x)(t) :≡ refl_(x,t).

By the induction principle for dependent pairs, there is a dependent function

F : Π_{(s:Σ_{(x:A)} P(x))} s = (π₁(s), π₂(s)),

such that

F((x, t)) ≡ refl_(x,t). □

2.3.6. Coproduct types. Assume we are given two types A, B : U, then we can form the coproduct or sum type A + B, which can be thought of as the disjoint union of the two types, or as logical disjunction under the propositions as types interpretation. The coproduct of A and B is the inductive type freely generated by the constructors

inl : A → A + B and inr : B → A + B. 23
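In Lean, A + B is the `Sum` type, and its eliminator is case analysis on the two constructors; a sketch of the recursor:

```lean
-- Given f : A → C and g : B → C, obtain A ⊕ B → C, computing as
-- F(inl a) ≡ f a and F(inr b) ≡ g b.
def copair {A B C : Type} (f : A → C) (g : B → C) : A ⊕ B → C
  | Sum.inl a => f a
  | Sum.inr b => g b

example :
    copair (fun n : Nat => n + 1) (fun _ : Bool => 0) (Sum.inl 2) = 3 := rfl
```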

If we are given a type family P : A + B → U and two dependent functions

G_l : Π_{(a:A)} P(inl(a))

and

G_r : Π_{(b:B)} P(inr(b)),

then there is a dependent function

F : Π_{(w:A+B)} P(w),

such that

F(inl(a)) :≡ G_l(a) and F(inr(b)) :≡ G_r(b).

2.4. Propositions as types. In the preceding sections we presented the propositions as types interpretation, by assigning the quantifiers and logical connectives appropriate formation rules. We will frequently use terminology in accordance with this, and therefore summarize it in the following table:

    type theory          first order logic
    0                    ⊥
    1                    ⊤
    A → B                A → B
    A × B                A ∧ B
    A + B                A ∨ B
    Π_{(x:A)} P(x)       ∀x P(x)
    Σ_{(x:A)} P(x)       ∃x P(x)

For the purpose of this work, another point of view will be more important and is the content of the following section.

3. The homotopy interpretation

3.1. Basic behavior of functions and identifications. Based on work of Hofmann and Streicher (see [7]), who gave a model of ITT in which identity types are inhabited by more than one element, Awodey, Warren and Voevodsky (see [1], [11]) proposed the point of view that inhabitants of the identity type x =_A y ought to be thought of as paths between x and y in a space A, and equalities of the form p =_{x=y} q ought to be thought of as homotopies between the paths p, q : x = y in A, and so on; this we call the homotopy or types as spaces interpretation. Under this interpretation it is not desirable anymore that there is only one way to inhabit identity types. To get a better grasp on this, let us consider again the induction principle for identity types, which from now on, to further stress the point of view that types are spaces and equalities are paths, will be called path induction. In a homotopic reading it states that in order to establish a property for every path p : x = y, with two distinct endpoints, it suffices to show it for the constant path refl_x. This is reasonable when we think of properties in our theory as being those that are invariant under homotopy, since every path with one free endpoint can be continuously deformed to the constant path. The same cannot be said for loops l : x = x; for instance, it is well known that not all loops of the circle are homotopic to each other. Therefore, adding a principle such as UIP to ITT is no longer reasonable under the homotopy interpretation.

The analogy between identity types and path spaces in topology is very strong and we will present the most basic results that show this in the rest of this section. In fact we have already defined operations on identity types that correspond to inversion and concatenation of paths in lemma 2.5 and lemma 2.6 respectively. The next lemma now justifies this nomenclature, by showing that these operations behave appropriately.

Lemma 3.1. Assume we are given a type A : U, points x, y, z, w : A and paths p : x = y, q : y = z and r : z = w. Then we have the following:

(i) p = p ∙ refl_y and p = refl_x ∙ p.
(ii) p⁻¹ ∙ p = refl_y and p ∙ p⁻¹ = refl_x.
(iii) (p⁻¹)⁻¹ = p.
(iv) p ∙ (q ∙ r) = (p ∙ q) ∙ r.

Proof. We can prove all four points by path induction. For (i) it suffices to consider the case where x ≡ y and p ≡ refl_x, and since by Lemma 2.6 refl_x ∙ refl_x ≡ refl_x, in this case we can prove both claims by refl_{refl_x}.

For (ii) it suffices to consider the case where x ≡ y and p ≡ refl_x, and since by Lemma 2.5 refl_x⁻¹ ≡ refl_x and by Lemma 2.6 refl_x ∙ refl_x ≡ refl_x, in this case we can prove both claims by refl_{refl_x}. For (iii) it suffices to consider the case where x ≡ y and p ≡ refl_x, and since using Lemma 2.5 twice gives (refl_x⁻¹)⁻¹ ≡ refl_x⁻¹ ≡ refl_x, in this case we can prove the claim by refl_{refl_x}. For (iv) it suffices to consider the case where x ≡ y ≡ z ≡ w and p ≡ q ≡ r ≡ refl_x. By using Lemma 2.6 twice, we have that

reflx • (reflx • reflx) ≡ reflx • reflx ≡ reflx

and similarly

(reflx • reflx) • reflx ≡ reflx • reflx ≡ reflx.

Hence, in this case, we can prove the claim by reflreflx. □ Next we want to establish that functions respect equalities. Under the homotopy interpretation this says that all functions preserve paths.
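For readers who want to experiment, the four groupoid laws of lemma 3.1 can be replayed in Lean 4, with the library operations Eq.symm and Eq.trans playing the roles of inversion and concatenation. Since Lean's equality is proof-irrelevant, these facts are trivial there; the sketch (namespace and theorem names are ours) only mirrors the statements.

```lean
namespace PathAlgebra
variable {A : Type} {x y z w : A}

-- (i) unit laws
theorem trans_refl (p : x = y) : p.trans rfl = p := by cases p; rfl
theorem refl_trans (p : x = y) : (rfl : x = x).trans p = p := by cases p; rfl

-- (ii) inverse laws
theorem symm_trans (p : x = y) : p.symm.trans p = rfl := by cases p; rfl
theorem trans_symm (p : x = y) : p.trans p.symm = rfl := by cases p; rfl

-- (iii) inversion is an involution
theorem symm_symm (p : x = y) : p.symm.symm = p := by cases p; rfl

-- (iv) associativity of concatenation
theorem trans_assoc (p : x = y) (q : y = z) (r : z = w) :
    (p.trans q).trans r = p.trans (q.trans r) := by cases p; cases q; rfl

end PathAlgebra
```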

Lemma 3.2. Assume we are given two types A, B : U together with a function f : A → B. Then, for every x, y : A, there is a function

apf :(x =A y) → (f(x) =B f(y)), such that for every x : A we have

apf (reflx) ≡ reflf(x).

Proof. By path induction it suffices to consider the case where x ≡ y and p ≡ reflx and then we need to inhabit the type f(x) =B f(x). We can do so by reflf(x). □ Later on we will also need the following.
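The function of lemma 3.2, together with its judgmental computation rule, has a direct counterpart in Lean 4, where it is the library function congrArg; the name ap below is ours.

```lean
-- ap (lemma 3.2): functions act on paths.  In Lean 4 this is `congrArg`.
def ap {A B : Type} (f : A → B) {x y : A} (p : x = y) : f x = f y :=
  congrArg f p

-- The computation rule ap f refl ≡ refl holds definitionally:
example {A B : Type} (f : A → B) (x : A) : ap f (rfl : x = x) = rfl := rfl
```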

Lemma 3.3. Assume we are given two types A, B : U together with a function f : A → B. Then we have that for every x, y : A there is a function

ap²f : Π(p,q:x=y) (p = q) → (apf(p) = apf(q)),

such that ap²f(reflp) ≡ reflapf(p).

Proof. By path induction it suffices to consider the case where p ≡ q and r ≡ reflp and then we need to inhabit the type apf(p) = apf(p). We can do so by reflapf(p). □ Lemma 3.4. Assume we are given three types A, B, C : U together with functions f : A → B and g : B → C and paths p : x =A y and q : y =A z. Then we have:

(i) apf(p • q) = apf(p) • apf(q).
(ii) apf(p⁻¹) = apf(p)⁻¹.
(iii) apg(apf(p)) = apg◦f(p).
(iv) apidA(p) = p.

Proof. All claims can be proven by path induction. For (i) it suffices to consider the case where x ≡ y ≡ z and p ≡ q ≡ reflx and since by lemma 2.6

reflx • reflx ≡ reflx

and by lemma 3.2

apf (reflx) ≡ reflf(x), this means we need to show

reflf(x) = reflf(x), which we can do by reflreflf(x). For (ii) it suffices to consider the case where x ≡ y and p ≡ reflx and since by lemma 2.5 reflx⁻¹ ≡ reflx, in this case we need to show

apf(reflx) = apf(reflx)⁻¹.

Since by lemma 3.2

apf (reflx) ≡ reflf(x) and using again lemma 2.5 gives

reflf(x)⁻¹ ≡ reflf(x),

we can further simplify this to

reflf(x) = reflf(x), which we can inhabit by reflreflf(x) . For (iii) it suffices to consider the case where x ≡ y and p ≡ reflx and since by lemma 3.2 for any h : A → C it holds that

aph(reflx) ≡ reflh(x), proving the claim means to show that

reflg(f(x)) = reflg(f(x)) and we can do so by reflreflg(f(x)). For (iv) it suffices to consider the case where x ≡ y and p ≡ reflx and since by lemma 3.2 we have that

apidA(reflx) ≡ reflidA(x), and

idA(x) ≡ x, in this case we need to show

reflx = reflx and we can do so by reflreflx .  Remark 3.5 (Application of a constant function). Assume we are given two types A, B : U, some fixed b : B and let us consider the constant function:

bA :≡ λx.b : A → B.

Then we have that:

Π(x,y:A) Π(p:x=y) apbA(p) = reflb

Proof. By path induction it suffices to consider the case where x ≡ y and p ≡ reflx, but since bA(x) ≡ b and applying the computation rule from lemma

3.2, we can use reflreflb in this case. □ We note that we cannot say the same thing for functions that are only propositionally constant, i.e. functions f : A → B, such that f = bA, because we have for every x, y : A and every p : x = y that

apf (p): f(x) = f(y) and

reflb : b = b. But this means that

apf (p) = reflb is generally not well typed.
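The functoriality laws of lemma 3.4 can also be checked in Lean 4 (again using congrArg for ap; the namespace and theorem names are ours):

```lean
namespace ApLaws
variable {A B C : Type} (f : A → B) (g : B → C) {x y z : A}

-- (i) ap preserves concatenation of paths
theorem ap_trans (p : x = y) (q : y = z) :
    congrArg f (p.trans q) = (congrArg f p).trans (congrArg f q) := by
  cases p; cases q; rfl

-- (ii) ap preserves inversion of paths
theorem ap_symm (p : x = y) :
    congrArg f p.symm = (congrArg f p).symm := by cases p; rfl

-- (iii) ap of a composite is the composite of the aps
theorem ap_comp (p : x = y) :
    congrArg g (congrArg f p) = congrArg (g ∘ f) p := by cases p; rfl

-- (iv) ap of the identity does nothing
theorem ap_id (p : x = y) : congrArg id p = p := by cases p; rfl

end ApLaws
```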

Remark 3.6 (Application of identity is identity). Assume we are given a type A : U and let us consider the identity function

idA :≡ λx.x.

Then we have that:

Π(x,y:A) Π(p:x=y) apidA(p) = p

Proof. By path induction it suffices to consider the case where x ≡ y and p ≡ reflx, but since idA(x) ≡ x and applying the computation rule from lemma 3.2, we can use reflreflx in this case. □ There is also a dependent version of lemma 3.2 that we want to consider next. However, we need to be careful, because for every dependent map F : Π(x:A) P(x) and every path

p : x =A y, the expression F(x) = F(y) is generally not well typed, since F(x): P (x) and F(y): P (y). The solution is to use transport from lemma 2.7.

Lemma 3.7 (Dependent application). Assume we are given a type A : U, a type family P : A → U and a dependent function F : Π(x:A) P(x). Then, for every x, y : A, there is a dependent function

apdF : Π(p:x=y) transportP(p, F(x)) = F(y),

such that for every x : A we have

apdF(reflx) ≡ reflF(x). Proof. By path induction it suffices to consider the case where x ≡ y and p ≡ reflx and in this case we need to show that

transportP(reflx, F(x)) = F(x).

By lemma 2.7 we have that

transportP(reflx) ≡ idP(x)

and hence our goal simplifies to

F(x) = F(x), which can be proven by reflF(x). □ We recall that function types are just a special case of dependent function types, where the type family is constant, i.e.

A → B ≡ Π(x:A) B.

Therefore it makes sense to consider, for every function f : A → B and every path p : x =A y, the application

apf (p): f(x) = f(y), as well as dependent application

apdf(p) : transportB(p, f(x)) = f(y)

of f to p, and we want to relate the two in the next lemma.
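Transport and the dependent application of lemma 3.7 can likewise be sketched in Lean 4; there ▸ is the built-in substitution operator, and the definitions below (the names are ours) are only an illustration, since Lean does not model the higher structure.

```lean
-- transport (lemma 2.7): a path p : x = y carries the fiber P x to P y.
def transport {A : Type} (P : A → Type) {x y : A} (p : x = y) : P x → P y :=
  fun u => p ▸ u

-- apd (lemma 3.7): dependent application of F to p lands in the fiber
-- over y, after transporting F x along p.
def apd {A : Type} {P : A → Type} (F : (x : A) → P x) {x y : A}
    (p : x = y) : transport P p (F x) = F y := by
  cases p; rfl
```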

Lemma 3.8. Assume we are given two types A, B : U, a function f : A → B, two points x, y : A and a path p : x =A y, then we have:

apdf(p) = transportconstBp(f(x)) • apf(p)

Proof. By path induction it suffices to consider the case where x ≡ y and p ≡ reflx. By lemma 3.7 the left-hand side of the equality simplifies to

apdf(reflx) ≡ reflf(x).

By lemma 2.8 we have

transportconstBreflx(f(x)) ≡ reflf(x)

and by lemma 3.2 we have

apf(reflx) ≡ reflf(x). Together with the computation rule from lemma 2.6, we get that the right-hand side of the equality is judgmentally equal to reflf(x) as well. Therefore, in this case, we can prove the claim by reflreflf(x). □

3.2. Homotopies and equivalences. In this section we transfer more ideas from homotopy theory to ITT and present appropriate notions for homotopy between two functions and equivalence of spaces.

Definition 3.9 (Homotopies). Assume we are given two types A, B : U together with two maps f : A → B and g : A → B. A homotopy from f to g is a dependent function of type

(f ∼ g) :≡ Π(x:A) f(x) = g(x).

Thus, a homotopy between functions f and g is a proof that these functions are pointwise equal. Next we present a proof that equality f = g between functions implies pointwise equality, i.e. that there is a homotopy between them.

Lemma 3.10. Assume we are given two types A, B : U together with two functions f, g : A → B. Then there is a function

happly :(f = g) → (f ∼ g), such that, if f ≡ g, we have

happly(reflf ) ≡ λx.reflf(x).

Proof. We consider the type family

C : Π(f,g:A→B) (f = g) → U,

defined by the equation

C(f, g, p) :≡ (f ∼ g). 31

Then, for every f : A → B, we have that

C(f, f, reflf ) ≡ (f ∼ f) and therefore

λx.reflf(x) : C(f, f, reflf).

We conclude that

λf.λx.reflf(x) : Π(f:A→B) C(f, f, reflf)

and then, by path induction, there is a dependent function

F : Π(f,g:A→B) Π(p:f=A→Bg) C(f, g, p),

such that

F(f, f, reflf ) ≡ λx.reflf(x). Lastly, given two functions f, g : A → B, we suppress f and g and define

happly(p) :≡ F(f, g, p).

□ Lemma 3.11. We let eval : A → (A → B) → B be function evaluation, i.e. for every a : A and every function f : A → B we define eval(a, f) :≡ f(a). Then we have for every two functions f, g : A → B, every path p : f = g and every x : A:

apeval(x)(p) = happly(p)(x) Proof. By path induction, it suffices to consider the case where f ≡ g and p ≡ reflf . Since by the computation rule of happly, given in lemma 3.10, we have

happly(reflf )(x) ≡ reflf(x) and

apeval(x)(reflf) ≡ refleval(x,f) ≡ reflf(x), we can use reflreflf(x) in this case. □ Next we present the notion of an equivalence. For our purposes the following definition, which is due to Voevodsky, will suffice. There are multiple other ways to define equivalences; we refer to chapter four of the HoTT book [10] for a detailed discussion.
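Both happly of lemma 3.10 and its compatibility with evaluation (lemma 3.11) reduce to congrArg in Lean 4; the names below are ours.

```lean
-- happly (lemma 3.10): a path between functions yields a pointwise path.
def happly {A B : Type} {f g : A → B} (p : f = g) : ∀ x : A, f x = g x :=
  fun x => congrArg (fun h => h x) p

-- lemma 3.11: happly p x agrees with ap of evaluation-at-x,
-- here even definitionally.
example {A B : Type} {f g : A → B} (p : f = g) (x : A) :
    happly p x = congrArg (fun h : A → B => h x) p := rfl
```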

Definition 3.12 (Equivalences). We say that a function f : A → B is an equivalence if it has a left and a right inverse, that is:

isequiv(f) :≡ (Σ(g:B→A) f ◦ g ∼ idB) × (Σ(h:B→A) h ◦ f ∼ idA).

We say the type A : U is equivalent to the type B : U if there is an equivalence between the two types:

(A ≃ B) :≡ Σ(f:A→B) isequiv(f)

Next we present a proof that transport from lemma 2.7 is always an equivalence.

Lemma 3.13. Assume we are given a type A : U together with a type family P : A → U. Then we have that:

Π(x,y:A) Π(p:x=y) isequiv(transportP(p))

Proof. By path induction it suffices to consider the case where x ≡ y and p ≡ reflx. By the computation rule from lemma 2.7 we have that

transportP(reflx) ≡ idP(x),

which is obviously an equivalence, since it is its own quasi-inverse. □ Remark 3.14. Assume we are given two types A, B : U and functions f : A → B and g : B → A. If we have

ql : f ◦ g ∼ idB, and

qr : g ◦ f ∼ idA,

we can inhabit isequiv(f) by ((g, ql), (g, qr)). In this situation we may also say that f and g are quasi-inverse or mutually inverse.

Lemma 3.15. Assume we are given two types A, B : U and a function f : A → B, such that isequiv(f). Then there is a function f 0 : B → A that is quasi inverse to f.

Proof. Assume we are given ((g, G), (h, H)) : isequiv(f), that is:
• g : B → A
• G : Π(y:B) f ◦ g(y) =B y
• h : B → A
• H : Π(x:A) h ◦ f(x) =A x
Then we can define terms

H ◦ g : Π(y:B) h ◦ f ◦ g(y) = g(y)

and

λy.aph(G(y)) : Π(y:B) h ◦ f ◦ g(y) = h(y)

and, using these two terms, we can inhabit g ∼ h by

I(y) :≡ (H ◦ g(y))⁻¹ • aph(G(y)).

Next, we note that

I ◦ f : Π(x:A) g ◦ f(x) = h ◦ f(x)

and hence we can inhabit

g ◦ f ∼ idA

by J(x) :≡ I(f(x)) • H(x).

Since we already have G, which proves that f ◦ g ∼ idB, at hand, this shows that f and g are mutually inverse.  With the notion of an equivalence at hand, we can present an alternative description of paths in dependent pair types.
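Definition 3.12 can be rendered in Lean 4 as a structure whose four fields correspond to the two Σ-components and their homotopies; the structure and field names are ours, and as a sanity check the identity function is its own quasi-inverse.

```lean
-- Definition 3.12: a right inverse g and a left inverse h, each with a
-- pointwise proof (the Σ and × of the definition become fields).
structure IsEquiv {A B : Type} (f : A → B) where
  g     : B → A
  right : ∀ y : B, f (g y) = y   -- f ∘ g ∼ id_B
  h     : B → A
  left  : ∀ x : A, h (f x) = x   -- h ∘ f ∼ id_A

-- The identity function is an equivalence, quasi-inverse to itself.
def idIsEquiv (A : Type) : IsEquiv (id : A → A) :=
  { g := id, right := fun _ => rfl, h := id, left := fun _ => rfl }
```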

Theorem 3.16. Assume we are given a type A : U together with a type family P : A → U and let w, w′ : Σ(x:A) P(x). Then there is an equivalence

(w = w′) ≃ Σ(p:π1(w)=π1(w′)) transportP(p, π2(w)) = π2(w′).

Proof. Step 1: Firstly, we want to define for every w, w′ : Σ(x:A) P(x) a function of type

(w = w′) → Σ(p:π1(w)=π1(w′)) transportP(p, π2(w)) = π2(w′).

To this end, we consider the type family

C : Π(w,w′:Σ(x:A) P(x)) (w = w′) → U,

defined by the equation

C(w, w′, q) :≡ Σ(p:π1(w)=π1(w′)) transportP(p, π2(w)) = π2(w′).

By path induction, it suffices to consider the case where w ≡ w′ and q ≡ reflw and then we have that

C(w, w, reflw) ≡ Σ(p:π1(w)=π1(w)) transportP(p, π2(w)) = π2(w).

Now, since firstly

reflπ1(w) : π1(w) = π1(w) and secondly, by lemma 2.7

transportP(reflπ1(w)) ≡ idP(π1(w)),

we can inhabit C(w, w, reflw) by the pair

(reflπ1(w), reflπ2(w)).

Therefore

λw.(reflπ1(w), reflπ2(w)) : Π(w:Σ(x:A) P(x)) C(w, w, reflw)

and then, by path induction, there is a dependent function

Φ : Π(w,w′:Σ(x:A) P(x)) Π(q:w=w′) C(w, w′, q),

such that

Φ(w, w, reflw) ≡ (reflπ1(w), reflπ2(w)).

Step 2: Next, we want to define a function in the other direction, i.e. we want to inhabit the type:

Π(w,w′:Σ(x:A) P(x)) (Σ(p:π1(w)=π1(w′)) transportP(p, π2(w)) = π2(w′)) → (w = w′)

We start by using induction on the terms

w, w′ : Σ(x:A) P(x)

and

r : Σ(p:π1(w)=π1(w′)) transportP(p, π2(w)) = π2(w′),

which splits them into pairs (w1, w2), (w′1, w′2) and (r1, r2) respectively, where
• w1, w′1 : A,
• w2 : P(w1),
• w′2 : P(w′1),
• r1 : w1 = w′1 and
• r2 : transportP(r1, w2) = w′2.
So, we consider the type family

E : Π(w1,w′1:A) (w1 = w′1) → U

defined by the equation

E(w1, w′1, r1) :≡ Π(w2:P(w1)) Π(w′2:P(w′1)) Π(r2:transportP(r1,w2)=w′2) (w1, w2) = (w′1, w′2)

and aim to inhabit the type

Π(w1,w′1:A) Π(r1:w1=w′1) E(w1, w′1, r1).

By path induction it suffices to consider the case where w1 ≡ w′1 and r1 ≡ reflw1 and then we have that w2 and w′2 are both of type P(w1). Since by the computation rule from lemma 2.7 it holds that

transportP(reflw1) ≡ idP(w1),

we get that

transportP(reflw1, w2) ≡ w2.

But this means that we need to inhabit

E(w1, w1, reflw1) ≡ Π(w2,w′2:P(w1)) Π(r2:w2=w′2) (w1, w2) = (w1, w′2).

We use path induction again, where we consider the type family

Q : Π(w2,w′2:P(w1)) (w2 = w′2) → U,

defined by the equation

Q(w2, w′2, r2) :≡ (w1, w2) = (w1, w′2)

and we aim to inhabit the type

Π(w2,w′2:P(w1)) Π(r2:w2=w′2) Q(w2, w′2, r2).

Then it suffices to consider the case where w2 ≡ w′2 and r2 ≡ reflw2, but in this case we can prove the claim by refl(w1,w2). Therefore, for every w1 : A, there is a dependent function

Gw1 : Π(w2,w′2:P(w1)) Π(r2:w2=w′2) Q(w2, w′2, r2),

such that

Gw1 (w2, w2, reflw2 ) ≡ refl(w1,w2). But as noted above, we have that

E(w1, w1, reflw1) ≡ Π(w2,w′2:P(w1)) Π(r2:w2=w′2) Q(w2, w′2, r2)

and hence Gw1 inhabits E(w1, w1, reflw1). This, in turn, gives a dependent function

J : Π(w1,w′1:A) Π(r1:w1=w′1) E(w1, w′1, r1),

such that

J(w1, w1, reflw1) ≡ Gw1.

This finishes the induction on w, w′ and r and therefore gives a dependent function

Ψ : Π(w,w′:Σ(x:A) P(x)) (Σ(p:π1(w)=π1(w′)) transportP(p, π2(w)) = π2(w′)) → (w = w′),

such that

Ψ((w1, w2), (w′1, w′2), (r1, r2)) ≡ J(w1, w′1, r1)(w2, w′2, r2).

Step 3: Next we want to show that for every w, w′ : Σ(x:A) P(x), the functions Φ(w, w′) and Ψ(w, w′) are mutually inverse. For readability we omit the occurrences of w and w′. We start by showing that for every

r : Σ(p:π1(w)=π1(w′)) transportP(p, π2(w)) = π2(w′)

it holds that Φ(Ψ(r)) = r. As before, we use induction on

w, w′ : Σ(x:A) P(x)

and

r : Σ(p:π1(w)=π1(w′)) transportP(p, π2(w)) = π2(w′),

which splits them into pairs (w1, w2), (w′1, w′2) and (r1, r2) respectively, where
• w1, w′1 : A,
• w2 : P(w1),
• w′2 : P(w′1),
• r1 : w1 = w′1 and
• r2 : transportP(r1, w2) = w′2.
Then we consider the type family

R : Π(w1,w′1:A) (w1 = w′1) → U

defined by the equation

R(w1, w′1, r1) :≡ Π(w2:P(w1)) Π(w′2:P(w′1)) Π(r2:transportP(r1,w2)=w′2) Φ(Ψ(r)) = r

and we aim to inhabit the type

Π(w1,w′1:A) Π(r1:w1=w′1) R(w1, w′1, r1).

By path induction it suffices to consider the case where w1 ≡ w′1 and r1 ≡ reflw1 and since by the computation rule from lemma 2.7 it holds that

transportP(reflw1) ≡ idP(w1)

and both w2 and w′2 are of the same type P(w1), we get

R(w1, w1, reflw1) ≡ Π(w2,w′2:P(w1)) Π(r2:w2=w′2) Φ(Ψ((reflw1, r2))) = (reflw1, r2).

Using path induction again, it suffices to consider the case where w2 ≡ w′2 and r2 ≡ reflw2 and we are left to prove

(2) Φ(Ψ((reflw1, reflw2))) = (reflw1, reflw2).

In this case we compute

Ψ((w1, w2), (w1, w2), (reflw1, reflw2)) ≡

≡ J(w1, w1, reflw1 )(w2, w2, reflw2 ) ≡

≡ Gw1 (w2, w2, reflw2 ) ≡ refl(w1,w2) and then

Φ((w1, w2), (w1, w2), refl(w1,w2)) ≡ (reflw1, reflw2)

and therefore, in this case, we can prove (2) by refl(reflw1,reflw2).

Step 4: Similarly we want to show that for every w, w′ : Σ(x:A) P(x) and every q : w = w′, we have that:

(3) Ψ(Φ(q)) = q

As in step 1, we firstly use path induction on q, by which it suffices to consider the case where w ≡ w′ and q ≡ reflw and then we have

Φ(w, w, reflw) ≡ (reflπ1(w), reflπ2(w)).

Secondly we use induction on w, which splits the term into (w1, w2) and then we can proceed as in step 3 and compute

Ψ((w1, w2), (w1, w2), (reflw1, reflw2)) ≡ refl(w1,w2).

Therefore, in this case we can prove (3) by reflrefl(w1,w2). □ Theorem 3.16 gives an alternative way to infer uniqueness for dependent pairs from lemma 2.14.
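The Ψ-direction of theorem 3.16 can be sketched in Lean 4 for the built-in Sigma type (the name sigmaEq is ours; ▸ plays the role of transport):

```lean
-- Component paths, the second living over the first, yield a path of pairs.
def sigmaEq {A : Type} {P : A → Type} {a a' : A} {b : P a} {b' : P a'}
    (p : a = a') (q : p ▸ b = b') : (⟨a, b⟩ : Sigma P) = ⟨a', b'⟩ := by
  subst p   -- path induction on the first component
  subst q   -- then on the (no longer dependent) second component
  rfl
```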

Corollary 3.17. Assume we are given a type A : U and a type family P : A → U, then we have:

Π(y:Σ(x:A) P(x)) y = (π1(y), π2(y))

Proof. Assume we are given y : Σ(x:A) P(x). By theorem 3.16 it suffices to show that

transportP(reflπ1(y), π2(y)) = π2(y)

and since by lemma 2.7 we have

transportP(reflπ1(y)) ≡ idP(π1(y)),

we can prove this claim by reflπ2(y). □ For the special case of a non-dependent product, theorem 3.16 tells us that the paths in a product space are pairs of paths.

Corollary 3.18. Assume we are given two types A, B : U and two points r, s : A × B, then we have:

(r = s) ≃ (π1(r) = π1(s)) × (π2(r) = π2(s))

Proof. We let P : A → U be the constant type family, defined by the equation P(x) :≡ B. By theorem 3.16 we have that

(r = s) ≃ Σ(p:π1(r)=π1(s)) transportP(p, π2(r)) = π2(s).

Next we note that

(transportP(p, π2(r)) = π2(s)) ≃ (π2(r) = π2(s)),

since by lemma 2.8 we have a path

transportconstBp(π2(r)) : transportP(p, π2(r)) = π2(r)

and hence, if we are given some

q : transportP(p, π2(r)) = π2(s),

we can define

transportconstBp(π2(r))⁻¹ • q : π2(r) = π2(s)

and if we are given some

q′ : π2(r) = π2(s),

we can define

transportconstBp(π2(r)) • q′ : transportP(p, π2(r)) = π2(s).

This defines an equivalence by lemma 2.6 and therefore

(r = s) ≃ (π1(r) = π1(s)) × (π2(r) = π2(s)).

□ Definition 3.19. We denote by

pair= : (π1(r) = π1(s)) × (π2(r) = π2(s)) → (r = s)

the equivalence from corollary 3.18. In particular we have that

pair=(reflx, refly) ≡ refl(x,y).
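The map pair= of definition 3.19 can be sketched in Lean 4 for the built-in product type (the name pairEq is ours):

```lean
-- pair⁼: a pair of component paths gives a path between pairs.
def pairEq {A B : Type} {r s : A × B}
    (p : r.1 = s.1) (q : r.2 = s.2) : r = s := by
  cases r with
  | mk a b =>
    cases s with
    | mk c d =>
      -- after destructuring, p : a = c and q : b = d
      cases p; cases q; rfl
```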

The next lemma characterizes inversion and concatenation of paths in a product space component-wise.

Lemma 3.20. Assume we are given two types A, B : U together with points x, y, z : A and x′, y′, z′ : B, paths p : x = y and q : y = z and paths r : x′ = y′ and s : y′ = z′. Then we have:
(i) pair=(p⁻¹, r⁻¹) = pair=(p, r)⁻¹
(ii) pair=(p, r) • pair=(q, s) = pair=(p • q, r • s)

Proof. We can prove both claims by path induction. For (i) it suffices to consider the case where x ≡ y, x′ ≡ y′, p ≡ reflx and r ≡ reflx′. By definition 3.19, we have that

pair=(reflx, reflx′) ≡ refl(x,x′)

and since by lemma 2.5 we have that

refl(x,x′)⁻¹ ≡ refl(x,x′),

in this case we can prove the claim by reflexivity. Similarly for (ii) it suffices to consider the case where x ≡ y ≡ z, x′ ≡ y′ ≡ z′, p ≡ q ≡ reflx and r ≡ s ≡ reflx′. Using again definition 3.19 and lemma 2.6, by which we have that

refl(x,x′) • refl(x,x′) ≡ refl(x,x′),

we can prove the claim by reflexivity in this case. □ Lemma 3.21. Assume we are given four types A, A′, B, B′ : U together with two functions f : A → A′ and g : B → B′, then it holds that

Π(z,z′:A×B) Π(p:z=z′) ap(f,g)(p) = pair=(apf◦π1(p), apg◦π2(p))

Proof. We use induction on z and z′, which splits them into pairs. By path induction it suffices to consider the case where (x, y) ≡ (x′, y′) and p ≡ refl(x,y), but since pair=(reflf(x), reflg(y)) ≡ refl(f(x),g(y)), in this case we can use reflrefl(f(x),g(y)). □ Lemma 3.22. Assume we are given three types A, B, C : U together with a function f : A × B → C. For every x : A, we define

fx(y) :≡ f(x, y) and for every y : B, we define

fy(x) :≡ f(x, y). Then it holds

Π(y:B) Π(x,x′:A) Π(p:x=x′) apf(pair=(p, refly)) = apfy(p)

and

Π(x:A) Π(y,y′:B) Π(q:y=y′) apf(pair=(reflx, q)) = apfx(q).

Proof. We check that the above expressions are well typed. For every y : B, x, x′ : A and p : x = x′ we have that

pair=(p, refly) : (x, y) = (x′, y)

and therefore

apf(pair=(p, refly)) : f(x, y) = f(x′, y).

But we also have that

apfy(p) : fy(x) = fy(x′)

and by definition

fy(x) ≡ f(x, y) and fy(x′) ≡ f(x′, y). The second case is similar. Now, for the first claim, we assume that we are given some y : B and then, by path induction, it suffices to consider the case where x ≡ x′ and p ≡ reflx. Since by definition 3.19 we have that

= pair (reflx, refly) ≡ refl(x,y) and by lemma 3.2 we have that

apfy(reflx) ≡ reflfy(x) ≡ reflf(x,y), in this case we are left to prove that

reflf(x,y) = reflf(x,y)

and we can use reflreflf(x,y). The second claim follows in a completely analogous way. □

Definition 3.23 (Contractibility). We say that a type A : U is contractible if there exists an inhabitant

(a, F) : Σ(x:A) Π(y:A) x = y :≡ isContr(A)

and we call a the center of contraction. Another useful lemma is the following. It states that paths for which one of the endpoints is allowed to vary and the other one is fixed are homotopic to the constant path in the total space.

Lemma 3.24. Assume we are given some type A : U together with some fixed a : A, then the following type is inhabited:

isContr(Σ(x:A) x = a)

The center of contraction is given by (a, refla).

Proof. We aim to inhabit the type

Π(s:Σ(x:A) x=a) s = (a, refla)

We use induction on s, which splits it into s1 : A and s2 : s1 = a. Next we note that by theorem 3.26 we have that

transportx↦x=a(s2, s2) = s2⁻¹ • s2

and by lemma 3.1 we have that

s2⁻¹ • s2 = refla.

The claim therefore follows from the characterization of paths in a Σ-type, theorem 3.16. □ Lemma 3.25. Assume we are given two types A, B : U together with terms

F : isContr(A) and G : isContr(B), then we have that A ≃ B.

Proof. From F and G we get points π1(F) : A and π1(G) : B, hence we can define the constant functions

(π1(G))A : A → B

and

(π1(F))B : B → A.

We compute that

(π1(F))B ◦ (π1(G))A(x) ≡ π1(F)

and

(π1(G))A ◦ (π1(F))B(y) ≡ π1(G)

and therefore we have that

π2(F) : (π1(F))B ◦ (π1(G))A ∼ idA

and

π2(G) : (π1(G))A ◦ (π1(F))B ∼ idB. □

The following theorem characterizes transport when the type family states a pointwise equality between functions. We will frequently use it when constructing equivalences.

Theorem 3.26. Assume we are given two functions f, g : A → B, two points x, y : A, a path p : x =A y and a path q : f(x) =B g(x). We let P : A → U be the type family defined by the equation

P (x) :≡ f(x) =B g(x). Then we have:

transportP(p, q) =f(y)=g(y) apf(p)⁻¹ • q • apg(p).

Proof. By path induction it suffices to consider the case where x ≡ y and p ≡ reflx. Since by lemma 2.7 we have that

transportP(reflx) ≡ idP(x)

and by the computation rules from lemmas 3.2 and 2.5 that

apf(reflx)⁻¹ ≡ reflf(x)

and

apg(reflx) ≡ reflg(x), in this case we can use lemma 3.1 and prove the claim by reflq. □
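Theorem 3.26 can also be stated in Lean 4. Because Lean's equality is proof-irrelevant, the proof collapses to reflexivity after path induction, so the sketch (transport and the theorem name are ours) only documents the statement:

```lean
def transport {A : Type} (P : A → Sort u) {x y : A} (p : x = y) : P x → P y :=
  fun v => p ▸ v

-- Theorem 3.26: transporting q : f x = g x along p : x = y conjugates q
-- with (ap f p)⁻¹ on the left and ap g p on the right.
theorem transport_paths {A B : Type} (f g : A → B) {x y : A}
    (p : x = y) (q : f x = g x) :
    transport (fun z => f z = g z) p q
      = ((congrArg f p).symm.trans q).trans (congrArg g p) := by
  cases p; rfl
```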

4. Pushouts

4.1. Basics. In section 2.3 we encountered inductive types. The idea was to specify a set of constructors that generate points in the type under definition. This idea is generalized by allowing constructors to additionally generate paths. We call these types higher inductive types (HITs). There is no general theory for HITs yet, so we will focus instead on a specific class, so-called pushout types, which provide a general enough framework to define many important examples of types with non-trivial homotopy, such as the interval, n-spheres and many more.

Definition 4.1. Assume we are given three types A, B, C : U together with two functions f : C → A and g : C → B. The pushout of (A, B, C, f, g) is the higher inductive type A ⊔C B generated by
• a function inl : A → A ⊔C B,
• a function inr : B → A ⊔C B, and
• for each z : C a path Glue(z) : inl(f(z)) = inr(g(z)).
The recursion principle for pushout types states that, if we are given some type D : U together with two functions

sl : A → D and

sr : B → D

and a dependent function

Com : Π(z:C) sl ◦ f(z) = sr ◦ g(z),

then there is a function s : A ⊔C B → D, such that:

• s ◦ inl ≡ sl
• s ◦ inr ≡ sr
• aps(Glue(z)) = Com(z)
The induction principle states that, if we are given a type family

P : A ⊔C B → U

together with two dependent functions

Fl : Π(x:A) P(inl(x))

and

Fr : Π(y:B) P(inr(y))

and a dependent function

G : Π(z:C) transportP(Glue(z), Fl(f(z))) = Fr(g(z)),

then there is a dependent function

F : Π(w:A⊔CB) P(w),

such that:

• F ◦ inl ≡ Fl
• F ◦ inr ≡ Fr
• apdF(Glue(z)) = G(z)

In other words, A ⊔C B is the disjoint union of A and B, where for every z : C we have a path from inl(f(z)) to inr(g(z)). We depict this information in a square:

C ---g---> B
f |  Glue  | inr
v          v
A --inl--> A ⊔C B

Geometrically, pushout types can be thought of as "gluing" the types A and B together along the type C and this is done by making the points of C into paths.
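Lean 4 has no native higher inductive types, so definition 4.1 cannot be defined there; at best its interface can be postulated. The following hypothetical axioms (the names pinl, pinr, pglue, pushoutRec are ours, and the computation rules are omitted) sketch that interface:

```lean
-- A postulated pushout type with its two point constructors ...
axiom Pushout {A B C : Type} (f : C → A) (g : C → B) : Type
axiom pinl {A B C : Type} {f : C → A} {g : C → B} : A → Pushout f g
axiom pinr {A B C : Type} {f : C → A} {g : C → B} : B → Pushout f g

-- ... its path constructor ...
axiom pglue {A B C : Type} {f : C → A} {g : C → B} (z : C) :
    (pinl (f z) : Pushout f g) = pinr (g z)

-- ... and its recursion principle (computation rules not shown).
axiom pushoutRec {A B C D : Type} {f : C → A} {g : C → B}
    (sl : A → D) (sr : B → D)
    (com : ∀ z : C, sl (f z) = sr (g z)) : Pushout f g → D
```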

Remark 4.2. Firstly we establish that gluing a type A with itself, along itself, does not change its homotopy class. That is, we consider the following pushout square:

A --idA--> A
idA |  Glue  | inr
v            v
A ---inl---> A ⊔A A

Then we have an equivalence:

A ⊔A A ≃ A

Proof. We define Φ : A ⊔A A → A by means of the recursion principle of A ⊔A A as follows:

• Φ ◦ inl :≡ idA
• Φ ◦ inr :≡ idA
• apΦ(Glue(x)) := reflx
We want to prove that Φ and inl are mutually inverse, that is, we aim to show that it holds for every w : A ⊔A A that:

P (w) :≡ inl ◦ Φ(w) = w

We apply the induction principle of A ⊔A A and set:
• reflinl(x) : P(inl(x))
• Glue(x) : P(inr(x))
Then it is left to show:

transportP(Glue(x), reflinl(x)) = Glue(x)

Since firstly, by theorem 3.26 we have

transportP(Glue(x), reflinl(x)) = apinl◦Φ(Glue(x))⁻¹ • Glue(x)

and secondly, by definition we have apΦ(Glue(x)) = reflx, we can prove the claim by reflGlue(x). Conversely, we let Q(x) :≡ Φ ◦ inl(x) = x and then, since

Φ ◦ inl(x) ≡ idA(x) ≡ x,

we can define λx.reflx : Π(x:A) Q(x). □

Remark 4.3 (Sum types). The sum A + B can be defined as the pushout of the square:

0 ------> B
|         | inr
v         v
A --inl--> A + B

Example 4.4 (The interval). The interval is the pushout of the square:

1 ------> 1
|   seg   | inr
v         v
1 --inl--> I

We introduce a special notation and set:
• inl(⋆) :≡ 0I
• inr(⋆) :≡ 1I

Up to equivalence, the interval is just a point, as we present in the following lemma, and may therefore not seem very interesting at first sight.

Lemma 4.5. The interval is contractible.

Proof. We claim that 0I is the center of contraction. To this end we need to inhabit the type

Π(i:I) 0I = i.

We apply the induction principle of the interval, where we consider the type family P : I → U, defined by the equation

P (i) :≡ 0I = i. Then we can use

refl0I : P (0I ) and

seg : P(1I), to prove the claim over the points. Then it is left to show that

transportP(seg, refl0I) = seg

But by theorem 3.26, this means to show that

seg = seg, which we can inhabit by reflseg. □ We have seen in lemma 3.10 that from an equality f = g between two functions, we can infer pointwise equality. Once we introduce pushouts, and therefore the interval, into our theory, the converse is also true. We present the proof in the next lemma. This also demonstrates that HITs are interesting from a purely type theoretic point of view.
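Incidentally, in Lean 4 function extensionality is likewise not a primitive axiom: there it is derived from quotient types rather than from an interval HIT, but it is used in the same way:

```lean
-- funext is a theorem of Lean 4 (proved via quotients, not an interval).
example {A B : Type} (f g : A → B) (H : ∀ x : A, f x = g x) : f = g :=
  funext H
```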

Lemma 4.6 (Function extensionality). Assume we are given two types A, B : U and two functions f, g : A → B. Then there is a function:

functExt :(f ∼ g) → (f = g)

Proof. Assume we are given H : f ∼ g, then we have for every x : A that

H(x): f(x) = g(x) and can therefore define a function Φx : I → B by the recursion principle of the interval, such that:

• Φx(0I) :≡ f(x)
• Φx(1I) :≡ g(x)
• apΦx(seg) := H(x)
But then we can define

Ψ :≡ λi.λx.Φx(i): I → A → B and this gives

apΨ(seg):(λx.Φx(0I )) = (λx.Φx(1I )).

Lastly by definition of Φx we have that

λx.Φx(0I ) ≡ f and

λx.Φx(1I ) ≡ g.  Next, we prove a dependent version of the previous lemma.

Lemma 4.7 (Dependent function extensionality). Assume we are given a type A : U together with a type family P : A → U and two dependent functions F, G : Π(x:A) P(x). Then there is a dependent function:

FunctExt : (Π(x:A) F(x) =P(x) G(x)) → (F = G)

Proof. Assume we are given H : Π(x:A) F(x) =P(x) G(x), then we have for every x : A that

H(x): F(x) =P (x) G(x) and can therefore define a function Φx : I → P (x) by the recursion principle of the interval, such that:

• Φx(0I) :≡ F(x)
• Φx(1I) :≡ G(x)
• apΦx(seg) := H(x)
But then we can define

Ψ :≡ λi.λx.Φx(i) : I → Π(x:A) P(x)

and this gives

apΨ(seg) : λx.Φx(0I) =Π(x:A)P(x) λx.Φx(1I).

Lastly by definition of Φx we have that

λx.Φx(0I ) ≡ F and

λx.Φx(1I) ≡ G. □

Corollary 4.8 (Weak function extensionality). Assume that we are given a type A : U together with a type family P : A → U and an inhabitant

H : Π(x:A) isContr(P(x)),

then it also holds that

isContr(Π(x:A) P(x)).

This is also called weak function extensionality.

Proof. Since by assumption we are given some

H : Π(x:A) isContr(P(x)),

we have in particular for every x : A inhabitants

π1(H(x)) : P(x)

and

π2(H(x)) : Π(y:P(x)) π1(H(x)) = y.

We then define for every F : Π(x:A) P(x) a dependent function G by the defining equation

G(x) :≡ π2(H(x))(F(x)),

which then has type

Π(x:A) π1(H(x)) =P(x) F(x).

Lastly we define

λx.π1(H(x)) : Π(x:A) P(x)

and then, by lemma 4.7, we have that

FunctExt(G) : λx.π1(H(x)) = F

Since F was chosen arbitrarily, this shows the claim. □ We note that from corollary 4.8 one can prove that happly is an equivalence, as has been done, e.g., in chapter 4.9 of the HoTT book [10]. For our purposes the following lemma will suffice, which provides a direct proof that happly is left inverse to functExt.

Lemma 4.9. The function happly from lemma 3.10 is left inverse to functExt from lemma 4.6.

Proof. We aim to show that

Π(f,g:A→B) Π(H:f∼g) happly(functExt(H)) = H.

So assume we start off with two functions f, g : A → B together with some H : f ∼ g. Following the procedure from lemma 4.6, we get a function

Ψ :≡ λi.λx.Φx(i): I → A → B, where Φ is defined by recursion on the interval, such that:

• Φx(0I) :≡ f(x)
• Φx(1I) :≡ g(x)
• apΦx(seg) := H(x)
In particular, we then have a proof that f and g are equal:

apΨ(seg) : f = g

Next, by lemma 3.11 we have for every p : f = g that

Π(x:A) happly(p)(x) = apeval(x)(p)

and hence, by dependent function extensionality (lemma 4.7), that

happly(apΨ(seg)) = λx.apeval(x)(apΨ(seg)) = λx.apeval(x)◦Ψ(seg)

and since we can compute

λi.(eval(x) ◦ Ψ)(i) ≡ λi.eval(x, Ψ(i)) ≡ λi.Ψ(i, x) ≡ λi.Φx(i) ≡ Φx,

we get that

λx.apeval(x)◦Ψ(seg) = λx.apΦx (seg). Lastly by definition of Φ we have

λx.apΦx (seg) ≡ H.

□ The interval further reinforces the types as spaces interpretation. In topology a path is defined as a continuous function from the interval; in homotopy type theory we have the following lemma.

Lemma 4.10. Assume we are given a type A : U, then we have:

(I → A) ≃ Σ((x,y):A×A) x = y

Proof. Since for some given f : I → A we have

apf(seg) : f(0I) = f(1I),

we can set

((f(0I), f(1I)), apf(seg)) : Σ((x,y):A×A) x = y.

If we are given s : Σ((x,y):A×A) x = y, we can define a function f : I → A by recursion on the interval. To this end we set:
• f(0I) :≡ π1(π1(s))
• f(1I) :≡ π2(π1(s))

• apf(seg) := π2(s)
Next we want to show that the above functions define an equivalence. So assume we start off with f : I → A. The above method gives a term

((f(0I), f(1I)), apf(seg)) : Σ((x,y):A×A) x = y

and from this we define a function g : I → A by recursion, such that:

(i) g(0I ) :≡ f(0I )

(ii) g(1I ) :≡ f(1I )

(iii) apg(seg) := apf (seg) Since we have

reflf(0I ) : g(0I ) = f(0I ) and

reflf(1I ) : g(1I ) = f(1I ) and lastly, since by theorem 3.26 we have that

transporti↦f(i)=g(i)(seg, reflf(0I)) = reflf(1I)

equals

apf(seg)⁻¹ • apg(seg) = reflf(1I),

which is proven by (iii), we get f ∼ g by the induction principle of the interval. Function extensionality gives f = g and therefore shows that we end up with what we started with. Conversely assume we start off with s : Σ((x,y):A×A) x = y. Then, firstly we get a function f : I → A, such that:
• f(0I) :≡ π1(π1(s))
• f(1I) :≡ π2(π1(s))

• apf(seg) := π2(s)
This function is then mapped to

((f(0I), f(1I)), apf(seg)) : Σ((x,y):A×A) x = y

and we aim to show

s =Σ((x,y):A×A) x=y ((f(0I), f(1I)), apf(seg)).

By theorem 3.16, it suffices to provide a path

p : (f(0I), f(1I)) = π1(s),

such that

transport(x,y)↦x=y(p, apf(seg)) = π2(s).

But we have by definition of f that

(f(0I), f(1I)) ≡ (π1(π1(s)), π2(π1(s)))

and since by lemma 2.10 we have

(π1(π1(s)), π2(π1(s))) = π1(s),

we can set p :≡ reflπ1(s). Then, since

transport(x,y)↦x=y(reflπ1(s)) ≡ idx=y,

we are left to show that

apf(seg) = π2(s),

which is inhabited by the definition of f. □ For the next examples we need the following definitions.

Definition 4.11. We define the type of pointed types as

U∗ :≡ Σ(A:U) A.

If we are given an inhabitant (A, ∗A) : U∗, we say the type A : U is pointed.

Remark 4.12. Usually we will just speak of a pointed type A : U and implicitly assume that ∗A : A has been chosen.

Later on we will also need the following.

Definition 4.13. Assume we are given two pointed types (A, ∗A) : U∗ and (B, ∗B) : U∗. We define the type of pointed maps:

∗ X Map (A, B) :≡ ∗B =B f(∗A) f:A→B ∗ If we are given an inhabitant (f, pf ): Map (A, B), we say that f is base- point preserving or f is a pointed function and call pf a proof that f is base-point preserving. We make Map∗(A, B) a pointed type by setting  (∗B)A, refl∗B as its base-point. Remark 4.14. Assume we want to show that two pointed maps

(f, p_f), (g, p_g) : Map∗(A, B) are equal. By theorem 3.16, this means that we need to give a path

q : f = g,

such that

transport^{h ↦ ∗_B = h(∗_A)}(q, p_f) = p_g.

By theorem 3.26, this means to show that

p_f ⋅ ap_{eval(∗_A)}(q) = p_g

and using lemma 3.11, this is equivalent to

p_f ⋅ happly(q)(∗_A) = p_g.

In particular, if we aim to show that some given pointed function (f, p_f) : Map∗(A, B) is equal to the base-point ((∗_B)_A, refl_{∗_B}) in Map∗(A, B), we can concatenate the above equation with happly(q)(∗_A)⁻¹ from the right, and this gives

p_f = happly(q)(∗_A)⁻¹.

In words this reads: the proof that f is base-point preserving is equal to the inverse of the proof that the constant function (∗_B)_A is equal to f, evaluated at the base-point. The same argumentation gives that in order to show that (f, p_f) : Map∗(A, A) is equal to (id_A, refl_{∗_A}), we need a

q : f = id_A,

such that p_f = happly(q)(∗_A)⁻¹.
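The two definitions above can be phrased in any system with Σ-types. As an illustration only, here is a minimal sketch in Lean 4, where `PtdType` and `PtdMap` are our own hypothetical names for 𝒰∗ and Map∗(A, B), and Lean's propositional equality plays the role of the identity type:

```lean
-- A sketch of Definitions 4.11 and 4.13: a pointed type bundles a type
-- with a chosen base-point, and a pointed map bundles a function with a
-- proof p_f : ∗_B = f(∗_A) that it preserves base-points.
universe u

structure PtdType : Type (u + 1) where
  carrier : Type u
  pt      : carrier

structure PtdMap (A B : PtdType) where
  toFun : A.carrier → B.carrier
  ptEq  : B.pt = toFun A.pt  -- the proof that toFun is base-point preserving

-- The base-point of Map∗(A, B): the constant function (∗_B)_A,
-- which preserves base-points by refl.
def PtdMap.const (A B : PtdType) : PtdMap A B :=
  ⟨fun _ => B.pt, rfl⟩
```

This is only a 1-categorical shadow of the situation in the thesis, since Lean's equality types are not proof-relevant in the way HoTT's identity types are.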

Example 4.15 (Wedge sum). The wedge sum A ∨ B is a pushout of two pointed types A and B. It can be thought of as the sum A + B of A and B, where the base-points ∗A and ∗B are identified by a path glue.

[Figure: the wedge sum A ∨ B, consisting of A and B with their base-points ∗_A and ∗_B joined by the path glue.]

Remark 4.16. We write ∗_A over an arrow to denote the constant function λ?.∗_A. For functions 1 → A out of the unit into some pointed type A, if we do not write anything over the arrow, we always mean the constant function (∗_A)_1.

The wedge sum A ∨ B of two pointed types A and B is the pushout of the square:

    1 ---∗_B--> B
    |           |
   ∗_A   Glue  inr
    v           v
    A ---inl--> A ∨ B

We make A ∨ B a pointed type by setting ∗_{A∨B} :≡ inl(∗_A).

Example 4.17 (Suspension). The suspension is the pushout of the square:

    A ----------> 1
    |             |
    |    merid   inr
    v             v
    1 ---inl---> ΣA

We make the suspension a pointed type by choosing inl(⋆) as its base-point. We also introduce a special notation and set:
• N :≡ inl(⋆)
• S :≡ inr(⋆)

Definition 4.18. The type with just two points is the suspension of the empty type:

2 :≡ Σ0

We introduce a special notation and set
• 0₂ :≡ inl(⋆) and
• 1₂ :≡ inr(⋆).
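Lean 4 has no native higher inductive types, so the suspension of Example 4.17 can only be postulated; the following is an unchecked sketch in which the names and the recursor are ours, and the computation rule for merid is omitted:

```lean
-- An axiomatic sketch of the suspension HIT: two poles N and S and,
-- for every a : A, a path merid a between them.
axiom Susp : Type → Type
axiom Susp.N {A : Type} : Susp A
axiom Susp.S {A : Type} : Susp A
axiom Susp.merid {A : Type} (a : A) : (Susp.N : Susp A) = Susp.S
-- Recursion principle: to map out of Susp A, give images of the two
-- poles and, for every a : A, a path between those images.
axiom Susp.rec {A B : Type} (n s : B) (m : A → n = s) : Susp A → B

-- Definitions 4.18 and 4.19: the two-point type and the circle.
def Two : Type := Susp Empty
def Circle : Type := Susp Two
```

In genuine HoTT the postulated paths are proof-relevant; in plain Lean all these equalities collapse, so the sketch only records the shape of the constructors.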

Definition 4.19 (The circle). The circle is the suspension of the type with two points:

S¹ :≡ Σ2

The following lemma shows that the circle has the universal property that we would expect.

Lemma 4.20. Assume we are given a pointed type A : 𝒰. Then it holds that

Map∗(S¹, A) ≃ (∗_A =_A ∗_A)

Proof. Assume that we are given a base-point preserving function

(f, p_f) : Map∗(S¹, A). Then we have in particular that

p_f : ∗_A = f(N),
ap_f(merid(1₂)) : f(N) =_A f(S) and
ap_f(merid(0₂))⁻¹ : f(S) =_A f(N).

Hence we can define

p_f ⋅ ap_f(merid(1₂)) ⋅ ap_f(merid(0₂))⁻¹ ⋅ p_f⁻¹ : (∗_A = ∗_A).

Conversely, assume we are given a loop l : (∗_A = ∗_A); then we can define a function g : S¹ → A by the recursion principle of the suspension. To this end we define:

• g(N) :≡ g(S) :≡ ∗_A
• ap_g(merid(0₂)) := refl_{∗_A}
• ap_g(merid(1₂)) := l

Then g is base-point preserving by refl_{∗_A}. We want to show that these functions are mutually inverse. So assume we start off with f : Map∗(S¹, A). We have to show that the function g, defined by

• g(N) :≡ g(S) :≡ ∗_A,
• ap_g(merid(0₂)) := refl_{∗_A} and
• ap_g(merid(1₂)) := p_f ⋅ ap_f(merid(1₂)) ⋅ ap_f(merid(0₂))⁻¹ ⋅ p_f⁻¹,

is equal to (f, p_f). So we consider the type family

P : S¹ → 𝒰,

defined by the equation

P(s) :≡ (g(s) = f(s)).

If s ≡ N, we compute P(N) ≡ (∗_A = f(N)) and hence we can define

p_f : P(N).

If s ≡ S, we compute P(S) ≡ (∗_A = f(S)) and hence we can define

p_f ⋅ ap_f(merid(0₂)) : P(S).

It is left to show for every b : 2 that

transport^P(merid(b), p_f) = p_f ⋅ ap_f(merid(0₂)).

By theorem 3.26, this means to show for every b : 2 that

ap_g(merid(b))⁻¹ ⋅ p_f ⋅ ap_f(merid(b)) = p_f ⋅ ap_f(merid(0₂)).

If b ≡ 0₂, we use the definition of g, and then we have to show that

p_f ⋅ ap_f(merid(0₂)) = p_f ⋅ ap_f(merid(0₂)),

which is inhabited by refl_{p_f ⋅ ap_f(merid(0₂))}. Similarly, if b ≡ 1₂, we have to show

p_f ⋅ ap_f(merid(0₂)) ⋅ ap_f(merid(1₂))⁻¹ ⋅ p_f⁻¹ ⋅ p_f ⋅ ap_f(merid(1₂)) = p_f ⋅ ap_f(merid(0₂)),

which, after canceling inverses, is again inhabited by refl_{p_f ⋅ ap_f(merid(0₂))}. By the induction principle of the circle, this gives an inhabitant

H : g ∼ f, such that over the points we have:

• H(N) ≡ p_f
• H(S) ≡ p_f ⋅ ap_f(merid(0₂))

Function extensionality gives:

functExt(H) : g = f

Now we use theorem 3.16 to show that

(f, p_f) =_{Map∗(S¹,A)} (g, refl_{∗_A}).

This requires us to show that

transport^{h ↦ ∗_A = h(N)}(functExt(H), p_g) = p_f.

By theorem 3.26, lemma 3.11 and lemma 4.9, this means to show that

H(N) = p_f,

but since H(N) ≡ p_f, we can inhabit this type by refl_{p_f}. Conversely, assume we start off with a loop l : ∗_A = ∗_A and define:

• g(N) :≡ g(S) :≡ ∗_A
• ap_g(merid(0₂)) := refl_{∗_A}
• ap_g(merid(1₂)) := l

We want to show that

l = p_g ⋅ ap_g(merid(1₂)) ⋅ ap_g(merid(0₂))⁻¹ ⋅ p_g⁻¹.

But p_g ≡ refl_{∗_A} and, by checking the above definitions, we see that we can use refl_l to inhabit this type. □

We note that one can use suspensions to define the general n-sphere Sⁿ for any natural number n, where, as indicated above, the type with two inhabitants plays the role of the "0-sphere", in accordance with what one would define in topology. We can depict the situation for the first few steps. The suspension 2 ≡ Σ0 of the empty type simply consists of two points:

[Figure: the two points 0₂ and 1₂.]

By adding paths between those two points, we get the circle:

[Figure: the circle S¹, with poles N and S joined by the paths merid(0₂) and merid(1₂).]

Making the points of the circle into paths, we get a sphere:

[Figure: the sphere.]

4.2. Pushout functions. Next, we reduce the task of defining a function between pushout types A ⊔^C B → A′ ⊔^{C′} B′ to giving functions α : A → A′, β : B → B′ and γ : C → C′. This will be done in such a way that if α, β and γ are equivalences, then so is the resulting function.

Lemma 4.21. Assume we are given types A, B, C, A′, B′, C′ : 𝒰, functions f : C → A, g : C → B, f′ : C′ → A′, g′ : C′ → B′ and additionally two commutative squares, i.e. functions α : A → A′, β : B → B′ and γ : C → C′ together with two dependent functions Com_l : (f′ ∘ γ) ∼ (α ∘ f) and Com_r : (g′ ∘ γ) ∼ (β ∘ g), as depicted below:

    A <--f--- C ---g--> B
    |         |         |
    α  Com_l  γ  Com_r  β
    v         v         v
    A′ <-f′-- C′ --g′-> B′

Then there is a pushout function α ⊔^γ β : A ⊔^C B → A′ ⊔^{C′} B′, such that:

• (α ⊔^γ β) ∘ inl ≡ inl′ ∘ α
• (α ⊔^γ β) ∘ inr ≡ inr′ ∘ β
• ap_{α⊔^γβ}(Glue(z)) = ap_{inl′}(Com_l(z))⁻¹ ⋅ Glue′(γ(z)) ⋅ ap_{inr′}(Com_r(z))

Proof. We apply the recursion principle of A ⊔^C B. To this end we check:

• Com_l(z) : f′ ∘ γ(z) = α ∘ f(z)
• Com_r(z) : g′ ∘ γ(z) = β ∘ g(z)
• Glue′(γ(z)) : inl′ ∘ f′ ∘ γ(z) = inr′ ∘ g′ ∘ γ(z)

This shows that

ap_{inl′}(Com_l(z))⁻¹ ⋅ Glue′(γ(z)) ⋅ ap_{inr′}(Com_r(z))

has the appropriate type

inl′ ∘ α ∘ f(z) = inr′ ∘ β ∘ g(z)

to define a function by the recursion principle for pushout types, with the computation rules stated in the lemma. □

Remark 4.22. When we say that two squares as in lemma 4.21 commute, we always mean that we are given types A, B, C, A′, B′, C′ : 𝒰, functions f : C → A, g : C → B, f′ : C′ → A′, g′ : C′ → B′, α : A → A′, β : B → B′, γ : C → C′ and dependent functions Com_l : (f′ ∘ γ) ∼ (α ∘ f) and Com_r : (g′ ∘ γ) ∼ (β ∘ g), and we will generally not write this down explicitly.
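The construction of lemma 4.21 can likewise be sketched axiomatically in Lean 4 (which has no higher inductive types): we postulate the pushout with its recursor and build the glue case exactly as in the proof, as ap_{inl′}(Com_l(z))⁻¹ ⋅ Glue′(γ(z)) ⋅ ap_{inr′}(Com_r(z)). Here `.symm`, `.trans` and `congrArg` play the roles of path inversion, concatenation and ap; all names are our own and the computation rules are not checked.

```lean
-- Axiomatic sketch of the pushout type and its recursor.
axiom Pushout {A B C : Type} (f : C → A) (g : C → B) : Type
axiom Pushout.inl {A B C : Type} {f : C → A} {g : C → B} :
  A → Pushout f g
axiom Pushout.inr {A B C : Type} {f : C → A} {g : C → B} :
  B → Pushout f g
axiom Pushout.glue {A B C : Type} {f : C → A} {g : C → B} (c : C) :
  (Pushout.inl (f c) : Pushout f g) = Pushout.inr (g c)
axiom Pushout.rec {A B C D : Type} {f : C → A} {g : C → B}
  (l : A → D) (r : B → D) (gl : ∀ c : C, l (f c) = r (g c)) :
  Pushout f g → D

-- The pushout function of lemma 4.21.  The glue case is the composite
-- ap_inl'(Com_l z)⁻¹ ⋅ Glue'(γ z) ⋅ ap_inr'(Com_r z).
noncomputable def pushoutMap {A B C A' B' C' : Type}
    {f : C → A} {g : C → B} {f' : C' → A'} {g' : C' → B'}
    (α : A → A') (β : B → B') (γ : C → C')
    (coml : ∀ z : C, f' (γ z) = α (f z))
    (comr : ∀ z : C, g' (γ z) = β (g z)) :
    Pushout f g → Pushout f' g' :=
  Pushout.rec (fun a => Pushout.inl (α a)) (fun b => Pushout.inr (β b))
    (fun z =>
      ((congrArg Pushout.inl (coml z)).symm.trans
        (Pushout.glue (γ z))).trans (congrArg Pushout.inr (comr z)))
```

As with the suspension sketch, Lean's proof-irrelevant equality means this records only the shape of the construction, not its homotopical content.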

Remark 4.23. Assume we are given types A, B, C : 𝒰 and we let α be the pushout function of the commutative squares:

    A <--f--- C ---g--> B
    |         |         |
   id_A Com_l id_C Com_r id_B
    v         v         v
    A <--f--- C ---g--> B

One might guess that α ∼ id_{A⊔^CB}, so let us try to prove it. We define a type family P : A ⊔^C B → 𝒰 by the equation P(w) :≡ (α(w) = w). We compute α(inl(x)) ≡ inl(x) and therefore

refl_{inl(x)} : P(inl(x)),

and similarly α(inr(y)) ≡ inr(y) and therefore

refl_{inr(y)} : P(inr(y)).

Then it is left for us to show that

transport^{w ↦ α(w) = w}(Glue(z), refl_{inl(x)}) = refl_{inr(y)}

and by theorem 3.26 this means to show that

ap_α(Glue(z))⁻¹ ⋅ Glue(z) = refl_{inr(y)}.

Since α is a pushout function, we check the computation rules from lemma 4.21, by which we have:

ap_α(Glue(z)) = ap_{inl}(Com_l(z))⁻¹ ⋅ Glue(z) ⋅ ap_{inr}(Com_r(z))

Now we are stuck: to proceed, we need to know what the proofs of commutativity are. We see that it is generally not enough to assume that α ∼ α′, β ∼ β′ and γ ∼ γ′ to conclude that (α ⊔^γ β) ∼ (α′ ⊔^{γ′} β′). We can, however, establish sufficient conditions under which this is possible, and this will be done next. We note that we can still say something about the α from above, namely that it is an equivalence. We will prove this in lemma 4.28.

Lemma 4.24. Assume we are given H : α ∼ α′ and I : β ∼ β′ together with commutative squares:

    A <--f--- C ---g--> B          A <--f--- C ---g--> B
    |         |         |          |         |         |
    α  Com_l  γ  Com_r  β          α′ Com′_l γ  Com′_r β′
    v         v         v          v         v         v
    A′ <-f′-- C′ --g′-> B′         A′ <-f′-- C′ --g′-> B′

If we additionally have for every z : C proofs that

• Com′_l(z) = Com_l(z) ⋅ H(f(z)) and
• Com′_r(z) = Com_r(z) ⋅ I(g(z)),

then it holds for the pushout functions that

α ⊔^γ β ∼ α′ ⊔^γ β′.

Proof. We want to apply the induction principle of A ⊔^C B to prove the claim. So let us consider the type family

P : A ⊔^C B → 𝒰,

defined by the equation

P(w) :≡ ((α ⊔^γ β)(w) = (α′ ⊔^γ β′)(w)).

Then we have that

ap_{inl′}(H(x)) : P(inl(x))

and

ap_{inr′}(I(y)) : P(inr(y)).

It is left for us to show:

transport^P(Glue(z), ap_{inl′}(H(f(z)))) = ap_{inr′}(I(g(z)))

By theorem 3.26 this means to show:

(4) ap_{α⊔^γβ}(Glue(z))⁻¹ ⋅ ap_{inl′}(H(f(z))) ⋅ ap_{α′⊔^γβ′}(Glue(z)) = ap_{inr′}(I(g(z)))

By the definition of pushout functions, given in lemma 4.21, we have that

ap_{α⊔^γβ}(Glue(z))⁻¹ = ap_{inr′}(Com_r(z))⁻¹ ⋅ Glue′(γ(z))⁻¹ ⋅ ap_{inl′}(Com_l(z))

and

ap_{α′⊔^γβ′}(Glue(z)) = ap_{inl′}(Com′_l(z))⁻¹ ⋅ Glue′(γ(z)) ⋅ ap_{inr′}(Com′_r(z)).

By assumption we have that

• Com′_l(z) = Com_l(z) ⋅ H(f(z)) and
• Com′_r(z) = Com_r(z) ⋅ I(g(z))

and therefore, from lemma 3.3, we get that ap_{inl′}(Com′_l(z))⁻¹ is equal to

ap_{inl′}(H(f(z)))⁻¹ ⋅ ap_{inl′}(Com_l(z))⁻¹

and ap_{inr′}(Com′_r(z)) is equal to

ap_{inr′}(Com_r(z)) ⋅ ap_{inr′}(I(g(z))).

Therefore, we conclude that

ap_{α′⊔^γβ′}(Glue(z)) = ap_{inl′}(H(f(z)))⁻¹ ⋅ ap_{inl′}(Com_l(z))⁻¹ ⋅ Glue′(γ(z)) ⋅ ap_{inr′}(Com_r(z)) ⋅ ap_{inr′}(I(g(z))).

Plugging everything into the left side of (4) gives:

ap_{inr′}(Com_r(z))⁻¹ ⋅ Glue′(γ(z))⁻¹ ⋅ ap_{inl′}(Com_l(z)) ⋅ ap_{inl′}(H(f(z))) ⋅ ap_{inl′}(H(f(z)))⁻¹ ⋅ ap_{inl′}(Com_l(z))⁻¹ ⋅ Glue′(γ(z)) ⋅ ap_{inr′}(Com_r(z)) ⋅ ap_{inr′}(I(g(z)))

We can now cancel the inverses of ap_{inl′}(H(f(z))), ap_{inl′}(Com_l(z)), Glue′(γ(z)) and ap_{inr′}(Com_r(z)), and this gives us that the left hand side of (4) equals ap_{inr′}(I(g(z))). □

Lemma 4.25. Assume we are given H : γ ∼ γ′ and commutative squares:

    A <--f--- C ---g--> B          A <--f--- C ---g--> B
    |         |         |          |         |         |
    α  Com_l  γ  Com_r  β          α  Com′_l γ′ Com′_r β
    v         v         v          v         v         v
    A′ <-f′-- C′ --g′-> B′         A′ <-f′-- C′ --g′-> B′

If we additionally have for every z : C proofs that

• Com′_l(z) = ap_{f′}(H(z))⁻¹ ⋅ Com_l(z) and
• Com′_r(z) = ap_{g′}(H(z))⁻¹ ⋅ Com_r(z),

then it holds for the pushout functions that

α ⊔^γ β ∼ α ⊔^{γ′} β.

Proof. We want to apply the induction principle of A ⊔^C B to prove the claim. So let us consider the type family

P : A ⊔^C B → 𝒰,

defined by the equation

P(w) :≡ ((α ⊔^γ β)(w) = (α ⊔^{γ′} β)(w)).

Then we have that

refl_{inl′(α(x))} : P(inl(x)) and refl_{inr′(β(y))} : P(inr(y)).

It is left for us to show that:

transport^P(Glue(z), refl_{inl′(α(x))}) = refl_{inr′(β(y))}

By theorem 3.26 this means to show:

(5) ap_{α⊔^γβ}(Glue(z))⁻¹ ⋅ ap_{α⊔^{γ′}β}(Glue(z)) = refl_{inr′(β(y))}

By the definition of pushout functions, given in lemma 4.21, we have that

ap_{α⊔^γβ}(Glue(z))⁻¹ = ap_{inr′}(Com_r(z))⁻¹ ⋅ Glue′(γ(z))⁻¹ ⋅ ap_{inl′}(Com_l(z))

and

ap_{α⊔^{γ′}β}(Glue(z)) = ap_{inl′}(Com′_l(z))⁻¹ ⋅ Glue′(γ′(z)) ⋅ ap_{inr′}(Com′_r(z)).

By the first assumption we have that

Com′_l(z) = ap_{f′}(H(z))⁻¹ ⋅ Com_l(z)

and therefore, from lemma 3.3, we get that the term ap_{inl′}(Com′_l(z))⁻¹ is equal to

ap_{inl′}(Com_l(z))⁻¹ ⋅ ap_{inl′∘f′}(H(z)).

By the second assumption we have that

Com′_r(z) = ap_{g′}(H(z))⁻¹ ⋅ Com_r(z)

and therefore, from lemma 3.3, we get that the term ap_{inr′}(Com′_r(z)) is equal to

ap_{inr′∘g′}(H(z))⁻¹ ⋅ ap_{inr′}(Com_r(z)).

Plugging everything into the left side of (5) gives:

ap_{inr′}(Com_r(z))⁻¹ ⋅ Glue′(γ(z))⁻¹ ⋅ ap_{inl′}(Com_l(z)) ⋅ ap_{inl′}(Com_l(z))⁻¹ ⋅ ap_{inl′∘f′}(H(z)) ⋅ Glue′(γ′(z)) ⋅ ap_{inr′∘g′}(H(z))⁻¹ ⋅ ap_{inr′}(Com_r(z))

We can cancel the inverses of ap_{inl′}(Com_l(z)), and this leaves us with

(6) ap_{inr′}(Com_r(z))⁻¹ ⋅ Glue′(γ(z))⁻¹ ⋅ ap_{inl′∘f′}(H(z)) ⋅ Glue′(γ′(z)) ⋅ ap_{inr′∘g′}(H(z))⁻¹ ⋅ ap_{inr′}(Com_r(z)).

Lastly we note that

Glue′ : Π_{c′:C′} inl′(f′(c′)) = inr′(g′(c′))

and since we have for every z : C a path

H(z) : γ(z) =_{C′} γ′(z),

we can define a path

apd_{Glue′}(H(z)) : ap_{inl′∘f′}(H(z))⁻¹ ⋅ Glue′(γ(z)) ⋅ ap_{inr′∘g′}(H(z)) = Glue′(γ′(z)),

where we used theorem 3.26. Plugging this into (6) leaves us with

ap_{inr′}(Com_r(z))⁻¹ ⋅ Glue′(γ(z))⁻¹ ⋅ ap_{inl′∘f′}(H(z)) ⋅ ap_{inl′∘f′}(H(z))⁻¹ ⋅ Glue′(γ(z)) ⋅ ap_{inr′∘g′}(H(z)) ⋅ ap_{inr′∘g′}(H(z))⁻¹ ⋅ ap_{inr′}(Com_r(z)).

Canceling out all inverses shows that the left hand side of (5) equals refl_{inr′(β(y))}. □

Corollary 4.26. Assume we are given H : α ∼ α′, I : β ∼ β′ and J : γ ∼ γ′ together with commutative squares:

    A <--f--- C ---g--> B          A <--f--- C ---g--> B
    |         |         |          |         |         |
    α  Com_l  γ  Com_r  β          α′ Com′_l γ′ Com′_r β′
    v         v         v          v         v         v
    A′ <-f′-- C′ --g′-> B′         A′ <-f′-- C′ --g′-> B′

If we additionally have for every z : C proofs that

• Com′_l(z) = ap_{f′}(J(z))⁻¹ ⋅ Com_l(z) ⋅ H(f(z)) and
• Com′_r(z) = ap_{g′}(J(z))⁻¹ ⋅ Com_r(z) ⋅ I(g(z)),

then it holds for the induced functions that

α ⊔^γ β ∼ α′ ⊔^{γ′} β′.

Proof. Immediate from lemmas 4.24 and 4.25. □

Next we want to define a composition operation on pushout functions.

Lemma 4.27. Assume we are given four commutative squares:

    A <--f--- C ---g--> B
    |         |         |
    α  Com_l  γ  Com_r  β
    v         v         v
    A′ <-f′-- C′ --g′-> B′
    |         |         |
    α′ Com′_l γ′ Com′_r β′
    v         v         v
    A″ <-f″-- C″ --g″-> B″

We define proofs of commutativity by:

• Com″_l(z) :≡ Com′_l(γ(z)) ⋅ ap_{α′}(Com_l(z))
• Com″_r(z) :≡ Com′_r(γ(z)) ⋅ ap_{β′}(Com_r(z))

We let (α′ ∘ α) ⊔^{γ′∘γ} (β′ ∘ β) be the pushout function of the commutative squares:

    A <---f--- C ---g---> B
    |          |          |
  α′∘α Com″_l γ′∘γ Com″_r β′∘β
    v          v          v
    A″ <--f″-- C″ --g″--> B″

Then we have:

(α′ ⊔^{γ′} β′) ∘ (α ⊔^γ β) ∼ (α′ ∘ α) ⊔^{γ′∘γ} (β′ ∘ β)

Proof. We want to apply the induction principle of A ⊔^C B to prove the claim. So let us consider the type family

P : A ⊔^C B → 𝒰,

defined by the equation

P(w) :≡ ((α′ ⊔^{γ′} β′) ∘ (α ⊔^γ β)(w) = ((α′ ∘ α) ⊔^{γ′∘γ} (β′ ∘ β))(w)).

If w ≡ inl(x) for some x : A, we compute

(α′ ⊔^{γ′} β′) ∘ (α ⊔^γ β)(inl(x)) ≡ (α′ ⊔^{γ′} β′)(inl′(α(x))) ≡ inl″(α′ ∘ α(x))

and also

((α′ ∘ α) ⊔^{γ′∘γ} (β′ ∘ β))(inl(x)) ≡ inl″(α′ ∘ α(x)).

Similarly, if w ≡ inr(y) for some y : B, we compute

(α′ ⊔^{γ′} β′) ∘ (α ⊔^γ β)(inr(y)) ≡ (α′ ⊔^{γ′} β′)(inr′(β(y))) ≡ inr″(β′ ∘ β(y))

and also

((α′ ∘ α) ⊔^{γ′∘γ} (β′ ∘ β))(inr(y)) ≡ inr″(β′ ∘ β(y)).

Therefore we have that

refl_{inl″(α′∘α(x))} : P(inl(x)),

and in the same way we conclude

refl_{inr″(β′∘β(y))} : P(inr(y)).

To finish the inductive step, we need to show for every z : C that:

(7) transport^P(Glue(z), refl_{inl″(α′∘α(x))}) = refl_{inr″(β′∘β(y))}

By theorem 3.26 this means to show:

(8) ap_{(α′⊔^{γ′}β′)∘(α⊔^γβ)}(Glue(z))⁻¹ ⋅ ap_{(α′∘α)⊔^{γ′∘γ}(β′∘β)}(Glue(z)) = refl_{inr″(β′∘β(y))}

To proceed, we want to compute

(i) ap_{(α′⊔^{γ′}β′)∘(α⊔^γβ)}(Glue(z)) and
(ii) ap_{(α′∘α)⊔^{γ′∘γ}(β′∘β)}(Glue(z)).

We start by checking the definition of pushout functions from lemma 4.21, which gives that ap_{α⊔^γβ}(Glue(z)) equals

ap_{inl′}(Com_l(z))⁻¹ ⋅ Glue′(γ(z)) ⋅ ap_{inr′}(Com_r(z)).

Since (α′ ⊔^{γ′} β′) ∘ inl′ ≡ inl″ ∘ α′ and similarly (α′ ⊔^{γ′} β′) ∘ inr′ ≡ inr″ ∘ β′, we see that

ap_{α′⊔^{γ′}β′}(ap_{inl′}(Com_l(z))⁻¹ ⋅ Glue′(γ(z)) ⋅ ap_{inr′}(Com_r(z))) =
= ap_{inl″∘α′}(Com_l(z))⁻¹ ⋅ ap_{α′⊔^{γ′}β′}(Glue′(γ(z))) ⋅ ap_{inr″∘β′}(Com_r(z)).

We use again the definition of pushout functions, by which ap_{α′⊔^{γ′}β′}(Glue′(γ(z))) equals:

ap_{inl″}(Com′_l(γ(z)))⁻¹ ⋅ Glue″(γ′(γ(z))) ⋅ ap_{inr″}(Com′_r(γ(z))).

Putting everything together, we see that (i) equals

ap_{inl″∘α′}(Com_l(z))⁻¹ ⋅ ap_{inl″}(Com′_l(γ(z)))⁻¹ ⋅ Glue″(γ′(γ(z))) ⋅ ap_{inr″}(Com′_r(γ(z))) ⋅ ap_{inr″∘β′}(Com_r(z)).

Next, we check again the definition of pushout functions from lemma 4.21, by which (ii) equals

(9) ap_{inl″}(Com″_l(z))⁻¹ ⋅ Glue″(γ′(γ(z))) ⋅ ap_{inr″}(Com″_r(z))

and then we use the definitions of Com″_l and Com″_r, by which we have:

• ap_{inl″}(Com″_l(z)) = ap_{inl″}(Com′_l(γ(z))) ⋅ ap_{inl″}(ap_{α′}(Com_l(z)))
• ap_{inr″}(Com″_r(z)) = ap_{inr″}(Com′_r(γ(z))) ⋅ ap_{inr″}(ap_{β′}(Com_r(z)))

Plugging these definitions into (9) shows that (ii) equals:

ap_{inl″∘α′}(Com_l(z))⁻¹ ⋅ ap_{inl″}(Com′_l(γ(z)))⁻¹ ⋅ Glue″(γ′(γ(z))) ⋅ ap_{inr″}(Com′_r(γ(z))) ⋅ ap_{inr″∘β′}(Com_r(z)).

We now see that (i) equals (ii), and this means that the left hand side of (8) equals a concatenation of inverse paths, which, by lemma 3.1, is equal to refl_{inr″(β′∘β(y))}. □

We noted above that it is generally not enough to assume that all vertical maps of a pushout function are identities to conclude that this pushout function is the identity. The following lemma tells us, however, that such a pushout function is always an equivalence.

Lemma 4.28. Assume we are given types A, B, C : 𝒰 and we let α be the pushout function of the commutative squares:

    A <--f--- C ---g--> B
    |         |         |
   id_A Com_l id_C Com_r id_B
    v         v         v
    A <--f--- C ---g--> B

Then there exists a function β that is a quasi-inverse of α.

Proof. Firstly we note that we can define for every z : C proofs of commutativity

Com′_l(z) :≡ Com_l(z)⁻¹ and Com′_r(z) :≡ Com_r(z)⁻¹,

and these proofs of commutativity also make the given squares commute and therefore give a pushout function β. By lemma 4.27, α ∘ β is pointwise equal to the pushout function of a square in which the vertical maps are identities and the proofs of commutativity are pointwise equal to reflexivity. We use the induction principle of A ⊔^C B to prove that α ∘ β is pointwise equal to the identity function. To this end we consider the type family

P : A ⊔^C B → 𝒰,

defined by the equation

P(w) :≡ (α ∘ β(w) = id_{A⊔^CB}(w)).

It is obvious that over the points we can prove the claim by reflexivity. Then we need to show for every z : C that:

transport^P(Glue(z), refl_{inl(f(z))}) = refl_{inr(g(z))}

By theorem 3.26 this means to show:

(10) ap_{α∘β}(Glue(z))⁻¹ ⋅ Glue(z) = refl_{inr(g(z))}

But, since now the proofs of commutativity are pointwise equal to reflexivity, by the definition of pushout functions we have

ap_{α∘β}(Glue(z))⁻¹ = Glue(z)⁻¹,

hence we see that the left hand side of (10) equals a concatenation of inverse paths, which, by lemma 3.1, is equal to refl_{inr(g(z))}. The same reasoning applies to β ∘ α. □

4.3. Homotopy invariance of pushout types. We are now able to show that pushout types are indeed invariant under homotopy.

Theorem 4.29. Assume we are given two commutative squares as in lemma 4.21, i.e. the span A ← C → B with its pushout A ⊔^C B (with inl, inr and Glue), the span A′ ← C′ → B′ with its pushout A′ ⊔^{C′} B′ (with inl′, inr′ and Glue′), and vertical maps α, β, γ with proofs Com_l and Com_r:

    A <--f--- C ---g--> B
    |         |         |
    α  Com_l  γ  Com_r  β
    v         v         v
    A′ <-f′-- C′ --g′-> B′

Then, if α, β and γ are equivalences, there is also an equivalence

A ⊔^C B ≃ A′ ⊔^{C′} B′.

Proof. Since α, β and γ are equivalences, by lemma 3.15 there are maps α′ : A′ → A, β′ : B′ → B and γ′ : C′ → C together with proofs:

• H : α′ ∘ α ∼ id_A
• H′ : α ∘ α′ ∼ id_{A′}
• I : β′ ∘ β ∼ id_B
• I′ : β ∘ β′ ∼ id_{B′}
• J : γ′ ∘ γ ∼ id_C
• J′ : γ ∘ γ′ ∼ id_{C′}

We claim that from H, I and J′ we can define proofs of commutativity that make the following squares commute:

    A′ <-f′-- C′ --g′-> B′
    |         |         |
    α′        γ′        β′
    v         v         v
    A <--f--- C ---g--> B

To this end, we firstly note that for every z : C we have that

ap_{α′}(Com_l(z)) : α′(f′(γ(z))) = α′(α(f(z)))

and

H(f(z)) : α′ ∘ α(f(z)) = f(z).

Therefore we can define for every z : C a path:

ap_{α′}(Com_l(z)) ⋅ H(f(z)) : α′(f′(γ(z))) = f(z)

Next we note that for every z′ : C′ we have that

J′(z′) : γ ∘ γ′(z′) = z′

and therefore

ap_{α′∘f′}(J′(z′)) : α′(f′(γ(γ′(z′)))) = α′(f′(z′)).

In particular, since for every z′ : C′ we have that γ′(z′) : C, we can define for every z′ : C′ a path of type

f(γ′(z′)) = α′(f′(z′))

by

H(f(γ′(z′)))⁻¹ ⋅ ap_{α′}(Com_l(γ′(z′)))⁻¹ ⋅ ap_{α′∘f′}(J′(z′)).

To summarize, we can make the left square of the above diagram commute by:

Com′_l(z′) :≡ H(f(γ′(z′)))⁻¹ ⋅ ap_{α′}(Com_l(γ′(z′)))⁻¹ ⋅ ap_{α′∘f′}(J′(z′))

Similarly, we have for every z : C that

ap_{β′}(Com_r(z)) : β′(g′(γ(z))) = β′(β(g(z)))

and

I(g(z)) : β′(β(g(z))) = g(z).

Therefore we can define for every z : C a path:

I(g(z))⁻¹ ⋅ ap_{β′}(Com_r(z))⁻¹ : g(z) = β′(g′(γ(z)))

Then we use again that for every z′ : C′ we have that

J′(z′) : γ ∘ γ′(z′) = z′

and therefore

ap_{β′∘g′}(J′(z′)) : β′(g′(γ(γ′(z′)))) = β′(g′(z′)).

Then, since for every z′ : C′ we have γ′(z′) : C, we can define for every z′ : C′ a path of type

g(γ′(z′)) = β′(g′(z′))

by

I(g(γ′(z′)))⁻¹ ⋅ ap_{β′}(Com_r(γ′(z′)))⁻¹ ⋅ ap_{β′∘g′}(J′(z′)).

To summarize, we can make the right square of the above diagram commute by:

Com′_r(z′) :≡ I(g(γ′(z′)))⁻¹ ⋅ ap_{β′}(Com_r(γ′(z′)))⁻¹ ⋅ ap_{β′∘g′}(J′(z′))

Putting everything together, we get the following commutative squares:

    A <--f--- C ---g--> B
    |         |         |
    α  Com_l  γ  Com_r  β
    v         v         v
    A′ <-f′-- C′ --g′-> B′
    |         |         |
    α′ Com′_l γ′ Com′_r β′
    v         v         v
    A <--f--- C ---g--> B

By lemma 4.27, there is a proof

(α′ ⊔^{γ′} β′) ∘ (α ⊔^γ β) ∼ (α′ ∘ α) ⊔^{γ′∘γ} (β′ ∘ β),

where the right hand side is the pushout function of two commutative squares whose proofs of commutativity are given by:

• Com″_l(z) :≡ Com′_l(γ(z)) ⋅ ap_{α′}(Com_l(z))
• Com″_r(z) :≡ Com′_r(γ(z)) ⋅ ap_{β′}(Com_r(z))

By corollary 4.26, from H, I and J we get a homotopy from (α′ ∘ α) ⊔^{γ′∘γ} (β′ ∘ β) to the pushout function of two commutative squares in which all vertical maps are identities. By lemma 4.28, there is a function δ_r such that (α′ ⊔^{γ′} β′) ∘ (α ⊔^γ β) ∘ δ_r is the identity function.

By the same reasoning, where we now have to use H′, I′ and J′, we see that also (α ⊔^γ β) ∘ (α′ ⊔^{γ′} β′) is equal to the pushout function of commutative squares in which all vertical maps are identities; hence by lemma 4.28 there is a function δ_l such that δ_l ∘ (α ⊔^γ β) ∘ (α′ ⊔^{γ′} β′) is the identity function. But this means (α ⊔^γ β) ∘ δ_r is a right inverse to (α′ ⊔^{γ′} β′) and δ_l ∘ (α ⊔^γ β) is a left inverse to (α′ ⊔^{γ′} β′). This shows that (α′ ⊔^{γ′} β′) is an equivalence. □

5. The wedge sum

In example 4.15 we presented the wedge sum A ∨ B of two pointed types A and B. It was defined as the pushout of the square:

    1 ---∗_B--> B
    |           |
   ∗_A   Glue  inr
    v           v
    A ---inl--> A ∨ B

We made A ∨ B a pointed type by setting ∗_{A∨B} :≡ inl(∗_A). From now on we will shift our attention to the category of pointed types and base-point preserving functions (see 4.11 and 4.13), i.e. every type A : 𝒰 will come along with a specified base-point ∗_A : A and every map f : A → B will come along with a proof p_f : ∗_B = f(∗_A) that f is base-point preserving.

Remark 5.1. In example 4.15 we noted that for any two pointed types A, B : 𝒰 also A ∨ B is pointed, where we chose ∗_{A∨B} :≡ inl(∗_A) as its base-point. Similarly, we have proofs that inl_{A∨B} and inr_{A∨B} are base-point preserving, by refl_{inl(∗_A)} ≡ refl_{∗_{A∨B}} and glue_{A∨B} respectively.

Remark 5.2. Assume we are given three pointed types A, B, C : 𝒰 and base-point preserving functions (f, p_f) : Map∗(A, B) and (g, p_g) : Map∗(B, C). Then we can define a composition operation

_ ∘ _ : Map∗(B, C) × Map∗(A, B) → Map∗(A, C)

by

(g, p_g) ∘ (f, p_f) :≡ (g ∘ f, p_g ⋅ ap_g(p_f)).

We check that

ap_g(p_f) : g(∗_B) = g(f(∗_A))

and

p_g : ∗_C = g(∗_B)

and therefore

p_g ⋅ ap_g(p_f) : ∗_C = g(f(∗_A))

has the appropriate type. We note that this is exactly how we defined the proofs of commutativity in lemma 4.27.
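In a setting with ordinary propositional equality the composition of Remark 5.2 can be sketched directly; here `.trans` plays the role of path concatenation ⋅ and `congrArg` the role of ap_g (the structure names are our own, as before):

```lean
structure PtdType where
  carrier : Type
  pt : carrier

structure PtdMap (A B : PtdType) where
  toFun : A.carrier → B.carrier
  ptEq : B.pt = toFun A.pt

-- (g, p_g) ∘ (f, p_f) :≡ (g ∘ f, p_g ⋅ ap_g(p_f)):
-- p_g : ∗_C = g(∗_B) and ap_g(p_f) : g(∗_B) = g(f(∗_A)) concatenate
-- to a proof that g ∘ f is base-point preserving.
def PtdMap.comp {A B C : PtdType} (g : PtdMap B C) (f : PtdMap A B) :
    PtdMap A C :=
  ⟨fun a => g.toFun (f.toFun a), g.ptEq.trans (congrArg g.toFun f.ptEq)⟩
```

The order of the concatenation mirrors the text: first the proof for g, then the image under g of the proof for f.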

Lemma 5.3. Assume we are given two pointed types A, B : 𝒰. Then there is an equivalence:

A ∨ B ≃ B ∨ A

Proof. Firstly, by applying the recursion principle of A ∨ B, we define a function Φ : A ∨ B → B ∨ A:

• Φ(inl_{A∨B}(x)) :≡ inr_{B∨A}(x)
• Φ(inr_{A∨B}(y)) :≡ inl_{B∨A}(y)
• ap_Φ(glue_{A∨B}) := glue_{B∨A}⁻¹

Similarly we define a function Ψ : B ∨ A → A ∨ B:

• Ψ(inl_{B∨A}(y)) :≡ inr_{A∨B}(y)
• Ψ(inr_{B∨A}(x)) :≡ inl_{A∨B}(x)
• ap_Ψ(glue_{B∨A}) := glue_{A∨B}⁻¹

We want to show that Φ and Ψ are mutually inverse. It is obvious from the computation rules that

refl_{inl_{A∨B}(x)} : Ψ ∘ Φ(inl_{A∨B}(x)) = inl_{A∨B}(x)

and that

refl_{inr_{A∨B}(y)} : Ψ ∘ Φ(inr_{A∨B}(y)) = inr_{A∨B}(y).

Then it is left to show that

transport^{w ↦ Ψ∘Φ(w) = w}(glue_{A∨B}, refl_{inl_{A∨B}(x)}) = refl_{inr_{A∨B}(y)}.

By theorem 3.26 this means to show:

ap_{Ψ∘Φ}(glue_{A∨B})⁻¹ ⋅ glue_{A∨B} = refl_{inr_{A∨B}(y)}.

From the computation rules it is immediate that

ap_{Ψ∘Φ}(glue_{A∨B})⁻¹ = glue_{A∨B}⁻¹

and hence the claim follows from lemma 3.1. The other direction is completely analogous. □

In the main result of this section we establish that the wedge sum is the coproduct in the category of pointed types and base-point preserving maps

(see 5.8). The coproduct of two objects X and Y in some category is the object X ⊔ Y together with two morphisms injl : X → X ⊔ Y and injr : Y → X ⊔ Y, such that for any object Z in the category and any two morphisms f : X → Z and g : Y → Z, there is a unique morphism f ⊔ g : X ⊔ Y → Z for which

(f ⊔ g) ∘ injl = f and (f ⊔ g) ∘ injr = g.

We continue our discussion of the wedge sum by considering a special instance of pushout functions.
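The universal property just stated can be drawn as a commutative diagram; the following LaTeX (tikzcd) snippet is a standard rendering of it, not taken from the thesis:

```latex
\[
\begin{tikzcd}[column sep=large]
X \arrow[r, "\mathrm{injl}"] \arrow[dr, "f"'] &
X \sqcup Y \arrow[d, dashed, "\exists!\, f \sqcup g"] &
Y \arrow[l, "\mathrm{injr}"'] \arrow[dl, "g"] \\
& Z &
\end{tikzcd}
\]
```

The dashed arrow indicates that f ⊔ g is the unique morphism making both triangles commute.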

Lemma 5.4. Assume we are given two pointed types A, B : 𝒰 together with base-point preserving maps (α, p_α) : Map∗(A, A′) and (β, p_β) : Map∗(B, B′). We let:

• inl : A → A ∨ B
• inr : B → A ∨ B
• glue : inl(∗_A) =_{A∨B} inr(∗_B)
• inl′ : A′ → A′ ∨ B′
• inr′ : B′ → A′ ∨ B′
• glue′ : inl′(∗_{A′}) =_{A′∨B′} inr′(∗_{B′})

There is a canonical function α ∨ β : A ∨ B → A′ ∨ B′, called the wedge of α and β, such that:

• (α ∨ β) ∘ inl ≡ inl′ ∘ α
• (α ∨ β) ∘ inr ≡ inr′ ∘ β
• ap_{α∨β}(glue) = ap_{inl′}(p_α)⁻¹ ⋅ glue′ ⋅ ap_{inr′}(p_β)

What is more, ap_{inl′}(p_α) proves that α ∨ β is base-point preserving.

Proof. We define α ∨ β to be the pushout function of the two commutative squares:

    A <--∗_A-- 1 --∗_B--> B
    |          |          |
    α    p_α   |    p_β   β
    v          v          v
    A′ <-∗_{A′}- 1 -∗_{B′}-> B′

This function has the appropriate computation rules by lemma 4.21. Lastly we check that

p_α : ∗_{A′} = α(∗_A)

and hence

ap_{inl′}(p_α) : inl′(∗_{A′}) = inl′(α(∗_A)).

Since by definition

∗_{A′∨B′} ≡ inl′(∗_{A′})

and

(α ∨ β)(∗_{A∨B}) ≡ (α ∨ β)(inl(∗_A)) ≡ inl′(α(∗_A)),

it holds indeed that ap_{inl′}(p_α) proves that (α ∨ β) is base-point preserving. □

Next, we consider the pushout function δ : A ∨ A → A ⊔^A A of the commutative squares:

    A <--∗_A-- 1 --∗_A--> A
    |          |          |
   id_A       ∗_A        id_A      (both proofs of commutativity: refl_{∗_A})
    v          v          v
    A <-id_A-- A --id_A-> A

We make A ⊔^A A a pointed type by setting inl(∗_A) as its base-point. By remark 4.2 there is an equivalence Φ : A ⊔^A A → A, which is base-point preserving by refl_{∗_A}, and this gives us a function

(11) Col :≡ Φ ∘ δ

of type A ∨ A → A, such that:

• Col ∘ inl ≡ id_A
• Col ∘ inr ≡ id_A
• ap_{Col}(glue) = refl_{∗_A}

What is more, Col is base-point preserving by refl_{∗_A}. With this at hand, to define a function h : A ∨ B → C, it suffices to give two base-point preserving functions f : A → C and g : B → C, since we may wedge f and g and compose with Col. We introduce a special notation and set:

f ∨^c g :≡ Col ∘ (f ∨ g)
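The function Col is the wedge-analogue of the codiagonal ∇ : A ⊕ A → A on ordinary sums, which folds both copies by the identity. A small Lean 4 illustration of the same copairing pattern on sums (not the pointed setting of this section; `Sum.map` is assumed from the standard library):

```lean
-- The codiagonal on a sum: both injections are sent to the identity,
-- mirroring Col ∘ inl ≡ id_A and Col ∘ inr ≡ id_A.
def codiag {A : Type} : A ⊕ A → A
  | Sum.inl a => a
  | Sum.inr a => a

-- The analogue of f ∨^c g :≡ Col ∘ (f ∨ g) for sums: apply f and g
-- componentwise, then fold.
def copairViaFold {A B C : Type} (f : A → C) (g : B → C) :
    A ⊕ B → C :=
  codiag ∘ Sum.map f g
```

For sums this is of course just case analysis; the point of the wedge version is that the path glue must also be handled, which is what Col's rule ap_{Col}(glue) = refl takes care of.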

Lemma 5.5. Assume we are given two pointed types A, B : 𝒰. Then we can define the left and right projections π_l : A ∨ B → A and π_r : A ∨ B → B, such that:

• π_l(inl(x)) ≡ x
• π_l(inr(y)) ≡ ∗_A
• ap_{π_l}(glue) = refl_{∗_A}
• π_r(inl(x)) ≡ ∗_B
• π_r(inr(y)) ≡ y
• ap_{π_r}(glue) = refl_{∗_B}

What is more, π_l is base-point preserving by refl_{∗_A} and π_r is base-point preserving by refl_{∗_B}.

Proof. We can define π_l as the collapsed pushout function of the commutative squares

    A <--∗_A-- 1 --∗_B--> B
    |          |          |
   id_A       ∗_A        ∗_A      (both proofs of commutativity: refl_{∗_A})
    v          v          v
    A <-id_A-- A --id_A-> A

and π_r as the collapsed pushout function of the commutative squares

    A <--∗_A-- 1 --∗_B--> B
    |          |          |
    ∗_B       ∗_B        id_B     (both proofs of commutativity: refl_{∗_B})
    v          v          v
    B <-id_B-- B --id_B-> B

These functions have the appropriate computation rules by lemma 4.21. Lastly we compute

π_l(∗_{A∨B}) ≡ π_l(inl(∗_A)) ≡ ∗_A

and

π_r(∗_{A∨B}) ≡ π_r(inl(∗_A)) ≡ ∗_B,

therefore refl_{∗_A} and refl_{∗_B} prove that π_l and π_r are base-point preserving, respectively. □

Lemma 5.6. Assume we are given two pointed types A, B : 𝒰. We let

inl_{A∨B} ∨^c inr_{A∨B} be the collapsed pushout function of the commutative squares with proofs of commutativity given by

p_{inl_{A∨B}} :≡ refl_{inl_{A∨B}(∗_A)}

and

p_{inr_{A∨B}} :≡ glue_{A∨B}.

Then it holds that:

inl_{A∨B} ∨^c inr_{A∨B} ∼ id_{A∨B}

Proof. We use the induction principle of A ∨ B to prove the claim. To this end, we let:

P(w) :≡ ((inl_{A∨B} ∨^c inr_{A∨B})(w) = w)

We compute

(inl_{A∨B} ∨^c inr_{A∨B})(inl_{A∨B}(x)) ≡ Col ∘ inl_{(A∨B)∨(A∨B)} ∘ inl_{A∨B}(x) ≡ inl_{A∨B}(x)

and

(inl_{A∨B} ∨^c inr_{A∨B})(inr_{A∨B}(y)) ≡ Col ∘ inr_{(A∨B)∨(A∨B)} ∘ inr_{A∨B}(y) ≡ inr_{A∨B}(y).

Therefore we can set

refl_{inl_{A∨B}(x)} : P(inl_{A∨B}(x))

and

refl_{inr_{A∨B}(y)} : P(inr_{A∨B}(y)).

Then, we need to show:

transport^P(glue_{A∨B}, refl_{inl_{A∨B}(∗_A)}) = refl_{inr_{A∨B}(∗_B)}

By theorem 3.26 this means to show:

(12) ap_{inl_{A∨B}∨^cinr_{A∨B}}(glue_{A∨B})⁻¹ ⋅ glue_{A∨B} = refl_{inr_{A∨B}(∗_B)}

By definition, inl_{A∨B} and inr_{A∨B} are base-point preserving by

p_{inl_{A∨B}} :≡ refl_{inl_{A∨B}(∗_A)} and p_{inr_{A∨B}} :≡ glue_{A∨B}

respectively. Therefore, by the computation rules for pushout functions from lemma 4.21, we have that ap_{inl_{A∨B}∨inr_{A∨B}}(glue_{A∨B}) equals:

(13) glue_{(A∨B)∨(A∨B)} ⋅ ap_{inr_{(A∨B)∨(A∨B)}}(glue_{A∨B})

Lastly we note that by the definition of Col in (11) we have that

ap_{Col}(glue_{(A∨B)∨(A∨B)}) = refl_{∗_{A∨B}}

and since Col ∘ inr_{(A∨B)∨(A∨B)} is the identity function we have that

ap_{Col∘inr_{(A∨B)∨(A∨B)}}(glue_{A∨B}) = glue_{A∨B}.

But this means that

ap_{Col}(glue_{(A∨B)∨(A∨B)} ⋅ ap_{inr_{(A∨B)∨(A∨B)}}(glue_{A∨B})) =
= ap_{Col}(glue_{(A∨B)∨(A∨B)}) ⋅ ap_{Col}(ap_{inr_{(A∨B)∨(A∨B)}}(glue_{A∨B})) =
= refl_{∗_{A∨B}} ⋅ glue_{A∨B} = glue_{A∨B},

where in the first step we used lemma 3.4 and in the last step we used lemma 3.1. But this means that the left hand side of (12) is equal to a concatenation of inverse paths and therefore equal to refl_{inr_{A∨B}(∗_B)}, again by lemma 3.1. □

Lemma 5.7. Assume we are given two pointed types A, B : 𝒰 and a base-point preserving map (f, p_f) : Map∗(A, B). Then we have:

f ∘ Col_A ∼ f ∨^c f

Proof. We aim to prove the claim by applying the induction principle for A ∨ A. So let us consider the type family

R : A ∨ A → 𝒰,

defined by the equation

R(w) :≡ (f ∘ Col_A(w) = Col_B ∘ (f ∨ f)(w)).

By the definition of Col in (11) we have that

Col_A ∘ inl_{A∨A} ≡ id_A

and therefore

f ∘ Col_A ∘ inl_{A∨A} ≡ f.

Similarly, by the definition of f ∨ f in lemma 5.4, we have that

(f ∨ f) ∘ inl_{A∨A} ≡ inl_{B∨B} ∘ f

and, using again the definition of Col in (11), that

Col_B ∘ inl_{B∨B} ≡ id_B.

Putting the last equations together gives us

Col_B ∘ (f ∨ f) ∘ inl_{A∨A} ≡ Col_B ∘ inl_{B∨B} ∘ f ≡ f

and this means that for every x : A we can set

refl_{f(x)} : R(inl_{A∨A}(x)).

Similarly

Col_B ∘ (f ∨ f) ∘ inr_{A∨A} ≡ Col_B ∘ inr_{B∨B} ∘ f ≡ f

and this means that for every x : A we can set

refl_{f(x)} : R(inr_{A∨A}(x)).

Then, what is left is to show that

transport^R(glue_{A∨A}, refl_{f(∗_A)}) = refl_{f(∗_A)}.

By theorem 3.26 this means to show:

(14) ap_{f∘Col_A}(glue_{A∨A})⁻¹ ⋅ ap_{f∨^cf}(glue_{A∨A}) = refl_{f(∗_A)}

But the definition of Col in (11) gives

ap_{Col_A}(glue_{A∨A}) = refl_{∗_A}

and hence

ap_{f∘Col_A}(glue_{A∨A})⁻¹ ⋅ ap_{f∨^cf}(glue_{A∨A}) =
= ap_f(refl_{∗_A})⁻¹ ⋅ ap_{f∨^cf}(glue_{A∨A}) =
= ap_{f∨^cf}(glue_{A∨A}),

where we used lemmas 3.4 and 3.1. Lastly we check again the definition of Col, by which we have

• Col ∘ inl ≡ id_B,
• Col ∘ inr ≡ id_B and
• ap_{Col}(glue) = refl_{∗_B},

and then we use the computation rules for the wedge of functions, provided in lemma 5.4, which give us that

ap_{f∨f}(glue_{A∨A}) = ap_{inl_{B∨B}}(p_f)⁻¹ ⋅ glue_{B∨B} ⋅ ap_{inr_{B∨B}}(p_f)

and therefore

ap_{f∨^cf}(glue_{A∨A}) =
= ap_{Col}(ap_{inl_{B∨B}}(p_f)⁻¹ ⋅ glue_{B∨B} ⋅ ap_{inr_{B∨B}}(p_f)) =
= p_f⁻¹ ⋅ p_f.

This shows that the left hand side of (14) is equal to a concatenation of inverse paths, and the claim is proven by lemma 3.1. □

We can now prove that the wedge sum is the coproduct in the category of pointed types and base-point preserving maps.

Proposition 5.8. The wedge sum is the coproduct in the category of pointed types and base-point preserving maps.

Proof. For any two pointed maps f : A → C and g : B → C, we have that f ∨^c g : A ∨ B → C has the appropriate type. It is left to show uniqueness. So assume we are given pointed types A, B, C : 𝒰 together with base-point preserving functions (f, p_f) : Map∗(A, C), (g, p_g) : Map∗(B, C) and (α, p_α) : Map∗(A ∨ B, C), such that:

• (α, pα) ◦ (inl, glue) =Map∗(A,C) (f, pf )

∗ • (α, pα) ◦ (inr, refl∗A∨B ) =Map (B,C) (g, pg) Now firstly we recall that

pinl ≡ refl∗A∨B and

pinr ≡ glue. We then use 5.2, by which we have that  (α, pα) ◦ (inl, refl∗A∨B ) ≡ α ◦ inl, pα and  (α, pα) ◦ (inr, glue) ≡ α ◦ inr, pα  apα(glue) . By theorem 3.16 we get that  α ◦ inl, pα =Map∗(A,C) (f, pf ) is equivalent to

Σ_{p : α∘inl = f} transport^{h ↦ ∗_C = h(∗_A)}(p, p_α) = p_f.

Hence we may assume that we are given an inhabitant p_l : α ∘ inl = f, such that we have that

transport^{h ↦ ∗_C = h(∗_A)}(p_l, p_α) = p_f.

We use theorem 3.26, lemma 3.11 and remark 3.5, by which we have

transport^{h ↦ ∗_C = h(∗_A)}(p_l, p_α) = p_α • ap_{eval(∗_A)}(p_l) = p_α • happly(p_l)(∗_A).

Therefore, we get that

(15) p_α • happly(p_l)(∗_A) = p_f.

We note that

happly(p_l) : Π_{x:A} α ∘ inl(x) = f(x).

Similarly

(α ∘ inr, p_α • ap_α(glue)) =_{Map∗(B,C)} (g, p_g)

is equivalent to

Σ_{p : α∘inr = g} transport^{h ↦ ∗_C = h(∗_B)}(p, p_α • ap_α(glue)) = p_g

and therefore we may assume that we are given p_r : α ∘ inr = g, such that

transport^{h ↦ ∗_C = h(∗_B)}(p_r, p_α • ap_α(glue)) = p_g.

We use again theorem 3.26, lemma 3.11 and remark 3.5, by which we have

transport^{h ↦ ∗_C = h(∗_B)}(p_r, p_α • ap_α(glue)) = p_α • ap_α(glue) • ap_{eval(∗_B)}(p_r) = p_α • ap_α(glue) • happly(p_r)(∗_B).

Therefore, when we abbreviate p_{α∘inr} :≡ p_α • ap_α(glue), we have

(16) p_{α∘inr} • happly(p_r)(∗_B) = p_g

and we note that

happly(p_r) : Π_{y:B} α ∘ inr(y) = g(y).

By lemma 5.6, inl ∨c inr is the identity function and using lemma 5.7, we get:

α ∼ α ◦ (inl ∨c inr) ∼ (α ◦ inl) ∨c (α ◦ inr)

Lastly, by 4.24, equations (15) and (16) suffice to inhabit:

(α ◦ inl) ∨c (α ◦ inr) ∼ f ∨c g

Then, function extensionality shows uniqueness. 
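The existence and uniqueness parts of this proof can be condensed into one statement. The following display is only a summary in standard notation (the equivalence itself is not spelled out in this packaged form in the text):

```latex
% Universal property of the wedge sum: precomposition with the two
% pointed inclusions is an equivalence of types of pointed maps,
% with inverse given by the wedge of functions from lemma 5.4.
\[
  \bigl(- \circ (\mathrm{inl}, \mathrm{refl}),\; - \circ (\mathrm{inr}, \mathrm{glue})\bigr) \;:\;
  \mathrm{Map}^{\ast}(A \vee B,\, C) \;\simeq\;
  \mathrm{Map}^{\ast}(A, C) \times \mathrm{Map}^{\ast}(B, C),
  \qquad (f, g) \longmapsto f \vee^{c} g .
\]
```

This is the type-theoretic analogue of the classical fact that the wedge sum is the coproduct of pointed spaces.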

6. The smash product

Next we consider the smash product of pointed types. The definition of the smash product relies on the product A × B, from section 2, of two pointed types A and B. We make A × B a pointed type by choosing (∗_A, ∗_B) as its base-point.

Remark 6.1. We note that, if we are given base-point preserving maps (α, p_α) : Map∗(A, A′) and (β, p_β) : Map∗(B, B′), then, by corollary 3.18, we have that pair^=(p_α, p_β) proves that (α, β) is base-point preserving.

Definition 6.2. Assume we are given two pointed types A, B : U. We let inj_l : A → A × B be the left injection, defined by the equation

injl(x) :≡ (x, ∗B), which is base-point preserving by refl(∗A,∗B ) and similarly injr : B → A × B the right injection, defined by the equation

injr(y) :≡ (∗A, y), which is base-point preserving by refl(∗A,∗B ) as well. The smash product A∧B of A and B is the pushout of the following square:

A ∨ B --(inj_l ∨c inj_r)--> A × B
  |                            |
  |           Glue           [_,_]
  v                            v
  1 ----------inl---------> A ∧ B

We make A ∧ B a pointed type by setting ∗_{A∧B} :≡ inl(⋆).

Proposition 6.3. Assume we are given two pointed types A, B : U. Then there is an equivalence A ∧ B ≃ B ∧ A.

Proof. We denote by

• inj_l : A → A × B,
• inj_r : B → A × B,
• inj′_l : B → B × A and
• inj′_r : A → B × A

the respective injections, by

δ : A ∨ B → B ∨ A

the equivalence from lemma 5.3 and lastly define

γ : A × B → B × A

by the equation γ(x, y) :≡ (y, x), which obviously is an equivalence. By theorem 4.29, in order to prove the above proposition, it suffices to show that

Π_{w:A∨B} (inj′_l ∨c inj′_r) ∘ δ(w) = γ ∘ (inj_l ∨c inj_r)(w).

We compute

(inj′_l ∨c inj′_r) ∘ δ ∘ inl_{A∨B}(x) ≡ (inj′_l ∨c inj′_r)(inr_{B∨A}(x)) ≡ (∗_B, x),
(inj′_l ∨c inj′_r) ∘ δ ∘ inr_{A∨B}(y) ≡ (inj′_l ∨c inj′_r)(inl_{B∨A}(y)) ≡ (y, ∗_A),
γ ∘ (inj_l ∨c inj_r)(inl_{A∨B}(x)) ≡ γ(x, ∗_B) ≡ (∗_B, x) and
γ ∘ (inj_l ∨c inj_r)(inr_{A∨B}(y)) ≡ γ(∗_A, y) ≡ (y, ∗_A).

Therefore, over the points we can prove the claim by reflexivity. Then it is left to show that

transport^{w ↦ (inj′_l ∨c inj′_r)∘δ(w) = γ∘(inj_l ∨c inj_r)(w)}(glue_{A∨B}, refl_{(∗_B,∗_A)}) = refl_{(∗_B,∗_A)}.

But since the injections are base-point preserving by reflexivity and

ap_δ(glue_{A∨B}) = glue_{B∨A},

this follows immediately from theorem 3.26. □

Remark 6.4. The smash product can be thought of as the product where all pairs whose first coordinate is the base-point and all pairs whose second coordinate is the base-point are connected by a path to an external point (the base-point of the smash product). The ambiguity of this description lies in the fact that, as it stands, it gives two paths when both coordinates are the base-point. We want to establish that these paths have to be homotopic to each other. So assume we are in the situation of definition 6.2. We consider the type family P : A ∨ B → U, defined by the equation

P(w) :≡ (∗_{A∧B} = [_, _] ∘ (inj_l ∨c inj_r)(w)).

Then it holds that

Glue : Π_{w:A∨B} P(w)

and lemma 3.7 gives us an inhabitant

apd_Glue(glue) : transport^P(glue, Glue(inl(∗_A))) = Glue(inr(∗_B)).

From theorem 3.26 we get that the left hand side of the above equality is equal to

Glue(inl(∗_A)) • ap_{[_,_]∘(inj_l ∨c inj_r)}(glue).

We recall that inj_l and inj_r are base-point preserving by reflexivity and therefore, by the definition of Col in (11) and the wedge of functions from lemma 5.4, we have that

ap_{inj_l ∨c inj_r}(glue_{A∨B}) = refl_{(∗_A,∗_B)}.

But, using lemma 3.1, this means that

apd_Glue(glue) : Glue(inl(∗_A)) = Glue(inr(∗_B)).

Remark 6.5. We want to take a closer look at the recursion principle for the smash product. We recall from definition 4.1 that, in order to define a function of type A ∧ B → C for some given type C : U, we need to give a function f : A × B → C together with a dependent function

Com : Π_{w:A∨B} ∗_C = f ∘ (inj_l ∨c inj_r)(w).

In other words, given some function f : A × B → C, we need to provide a dependent function Com that makes the square

A ∨ B --(inj_l ∨c inj_r)--> A × B
  |                            |
  |           Com              f
  v                            v
  1 ----------------------->   C

commute. We claim that this can be done by giving the following data:

• For every x : A a proof BPP(x): ∗C = f(x, ∗B) that the function λy.f(x, y) is base-point preserving (in particular this means that f is base-point preserving).

• For every y : B a proof CON(y): ∗C = f(∗A, y), i.e. a proof that the constant function (∗C )B is pointwise equal to the function λy.f(∗A, y) (by function extensionality this also means that the functions are equal).

• A path of type BPP(∗_A) = CON(∗_B) between the proof that λy.f(∗_A, y) is base-point preserving and the proof that it is constant at ∗_B.

To see that this data really suffices, we note that the first two points are just the proof of commutativity over the points, that is we can set

• Com(inl(x)) :≡ BPP(x) and
• Com(inr(y)) :≡ CON(y).

To finish the definition of Com, it is then left to show that

transport^{w ↦ ∗_C = f∘(inj_l ∨c inj_r)(w)}(glue, BPP(∗_A)) = CON(∗_B)

and by theorem 3.26 this means to show

(17) BPP(∗_A) • ap_{f∘(inj_l ∨c inj_r)}(glue) = CON(∗_B).

As in the previous remark, since inj_l and inj_r are base-point preserving by reflexivity, the definition of Col in (11) and the wedge of functions from lemma 5.4 give us that

ap_{inj_l ∨c inj_r}(glue_{A∨B}) = refl_{(∗_A,∗_B)}.

Therefore, using lemma 3.1 and the computation rule from lemma 3.2, we get that

ap_{f∘(inj_l ∨c inj_r)}(glue_{A∨B})^{-1} • BPP(∗_A) = refl_{f(∗_A,∗_B)}^{-1} • BPP(∗_A) = BPP(∗_A)

and hence, in order to show (17), we need to inhabit

BPP(∗A) = CON(∗B). 91

This is however inhabited by assumption. To summarize, the above data suffices to define a proof of commutativity

Com : Π_{w:A∨B} ∗_C = f ∘ (inj_l ∨c inj_r)(w)

and thereby a function f′ : A ∧ B → C, such that

• f′(∗_{A∧B}) ≡ ∗_C
• f′([x, y]) ≡ f(x, y)
• ap_{f′}(Glue(w)) = Com(w)

In particular, f′ is a pointed function, where refl_{∗_C} proves that it is base-point preserving.
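The recursion principle discussed in this remark can be displayed compactly. The name rec below is only an abbreviation introduced here for the function obtained from the reduced data, and is not used elsewhere in the text:

```latex
% Recursion for the smash product from the reduced data of remark 6.5.
\[
  \frac{f : A \times B \to C \qquad
        \mathrm{BPP} : \textstyle\prod_{x:A} \ast_C = f(x, \ast_B) \qquad
        \mathrm{CON} : \textstyle\prod_{y:B} \ast_C = f(\ast_A, y) \qquad
        \mathrm{BPP}(\ast_A) = \mathrm{CON}(\ast_B)}
       {f' : A \wedge B \to C}
\]
with computation rules $f'(\ast_{A\wedge B}) \equiv \ast_C$,
$f'([x,y]) \equiv f(x,y)$ and
$\mathrm{ap}_{f'}(\mathrm{Glue}(w)) = \mathrm{Com}(w)$.
```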

Remark 6.6. Let us assume that we are in the same situation as in remark 6.5, i.e. that we are given pointed types A, B, C : U and some function f : A × B → C such that we have:

• For every x : A a proof BPP(x): ∗C = f(x, ∗B) that the function λy.f(x, y) is base-point preserving (in particular this means that f is base-point preserving).

• For every y : B a proof CON(y): ∗C = f(∗A, y), i.e. a proof that the constant function (∗C )B is pointwise equal to the function λy.f(∗A, y) (by function extensionality this also means that the functions are equal).

• A path of type BPP(∗A) = CON(∗B) between the proof that λy.f(∗A, y) is base-point preserving and the proof that it is constant at ∗B. We claim that this data suffices to define an inhabitant of

Map∗(A, Map∗(B,C)).

It is obvious that for every x : A it holds that

(λy.f(x, y), BPP(x)) : Map∗(B,C).

Then, what is left is to give a proof that the pointed function λy.f(∗_A, y) (where BPP(∗_A) proves that it is base-point preserving) is equal to the base-point in Map∗(B,C). But to this end we can use remark 4.14, where functExt(CON) proves that λy.f(∗_A, y) is constant, together with the fact that happly is left-inverse to functExt, and then we can use the last point in the above list.

Proposition 6.7. Assume we are given three pointed types A, B, C : U. Then there is an equivalence:

Map∗(A ∧ B, C) ≃ Map∗(A, Map∗(B,C))

Proof. Step 1: We firstly assume that we are given some base-point preserving map ∗ (f, pf ): Map (A ∧ B,C). Then we can compose f : A ∧ B → C with the bracket [_, _]: A × B → A ∧ B and this gives a function

f ◦ [_, _]: A × B → C.

Next we consider the type family

P : A ∨ B → U, given by the defining equation

c  P (w) :≡ ∗A∧B = [_, _] ◦ (injl ∨ injr)(w) and note that for every w : A ∨ B we have an inhabitant

Glue(w): P (w).

Additionally we have an inhabitant

pf : ∗C = f(∗A∧B) and therefore we can define a dependent function F of type

Π_{w:A∨B} ∗_C = f ∘ [_, _] ∘ (inj_l ∨c inj_r)(w)

by the defining equation

(18) F(w) :≡ p_f • ap_f(Glue(w)).

In particular, since it holds for every x : A that

(inj_l ∨c inj_r)(inl(x)) ≡ (x, ∗_B),

we have for every x : A that

F(inl(x)) : ∗_C = f([x, ∗_B])

proves that the function λy.f([x, y]) is base-point preserving and this gives an inhabitant of

A → Map∗(B,C).

Next we note that since it holds for every y : B that

(inj_l ∨c inj_r)(inr(y)) ≡ (∗_A, y),

we have for every y : B that

F(inr(y)) : ∗_C = f([∗_A, y])

proves that the constant function (∗_C)_B is pointwise equal to

λy.f([∗A, y]). Lastly we note that

apd_F(glue) : transport^P(glue, F(inl(∗_A))) = F(inr(∗_B))

and by theorem 3.26 this is equivalent to

F(inl(∗_A)) • ap_{f∘[_,_]∘(inj_l ∨c inj_r)}(glue) = F(inr(∗_B)).

But inj_l and inj_r are base-point preserving by reflexivity, and therefore, by the definition of Col in (11) and the wedge of functions from lemma 5.4, we have that

ap_{inj_l ∨c inj_r}(glue_{A∨B}) = refl_{(∗_A,∗_B)}.

This gives that

F(inl(∗_A)) = F(inr(∗_B)).

By 6.6, this shows that the function λy.f([∗A, y]) is equal to the base-point in Map∗(B,C). Putting everything together, we showed that

f ◦ [_, _]: A × B → C gives rise to an inhabitant of

Map∗(A, Map∗(B,C)).

Step 2: Conversely, assume that we are given

(g, p_g) : Map∗(A, Map∗(B,C)).

We firstly note that g : A → Map∗(B,C), which means that the projection to the first coordinate gives for every x : A a function

π_1(g(x)) : B → C

and the projection to the second coordinate proves that this function is base-point preserving

π_2(g(x)) : ∗_C = π_1(g(x))(∗_B).

Secondly we note that

p_g : g(∗_A) =_{Map∗(B,C)} ((∗_C)_B, refl_{∗_C})

and by theorem 3.16, from p_g we get a path

p_g^l : π_1(g(∗_A)) = (∗_C)_B

such that

p_g^r : transport^{h ↦ ∗_C = h(∗_B)}(p_g^l, π_2(g(∗_A))) = refl_{∗_C}.

By theorem 3.26 we have that the left side of this equality is equal to

π_2(g(∗_A)) • ap_{eval(∗_B)}(p_g^l)

and then we can use 3.11, which gives us that

π_2(g(∗_A)) • ap_{eval(∗_B)}(p_g^l) = π_2(g(∗_A)) • happly(p_g^l)(∗_B).

Putting everything together, we get an inhabitant of

π_2(g(∗_A)) • happly(p_g^l)(∗_B) = refl_{∗_C}

and concatenating with happly(p_g^l)(∗_B)^{-1} on both sides gives

π_2(g(∗_A)) = happly(p_g^l)(∗_B)^{-1}.

We summarize what we established so far in this step.

• We have for every x : A a proof π_2(g(x)) : ∗_C = π_1(g(x))(∗_B) that λy.π_1(g(x))(y) is base-point preserving.
• For every y : B a proof happly(p_g^l)(y)^{-1} : ∗_C = π_1(g(∗_A))(y), i.e. a proof that the constant function (∗_C)_B is pointwise equal to the function λy.π_1(g(∗_A))(y).
• A path of type π_2(g(∗_A)) = happly(p_g^l)(∗_B)^{-1} between the proof that λy.π_1(g(∗_A))(y) is base-point preserving and the proof that it is constant at ∗_B.

By remark 6.5 (and using the recursion principle for product types), this gives rise to a function f′ of type A ∧ B → C.

Step 3: Next we want to show that what we defined is indeed an equivalence. So let us assume that we start off with an inhabitant

(f, p_f) : Map∗(A ∧ B, C),

as in the first step. There we defined

F(w) :≡ p_f • ap_f(Glue(w))

and then established that it holds for every x : A that

F(inl(x)) : ∗_C = f([x, ∗_B])

proves that the function λy.f([x, y]) is base-point preserving and that

λy.F(inr(y)) : Π_{y:B} ∗_C = f([∗_A, y])

proves that the constant function (∗_C)_B is pointwise equal to

λy.f([∗_A, y]).

Together with apd_F(glue) this gave rise to an inhabitant of Map∗(A, Map∗(B,C)), by remark 6.6. Then, in step 2, we used remark 6.5, by which the very same data suffices to define a function f′ : A ∧ B → C by the recursion principle of the smash product, such that:

• f′(∗_{A∧B}) ≡ ∗_C
• f′([x, y]) ≡ f(x, y)
• ap_{f′}(Glue(w)) = F(w)

We want to show that this function is the same one that we started with. So let us consider the type family

P : A ∧ B → U, defined by the equation

P(s) :≡ f(s) = f′(s).

We aim to inhabit the type

Π_{s:A∧B} P(s).

Firstly we check the definition of f′ to compute

P(∗_{A∧B}) ≡ (f(∗_{A∧B}) = ∗_C).

Therefore we can set

p_f^{-1} : P(∗_{A∧B}).

Similarly we have f′([x, y]) ≡ f([x, y]) and therefore we can set

refl_{f([x,y])} : P([x, y]).

It is then left to show for every w : A ∨ B that

transport^P(Glue(w), p_f^{-1}) = refl_{f([x,y])}.

By theorem 3.26 this means to show that

ap_f(Glue(w))^{-1} • p_f^{-1} • ap_{f′}(Glue(w)) = refl_{f([x,y])}.

But we have again by definition that

ap_{f′}(Glue(w)) = F(w)

and

F(w) ≡ p_f • ap_f(Glue(w)),

so the left hand side reduces to refl_{f([x,y])} after cancelling inverses.

Step 4: In the converse direction we started off with

(g, p_g) : Map∗(A, Map∗(B,C))

and then derived data as in remark 6.5 to define a function f′ : A ∧ B → C. In particular we constructed a path δ of type π_2(g(∗_A)) = happly(p_g^l)(∗_B)^{-1} (we did not write it down explicitly, but we showed that such a path can be constructed). Next we defined a dependent function by the equation

F′(w) :≡ p_{f′} • ap_{f′}(Glue(w)),

but as noted in the remark, f′ is pointed by reflexivity and therefore we have for every w : A ∨ B that

F′(w) = ap_{f′}(Glue(w)).

In the first step we used that composition of F′ with inl and inr gave proofs that f′ ∘ [_, _] : A × B → C is base-point preserving for every x : A and constant if x ≡ ∗_A. Together with apd_{F′}(glue), remark 6.6 gave us an inhabitant of Map∗(A, Map∗(B,C)).

But by construction we have firstly that

ap_{f′}(Glue(inl(x))) = π_2(g(x)),

secondly that

ap_{f′}(Glue(inr(y))) = happly(p_g^l)(y)^{-1}

and lastly that

apd_{F′}(glue) = δ.

Therefore this inhabitant is determined by the same data as (g, p_g). □
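Proposition 6.7 is the pointed analogue of the classical tensor-hom adjunction; the following comparison is an informal aside and not part of the formal development:

```latex
% Smash/hom adjunction established by proposition 6.7, next to the
% module-theoretic currying it mimics (M, N, P modules over a ring).
\[
  \mathrm{Map}^{\ast}(A \wedge B,\, C) \;\simeq\;
  \mathrm{Map}^{\ast}\bigl(A,\, \mathrm{Map}^{\ast}(B, C)\bigr)
  \qquad\text{compare}\qquad
  \mathrm{Hom}(M \otimes N,\, P) \;\cong\; \mathrm{Hom}\bigl(M, \mathrm{Hom}(N, P)\bigr).
\]
```

This is the sense in which the smash product plays the role of the tensor product announced in the abstract.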

Lemma 6.8. Assume we are given two pointed types A, B : U together with ∗ 0 ∗ 0 base-point preserving maps (α, pα): Map (A, A ) and (β, pβ): Map (B,B ). We let • inl : A → A ∨ B • inr : B → A ∨ B

• inj_l : A → A × B
• inj_r : B → A × B
• Glue : Π_{w:A∨B} ∗_{A∧B} = [_, _] ∘ (inj_l ∨c inj_r)(w)
• inl′ : A′ → A′ ∨ B′
• inr′ : B′ → A′ ∨ B′
• inj′_l : A′ → A′ × B′
• inj′_r : B′ → A′ × B′
• Glue′ : Π_{w′:A′∨B′} ∗_{A′∧B′} = [_, _] ∘ (inj′_l ∨c inj′_r)(w′)

There is a canonical function α ∧ β : A ∧ B → A′ ∧ B′, called the smash of α and β, such that

• (α ∧ β)(∗_{A∧B}) ≡ ∗_{A′∧B′}
• (α ∧ β)([x, y]) ≡ [α(x), β(y)]
• ap_{α∧β}(Glue(inl(x))) = Glue′(inl′(α(x))) • ap_{[_,_]}(pair^=(refl_{α(x)}, p_β))
• ap_{α∧β}(Glue(inr(y))) = Glue′(inr′(β(y))) • ap_{[_,_]}(pair^=(p_α, refl_{β(y)}))

What is more, α ∧ β is base-point preserving by reflexivity.

Proof. We want to define a proof of commutativity Com for the right square in the diagram:

1 <---- A ∨ B --(inj_l ∨c inj_r)--> A × B
 |         |                          |
refl     α∨β            ?           (α,β)
 v         v                          v
1 <---- A′ ∨ B′ --(inj′_l ∨c inj′_r)--> A′ × B′

To this end we consider the type family

P : A ∨ B → U,

defined by the equation

P(w) :≡ (inj′_l ∨c inj′_r) ∘ (α ∨ β)(w) = (α, β) ∘ (inj_l ∨c inj_r)(w).

We compute:

(inj′_l ∨c inj′_r) ∘ (α ∨ β)(inl(x)) ≡ (inj′_l ∨c inj′_r)(inl′(α(x))) ≡ (α(x), ∗_{B′})
(inj′_l ∨c inj′_r) ∘ (α ∨ β)(inr(y)) ≡ (inj′_l ∨c inj′_r)(inr′(β(y))) ≡ (∗_{A′}, β(y))
(α, β) ∘ (inj_l ∨c inj_r)(inl(x)) ≡ (α, β)(x, ∗_B) ≡ (α(x), β(∗_B))
(α, β) ∘ (inj_l ∨c inj_r)(inr(y)) ≡ (α, β)(∗_A, y) ≡ (α(∗_A), β(y))

Therefore we can define

pair^=(refl_{α(x)}, p_β) : P(inl(x))

and

pair^=(p_α, refl_{β(y)}) : P(inr(y)).

Then it is left to show that

transport^P(glue, pair^=(refl_{α(x)}, p_β)) = pair^=(p_α, refl_{β(y)}).

By theorem 3.26 this means to show

(19) ap_{(inj′_l ∨c inj′_r)∘(α∨β)}(glue)^{-1} • pair^=(refl_{α(x)}, p_β) = pair^=(p_α, refl_{β(y)})

and hence we need to compute

ap_{(inj′_l ∨c inj′_r)∘(α∨β)}(glue).

(inj′_l ∨c inj′_r) ∘ (α ∨ β) ∼ (inj′_l ∘ α) ∨c (inj′_r ∘ β).

We then use that (inj′_l ∘ α) is base-point preserving by

ap_{inj′_l}(p_α)

and using the equivalence from corollary 3.18, lemma 3.21 and remark 3.5, we get

ap_{inj′_l}(p_α) = pair^=(p_α, refl_{∗_{B′}}).

Similarly we have that (inj′_r ∘ β) is base-point preserving by

ap_{inj′_r}(p_β) = pair^=(refl_{∗_{A′}}, p_β).

Checking the definition of a wedge of functions in lemma 5.4, we conclude that

ap_{(inj′_l ∨c inj′_r)∘(α∨β)}(glue)

is equal to

pair^=(p_α, refl_{∗_{B′}}) • pair^=(refl_{∗_{A′}}, p_β).

We use lemma 3.20, by which we have that

pair^=(p_α, refl_{∗_{B′}}) • pair^=(refl_{∗_{A′}}, p_β) = pair^=(p_α, p_β)

and when we plug this in (19), we get

pair^=(p_α, p_β)^{-1} • pair^=(refl_{α(x)}, p_β) = pair^=(p_α, refl_{β(y)}).

Yet again, this can be proven by lemma 3.20 and this gives us the required proof Com of commutativity and we define α ∧ β to be the pushout function of the two commutative squares:

1 <---- A ∨ B --(inj_l ∨c inj_r)--> A × B
 |         |                          |
refl     α∨β           Com          (α,β)
 v         v                          v
1 <---- A′ ∨ B′ --(inj′_l ∨c inj′_r)--> A′ × B′

Lastly we check that

(α ∧ β)(∗_{A∧B}) ≡ ∗_{A′∧B′},

hence it holds indeed that reflexivity proves that (α ∧ β) is base-point preserving. □

Remark 6.9. The following proof is incomplete. In step three and four we will want to show an equality between pointed functions

f, g : Map∗(A, Map∗(B,C)),

but we came only so far as to show that there is an

H : π1(f) = π1(g). What is missing is to show that

π_2(f) • happly(H)(∗_A) = π_2(g),

according to remark 4.14.
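The missing condition can be written out. The following display spells out the characterization of equality of pointed maps from remark 4.14 that steps three and four would still have to verify, stated for generic (f₁, p₁), (f₂, p₂) : Map∗(A, C); happly(H)(∗_A) is the path obtained by applying H at the base-point:

```latex
% Equality of pointed maps amounts to an equality of underlying maps
% together with a coherence between the base-point proofs.
\[
  \bigl((f_1, p_1) =_{\mathrm{Map}^{\ast}(A, C)} (f_2, p_2)\bigr)
  \;\simeq\;
  \sum_{H : f_1 = f_2} p_1 \cdot \mathrm{happly}(H)(\ast_A) = p_2 .
\]
```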

Proposition 6.10. Assume we are given two pointed types A, B and C. Then there is an equivalence:

(A ∨ B) ∧ C ≃ (A ∧ C) ∨ (B ∧ C)

Proof. Step 1: Defining a function of type

(A ∧ C) ∨ (B ∧ C) → (A ∨ B) ∧ C is easy, since we have at hand the pointed functions:

• inlA∨B : A → A ∨ B and • inrA∨B : B → A ∨ B and therefore we can use lemmas 6.8 and 5.4 to define

Φ :≡ (inl_{A∨B} ∧ id_C) ∨c (inr_{A∨B} ∧ id_C),

(A ∧ C) ∨ (B ∧ C) → (A ∨ B) ∧ C.

Step 2: Next, we want to give for any z : C a base-point preserving function f_z of type

A ∨ B → (A ∧ C) ∨ (B ∧ C).

To this end, we aim to define for every z : C a pointed map f^l_z in

Map∗(A, (A ∧ C) ∨ (B ∧ C))

and a pointed map f^r_z in

Map∗(B, (A ∧ C) ∨ (B ∧ C)).

We start with f^l_z, which is defined by the equation

f^l_z(x) :≡ inl_{(A∧C)∨(B∧C)}([x, z]).

We recall that we chose

∗_{(A∧C)∨(B∧C)} :≡ inl_{(A∧C)∨(B∧C)}(∗_{A∧C})

as base-point. Now, since

Glue_{A∧C}(inr_{A∨C}(z)) : ∗_{A∧C} = [∗_A, z],

we get that

ap_{inl_{(A∧C)∨(B∧C)}}(Glue_{A∧C}(inr_{A∨C}(z)))

has type

inl_{(A∧C)∨(B∧C)}(∗_{A∧C}) = inl_{(A∧C)∨(B∧C)}([∗_A, z])

and therefore proves that f^l_z ≡ λx.inl_{(A∧C)∨(B∧C)}([x, z]) is base-point preserving for every z : C. Similarly we can define a function by the equation

f^r_z(y) :≡ inr_{(A∧C)∨(B∧C)}([y, z])

and then we can use that

ap_{inr_{(A∧C)∨(B∧C)}}(Glue_{B∧C}(inr_{B∨C}(z)))

has type

inr_{(A∧C)∨(B∧C)}(∗_{B∧C}) = inr_{(A∧C)∨(B∧C)}([∗_B, z])

and that

glue_{(A∧C)∨(B∧C)} : inl_{(A∧C)∨(B∧C)}(∗_{A∧C}) = inr_{(A∧C)∨(B∧C)}(∗_{B∧C})

to define a proof that f^r_z ≡ λy.inr_{(A∧C)∨(B∧C)}([y, z]) is base-point preserving for every z : C by

glue_{(A∧C)∨(B∧C)} • ap_{inr_{(A∧C)∨(B∧C)}}(Glue_{B∧C}(inr_{B∨C}(z))).

Now we are able to define for every z : C an inhabitant of

A ∨ B → (A ∧ C) ∨ (B ∧ C)

by

f_z :≡ f^l_z ∨c f^r_z.

We note that for every z : C, by definition of f_z we have

f_z(∗_{A∨B}) ≡ f_z(inl_{A∨B}(∗_A)) ≡ f^l_z(∗_A)

and hence f_z is pointed by

ap_{inl_{(A∧C)∨(B∧C)}}(Glue_{A∧C}(inr_{A∨C}(z))).

Next we want to show that the constant function (∗_{(A∧C)∨(B∧C)})_{A∨B} is pointwise equal to f_{∗_C} and we do so by applying the induction principle of A ∨ B. So let us consider the type family

P : A ∨ B → U,

defined by the equation

P(w) :≡ ∗_{(A∧C)∨(B∧C)} = f_{∗_C}(w).

We note that for every x : A we have that

ap_{inl_{(A∧C)∨(B∧C)}}(Glue_{A∧C}(inl_{A∨C}(x)))

has type

inl_{(A∧C)∨(B∧C)}(∗_{A∧C}) = inl_{(A∧C)∨(B∧C)}([x, ∗_C]),

i.e. proves that the constant function (∗_{(A∧C)∨(B∧C)})_A is pointwise equal to f^l_{∗_C} ≡ λx.inl_{(A∧C)∨(B∧C)}([x, ∗_C]). Similarly, for any y : B, the term

ap_{inr_{(A∧C)∨(B∧C)}}(Glue_{B∧C}(inl_{B∨C}(y)))

has type

inr_{(A∧C)∨(B∧C)}(∗_{B∧C}) = inr_{(A∧C)∨(B∧C)}([y, ∗_C])

and

glue_{(A∧C)∨(B∧C)}

has type

inl_{(A∧C)∨(B∧C)}(∗_{A∧C}) = inr_{(A∧C)∨(B∧C)}(∗_{B∧C}).

Hence we can define for every y : B an inhabitant of type

inl_{(A∧C)∨(B∧C)}(∗_{A∧C}) = inr_{(A∧C)∨(B∧C)}([y, ∗_C]),

i.e. a proof that the constant function (∗_{(A∧C)∨(B∧C)})_B is pointwise equal to f^r_{∗_C} ≡ λy.inr_{(A∧C)∨(B∧C)}([y, ∗_C]), by

glue_{(A∧C)∨(B∧C)} • ap_{inr_{(A∧C)∨(B∧C)}}(Glue_{B∧C}(inl_{B∨C}(y))).

Then we want to show that

transport^P(glue_{A∨B}, ap_{inl_{(A∧C)∨(B∧C)}}(Glue_{A∧C}(inl_{A∨C}(∗_A)))) = glue_{(A∧C)∨(B∧C)} • ap_{inr_{(A∧C)∨(B∧C)}}(Glue_{B∧C}(inl_{B∨C}(∗_B))).

By theorem 3.26 this means to show that

(20) ap_{inl_{(A∧C)∨(B∧C)}}(Glue_{A∧C}(inl_{A∨C}(∗_A))) • ap_{f_{∗_C}}(glue_{A∨B}) = glue_{(A∧C)∨(B∧C)} • ap_{inr_{(A∧C)∨(B∧C)}}(Glue_{B∧C}(inl_{B∨C}(∗_B)))

But by definition of the wedge of functions (see lemma 5.4) we have that

ap_{f_{∗_C}}(glue_{A∨B}) = ap_{inl_{(A∧C)∨(B∧C)}}(Glue_{A∧C}(inr_{A∨C}(∗_C)))^{-1} • (glue_{(A∧C)∨(B∧C)} • ap_{inr_{(A∧C)∨(B∧C)}}(Glue_{B∧C}(inr_{B∨C}(∗_C))))

and by remark 6.4 that

apd_{Glue_{A∧C}}(glue_{A∨C}) : Glue_{A∧C}(inl_{A∨C}(∗_A)) = Glue_{A∧C}(inr_{A∨C}(∗_C)),

as well as

apd_{Glue_{B∧C}}(glue_{B∨C}) : Glue_{B∧C}(inl_{B∨C}(∗_B)) = Glue_{B∧C}(inr_{B∨C}(∗_C)).

Hence, after canceling inverses, we can prove (20) by

refl_{glue_{(A∧C)∨(B∧C)}} • ap_{inr_{(A∧C)∨(B∧C)}}(Glue_{B∧C}(inl_{B∨C}(∗_B))).

This shows that the constant function (∗_{(A∧C)∨(B∧C)})_{A∨B} is pointwise equal to f_{∗_C}. We now put everything together. Firstly we can define

λw.λz.f_z(w) : (A ∨ B) → C → (A ∧ C) ∨ (B ∧ C)

and then, by the preceding discussion, the constant function (∗_{(A∧C)∨(B∧C)})_C is pointwise equal to λz.f_z(∗_{A∨B}). In particular, evaluated at ∗_C, this is proven by

ap_{inl_{(A∧C)∨(B∧C)}}(Glue_{A∧C}(inr_{A∨C}(∗_C))).

Secondly we showed that λz.f_z(w) is base-point preserving for every w : A ∨ B and, evaluated at ∗_{A∨B} ≡ inl_{A∨B}(∗_A), this is proven by

ap_{inl_{(A∧C)∨(B∧C)}}(Glue_{A∧C}(inl_{A∨C}(∗_A))).

By remark 6.4 we have that

apd_{Glue_{A∧C}}(glue_{A∨C}) : Glue_{A∧C}(inl_{A∨C}(∗_A)) = Glue_{A∧C}(inr_{A∨C}(∗_C))

and hence, from lemma 3.3, we get a proof

ap²_{inl_{(A∧C)∨(B∧C)}}(apd_{Glue_{A∧C}}(glue_{A∨C}))

that

ap_{inl_{(A∧C)∨(B∧C)}}(Glue_{A∧C}(inl_{A∨C}(∗_A))) = ap_{inl_{(A∧C)∨(B∧C)}}(Glue_{A∧C}(inr_{A∨C}(∗_C))).

By remark 6.6, this data suffices to define an inhabitant of

Map∗(A ∨ B, Map∗(C, (A ∧ C) ∨ (B ∧ C)))

and by proposition 6.7, this corresponds to a function Ψ of type

Map∗((A ∨ B) ∧ C, (A ∧ C) ∨ (B ∧ C)),

such that over the points we have the following computation rules:

• Ψ(∗_{(A∨B)∧C}) :≡ inl_{(A∧C)∨(B∧C)}(∗_{A∧C}) ≡ ∗_{(A∧C)∨(B∧C)}
• Ψ([w, z]) :≡ f_z(w)

In particular, Ψ is base-point preserving by reflexivity. Additionally we have

ap_Ψ(Glue_{(A∨B)∧C}(inr_{(A∨B)∨C}(z))) = ap_{inl_{(A∧C)∨(B∧C)}}(Glue_{A∧C}(inr_{A∨C}(z))),

ap_Ψ(Glue_{(A∨B)∧C}(inl_{(A∨B)∨C}(inl_{A∨B}(x)))) = ap_{inl_{(A∧C)∨(B∧C)}}(Glue_{A∧C}(inl_{A∨C}(x)))

and lastly

ap_Ψ(Glue_{(A∨B)∧C}(inl_{(A∨B)∨C}(inr_{A∨B}(y)))) = glue_{(A∧C)∨(B∧C)} • ap_{inr_{(A∧C)∨(B∧C)}}(Glue_{B∧C}(inl_{B∨C}(y))).

Step 3: Next, we aim to show that

Ψ ∘ ((inl_{A∨B} ∧ id_C) ∨c (inr_{A∨B} ∧ id_C)) ∼ id_{(A∧C)∨(B∧C)}.

By lemma 4.27, we have:

Ψ ∘ ((inl_{A∨B} ∧ id_C) ∨c (inr_{A∨B} ∧ id_C)) ∼ (Ψ ∘ (inl_{A∨B} ∧ id_C)) ∨c (Ψ ∘ (inr_{A∨B} ∧ id_C))

By corollary 5.6, it suffices to show that the last function is pointwise equal to:

inl_{(A∧C)∨(B∧C)} ∨c inr_{(A∧C)∨(B∧C)}

We want to avoid using the induction principle for the smash product, which is why we consider the images of Ψ ∘ (inl_{A∨B} ∧ id_C) and Ψ ∘ (inr_{A∨B} ∧ id_C) under the equivalence of proposition 6.7 instead. We compute for every x : A and z : C that

Ψ ∘ (inl_{A∨B} ∧ id_C)([x, z]) ≡ Ψ([inl_{A∨B}(x), z]) ≡ f_z(inl_{A∨B}(x)) ≡ f^l_z(x) ≡ inl_{(A∧C)∨(B∧C)}([x, z]).

Next we use that Ψ and (inl_{A∨B} ∧ id_C), and therefore also Ψ ∘ (inl_{A∨B} ∧ id_C), as well as inl_{(A∧C)∨(B∧C)}, are base-point preserving by reflexivity and proceed as in equation (18), i.e. for every w : A ∨ C we consider the terms

F_l(w) :≡ ap_{Ψ∘(inl_{A∨B}∧id_C)}(Glue_{A∧C}(w))

and

G_l(w) :≡ ap_{inl_{(A∧C)∨(B∧C)}}(Glue_{A∧C}(w)).

We check the definition of the smash of functions, lemma 6.8, and then, since id_C is base-point preserving by reflexivity, we get for every x : A that

ap_{inl_{A∨B}∧id_C}(Glue_{A∧C}(inl_{A∨C}(x))) = Glue_{(A∨B)∧C}(inl_{(A∨B)∨C}(inl_{A∨B}(x)))

and therefore

ap_{Ψ∘(inl_{A∨B}∧id_C)}(Glue_{A∧C}(inl_{A∨C}(x))) = ap_Ψ(Glue_{(A∨B)∧C}(inl_{(A∨B)∨C}(inl_{A∨B}(x)))) = ap_{inl_{(A∧C)∨(B∧C)}}(Glue_{A∧C}(inl_{A∨C}(x))).

Hence we get that for every x : A it holds:

F_l(inl_{A∨C}(x)) = G_l(inl_{A∨C}(x))

Similarly, since also inl_{A∨B} is base-point preserving by reflexivity, we have for every z : C that

ap_{inl_{A∨B}∧id_C}(Glue_{A∧C}(inr_{A∨C}(z))) = Glue_{(A∨B)∧C}(inr_{(A∨B)∨C}(z))

and therefore

ap_{Ψ∘(inl_{A∨B}∧id_C)}(Glue_{A∧C}(inr_{A∨C}(z))) = ap_Ψ(Glue_{(A∨B)∧C}(inr_{(A∨B)∨C}(z))) = ap_{inl_{(A∧C)∨(B∧C)}}(Glue_{A∧C}(inr_{A∨C}(z))).

Hence we get that also for every z : C it holds that:

F_l(inr_{A∨C}(z)) = G_l(inr_{A∨C}(z))

Similarly for every y : B and z : C we compute

Ψ ∘ (inr_{A∨B} ∧ id_C)([y, z]) ≡ Ψ([inr_{A∨B}(y), z]) ≡ f_z(inr_{A∨B}(y)) ≡ f^r_z(y) ≡ inr_{(A∧C)∨(B∧C)}([y, z]).

Then we use that Ψ and (inr_{A∨B} ∧ id_C), and therefore also Ψ ∘ (inr_{A∨B} ∧ id_C), are base-point preserving by reflexivity, that inr_{(A∧C)∨(B∧C)} is base-point preserving by glue_{(A∧C)∨(B∧C)}, and proceed again as in equation (18), i.e. for every w : B ∨ C we consider the terms

F_r(w) :≡ ap_{Ψ∘(inr_{A∨B}∧id_C)}(Glue_{B∧C}(w))

and

G_r(w) :≡ glue_{(A∧C)∨(B∧C)} • ap_{inr_{(A∧C)∨(B∧C)}}(Glue_{B∧C}(w)).

We check again lemma 6.8 and then, since id_C is base-point preserving by reflexivity, we get that

ap_{inr_{A∨B}∧id_C}(Glue_{B∧C}(inl_{B∨C}(y))) = Glue_{(A∨B)∧C}(inl_{(A∨B)∨C}(inr_{A∨B}(y)))

and therefore

ap_{Ψ∘(inr_{A∨B}∧id_C)}(Glue_{B∧C}(inl_{B∨C}(y))) = ap_Ψ(Glue_{(A∨B)∧C}(inl_{(A∨B)∨C}(inr_{A∨B}(y)))) = glue_{(A∧C)∨(B∧C)} • ap_{inr_{(A∧C)∨(B∧C)}}(Glue_{B∧C}(inl_{B∨C}(y))).

Hence we get that for every y : B it holds:

F_r(inl_{B∨C}(y)) = G_r(inl_{B∨C}(y)).

Next, using again the definition of the smash of functions (see lemma 6.8) together with the fact that inr_{A∨B} is base-point preserving by glue_{A∨B}, we get for every z : C that

ap_{inr_{A∨B}∧id_C}(Glue_{B∧C}(inr_{B∨C}(z))) = Glue_{(A∨B)∧C}(inr_{(A∨B)∨C}(z)) • ap_{[_,_]}(pair^=(glue_{A∨B}, refl_z))

and therefore

(21) ap_{Ψ∘(inr_{A∨B}∧id_C)}(Glue_{B∧C}(inr_{B∨C}(z))) = ap_Ψ(Glue_{(A∨B)∧C}(inr_{(A∨B)∨C}(z)) • ap_{[_,_]}(pair^=(glue_{A∨B}, refl_z))) = ap_Ψ(Glue_{(A∨B)∧C}(inr_{(A∨B)∨C}(z))) • ap_{Ψ∘[_,_]}(pair^=(glue_{A∨B}, refl_z)).

We use lemma 3.22, by which we have that

ap_{Ψ∘[_,_]}(pair^=(glue_{A∨B}, refl_z)) = ap_{λw.Ψ([w,z])}(glue_{A∨B}).

We check the definition λw.Ψ([w, z]) ≡ fz and then

ap_{f_z}(glue_{A∨B}) = ap_{inl_{(A∧C)∨(B∧C)}}(Glue_{A∧C}(inr_{A∨C}(z)))^{-1} • (glue_{(A∧C)∨(B∧C)} • ap_{inr_{(A∧C)∨(B∧C)}}(Glue_{B∧C}(inr_{B∨C}(z))))

and by definition of Ψ we have that

ap_Ψ(Glue_{(A∨B)∧C}(inr_{(A∨B)∨C}(z))) = ap_{inl_{(A∧C)∨(B∧C)}}(Glue_{A∧C}(inr_{A∨C}(z))).

Hence the rightmost term of equation (21) is in turn equal to

glue_{(A∧C)∨(B∧C)} • ap_{inr_{(A∧C)∨(B∧C)}}(Glue_{B∧C}(inr_{B∨C}(z)))

and this shows that for every z : C it holds:

F_r(inr_{B∨C}(z)) = G_r(inr_{B∨C}(z))

Step 4: Next we want to show that also

((inl_{A∨B} ∧ id_C) ∨c (inr_{A∨B} ∧ id_C)) ∘ Ψ ∼ id_{(A∨B)∧C}.

As in step 3, we consider the image of

((inl_{A∨B} ∧ id_C) ∨c (inr_{A∨B} ∧ id_C)) ∘ Ψ

under the equivalence from proposition 6.7 and then, for every w : A ∨ B and z : C we compute:

((inl_{A∨B} ∧ id_C) ∨c (inr_{A∨B} ∧ id_C)) ∘ Ψ([w, z]) ≡ ((inl_{A∨B} ∧ id_C) ∨c (inr_{A∨B} ∧ id_C)) ∘ f_z(w) ≡ ((inl_{A∨B} ∧ id_C) ∨c (inr_{A∨B} ∧ id_C)) ∘ (f^l_z ∨c f^r_z)(w).

We quickly check the definitions

f^l_z(x) ≡ inl_{(A∧C)∨(B∧C)}([x, z])

and

f^r_z(y) ≡ inr_{(A∧C)∨(B∧C)}([y, z]).

Then, using lemma 5.7, we have that:

((inl_{A∨B} ∧ id_C) ∨c (inr_{A∨B} ∧ id_C)) ∘ (f^l ∨c f^r) ∼ (λx.λz.[inl_{A∨B}(x), z]) ∨c (λy.λz.[inr_{A∨B}(y), z])

We note that we have for every x : A that

ap_{((inl_{A∨B}∧id_C)∨c(inr_{A∨B}∧id_C))∘Ψ}(Glue_{(A∨B)∧C}(inl_{(A∨B)∨C}(inl_{A∨B}(x))))

proves that

λz.[inl_{A∨B}(x), z]

is base-point preserving and by definition of Ψ and the smash of functions

ap_{((inl_{A∨B}∧id_C)∨c(inr_{A∨B}∧id_C))∘Ψ}(Glue_{(A∨B)∧C}(inl_{(A∨B)∨C}(inl_{A∨B}(x)))) =
= ap_{(inl_{A∨B}∧id_C)∨c(inr_{A∨B}∧id_C)}(ap_{inl_{(A∧C)∨(B∧C)}}(Glue_{A∧C}(inl_{A∨C}(x)))) =
= ap_{inl_{A∨B}∧id_C}(Glue_{A∧C}(inl_{A∨C}(x))) = Glue_{(A∨B)∧C}(inl_{(A∨B)∨C}(inl_{A∨B}(x))).

Similarly for every y : B we have that

ap_{((inl_{A∨B}∧id_C)∨c(inr_{A∨B}∧id_C))∘Ψ}(Glue_{(A∨B)∧C}(inl_{(A∨B)∨C}(inr_{A∨B}(y))))

proves that

λz.[inr_{A∨B}(y), z]

is base-point preserving and by definition of Ψ and the smash of functions

ap_{((inl_{A∨B}∧id_C)∨c(inr_{A∨B}∧id_C))∘Ψ}(Glue_{(A∨B)∧C}(inl_{(A∨B)∨C}(inr_{A∨B}(y)))) =
= ap_{(inl_{A∨B}∧id_C)∨c(inr_{A∨B}∧id_C)}(glue_{(A∧C)∨(B∧C)} • ap_{inr_{(A∧C)∨(B∧C)}}(Glue_{B∧C}(inl_{B∨C}(y)))) =
= ap_{(inl_{A∨B}∧id_C)∨c(inr_{A∨B}∧id_C)}(ap_{inr_{(A∧C)∨(B∧C)}}(Glue_{B∧C}(inl_{B∨C}(y)))) =
= ap_{inr_{A∨B}∧id_C}(Glue_{B∧C}(inl_{B∨C}(y))) =
= Glue_{(A∨B)∧C}(inl_{(A∨B)∨C}(inr_{A∨B}(y))).

Next we use remark 5.2 and since inl_{A∨B} is base-point preserving by reflexivity, we get for every z : C that

Glue_{(A∨B)∧C}(inr_{(A∨B)∨C}(z))

proves that

λz.[inlA∨B(∗A), z] is equal to the base-point. Secondly, since inrA∨B is base-point preserving by glueA∨B we compute

ap_{((inl_{A∨B} ∧ id_C) ∨c (inr_{A∨B} ∧ id_C)) ∘ (f^l_z ∨c f^r_z)}(glue_{A∨B}) =
= ap_{(inl_{A∨B}∧id_C)∨c(inr_{A∨B}∧id_C)}(ap_{inl_{(A∧C)∨(B∧C)}}(Glue_{A∧C}(inr_{A∨C}(z)))^{-1} • glue_{(A∧C)∨(B∧C)} • ap_{inr_{(A∧C)∨(B∧C)}}(Glue_{B∧C}(inr_{B∨C}(z)))) =
= Glue_{(A∨B)∧C}(inl_{(A∨B)∨C}(inl_{A∨B}(x)))^{-1} • Glue_{(A∨B)∧C}(inl_{(A∨B)∨C}(inr_{A∨B}(y))) • ap_{[_,_]}(pair^=(glue_{A∨B}, refl_z)).

Therefore, the proof that

λz.[inr_{A∨B}(∗_B), z]

is equal to the base-point for every z : C is given by

Glue_{(A∨B)∧C}(inl_{(A∨B)∨C}(inr_{A∨B}(y))) • ap_{[_,_]}(pair^=(glue_{A∨B}, refl_z)).



7. Conclusion

We hope that the reader developed some intuition about basic concepts in homotopy type theory, and about how one translates ideas from homotopy theory to homotopy type theory, while reading this thesis. As far as the author is aware, no previous work has shown that pushouts are invariant under homotopy without adding any axioms. The most compelling and interesting task for future work would be to implement the results in a proof assistant.

References

[1] Steve Awodey and Michael A. Warren. Homotopy theoretic models of identity types. Mathematical Proceedings of the Cambridge Philosophical Society, 146:45–55, 2009.
[2] Errett Bishop. Foundations of constructive analysis. McGraw-Hill Book Co., New York, 1967.
[3] Guillaume Brunerie. On the homotopy groups of spheres in homotopy type theory. PhD thesis, Université Nice-Sophia-Antipolis – UFR Sciences, 2016.
[4] Guillaume Brunerie, Kuen-Bang Hou (Favonia), Evan Cavallo, Eric Finster, Jesper Cockx, Christian Sattler, Chris Jeris, Michael Shulman, et al. Homotopy type theory in Agda. https://github.com/HoTT/HoTT-Agda.
[5] Evan Cavallo. Synthetic cohomology in homotopy type theory. Master's thesis, Carnegie Mellon University, 2015.
[6] R. Graham. Synthetic homology in homotopy type theory. ArXiv e-prints, June 2017.
[7] Martin Hofmann and Thomas Streicher. The groupoid interpretation of type theory. In Giovanni Sambin and Jan M. Smith, editors, Twenty-five years of constructive type theory (Venice, 1995), volume 36 of Oxford Logic Guides, pages 83–111. Oxford University Press, New York, 1998.
[8] William A. Howard. The formulae-as-types notion of construction. In Jonathan P. Seldin and J. Roger Hindley, editors, To H.B. Curry: Essays on Combinatory Logic, Lambda Calculus and Formalism, pages 479–490. Academic Press, 1980. Original paper manuscript from 1969.
[9] Per Martin-Löf. An intuitionistic theory of types: predicative part. In H.E. Rose and J.C. Shepherdson, editors, Logic Colloquium '73, Proceedings of the Logic Colloquium, volume 80 of Studies in Logic and the Foundations of Mathematics, pages 73–118. North-Holland, 1975.
[10] The Univalent Foundations Program. Homotopy type theory: Univalent foundations of mathematics. Technical report, Institute for Advanced Study, 2013.
[11] Vladimir Voevodsky. A very short note on the homotopy λ-calculus. http://www.math.ias.edu/~vladimir/Site3/Univalent_Foundations_files/Hlambda_short_current.pdf, 2006.

Eidesstattliche Erklärung (Statutory Declaration). I hereby declare in lieu of oath that I wrote this thesis independently, that I used no sources or aids other than those stated, that I marked all passages taken verbatim or in substance from other works, and that this thesis has not, in the same or a similar version, been part of any other study or examination requirement.

Andreas Franz, ...... , August 2017

Supervisor: Dr. Iosif Petrakis, Mathematisches Institut der Ludwig-Maximilians-Universität München.