
HOMOTOPIC BUT NOT ISOTOPIC FAMILIES OF LINKS

BY

LYDIA J. HOLLEY

A Thesis Submitted to the Graduate Faculty of

WAKE FOREST UNIVERSITY GRADUATE SCHOOL OF ARTS AND SCIENCES

in Partial Fulfillment of the Requirements

for the Degree of

MASTER OF ARTS

Mathematics & Statistics

May 2020

Winston-Salem, North Carolina

Approved By:

Jason Parsley, Ph.D., Advisor

Stephen Robinson, Ph.D., Chair

Frank Moore, Ph.D.

Acknowledgments

I would first like to acknowledge the department at Wake Forest University as a whole for making this thesis possible. As a friendly, supportive, and welcoming environment, I found a nice home here for the last two years that immensely helped with my mathematical growth.

My thesis advisor, Dr. Jason Parsley, diligently sat with me each week in order to discuss my work, even when I had dug myself deep into a hole of literature on an only tangentially related topic. I am indebted to him for his guidance throughout the process of writing and research, and for always letting me explore new paths.

I am also grateful for Dr. Bakul Sathaye, who gifted me with this question arising out of her PhD work. I am grateful for all of the guidance in our meetings, whether they were over Skype or in crowded coffee shops.

Finally, I am indebted to Dr. Nancy Scherich for her consistent mentoring throughout my second year, her advice, and her amazing listening skills; I am grateful to Dr. Frank Moore for the conversations about algebra that continued to push me along in my research, as well as the encouragement and guidance; and, I am grateful to Dr. Stephen Robinson for agreeing to be on my thesis committee and for always being such a wonderful, supportive teacher.

Table of Contents

Acknowledgments ...... ii

List of Figures ...... iv

Abstract ...... viii

Chapter 1 Introduction ...... 1

Chapter 2 Background ...... 3

Chapter 3 Bakul Sathaye’s Work: Homotopic but not Isotopic to the Unlink ...... 16
3.1 Geometric Motivation ...... 16
3.2 Sathaye’s Original Question - Unlink Version ...... 19
3.3 Generalization of Sathaye’s Question ...... 22

Chapter 4 Colorability and Link Determinants ...... 28
4.1 Introduction to p-Colorability and Link Determinants ...... 28
4.2 Application to our Question - Examples and a Proof ...... 31

Chapter 5 Introduction to Khovanov Homology ...... 51
5.1 Computing the ...... 51
5.2 Computing ...... 56

Chapter 6 Khovanov Homology - Application to the Two-Component Case . . . 68

Bibliography ...... 103

Curriculum Vitae ...... 105

List of Figures

2.1 The trefoil knot ...... 3
2.2 An example of a wild knot, where we consider the possibility of placing infinitely many of the knotted pieces shown along our strand of rope. https://wildandnoncompactknots.wordpress.com/2015/06/11/how-can-one-make-a-wild-knot/ ...... 4
2.3 An example of a link ...... 5
2.4 The three Reidemeister moves ...... 6
2.5 At left, a variation on the Hopf link with one knotted component. At right, the standard Hopf link ...... 6
2.6 The Hopf link oriented two different ways, resulting in different signed crossings ...... 8
2.7 A negative crossing at left, positive crossing at right ...... 8
2.8 The two-component unlink at left, and the Whitehead link at right ...... 9
2.9 A doubling operation inside a torus ...... 12
2.10 An n-doubling operation inside a torus ...... 13
2.11 Whitehead doubling operation performed on the Hopf link ...... 13
2.12 Moving toward “unlinking” the Hopf link via link homotopy by Whitehead 4-doubling the second component ...... 14
2.13 An n-component Brunnian link ...... 14

3.1 An example of a geodesic triangle with a Euclidean comparison triangle overlaid, versus a standard Euclidean triangle ...... 18
3.2 A Whitehead doubled Brunnian link ...... 20
3.3 The change in intracomponent crossing is shown in the red circle ...... 21
3.4 A link homotopic to the link we started with, which can now be deformed into the unlink ...... 21
3.5 Detecting a maximum with respect to x ...... 23
3.6 Shifting a small neighborhood of each component such that there is a unique point of each component that assumes the maximum value m with respect to y. Retain an ordering on the components with respect to x ...... 25
3.7 Cutting open the points where this maximum is attained at each component, and adding in bands ...... 25
3.8 The clear, well-defined left to right ordering on the components of W_k^n(B). The dotted regions demonstrate where banding will occur ...... 26

3.9 Performing the exterior band sum in the 3-component case ...... 26

4.1 A labeling of the strands of the Hopf link. Note that orientation is irrelevant when evaluating p-colorability, though here the orientation is shown ...... 29
4.2 The two-component link L8a1 ...... 32

4.3 L8a1#bW2(B)...... 33

4.4 L8a1#bW4(B) ...... 34
4.5 The two-component link L6a1 ...... 34

4.6 L6a1#bW4(B)...... 35

4.7 Base case diagram, L#bW3(B)...... 37

4.8 The area of the diagram of L#bW3(B) that we will fixate on. The relevant labels are given. Our coloring matrix follows from these particular labels ...... 37

4.9 L#bW2(B)...... 40

4.10 Labeling for L#bW2(B)...... 40

4.11 L#bW1(B)...... 41

4.12 Labeling for L#bW1(B)...... 42

4.13 L#bWk(B), k > 3 ...... 44
4.14 The process of adding an arc, thereby adding a crossing in the twist region. The xN+3 represents the new arc. The labels which are not explicitly written in remain the same as before ...... 44

4.15 det(L8a1#bW1(B)) = 44...... 49

4.16 det(L8a1#bW2(B)) = 48...... 50

5.1 Two options for orienting the understrand of a particular crossing ...... 51
5.2 Performing a 0-splicing: the first diagram represents orienting the understrand in the positive direction; the second diagram represents cutting out the central area of the crossing, and imagining the direction to connect the 4 strands pairwise in a way that respects orientation; the third diagram represents the final result ...... 52
5.3 Performing a 1-splicing: the first diagram represents orienting the understrand in the negative direction; the second diagram represents cutting out the central area of the crossing, and imagining the direction to connect the 4 strands pairwise in a way that respects orientation; the third diagram represents the final result ...... 53
5.4 A complete splicing diagram of the trefoil, with indexing of the crossings labelled. Label 010 is assigned to this particular complete splicing of the trefoil, since 0-splicings were performed at crossings 1 and 3, and a 1-splicing was performed at crossing 2 ...... 54

5.5 The Hopf link with orientation such that both crossings are assigned +1 by the right hand rule, i.e., n+ = 2 and n− = 0 ...... 54
5.6 The four complete splicing diagrams for the Hopf link, with their (r, k) quantities and polynomial terms beside them, discussed below ...... 55
5.7 Sorted complete splicing diagrams for the Hopf link ...... 57
5.8 The three potential complete splicing diagrams of the trefoil with exactly one 1-splicing. From top to bottom: 100, 010, 001. This stack is then represented by V ⊕ V ⊕ V, which we can think of as summing “down” through the column of splicing diagrams, which we have denoted using arrows. Note that these arrows do not represent maps ...... 58
5.9 Sorted complete splicing diagrams for the Hopf link ...... 59
5.10 On the right, we have a 0-splicing at a particular crossing, and other splicing behavior beyond that crossing region caused there to be two distinct circles created by the 0-splicing. If we changed this to a 1-splicing, shown on the left, we instead obtain a single circle. This is an example of a merge ...... 61
5.11 Maps between complete splicing diagrams for the Hopf link ...... 62
5.12 The cochain complex for the Hopf link ...... 63
5.13 Chosen orientation on the Hopf link ...... 65
5.14 The table of Khovanov homology groups for our Hopf link ...... 67

6.1 Chosen orientation and ordering on crossings for W_2^2(B) ...... 68
6.2 A diagram of the all 0-splicing for the Whitehead link we are concerned with, and the maps going to the second stack of complete splicing diagrams - diagrams with single 1-splicings. This represents the leftmost portion of the chain complex, given by the following: 0 −→ V^{⊗2} −→ V^{⊗3} ⊕ V ⊕ V ⊕ V^{⊗3} ⊕ V^{⊗3} ⊕ V^{⊗3} −→ ... ...... 70
6.3 The two possible complete splicing diagrams in the third stack (stack of diagrams with two 1-splicings) with 4 circles; that is, with algebraic assignment of V^{⊗4} ...... 74
6.4 The two-component Brunnian link with n twists, where n ∈ 2N, given an orientation and an ordering on the crossings ...... 76
6.5 A diagram of the all 0-splicing for W_k^2(B), and the maps going to the second stack of complete splicing diagrams, which are diagrams with single 1-splicings. Note that there are k diagrams with 3 circles created by performing 1-splicings in the twist region, represented by codes 000010...00, 000001...00, ..., 000000...10, 000000...01 ...... 77
6.6 The k/2 + 1 possible complete splicing diagrams in the third stack (stack of diagrams with two 1-splicings) with four circles; that is, with algebraic assignment of V^{⊗4} ...... 79

6.7 The Khovanov homology table for L, demonstrating the necessary hypotheses ...... 90
6.8 The Khovanov homology table for L ⊔ W_k^2(B), where Dm = dm + dfm and Hm = hm + hfm ...... 93
6.9 Exterior band sum, and two potential resolutions of the twist in the band ...... 95
6.10 Exterior band sum, and two potential resolutions of the twist in the second band. Note that in this diagram, we have assumed that the link with nontrivial alternating summation of ranks is the connected sum with the untwisted band ...... 97
6.11 Exterior band sum, and two potential resolutions of the twist in the second band. Note that in this diagram, we have assumed that the link with nontrivial alternating summation of ranks is the connected sum with the twisted band ...... 98
6.12 L6a1: {-10 : -4 : Z} ...... 101
6.13 L8n1: {-12 : -4 : Z × Z} ...... 101
6.14 L9n18: {6 : 0 : Z} ...... 101
6.15 L10a59: {-18 : -7 : Z} ...... 102

Abstract

A link is a collection of disjoint, smoothly embedded circles in R3. We might think of this object as a chain of rings in physical space. We will consider an arbitrary n-component link L, and ask the question of whether or not there exists a corresponding infinite family for this L, say {Li}, satisfying the following three properties:

(i) Each Li is link homotopic to L.

(ii) Each proper sublink of Li is link isotopic to the corresponding sublink of L ∀i.

(iii) Each Li is not link isotopic to L.

These three properties reduce to the question of whether or not we can always produce infinitely many links which are almost, but not actually, equivalent to some given link.

We begin by exploring the geometric motivation behind this question, and discussing how our question came to exist. We then describe particular examples of two-component links L having families which satisfy our three desired properties. We prove that properties (i) and (ii) are satisfied by explicitly constructing the families {Li}, using a construction given in [Sat10], and then prove property (iii) is satisfied using p-colorability for these examples. We then generalize further, using Khovanov homology to show that two-component links having torsion-free, unique minimum height homology group admit such infinite families. We conjecture that such infinite families always exist, for any n-component link L.

Chapter 1: Introduction

The field of knot theory is concerned, in part, with classification of certain mathematical objects - classification of knots themselves, like the knots that we envision in our physical universe, and classification of the complements of these knots in 3-dimensional space, which make up the world of 3-manifolds. Knot invariants were developed in order to distinguish these objects from one another relative to the notion of equivalence appropriate for this mathematical world, which is called ambient isotopy. Looking at the value of any of these knot invariants is like looking at the areas of various shapes in the plane - if two shapes have different areas, then they cannot be exactly the same shape. In this case, area is the invariant of the class of objects called shapes.

One interesting way to study knots is to study the change in the value of invariants as we make changes to the knots themselves. In particular, we might define a way to add knots together in this mathematical world - this will be called the connected sum of knots - and see how the invariants change after performing such an operation. This is one way to study the malleability of our defined equivalence, and to discover interesting properties of the knots themselves. Notably, we might create many different versions of equivalence - some weaker, and some stronger - and again study invariants under these various notions of equivalence. Similarly, we can study invariants in the same way for the class of links rather than knots, i.e., a “chain” of knots, where each knot is called a component of the link.

In this thesis, our primary goal is to answer a question that demonstrates something fundamental about equivalence in the world of links. We are considering an arbitrary link L, and asking the question of whether or not we can always construct

an infinite family of links, {Li}, where each link in this family is not strictly equivalent to our link L, but satisfies two different, weaker notions of equivalence. That is to say, we are attempting to construct infinitely many objects which are almost the same, but not actually the same, as some originally chosen object. To prove that each link in our family is not actually the same as our original link L, we will use link invariants. This puts our set of invariants to the test - which will be strong enough to detect the nuanced differences between one notion of equivalence and the other?

We will begin by discussing the necessary mathematical background in terms of knots and links. We will then move to a motivation of the question addressed here - from which branch of mathematics did this question arise? Then, we produce examples of particular two-component links which satisfy our desired properties using colorability, and give a computational method for producing such families for particular n-component links using determinants. Finally, we introduce Khovanov homology, and give a proof of the existence of such infinite families for two-component links L with an assumed structure on their Khovanov homology groups.

Chapter 2: Background

In order to understand links as mathematical objects, we will first present a definition, followed by two distinct notions of equivalence. We will proceed to discuss important invariants with respect to these equivalence relations, which will help us detect links which are “the same”.

However, before beginning our discussion of links, it is imperative to first under- stand knots. Intuitively, a knot can be created by taking a piece of yarn, tangling it however you like, and then gluing both ends together. An example of a knot is pictured in Figure 2.1.

Figure 2.1: The trefoil knot.

We need one preliminary definition before rigorously defining this object.

Definition 1. An embedding is a map f : X −→ Y such that the induced map f : X −→ f(X) is a homeomorphism.

With this particular kind of mapping in mind, we can define a knot rigorously.

Definition 2. A knot is an embedding of a circle into S3. That is, a knot is given by K = h(S1), where h : S1 → S3 is an embedding.

Although this is the definition we will use for our purposes, this definition does not exclude some special cases of knots which we could not fathom in the physical world. One such example is something called a wild knot. These are not relevant to this thesis, but see the image in Figure 2.2 for one example of a wild knot.

Figure 2.2: An example of a wild knot, where we consider the possibility of placing infinitely many of the knotted pieces shown along our strand of rope. https://wildandnoncompactknots.wordpress.com/2015/06/11/how-can-one-make-a-wild-knot/

In order to exclude such knots, we could instead define a knot as a subset of S3 that is a piecewise linear simple closed curve composed of finitely many pieces. Whichever definition is more comfortable to adopt for the reader is fine for the purposes of our questions.

Now that we understand knots, we can extend to the definition of links. We will build off of the former definition of a knot, though building off of the latter would work similarly.

Definition 3. An m-component link is an ordered collection (l1, ..., lm) of embeddings li : S1 → S3, such that li(S1) ∩ lj(S1) = ∅ for all i ≠ j.

A physical example of a link we can visualize is a metal chain. Each component of the chain is a circle, embedded disjointly in space with respect to the other components.

Of course, as long as we maintain that each component is disjoint, we can visualize replacing any component of the chain above with the trefoil knot, which we saw earlier. In this way, we can put a knot in place of any component, and make more and more complicated links. Since our links can get so complicated, and we can visualize replacing any of our m components of the chain with a knot with as many crossings as we would like, how can we tell when any two links are the same?

Figure 2.3: An example of a link.

In knot theory, we are concerned with emulating equivalence of two links in physical space. That is, we should not allow an entire strand of yarn to be crunched down to a single point, or the strand of yarn to be broken and then glued back together in a different way. Continuous mappings between links themselves allow for these sorts of complications, so we need something stronger. We arrive at the notion of ambient isotopy - a mapping of all of S3, where the embedding of the link in space is carried along with everything else.

Definition 4. L1 and L2 are (ambient) isotopic if there exists an orientation-preserving piecewise linear homeomorphism h : S3 → S3 such that h(L1) = L2.

For most links, finding such an explicit map of the ambient space would be nearly impossible. Thankfully, there are three simple moves that we can perform on a link diagram which generate all such ambient isotopies. These are called the Reidemeister moves, which are listed in Figure 2.4.

Figure 2.4: The three Reidemeister moves.

Any two links that are equivalent can be related to each other by a finite sequence of these moves. This notion of equivalence captures what we might visualize in physical space for link equivalence. However, what if we are interested in some other notion of similarity between links? For example, what if we are interested in how two components of a link interact with each other, and we are no longer interested in how a single component interacts with itself? Consider the two links in Figure 2.5.

Figure 2.5: At left, a variation on the Hopf link with one knotted component. At right, the standard Hopf link.

Because the Trefoil knot is not isotopic to the , these links are not isotopic. There are many ways to prove this fact, but we will not address this now. However, the two links shown above clearly have something in common: the way in which the components interact - that is, the way in which they are linked together - is the same.

The knottedness in the second component of L2 is independent of the way the two components interact with each other. Isotopy is blind to this notion of equivalence because it is so strict. In order to detect this type of similarity, John Milnor (c. 1952) developed link homotopy.

Definition 5. L = (l1, ..., lm) and L̃ = (l̃1, ..., l̃m) are link homotopic if there exist homotopies hit between the maps li and l̃i such that hit(S1) ∩ hjt(S1) = ∅ for all i ≠ j and for each value of t.

As in the case of isotopy, explicit maps like this between links are exceedingly difficult to construct. Instead, we can think of link homotopy as a mapping between links which allows us to swap the undercrossing and overcrossing of intracomponent strands, but leaves intercomponent crossings fixed. This behavior is allowed by link

homotopy because the only restrictions are continuity of the maps hit and the condition hit(S1) ∩ hjt(S1) = ∅ for i ≠ j. The second condition gives us that distinct components may not cross each other, but does not prohibit a component from passing through itself. Intuitively, this is exactly what we were looking for - the ability to classify links by how their components interact without having to worry about behavior isolated to a single component. A simple example of two links which are link homotopic but not link isotopic has already been given in Figure 2.5. We can change just one intracomponent crossing in the link

L2 shown above in order to achieve equivalence to L1. Now that we have developed these different notions of equivalence, we are interested in invariants of these equivalences; that is, we are interested in ways we can easily detect whether or not two links are either link isotopic or link homotopic without having to go through explicit Reidemeister moves or guess at the crossing changes that we must make to transform one link into another. One particularly simple invariant to calculate is called the linking number. This numerical invariant is calculated using the notion of negative and positive crossings. This requires first placing an orientation on each component of the link.

7 Figure 2.6: The Hopf link oriented two different ways, resulting in different signed crossings.

Because of this, linking number actually distinguishes oriented two-component links rather than unoriented two-component links. Once we place an orientation on our link, there are exactly two possibilities for each crossing in the link diagram.

Definition 6. A positive crossing and a negative crossing are defined as follows:

Figure 2.7: A negative crossing at left, positive crossing at right.

An easy way to think about this is to think of using the right hand rule. If you place your right hand on top of the overstrand with your palm facing down, your fingers pointing in the direction of the orientation given to the overstrand, and the direction that your thumb points aligns with the orientation of the understrand, then the crossing is positive. If the direction that your thumb points is opposite from the orientation of the understrand, then the crossing is negative.
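To make the sign convention concrete, here is a minimal sketch - not from the thesis itself - of one way the right-hand rule can be encoded when each strand at a crossing is described by its direction vector in the diagram plane. The function name and the vector representation are our own choices for illustration.

def crossing_sign(over_dir, under_dir):
    """Sign of a crossing from planar direction vectors of its two strands.

    over_dir and under_dir are (x, y) tuples giving the orientations of the
    overstrand and understrand in the plane of the diagram.  With the
    right-hand rule described above, the crossing is positive exactly when
    the understrand direction is the overstrand direction rotated 90 degrees
    counterclockwise, i.e. when the z-component of over x under is positive.
    """
    ox, oy = over_dir
    ux, uy = under_dir
    z = ox * uy - oy * ux  # z-component of the cross product
    if z > 0:
        return +1
    if z < 0:
        return -1
    raise ValueError("strands are parallel here; not a transverse crossing")

# Example: overstrand heading northeast, understrand heading northwest.
print(crossing_sign((1, 1), (-1, 1)))   # +1 (positive crossing)
print(crossing_sign((1, 1), (1, -1)))   # -1 (negative crossing)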

Now, we may move on to the definition of our linking number invariant.

Definition 7. The linking number of a two-component link L is defined as

lk(L) = (n+ − n−)/2,

where n+ is the total number of positive crossings between the two components in the link diagram, and n− is the total number of negative crossings between the two components.

Alternatively, we may fixate on one component of a two-component link, and for each crossing where this strand passes over the other strand, record if the crossing is assigned a +1 or a −1. Then, we calculate n+ − n−, only for these crossings. This also yields the linking number.
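As a quick illustration of both computations, the following sketch (our own, not part of the original text) computes the linking number from a list of the signs of the crossings between the two components.

def linking_number(intercomponent_signs):
    """Linking number from the signs (+1/-1) of the crossings between the two
    components of a diagram; self-crossings of a single component are not
    included.  By the definition above, lk(L) = (n_plus - n_minus) / 2."""
    total = sum(intercomponent_signs)
    assert total % 2 == 0, "intercomponent crossing signs always sum to an even number"
    return total // 2

# The Hopf link oriented so that both crossings are positive has lk = 1.
print(linking_number([+1, +1]))   # 1

# The alternative computation keeps only the crossings where the first
# component passes over the second; for this Hopf link diagram that is a
# single +1 crossing, and summing those signs again gives 1.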

Let us examine the linking number of some special links. We can find an example of two links which have the same linking number but which are not isotopic rather easily. Consider the two-component unlink U and the Whitehead Link, W , shown in Figure 2.8.

Figure 2.8: The two-component unlink at left, and the Whitehead link at right.

Notice that lk(U) = lk(W ) = 0; however, the Whitehead Link is not isotopic to the unlink. (In order to argue this, we can use a link isotopy invariant called tricolorability, which we define in Chapter 4. The unlink is tricolorable while the Whitehead link is not, implying that these two links are not isotopic.) Therefore, linking number does not characterize links up to link isotopy. So, our linking number

9 invariant fails to completely characterize our links in the strictest sense of equivalence. However, how does this invariant interact with our alternative notion of equivalence - link homotopy?

We will prove shortly that linking number is, in fact, a two-component link homotopy invariant, so if two links have different linking numbers, they cannot be link homotopic. Interestingly, linking number is also a complete invariant for two-component links up to link homotopy, meaning that if two links have the same linking number, they must be link homotopic.

Theorem 2.1. Linking number characterizes two-component links up to link homotopy.

Proof. Assume that we begin with two link homotopic links, L1 and L2. Because these two links are link-homotopic, there is a sequence of intracomponent crossing changes and ambient isotopies which carry L1 to L2. These intracomponent crossing changes do not alter the linking number by our alternative definition of linking number, and link isotopy does not affect linking number - this can be verified by showing linking number does not change under the Reidemeister moves. Since this homotopy from

L1 to L2 does not change linking number, we have that link homotopic links must have the same linking number.

Conversely, assume that two two-component links, L1 and L2, have the same linking number: say lk(L1) = lk(L2) = k. Recall that via link homotopy, we are capable of eliminating all intracomponent crossings. By our alternative definition of linking number, these changes do not affect the linking number, since we only care about intercomponent crossings for the calculation of linking number. Eliminate all intracomponent crossings in the diagram for L1. Now, perform an isotopy on the link so that one component looks like a standard projection of the unknot (a circle in R3). We are able to do this since we can simultaneously adjust the position of the

other component to accommodate the component that we would like to appear as the unknot. This isotopy will not alter the linking number, as stated above, by the invariance under Reidemeister moves. This will result in a diagram that looks like an iterated Hopf link, and still has linking number k.

Repeat this process with L2. We again arrive at a diagram of an iterated Hopf link that is homotopic to L2, and has linking number k. If two iterated Hopf links have the same linking number, then they are isotopic. Then, since link homotopy is an equivalence relation, we may conclude by transitivity that L1 is link homotopic to

L2 since both are link homotopic to some iterated Hopf link with linking number k.

This characterization allows us to conclude that the Whitehead link and the two-component unlink are in fact link homotopic without having to identify which intracomponent crossing(s) we need to change in the Whitehead Link in order to arrive at the two-component unlink. Interestingly, the Whitehead Link is a classic example of something that we will define in a moment: a two-component link in which one component has been Whitehead doubled. In our case, the Whitehead link is actually the Hopf link with one component Whitehead doubled. This is interesting because this operation of Whitehead doubling, discussed in Definition 8, changes fundamental properties of the link - the Hopf link is not link homotopic to the unlink, while the Whitehead link is.

Notice that if two links are link isotopic, then they are also link homotopic, because isotopy is a stronger equivalence relation than homotopy. Isotopy only allows differentiable deformation of the ambient space, and no changes to crossings, while homotopy is a continuous mapping that allows particular crossing changes. So, any differentiable deformation of ambient space is also a continuous deformation of the link itself. Therefore, linking number is also an isotopy invariant, but it is not a complete invariant (see the above scenario to justify that linking number is not a complete invariant of isotopy). This means that if two links have different linking numbers, they cannot be isotopic, but if they have the same linking number, we cannot conclude anything.

We will now build toward understanding an operation on links which creates two-component links which are link homotopic to the unlink.

Definition 8. The Whitehead double of the k-th component Lk of some n-component link L = L1 ∪ ... ∪ Ln, where Lk must be an unknot, is denoted W_2^k(L). It is obtained by placing a torus in S3 \ (L \ Lk) in place of Lk, then performing a doubling operation on Lk within this torus, followed by removing the torus, and leaving the doubled version of Lk in place of the original. A doubling operation is best represented by picture, and is shown below.

The below picture represents the doubled version of an unknot inside of a torus.

Figure 2.9: A doubling operation inside a torus.

We can imagine adding n twists instead of just the two as shown in the picture. (We could also have an orientation on the strand, inherited from the orientation of the original component of the link.) This is called the Whitehead n-double, which can be represented by the image in Figure 2.10.

In Figure 2.11, we see that performing this operation is exactly how the Whitehead link is obtained from the Hopf link.

12 Figure 2.10: An n-doubling operation inside a torus.

Figure 2.11: Whitehead doubling operation performed on the Hopf link.

With some thought, we can see that by using this operation, we may create many links which are link homotopic but not link isotopic to the unlink of the appropriate number of components. In the Whitehead doubled version of the Hopf link above, we can visualize taking one of the crossing points in the double and changing the crossing by swapping the overstrand and the understrand. This is allowed via link homotopy since link homotopy allows intracomponent crossing changes. Then, we could pull the doubled strand around the other component in order to separate them via isotopy. In the end, we obtain the unlink. We can do this with the Whitehead n-doubled version as well as long as n is even, by changing every other crossing of the n-doubled component, then performing an isotopy (again, pulling the component through the other component to kill the linking). This is demonstrated in Figure 2.12 in the case of n = 4, but it is easy to see how the result generalizes to n even after following this visualization. In other words, any time we perform these Whitehead

n-doubles (for n ∈ 2Z) on arbitrary two-component links, we obtain links that are “almost” the unlink in the sense that a finite number of intracomponent crossing changes will destroy the linking, and leave us with an unlink. (Once we change every other crossing created by the Whitehead n-doubling, we will end up with a situation equivalent to a situation in which we cut the knot at a particular spot, and can simply unravel it around the other component of the link.)

Figure 2.12: Moving toward “unlinking” the Hopf link via link homotopy by Whitehead 4-doubling the second component.

Another special class of links which are “almost” the unlink in a different way is the class of Brunnian links:

Definition 9. A Brunnian link is a nontrivial n-component link L = L1 ∪ ... ∪ Ln having the property that, by removing any one component Li, we obtain a new link L′ which is ambient isotopic to the unlink. Equivalently, L is Brunnian if every (n − 1)-component sublink is trivial.

Figure 2.13: An n-component Brunnian link.

These links are “almost” the unlink in the sense that by removing a single component, everything unravels to become the unlink.

As we proceed, we will combine the following two facts: (i) Whitehead n-doubling a component of a given link creates a new link which is link homotopic to a link with the doubled component displaced from the rest of the diagram; and, (ii) removing a single component from a Brunnian link kills all remaining linking via isotopy and leaves us with an unlink. The combination of these facts will create a family of links with special properties that prove something powerful about the geometry of 4-manifolds.

Chapter 3: Bakul Sathaye’s Work: Homotopic but not Isotopic to the Unlink

3.1 Geometric Motivation

The question addressed in this thesis grew out of a different question related to hyperbolic geometry in Bakul Sathaye’s dissertation, “Obstructions to Riemannian Smoothings of Locally CAT(0) Manifolds” [Sat10]. In order to state this question, we must first define several concepts from hyperbolic geometry.

Definition 10. Let (X, d) be a metric space. A minimal geodesic path joining x ∈ X to y ∈ X is a map c : [0, l] → X such that c(0) = x, c(l) = y, and d(c(t), c(t′)) = |t − t′| for all t, t′ ∈ [0, l]. The image of c is called a minimal geodesic segment in X with endpoints x and y.

These minimal geodesic paths along a metric space carve out the shortest possible path from one point to another. For the remainder of this thesis, we will refer to minimal geodesics as geodesics. We can build upon this definition to obtain an idea of what the nicest possible spaces are with respect to these paths:

Definition 11. A geodesic space is a metric space (X, d) in which any two points are joined by a geodesic segment. Denote a geodesic segment connecting points p and q as [p, q].

We can now create shapes with the respect to these geodesic paths in this setting:

Definition 12. Let (X, d) be a geodesic space. A geodesic triangle △ in X consists of vertices p, q, r ∈ X and the geodesic segments joining them, [p, q], [q, r], [r, p].

Studying such a geodesic triangle and distances between points on the triangle only represents local behavior of the space X itself. In order to say something about

how triangles in this space compare to triangles in the spaces we know best, we must develop a direct method of comparison to triangles in R2.

Definition 13. A triangle △̄ = △(p̄, q̄, r̄) in R2 is called a comparison triangle for △ = △(p, q, r) if d(p, q) = d(p̄, q̄), d(q, r) = d(q̄, r̄), and d(r, p) = d(r̄, p̄). A point x̄ ∈ [p̄, q̄] is called a comparison point for x ∈ [p, q] if d(q, x) = d(q̄, x̄). We can similarly define comparison points on segments [q̄, r̄] and [r̄, p̄].

This type of comparison gives an idea of how the triangles look in X versus in R2 by describing how the edges of the triangles in the space X curve, and thus how the geodesics in our space look locally. Finally, we arrive at one of the crucial definitions we need for our question.

Definition 14. Let X be a geodesic space. X is called a CAT(0) space if, given a geodesic triangle △ ⊂ X, for all x, y ∈ △ and all comparison points x̄, ȳ ∈ △̄, we have d(x, y) ≤ d(x̄, ȳ). A metric space X is said to be locally CAT(0) if ∀x ∈ X, ∃rx > 0 such that the ball B(x, rx) endowed with the induced metric is a CAT(0) space. In that case, it is said to be non-positively curved.

Figure 3.1 demonstrates this idea more generally: triangles in a CAT(0) space can be no fatter than Euclidean triangles.

Now, we have a defined class of objects we can talk about; namely, locally CAT(0) manifolds. We would like to see how this class of objects compares to another: closed Riemannian manifolds with non-positive sectional curvature. We will present some definitions to give an idea of what this class of objects is.

Definition 15. A Riemannian metric g on a smooth manifold M is a smoothly chosen inner product gx : TxM × TxM → R on each of the tangent spaces TxM of M. In other words, ∀x ∈ M, g = gx satisfies:

Figure 3.1: An example of a geodesic triangle with a Euclidean comparison triangle overlaid, versus a standard Euclidean triangle.

1. g(u, v) = g(v, u) ∀u, v ∈ TxM

2. g(u, u) ≥ 0 ∀u ∈ TxM

3. g(u, u) = 0 ⇐⇒ u = 0

Furthermore, g is smooth in the sense that for any smooth vector fields X and Y, the function x ↦ gx(Xx, Yx) is smooth.

Definition 16. A smooth manifold M endowed with a Riemannian metric is called a Riemannian manifold.

We will omit a formal definition for sectional curvature of a Riemannian manifold, but give some intuition on the idea. Looking at the sectional curvature of a Riemannian manifold M allows us to describe how much the manifold curves. In order to evaluate the sectional curvature, we look at a point x ∈ M, then consider the tangent space TxM. The sectional curvature describes, in particular, the curvature of any

2-dimensional plane P ⊂ TxM. If, ∀x ∈ M, any sectional curvature value that we consider is non-positive, then M is a Riemannian manifold with non-positive sectional curvature.

The following definition gives a sense of how we might take a space that we know is locally CAT(0) and place a Riemannian structure on it, and when this will work.

Definition 17. Let M be a locally CAT(0) manifold. We say that M supports a Riemannian smoothing if there is a smooth Riemannian manifold (N, g) with g a Riemannian metric of non-positive sectional curvature, and a homeomorphism f : N → M.

It is known that a closed Riemannian manifold with non-positive sectional curvature is a locally CAT(0) manifold [BH99]. In dimensions 2 and 3, the converse holds, while in dimensions ≥ 5, the converse fails. That is, if M is a closed n-manifold with a locally CAT(0) metric, where n ≥ 5, the manifold M does not necessarily support a Riemannian metric with non-positive sectional curvature. In order to show this, counterexamples were produced by Davis and Januszkiewicz. The case of dimension 4 was left open a while longer until Davis, Januszkiewicz, and Lafont [DJL12] produced examples of the converse failing there as well. In order to produce such counterexamples, they demonstrated closed 4-manifolds with some knottedness in their boundary at infinity that obstructed the possibility for a Riemannian metric with non-positive sectional curvature on these manifolds.

3.2 Sathaye’s Original Question - Unlink Version

Shortly after, Sathaye [Sat10] extended these methods and produced counterexamples in dimension 4 as well. She used special links in S3 in order to construct these manifolds, and found particularly interesting examples in links having all pairwise linking numbers 0. After examining the properties of the links which generated the most intriguing examples of such counterexamples, she developed the following question:

Question A: Can we generate an infinite family of n-component links {Ln} which satisfy the following conditions?

(i) Each Ln is link homotopic to the n-component unlink;

(ii) Each Ln is not link isotopic to the n-component unlink;

(iii) Each Ln has the property that each proper sublink of k < n components is isotopic to the k-component unlink.

Sathaye was able to find that we can indeed generate infinitely many of these links. The following construction satisfies properties (i) and (iii):

Figure 3.2: A Whitehead doubled Brunnian link.

This diagram features only 2 twists in the final component, but the following arguments will apply inductively for any even number of twists. We can visualize the satisfaction of these properties as follows:

(i): First, notice that the link above is an n-component Brunnian link. The last component is a Whitehead double of the unknot. Recall that link homotopy allows an overstrand of a crossing to pass through an understrand as long as the overstrand and understrand come from the same component of the link. Notice that the rightmost crossing in the diagram has this property, and thus we can flip the overstrand and the understrand via link homotopy to obtain the diagram in Figure 3.3.

At this point, we can visualize pulling the rightmost component of the link away from the rest of the components without interfering with the other components, which is allowed when performing an ambient isotopy. (Visualize pulling the upper portion

20 Figure 3.3: The change in intracomponent crossing is shown in the red circle. of the rightmost component to the left and through its neighboring component.) Ambient isotopy is stronger than link homotopy, so the diagram below shows a link which is still link homotopic to what we started with. The drawn arrow gives the next direction in order to visualize isotopically deconstructing the entire link in order to obtain the unlink.

Figure 3.4: A link homotopic to the link we started with, which can now be deformed into the unlink.

At this point, we have effectively removed a component from a Brunnian link. By definition of a Brunnian link, or by some more visualization of link isotopy as indicated by the red arrow in Figure 3.4, we can deform the diagram above into the n-component unlink.

(iii): This property - every proper sublink of k < n components is isotopic to the k component unlink - holds by definition of a Brunnian link. If we consider

a proper sublink of our given link, we will be considering a Brunnian link with at least one component removed. Hence, the remaining components will unravel. By drawing pictures, one may convince herself that the Whitehead doubling of the final component does not disrupt this process.

Property (ii) is not nearly as immediate, but Sathaye proved property (ii) using the Jones polynomial, calculating it via skein relations. By showing that the Jones polynomial of the link above is different from the Jones polynomial for the unlink, she showed that the link above is not isotopic to the unlink. Again, by induction, this argument works for any even number of twists, and in fact, any two of these links with distinct numbers of twists in the final component do not possess the same Jones polynomial. Hence, a link like the one above with k twists is not isotopic to a link with k + j twists for any j > 0. So this construction produces an infinite family of nonisotopic links which satisfy the three desired properties.

3.3 Generalization of Sathaye’s Question

After proving this theorem, Sathaye was compelled to generalize this, and a more general version of the above question arose:

Question B: Given an arbitrary n-component link L, can we generate an infinite family of n-component links {Ln} satisfying the following conditions?

(i) Each Ln is link homotopic to L;

(ii) Each Ln is not link isotopic to L;

(iii) Each Ln has the property that each proper sublink of k < n components is isotopic to the corresponding k-component sublink of L.

Ideally, we would like to use a similar construction since the properties are so related to those in the less general version. However, in this case, we must retain some structure from our arbitrary link L. In order to accomplish this, we will use an exterior band sum. We will denote the exterior band sum of two links L1 and L2 by

L1#bL2 where b denotes a particular choice of banding (i.e., the specification of which component is connected to which other component and how the bands interact with each other). In lieu of a strict definition, we will describe the process of performing such a band sum. Before we begin, we should notice that this operation is not well-defined on the level of links; that is, if we perform an exterior band sum in two different ways, we may obtain distinct links with respect to isotopy. In order to obtain the same link, we must specify which components we will band sum and how. However, for the purposes of our question, we do not need this operation to depend solely on the links L1 and L2. We only need to produce a single family that satisfies our properties for a given link L using this operation in some way of our choosing.


Figure 3.5: Detecting a maximum with respect to x.

Consider an arbitrary n-component link L in R3, i.e., disjoint embeddings of n copies of S1 in R3. Vertically translate this link diagram into the

23 subset {(x, y, z) ∈ R3 | z ≥ 1}. Then, consider the following projection map:

π1 : L ⊂ R3 → R
(x, y, z) ↦ y

Record the input value (x, y, z) ∈ L ⊂ R3 that yields the maximum value m attained by π1, and identify which component of the link this occurs in. If m can be attained using more than one input value, perform an ambient isotopy which infinitesimally shifts one of these input values to yield a unique maximum, and call it m.

Consider also the function

π2 : L ⊂ R3 → R
(x, y, z) ↦ x

Perform an ambient isotopy on L such that each component of L has exactly one (x, y, z) value such that π1(x, y, z) = m and, for these input values, π2(x, y, z) is strictly decreasing as we move from component to component. This is demonstrated in Figure 3.6.

What all of this basically says is that we smoothly wiggle our link around in R3, respecting the rules of ambient isotopy, until we have a well-defined notion of left to right on the components. Once we have this notion, we can actually add in the bands. Figure 3.7 demonstrates the cutting of the strands of various components at each of these particular (x, y, z) values we found for each component, and shows where the bands would descend from.

Since we already have a clear left to right ordering on our Whitehead k-doubled

Brunnian link W_k^n(B), as demonstrated in Figure 3.8, we know of an easy choice for where to connect bands on this link.


Figure 3.6: Shifting a small neighborhood of each component such that there is a unique point of each component that assumes the maximum value m with respect to y. Retain an ordering on the components with respect to x.


Figure 3.7: Cutting open the points where this maximum is attained at each compo- nent, and adding in bands.

Figure 3.8: The clear, well-defined left to right ordering on the components of W_k^n(B). The dotted regions demonstrate where banding will occur.


Figure 3.9: Performing the exterior band sum in the 3-component case.

Align W_k^n(B) in the subset {(x, y, z) ∈ R3 | z ≤ 0} according to the ordering of components 1, ..., n obtained via the projection π2 for L. Then, add in the bands in the space {(x, y, z) ∈ R3 | z ∈ (0, 1)}, as shown in Figure 3.9.

Notice that if we perform this operation and care about orientation, we can consider L unoriented, and L#W_k^n(B) can inherit the orientation of W_k^n(B), as shown in Figure 3.9.

Note that another, perhaps more rigorous definition of this exterior band summing is given in Dr. Sathaye’s dissertation [Sat10].

As in the case where our arbitrary link L was the unlink, this construction satisfies our desired properties (i) and (iii) immediately. We can follow the same proof as in the case of Question A in order to see this, applying the arguments to the lower portion

of our new link, L#bW_k^n(B). Again, adding any even number of twists into the last component still satisfies these properties, creating our family. The proof is much the same as above, so we will omit it here. However, proving property (ii) is again the hurdle we encounter. Unfortunately, we can no longer calculate the Jones polynomial using skein relations, as we do not know anything about L. Complications arise when we attempt to find a formula for the polynomial of Ln in terms of L. However, we do believe that this band sum construction produces our desired families, and have yet to find a counterexample.

Conjecture 1. Given any n-component link L, the family of links {L#W_k^n(B)} fulfills the three properties given in Question B.

Although we do not prove this in full generality, we will proceed using a couple of different methods to make progress on this more general result. First, we will produce examples of two-component links satisfying all three of our properties using p-colorability, and provide a computational method for finding such examples for n-component links. Then, we will give a more general proof that families satisfying our three properties exist for a class of two-component links L using Khovanov homology.

Chapter 4: Colorability and Link Determinants

4.1 Introduction to p-Colorability and Link Determinants

Before launching into our discussion of examples and a general result using link determinants, we will give some general definitions and theorems about p-colorability and determinants, two important invariants in the field of knot theory.

Definition 18. A link L is p-colorable for p prime if every strand of the link pro- jection can be labeled using the numbers 0, 1, ..., p − 1, with at least two of the labels distinct, such that, at each crossing where x represents the label of the overstrand and y, z represent the labels of the understrands, we have 2x − y − z ≡ 0 (mod p).

Note that by the word “strand”, we mean a segment of a link diagram. Also note that, although p-colorability can be defined this way for any p ∈ N, we only focus on p prime. This is because if p were composite, we could deduce that L is p-colorable from showing L is p̃-colorable for p̃ a prime divisor of p. It is also important to note that most sources discussing p-colorability say that p is an odd prime, thereby excluding 2-colorability. This is because it can be shown that no knot is 2-colorable, and every n-component link where n ≥ 2 has a nontrivial 2-coloring.
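Checking the defining condition is mechanical once a diagram is encoded. The sketch below is our own illustration, not taken from the thesis: it represents each crossing as a triple of arc indices (overstrand, understrand, understrand) and tests a proposed labeling.

def is_valid_p_coloring(crossings, labels, p):
    """Check the p-colorability condition for a proposed labeling.

    crossings: list of triples (over, under1, under2) of arc indices,
               one triple per crossing of the diagram.
    labels:    list of labels in {0, ..., p-1}, one per arc.
    A labeling is a valid p-coloring if 2x - y - z = 0 (mod p) at every
    crossing and at least two labels are distinct.
    """
    if len(set(labels)) < 2:
        return False
    return all((2 * labels[o] - labels[u1] - labels[u2]) % p == 0
               for o, u1, u2 in crossings)

# A standard trefoil diagram has three arcs; at each crossing the overstrand
# is one arc and the two under-arcs are the other two.
trefoil = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
print(is_valid_p_coloring(trefoil, [0, 1, 2], 3))   # True: the trefoil is tricolorable
print(is_valid_p_coloring(trefoil, [0, 1, 2], 5))   # False: this labeling fails mod 5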

The importance of p-colorability comes with the fact that it is a link invariant.

Theorem 4.1. Consider two arbitrary links, L1 and L2. If L1 is p-colorable while L2 is not p-colorable for the same p, then L1 is not ambient isotopic to L2.

The proof of this uses invariance of p-colorability under Reidemeister moves. See a proof in [Pri20].

In fact, another invariant comes out of colorability as well, which we will touch on later. For now, we will work to understand how to determine that a link is p-colorable.


Figure 4.1: A labeling of the strands of the Hopf link. Note that orientation is irrelevant when evaluating p-colorability, though here the orientation is shown.

Given a value p, there is a relatively simple way to determine whether or not a given link is p-colorable. Given a link L with k strands, we label each of the strands with an indeterminate xi, where i is the value that indexes each strand, i.e., i ∈ {1, ..., k}. So, for a link with two strands, such as the Hopf link, we would label one strand x1 and the second strand x2.

From this labeling, we obtain a system of equations following the convention described in the definition of p-colorability given above. For the Hopf link, it is the following:

2x1 − x2 − x2 ≡ 0 (mod p)

2x2 − x1 − x1 ≡ 0 (mod p)

Clearly, these equations are equivalent to the statement that 2x1 ≡ 2x2 (mod p), which for any odd prime p forces x1 ≡ x2 (mod p). This means that we cannot have two distinct labels in our projection of the Hopf link, and so our Hopf link is, in fact, not p-colorable for any odd prime p. (For p = 2 the equations are satisfied automatically, reflecting the fact noted above that every link of two or more components has a nontrivial 2-coloring.)

The p-coloring that is easiest to discover by simply looking at any given link diagram is a 3-coloring. Tricolorability is perhaps the nicest version of p-colorability since we can physically color each of the strands of our link projection one of three colors, and the definition of p-colorability for p = 3 reduces to the following:

1. At least two colors must be used to color a link diagram.

29 2. At any crossing, either all three colors must appear, or only a single color appears.

We will mostly work in this context; however, we will also point out some inter- esting results in the more general setting.

We will now describe a numerical trick using the systems of equations previously described that allows us to determine for which p values a given link is p-colorable. This is nice, since then we do not have to guess at p values, or try to arbitrarily color any link diagram with three colors. Although this might be easy to see for simpler link diagrams, once we want to consider those with higher crossing numbers, guessing at tricolorings or trying to solve our system of equations for arbitrary p becomes much more complicated. In order to explain this trick, we first need to define the determinant of a link, and the coloring matrix of a link.

Definition 19. The coloring matrix of a link is a matrix representing the coloring system of equations, where each row represents a crossing and each column represents a strand.

For example, for the Hopf link, we would have a 2 × 2 matrix, since there are two crossings and two strands. (In fact, the number of strands of a link is always equal to the number of crossings of a link, so we will always obtain a square matrix.) In particular, our matrix would be as follows:

[  2  −2 ]
[ −2   2 ]
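As a hedged sketch of the bookkeeping - our own illustration, reusing the encoding of a crossing as a triple (overstrand, understrand, understrand) from the earlier sketch - the coloring matrix can be assembled by letting each crossing contribute +2 in the overstrand's column and −1 in each understrand's column.

def coloring_matrix(crossings, num_strands):
    """Coloring matrix of a diagram: one row per crossing, one column per strand.

    Each crossing (over, under1, under2) contributes +2 in the overstrand's
    column and -1 in each understrand's column; entries accumulate when one
    strand plays more than one role at the same crossing.
    """
    rows = []
    for over, u1, u2 in crossings:
        row = [0] * num_strands
        row[over] += 2
        row[u1] -= 1
        row[u2] -= 1
        rows.append(row)
    return rows

# The Hopf link: two strands, two crossings; at each crossing both under-arcs
# belong to the same strand, so the -1 entries accumulate to -2.
hopf = [(0, 1, 1), (1, 0, 0)]
print(coloring_matrix(hopf, 2))   # [[2, -2], [-2, 2]]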

From this matrix, we may calculate the link determinant, which is not the determinant of the coloring matrix. It is, instead, the absolute value of the determinant of any (n − 1) × (n − 1) minor of the coloring matrix, which is itself a link invariant,

30 and which is well defined among coloring matrices of any projection of the given link [Dix20].

Definition 20. Consider the coloring matrix of a link L. The determinant of L is given by the absolute value of the determinant of any (n − 1) × (n − 1) minor of a coloring matrix for L.

If we are able to calculate this determinant, then we can automatically determine all of the values p for which L is p-colorable.

Theorem 4.2. A link L with determinant d is p-colorable ⇐⇒ p | d.

A proof is given in [Pri20].

This theorem tells us, for example, that the Hopf link is 2-colorable since its determinant is 2, but not p-colorable for any p > 2, based on the matrix we calculated above. Thus, because p-colorability is a link invariant, we can conclude that any link which is p-colorable for p > 2 is not isotopic to the Hopf link. Notably, this is not a complete invariant: if we find another link which is 2-colorable, we may not conclude that it is isotopic to the Hopf link.
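Putting Definition 20 and Theorem 4.2 together gives a small computational recipe. The following sketch is again our own, with hypothetical function names: it deletes the first row and column of a coloring matrix, computes the determinant exactly with fraction-free (Bareiss) elimination, and lists the primes dividing it.

def int_det(matrix):
    """Exact integer determinant via fraction-free (Bareiss) elimination."""
    m = [row[:] for row in matrix]
    n, sign, prev = len(m), 1, 1
    for k in range(n - 1):
        if m[k][k] == 0:                      # swap in a nonzero pivot if one exists
            for i in range(k + 1, n):
                if m[i][k] != 0:
                    m[k], m[i] = m[i], m[k]
                    sign = -sign
                    break
            else:
                return 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                m[i][j] = (m[i][j] * m[k][k] - m[i][k] * m[k][j]) // prev  # exact division
            m[i][k] = 0
        prev = m[k][k]
    return sign * m[-1][-1]

def link_determinant(matrix):
    """Absolute value of the determinant of a first minor (row 0 and column 0 deleted)."""
    minor = [row[1:] for row in matrix[1:]]
    return abs(int_det(minor))

def colorable_primes(d, bound=1000):
    """Primes p <= bound dividing d; by Theorem 4.2 these are the p for which the link is p-colorable."""
    return [p for p in range(2, bound + 1)
            if d % p == 0 and all(p % q != 0 for q in range(2, int(p ** 0.5) + 1))]

# The Hopf link: coloring matrix [[2, -2], [-2, 2]], first minor [2], determinant 2.
d = link_determinant([[2, -2], [-2, 2]])
print(d, colorable_primes(d))    # 2 [2]  -- 2-colorable and nothing else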

4.2 Application to our Question - Examples and a Proof

We begin this section by giving a couple of examples of two-component links L from which we construct links L#bWk(B) (for a few particular k) satisfying our three desired properties. We then move to proving that an infinite family of links satisfying our conjecture actually exists for the examples that we provided, and in fact, for any n-component link L satisfying det(L#bW1(B)) ≠ det(L#bW2(B)).

Throughout this section, we will be using the naming conventions of LinkInfo, a database of links compiled at Indiana University Bloomington.

31 Figure 4.2: The two-component link L8a1.

We start with the link L8a1. This link is shown in Figure 4.2. By giving the strands a labeling and writing out the resulting system of equations, we obtain the following coloring matrix.

−1 2 −1 0 0 0 0 0   0 0 2 −1 0 −1 0 0     0 −1 0 0 0 2 −1 0     0 −1 0 2 −1 0 0 0    −1 0 −1 0 2 0 0 0    −1 2 −1 0 0 0 0 0     0 0 0 0 0 −1 −1 2  2 0 0 −1 0 0 0 −1

Recall that in order to calculate the link determinant, we must consider a first minor of this matrix, and then take the absolute value of its determinant. The choice of a first minor and determinant is given below.

 0 2 −1 0 −1 0 0  −1 0 0 0 2 −1 0    −1 0 2 −1 0 0 0    |det( 0 −1 0 2 0 0 0 )| = 40    2 −1 0 0 0 0 0     0 0 0 0 −1 −1 2  0 0 −1 0 0 0 −1

32 Figure 4.3: L8a1#bW2(B).

The prime factorization of 40 is 5 · 2^3. This means that this link is p-colorable for p = 2, 5 only.

Now, we construct the band sum of L8a1 with W2(B), denoted L8a1#bW2(B), as shown in Figure 4.3. We will omit the coloring matrix here, as it is large. The reader may verify that the absolute value of the determinant of any first minor is 48, which

has prime factorization 3 · 2^4. Therefore, L8a1#bW2(B) is 3-colorable, whereas L8a1 was not. Therefore, L8a1 and L8a1#bW2(B) are not isotopic.

We may also go through the same process for L8a1#bW4(B), where the band sum is given in Figure 4.4.

We again omit the coloring matrix, as it is large. The reader may verify that the absolute value of the determinant of any first minor is 56, which has prime factorization 7 · 2^3.

Therefore, L8a1#bW4(B) is 7-colorable, whereas L8a1 was not, and so these two links are distinct. Further, L8a1#bW4(B) and L8a1#bW2(B) are distinct, again since L8a1#bW4(B) is 7-colorable, while L8a1#bW2(B) is not.

One might imagine that we could continue this process, and the determinants would keep growing. We will show this later. Recall that we do not even need to

33 Figure 4.4: L8a1#bW4(B).

know the prime divisors p for which two links are p-colorable in order to show they are distinct - we need only know that the determinants themselves are distinct, since the determinant of a link is itself an invariant [Dix20]. However, the visualization of a coloring is a pleasant one, especially in the tricolorable case, where things can be purely visual.

Figure 4.5: The two-component link L6a1.

Now, we move to considering a different link, L6a1. This link is pictured in Figure 4.5. A particular labeling yields the coloring matrix given below.

34 Figure 4.6: L6a1#bW4(B).

 2 −1 0 −1 0 0  −1 0 2 0 0 −1   −1 0 −1 2 0 0     0 0 −1 0 −1 2     0 −1 0 −1 2 0  0 2 0 0 −1 −1

Observe that

 0 2 0 0 −1  0 −1 2 0 0    |det( 0 −1 0 −1 2 )| = 12   −1 0 −1 2 0  2 0 0 −1 −1

This says that L6a1 is p-colorable for p = 2, 3 only. Now, construct the band sum of L6a1 and W4(B) given in Figure 4.6. After a calculation as in the first example, we obtain that det(L6a1#bW4(B)) = 4. Therefore, L6a1#bW4(B) is only 2-colorable, whereas L6a1 was 3-colorable. Thus, these links are not isotopic.

We will now prove a more general result for n-component links. This result relies

35 on L#bW1(B) and L#bW2(B) rather than L itself, but it says that if we can verify via calculation that det(L#bW1(B)) 6= det(L#bW2(B)), we get our infinite family.

Theorem 4.3. Let L be a nontrivial n-component link. Then, for a fixed choice of band sum b,

det(L#bW2(B)) 6= det(L#bW1(B)) =⇒ det(L#bWk(B))

takes on infinitely many values as k changes.

Thus, {L#bWk(B)} gives us our desired infinite family of links.

Proof. Notice that it suffices to show that det(L#bWk(B)) is a function of det(L#bW2(B)), det(L#bW1(B)), and k that takes on infinitely many values as k increases. We proceed by induction. Our base case will be k = 3. Let L be a non-split n- component link. Construct a band sum as pictured in Figure 4.7, having Whitehead 3-doubled the last component of the Brunnian link below before band summing. Throughout this proof, we will refer to the portion of the diagram with the twisting as the “twist region”, and we will refer to the crossings within the twist region as having an ordering from first to last, where the first crossing in the twist region is the left-most one. We choose a labeling of the relevant strands as shown, and since L is non-split, we can label each strand differently in the link diagram - for example, there is no way that x1 and x2 are actually the same strand, by L non-split, since the component the band is connected to must not be an unknot free from the rest of the link diagram.

We will focus only on crossings surrounding the twist region, and label things so that adding twists will not impact any portion of the coloring matrix other than the portion representing what is shown in Figure 4.8.

We obtain the following coloring matrix, given our labeling on the strands. Arbi- trary entries are indicated by stars. Note that what appears as the third column here

36 L

Figure 4.7: Base case diagram, L#bW3(B).

x2

xN−6 xN−4 xN

xN+2 xN−5

xN−3 xN+1 xN−2

xN−1

x1

Figure 4.8: The area of the diagram of L#bW3(B) that we will fixate on. The relevant labels are given. Our coloring matrix follows from these particular labels.

37 is actually some finite number of columns depending on the number of crossings in L. Also note that the top “row” is actually some number of rows again depending on the number of crossings of L. Finally, note that the second horizontal line indicates the crossings that are part of the twist region. The first row situated underneath this line will always be the final twist in the Whitehead k-double, and proceeding downward from there, we reach the first crossing in the twist region, second crossing in the twist region, and so on, up to the (k − 1)-st crossing in the twist region.

   ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ 0 0         0 2 −1 0 −1 0 0 0 0 0 0     0 2 0 0 −1 0 0 0 0 0 −1 0     0 −1 0 0 0 2 −1 0 0 0 0     −1 0 0 0 0 0 −1 2 0 0 0     0 0 0 0 0 0 0 2 −1 0 −1     0 0 0 0 0 0 −1 0 0 0 2 −1  0 0 0 0 0 0 0 −1 0 −1 2

We choose a first minor by deleting the last row and the last column, and obtain the following matrix, which we will denote A. Say this is an (N + 1) × (N + 1) matrix

(this comes from a convenience in assuming that the matrix associated to L#bW1(B) is N × N). We intend to take the determinant of this matrix in the algebraic sense to obtain the link invariant we desire.

   ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ 0         0 2 −1 0 −1 0 0 0 0 0    A =  0 2 0 0 −1 0 0 0 0 0 −1     0 −1 0 0 0 2 −1 0 0 0     −1 0 0 0 0 0 −1 2 0 0     0 0 0 0 0 0 0 2 −1 0  0 0 0 0 0 0 −1 0 0 0 2

38 We will use the method of cofactor expansion along the bottom row. Without actually calculating the determinant yet, let us examine the matrices that will appear by expanding along this row. The following two matrices are the only two relevant ones in this calculation, since there are only two nonzero entries along the bottom row.

   ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗         0 2 −1 0 −1 0 0 0 0  AN+1,N+1 =    0 2 0 0 −1 0 0 0 0 0     0 −1 0 0 0 2 −1 0 0     −1 0 0 0 0 0 −1 2 0  0 0 0 0 0 0 0 0 2 −1

   ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ 0         0 2 −1 0 −1 0 0 0 0  AN+1,N−3 =    0 2 0 0 −1 0 0 0 0 −1     0 −1 0 0 0 −1 0 0 0     −1 0 0 0 0 −1 2 0 0  0 0 0 0 0 0 0 2 −1 0

It is possible to check that AN+1,N+1 is a first minor of the coloring matrix for

L#bW2(B), obtained by deleting the last row and the last column, given the labels in Figure 4.10. Thus, det(AN+1,N+1) = ±det(L#bW2(B)).

By contrast, AN+1,N−3 is not yet a first minor of any relevant coloring matrix. So, we again perform cofactor expansion along the last column of the matrix. There is only one relevant matrix in this calculation. We will call this matrix C.

39 L

Figure 4.9: L#bW2(B).

x2

xN−6 xN−4 xN

xN−5

xN−3 xN+1 xN−2

xN−1

x1

Figure 4.10: Labeling for L#bW2(B).

40 L

Figure 4.11: L#bW1(B).

   ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗        C =  0 2 −1 0 −1 0 0 0     0 −1 0 0 0 0 −1 0 0     −1 0 0 0 0 −1 2 0  0 0 0 0 0 0 0 2 −1

It is possible to verify, using the labels given in Figure 4.12, that C is a first minor of the coloring matrix for L#bW1(B). Therefore, det(C) = ±det(L#bW1(B)).

We will now follow our argument from the beginning, proceeding in cases.

Case 1. [det(AN+1,N+1) = det(L#bW2(B) and det(C) = det(L#bW1(B))]

det(L#bW3(B)) = |det(A)|

2∗(N+1) 2N−2 = (−1) ∗ 2 ∗ det(AN+1,N+1) + (−1) ∗ (−1) ∗ det(AN+1,N−3)

2N−3 = 2 ∗ det(L#bW2(B)) − (−1) (−1) ∗ det(C)

= |2 ∗ det(L#bW2(B)) − det(L#bW1(B))|

Case 2. [det(AN+1,N+1) = det(L#bW2(B) and det(C) = −det(L#bW1(B))]

41 x2

xN−6 xN−4 xN

xN−5

xN−3

xN−2

xN−1

x1

Figure 4.12: Labeling for L#bW1(B).

det(L#bW3(B)) = |det(A)|

2∗(N+1) 2N−2 = (−1) ∗ 2 ∗ det(AN+1,N+1) + (−1) ∗ (−1) ∗ det(AN+1,N−3)

2N−3 = 2 ∗ det(L#bW2(B)) − (−1) (−1) ∗ det(C)

= |2 ∗ det(L#bW2(B)) + det(L#bW1(B))|

Case 3. [det(AN+1,N+1) = −det(L#bW2(B) and det(C) = det(L#bW1(B))]

det(L#bW3(B)) = |det(A)|

2∗(N+1) 2N−2 = (−1) ∗ 2 ∗ det(AN+1,N+1) + (−1) ∗ (−1) ∗ det(AN+1,N−3)

2N−3 = −2 ∗ det(L#bW2(B)) − (−1) (−1) ∗ det(C)

= |−2 ∗ det(L#bW2(B)) − det(L#bW1(B))|

= |2 ∗ det(L#bW2(B)) + det(L#bW1(B))|

Case 4. [det(AN+1,N+1) = −det(L#bW2(B) and det(C) = −det(L#bW1(B))]

42 det(L#bW3(B)) = |det(A)|

2∗(N+1) 2N−2 = (−1) ∗ 2 ∗ det(AN+1,N+1) + (−1) ∗ (−1) ∗ det(AN+1,N−3)

2N−3 = 2 ∗ det(L#bW2(B)) − (−1) (−1) ∗ det(C)

= |−2 ∗ det(L#bW2(B)) + det(L#bW1(B))|

For our inductive step, suppose that we have k > 3. Then we are working with a (N + k − 1) × (N + k − 1) coloring matrix for our link L#bWk(B), represented in Figure 4.13. We follow the labeling conventions demonstrated in the base case.

First, we need to recognize the general form our matrices will take as we continue to add twists. Notice that each time we add a twist, we must add one row and one column to our coloring matrix, and we will affect exactly three rows of our matrix in a nontrivial manner, since the new strand made by adding a twist will hit three crossings - the last three crossings in the twist region of our link. Since these crossings are in the twist region, the changes will be represented in rows below the second horizontal line. According to how we chose to structure our matrices, the rows impacted will be the first row below the horizontal line (which always represents the last twist crossing), the last row of the matrix, and the second to last row of the matrix. The changes are precisely the following.

1. Begin with the (N + k − 2) × (N + k − 2) coloring matrix for L#bWk−1(B). Call this matrix M. We will make changes now with respect to this matrix in order

0 to represent the matrix for L#bWk(B), which we will call M .

2. Add a row of zeros to the bottom of M and a column of zeros to the right side of M.

3. In the first row below the second horizontal line denoting the beginning of the representation of the twist region, we have information about the final crossing

43 L

Figure 4.13: L#bWk(B), k > 3.

xN xN

xN+2 xN+2 xN+3

xN+1 xN+1

xN−1 xN−1

Figure 4.14: The process of adding an arc, thereby adding a crossing in the twist region. The xN+3 represents the new arc. The labels which are not explicitly written in remain the same as before.

in the twist region. When we add a twist, we add a small arc (or strand) to the end of the twist region, as in Figure 3. This is a new strand of the link diagram, which then adds a row and a column to our coloring matrix by definition.

This means that the final crossing in the twist region will go from being rep-

resented by the equation 2xN−1 − xN − xN+k−2 to instead being represented

by the equation 2xN−1 − xN − xN+k−1. Therefore, we remove the −1 in the (N + k − 2)-nd column, replace it with a 0, and add a −1 to the newly added column - the (N + k − 1)-st column.

44 4. In the second to last row of this modified M - i.e., the (N + k − 2) − nd row of our (N + k − 1) × (N + k − 1) matrix - we have the row which represented information about the second to last twist crossing in the old link (the third to last twist crossing in this new link), which now has an understrand coming in

from xN+k−1 rather than xN−1, i.e., that crossing was previously represented by

the equation 2xN+k−2 − xN−1 − xN+k−3, but now is represented by the equation

2xN+k−2 − xN+k−1 − xN+k−3. Therefore, we remove the −1 in the (N − 1)-st column, replace it with a 0, and add a −1 to the newly added column - the (N + k − 1)-st column.

5. Finally, we need to fill in the new row we added on the bottom with informa- tion about our new crossing - where our newly added strand is actually the

overstrand. For this, we obtain the equation 2xN+k−1 − xN−1 − xN+k−2.

0 6. We now have the coloring matrix for L#bWk(B), called M .

The next step is to choose a first minor of the overall matrix, and take its deter- minant. We will choose to eliminate the (N + k − 1)-st row and the (N + k − 1)-st column (i.e., the last row and the last column). These are the areas that all of the new information was added to. Therefore, we are left with a (N +k −2)×(N +k −2) matrix which is almost equal to M, our coloring matrix for L#bWk−1(B); the only difference is this new matrix has two entries of −1 removed - one that was in the last column and one that was in the (N − 1)-st column, in the last row.

In order to visualize these changes, we will focus in on the relevant portion of our coloring matrix. The only portion of the matrix that changes under the operation of adding twists is the portion below the second horizontal line, starting from the (N − 1)-st column and stopping at the last column. The rest of the coloring matrix remains the same under the operation of adding twists, as described in the list above.

45 That lower right corner block of the matrix for an arbitrary number of twists is the following. The entries that will be deleted when taking the first minor previously described are shown in red.

 2 −1 0 0 0 0 ... 0 0 −1   0 0 2 −1 0 0 ... 0 00     .. −1 2 −1 0 ... 0 00     .. 0 −1 2 −1 ... 0 00     .. 0 0 −1 2 ... 0 00     .. 0 0 0 −1 ... 0 00     .. .     .. .     .. .     .. 0 0 0 0 ... −1 00     .. 0 0 0 0 ... 2 −10     0 0 0 0 0 0 ... −1 2 −1  −10 0000 ... 0 −12

The resulting block after taking the aforementioned first minor is the following.

Call this matrix B0.

 2 −1 0 0 0 0 ... 0 0 0   0 0 2 −1 0 0 ... 0 0 0     .. −1 2 −1 0 ... 0 0 0     .. 0 −1 2 −1 ... 0 0 0     .. 0 0 −1 2 ... 0 0 0     .. 0 0 0 −1 ... 0 0 0  B0 =    .. .     .. .     .. .     .. 0 0 0 0 ... 2 −1 0     0 0 0 0 0 0 ... −1 2 −1  0 0 0 0 0 0 ... 0 −1 2

We will choose to do cofactor expansion along the bottom row of the first minor. Notice that, to the left of the bottom row shown in this submatrix, the entries are all 0. So the only nonzero entries here are the two entries shown in the bottom corner.

46 When we do the cofactor expansion, we will obtain matrices that have the following two blocks as submatrices in the lower right hand corner. We will call the matrices containing these blocks B1 and B2, where B2 is the matrix whose determinant will have to be multiplied by 2, and B1 is the matrix whose determinant will have to be multiplied by −1 in the process of cofactor expansion. We will call the blocks

0 0 themselves B1 and B2 respectively.

 2 −1 0 0 0 0 ... 0 0 0   0 0 2 −1 0 0 ... 0 0 0     .. −1 2 −1 0 ... 0 0 0     .. 0 −1 2 −1 ... 0 0 0     .. 0 0 −1 2 ... 0 0 0    0  .. 0 0 0 −1 ... 0 0 0  B =   1  .. .     .. .     .. .     .. 0 0 0 0 ... 2 −1 0     .. 0 0 0 0 ... −1 2 0  0 0 0 0 0 0 ... 0 −1 −1

 2 −1 0 0 0 0 ... 0 0   0 0 2 −1 0 0 ... 0 0     .. −1 2 −1 0 ... 0 0     .. 0 −1 2 −1 ... 0 0     .. 0 0 −1 2 ... 0 0  0   B =  .. 0 0 0 −1 ... 0 0  2    .. .     .. .     .. .     .. 0 0 0 0 ... 2 −1  0 0 0 0 0 0 ... −1 2

Notice that B2 is a first minor of the coloring matrix for L#bWk−1(B), obtained by deleting the third to last row and the third to last column of the original coloring matrix for L#bWk−1(B). B1 is not equal to the first minor of any relevant matrix, but

47 we can do cofactor expansion again over the last column of B1, noticing that there is precisely one entry of −1 in this column. In that cofactor expansion, we obtain the

0 following relevant block, B3, and call the corresponding matrix B3.

 2 −1 0 0 0 0 ... 0 0   0 0 2 −1 0 0 ... 0 0     .. −1 2 −1 0 ... 0 0     .. 0 −1 2 −1 ... 0 0     .. 0 0 −1 2 ... 0 0  0   B =  .. 0 0 0 −1 ... 0 0  3    .. .     .. .     .. .     .. 0 0 0 0 ... 2 −1  .. 0 0 0 0 ... −1 2

Finally, this matrix is a first minor of the original coloring matrix for L#bWk−2(B).

Proceeding back through this work algebraically, we obtain the following, again case by case.

Case 1.

det(L#bWk(B)) = |det(B0)|

2(N+(k−2)) N+(k−2)+N+(k−3) = (−1) ∗ 2det(B2) + (−1) ∗ (−1)det(B3)

2N+2k−5 = 2det(L#bWk−1(B)) + (−1) ∗ (−1)det(B3)

2(N+(k−3)) = 2det(L#bWk−1(B)) + (−1) ∗ (−1)det(B3)

= |2det(L#bWk−1(B)) − det(L#bWk−2(B))|

By hypothesis,

det(L#bWk−1(B)) = |(k − 2)det(L#bW2(B)) − (k − 3)det(L#bW1(B))|

48 and

det(L#bWk−2(B)) = |(k − 3)det(L#bW2(B)) − (k − 4)det(L#bW1(B))|

Plugging in, we then obtain

det(L#bWk(B)) = |(k − 1)det(L#bW2(B)) + (k − 2)det(L#bW1(B))|

The other cases proceed similarly, and we obtain a formula for the determinant of

L#bWk(B) in terms of k, det(L#bW1(B)), and det(L#bW2(B)) that eventually mono- tonically increases or decreases in k as long as det(L#bW1(B)) 6= det(L#bW2(B)).

For an example, we might consider the link L8a1 that we calculated determinants for in the beginning of this section. The relevant band sums are shown in Figures 4.15 and 4.16.

Figure 4.15: det(L8a1#bW1(B)) = 44.

Clearly, 44 6= 48, and thus we may conclude that our desired infinite family exists for this link.

49 Figure 4.16: det(L8a1#bW2(B)) = 48.

50 Chapter 5: Introduction to Khovanov Homology

Khovanov Homology arose during the movement toward categorification of math- ematics - replacing theorems derived from set theory by their category theoretic ana- logues. In his paper, “A Categorification of the Jones Polynomial” [Kho00], created a new link invariant, the Khovanov Homology groups of a link, Hi,j(L). The calculation of this new link invariant follows much the same process as one method of calculating the Jones Polynomial. For this reason, we will first in- troduce this particular construction of the Jones Polynomial, and then add structure to our construction in order to compute the Khovanov homology of a link. We will do this via example, using the Hopf Link. We will be following the structure of the explanation of Khovanov Homology given by Bar-Natan [BN02]. Here, one may also find a proof that Khovanov homology is a link invariant.

5.1 Computing the Jones Polynomial

Before we present this construction, we must develop a couple of ideas. Given an unoriented link diagram, we can splice every crossing in the diagram in one of two ways. First, focus on a single crossing of our diagram. It has an understrand and an overstrand. Give the overstrand a fixed orientation. Then imagine the two different possibilities for placing an orientation on the understrand.

Figure 5.1: Two options for orienting the understrand of a particular crossing.

51 Notice that one of these choices corresponds to a positive crossing, and one cor- responds to a negative crossing. We can check this by using the right-hand rule. Now, along with these two diagrams, we can consider the two potential splicings of this crossing. When we say splicing, we mean cutting a crossing at its central point (where the overstrand physically passes over the understrand) and gluing a choice of pairs of strands together. When considering how we might re-glue in this manner, we can either follow the orientation given to the crossing, or resist the orientation. If we follow the orientation, we call the result an 0-splicing. If we resist the orientation, we call it a 1-splicing. These are also sometimes called A-splicings and B-splicings, respectively.

Figure 5.2: Peforming a 0-splicing: the first diagram represents orienting the under- strand in the positive direction; second diagram represents cutting out the central area of the crossing, and imagining the direction to connect the 4 strands pairwise in a way that respects orientation; the third diagram represents the final result.

For each crossing in our link, we have two possibilities for its splicing. We can imagine choosing a particular splicing for every crossing in our link diagram. After choosing a splicing for each crossing in the link and performing all of them, we call the result a complete splicing of the link diagram, or a complete splicing diagram.A complete splicing diagram is always a collection of disjoint circles. For a given link of n crossings, there are 2n possible complete splicing diagrams. In order to calculate the Jones polynomial or Khovanov Homology, we will need to keep track of each of these

52 Figure 5.3: Peforming a 1-splicing: the first diagram represents orienting the under- strand in the negative direction; second diagram represents cutting out the central area of the crossing, and imagining the direction to connect the 4 strands pairwise in a way that respects orientation; the third diagram represents the final result. complete splicing diagrams. To do so, we will use codes. We construct the codes for each splicing diagram in the following way: we first enumerate each of the crossings on a link, and give the complete splicing a code of 0’s and 1’s: a 0 corresponds to an 0-splicing and a 1 corresponds to a 1-splicing. The ordering of the 0’s and 1’s will correspond to the ordering the crossings were given. So, if we have a 3-crossing knot or link, and we wish to consider its complete splicing diagram with an 0-splicing at the first and third crossings, and a 1-splicing at the second, then this splicing diagram would be assigned a code of 010. Figure 5.4 is an example of this using the trefoil knot.

The Jones Polynomial can be calculated by the following process. First, pick a link, give it an orientation, and calculate the number of positive and negative crossings, n+ and n− respectively, present in the diagram. For example, consider the Hopf link shown in Figure 5.5.

Here, we have n+ = 2 and n− = 0.

Then, store that piece of information, and forget the orientation. We are left with an unoriented link diagram. We then consider all of the complete splicing diagrams of our link; there are 2n of them for a link with n crossings. Associated with each of

53 1 2

3

Figure 5.4: A complete splicing diagram of the trefoil knot, with indexing of the crossings labelled. Label 010 is assigned to this particular complete splicing of the trefoil, since 0-splicings were performed at crossings 1 and 3, and a 1-splicing was performed at crossing 2.

Figure 5.5: The Hopf link with orientation such that both crossings are assigned +1 by the right hand rule, i.e., n+ = 2 and n− = 0.

54 (0,2) (q + q−1)2

(1,1) −q(q + q−1)

(1,1) −q(q + q−1)

(2,2) q2(q + q−1)2

Figure 5.6: The four complete splicing diagrams for the Hopf link, with their (r, k) quantities and polynomial terms beside them, discussed below. these complete splicing diagrams, we have the number of 1-splicings we performed to obtain that particular splicing diagram, called r, and the number of circles present in the complete splicing diagram, called k.

In fact, we only need this information for each complete splicing diagram of our link paired with our information about positive and negative crossings that we calculated in the beginning in order to complete our calculation of the Jones polynomial. We assign each complete splicing diagram a polynomial term, and then sum over all complete splicing diagrams. Finally, we normalize by a polynomial term in terms of the signed crossing information we collected at the beginning of this process. (Notably, the information about each complete splicing diagram alone generates the Kauffman , which does not require an orientation on the link diagram. The orientation allows us to create our invariant, the Jones polynomial, by multiplying this Kauffman bracket polynomial by some normalization term given below, which is

55 in terms of n+ and n−.)

More precisely, for each splicing diagram with information (r, k), we assign the following term, where q is an indeterminate: (−1)rqr(q + q−1)k. Then, we take the following alternating summation over all splicing diagrams, which we will denote Sα:

X (−1)rα qrα (q + q−1)kα

∀Sα

As one might imagine, many, or even most, of these terms may cancel due to the fact that the summation is alternating. Finally, to normalize this polynomial, all we have to do is multiply our summation by the normalization term:

(−1)n− qn+−2n− . (q + q−1)

The end result is the Jones polynomial.

Theorem 5.1. Given a link L, the Jones polynomial for L is given by

(−1)n− qn+−2n− X V (L) = (−1)rα qrα (q + q−1)kα . (q + q−1) ∀Sα

Note that different conventions for the normalization do exist, and this gives us a classical polynomial invariant for knots and links.

5.2 Computing Khovanov Homology

In order to alter the previous construction to obtain Khovanov homology rather than the Jones polynomial, we will add structure to our set of complete splicing diagrams before assigning information to each of them. First, we organize our stack of splicing diagrams into piles. These piles are chosen according to the number of 1-splicings

56 10

00 11

01

Figure 5.7: Sorted complete splicing diagrams for the Hopf link. we have in our complete splicing diagram. We will organize it such that the left- most pile of splicing diagrams contains just a single complete splicing diagram - the unique complete splicing diagram with all 0-splicings. The next pile will contain all of the complete splicing diagrams with a single 1-splicing (recall that here, we will ob- tain more than one complete splicing diagram, since the following complete splicings correspond to distinct complete splicing diagrams: 100...0, 010...0, ..., 00...01). This pattern of sorting according to number of 1-splicings continues. The rightmost pile will contain the unique complete splicing diagram with all 1-splicings.

Now, we will assign an algebraic structure to every diagram and every pile. This structure is reminiscent of our k value before, which counted the number of circles in each splicing diagram, although it stores information not only with respect to the single diagrams, but it retains information about our pile structure.

For each complete splicing diagram, we assign a free R-module, V , to each circle in the diagram, where R is a commutative ring with identity. In our calculations, we will always let R = Z. Note that we can choose to assign each of these free modules

V a particular basis element, either a v+ or a v−. (This will become more important

57 V ⊕

V ⊕

V

Figure 5.8: The three potential complete splicing diagrams of the trefoil with exactly one 1-splicing. From top to bottom: 100, 010, 001. This stack is then represented by V ⊕ V ⊕ V , which we can think of as summing “down” through the column of splicing diagrams, which we have denoted using arrows. Note that these arrows do not represent maps. later when we discuss the bigrading we have on our homology groups and when we define our maps between vector spaces, but for now, we will not use this fact.) Then, to represent the entire complete splicing diagram at once, we take the tensor product of each of these vector spaces. For example, to a complete splicing diagram containing 5 circles, we would assign the free module V ⊗5. Then, to represent an entire pile of splicing diagrams, we take the direct sum of all of these tensor products of graded vector spaces in a single pile. This idea is demonstrated in Figure 5.8 with the trefoil knot.

We now apply this new algebraic structure to our complete splicing diagrams for the Hopf link.

To formally talk about what we now have, we need a definition:

58 V

V ⊗ V V ⊗ V 10

V 00 11

01

V ⊗ V V ⊕ V 99K99K V ⊗ V Figure 5.9: Sorted complete splicing diagrams for the Hopf link.

Definition 21. A cochain complex is a sequence of abelian groups or modules (Di)

i connected by homomorphisms (d : Di → Di+1) called differentials such that the composition of any two consecutive maps is the zero map. Formally, we can write out the chain complex in the following way:

di−1 di di+1 ... −−→ Di −→ Di+1 −−→ ... such that di+1 ◦ di = 0 ∀i.

With the information shown in Figure 5.9, we now have the skeleton of a cochain complex; that is, a cochain complex without the differentials. Notice that the in- formation in 5.9 is exactly the information that we needed in order to compute the Jones polynomial; the only real change in that information is that we have named things differently. The number of V ’s tensored together stores the number of circles in each complete splicing diagram, and the number of 1-splicings performed on a

59 given diagram is given by its stack (or its code). However, with our organization and the algebraic structure we will endow it with, we will be capable of tracking changes between diagrams using differentials, which allows us to view the collection of all complete splicing diagrams in a much more nuanced light.

We will now endow our skeleton with the necessary maps, which track how each of the complete splicing diagrams can change by changing individual 0-splicings into 1-splicings instead, therefore tracking the relationships between our complete splicing diagrams.

The relationships between these diagrams become clear when we begin to wonder what changes will occur if we begin with a 0-splicing and change it to a 1-splicing in any given diagram. This will of course move us to the next stack, since we are performing one more 1-splicing than before, but it will also alter the number of circles in our complete splicing diagram. When we focus in on the region of a single crossing in our complete splicing diagram and change a 0-splicing to a 1-splicing, we either merge two circles together, thereby reducing the total number of circles by one, or we split one circle into two, thereby adding one circle to the total. We will rigorously define these maps in a moment. Figure 5.10 gives a visual representation of how changing a 0-splicing to a 1-splicing changes the number of circles. Of course, there are other possibilities, but all of the possibilities change the total number of circles by ±1.

Taking all such maps between stacks of splicing diagrams into consideration, we fill up the diagram that previously had dotted lines where the maps should be, where each map will give explicit descriptions of what happens to each circle in terms of their assigned basis elements, and in terms of these merges and splits. These will form our differentials. Recall that we said basis elements could be either v+ or v− for a given circle, so we obtain the following rigorous definition.

60 1-splicing 0-splicing

Figure 5.10: On the right, we have an 0-splicing at a particular crossing, and other splicing behavior beyond that crossing region caused there to be two distinct circles created by the 0-splicing. If we changed this to a 1-splicing, shown on the left, we instead obtain a single circle. This is an example of a merge.

m : V ⊗ V → V

 v+ ⊗ v+ 7→ v+  v ⊗ v 7→ v m : + − − v ⊗ v 7→ v  − + −  v− ⊗ v− 7→ 0

∆ : V → V ⊗ V

( v 7→ v ⊗ v + v ⊗ v ∆ : + + − − + v− 7→ v− ⊗ v−

In terms of notation on our large diagram, our merge and split maps will be de- noted dω on our large diagram of complete splicings, where ω is the string of characters representing the complete splicing diagram that is the map’s domain, except for at the position where we are changing from an 0-splicing to a 1-splicing, where we put a star. For example, the map from 00 complete splicing diagram of the Hopf link to 10 is represented by d∗0, since the first crossing region is the region where we are changing an 0-splicing to a 1-splicing, and the second crossing region is being left as

61 V

V ⊗ V V ⊗ V d∗0 d1∗ 10

V

00 d0∗ d∗1 11

01

V ⊗ V V ⊕ V 99K99K V ⊗ V Figure 5.11: Maps between complete splicing diagrams for the Hopf link. a 0. Further, maps only exist where they physically can. Once we change a 1-splicing to a 0-splicing, we do not change it back again. Also, we can only change a single 0-splicing at a time, so our maps always travel from one stack to the next stack to the right - never two or more stacks over at a time.

There is a sign convention that we have to keep in mind that is necessary in order to make this into a cochain complex, i.e., in order to ensure that di+1 ◦ di = 0. The sign comes from examining the ω in our notation. For each map, consider the position of the ∗ in the subscript of the d relative to the 1’s and 0’s. Count the number of 1’s preceding the ∗. We scale by a negative sign if there is an odd number of 1’s preceding the ∗. For example, the only negative map in our Hopf link example will be d1∗, since there is an odd number (1) of 1’s prior to the ∗. Note that d1∗ : V → V ⊗ V is also a split map, so for a basis element v, either v+ or v−, we can say that d1∗(v) = −∆(v).

At this point, we can actually construct the maps that are the differentials in our cochain complex. They are simply the sum of the maps lying “above” the differential

62 V

V ⊗ V V ⊗ V d∗0 d1∗ 10

V

00 d0∗ d∗1 11

01

d0 d1 V ⊗ V −→ V ⊕ V −→ V ⊗ V Figure 5.12: The cochain complex for the Hopf link. we want. That is, just as we direct sum our graded vector spaces in each stack, we also sum our maps between each pair of consecutive stacks. Finally, we can complete our picture:

We finally no longer have the skeleton of a cochain complex, but an actual cochain complex, which is the following:

0 1 0 −→ V ⊗ V −→d V ⊕ V −→d V ⊗ V −→ 0

0 1 where d = d∗0 + d0∗ and d = −d1∗ + d∗1

Using this construction, we always get a cochain complex for any link [Kho00]. Now that we have a cochain complex, we may consider the groups of this complex, which are the Khovanov Homology groups. For this, we need to first look at the classical definition of cohomology.

ker(dn) Definition 22. The n-th cohomology group, Hn(L), is given by im(dn−1).

63 Khovanov Homology groups in particular are even more special than what this definition portrays, since we are able to obtain a bigrading on our cohomology groups given a particular assignment of basis elements. So our groups actually will look like Hi,j(L), where i is the degree of the cohomology group, and j is the height of the cohomology group. We will rigorously define what these quantities mean momentarily. Let us examine how we will get from the definition of cohomology given above to these bigraded groups.

Recall our example of the calculation of the Jones polynomial using complete splicing diagrams. At the end, we needed to adjust our summation with a normalizing term in terms of the variable q and the orientation placed on the original link. We must go through a similar renormalization process here, except we normalize the degree and height of the cohomology groups.

The degree and height of a given cohomology group is calculated according to a particular stack of splicing diagrams, which is determined by the number of 1-splicings performed, as discussed previously. We denote the number of 1-splicings as b from now on. Then the degree of the cohomology group, i, is given by b − n−.

Then, given an assignment of basis elements to each of the circles in all complete splicing diagrams in a fixed stack, we may assign numerical values to each diagram.

In a given diagram, each v+ is assigned a value of +1 and each v− is assigned a value of −1. Let p be the sum of each of these values. That is, if we have a complete splicing diagram with 5 circles, where 3 are assigned a v+ and 2 are assigned a v−, then p = 1 + 1 + 1 − 1 − 1 = 1. Then, the height, j, of a cohomology group is given by p + i + n+ − n−.

For an alternative discussion of this bigrading computational examples, including the Hopf link example that we are using, see Williams [Wil20]. Using this bigrading, we can now explicitly calcuate cohomology groups of a particular link. Let us try and

64 calculate a specific cohomology group for the Hopf link.

We begin by choosing an orientation on the Hopf link so that we can calculate Hi,j(L), where L is the Hopf link, for a particular degree i and height j, which are determined in part by the signed crossings on the link. We will choose the orientation shown in Figure 5.13. In this figure, we have that n+ = 2 and n− = 0. Then, i = b and j = p + b + 2.

Figure 5.13: Chosen orientation on the Hopf link.

We will calculate a cohomology group relative to the second “stack”, i.e., the stack with a single 1-splicing. Then i = 1, and j = p+1+2 = p+3. There are two different potential choices for degree i = 1 cohomology groups. We can either choose p = 1, by assigning v+ to each circle in the second stack of Figure 5.12 (the one 1-splicing stack), or we can choose p = −1 by assigning a v− to each of these circles. If we choose p = 1, then j = 4, and if we choose p = −1, then j = 2.

Notice that if we had a complete splicing diagram in this stack with, for example,

5 circles, then to obtain p = 1, we would assign 3 circles v+ and 2 circles v−. With 5 circles available, there are 10 potential ways to assign basis elements in this way. More circles create a bit more complication in this assignment process.

Let us calculate H1,2(L). This bigrading tells us that we are looking at the second stack in Figure 5.12, and we are assigning each circle in this stack a v− as a basis element (assigning p = −1 to each). Now we actually calculate the image of the map leaving this stack modulo the kernel of the map coming into this stack, restricted to

65 our particular choice of basis. Note that, in this case, there is a unique choice of basis given our height, so this calculation will not require huge amounts of information. Unfortunately, these calculations get exceedingly detailed, producing large matrices for links having more complicated splicing diagrams.

We will proceed by using the computational tricks in Hunt’s bachelor’s thesis [Hun14]. Notice that when we calculate the kernel of the relevant map modulo the image of the previous map, we only need to worry about the relevant height value.

If we are fixing j = 2, i.e.,, fixing v− as our basis elements, then the only way to obtain v− in the second stack by merging the two circles in the first stack is to assign v+ ⊗ v− or v− ⊗ v+ to the all 0-splicing diagram. This is because both of these choices of basis map to v− under the merge function m. These choices of basis correspond to a p value of 0. Therefore, if our outgoing map of concern is d1,2, our incoming map of concern must be d0,3, since j = p + 3 = 3 when p = 0.

ker(d1,2) H1,2(L) = im(d0,3)

where

1,2     d = −∆(v−) ∆(v−) = −(v− ⊗ v−) v− ⊗ v−

We can thus view d1,2 as −1 1.

m(v ⊗ v ) m(v ⊗ v ) v v  d0,3 = + − − + = − − m(v+ ⊗ v−) m(v− ⊗ v+) v− v−

1 1 We can thus view d0,3 as . 1 1

Therefore, we have that

66 Hi,j 0 1 2 6 Z 5 4 Z 3 2 Z 1 0 Z

Figure 5.14: The table of Khovanov homology groups for our Hopf link.

1 ker(d1,2) = span 1

1 im(d0,3) = span 1

Thus, ker(d1,2) = 0 im(d0,3) .

We will omit the calculation of the rest of these Khovanov homology groups, but their calculation works similarly. Notably, one way to visually represent these bigraded cohomology groups is with a table. This is commonly seen in papers in- troducing Khovanov homology and alongside calculation of the Khovanov homology groups of particular knots and links. Such a table is shown for the Hopf link, in Fig- ure 5.14. Here, the blank boxes represent the group being trivial. The values along the top of the table represent degrees, and the values along the left side of the table represent heights.

In the next chapter, we will use these computational tools in order to prove some- thing about our question in the two-component case.

67 Chapter 6: Khovanov Homology - Application to the Two-Component Case

Before proving our main result, we must first study our two-component Brunnian

2 link, Wk (B). Note that the two-component Brunnian link B in this case is the Hopf

2 link, and W2 (B) is the Whitehead link. In particular, we will study its minimum height Khovanov homology groups, and determine that for this minimum height, there is only a single nontrivial homology group.

Theorem 6.1. Consider the two-component Brunnian link with the second component

2 −1−k,−4−2k 2 Whitehead k-doubled, Wk (B). Then, H (Wk (B)) = Z. Further, this is the minimum degree, minimum height nontrivial homology group for this link.

2 Proof. We begin with a base case of sorts: the link W2 (B). We will not actually prove this by induction, but it is helpful to see this case before generalizing. First,

2 we place an orientation on W2 (B), so that we can determine n+ and n−. We choose the orientation in Figure 6.1. We also choose an ordering on the crossings shown in the figure.

1 2 5 6 3

4

2 Figure 6.1: Chosen orientation and ordering on crossings for W2 (B).

Based on the orientation given in this diagram, n+ = 2 and n− = 4. Then given our bigrading convention, this link has degree i = b−4, and height j = p+b−4+2−4 = p+b−6. Recall that b is the number of 1-splicings, and p is related to basis assignment.

68 −4,j 2 We will show that if b = 0, then H (W2 (B)) = 0 for all possible j values, and thus there is no nontrivial homology in this minimal degree. Further, we will show that the first, i.e., minimum degree/height, nontrivial homology group occurs at bidegree

−3,−8 2 −3,j 2 (-3,-8), i.e., H (W2 (B)) 6= 0, and H (W2 (B)) = 0 ∀j < −8. We start by noticing that b = 0 corresponds to the first stack of complete splicing

2 diagrams, containing just a single splicing diagram that is the all-0 splicing of W2 (B). The degree here is i = −4. This diagram has two circles and is given on the left of Figure 6.2. Let us determine the possible height values corresponding to this stack,

−4,j 2 i.e., the j values that might yield nontrivial H (W2 (B)). Because the all-0 splicing yields two circles, we have 4 choices for assignment of basis elements, with their corresponding p values:

v+ ⊗ v+ =⇒ p = 2

v+ ⊗ v− =⇒ p = 0

v− ⊗ v+ =⇒ p = 0

v− ⊗ v− =⇒ p = −2

This demonstrates that there are 3 possible height values when b = 0 and i = −4, since there are 3 distinct p values. Recall j = p + b − 6, so when b = 0, we have j = p − 6. Thus, we have the following distinct heights to evaluate when i = 4:

p = 2 =⇒ j = −4

p = 0 =⇒ j = −6

p = −2 =⇒ j = −8

Since this is all relative to the first stack, we are looking at the portion of our chain complex given by

69 V ⊗3

100000

d∗00000 V 010000

d0∗0000 ⊗2 V V d00∗000 000000 001000

⊗3 d000∗00 V

000100

d0000∗0 V ⊗3

000010

d00000∗ V ⊗3

000001

Figure 6.2: A diagram of the all 0-splicing for the Whitehead link we are concerned with, and the maps going to the second stack of complete splicing diagrams - diagrams with single 1-splicings. This represents the leftmost portion of the chain complex, given by the following: 0 −→ V ⊗2 −→ V ⊗3 ⊕ V ⊕ V ⊕ V ⊗3 ⊕ V ⊗3 ⊕ V ⊗3 −→ ...

70 −4 0 −→ V ⊗2 −−→d V ⊗3 ⊕ V ⊕ V ⊕ V ⊗3 ⊕ V ⊗3 ⊕ V ⊗3

The image of the first map, 0 −→ V ⊗2, is always 0. Therefore, to calculate our homology groups, the kernel of the second map modulo the image of the first, we need

−4 only calculate the kernel of the map V ⊗2 −−→d V ⊗3 ⊕ V ⊕ V ⊕ V ⊗3 ⊕ V ⊗3 ⊕ V ⊗3.

Observe:

−4,−4 2 −4,−4 H (W2 (B)) = ker(d )

−4,−6 2 −4,−6 H (W2 (B)) = ker(d )

−4,−8 2 −4,−8 H (W2 (B)) = ker(d )

where

    ∆(v+) v+ ⊗ v− + v− ⊗ v+ m(v+ ⊗ v+)  v+      −4,−4 m(v+ ⊗ v+)  v+  d =   =    ∆(v+)  v+ ⊗ v− + v− ⊗ v+      ∆(v+)  v+ ⊗ v− + v− ⊗ v+ ∆(v+) v+ ⊗ v− + v− ⊗ v+

    ∆(v+) ∆(v−) v+ ⊗ v− + v− ⊗ v+ v− ⊗ v− m(v+ ⊗ v−) m(v− ⊗ v+)  v− v−      −4,−6 m(v+ ⊗ v−) m(v− ⊗ v+)  v− v−  d =   =    ∆(v+) ∆(v−)  v+ ⊗ v− + v− ⊗ v+ v− ⊗ v−      ∆(v+) ∆(v−)  v+ ⊗ v− + v− ⊗ v+ v− ⊗ v− ∆(v+) ∆(v−) v+ ⊗ v− + v− ⊗ v+ v− ⊗ v−

    ∆(v−) v− ⊗ v− m(v− ⊗ v−)  0      −4,−8 m(v− ⊗ v−)  0  d =   =    ∆(v−)  v− ⊗ v−      ∆(v−)  v− ⊗ v− ∆(v−) v− ⊗ v−

71 Note that d−4,−4 and d−4,−8 clearly have trivial kernel, since they are just column vectors. Further, d−4,−6 has two columns which are linearly independent, and thus its kernel is 0 as well.

−4,j 2 Therefore, H (W2 (B)) = 0 ∀j.

−3,j 2 Let us now move on to proving that H (W2 (B)) = 0 ∀j < −8. When j = −8,

−3,−8 2 H (W2 (B)) = Z. We start by determining the implications of being in degree i = −3. When i = −3, b = 1. This means that we are looking at the homology group that corresponds to the second stack of complete splicing diagrams, i.e., the stack of splicing diagrams created by performing a single 1-splicing at each crossing region. Figure 6.2 demonstrates all of this as well.

Let us also examine our height value. j = p+b−6 = p+1−6 = p−5. Recall that p is the number associated to the basis elements v+ and v− assigned to a complete splicing diagram. Notice that if j = p − 5 = −8 =⇒ p = −3. Observe that, in Figure 6.2, four of the six complete splicing diagrams with a single 1-splicing have

3 circles; each of these four can be assigned basis element v− ⊗ v− ⊗ v−, having a p value of −3. This is the only way to obtain p = −3 from a complete splicing diagram with 3 circles. Notice that for the other two complete splicing diagrams, there is no way to obtain p = −3 since the minimum p value we can obtain from 1 circle occurs when we assign it a basis element v−, which gets a p value of −1.

−3,j 2 Further, notice that H (W2 (B)) = 0 for j < −8 since if j < −8, we need p < −3, which is impossible since our diagrams in the second stack have no more

−3,−8 2 than 3 circles. So, we can move to focusing on H (W2 (B)).

ker(d−3,−8) H−3,−8(W 2(B)) = 2 im(d−4,−8)

Modifying our map to include only information about the relevant height, we

72 obtain

  v− ⊗ v− −4,−8 v− ⊗ v− d =   v− ⊗ v− v− ⊗ v−

This can be viewed as 1 1   1 1

Thus,

1   −4,−8 1 im(d ) = span   1  1 

Now, in order to calculate d−3,−8, we need to know information about the maps going from the diagrams with 3 circles in the second stack to the diagrams in the third

6 stack, which is not pictured in Figure 6.2. This is because there are 2 diagrams in the third stack. Luckily, there is a trick to calculating our desired matrix given our choice of basis. Recall that since we are in height j = −8, we have assigned v− ⊗ v− ⊗ v− to our complete splicing diagrams with 3 circles, and the ones with only 1 circle are irrelevant. Any merge map coming out of a diagram having a basis composed of all v− will yield 0. This is because m(v− ⊗ v−) = 0. If we look at every one of the complete splicing diagrams in the second stack, and picture changing the 5 remaining 0-splicings in each to a 1-splicing instead, most of these options are merges. Therefore, in our matrix for d−3,−8, we obtain mostly 0 entries. The only time this does not happen is when we have a split instead of a merge. This occurs in exactly

73 two ways, as shown in Figure 6.3, and is given by the maps from codes 100000 and 000100 to 100100 and 000011. These codes resemble the diagrams with three circles splitting into four circles.

100100

000011

Figure 6.3: The two possible complete splicing diagrams in the third stack (stack of diagrams with two 1-splicings) with 4 circles; that is, with algebraic assignment of V ⊗4.

To write the differential d−3,−8 in matrix form, we must choose what each column and row of the matrix represents, then fill in the matrix to represent how splicing diagrams transition to each other. Let us say that the one 1-splicing diagrams in Figure 6.2 represent the columns of this matrix in the order that they appear in the figure. Remember that we only care about diagrams with 3 circles due to our height constraint, so those are the the only diagrams we will represent by columns in our matrix. Therefore, we do not represent codes 010000 and 001000 as columns in our matrix. Then, let us say that the first and second rows of our matrix represent the diagrams in Figure 6.3, again in the order in which they appear. Our matrix is therefore given by

74 −1 1 0 0

−3,−8  0 0 −1 1 d =   .   0

Note that, for example, in the first column we have a −1 in the first entry since this entry represents the transition from the splicing diagram 100000 to the splicing diagram 100100. Because there is exactly one 1 prior to the 1 we added to the code to represent this transition, we scale by a negative sign.

Therefore,

1 0   1 0 ker(d−3,−8) = span   ,   0 1  0 1 

Notice that

     1 0          1 0 span   ,       0 1       ker(d−3,−8)  0 1  H−3,−8(W 2(B)) = = 2 im(d−4,−8)    1      1 span   1      1 

.

Clearly, this is not isomorphic to Zn for any n ∈ N, as this is a cyclic group of infinite order; that is, this result is a torsion-free cohomology group. Therefore,

−3,−8 2 ∼ H (W2 (B)) = Z, and this is the minimum height nontrivial homology group in this degree.

75 1

2 n + 4 3 5 6 7 n + 3

4

Figure 6.4: The two-component Brunnian link with n twists, where n ∈ 2N, given an orientation and an ordering on the crossings.

2 −1−k,−4−2k 2 It remains to show the corresponding result for Wk (B), namely that H (Wk (B)) =

−1−k,j 2 −2−k,j 2 Z, while H (Wk (B)) = 0 ∀j < −4 − 2k, and H (Wk (B)) = 0 ∀j and ∀k ∈ 2N. We will use exactly the same strategy as in the 2-twist case. Again, we begin with an orientation and an ordering on the crossings, demonstrated in Figure 6.4.

As shown, we have n+ = 2 as before, and n− = k + 2. Therefore, i = b − k − 2, and j = p + b − 2k − 2. Let us first show, as in the 2-twist case, that when b = 0, homology groups are all 0.

We have a slightly modified diagram to help us: Figure 6.5.

If b = 0, then i = −k − 2. We have the same setup aside from having more diagrams in the second stack. In particular, our first stack, or our all 0-splicing diagram has 2 circles, and thus the same 4 choices of basis elements as in the 2-twist case. Our potential j values are given by the following:

v+ ⊗ v+ =⇒ p = 2 =⇒ j = −2k

v+ ⊗ v− =⇒ p = 0 =⇒ j = −2k − 2

v− ⊗ v+ =⇒ p = 0 =⇒ j = −2k − 2

v− ⊗ v− =⇒ p = −2 =⇒ j = −2k − 4

76 V ⊗3

100000...0

d∗00000...0 V

010000...0

d0∗0000...0 V

001000...0 d V ⊗2 00∗000...0 V ⊗3

d000∗00...0 000000...0 000100...0

V ⊗3

d0000∗0...0 . 000010...0 . . . V ⊗3 d00000...0∗ 000000...01

2 Figure 6.5: A diagram of the all 0-splicing for Wk (B), and the maps going to the second stack of complete splicing diagrams, which are diagrams with single 1-splicings. Note that there are k diagrams with 3 circles created by performing 1-splicings in the twist region, represented by codes 000010...00, 000001...00, ..., 000000...10,000000...01.

77 We proceed by constructing matrices in the same way, and come to exactly the same conclusions. The cases p = 2 and p = −2 give column vectors, and thus the kernels of the corresponding maps are 0. The p = 0 case yields a (k + 4) × 2 matrix, where the two columns are linearly independent vectors. Therefore, all homology groups of degree i = −k − 2 are trivial.

Next, we move on to the calculation of homology groups of degree i = −k −1. We can make the same type of argument with the basis elements. We have 2 diagrams in the second stack with 1 circle, and k + 2 diagrams in the second stack with 3 circles.

If p = −3 - that is, we assign a basis of v− ⊗ v− ⊗ v− to each of the diagrams with 3 circles - then we get j = −3 + 1 − 2k − 2 = −4 − 2k. Therefore, if j < −4 − 2k, then

−k−1,j 2 H (Wk (B)) = 0, because it is impossible to get p < −3.

−1−k,−4−2k 2 Therefore, H (Wk (B)) is the first homology group of degree −1 − k that may be nontrivial. We will calculate this homology group as in the 2-twist case. Again, we will neglect the diagrams in the second stack which have only 1 circle, as they cannot possibly be assigned a value of p = −3.

ker(d−k−1,−4−2k) H−k−1,−4−2k(W 2(B)) = k im(d−k−2,−4−2k)

    ∆(v−) v− ⊗ v− ∆(v−) v− ⊗ v−     ∆(v−) v− ⊗ v− −k−2,−4−2k     d =  .  =  .  .      .   .       .   .  ∆(v−) v− ⊗ v−

As in the 2-twist case,

78 100100...00

000011...00 k 2 diagrams 000000...11

k Figure 6.6: The 2 + 1 possible complete splicing diagrams in the third stack (stack of diagrams with two 1-splicings) with four circles; that is, with algebraic assignment of V ⊗4.

1   1   1 −k−2,−4−2k   im(d ) = span . .   .   .  1 

In order to calculate d−1−k,−4−2k, we examine the potential ways to create four circles, shown in Figure 6.6. This again mirrors the method used in the 2 twist case.

k There are 2 + 1 diagrams with two 1-splicings that have four circles. These can be represented by codes 0000X....X where there are are exactly k X’s representing splicings in the twist region, in which we can place two 1’s to obtain four circles.

k There are 2 such possibilities. Then, there is one extra diagram represented at the top of Figure 6.6 which gives us four circles, as in the 2-twist case.

Recall that all other cases are merges, having 2 circles, and thus yield a 0 entry in the matrix due to m(v− ⊗ v−) = 0. Again, let the columns represent the second stack of complete splicing diagrams (one 1-splicing), and the rows represent the third

79 stack (two 1-splicings).

This matrix is potentially very large depending upon n, but again we get two basis elements for the kernel of d−k−1,−4−2k. One of these basis elements comes from the maps into the first complete splicing diagram given in Figure 6.6. The other basis element corresponds to linearly dependent behavior coming from the twist region.

We will omit the calculations for brevity, and say that the result follows exactly as in the 2-twist case. Again, we have two basis elements for kernel of the outgoing map, and one basis element for the image of the incoming map. Again, we get a cyclic

−k−1,−4−2k 2 group of infinite order, and so H (Wk (B)) = Z. Therefore, the minimal degree/height homology group occurs at bidegree

(−1 − k, −4 − 2k) ∀k ∈ 2N.

Next, we will show that the nontrivial cohomology group of bidegree (−1−k, −4− 2k) is the only one at height −4 − 2k. That is, the cohomology groups of bidgree (i, −4 − 2k) are trivial except for the case when i = −1 − k.

Theorem 6.2. Consider the two-component Brunnian link with the second component

2 i,−4−2k 2 Whitehead k-doubled, Wk (B). For all k ∈ 2Z, i > −1−k =⇒ H (Wk (B)) = 0.

Proof. Our goal is to show that im(di,−4−2k) = ker(di+1,−4−2k) ∀i ≥ −k.

We will begin with several observations before we move to the core of this proof.

Recall that our formula for degree and height are the following:

i = b − n−

j = p + b + n+ − 2n−

2 Recall that for Wk (B), we have n+ = 2 and n− = k + 2. Thus, we have the following relations:

80 i = −k : −k = b − k − 2 =⇒ b = 2

i = −k + 1 : −k + 1 = b − k − 2 =⇒ b = 3

i = −k + 2 : −k + 2 = b − k − 2 =⇒ b = 4

. .

i = −k + d : −k + d = b − k − 2 =⇒ b = 2 + d

These relations tell us that the stack of splicing diagrams that we center around

−k+d,−4−2k 2 in order to calculate H (Wk (B)), where d ∈ N, is the stack with 2 + d 1-splicings.

2 Observe that our link diagram for Wk (B) has k + 4 crossings. Therefore, there is no possible way to perform more than (k + 4) 1-splicings. Hence, b ≤ k + 4, and our complete list of possibilities for i > −1 − k is finite, where the maximum value d may take on is k + 2.

Notice that if we restrict the height value to j = −4 − 2k, we obtain another list which gives us the p values for each of the b values given above. Notice first that n+ − 2n− = 2 − 2k − 4 = −2k − 2. So, j = p + b + n+ − 2n− = p + b − 2k − 2. Then we obtain the following list:

b = 2 : −4 − 2k = p + 2 − 2k − 2 = p − 2k =⇒ p = −4

b = 3 : −4 − 2k = p + 3 − 2k − 2 = p − 2k + 1 =⇒ p = −5

b = 4 : −4 − 2k = p + 4 − 2k − 2 = p − 2k + 2 =⇒ p = −6

. .

b = 2 + d : −4 − 2k = p + (2 + d) − 2k − 2 = p + d − 2k =⇒ p = −4 − d

81 Overall, we have a relationship between any degree i, the stack we are centering our maps around (i.e., which b value we have), and which basis elements we must assign each circle in our complete splicing diagrams within that stack (i.e., which p value we have). This relation can be demonstrated by the ordered triple

(i, b, p) = (−k + d, 2 + d, −4 − d), d ∈ N.

Now that we have these numerical constraints, let us examine how these ordered

2 triples would look given our link diagram for Wk (B). Recall from the proof of Theo-

2 rem 6.1 that our all 0-splicing diagram for Wk (B) has two circles, and corresponds to degree i = −k−2. Notice that in order to obtain the numerical constraint (−k, 2, −4), i.e., the case of our constraint where d = 0, we need at least four circles since p = −4, and we only have two 1-splicings to perform in order to obtain those 4 circles. This happens only by performing two 1-splicings in succession that are both splits rather

k than merges. This happens in 2 ways, as shown in Figure 6.6, and there are 1 + 2 k diagrams that give us this outcome. We know that 2 of them come from performing 2 2 1-splicings in the twist region of Wk (B). This generalizes as we raise the value of d. That is, in order to obtain the numer- ical constraint (−k + d, 2 + d, −4 − d), we have (2 + d) 1-splicings to perform in order to obtain 4 + d = 2 + (2 + d) circles. This means that each of the (2 + d) 1-splicings we perform must be a split. This, in fact, is satisfied only in the twist region. (Notice that the first diagram in Figure 6.6 from the proof of Theorem 6.1 has two 1-splicings performed so far, and any additional 1-splicing is a merge rather than a split, elimi-

k  nating this diagram from consideration.) Therefore, for d > 0, there are exactly 2+d diagrams, all obtained from performing 2 + d 1-splicings in the twist region of the diagram only. We will have to show the case of d = 0 more carefully, since it involves one extra complete splicing diagram possibility. However, for this case, there are still

k  2+d diagrams that result from performing 1-splicings in the twist region.

82 This implies that the only diagrams that we need to consider in this proof for are akin to the ones shown in Figure 6.6, with the number of 1-splicings in the twist region adjusted accordingly. Also, the top diagram will only be used for the d = 0 case.

We will show that the relevant maps are represented by matrices that are row equivalent to matrices of the form

 0 0  M = I B

where I is ± the identity matrix and B consists of 0’s, 1’s, and −1’s, and where

k−1 k−1 k−1  the size of I is 2+d × 2+d and the number of columns in B is 2+d−1 . This implies that  k − 1  k − 1 dim(ker(M)) = , dim(im(M)) = . 2 + d − 1 2 + d

Note that it suffices to show that this is true for d > 0, since we have that

 k − 1  dim(ker(d−k+d,−4−2k)) = 2 + d − 1

 k − 1  dim(im(d−k+(d−1),−4−2k)) = 2 + d − 1

Thus,

ker(d−k+d,−4−2k) = 0 =⇒ H−k+d,−4−2k(W 2(B)) = 0. im(d−k+(d−1),−4−2k) k

For d = 0, we will show that the matrices are of this form, with some additional columns due to the extra diagram. However, the general form of the matrices for the twist region will hold in this case as well.

83 We will induct on both d and the number of twists, k, in order to show that this holds for all k. The cases for k = 2, 3 can be shown via computation. We showed that our hypothesis holds on SageMath using the code “L.khovanov homology()” for these cases, i.e., all homology groups are trivial except for at the desired degree, given height −4 − 2k. They are not illuminating in terms of the pattern that appears in the matrices because the matrices are too small, so we will not discuss them further here. We begin with d = 0. We will be examining the case related to the constraint (−4, 2, −4). We proceed by induction on k.

Base. (k = 4) Our constraint in this case becomes (−4, 2, −4).

The map from the third stack (with two 1-splicings) to the fourth stack (with three 1-splicings), given our basis element restrictions, is given by

 0 1 −1 1 0 0 0   0 1 0 0 −1 1 0     0 0 1 0 −1 0 1  0 0 0 1 0 −1 1

Notice that if we row reduce with the goal of eliminating the nonzero elements in the top row, we obtain

 0 0 0 0 0 0 0   0 1 0 0 −1 1 0  M =    0 0 1 0 −1 0 1  0 0 0 1 0 −1 1

Notice that the matrix to the right of the first column of zeros is of the desired form M = \begin{pmatrix} 0 & 0 \\ I & B \end{pmatrix}. The first column of zeros represents the diagram that only occurs in the d = 0 case.

Additionally, for this matrix, we get that

dim(ker(M)) = \binom{3}{1} + 1 = \binom{k-1}{2+d-1} + 1,

dim(im(M)) = \binom{3}{2} = \binom{k-1}{2+d}.

So, this matrix fits the form of our hypotheses, with an accommodation for the extra diagram for d = 0.

Let us also evaluate the incoming map. We need to show that the incoming map has image equal to the kernel of the outgoing map. The map from the second stack (with one 1-splicing) to the third stack (with two 1-splicings), given our basis element restrictions, is given by

 −1 1 0 0 0 0   0 0 −1 1 0 0     0 0 −1 0 1 0     0 0 0 −1 1 0     0 0 −1 0 0 1     0 0 0 −1 0 1  0 0 0 0 −1 1 where the first two columns compensate again for the extra splicing diagram in the d = 0 case. Again, we can row reduce to eliminate the nonzero block above the matrix corresponding to A in the hypotheses, and we will obtain

 −1 1 0 0 0 0   0 0 0 0 0 0     0 0 0 0 0 0  0   M =  0 0 0 0 0 0     0 0 1 0 0 −1     0 0 0 1 0 −1  0 0 0 0 1 −1

Observe that dim(im(M′)) = 3 + 1 = 4, which matches the dimension of the kernel of the previous matrix, M. Also observe that we have the desired structure for our matrix here as well.

Therefore, for this case, representative of the constraint (−4, 2, −4), we get trivial cohomology, and our matrices match the form that we desire.
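As a sanity check, the base-case linear algebra above can also be verified numerically; the following is a minimal sketch using NumPy, with the two matrices copied from the displays above.

import numpy as np

# Outgoing map M (third stack to fourth stack) and incoming map Mp (second to third),
# copied from the k = 4, d = 0 matrices displayed above.
M = np.array([[0, 1, -1, 1, 0, 0, 0],
              [0, 1, 0, 0, -1, 1, 0],
              [0, 0, 1, 0, -1, 0, 1],
              [0, 0, 0, 1, 0, -1, 1]])
Mp = np.array([[-1, 1, 0, 0, 0, 0],
               [0, 0, -1, 1, 0, 0],
               [0, 0, -1, 0, 1, 0],
               [0, 0, 0, -1, 1, 0],
               [0, 0, -1, 0, 0, 1],
               [0, 0, 0, -1, 0, 1],
               [0, 0, 0, 0, -1, 1]])

assert np.all(M @ Mp == 0)           # consecutive differentials compose to zero
print(np.linalg.matrix_rank(M))      # 3, so dim ker M = 7 - 3 = 4
print(np.linalg.matrix_rank(Mp))     # 4 = dim im Mp, so im Mp = ker M and the homology vanishes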

Step. (k arbitrary) We need to show that for the constraint (−k, 2, −4), our claim holds, and so we obtain trivial cohomology. In order to give intuition about how we construct our matrices for k twists, we will begin with the matrix for k − 1 twists, and build the one for k twists. Suppose that the matrix for k − 1 twists is called N. Visualize each column as corresponding to a binary code representing a complete splicing diagram. The structure that these codes endow our matrices with is crucial to the argument, so let us take a moment to examine the precise ordering of the codes, i.e., the precise ordering on our columns. We will demonstrate this structure through an example, and the generalized version will be clear.

Suppose that k − 1 = 5, and suppose we were looking at the differential at the relevant height that followed the transition from performing 2 twist-region 1-splicings to 3. Then we would order the codes for the \binom{5}{2} options for initial splicing diagrams as follows, as decreasing binary numbers with precisely two 1's:

11000, 10100, 01100, 10010, 01010, 00110, 10001, 01001, 00101, 00011.

Notice that the first 3 codes (and thus the first three columns) represent the \binom{3}{2} options for filling in the first 3 slots of the code with 1's, and the final 2 slots with 0's. After these, we proceed to move a 1 into the next possible position (the fourth position in this case), keep a 0 fixed in the remaining position (the fifth position in this case), and vary over the \binom{3}{1} possibilities for placing a 1 in one of the first three positions. We carry on in this manner to obtain an ordering of initial states.

To order the resulting states, or the states representing possibilities for performing three 1-splicings in the twist region, we will order codes in the same manner, as decreasing binary numbers with precisely three 1's. We obtain the following list.

11100, 11010, 10110, 01110, 11001, 10101, 01101, 10011, 01011, 00111.

Note that we follow the same process. First, we fix the last two entries as 0's, and obtain the one possibility for placing three 1's in the first three slots. Then, we fix a 1 in the fourth position, and range over the \binom{3}{2} possibilities of placing two 1's in the first three slots, and so on.

Observe that because of how carefully we have ordered everything, the final \binom{4}{2} codes, or those with a 1 fixed in the last position, exactly correspond to the first \binom{4}{2} codes in the prior list. These codes correspond to the transition that amounts to changing the 0 to a 1 in the last position of each of those codes. This will yield the identity matrix I in the lower left-hand block of our matrix. Notably, when d takes on different values, this might be ±I instead, but that still adheres to our requirement. This precise way of ordering can be done for any k value and for any d value, i.e., for any number of twists, and any possible number of 1-splicings in the twist region.
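In other words, this is the colexicographic ordering on the sets of twist-region crossings that receive a 1-splicing. The following minimal Python sketch (the helper name is our own) reproduces the two lists above and checks the observation about the final codes.

from itertools import combinations

def ordered_codes(n, num_ones):
    """Binary codes of length n with num_ones 1's, in the order used above:
    sort the 1-positions colexicographically (by largest position, then next, ...)."""
    subsets = sorted(combinations(range(n), num_ones), key=lambda s: tuple(reversed(s)))
    return ["".join("1" if i in s else "0" for i in range(n)) for s in subsets]

two_ones = ordered_codes(5, 2)
three_ones = ordered_codes(5, 3)
print(two_ones)    # ['11000', '10100', '01100', '10010', '01010', '00110', ...]
print(three_ones)  # ['11100', '11010', '10110', '01110', '11001', ...]

# The final C(4,2) = 6 codes of the second list are exactly the first 6 codes of the
# first list with the final 0 changed to a 1 -- this is what produces the identity block I.
assert [c[:-1] + "1" for c in two_ones[:6]] == three_ones[-6:]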

Now, we will construct the matrix for k twists. We do not just construct it arbitrarily; instead, we can build on the already existing matrix. We add a 0 to the end of each of the codes, in each row and each column. But this does not yet account for all of the changes we need to make.

Let us first address the columns. Notice that there are exactly \binom{k-1}{2} codes representing the columns, now with an extra 0 on the end of each code. In order to represent all of the \binom{k}{2} complete splicing diagrams, we need to also add in the \binom{k-1}{1} columns which represent codes having a 1 in this final new spot.

As for the rows, we go through the same process. So we obtain a matrix of the form

\begin{pmatrix} A & 0 \\ I & B \end{pmatrix},

where A is the matrix for k − 1 twists and I is a submatrix of size \binom{k-1}{2} × \binom{k-1}{2}. Now, we need to show that this matrix can be row reduced to the matrix

\begin{pmatrix} 0 & 0 \\ I & B \end{pmatrix}.

That is, there is some sequence of row operations we can perform to eliminate A while keeping the 0 block in the upper right hand corner.

Notice that I already has the size we desire by construction. Recall that for each code representing a column of I, there is exactly one code representing a row of I to which it may transition. Recall also that by our ordering conventions, we will obtain a diagonal matrix. Thus, by construction, I is automatically the identity submatrix.

It remains to show that we can row reduce A to a 0 block such that the end result retains the already existing 0 block on the right. In order to show this, we first recall that upward row operations can be represented by left matrix multiplication.

Claim: Left multiplication by the following matrix represents the row operations we require, where the identity blocks are of the appropriate sizes.

\begin{pmatrix} I & -A \\ 0 & I \end{pmatrix}

Proof of Claim: Observe,

\begin{pmatrix} I & -A \\ 0 & I \end{pmatrix} \begin{pmatrix} A & 0 \\ I & B \end{pmatrix} = \begin{pmatrix} 0 & -AB \\ I & B \end{pmatrix}.

Thus, it suffices to show that −AB = 0.

By the precise construction of our matrices via the codes previously discussed, it can be verified that the submatrix A for this differential, i.e., the differential representing the transition from two 1-splicings to three 1-splicings, is precisely the submatrix that appears in the B block of the next differential, i.e., the differential representing the transition from three 1-splicings to four 1-splicings. Precisely, B(d_{i+1}) = A(d_i), where d_i denotes a given differential at a fixed number of 1-splicings. So in order to show that A(d_i)B(d_i) = 0, we only need to show that B(d_{i+1})B(d_i) = 0. But recall that we have a cochain complex, and so d_{i+1} ∘ d_i = 0 for all i. In the form of our constructed matrices, we have that

\begin{pmatrix} A(d_{i+1}) & 0 \\ I & B(d_{i+1}) \end{pmatrix} \begin{pmatrix} A(d_i) & 0 \\ I & B(d_i) \end{pmatrix} = 0 \implies B(d_{i+1})B(d_i) = 0.

Therefore, B(d_{i+1})B(d_i) = 0, and our claim has been shown.

This all implies that our desired row operations exist, and thus we can always obtain the row reduced matrix

\begin{pmatrix} 0 & 0 \\ I & B \end{pmatrix}.

Thus, our inductive step is proven.

Notice that the construction of our codes and matrices works the same way for any d value, and so by following the same procedure, we obtain the same results. The only difference is that we will not have the extra rows and columns in order to compensate for the first complete splicing diagram in Figure 6.6.

Now, we state our main result, which answers our question, Question B, for a broad class of two-component links.

Theorem 6.3. Assume L is a two-component link. Consider the minimum height j = h_m for which H^{i,h_m}(L) is nontrivial for some degree d_m. Assume L has the property that H^{d_m,h_m}(L) = G, where G is torsion-free, but H^{i,h_m}(L) = 0 for all i ≠ d_m. Then, there exists an infinite family of links L̃_i such that each L̃_i is: (i) link homotopic to L; (ii) not link isotopic to L; and (iii) has the property that each proper sublink of k components with k < n is isotopic to the corresponding k-component sublink of L.

Figure 6.7: The Khovanov homology table for L, demonstrating the necessary hypotheses. (The only nonzero entry in the minimum height h_m is the group G, in degree d_m.)

Proof. In Khovanov’s original paper on knot cohomology theory [Kho00], he gives a closed formula for the cohomology groups of the disjoint union of two links.

H^{n,m}(L_1 ⊔ L_2) = ⊕_{i,j∈Z} [ H^{i,j}(L_1) ⊗ H^{n−i,m−j}(L_2) ]  ⊕  ⊕_{i,j∈Z} Tor_1^Z( H^{i,j}(L_1), H^{n+1−i,m−j}(L_2) )    (6.1)

Note that Tor_1^Z(X_1, X_2) detects torsion in X_1 and X_2, and for our purposes we will need the following:

Tor_1^Z(0, −) = Tor_1^Z(−, 0) = Tor_1^Z(Z, −) = Tor_1^Z(−, Z) = 0,

Tor_1^Z(Z_n, Z_m) ≅ ker(Z_m →^{×n} Z_m) ≅ ker(Z_n →^{×m} Z_n).
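For example, Tor_1^Z(Z_2, Z_4) ≅ ker(Z_4 →^{×2} Z_4) = {0, 2} ≅ Z_2, while any Tor term with a free abelian group in either slot vanishes; it is the latter fact that will make the Tor contributions disappear below.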

We know that our two-component Brunnian link with k twists, denoted W_k^2(B), has its unique minimum degree/height homology group occurring at degree −1 − k and height −4 − 2k. Concisely, we have the following about this link.

H^{−1−k, −4−2k}(W_k^2(B)) = Z,

H^{i, −4−2k}(W_k^2(B)) = 0 for all i ≠ −1 − k,

H^{−2−k, j}(W_k^2(B)) = 0 for all j ∈ Z,

H^{−1−k, j}(W_k^2(B)) = 0 for all j < −4 − 2k.

Denote h̃_m := −4 − 2k and d̃_m := −1 − k. Then denote D_m = d_m + d̃_m and H_m = h_m + h̃_m.

Now, we want to examine H^{D_m, H_m}(L ⊔ W_k^2(B)). We proceed by cases to discover that most of the terms in the formula for this particular homology group vanish. To begin, we have the following:

H^{D_m, H_m}(L ⊔ W_k^2(B)) = ⊕_{i,j∈Z} [ H^{i,j}(L) ⊗ H^{D_m−i, H_m−j}(W_k^2(B)) ]  ⊕  ⊕_{i,j∈Z} Tor_1^Z( H^{i,j}(L), H^{D_m+1−i, H_m−j}(W_k^2(B)) )    (6.2)

Case 1: Suppose j < h_m. Then, since h_m is the minimum height for which L has nontrivial homology, H^{i,j}(L) = 0. Thus H^{i,j}(L) ⊗ H^{D_m−i, H_m−j}(W_k^2(B)) = 0 and Tor_1^Z(H^{i,j}(L), H^{D_m+1−i, H_m−j}(W_k^2(B))) = 0.

Case 2: Suppose j > h_m. Then H_m − j = h_m + h̃_m − j < h̃_m. Since h̃_m is the minimum height for which W_k^2(B) has nontrivial homology, both H^{D_m−i, H_m−j}(W_k^2(B)) = 0 and H^{D_m+1−i, H_m−j}(W_k^2(B)) = 0. Thus H^{i,j}(L) ⊗ H^{D_m−i, H_m−j}(W_k^2(B)) = 0 and Tor_1^Z(H^{i,j}(L), H^{D_m+1−i, H_m−j}(W_k^2(B))) = 0.

Thus, we have discovered that j = h_m is the only height for which either H^{i,j}(L) ⊗ H^{D_m−i, H_m−j}(W_k^2(B)) or Tor_1^Z(H^{i,j}(L), H^{D_m+1−i, H_m−j}(W_k^2(B))) may be nonzero.

Case 3: Suppose j = h_m and i ≠ d_m. Recall that d_m is the unique degree for which L has nontrivial homology at height h_m, so H^{i,h_m}(L) = 0. Then H^{i,h_m}(L) ⊗ H^{D_m−i, h̃_m}(W_k^2(B)) = 0 and Tor_1^Z(H^{i,h_m}(L), H^{D_m+1−i, h̃_m}(W_k^2(B))) = 0.

Thus, we know that i = d_m and j = h_m are the only possibilities for which the right-hand side of our formula for the cohomology of the disjoint union of two links may be nontrivial. So we have the following:

H^{D_m, H_m}(L ⊔ W_k^2(B)) = H^{d_m, h_m}(L) ⊗ H^{d̃_m, h̃_m}(W_k^2(B))  ⊕  Tor_1^Z( H^{d_m, h_m}(L), H^{d̃_m+1, h̃_m}(W_k^2(B)) )    (6.3)

However, we also know that H^{d̃_m+1, h̃_m}(W_k^2(B)) = 0 by Theorem 6.2. Therefore Tor_1^Z(H^{d_m, h_m}(L), H^{d̃_m+1, h̃_m}(W_k^2(B))) = 0. Thus,

H^{D_m, H_m}(L ⊔ W_k^2(B)) ≅ H^{d_m, h_m}(L) ⊗ H^{d̃_m, h̃_m}(W_k^2(B)) ≅ G ⊗ Z ≅ Z,

since G is torsion-free.

Next, we will verify that i = D_m is the only degree for which H^{i, H_m}(L ⊔ W_k^2(B)) is nontrivial. Let i = D be arbitrary. We will show that H^{D, H_m}(L ⊔ W_k^2(B)) = 0 unless D = D_m.

Notice that by Cases 1 and 2 above, j = h_m is the only height for which we get nontrivial terms in H^{D, H_m}(L ⊔ W_k^2(B)). Then, by assumption, H^{i, h_m}(L) = 0 unless i = d_m. So our only possibility for nontrivial homology occurs if H^{d_m, h_m}(L) ⊗ H^{D−d_m, h̃_m}(W_k^2(B)) ≠ 0 or Tor_1^Z(H^{d_m, h_m}(L), H^{D−d_m+1, h̃_m}(W_k^2(B))) ≠ 0.

First, we know that Tor_1^Z(H^{d_m, h_m}(L), H^{D−d_m+1, h̃_m}(W_k^2(B))) = 0 regardless of D, because at height h̃_m the only nontrivial homology group of W_k^2(B) is Z, which is torsion-free.

Figure 6.8: The Khovanov homology table for L ⊔ W_k^2(B), where D_m = d_m + d̃_m and H_m = h_m + h̃_m. (The only nonzero entry in the minimum height H_m is Z, in degree D_m.)

Next, H^{d_m, h_m}(L) ⊗ H^{D−d_m, h̃_m}(W_k^2(B)) ≠ 0 ⟺ H^{D−d_m, h̃_m}(W_k^2(B)) ≠ 0 ⟺ D − d_m = d̃_m ⟺ D = d_m + d̃_m = D_m. Thus, we have shown the following:

• H^{i, H_m}(L ⊔ W_k^2(B)) = 0 for all i ≠ D_m

• H^{D_m, H_m}(L ⊔ W_k^2(B)) ≅ Z

• H_m is the minimal height at which L ⊔ W_k^2(B) has nontrivial homology.

These conclusions can be represented again by a table, as shown in Figure 6.8.

In order to turn this information into a statement about the band sum we are interested in, we will use a result about the ranks of these groups. For this reason, we briefly interrupt our calculations to give a rigorous definition of the rank of a module over a domain.

Definition 23. Let R be a domain and M an R-module. The rank of M is given by

rank(M) = dim_{Q(R)}(M ⊗_R Q(R)), where Q(R) is the field of fractions of the domain R.
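For instance, if M = Z^2 ⊕ Z/3Z over R = Z, then M ⊗_Z Q ≅ Q^2, so rank(M) = 2; the torsion summand Z/3Z contributes nothing to the rank.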

In particular, we have R = Z, so we may conclude that the rank of M is the number of copies of Z in the decomposition given by the fundamental theorem of finitely generated modules over a principal ideal domain; this applies because Z is a principal ideal domain. Therefore, we may conclude the following about the homology groups we have been discussing by viewing them as Z-modules.

• rank(H^{i, H_m}(L ⊔ W_k^2(B))) = 0 for all i ≠ D_m

• rank(H^{D_m, H_m}(L ⊔ W_k^2(B))) = 1

Next, we will relate this minimal height homology group of L ⊔ W_k^2(B) to homology groups of the connected sum L # W_k^2(B). In order to relate these two objects, we first perform the connected sum of L and W_k^2(B) with a twisted band, which we denote L #_t W_k^2(B). We can consider the two ways to resolve this crossing in the band. One of them leads us to the connected sum L # W_k^2(B) with no twist in the band, and the other leads us to L ⊔ W_k^2(B). This resolution is associated to a long exact sequence [Kho00]. In particular, for each height j ∈ Z, we obtain the following long exact sequence of homology groups:

... → H^{i−1, j−1}(D_3) → H^{i−1, j−2}(D_5) → H^{i, j}(D_4) → H^{i, j−1}(D_3) → H^{i, j−2}(D_5) → H^{i+1, j}(D_4) → ...

Since we can create such a long exact sequence for any height j, choose j = H_m + 1. Then we obtain the following:

... → H^{i−1, H_m}(D_3) → H^{i−1, H_m−1}(D_5) → H^{i, H_m+1}(D_4) → H^{i, H_m}(D_3) → H^{i, H_m−1}(D_5) → H^{i+1, H_m+1}(D_4) → ...

Notice that this long exact sequence is finite in the sense that it eventually becomes 0 on either end. This is because the chain complex for the band summed link itself terminates on either end. Therefore, we can apply the following theorem from algebra.

Figure 6.9: Connected sum, and two potential resolutions of the twist in the band (the diagrams are labeled D_3, D_5, and D_4).

Theorem 6.4. Consider a long exact sequence of modules over a domain that terminates on either end:

... → M_{i−1} → M_i → M_{i+1} → ...

Then

Σ_i (−1)^i rank(M_i) = 0.
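As a small illustration: for the short exact sequence 0 → Z →^{×2} Z → Z/2Z → 0, extended by zeros on both ends, the ranks are 1, 1, and 0, and indeed 1 − 1 + 0 = 0; torsion modules do not contribute.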

What this theorem says in our context is the following.

Σ_i (−1)^i rank(H^{i, H_m}(D_3)) + Σ_i (−1)^{i+1} rank(H^{i, H_m−1}(D_5)) + Σ_i (−1)^i rank(H^{i, H_m+1}(D_4)) = 0.

Notice that D_3 is our disjoint union, and we know that

Σ_i (−1)^i rank(H^{i, H_m}(D_3)) = (−1)^{D_m} rank(H^{D_m, H_m}(D_3)) = ±1.

95 Thus,

Σ_i (−1)^{i+1} rank(H^{i, H_m−1}(D_5)) + Σ_i (−1)^i rank(H^{i, H_m+1}(D_4)) = ±1.

Hence, we can guarantee that either

Σ_i (−1)^{i+1} rank(H^{i, H_m−1}(D_5))

is nontrivial, or

Σ_i (−1)^i rank(H^{i, H_m+1}(D_4))

is nontrivial.

However, we cannot determine much else due to the possibility of cancellation. We have also determined that the connected sum (either the version with a twist in the band, the version without a twist in the band, or potentially both) has a nontrivial homology group at the respective height.

From here, we will focus on the version of the connected sum whose alternating summation above is nontrivial, for which there are two cases. Take this link, and connect the two components that were not previously connected via a twisted band, and denote the resulting link L #_{bt} W_k^2(B). We then go through the same process of resolving the twist in this newly added band. Figures 6.10 and 6.11 represent this scenario: the first diagram represents the scenario where the connected sum with no twist in the band had nontrivial alternating summation, and the second diagram represents the alternative. Importantly, D_3′ in either case is the connected sum of two links that we now know has nontrivial alternating summation of ranks. Again, we can put together a long exact sequence of homology groups according to one of the resolution diagrams above, choosing j = H_m + 2 in the case of the untwisted band having nontrivial alternating summation of ranks, and choosing j = H_m in the case of the twisted band having nontrivial alternating summation of ranks.

Figure 6.10: Exterior band sum, and two potential resolutions of the twist in the second band (diagrams D_3′, D_5′, and D_4′). Note that in this diagram, we have assumed that the link with nontrivial alternating summation of ranks is the connected sum with the untwisted band.

Figure 6.11: Exterior band sum, and two potential resolutions of the twist in the second band (diagrams D_3′, D_5′, and D_4′). Note that in this diagram, we have assumed that the link with nontrivial alternating summation of ranks is the connected sum with the twisted band.

With the appropriate height chosen for whichever case holds, we again apply the long exact sequence associated to resolving the twist in this second band.

In the untwisted band case, we obtain the following long exact sequence:

... → H^{i−1, H_m+1}(D_3′) → H^{i−1, H_m}(D_5′) → H^{i, H_m+2}(D_4′) → H^{i, H_m+1}(D_3′) → H^{i, H_m}(D_5′) → H^{i+1, H_m+2}(D_4′) → ...

In the twisted band case, we instead obtain:

... → H^{i−1, H_m−1}(D_3′) → H^{i−1, H_m−2}(D_5′) → H^{i, H_m}(D_4′) → H^{i, H_m−1}(D_3′) → H^{i, H_m−2}(D_5′) → H^{i+1, H_m}(D_4′) → ...

Again, in either case, we apply our theorem stating that the alternating summation of the ranks of the modules in our exact sequence equals 0. In the untwisted band case, we obtain the following:

Σ_i (−1)^i rank(H^{i, H_m+1}(D_3′)) + Σ_i (−1)^{i+1} rank(H^{i, H_m}(D_5′)) + Σ_i (−1)^i rank(H^{i, H_m+2}(D_4′)) = 0.

In the twisted band case, we instead obtain:

Σ_i (−1)^i rank(H^{i, H_m−1}(D_3′)) + Σ_i (−1)^{i+1} rank(H^{i, H_m−2}(D_5′)) + Σ_i (−1)^i rank(H^{i, H_m}(D_4′)) = 0.

Let Σ_i (−1)^i rank(H^{i, H_m+1}(D_3′)) = z, or, in the twisted band case, Σ_i (−1)^i rank(H^{i, H_m−1}(D_3′)) = z, where z is a nonzero integer. Then the above equations imply that, in the untwisted band case,

Σ_i (−1)^{i+1} rank(H^{i, H_m}(D_5′)) + Σ_i (−1)^i rank(H^{i, H_m+2}(D_4′)) = −z,

while in the twisted band case,

Σ_i (−1)^{i+1} rank(H^{i, H_m−2}(D_5′)) + Σ_i (−1)^i rank(H^{i, H_m}(D_4′)) = −z.

Therefore, some nontrivial homology group occurs in one of four links at the following heights:

Untwisted band case: height H_m in D_5′, or height H_m + 2 in D_4′.

Twisted band case: height H_m − 2 in D_5′, or height H_m in D_4′.

Finally, we need to show that H_m + 2 < h_m in order to show that there is a nontrivial homology group of lower height than any homology group of L occurring in either D_4′ or D_5′. By definition, H_m + 2 = h_m + h̃_m + 2. Note that h_m + h̃_m + 2 < h_m ⟺ h̃_m + 2 < 0 ⟺ h̃_m < −2. But recall that h̃_m = −4 − 2k. In order to satisfy our link homotopy condition, we perform at least 2 twists in our Whitehead doubling operation; therefore h̃_m ≤ −8. So, we have verified that H_m + 2 < h_m.

This demonstrates that L #_b W_k^2(B), or L #_{bt} W_k^2(B) with a twist in the first band, the second band, or both bands, is a link satisfying each of our three properties. In order to create an infinite family of such links, we can continue adding twists to obtain even lower height homology groups in our band summed link (or band summed link with a twist in the second band). We prove this by performing the same argument with an even lower h̃_m, i.e., an even lower H_m.

We conclude by demonstrating a few links which satisfy the hypotheses of this theorem, and which therefore admit the desired infinite family of link homotopic, non-isotopic links via the band sum construction. Their captions use the naming conventions of LinkInfo, and show the unique, torsion-free minimum degree, minimum height cohomology group. The information about cohomology is formatted as a dictionary, where the outer key is the height and the embedded key is the degree. Notably, with a brief search, we could not find a two-component link, alternating or non-alternating, that did not have Khovanov homology groups arranged as in the desired table.
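For readers who wish to test further candidates, the following minimal Python sketch (the helper name, and the convention that trivial groups are recorded as 0, are our own assumptions) scans a Khovanov homology table in the dictionary format just described and reports the minimum-height entry when that row has exactly one nontrivial group; torsion-freeness of that group must still be checked separately.

def minimum_height_entry(kh_table):
    """kh_table: {height: {degree: group}}, with trivial groups omitted or recorded as 0.
    Returns (h_m, d_m, group) if the minimum nontrivial height has exactly one
    nontrivial entry, and None otherwise."""
    # Keep only genuinely nontrivial entries.
    nonzero = {h: {d: g for d, g in row.items() if g != 0} for h, row in kh_table.items()}
    nonzero = {h: row for h, row in nonzero.items() if row}
    h_m = min(nonzero)                  # minimum height with nontrivial homology
    row = nonzero[h_m]
    if len(row) != 1:
        return None                     # more than one nontrivial degree at height h_m
    (d_m, group), = row.items()
    return h_m, d_m, group

# Example in the format of the figure captions below (the values are placeholders):
example = {-10: {-4: "Z"}, -8: {-4: "Z", -3: "Z"}}
print(minimum_height_entry(example))    # (-10, -4, 'Z')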

Figure 6.12: L6a1: {-10 : -4 : Z}

Figure 6.13: L8n1: {-12 : -4 : Z × Z}

Figure 6.14: L9n18: {6 : 0 : Z}

Figure 6.15: L10a59: {-18 : -7 : Z}

Bibliography

[BH99] Martin R. Bridson and André Haefliger. Metric spaces of non-positive curvature, volume 319 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 1999.

[BN02] Dror Bar-Natan. On Khovanov’s categorification of the Jones polynomial. Algebr. Geom. Topol., 2:337–370, 2002.

[Dix20] Samantha Dixon. Composite Knot Determinants. University of Chicago, Chicago, Illinois, 2010 (accessed March 10, 2020). http://www.math.uchicago.edu/~may/VIGRE/VIGRE2011/REUPapers/Dixon.pdf.

[DJL12] M. Davis, T. Januszkiewicz, and J.-F. Lafont. 4-dimensional locally CAT(0)-manifolds with no Riemannian smoothings. Duke Math. J., 161(1):1–28, 2012.

[Hun14] Hilary Hunt. Knots, isotopies, and Khovanov homology. Undergraduate honors thesis, Australian National University, Canberra, Australia, October 2014.

[Kho00] Mikhail Khovanov. A categorification of the Jones polynomial. Duke Math. J., 101(3):359–426, 2000.

[Pri20] Candice Price. Coloring Invariant and Determinants. Colorado State University, Fort Collins, Colorado, 2003 (accessed January 23, 2020). http://educ.jmu.edu/~taalmala/OJUPKT/candiceprice.pdf.

[Sat10] Bakul Sathaye. Obstructions to Riemannian smoothings of locally CAT(0) manifolds. PhD thesis, The Ohio State University, Columbus, Ohio, 2010.

[Wil20] Brandon Williams. Computations in Khovanov Homology. University of Illinois at Chicago, Chicago, Illinois, 2008 (accessed February 14, 2020). http://homepages.math.uic.edu/~kauffman/ComputationsKhovanovHomology.pdf.

Curriculum Vitae

Lydia Holley

GRADUATE STUDY:
University of Illinois at Chicago, Chicago, Illinois
Ph.D. Candidate, Mathematics

Wake Forest University, Winston-Salem, North Carolina
M.A., Mathematics, May 2020

UNDERGRADUATE STUDY:
Cornell University, Ithaca, NY
B.A., Mathematics, May 2018

Honors and Awards

1. Top Graduate Student in Mathematics Award (Wake Forest University, 2020)

2. Teaching Assistantship (Wake Forest University, 2018-2020)

3. Pi Mu Epsilon (Wake Forest University 2018-2020)

4. Dean’s List (Cornell University, 4x)

5. Exceptional Senior Award (Cornell University, 2018)

6. Michael J. Harum Memorial Prize for Students of Slavic Languages (Cornell University, 2018)

7. Frederich Conger Wood Summer Research Fellowship (Cornell University, 2017)

Conferences and Talks

1. Attended Winter School on Cremona Groups - Geometric Topology and Alge- braic Geometry, Cuernavaca, Mexico; January 6-10, 2020 (received funding)

2. Gave AWM Brown Bag Research Talk at WFU - November 22, 2019 - “Playing with Links: Link Homotopy and a Generalized Version of Brunnian-ness”

3. Gave WFU Topology Seminar Research Talk - October 31, 2019 and November 14, 2019 - “Searching for Link Homotopic yet not Isotopic Families”, “Producing Examples using Colorability and Introduction to Khovanov Homology”

4. Attended Graduate Student Topology and Geometry Conference; March 30-31, 2019; UIUC (received funding)

5. Attended Graduate Student Conference in Algebra, Geometry, and Topology; June 1-2, 2019; Temple University (received funding)

Activities

1. Association for Women in Mathematics - WFU Student Chapter - Secretary (May 2018 - May 2020)

2. Member of Conference Planning Committee for the AWM Piedmont Triad Con- ference (October 2018 - March 2019)

3. Hiking and indoor gardening, playing cello
