MATROID RELATIONSHIPS: MATROIDS FOR ALGEBRAIC TOPOLOGY

DISSERTATION

Presented in Partial Fulfillment of the Requirements for the Degree Doctor of

Philosophy in the Graduate School of the Ohio State University

By

Charles Estill, BS/MS

Graduate Program in Mathematics

The Ohio State University

2013

Dissertation Committee:

Sergei Chmutov, Advisor

Matthew Kahle

Thomas Kerler

Azita Manouchehri

© Copyright by

Charles Estill

2013

ABSTRACT

In [ACE+13] we found a relationship between two polynomials of a graph cellularly embedded in a surface: the Krushkal polynomial, based on the Tutte polynomial of a graph and using data from the algebraic topology of the graph and the surface, and the Las Vergnas polynomial for the matroid perspective from the bond matroid of the dual graph to the circuit matroid of the graph, B(G∗) → C(G). With

Vyacheslav Krushkal having (with D. Renardy) expanded his polynomial to the n-skeleton of a simplicial or CW decomposition of a 2n-dimensional manifold, a matroid perspective was found whose Las Vergnas polynomial would play a similar role to that in the 2-dimensional case. We hope that these matroids and the perspective will prove useful in the study of complexes.

This is dedicated to my family, whose trust in me has finally been justified.

ACKNOWLEDGMENTS

Thanks are due especially to my two thesis advisors: Ian Leary, who helped me learn so much, even if our wonderful possible result got snatched out from under us by a genius; and Sergei Chmutov, who helped me to the finish line. In addition, my gratitude to everyone associated with the Mathematics department of The Ohio State University is limitless. Finally, without the work of Ross Askanazi, Jonathan Michel, and Patrick Stollenwerk on the paper we wrote with Dr. Chmutov, this result might never have existed.

VITA

1972 ...... Year of birth

2004 ...... B.Sc. in Mathematics

2008 ...... MS in Mathematics

2004-Present ...... Graduate Teaching Associate, The Ohio State University

PUBLICATIONS

Askanazi, Ross; Chmutov, Sergei; Estill, Charles; Michel, Jonathan; Stollenwerk, Patrick. Polynomial invariants of graphs on surfaces.

FIELDS OF STUDY

Major: Mathematics

Specialization: Algebraic Topology

TABLE OF CONTENTS

Abstract ...... ii

Dedication ...... iii

Acknowledgments ...... iv

Vita ...... v

List of Figures ...... viii

List of Tables ...... ix

CHAPTER PAGE

1 Introduction ...... 1

2 Matroids ...... 3

2.1 Axioms of independence ...... 3
2.2 Bases ...... 6
2.3 The circuit axioms and graphical matroids ...... 10
2.4 The rank function ...... 18
2.5 Flats, hyperplanes, and spanners ...... 24
2.6 Bond matroids and more general dual matroids ...... 32
2.7 Minors ...... 39
2.8 Matroid perspectives ...... 41

3 Topology ...... 45

3.1 Cell complexes ...... 45
3.2 Cohomology and the cup product ...... 50
3.3 Manifolds and Poincaré duality ...... 51

3.4 Krushkal’s polynomial ...... 53
3.5 Matroid perspectives for chain complexes ...... 55

4 Graphs on Surfaces ...... 57

4.1 The Las Vergnas polynomial ...... 57
4.2 Krushkal’s polynomial for graphs in surfaces ...... 60
4.3 Relationship ...... 61
4.4 An example ...... 66
4.5 Duality ...... 67

5 Higher Dimensions ...... 70

5.1 Main result ...... 70
5.2 Another simple example ...... 74

6 Further Directions ...... 76

6.1 More than just the middle, more than just one structure ...... 76
6.2 Orientation ...... 77
6.3 Infinite structures ...... 78
6.4 Simple-homotopy ...... 80

Bibliography ...... 81

LIST OF FIGURES

FIGURE PAGE

2.1 A simple four vertex graph ...... 15

4.1 A cellular graph (with loops and parallel edges) on a 2-holed torus. 58

4.2 The dual graph of the graph in figure 4.1 ...... 59

4.3 A graph cellularly embedded in the two-holed torus on the right, and its dual, represented as a ribbon graph, on the left...... 67

LIST OF TABLES

TABLE PAGE

4.1 Calculations by subset of the necessary data for our polynomials. 68

CHAPTER 1

INTRODUCTION

In the summer of 2010, in a working group on knot theory funded by VIGRE, we considered a possible relation between a polynomial defined by Vyacheslav

Krushkal in [Kru11] for graphs embedded in a surface and the Tutte polynomial of the matroid perspective from the bond matroid of the dual of a ribbon graph to the circuit matroid of the graph. This exploration led to our paper [ACE+13]. Subsequently, Krushkal, together with David Renardy, gave us [KR10], which expanded his polynomial to one defined on the nth level of a triangulation of a 2n-manifold.

In chapter 2, I introduce and explain many of the basic concepts concerning matroids. Most of this follows the work in [Oxl11] and [Wel10], which are the main reference works for matroids. There are many axiom systems, all equivalent, for defining matroids. We will need to know several of them. In addition, we will need to know about dual matroids and matroid perspectives, both covered, along with the useful notion of a minor of a matroid, in the later sections of chapter 2.

In chapter 3 I cover the basics of algebraic topology that we will need. Most of what I cover is well known. I used [Hat02] and [Mun84] as my main references.

It is in this chapter that I also share the definition of Krushkal’s polynomial from [KR10]. I also define here the matroid perspective that will fulfill the role that B(G∗) → C(G) does in the 2-dimensional case.

To help our geometric understanding of the final result, I recapitulate [ACE+13] in chapter 4. This is followed by my main theorem in chapter 5. And we finish in chapter 6 with some associated topics and ideas that might be worth exploring in the future.

CHAPTER 2

MATROIDS

Matroids were first conceived by H. Whitney as an abstract generalization of matrices, with a focus on questions of independence of subsets of the set of column vectors. Having previously defined an independent subgraph as one containing no cycles, he was able to simultaneously generalize graphs. Following his lead, we define a matroid as giving some information on subsets of a fixed set. There are several different ways of indicating this information, from the vector-influenced notion of independence to the nearly topological closure operator.

2.1 Axioms of independence

A matroid, M, is a finite set E and a collection of subsets I ⊆ P(E) satisfying the following axioms, (i1)-(i3).

(i1) ∅ ∈ I.

(i2) If X ∈ I and Y ⊆ X then Y ∈ I.

(i3) If U and V are in I with |V| < |U|, then there exists an element of E, x ∈ U r V, such that V ∪ {x} ∈ I.

The subsets in I are called independent and those not in I are called dependent.

Further, we’ll call the set E the ground set. Two matroids M and N are called isomorphic if there is a bijection between their ground sets that preserves the structure; that is to say, independent sets are mapped to independent sets in both directions.
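Since everything here is finite, the axioms (i1)-(i3) can be verified mechanically on small examples. The following Python sketch is my own illustration, not part of the text's development; the function name and the test matroid are assumptions:

```python
from itertools import combinations

def is_matroid(E, indep):
    """Brute-force check of (i1)-(i3) for a family of subsets of E."""
    I = {frozenset(X) for X in indep}
    if frozenset() not in I:                       # (i1)
        return False
    for X in I:                                    # (i2): all subsets independent
        for k in range(len(X)):
            if any(frozenset(Y) not in I for Y in combinations(X, k)):
                return False
    for U in I:                                    # (i3): the exchange axiom
        for V in I:
            if len(V) < len(U) and not any(V | {x} in I for x in U - V):
                return False
    return True

# The uniform matroid U_{2,3}: all subsets of {1, 2, 3} with at most
# two elements are independent.
E = {1, 2, 3}
I = [S for k in range(3) for S in combinations(E, k)]
print(is_matroid(E, I))  # → True
```

The triple loop makes this practical only for small ground sets, but it is enough to experiment with the examples in this chapter.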

The following proposition and many others in this chapter follow the path well-trod by James Oxley in [Oxl11]. Those proofs not guided by Oxley were guided by D. J. A. Welsh’s [Wel10].

Proposition 2.1.1. Let A be an m × n matrix over a field F. Set E as the set of column vectors of A and let I consist of those subsets X ⊆ E which are linearly independent in F^m, the m-dimensional vector space over F. Then (E, I) is a matroid.

Proof. Trivially ∅ ∈ I, as the empty set of vectors is linearly independent, so (i1) is satisfied. And if X is an independent set of column vectors, so is any subset, which means that (i2) is satisfied.

To show that (i3) is satisfied, let U and V be two independent sets of column vectors of A with |V| < |U|. Let W be the subspace of F^m spanned by U ∪ V. Note that the dimension of W, dim W, is at least |U|. If (i3) weren't true and hence V ∪ {x} were linearly dependent for every x ∈ U r V, then W would be contained in the span of V and thus

|U| ≤ dim W ≤ |V| < |U|, (2.1.1)

a contradiction. Hence (i3) is satisfied.

We denote the matroid thus obtained M[A] and call it the vector matroid of A. Any matroid M for which we can find a field F and a matrix A such that M is isomorphic to M[A] is called representable over the field F, or just representable.

Example 2.1.2. Consider the vector matroid of the following matrix over R:

    ⎡ 0 0 1 0 1 0 0 ⎤
A = ⎢ 0 1 1 0 0 1 1 ⎥
    ⎣ 1 0 0 0 0 1 1 ⎦

If we label the column vectors as one to seven from left to right, so that E = {1, 2, 3, 4, 5, 6, 7}, then the set of independent subsets is

I = { ∅, {1}, {2}, {3}, {5}, {6}, {7},
{1, 2}, {1, 3}, {1, 5}, {1, 6}, {1, 7}, {2, 3}, {2, 5},
{2, 6}, {2, 7}, {3, 5}, {3, 6}, {3, 7}, {5, 6}, {5, 7},
{1, 2, 3}, {1, 2, 5}, {1, 3, 5}, {1, 3, 6}, {1, 3, 7}, {1, 5, 6},
{1, 5, 7}, {2, 3, 6}, {2, 3, 7}, {2, 5, 6}, {2, 5, 7}, {3, 5, 6}, {3, 5, 7} }.

Theorem 2.1.3. If U and V are independent sets such that |V| < |U| then there is a set W ⊆ U r V such that |V ∪ W| = |U| and V ∪ W ∈ I.

Proof. Consider the collection

{ X ⊆ U r V | V ∪ X ∈ I }. (2.1.2)

Choose W to be an element of this collection of maximal (finite) cardinality. To see that W fulfills the necessary condition, assume that it doesn’t: |V ∪ W | <

|U|. Then, by (i3), there is an x ∈ U r (V ∪ W ), such that V ∪ W ∪ {x} is independent. But W ∪ {x} ⊆ U r V and |W ∪ {x}| > |W |, contradicting the maximality of W .

2.2 Bases

If any subset of an independent set is independent, then for many purposes we need only concern ourselves with the maximal independent sets, that is to say, independent sets with no independent proper supersets. We call such a set a basis or a base.

Example 2.2.1. The bases of example 2.1.2 are the three element subsets which are in I.

Proposition 2.2.2. All bases have the same cardinality. That is, if A and B are bases of a matroid M = (E, I), then |A| = |B|.

Proof. Let A, B ⊆ E be bases such that |A| ≤ |B|. If the cardinality of A were strictly less than that of B then by (i3) there would be an element x ∈ B r A such that A ∪ {x} is independent, which would contradict the maximality of A.

Hence |A| = |B|.

For a finite set E and a collection of its subsets B ⊆ P(E), consider the following axioms.

(b1) B is nonempty.

(b2) If B1, B2 ∈ B and x ∈ B1 r B2 then there is an element y ∈ B2 r B1 such that (B1 ∪ {y}) r {x} ∈ B.

Proposition 2.2.3. The maximal independent sets of a matroid satisfy (b1) and (b2).

Proof. Let M = (E, I) be a matroid. By axiom (i1) ∅ is independent. Either

∅ is, itself, a maximal independent subset of E, in which case ∅ ∈ B, or there is some non-empty independent set I, which, since E is finite, must have some maximal independent superset, which would be in B. So (b1) is satisfied.

Let B1 and B2 be distinct maximal independent sets, and let x ∈ B1 r B2.

Then B1 r {x} and B2 are independent sets. And, by the previous proposition,

|B1| = |B2|, so |B1 r {x}| < |B2|. By axiom (i3) there is a

y ∈ B2 r (B1 r {x}) = B2 r B1 (2.2.1)

such that (B1 r {x}) ∪ {y} is independent. But then, being independent, it must be contained in a maximal independent set B′ ⊇ (B1 r {x}) ∪ {y}. The previous proposition comes into play again to tell us that |B′| = |B1|, and since x is in B1 and y is not, |(B1 r {x}) ∪ {y}| = |B1|. So B′ = (B1 r {x}) ∪ {y} is a maximal independent set, which means that (b2) is satisfied.

Proposition 2.2.4. If (E, B) is a pair satisfying (b1) and (b2) then all elements of B are of the same size.

Proof. Suppose that this were not true, i.e. that there were elements of B of different sizes. Then the set

A = {A r B | A, B ∈ B and |B| < |A|} (2.2.2)

is nonempty, as are its elements. Choose a pair B1, B2 ∈ B with |B2| < |B1| such that B1 r B2 is of minimum size in A. Since B1 r B2 ≠ ∅ we can choose an x ∈ B1 r B2, and (b2) gives us a y ∈ B2 r B1 such that (B1 ∪ {y}) r {x} ∈ B.

Clearly |(B1 ∪ {y}) r {x}| = |B1| > |B2| and since we’ve exchanged x for y,

|((B1 ∪ {y}) r {x}) r B2| < |B1 r B2|, (2.2.3) contradicting the minimality of B1 r B2 in A.

Proposition 2.2.5. If (E, B) is a pair satisfying (b1) and (b2) and I is the set of subsets of elements of B, then (E, I) satisfies (i1)-(i3).

Proof. Since I = {I ⊆ E | I ⊆ B for some B ∈ B}, (i2) is clearly satisfied. And by (b1) there is some B ∈ B; since ∅ ⊆ B, we have ∅ ∈ I and (i1) is satisfied.

Let’s assume, in order to reach a contradiction, that I does not satisfy (i3).

That is to say, there are two sets, U and V , in I with |V | < |U|, such that for all x ∈ U rV the set V ∪{x} is not in I. Choose BU ,BV ∈ B such that U ⊆ BU and V ⊆ BV , and such that |BU r (U ∪ BV )| is minimal among all choices of

BU and BV . Since V ⊆ BV , it is obvious that U r BV ⊆ U r V , but further, for our choice of U and V

U r BV = U r V. (2.2.4)

Otherwise there would be an x ∈ U r V with x ∈ BV , i.e. V ∪ {x} ⊆ BV and hence in I.

By (b2), if BU r (U ∪ BV ) is non-empty then for any x ∈ BU r (U ∪ BV ) ⊆

BU r BV , there is a y ∈ BV r BU such that (BU ∪ {y}) r {x} ∈ B. But this new base has the property that

|((BU ∪ {y}) r {x}) r (U ∪ BV )| < |BU r (U ∪ BV )|, (2.2.5) so BU r (U ∪ BV ) is empty. This means that BU r BV = U r BV . And putting this together with equation (2.2.4) above, we get

BU r BV = U r V. (2.2.6)

What about BV r (V ∪ BU)? If there is an element, x, in this set then there is, as above, an element y ∈ BU r BV such that (BV ∪ {y}) r {x} ∈ B. Note that, since x ∉ V, V ∪ {y} ⊆ (BV ∪ {y}) r {x}, hence V ∪ {y} ∈ I. Equation (2.2.6) now tells us that y ∈ BU r BV = U r V, contradicting the assumption that (i3) fails. So BV r (V ∪ BU) is empty and BV r BU = V r BU. And since

V r BU ⊆ V r U we get that

BV r BU ⊆ V r U or, more importantly, |BV r BU | ≤ |V r U|. (2.2.7)

Proposition 2.2.4 tells us that |BU| = |BV|, so |BU r BV| = |BV r BU|. Together with equation (2.2.6) and the above, we get

|U r V | = |BU r BV | = |BV r BU | ≤ |V r U|. (2.2.8)

So |U r V| ≤ |V r U|, or |U| ≤ |V|, contradicting the assumption that |V| < |U|. Thus (i3) is satisfied, and the proposition is proved.

Propositions 2.2.3 and 2.2.5 together mean that we can now define a matroid either by specifying the independent sets, in which case the bases are the maximal ones, or by specifying the bases, in which case the independent sets are the subsets thereof.
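As a concrete check of this equivalence, one can verify (b1) and (b2) directly on a small family of bases. The sketch below is illustrative Python (the function is my own); it tests the three-element independent sets of example 2.2.1, i.e. the bases of the matroid of example 2.1.2:

```python
def satisfies_basis_axioms(bases):
    """Check the basis axioms (b1) and (b2) for a family of frozensets."""
    if not bases:                                   # (b1): B is nonempty
        return False
    for B1 in bases:
        for B2 in bases:
            for x in B1 - B2:
                # (b2): some y in B2 - B1 completes the exchange
                if not any((B1 - {x}) | {y} in bases for y in B2 - B1):
                    return False
    return True

# The three-element independent sets of example 2.2.1 (the bases of the
# vector matroid of example 2.1.2).
bases = {frozenset(S) for S in [
    (1, 2, 3), (1, 2, 5), (1, 3, 5), (1, 3, 6), (1, 3, 7), (1, 5, 6),
    (1, 5, 7), (2, 3, 6), (2, 3, 7), (2, 5, 6), (2, 5, 7), (3, 5, 6),
    (3, 5, 7)]}
print(satisfies_basis_axioms(bases))  # → True
```

Removing even a single basis from the family breaks the exchange axiom, which gives a quick way to see that (b2) is a genuine constraint.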

2.3 The circuit axioms and graphical matroids

Clearly, if looking at the maximal independent sets is sufficient, then it should also suffice to look at the minimal dependent sets, i.e. dependent sets all of whose proper subsets are independent. Such a set is called a circuit.

Example 2.3.1. Recall the matrix over R from example 2.1.2:

    ⎡ 0 0 1 0 1 0 0 ⎤
A = ⎢ 0 1 1 0 0 1 1 ⎥
    ⎣ 1 0 0 0 0 1 1 ⎦

Labeling the column vectors as before, we find the circuits to be

C = { {4}, {6, 7}, {1, 2, 6}, {1, 2, 7}, {2, 3, 5}, {1, 3, 5, 6}, {1, 3, 5, 7} }.

If M is a matroid, we denote the set of circuits of M by C(M) or C.
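Circuits can be generated from an independence oracle by scanning subsets in order of size: a dependent set all of whose one-element deletions are independent is a circuit, and any set properly containing a circuit fails the test. The following Python sketch is my own illustration; encoding the columns as 3-bit integers is a shortcut, and for this particular 0/1 matrix the dependencies over GF(2) agree with those over R:

```python
from itertools import combinations

# Columns of A from examples 2.1.2/2.3.1 as 3-bit integers, top row =
# most significant bit.
cols = {1: 0b001, 2: 0b010, 3: 0b110, 4: 0b000,
        5: 0b100, 6: 0b011, 7: 0b011}

def gf2_rank(vectors):
    """Rank over GF(2), maintaining a basis keyed by pivot bit."""
    basis = {}
    for v in vectors:
        while v:
            p = v.bit_length() - 1
            if p not in basis:
                basis[p] = v
                break
            v ^= basis[p]
    return len(basis)

def is_indep(S):
    return gf2_rank([cols[j] for j in S]) == len(S)

# A set is a circuit iff it is dependent but every one-element deletion
# is independent; minimality then comes for free.
circuits = [set(S) for k in range(1, 8)
            for S in combinations(range(1, 8), k)
            if not is_indep(S) and all(is_indep(set(S) - {x}) for x in S)]
print(circuits)  # the seven circuits listed in example 2.3.1
```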

Proposition 2.3.2. The set of circuits, C, of a matroid M = (E, I) satisfy the following conditions.

(c1) ∅ 6∈ C.

(c2) If C and C′ are members of C and C ⊆ C′ then C = C′.

(c3) If C and C′ are distinct elements of C and x ∈ C ∩ C′, then there is an element C″ of C such that C″ ⊆ (C ∪ C′) r {x}.

Proof. Clearly (c1) holds, and minimality of circuits means that (c2) holds. So let us assume that (C ∪ C′) r {x} does not contain a circuit for some C ≠ C′, both in C, and some x ∈ C ∩ C′. In other words, (C ∪ C′) r {x} is independent. Since (c2) holds, C′ r C cannot be empty, otherwise C′ ⊆ C and hence C′ = C. So choose a y ∈ C′ r C. Since C′ is a circuit, i.e. a minimal dependent set, C′ r {y} is independent. To construct our contradiction, we consider those subsets of C ∪ C′ which are independent and contain C′ r {y}, and choose a maximal one. Call it I. Clearly y ∉ I, as otherwise C′, a dependent set, would be a subset of I. Since C is a circuit there must also be an element from it that is excluded from I. That is, there is a z ∈ C such that z ∉ I, and since y ∈ C′ r C, z ≠ y. So

|I| ≤ |(C ∪ C′) r {y, z}| = |C ∪ C′| − 2 < |(C ∪ C′) r {x}|. (2.3.1)

But then (C ∪ C′) r {x} and I are two independent sets satisfying the hypotheses of (i3), whose conclusion yields a set contradicting the maximality of I. And so (c3) must hold.

This third, less obvious, axiom, (c3), is frequently called the circuit elimination or weak circuit elimination axiom.

Proposition 2.3.3. Let E be a set and let C ⊆ P(E) satisfy (c1)-(c3). If we set I as the collection of subsets of E that contain no member of C, then (E, I) is a matroid, and C is its collection of circuits.

Proof. Note that there are two things to be proved here: that I satisfies (i1)-

(i3), and that C is the collection of minimal dependent sets of this matroid.

But ∅ ∈ I and (i1) holds because (c1) says that ∅ doesn’t contain any element of C. And (i2) holds because if X contains no member of C and Y ⊆ X then

Y contains no member of C.

To prove (i3) we will, in order to again show a contradiction, assume that

U, V ⊆ E are such that |V| < |U|, neither contains any element of C, and yet for all x ∈ U r V, V ∪ {x} ∉ I. Consider those elements of I contained in U ∪ V whose cardinality is strictly greater than that of V, such as U. Choose one such, W, for which |V r W| is minimal. Since (i3) fails, W can't contain V, so V r W is nonempty. Let x ∈ V r W. For each element y ∈ W r V, define Sy = (W ∪ {x}) r {y}. Note that Sy ⊆ U ∪ V and |V r Sy| < |V r W| for all y, so no Sy is in I. In other words, each Sy contains a member of C, Cy.

And since Cy ⊆ (W ∪ {x}) r {y}, we have y ∉ Cy for any y. In addition, x ∈ Cy for all y ∈ W r V, since otherwise Cy ⊆ W ∈ I. Now, if Cy ∩ (W r V) is empty then Cy ⊆ ((V ∩ W) ∪ {x}) r {y} ⊆ V, which is a contradiction, as V ∈ I and so shouldn't contain any member of C. So there is some z ∈ Cy ∩ (W r V), and since z ∉ Cz, Cy ≠ Cz. Also, since x is an element of every such circuit, x ∈ Cy ∩ Cz. Axiom (c3) then implies that there is an element C′ of C such that C′ ⊆ (Cy ∪ Cz) r {x}. But Cy, Cz ⊆ W ∪ {x}, so C′ ⊆ W, which is a contradiction. So we conclude that (i3) holds.

So now we have this ground set E, a collection C of subsets thereof, and a matroid M = (E, I) with the independent sets, I, defined as above. We want to show that C is the collection of minimally dependent subsets (i.e. circuits) of M, C(M). If C ∈ C then, since it is contained in no element of I, it is dependent.

And since any proper subset of C, not being in C by (c2), is independent (i.e. contains no member of C), C is minimally dependent, a circuit. Now assume that C′ ∈ C(M). Then the fact that C′ is dependent, i.e. C′ ∉ I, and the definition of I mean that C′ contains some element of C as a subset. But since C′ r {x} is independent for all x ∈ C′, this element of C can't be a proper subset, so must be C′ itself. Hence C = C(M).

Propositions 2.3.2 and 2.3.3 together mean that we now have circuits satisfying (c1)-(c3) as a way of defining a matroid, in addition to independent sets, (i1)-(i3), and bases, (b1) and (b2).

Graphical matroids

Remember that a cycle in a graph (sometimes called a circuit) is a connected subgraph all of whose vertices have degree two, and that Whitney defined an independent subgraph as one containing no cycles. This leads to the second important group of matroids (following vector matroids as defined in proposition 2.1.1 above) considered in Whitney's paper and explains the terminology we've applied to minimally dependent sets.

Proposition 2.3.4. Let G be a graph with edge set E, and define C as the set of edge sets of cycles of G. C is the collection of circuits of a matroid on E.

Proof. Obviously the empty subgraph is not a cycle, and if one cycle is a subgraph of another then the two are identical cycles, so (c1) and (c2) are satisfied by C. Let C and C′ be distinct cycles of G with an edge, e, in common. Name the endpoints of e u and v. Let P be the path from u to v contained in C r {e} and P′ the path from u to v contained in C′ r {e}. If we traverse P starting from u, then starting at some vertex x (which might be u itself) P and P′ stop being the same path, and at some point before or at vertex v they share a vertex y again. If we conjoin the piece of P from x to y with the piece of P′ from y to x we get a cycle in G contained in (C ∪ C′) r {e}, so (c3) is satisfied by the set of cycles.

The matroid here defined is called the cycle or circuit matroid of G and denoted M(G) or C(G). Note that for a subset of the edge set of a graph to be independent in the cycle matroid, it must, by definition, contain no cycles. Or in other terms, the spanning subgraph induced by the subset must be a forest.

When it won’t cause confusion, which is most of the time, the subsets and their associated subgraphs will be considered interchangeable.
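Independence in a cycle matroid, i.e. being a forest, is exactly what a union-find structure detects: an edge whose endpoints are already connected would close a cycle. The sketch below is illustrative Python; the vertex names a-d and the edge incidences are an assumed reading of figure 2.1, chosen so that the circuits come out as in example 2.3.5:

```python
def forest_independent(edge_subset, edges):
    """True iff the edges induce no cycle, i.e. the subset is independent
    in the cycle matroid. Uses union-find with path halving."""
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for e in edge_subset:
        u, w = edges[e]
        ru, rw = find(u), find(w)
        if ru == rw:          # already connected (or e is a loop): a cycle
            return False
        parent[ru] = rw
    return True

# Hypothetical endpoints for the edges of figure 2.1; the labels are
# assumptions, not data from the text.
edges = {'e1': ('a', 'c'), 'e2': ('c', 'b'), 'e3': ('b', 'd'),
         'e4': ('a', 'a'), 'e5': ('d', 'c'), 'e6': ('a', 'b'),
         'e7': ('a', 'b')}
print(forest_independent({'e1', 'e2', 'e6'}, edges))  # False: a 3-cycle
print(forest_independent({'e1', 'e3', 'e5'}, edges))  # True: a spanning tree
```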

Example 2.3.5. Let G be the graph shown in figure 2.1 and let M = M(G) be the cycle matroid of G. Then the ground set for the matroid is E = {e1, e2, e3, e4, e5, e6, e7} and

C(M) = { {e4}, {e6, e7}, {e1, e2, e6}, {e1, e2, e7}, {e2, e3, e5}, {e1, e3, e5, e6}, {e1, e3, e5, e7} }.

Notice that, under the bijection ei ↔ i, these are the same circuits as C(M[A]) from example 2.3.1, and so, as the discussion following proposition 2.3.3 indicates, M(G) = M[A]. Further, notice that the bases, which we can read off from example

[Figure 2.1 appears here: a four-vertex graph with edges e1-e7, including the loop e4 and the parallel edges e6 and e7.]

Figure 2.1: A simple graph. Notice that loops and parallel edges are allowed.

2.2.1, are

{ {e1, e2, e3}, {e1, e2, e5}, {e1, e3, e5}, {e1, e3, e6}, {e1, e3, e7}, {e1, e5, e6},
{e1, e5, e7}, {e2, e3, e6}, {e2, e3, e7}, {e2, e5, e6}, {e2, e5, e7}, {e3, e5, e6}, {e3, e5, e7} },

which are the spanning trees of G.

A matroid that is the cycle matroid of a graph, or is isomorphic to one, is called graphic, and one that is (or is isomorphic to) a vector matroid for a matrix A over the field F is called representable over F, and A is called an F-representation of the matroid.

Theorem 2.3.6. Every graphical matroid is representable over every field.

Proof. Let G be a graph and F a field. Let {e1, . . . , en} be an enumeration of the edge set of G and {v1, . . . , vm} an enumeration of the vertex set. Choose an arbitrary orientation for each ei so that each edge has a head vertex and a tail vertex (perhaps the same vertex in the case of loops). Let A = {aij} be the matrix over F with

aij = 1 if vertex vi is the head of non-loop edge ej,
aij = −1 if vertex vi is the tail of non-loop edge ej, and (2.3.2)
aij = 0 otherwise,

with columns labeled by their respective edges. If we can show that the circuits of M(G) are dependent in M[A] and that all circuits of M[A] are dependent in M(G), then by the following lemma 2.3.7, C(M(G)) = C(M[A]), and so M(G) = M[A].

Let C be a cycle of G. If C is a loop, then the corresponding column in A is a zero vector, so clearly C is a circuit in M[A]. In the case that C is not a loop, there is a sequence of distinct edges, ei1, ei2, . . . , eik, constituting the cycle, with some orientation, perhaps not the same as above. In the cycle, each vertex is the head of one edge and the tail of another, and each vertex is visited exactly once. So define the matrix B as having column vectors bi, with bi either the same as or the negative of the associated column vector of A: negative when the correlated ei is in C and its original orientation is opposite its orientation in the cycle. Then bi1 + bi2 + · · · + bik = 0, since tail ends cancel out head ends. But then, for some choice of βi1, . . . , βik = ±1, we get βi1 ai1 + βi2 ai2 + · · · + βik aik = 0, where ai is the ith column vector of A. Hence C is dependent in M[A].

Now assume that D = {ej1, . . . , ejℓ} is a circuit of M[A]. If ℓ = 1, i.e.

D = {ej1}, then ej1 is by definition a loop in M[A], so its associated column vector in A, aj1, is the zero vector. By the definition of A this means ej1 is a loop of G, and so D is a circuit in M(G).

Otherwise, there are some non-zero elements εj1, . . . , εjℓ ∈ F such that

εj1 aj1 + εj2 aj2 + · · · + εjℓ ajℓ = 0. (2.3.3)

So if any row of the matrix [aj1 aj2 · · · ajℓ] has a non-zero entry, then by the sum (2.3.3) it must have at least two non-zero entries. The rows of this submatrix of A correspond to vertices of G, hence this is saying that in the subgraph G′ of G induced by {ej1, . . . , ejℓ} every vertex that isn't isolated has degree at least two. This means that G′ must contain a cycle; thus {ej1, . . . , ejℓ} contains a circuit of M(G), and the lemma applies.
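The key identity in the proof, that the columns of A along a cycle admit a ±1 combination summing to zero, can be checked concretely. In this Python sketch the orientations and vertex labels are assumptions made for illustration, using a triangle like {e1, e2, e6} of figure 2.1:

```python
from itertools import product

vertices = ['a', 'b', 'c', 'd']
# Assumed tail -> head orientations for three edges forming a triangle.
arcs = {'e1': ('a', 'c'), 'e2': ('c', 'b'), 'e6': ('a', 'b')}

def column(e):
    """Signed incidence column of (2.3.2): +1 at the head, -1 at the tail."""
    tail, head = arcs[e]
    return [(1 if v == head else 0) - (1 if v == tail else 0)
            for v in vertices]

# Search for signs beta_i = ±1 with beta_1*col(e1) + beta_2*col(e2)
# + beta_3*col(e6) = 0, as in the proof of theorem 2.3.6.
cycle = ['e1', 'e2', 'e6']
for signs in product([1, -1], repeat=3):
    combo = [sum(s * column(e)[i] for s, e in zip(signs, cycle))
             for i in range(len(vertices))]
    if all(x == 0 for x in combo):
        print(signs)  # → (1, 1, -1)
        break
```

The sign pattern records which edges are traversed against their chosen orientation when walking around the cycle.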

Lemma 2.3.7. If U and V are collections of subsets of a finite set E such that every element of U contains an element of V and every element of V contains an element of U, then the minimal members of U are the same as those of V.

Proof. Assume that the conclusion is not true; that is, there is a minimal element of U, call it U, that is not a minimal element of V. So either U is not an element of V or it is not minimal therein. U not being minimal in V implies that there is an element V ∈ V with V ⊂ U and V ≠ U. But every element of V contains an element of U, so there is a U′ ∈ U with U′ ⊆ V ⊊ U, and hence U is not minimal. And if U ∉ V then it at least contains an element V ∈ V, which again contains an element U′ ∈ U, so that U′ ⊆ V ⊊ U and again U is not minimal.

So if U is a minimal element of U then it is a minimal element of V. A symmetrical argument shows that if V is a minimal element of V then it must be a minimal element of U, and the lemma is proved.

Proposition 2.3.8. If B is a basis of a matroid M and x ∈ E r B, then B ∪ {x} contains a unique circuit C(x, B) containing x, which we call the fundamental circuit of x with respect to B.

Proof. Since B ∪ {x} is necessarily dependent, it must contain a circuit, and any such circuit must contain x. Let C1 and C2 be two such. If C1 and C2 are distinct then (c3) tells us that (C1 ∪ C2) r {x} ⊆ B contains another circuit. But this is impossible, as B is independent, so there is a unique such circuit.

2.4 The rank function

When speaking of vector spaces and their subspaces, the dimension of these spaces comes up quite naturally. The column rank of a matrix, for example, is the dimension of the span of the column vectors of the matrix, that is to say the maximal number of independent vectors found in the matrix.

To apply this notion to matroids let us consider a rather natural construction for matroids. If M is a matroid with ground set E and independent sets I and

X ⊆ E, then if we let I|X = {I ⊆ X | I ∈ I} it is easy to see that (X, I|X) is itself a matroid, called the restriction of M to X, denoted M|X. Since

(X, I|X) is a matroid, all of its bases are the same size. We define the rank of X, denoted r(X) or, when necessary, rM(X), to be the cardinality of a base of X in (X, I|X). For simplicity's sake, we will write r(E(M)), the cardinality of a base of M, as r(M).

Two obvious properties of the function r : P(E) → Z≥0 are:

(r1) if X ⊆ E, then 0 ≤ r(X) ≤ |X|, and

(r2) if X ⊆ Y ⊆ E, then r(X) ≤ r(Y ).

There is, in addition, a third property of the rank function, analogous to a formula from the study of vector spaces. Namely, if U and W are subspaces of a finite-dimensional vector space, then

dim(U + W ) + dim(U ∩ W ) = dim U + dim W. (2.4.1)

The following property is sometimes called the submodular or semimodular inequality.

Proposition 2.4.1. (r3) If X and Y are subsets of the ground set E of a matroid M with rank function r, then

r(X ∪ Y ) + r(X ∩ Y ) ≤ r(X) + r(Y ). (2.4.2)

Proof. Throughout the proof we will denote by BZ a basis for Z ⊆ E; i.e., BZ is a maximal independent set of the matroid M|Z and an independent set of M|Z′ for any superset Z′ ⊇ Z. In particular, we can choose bases such that

BX∩Y ⊆ BX∪Y . Note that BX∪Y ∩ X is independent in M|X and BX∪Y ∩ Y is

19 similarly independent in Y . So r(X) ≥ |BX∪Y ∩ X| and r(Y ) ≥ |BX∪Y ∩ Y |, therefore

r(X) + r(Y ) ≥ |BX∪Y ∩ X| + |BX∪Y ∩ Y |

= |(BX∪Y ∩ X) ∪ (BX∪Y ∩ Y )| + |(BX∪Y ∩ X) ∩ (BX∪Y ∩ Y )|

= |BX∪Y ∩ (X ∪ Y )| + |BX∪Y ∩ (X ∩ Y )|

(2.4.3)

However BX∪Y ∩ (X ∪ Y ) = BX∪Y and BX∪Y ∩ (X ∩ Y ) = BX∩Y so

r(X) + r(Y ) ≥ |BX∪Y | + |BX∩Y | = r(X ∪ Y ) + r(X ∩ Y ). (2.4.4)

Hence, (r3) holds for matroids.
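Given an independence oracle, the rank function can be computed by brute force, and (r1)-(r3) can then be checked exhaustively on a small example. The following Python sketch is my own illustration; the uniform matroid U_{2,4} is my choice of test case, not an example from the text:

```python
from itertools import combinations

def powerset(E):
    E = list(E)
    return [frozenset(S) for k in range(len(E) + 1)
            for S in combinations(E, k)]

def rank_function(is_indep):
    """r(X) = size of a largest independent subset of X (brute force)."""
    def r(X):
        return max(len(S) for S in powerset(X) if is_indep(S))
    return r

# The uniform matroid U_{2,4} on E = {1, 2, 3, 4}: a subset is
# independent iff it has at most two elements.
E = frozenset({1, 2, 3, 4})
r = rank_function(lambda S: len(S) <= 2)

P = powerset(E)
assert all(0 <= r(X) <= len(X) for X in P)                            # (r1)
assert all(r(X) <= r(Y) for X in P for Y in P if X <= Y)              # (r2)
assert all(r(X | Y) + r(X & Y) <= r(X) + r(Y) for X in P for Y in P)  # (r3)
print("(r1)-(r3) hold for U_{2,4}")
```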

This proposition tells us that matroids satisfy the conditions (r1)-(r3), so we just need to show that a set and rank function that satisfy these axioms is a matroid.

Theorem 2.4.2. Let E be a finite set. If r : P(E) → Z≥0 satisfies (r1)-(r3) and I is the collection of subsets of E whose rank is equal to their cardinality, i.e.

I = {X ⊆ E | r(X) = |X|}, (2.4.5)

then (E, I) is a matroid with r as its rank function.

Proof. Because of (r1), 0 ≤ r(∅) ≤ |∅| = 0, so ∅ ∈ I and (i1) is satisfied. To show (i2) holds, let Y ⊆ X ∈ I, so r(X) = |X|. By (r3)

r(Y ∪ (X r Y )) + r(Y ∩ (X r Y )) ≤ r(Y ) + r(X r Y ). (2.4.6) 20 And, noting that Y ∩ (X r Y ) = ∅ and Y ∪ (X r Y ) = X, we get

r(X) + r(∅) = |X| ≤ r(Y ) + r(X r Y ). (2.4.7)

Further, (r1) tells us that r(Y ) ≤ |Y | and r(X r Y ) ≤ |X r Y |, so

|X| ≤ r(Y ) + r(X r Y ) ≤ |Y | + |X r Y | = |X|. (2.4.8)

The first and last terms are equal, so equality holds throughout and r(Y) + r(X r Y) = |Y| + |X r Y|. Finally, (r1) tells us that r(Y) = |Y| (since otherwise we would have r(X r Y) > |X r Y|, contradicting (r1)), so (i2) holds.

To see that (i3) is satisfied, we assume the contrary: there are sets U, V ∈ I with |V| < |U| and such that for all x ∈ U r V, V ∪ {x} ∉ I. So r(V) = |V|, but r(V ∪ {x}) < |V| + 1; more precisely, |V| = r(V) ≤ r(V ∪ {x}) ≤ |V| for all x ∈ U r V. The lemma that follows, lemma 2.4.3, then tells us that r(V ∪ U) = r(V) = |V|. Also, r(V ∪ U) ≥ r(U) = |U|. Putting these two together, we find that |V| = r(V ∪ U) ≥ |U|, contradicting the assumption that |V| < |U|. So (i3) must hold and (E, I) is a matroid.

We now must show that the function r is the rank function of the matroid, i.e. r = rM. Consider two cases: either X ⊆ E is independent or not. If X ∈ I then r(X) = |X| by definition of I, and rM(X) = |X| since X is independent in M and hence a basis of M|X. So suppose that X ∉ I and let B be a basis for M|X. This means that rM(X) = |B|. Also, B ∪ {x} ∉ I for any x ∈ X r B, implying that |B| = r(B) ≤ r(B ∪ {x}) < |B ∪ {x}|, so that r(B ∪ {x}) = r(B) for all x ∈ X r B. Lemma 2.4.3 then tells us that r(X) = r(B ∪ X) = r(B) = |B| = rM(X), and r = rM.

Lemma 2.4.3. If E is a set, r : P(E) → Z≥0 is a function satisfying (r2) and (r3), and X, Y ⊆ E are such that r(X ∪ {y}) = r(X) for all y ∈ Y, then r(X ∪ Y) = r(X).

Proof. The argument is by induction on the cardinality of Y. If Y = {y} then the conclusion is immediate. Let Y = {y1, . . . , yn, yn+1} and assume that r(X ∪ {y1, . . . , yn}) = r(X). Then, using the hypothesis that r(X ∪ {y}) = r(X) for y ∈ Y together with the inductive hypothesis in the first step, (r3) in the second, and (r2) in the last, we have

r(X) + r(X) = r(X ∪ {y1, . . . , yn}) + r(X ∪ {yn+1})
≥ r((X ∪ {y1, . . . , yn}) ∪ (X ∪ {yn+1})) + r((X ∪ {y1, . . . , yn}) ∩ (X ∪ {yn+1})) (2.4.9)
= r(X ∪ Y) + r(X)
≥ r(X) + r(X).

Note that the first and last lines are equal, so equality holds throughout and r(X ∪ {y1, . . . , yn+1}) = r(X). So, by induction, the lemma is proven.

The meaning of rank.

The rank of a subset of the ground set of a vector matroid is clearly the dimension of the subspace generated by the associated column vectors, but the situation is somewhat less clear for our other main class of matroids, graphical matroids. Let G be a graph with edge set E(G) and vertex set V(G), and let X be a subset of E(G) (equivalently, a subset of the ground set of M(G)). Define G[X] to be the subgraph of G induced by X, having X as edge set and all ends of edges in X as vertex set. We also define c(G) to be the number of connected components of G (and similarly for c(G[X])).

Proposition 2.4.4. If G is a graph and X ⊆ E(G), then rM(G)(X) = |V(G[X])| − c(G[X]).

Proof. Consider first the case when G is connected. A basis for M(G) must be independent, and hence must be the set of edges of a spanning forest of

G. Also, it must be maximal, so it is the set of edges of a spanning tree of G, T . A well-known fact about trees is that the cardinality of the set of vertices is one more than the cardinality of the set of edges in the tree. So r(M(G)) = |E(T )| = |V (T )| − 1 = |V (G)| − 1.

If G has more than one connected component then we can calculate its rank by adding the ranks of each component, so r(M(G)) = |V (G)| − c(G). And similarly rM(G)(X) = |V (G[X])| − c(G[X]).
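The rank formula of proposition 2.4.4 is easy to compute with. A minimal sketch, assuming a union-find encoding of components (the graph, labels, and helper function are my own illustration, not from the text):

```python
# Rank in the cycle matroid M(G), computed as r(X) = |V(G[X])| - c(G[X])
# per proposition 2.4.4.  The example graph is an illustrative choice.

def cycle_matroid_rank(edge_subset):
    """edge_subset: iterable of (u, v) pairs.  Returns the number of
    vertices of the induced subgraph minus its number of components,
    i.e. the size of a spanning forest of G[X]."""
    edges = list(edge_subset)
    verts = {v for e in edges for v in e}
    parent = {v: v for v in verts}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    components = len(verts)
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1                 # each merge joins two components
    return len(verts) - components

triangle = [(0, 1), (1, 2), (0, 2)]              # a cycle: one dependent edge
print(cycle_matroid_rank(triangle))              # 2
print(cycle_matroid_rank(triangle + [(2, 3)]))   # 3
```

As in the proof above, the value returned is exactly the number of edges in a spanning forest of the induced subgraph.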

The connection between rank and the other characteristics of more general matroids is mostly straightforward. In the following propositions, let M be a matroid with ground set E, rank function r, and X ⊆ E.

Proposition 2.4.5. X is independent if and only if |X| = r(X).

Proof. If X is independent then it is a base of itself and so r(X) = |X|. On the other hand, if r(X) = |X| and B ⊆ X is a base of X, then |B| = r(X) = |X| and we must have B = X, that is to say, X is independent.

Proposition 2.4.6. X is a basis of M if and only if |X| = r(X) = r(M).

Proof. X is a basis if and only if it is a maximal independent set. So the previous proposition, proposition 2.4.5, tells us that |X| = r(X) if X is a basis, hence independent. And, by the definition of rank, r(X) = r(M) if X is a basis. Conversely, if |X| = r(X) = r(M) then proposition 2.4.5 tells us that X is independent, and if it were not maximal then r(X) = r(M) would not be possible.

Proposition 2.4.7. X is a circuit if and only if X ≠ ∅ and for all x ∈ X, r(X r {x}) = |X| − 1 = r(X).

Proof. X is a circuit if and only if it is a minimal dependent set, i.e. a dependent set all of whose proper subsets are independent. If X is a circuit then clearly X ≠ ∅ and for all x ∈ X, X r {x} is independent; in fact, each is a basis for X, being maximally independent in X. So by proposition 2.4.6, r(X r {x}) = |X r {x}| = |X| − 1 = r(X). And if X is nonempty and for all x ∈ X, r(X r {x}) = |X| − 1 = r(X), then all of the proper subsets of X are independent: those of the form X r {x} by proposition 2.4.5, and all others by (i2). Further, since r(X) ≠ |X|, we know X is dependent. Hence X is a circuit.
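Propositions 2.4.5–2.4.7 can be checked by brute force on a small matroid. A sketch using the uniform matroid U_{2,4} (my own choice of example, with rank function r(X) = min(|X|, 2)):

```python
# Independent sets, bases, and circuits of U_{2,4} read off from the
# rank function alone, following propositions 2.4.5, 2.4.6, and 2.4.7.

from itertools import combinations

E = frozenset(range(4))

def r(X):
    return min(len(X), 2)       # rank function of the uniform matroid U_{2,4}

def subsets(S):
    S = sorted(S)
    return [frozenset(c) for k in range(len(S) + 1) for c in combinations(S, k)]

independent = {X for X in subsets(E) if r(X) == len(X)}                    # 2.4.5
bases       = {X for X in subsets(E) if len(X) == r(X) == r(E)}            # 2.4.6
circuits    = {X for X in subsets(E) if X and
               all(r(X - {x}) == len(X) - 1 == r(X) for x in X)}           # 2.4.7

print(len(bases))                           # 6: the two-element subsets
print(sorted(len(C) for C in circuits))     # [3, 3, 3, 3]
```

The three rank conditions recover exactly the familiar description of U_{2,4}: independent sets of size at most 2, bases of size 2, and circuits of size 3.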

2.5 Closure, hyperplanes, and spanners

A vector, ~v, in a vector space, V , over a field, F, is in the span of a set of vectors {~v1, ~v2, . . . , ~vk} if ~v can be written in terms of the ~vi's, i.e. ~v = α1~v1 + ··· + αk~vk for αi ∈ F. Or, equivalently, if ⟨~v1, ~v2, . . . , ~vk⟩ and ⟨~v1, ~v2, . . . , ~vk, ~v⟩ have the same dimension.

Now that we have a concept of rank for a general matroid M, we can extend this idea. Let r be the rank function of M and let E be its ground set; then we define the closure operator cl : P(E) → P(E) by setting

cl(X) = {x ∈ E | r(X ∪ {x}) = r(X)} (2.5.1)

for X ⊆ E. cl(X) is called the closure or span of X, and x is said to be in the span of X if x ∈ cl(X). In addition, we say that X is spanning if cl(X) = E, or that it spans Y if Y ⊆ cl(X).
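Definition (2.5.1) is directly computable from any rank function. A small sketch, again on U_{2,4} (the example matroid is my own choice):

```python
# The closure operator cl(X) = {x in E | r(X + x) = r(X)} of (2.5.1),
# evaluated on the uniform matroid U_{2,4}.

E = frozenset(range(4))

def r(X):
    return min(len(X), 2)       # rank function of U_{2,4}

def cl(X):
    X = frozenset(X)
    return frozenset(x for x in E if r(X | {x}) == r(X))

print(sorted(cl({0})))        # [0]: adding any other element raises the rank
print(sorted(cl({0, 1})))     # [0, 1, 2, 3]: any two elements span
print(r(cl({0, 1})) == r({0, 1}))   # True: proposition 2.5.1 in this instance
```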

Proposition 2.5.1. If X is a subset of the ground set of a matroid M, then r(X) = r(cl(X)).

Proof. Let B be a basis for X, that is to say a subset of X such that r(B) =

|B| = r(X). For any x ∈ cl(X) r X

r(B ∪ {x}) ≥ r(B) = |B| = r(X) = r(X ∪ {x}) ≥ r(B ∪ {x}). (2.5.2)

So B ∪ {x} is a dependent set, which means that B is a basis for cl(X) and r(cl(X)) = |B| = r(X).

By the definition, it is clear that

(cl1) for X ⊆ E, X ⊆ cl(X).

Proposition 2.5.2. The closure operator of a matroid M with ground set E satisfies, in addition to (cl1), the following properties:

(cl2) for X ⊆ Y ⊆ E, cl(X) ⊆ cl(Y ),

(cl3) for X ⊆ E, cl(cl(X)) = cl(X) (i.e. cl is idempotent), and

(cl4) for X ⊆ E and x ∈ E, if y ∈ cl(X ∪ {x}) r cl(X), then x ∈ cl(X ∪ {y}).

Proof. To see that the closure operator as defined in (2.5.1) satisfies (cl2) we consider two sets X ⊆ Y ⊆ E and suppose that x ∈ cl(X). Note that if x ∈ X then x ∈ Y and hence x ∈ cl(Y ), as we wish to show for the more general case.

So we consider, in particular, x ∈ cl(X) r X. By definition, this means that r(X ∪ {x}) = r(X). So if BX is a basis for X, then it’s a basis for X ∪ {x} as well. Extending that basis, we get a basis, BY ∪{x}, of Y ∪ {x} that contains

BX , but not x. Since BY ∪{x} does not contain x, it is a basis for Y as well, so r(Y ∪ {x}) = |BY ∪{x}| = r(Y ), and x ∈ cl(Y ) as desired. Hence cl(X) ⊆ cl(Y ). By (cl1) cl(X) ⊆ cl(cl(X)), so to prove the closure operator satisfies (cl3) all we need is the reverse inclusion. Let x ∈ cl(cl(X)), that is to say r(cl(X)∪{x}) = r(cl(X)). Now, (r2) and (cl1) tell us that

r(cl(X)) = r(cl(X) ∪ {x}) ≥ r(X ∪ {x}) ≥ r(X), (2.5.3)

but proposition 2.5.1 tells us that r(cl(X)) = r(X), so equality holds throughout that inequality. This means that r(X ∪ {x}) = r(X) and x ∈ cl(X). So cl(cl(X)) = cl(X).

Now suppose X ⊆ E and x ∈ E such that there is an element y ∈ cl(X ∪

{x}) r cl(X). So r(X ∪ {x, y}) = r(X ∪ {x}) but r(X ∪ {y}) ≠ r(X). We do know, from (r3), that

r(X ∪ {y}) ≤ r(X) + r({y}) − r(X ∩ {y}) ≤ r(X) + 1 − r(∅) = r(X) + 1. (2.5.4)

So r(X ∪ {y}) = r(X) + 1. Hence

r(X) + 1 = r(X ∪ {y}) ≤ r(X ∪ {x, y}) = r(X ∪ {x}) ≤ r(X) + 1, (2.5.5) and thus r(X ∪ {x, y}) = r(X ∪ {y}), i.e. x ∈ cl(X ∪ {y}).

Proposition 2.5.3. If E is a set, cl : P(E) → P(E) satisfies (cl1)-(cl4), and a collection of subsets of E is defined as

I = {X ⊆ E | ∀x ∈ X, x ∉ cl(X r {x})}, (2.5.6)

then (E, I) is a matroid with cl as its closure operator.

To prove this we’ll need the following lemma (and its contrapositive).

Lemma 2.5.4. If I is defined as in (2.5.6) and if X ∈ I but X ∪ {x} is not, then x ∈ cl(X).

Proof. By definition, since X ∪ {x} ∉ I there is an element y ∈ X ∪ {x} such that y ∈ cl((X ∪ {x}) r {y}). If y = x then we're done. If y ≠ x then (X ∪ {x}) r {y} = (X r {y}) ∪ {x} and hence y ∈ cl((X r {y}) ∪ {x}) r cl(X r {y}), where y ∉ cl(X r {y}) because y ∈ X ∈ I. (cl4) then tells us that x ∈ cl((X r {y}) ∪ {y}) = cl(X).

Proof of proposition 2.5.3. The empty set is trivially in I so (i1) is satisfied.

Suppose that X ∈ I and Y ⊆ X. If y ∈ Y , then, since y ∈ X, y ∉ cl(X r {y}). But (cl2) tells us that cl(Y r {y}) ⊆ cl(X r {y}), so y ∉ cl(Y r {y}). Hence Y ∈ I and (i2) is satisfied.

Consider U, V ∈ I such that |V | < |U|. Suppose that (i3) fails for this pair.

That is to say, for all x ∈ U r V , V ∪ {x} ∉ I. Suppose, further, that we have chosen this pair such that |U ∩ V | is maximal. Let z ∈ U r V . Consider the two sets V and cl(U r {z}). If V ⊆ cl(U r {z}) then (cl2) and (cl3) tell us that cl(V ) ⊆ cl(cl(U r {z})) = cl(U r {z}). Now, z ∈ U and U ∈ I, so z ∉ cl(U r {z}), which also means that z ∉ cl(V ). The contrapositive of lemma 2.5.4 then tells us that V ∪ {z} ∈ I, contrary to our assumptions. If V ⊄ cl(U r {z}) then there is an element w ∈ V that isn't an element of cl(U r {z}). Clearly, w ∈ V r U.

Since I satisfies (i2), and since U r {z} ⊆ U ∈ I, U r {z} ∈ I. Moreover, since w ∉ cl(U r {z}), lemma 2.5.4 again implies that (U r {z}) ∪ {w} ∈ I. Note that |((U r {z}) ∪ {w}) ∩ V | > |U ∩ V |, so by maximality of the pair (U, V ), (i3) must hold for the new pair of sets in I, ((U r {z}) ∪ {w}, V ). That is to say, there is an x ∈ ((U r {z}) ∪ {w}) r V such that V ∪ {x} ∈ I. But ((U r {z}) ∪ {w}) r V ⊆ U r V , so (U, V ) satisfy (i3), once again contradicting our assumption. Thus, (E, I) = M is a matroid.

We must now verify that the original closure operator, cl, coincides with the one that arises from the matroid, clM . Let X ⊆ E, and if one such exists, take an element x ∈ clM (X) r X. So r(X ∪ {x}) = r(X) and if B is a basis for X then it is one for X ∪ {x} as well. But while B ∈ I, B ∪ {x} is not, which means, by lemma 2.5.4, that x ∈ cl(X). Hence clM (X) ⊆ cl(X).

Now suppose that x ∈ cl(X) r X and that B is a basis for X. For any y ∈ X r B, B ∪ {y} ∉ I, so, by lemma 2.5.4 again, y ∈ cl(B), or in other words X ⊆ cl(B). So by (cl2) and (cl3), cl(X) ⊆ cl(B) and, hence, x ∈ cl(B). Since x ∈ cl(B), the definition of I tells us that B ∪ {x} ∉ I, which implies that B is a basis for X ∪ {x}. Hence r(X ∪ {x}) = |B| = r(X) and x ∈ clM (X). Thus, cl(X) = clM (X).

These last two propositions show that it is possible to define a matroid using the axioms for a closure operator, (cl1)-(cl4), in addition to those for independent sets, bases, circuits, or the rank function. And as with all the axiom systems, this one comes with its own terminology.

A subset, X, of the ground set of a matroid is called closed or a flat if

X = cl(X). A flat of rank one less than the rank of the matroid is called a hyperplane, and a set whose closure is the whole ground set is called a spanning set. In addition, we say that X spans Y if Y ⊆ cl(X).

Lemma 2.5.5. If U and V are flats of a matroid M with ground set E, with

V ⊆ U and r(V ) = r(U) − 1, then there exists a hyperplane H such that

V = U ∩ H.

Proof. Let B be a basis for V (a.k.a. a maximal independent subset of V ) and choose an x ∈ U r V = cl(U) r cl(V ). Then by lemma 2.5.4, B ∪ {x} is independent as well, and since |B ∪ {x}| = r(V ) + 1 = r(U), B ∪ {x} is a basis for U. Extend B∪{x} to a basis for M, A ⊇ B∪{x} and define H = cl(Ar{x}).

Note that H is a flat of rank r(H) = |A r {x}| = |A| − 1 = r(M) − 1, so it is a hyperplane. Also, if y ∈ V then y ∈ cl(B) since lemma 2.5.4 says that otherwise

B∪{y} would be an independent subset of V . So V ⊆ cl(B) ⊆ cl(Ar{x}) = H,

or, more importantly, V ⊆ U ∩ H, so r(V ) ≤ r(U ∩ H). Finally, (r3) and the fact that A ⊆ U ∪ H (so that r(U ∪ H) = r(M) = |A|) tell us that

r(U ∩H) ≤ r(U)+r(H)−r(U ∪H) = r(U)+(r(M)−1)−|A| = r(U)−1. (2.5.7)

Since r(V ) = r(U) − 1, r(U ∩ H) = r(V ) and hence U ∩ H ⊆ cl(V ) = V . Together with V ⊆ U ∩ H this gives V = U ∩ H.

Theorem 2.5.6. If X is a flat of M with rank r(X) = r(M) − k < r(M) then there are distinct hyperplanes H1, . . . , Hk such that

X = H1 ∩ H2 ∩ ··· ∩ Hk. (2.5.8)

Proof. When k = 1, X is the single “distinct” hyperplane needed. Using induction on k, let X have rank r(M) − k and let A = {a1, a2, . . . , ar(M)−k} be a basis for X. Theorem 2.1.3 allows us to extend A to a basis B = {a1, a2, . . . , ar(M)−k, . . . , ar(M)} of M. Let X′ = cl({a1, a2, . . . , ar(M)−k+1}). Since the rank of X′ is r(M) − k + 1 = r(M) − (k − 1), the inductive hypothesis allows us to find k − 1 distinct hyperplanes, Hi, such that X′ = H1 ∩ ··· ∩ Hk−1. And by lemma 2.5.5 above, there exists a hyperplane H such that X = X′ ∩ H. Clearly H ≠ Hi for any 1 ≤ i ≤ k − 1, as otherwise X = X′ ∩ H = X′.
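Theorem 2.5.6 can be confirmed exhaustively on a small example. A sketch on U_{2,4} (my choice of matroid), where the hyperplanes turn out to be the four singletons and the only flat of rank r(M) − 2 is the empty set:

```python
# Flats and hyperplanes of U_{2,4}; every proper flat is an intersection
# of distinct hyperplanes, as in theorem 2.5.6.

from itertools import combinations

E = frozenset(range(4))

def r(X):
    return min(len(X), 2)       # rank function of U_{2,4}

def cl(X):
    X = frozenset(X)
    return frozenset(x for x in E if r(X | {x}) == r(X))

subsets = [frozenset(c) for k in range(len(E) + 1)
           for c in combinations(sorted(E), k)]
flats = {X for X in subsets if cl(X) == X}
hyperplanes = {H for H in flats if r(H) == r(E) - 1}

print(sorted(map(sorted, hyperplanes)))     # [[0], [1], [2], [3]]
# the rank-0 flat (the empty set) as an intersection of two hyperplanes:
print(frozenset({0}) & frozenset({1}))      # frozenset()
```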

Proposition 2.5.7. A subset of the ground set of a matroid is a spanning set if and only if it has full rank.

Proof. Let X be a spanning set of M, i.e. cl(X) = E. Proposition 2.5.1 tells us that r(X) = r(cl(X)) = r(E) = r(M). Conversely, if r(X) = r(M) then (r2) tells us that for all x ∈ E, r(X ∪ {x}) = r(X), i.e. x ∈ cl(X). So E ⊆ cl(X) and X is spanning.

Proposition 2.5.8. The following are equivalent:

(1) X is a basis of M,

(2) X is both spanning and independent, and

(3) X is a minimal spanning set.

Proof. If X is a basis of M then, by proposition 2.4.6, r(X) = r(M), so X is spanning. Choose x ∈ X and consider the set X′ = X r {x}. X′, of course, is in I. Proposition 2.4.5 tells us that r(X′) = r(X) − 1 < r(M), implying that X′ is not spanning. Hence X is a minimal spanning set and (1) =⇒ (3).

Assume that X is a minimal spanning set. Then, since it is spanning, r(X) = r(M). Let x ∈ X. Since X is minimal, X′ = X r {x} is not spanning, in other words cl(X′) ⊊ cl(X) = E. If x ∈ cl(X′) then r(M) = r(X) = r(X′ ∪ {x}) = r(X′), and X′ would be spanning. So x ∉ cl(X r {x}) for any x in X, meaning that X is independent and (3) =⇒ (2).

If X is spanning and independent, then proposition 2.4.5 tells us that r(X) =

|X| and proposition 2.5.7 above tells us that r(X) = r(M). Putting those together with proposition 2.4.6 tells us that X is a basis. So (2) =⇒ (1).

Having seen relationships between the closure operator and the notions of independence and bases, we now turn to circuits.

Proposition 2.5.9. If M is a matroid with ground set E and X ⊆ E, then X is a circuit if and only if X is a minimal non-empty set such that x ∈ cl(X r {x}) for all x ∈ X. Also, for general subsets X,

cl(X) = X ∪ {x ∈ E | M has a circuit C such that x ∈ C ⊆ X ∪ {x}}. (2.5.9)

Proof. If X is a circuit of M and x ∈ X then, since X is dependent and X r {x} is independent, lemma 2.5.4 tells us that x ∈ cl(X r {x}). And X is, being a circuit, a minimal non-empty such set. Conversely, assume that X is a minimal non-empty set such that x ∈ cl(X r {x}) for every x ∈ X. This means that X is dependent (for an independent X we would have r(X r {x}) < r(X), i.e. x ∉ cl(X r {x})). And, since it is minimally so, X r {x} is independent. So X is a minimal dependent set, i.e. a circuit.

Suppose that x ∈ cl(X) r X. Then r(X ∪ {x}) = r(X). So if B is a basis for X, then B ∪ {x} is dependent. By proposition 2.3.8, there is a circuit C such that x ∈ C ⊆ B ∪ {x} ⊆ X ∪ {x}. So the closure is contained in the union above.

Conversely, if x ∈ E r X and there is a circuit C such that x ∈ C ⊆ X ∪ {x}, then, by the characterization of circuits that started this proposition and (cl2), x ∈ cl(C r {x}) ⊆ cl(X) and the equality holds.

2.6 Bond matroids and more general dual matroids

The dual matroid is a concept introduced by Whitney [Whi87] to extend two ideas: orthogonal vector spaces and planar duals of planar graphs.

We will need the following lemma to show that the dual of a matroid is a matroid as well.

Lemma 2.6.1. If M is a matroid and B is its set of bases, then

(b2*) If B1,B2 ∈ B and x ∈ B2 r B1, then there is an element y ∈ B1 r B2

such that (B1 r {y}) ∪ {x} ∈ B.

Note that this is genuinely different from (b2), not simply a relabeling.

Proof. Proposition 2.3.8 indicates that there is a unique circuit C(x, B1) contained in B1 ∪ {x}. This circuit is dependent and B2 is independent, so C(x, B1) r B2 must contain at least one element. Let y be such an element. Clearly y ∈ B1, since otherwise y = x ∈ B2, which contradicts our assumptions. So y ∈ B1 r B2. Further, since y ∈ C(x, B1), we have C(x, B1) ⊄ (B1 r {y}) ∪ {x}. As C(x, B1) is the only circuit contained in B1 ∪ {x}, the set (B1 r {y}) ∪ {x} contains no circuit and is therefore independent; since it has |B1| elements, it is a basis. Hence (B1 r {y}) ∪ {x} ∈ B.

Theorem 2.6.2. Let M be a matroid with ground set E and let B be its set of bases. If we define B∗ as {E r B | B ∈ B} then B∗ is the set of bases of a matroid on E.

Proof. Since B is nonempty, so is B∗, meaning that B∗ satisfies (b1). Take two sets B1∗, B2∗ ∈ B∗, and let Bi = E r Bi∗ for i = 1 or 2. This means that B1, B2 ∈ B and

B1∗ r B2∗ = B1∗ ∩ (E r B2∗) = (E r B1) ∩ B2 = B2 r B1. (2.6.1)

If we let x ∈ B1∗ r B2∗ = B2 r B1 then (b2*) tells us that there is an element y ∈ B1 r B2 = B2∗ r B1∗ such that (B1 r {y}) ∪ {x} ∈ B. Note that E r ((B1 r {y}) ∪ {x}) ∈ B∗ and that

E r ((B1 r {y}) ∪ {x}) = ((E r B1) ∪ {y}) r {x} = (B1∗ ∪ {y}) r {x}. (2.6.2)

Hence B∗ satisfies (b2), and is the set of bases of a matroid on E.

The matroid thus defined is called the dual of M and is denoted by M ∗.

We call the bases of M ∗ cobases of M, the circuits of M ∗ cocircuits, and the independent sets of M ∗ coindependent.

Clearly, since if B is a basis for M then E r B is a basis for M∗, we have

r(M) + r(M∗) = |E|. (2.6.3)

To find the rank function of M∗, which we'll call r∗, in general we need the following lemma.

Lemma 2.6.3. Let I and I∗ be two disjoint subsets of E such that I is independent in a matroid, M, on E and I∗ is independent in its dual, M∗. Then there is a pair of bases, B and B∗, of M and M∗ such that B ∩ B∗ = ∅, I ⊆ B, and I∗ ⊆ B∗.

Proof. Since I∗ is independent in M∗, it is contained in some basis B0∗. So E r B0∗ is a basis for M, by definition. That means that the set E r I∗ contains a basis of M, so r(M) ≥ r(E r I∗) ≥ r(E r B0∗) = r(M). The set I is independent in any restriction of M to a set containing I, so, in particular, it's independent in M|(E r I∗). Hence, it is contained in a basis of this matroid, I ⊆ B ⊆ (E r I∗). Therefore, r(B) = r(E r I∗), which we've already seen is equal to r(M). So B is a basis for M. Letting B∗ = E r B, we see that I ⊆ B and I∗ ⊆ B∗. And clearly B ∩ B∗ = ∅.

Proposition 2.6.4. If M is a matroid with ground set E, and X ⊆ E, then

r∗(X) = r(E r X) + |X| − r(M). (2.6.4)

Proof. Let BX be a basis for X in M∗, that is to say a basis for M∗|X. Similarly, choose a basis for E r X, BErX, in M. By the definition of the rank function, r∗(X) = |BX| and r(E r X) = |BErX|. Since BErX and BX are independent in, respectively, M and M∗, lemma 2.6.3 tells us that there are bases, B and B∗, of M and M∗ with BErX ⊆ B, BX ⊆ B∗, and B ∩ B∗ = ∅. Since BErX and BX are bases of subsets we get that B ∩ (E r X) = BErX and B∗ ∩ X = BX.

Note that this last, together with the disjointness of B and B∗, means that if x ∈ B ∩ X then x ∉ BX ⊆ B∗. And if x ∈ X but x ∉ BX, then x ∉ B∗ (as B∗ ∩ X = BX), and since B ∪ B∗ = E that means that x ∈ B. So B ∩ X = X r BX. Putting the two intersections with B together we can see that B is the disjoint union of X r BX and BErX. Hence

|B| = |X r BX| + |BErX| = |X| − |BX| + |BErX|, (2.6.5)

or, in terms of the rank function, r(M) = |X| − r∗(X) + r(E r X), which can be rearranged to give us the required equation.
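Equation (2.6.4) is easy to sanity-check by brute force: build the dual of a small matroid by complementing bases, as in theorem 2.6.2, and compare ranks. A sketch on U_{2,4} (the example is mine; its dual happens to be U_{2,4} again):

```python
# Verifying r*(X) = r(E - X) + |X| - r(M) on U_{2,4}, with the dual
# rank computed directly from the cobases E - B of theorem 2.6.2.

from itertools import combinations

E = frozenset(range(4))

def r(X):
    return min(len(X), 2)       # rank function of U_{2,4}

subsets = [frozenset(c) for k in range(len(E) + 1)
           for c in combinations(sorted(E), k)]
bases = {B for B in subsets if len(B) == r(B) == r(E)}
cobases = {E - B for B in bases}

def r_star(X):
    # rank in M*: the largest part of X that fits inside some cobasis
    return max(len(X & B) for B in cobases)

assert all(r_star(X) == r(E - X) + len(X) - r(E) for X in subsets)
print("formula (2.6.4) holds for every subset of U_{2,4}")
```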

Bond matroids

One special case of the dual matroid is the dual matroid of a graphic matroid.

For a graph G with edge set E and X ⊆ E it is common to denote the subgraph of G obtained by retaining all vertices and deleting the edges in X by G \ X. If

G \ X has more connected components than G we call X an edge cut of G. An edge cut that is minimal with respect to inclusion (i.e. no proper subsets are edge cuts) is called a bond of G.

Theorem 2.6.5. Let G be a graph with edge set E. The collection of bonds of

G, B, is the set of circuits of a matroid on E, called the bond matroid of G and denoted B(G). In fact, B(G) = (C(G))∗.

Proof. Clearly, the empty set is not a bond of G, so ∅ ∉ B. And if B and B′ are bonds such that B ⊆ B′, then since B′ is a minimal edge cut, we must have B = B′. So (c1) and (c2) are satisfied.

Now assume B, B′ ∈ B are distinct and e ∈ B ∩ B′. Suppose e connects vertices u and v in G; then u and v are in different components of G \ B and G \ B′. Since B ≠ B′, B ∩ B′ is a proper subset of a minimal edge cut, and hence is not an edge cut. So there is a sequence of edges, f1, f2, . . . , fℓ, from u to v in G \ (B ∩ B′). Since B and B′ are edge cuts, we must have fi ∈ B and fj ∈ B′ for some i and j. Choose a vertex, w, between fi and fj. Any path in G between u and w, or equivalently between v and w, that doesn't include e must intersect both B and B′. Hence, u and w must be in different components of G \ ((B ∪ B′) r {e}). This means that (B ∪ B′) r {e} is an edge cut, so it must contain a bond. Therefore, with (c3) satisfied, B is the set of circuits of a matroid B(G).

Let S ⊆ E be a basis of B(G). That is to say, it is an independent set which is maximal with respect to inclusion. Which is to say, by proposition 2.3.3, that S is a maximal set containing no element of B, i.e. containing no edge cut. Since S contains no edge cut, G \ S is connected and so contains a spanning tree. And if G \ S contained any cycles, then S would not be maximal, as we could add any edge of the cycle to S and S plus that edge would still not contain an edge cut. So G \ S must be a spanning tree. But the edge sets of spanning trees are exactly the bases of M(G) = C(G). Hence, by theorem 2.6.2, B(G) = (C(G))∗.
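Theorem 2.6.5 can be checked directly on a tiny graph. A sketch on the triangle K3 (the graph and its encoding are my own toy choices): the bonds are the three pairs of edges, and they coincide with the circuits of (C(G))∗.

```python
# Bonds (minimal edge cuts) of K3 versus circuits of the dual of its
# cycle matroid, as in theorem 2.6.5.

from itertools import combinations

V = (0, 1, 2)
edges = {"a": (0, 1), "b": (1, 2), "c": (0, 2)}
E = frozenset(edges)

def n_components(edge_set):
    """Components of the subgraph (V, edge_set), via union-find."""
    parent = {v: v for v in V}
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    n = len(V)
    for e in edge_set:
        u, v = edges[e]
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            n -= 1
    return n

subsets = [frozenset(c) for k in range(len(E) + 1)
           for c in combinations(sorted(E), k)]

# bonds: minimal sets whose deletion increases the number of components
cuts = [X for X in subsets if n_components(E - X) > n_components(E)]
bonds = {X for X in cuts if not any(Y < X for Y in cuts)}

# circuits of C(G)*: minimal sets fitting inside no cobasis
def r(X):
    return len(V) - n_components(X)   # rank in the cycle matroid C(G)

cobases = {E - B for B in subsets if len(B) == r(B) == r(E)}
dependent = [X for X in subsets if not any(X <= B for B in cobases)]
cocircuits = {X for X in dependent if not any(Y < X for Y in dependent)}

print(bonds == cocircuits)   # True
```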

Without proof we quote this theorem, equivalent to the Whitney planarity condition from [Whi31].

Theorem 2.6.6. If G is planar then B(G) is graphic.

Representability of Duals and the Connection to the Vector Dual

The bond matroid will clearly be important in our work. But no less so will be the matroid duals of representable matroids.

Theorem 2.6.7. If the matroid M is representable over a field F, then so is M ∗.

Proof. Let E be the ground set of M, n = |E|, and let r = r(M) be the rank of M. Since M is representable over F there is a matrix A = (aij) with aij ∈ F such that, up to a bijection, M = M[A]. Let ψ : Fn → Fr be the linear transformation associated with A. Note that ker ψ = {x ∈ Fn | Axt = 0} and dim ker ψ = n − r.

Let B be an n × (n − r) matrix over F whose columns are linearly independent and span ker ψ. Which is to say, Axt = 0 if and only if there is a y ∈ Fn−r such that Byt = xt. We will see that M[Bt] is the dual matroid to M[A]. To show this we need to show that r columns of A are linearly independent (i.e. a basis for M[A]) if and only if the complementary set of n − r columns of Bt are linearly independent. But, since we can reorder the columns of A, which forces a reordering on the columns of Bt, it is sufficient to consider the first r columns of A and the last n − r columns of Bt (also known as the bottom n − r rows of B). If the first r columns are linearly dependent then Axt = 0 for some nonzero x = (x1, . . . , xr, 0, . . . , 0) ∈ Fn r {0}, and such an x satisfies Axt = 0 if and only if there is a y = (y1, . . . , yn−r) ∈ Fn−r r {0} such that xt = Byt. The matrix B can be written as

B = [ B1 ]
    [ B2 ] ,   (2.6.6)

where B1 is an r × (n − r) matrix and B2 is a square (n − r) × (n − r) matrix. By the zeroes in x, B2yt = 0, and since y ≠ 0, B2 is singular. Hence the bottom n − r rows of B are linearly dependent and the theorem is proved.
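The construction in the proof can be carried out concretely over GF(2). A minimal sketch (the matrices below are my own illustrative choice): A represents U_{2,3}, the single row of Bt spans ker A, and column sets of A are independent exactly when the complementary columns of Bt are.

```python
# Theorem 2.6.7 in a tiny case over GF(2): A represents M = U_{2,3},
# B^t (whose rows span ker A) represents M*.  A basis of columns of A
# is complementary to a basis of columns of B^t.

from itertools import combinations

A  = [[1, 0, 1],
      [0, 1, 1]]       # rank 2, three columns
Bt = [[1, 1, 1]]       # (1,1,1) spans the kernel of A over GF(2)

def gf2_rank(rows):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination (xor)."""
    rows = [list(row) for row in rows]
    rank = 0
    ncols = len(rows[0]) if rows else 0
    for col in range(ncols):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def cols_independent(M, cols):
    cols = list(cols)
    if not cols:
        return True
    sub = [[row[c] for c in cols] for row in M]
    return gf2_rank(sub) == len(cols)

for basis in combinations(range(3), 2):
    complement = [c for c in range(3) if c not in basis]
    print(basis, cols_independent(A, basis) == cols_independent(Bt, complement))
```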

Definition 2.6.8. If U and V are vector spaces, f : U → V is a linear transformation, and E ⊆ U is a finite set of vectors, then we can define a matroid, which I will denote Mf,E, where a subset X ⊆ E is independent if f(X) is linearly independent in V . If E is a basis for U, Mf,E = Mf is the same as the matroid M[Af ], where Af is the matrix corresponding to f, E, and a basis for V .

Proposition 2.6.9. Let

U −φ→ V −ψ→ W (2.6.7)

be an exact sequence of based finite-dimensional vector spaces, i.e. ker ψ = Im φ. Then Mψ = (Mφ∗)∗, where φ∗ : V ∗ → U ∗ is the dual of φ.

Proof. Let E be a basis for V and let B ⊆ E be a basis for Mφ∗. I.e., φ∗(B) spans Im φ∗ ⊆ U ∗ ∼= U. Consider b ∈ B. There is a u ∈ U such that φ(u) = b. But then ψ(b) = ψ(φ(u)) = 0. So the elements of E r B must span Mψ. This means that the complement of a basis of Mφ∗ is a basis of Mψ, hence they are dual.

38 2.7 Minors

When introducing the rank function, we introduced the restriction of a matroid to a subset, M|X. This is also called the deletion of E r X from M and denoted

M \ (E r X). This operation and its dual notion, the contraction of a matroid to a subset, are fundamental operations on matroids. The resultant matroids are called minors of M.

Let M be a matroid with ground set E and let X ⊆ E. The contraction of

X from M is defined and denoted as

M/X = (M∗ \ X)∗. (2.7.1)

Note that if G is a graph and H is a subset of its edge set, then

C(G \ H) = C(G) \ H. (2.7.2)

The rank function of M \ X for X ⊆ E, the ground set of M, is simply the restriction of the rank function of M to subsets of E r X, i.e.

rM\X (Y ) = rM (Y ) (2.7.3) for all Y ⊆ E r X.

Proposition 2.7.1. For a matroid M with ground set E and X ⊆ E,

rM/X (Y ) = rM (X ∪ Y ) − rM (X) (2.7.4) for all Y ⊆ E r X.

Proof. We have, by the definition, that rM/X(Y ) = r(M∗\X)∗(Y ). So proposition 2.6.4 and equation (2.7.3) tell us that

rM/X(Y ) = r(M∗\X)∗(Y )
         = rM∗\X((E r X) r Y ) + |Y | − rM∗\X(E r X)
         = rM∗(E r (X ∪ Y )) + |Y | − rM∗(E r X)
         = (rM(X ∪ Y ) + |E r (X ∪ Y )| − r(M) + |Y |) − (rM(X) + |E r X| − r(M)).

The r(M)'s cancel out, and since Y ⊆ E r X, i.e. X ∩ Y = ∅, |E r (X ∪ Y )| + |Y | − |E r X| = 0. Hence rM/X(Y ) = rM(X ∪ Y ) − rM(X).
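Formula (2.7.4) can also be checked against the definition M/X = (M∗ \ X)∗ by composing the dual-rank formula (2.6.4) twice. A brute-force sketch on U_{2,4} (the example matroid and the contracted set are my own choices):

```python
# Contracting X = {0} from U_{2,4}: the rank computed via (M* \ X)*
# agrees with r(X | Y) - r(X), as in proposition 2.7.1.

from itertools import combinations

E = frozenset(range(4))

def r(X):
    return min(len(X), 2)       # rank of U_{2,4}

def dual_rank(rk, ground):
    # equation (2.6.4): r*(Z) = rk(ground - Z) + |Z| - rk(ground)
    return lambda Z: rk(ground - Z) + len(Z) - rk(ground)

X = frozenset({0})
rest = E - X
r_dual = dual_rank(r, E)                    # rank function of M*
r_contracted = dual_rank(r_dual, rest)      # rank of (M* \ X)* = M/X

for k in range(len(rest) + 1):
    for Y in combinations(sorted(rest), k):
        Y = frozenset(Y)
        assert r_contracted(Y) == r(X | Y) - r(X)
print("proposition 2.7.1 confirmed for X = {0} in U_{2,4}")
```

Here the deletion step uses (2.7.3): the rank function of M∗ \ X is just r∗ restricted to subsets of E r X.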

Consider the notion of the contraction of a graph. If e is an edge in a graph G, then the contraction of e from G, denoted G/e, is the graph obtained from G by deleting e and identifying its endpoints. There are several cases, but it is easy to check that for a graph G with edges e and f, (G/e)/f = (G/f)/e, so for a subset X of the edge set of the graph, G/X is well defined.

Proposition 2.7.2. If G is a graph and X is a subset of its edge set then

C(G)/X = C(G/X). (2.7.5)

Proof. If we can show this result when X is a singleton {e}, then induction will yield the result. If e is a loop of G, then G/e = G \ e and

C(G)/{e} = (B(G) \ {e})∗ = B(G \ e)∗ = C(G \ e), (2.7.6)

40 where the middle equality holds because if B is a bond (i.e. minimal edge cut) of G, then e, being a loop, cannot be in B, and hence B is a bond of G \ e.

Equation (2.7.2) then tells us that C(G)/{e} = C(G)\{e} = C(G\e) = C(G/e).

So the result holds when e is a loop.

Suppose e is not a loop of G. Consider a subset I ⊆ E r {e}. If I contains a cycle of G/e, then that set of edges, together with e, will contain a cycle of G, since we got G/e by deleting e and identifying its endpoints. So I ∪ {e} will contain a cycle of G. Similarly, if I ∪ {e} contains a cycle of G, then I contains a cycle of

G/e. So the independent sets of C(G)/{e} and C(G/e) coincide, meaning they are the same matroid.

This proposition and equation (2.7.2) give us the following corollary.

Corollary 2.7.3. Every minor of a graphic matroid is graphic.

2.8 Matroid perspectives

As with any class of mathematical objects, one of the first things considered is the possible maps between objects. For the most part, in matroids, we end up dealing with a highly restricted class of maps φ : E1 → E2, such that Ei is the ground set of a matroid Mi for i = 1, 2, |E1| = |E2|, and φ is a bijection. Equivalently, we could relabel so that E1 = E2, that is to say, we look at matroids on the same ground set.

We say that a bijection φ : E1 → E2 is a weak map between matroids M1 and M2 if φ−1(I) is independent in M1 for every independent I in M2. We say that a bijection ψ : E1 → E2 is a strong map if ψ−1(L) is a flat in M1 for every flat L in M2 (recall that X is a flat in M if cl(X) = X). Following Las Vergnas, in the case that E1 = E2 and the map is the identity we also call a strong map a matroid perspective.

Proposition 2.8.1. Let φ : E1 → E2 be a bijection between the ground sets of matroids M1 and M2 which have ranks r1 and r2. The following are equivalent:

(1) φ is a weak map,

(2) if D is dependent in M1 then φ(D) is dependent in M2,

(3) if C is a circuit of M1 then φ(C) contains a circuit of M2, and

(4) if X ⊆ E1 then r1(X) ≥ r2(φ(X)).

Proof. Assume that φ is a weak map and that D ⊆ E1 is dependent. If φ(D) were independent, then φ−1φ(D) would be independent, but since φ is a bijection, φ−1φ(D) = D and we arrive at a contradiction. Hence φ(D) is dependent and (1) =⇒ (2).

If C is a circuit of M1 then it is by definition dependent in M1. Hence, if we assume (2), then φ(C) is dependent in M2 for every circuit C of M1. But any dependent set of M2 contains a minimal dependent set, i.e. a circuit. So φ(C) contains a circuit and (2) =⇒ (3).

Now assume that (3) holds and X ⊆ E1. X contains a maximal independent set, that is to say, a basis, B. Note that r1(X) = |B|. If X = B then r2(X) ≤

|X| = |B| = r1(X), so the inequality holds in the case that X is independent.

So, if we assume that x ∈ X r B, then proposition 2.3.8 tells us that there is a unique circuit C = C(x, B) ⊆ B ∪ {x} ⊆ X. Hence, φ(C) ⊆ φ(B ∪ {x}) contains a circuit. But this is true of every element of X r B, so r2(φ(X)) ≤ r2(φ(B)) ≤ r1(B) and (3) =⇒ (4).

Finally, if r1(X) ≥ r2(φ(X)) for every X ⊆ E1 and I ⊆ E2 is independent in M2, then r2(I) = |I| = |φ−1(I)| ≤ r1(φ−1(I)). But the rank of any subset of the ground set is less than or equal to its cardinality, so r1(φ−1(I)) = |φ−1(I)| and φ−1(I) is independent. Hence, φ is a weak map and (4) =⇒ (1).

Proposition 2.8.2. Let ψ : E1 → E2 be a bijection between the ground sets of matroids M1 and M2 which have rank functions r1 & r2 and closure operators cl1 and cl2. Then ψ is a strong map if and only if every circuit of M1 is a union of circuits of M2.

Proof. For simplicity’s sake we will assume that M1 and M2 are matroids on the same ground set, E = E1 = E2, and that ψ = 1E. The general case follows by inserting all the appropriate ψ−1 and ψ’s whose only purpose is to make the proof hard to read.

Assume that 1E : M1 → M2 is a strong map and that C is a circuit in M1, meaning that proposition 2.5.9 tells us that x ∈ cl1(C r {x}) for all x ∈ C. Consider the set cl2(C r {x}) ⊆ E. Since C r {x} ⊆ cl2(C r {x}), (cl2) tells us that cl1(C r {x}) ⊆ cl1(cl2(C r {x})), and since 1E : M1 → M2 is a strong map, cl2(C r {x}) is closed with respect to M1, so cl1(cl2(C r {x})) = cl2(C r {x}). Putting things together we get that x ∈ cl2(C r {x}) for every x ∈ C, so C satisfies part of the requirements of proposition 2.5.9 to be a circuit of M2, but not necessarily minimality. Let B be a base of C in M2. Proposition 2.5.9 tells us that

C ⊆ cl2(B) = B ∪ {x ∈ E | M2 has a circuit Cx such that x ∈ Cx ⊆ B ∪ {x}}. (2.8.1)

Notice that

C ⊆ ∪x∈C Cx ⊆ ∪x∈C (B ∪ {x}) = C. (2.8.2)

So C is a union of circuits and one direction holds.

We now assume that every circuit of M1 is a union of circuits of M2. Also assume that X ⊆ E is a flat of M2, i.e. cl2(X) = X. Proposition 2.5.9 once again tells us that

cl1(X) = X ∪ {x ∈ E | M1 has a circuit Cx such that x ∈ Cx ⊆ X ∪ {x}}. (2.8.3)

Since every circuit of M1 is a union of circuits of M2 we see that we can get

cl1(X) ⊆ X ∪ {x ∈ E | M2 has a circuit Cx′ such that x ∈ Cx′ ⊆ X ∪ {x}} (2.8.4)

by choosing, for each x, a Cx′ ⊆ Cx containing x. But the set on the right is cl2(X) = X, so X ⊆ cl1(X) ⊆ X, and the identity is a strong map.

If M → M′ is a matroid perspective on a set E, then from [Las80], [Las78], and [Las99] we get the Tutte polynomial of a matroid perspective

LVM→M′(x, y, z) = ΣL⊆E (x − 1)^(r(M′)−rM′(L)) (y − 1)^(|L|−rM(L)) z^((r(M)−rM(L))−(r(M′)−rM′(L))). (2.8.5)
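The sum (2.8.5) is finite and easy to expand for a small perspective. A sketch for U_{2,3} → U_{1,3} (my own example; the identity is a strong map here since every 3-element circuit of U_{2,3} is a union of 2-element circuits of U_{1,3}), collecting coefficients of (x − 1)^a (y − 1)^b z^c:

```python
# Expanding the Las Vergnas polynomial (2.8.5) of the perspective
# U_{2,3} -> U_{1,3} as a dictionary {(a, b, c): coefficient}, where
# (a, b, c) are the exponents of (x-1), (y-1), z.

from itertools import combinations
from collections import Counter

E = range(3)
r1 = lambda L: min(len(L), 2)   # rank in M  = U_{2,3}
r2 = lambda L: min(len(L), 1)   # rank in M' = U_{1,3}
R1, R2 = r1(E), r2(E)

poly = Counter()
for k in range(len(E) + 1):
    for L in combinations(E, k):
        a = R2 - r2(L)                       # exponent of (x - 1)
        b = len(L) - r1(L)                   # exponent of (y - 1)
        c = (R1 - r1(L)) - (R2 - r2(L))      # exponent of z
        poly[(a, b, c)] += 1

print(dict(poly))
# i.e.  LV = (x - 1)z + 3z + 3 + (y - 1)
```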

CHAPTER 3

TOPOLOGY

3.1 Cell complexes

Simplicial complexes

If {v0, . . . , vn} is an affinely independent1 set of points in RN, that is to say the only real numbers t0, . . . , tn that satisfy the equations

t0 + t1 + ··· + tn = 0 and t0v0 + t1v1 + ··· + tnvn = 0 (3.1.1)

are t0 = t1 = ··· = tn = 0, then the n-simplex, σ, spanned by the vertex set (i.e. the vi's) is the set of all points x ∈ RN such that x = t0v0 + ··· + tnvn for some non-negative real numbers t0, . . . , tn with t0 + ··· + tn = 1. Any simplex spanned by a (proper) subset of {v0, . . . , vn} is called a (proper) face of σ.

Geometrically, the simplex σ is the same no matter what order {v0, . . . , vn} come in. But to put things together properly, we need to consider the notion of the “orientation” of σ. If n = dim σ > 0 and we consider two orderings of the

1 For n > 0 this is equivalent to the vectors v1 − v0, . . . , vn − v0 being linearly independent.

vertex set of σ as equivalent if you can reach one from the other via an even permutation, then the orderings fall into two equivalence classes, each called an orientation of σ. If the vertex set of a simplex is {v0, v1, . . . , vn} then we denote the simplex with that orientation by [v0, v1, . . . , vn]. The codimension-one faces of σ = [v0, v1, . . . , vn] can be written as follows: if we are looking at the face whose vertices are {v0, v1, . . . , vi−1, vi+1, . . . , vn} we denote that simplex as [v0, v1, . . . , v̂i, . . . , vn].

A simplicial complex K in RN is a collection of simplices in RN such that

• every face of a simplex of K is in K, and

• the intersection of any two simplices of K is a face of both simplices.

If L is a subcollection of K that contains all faces of its elements then L itself is a simplicial complex and is called a subcomplex of K. In particular, for n ≥ 0, consider the subcomplex of K consisting of all simplices of dimension less than or equal to n. This subcomplex is denoted K(n) and called the n-skeleton of K.

A spanning n-subcomplex is a subcomplex L of dimension n (i.e., containing no simplices of any higher dimension) that contains K(n−1). The points of K(0) are called the vertices of K. We denote the subset of RN that is the union of the simplices of K by |K| and call it the underlying space of K.
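A finite simplicial complex and its skeleta are straightforward to model combinatorially. A sketch (the encoding of simplices as sets of vertex labels is my own, ignoring the geometric realization |K|):

```python
# A finite simplicial complex as a set of frozensets of vertex labels,
# closed under taking faces; skeleton(K, n) keeps dimension <= n.

from itertools import combinations

def face_closure(top_simplices):
    """All nonempty faces of the given simplices."""
    K = set()
    for s in top_simplices:
        s = frozenset(s)
        for k in range(1, len(s) + 1):
            K.update(frozenset(c) for c in combinations(sorted(s), k))
    return K

def skeleton(K, n):
    """Simplices of dimension at most n (a d-simplex has d + 1 vertices)."""
    return {s for s in K if len(s) <= n + 1}

K = face_closure([(0, 1, 2)])     # a solid triangle: one 2-simplex
print(len(K))                     # 7 = 3 vertices + 3 edges + 1 triangle
print(len(skeleton(K, 1)))        # 6: the 1-skeleton is the graph K3
print(len(skeleton(K, 0)))        # 3: the vertices
```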

If σ is a simplex (of dimension greater than zero) then there is a useful op- eration, defined recursively with respect to dimension, that gives us a simplicial complex whose underlying space is σ called the barycentric subdivision of σ.

Firstly, the barycentric subdivision of a 0-cell, i.e. a point, is the 0-cell itself.

One chooses a point $x \in \sigma$, called the barycenter of $\sigma$, in the interior of $\sigma$, performs the barycentric subdivision of the boundary of $\sigma$, calls that space $X$, and then constructs the cone on $X$, $CX = (X \times [0,1])/(X \times \{0\})$, with $x$ as the cone point. The resulting space is a simplicial complex whose underlying space is $\sigma$. By performing the barycentric subdivision of all its simplices, we can similarly form the barycentric subdivision of a simplicial complex.
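The combinatorics of this construction can be made concrete: the $k$-simplices of the barycentric subdivision of an $n$-simplex correspond to chains of $k+1$ nonempty faces ordered by proper inclusion (the vertices of such a simplex are the barycenters of the faces in the chain). A small illustrative sketch — the helper name below is ours, not the text's:

```python
from itertools import combinations

def bary_subdivision_counts(n):
    """Count k-simplices of the barycentric subdivision of an n-simplex.

    Simplices of the subdivision correspond to chains of nonempty faces
    F_0 < F_1 < ... < F_k under proper inclusion; each face is a nonempty
    subset of the vertex set {0, ..., n}."""
    verts = range(n + 1)
    faces = [frozenset(c) for k in range(1, n + 2)
             for c in combinations(verts, k)]
    counts = {}
    # Grow chains one face at a time; a chain of length k+1 is a k-simplex.
    chains = [[f] for f in faces]
    k = 0
    while chains:
        counts[k] = len(chains)
        chains = [c + [f] for c in chains for f in faces if c[-1] < f]
        k += 1
    return counts

# The subdivided triangle has 7 vertices, 12 edges, and 6 triangles.
print(bary_subdivision_counts(2))
```

Note that the Euler characteristic $7 - 12 + 6 = 1$ agrees with that of the original triangle, as it must since the underlying space is unchanged.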

Proposition 3.1.1. If K is a simplicial complex then |K| is Hausdorff, i.e. if x, y ∈ |K| are distinct then there are disjoint neighborhoods of x and y.

Proof. For any vertex $v$ of $K$ we can define a function $t_v : |K| \to \mathbb{R}$, called the barycentric coordinate with respect to $v$, by noting that each $x$ is interior to precisely one simplex of $K$. So, if $v_0, \dots, v_n$ are the vertices of this simplex, then
$$x = \sum_{i=0}^{n} t_i v_i \qquad (3.1.2)$$
for some positive real numbers $\{t_i\}$. If $v$ is not one of these vertices, set $t_v(x) = 0$.

And if $v = v_i$ for some $i$ then set $t_v(x) = t_i$. Note that, restricted to a single simplex $\sigma$, $t_v$ is continuous. Let $x \neq y$ be two points of $|K|$. There is at least one vertex $v$ such that $t_v(x) \neq t_v(y)$. Set $r = (t_v(x) + t_v(y))/2$. And if we let $U = \{z \mid t_v(z) < r\}$ and

$V = \{z \mid t_v(z) > r\}$, then these are the required disjoint neighborhoods.

Clearly if K is finite, then |K| is a finite union of compact subspaces, so we get the following.

Proposition 3.1.2. If K is finite, then |K| is compact.

If $K$ is a simplicial complex then a $p$-chain on $K$ is a function $c$ from the set of oriented $p$-simplices of $K$ to one's chosen algebraic object, in our case mostly $\mathbb{R}$. This function must satisfy the following conditions: if $\sigma$ and $\sigma'$ are opposite orientations of the same underlying $p$-simplex, then $c(\sigma) = -c(\sigma')$; and $c(\sigma) = 0$ for all but finitely many $p$-simplices. For two such chains $c$ and $c'$,

$(c + c')(\sigma) = c(\sigma) + c'(\sigma)$. The $\mathbb{R}$-module of such chains is called the $\mathbb{R}$-module of oriented $p$-chains of $K$ and is denoted $\Delta_p(K)$. For each oriented simplex $\sigma$ of $K$ there is a special $p$-chain called the elementary chain of $\sigma$, denoted $\tilde{\sigma}$, such that $\tilde{\sigma}(\sigma) = 1$, $\tilde{\sigma}(\sigma') = -1$ if $\sigma'$ is the opposite orientation of $\sigma$, and

$\tilde{\sigma}(\tau) = 0$ for all other simplices $\tau$. It should be clear that each $p$-chain can be written in terms of these elementary $p$-chains. In fact, if we choose orientations for each $p$-simplex, say $\{\sigma_i\}_{i \in I}$, then we can write any $p$-chain as

$$\sum_{i \in I} n_i \tilde{\sigma}_i \qquad (3.1.3)$$
for some $n_i$, only finitely many of which are nonzero. By an abuse of notation (or a different viewpoint: formal sums), we can write $\sum n_i \sigma_i$.

For each p > 0, there is a function, called the boundary operator, from ∆p(K) to ∆p−1(K). If σ = [v0, v1, . . . , vp] is an oriented simplex then the boundary operator is defined as

$$\partial_p(\sigma) = \sum_{k=0}^{p} (-1)^k [v_0, v_1, \dots, \hat{v}_k, \dots, v_p]. \qquad (3.1.4)$$
Note that if $\sigma$ and $\sigma'$ are opposite orientations of the same simplex, then $\partial_p(\sigma) = -\partial_p(\sigma')$. This means that we can extend $\partial_p$ to a homomorphism on $\Delta_p(K)$. It is straightforward to see that $\partial_{p-1} \circ \partial_p = 0$.

We now have what is called a chain complex: a sequence of modules and module homomorphisms in which the composition of any composable pair of homomorphisms is zero. In our case we get

$$\cdots \to \Delta_{p+1}(K) \xrightarrow{\partial_{p+1}} \Delta_p(K) \xrightarrow{\partial_p} \Delta_{p-1}(K) \to \cdots \to \Delta_1(K) \xrightarrow{\partial_1} \Delta_0(K) \xrightarrow{\partial_0} 0 \qquad (3.1.5)$$
where $\partial_0$ is the zero map. For each $\partial_p$, elements of $\ker \partial_p$ are called $p$-cycles

(and sometimes denoted Zp(K)) and elements of Im ∂p+1 are called p-boundaries

(sometimes denoted Bp(K)). Since ∂p ◦ ∂p+1 = 0, Im ∂p+1 ⊆ ker ∂p, so we can

form the modules $H_p(K; \mathbb{R}) = \ker \partial_p / \operatorname{Im} \partial_{p+1}$, called the $p$-th simplicial homology $\mathbb{R}$-module of $K$.
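Over $\mathbb{R}$ this all reduces to linear algebra: $\dim H_p = \dim \ker \partial_p - \operatorname{rank} \partial_{p+1}$, with the boundary operators written as matrices in the bases of elementary chains. A brief illustrative sketch (the helper below is ours, not from the text), using the triangle with and without its 2-cell:

```python
import numpy as np

# Boundary matrices over R for the triangle [v0, v1, v2]:
# 2-chains spanned by [v0,v1,v2]; 1-chains by [v1,v2], [v0,v2], [v0,v1];
# 0-chains by v0, v1, v2.  Columns follow formula (3.1.4).
d2 = np.array([[1], [-1], [1]])
d1 = np.array([[0, -1, -1],
               [-1, 0, 1],
               [1, 1, 0]])
assert not (d1 @ d2).any()  # boundary of a boundary vanishes

def betti(dims, boundaries):
    """Betti numbers over R: dim H_p = dim ker d_p - rank d_(p+1),
    where dim ker d_p = dims[p] - rank d_p."""
    rank = {p: np.linalg.matrix_rank(m) for p, m in boundaries.items()}
    return [int(dims[p] - rank.get(p, 0) - rank.get(p + 1, 0))
            for p in sorted(dims)]

# Hollow triangle (no 2-cell) is a circle: one component, one loop.
print(betti({0: 3, 1: 3}, {1: d1}))
# Full triangle is contractible: only the component survives.
print(betti({0: 3, 1: 3, 2: 1}, {1: d1, 2: d2}))
```

The first call returns Betti numbers $(1, 1)$ and the second $(1, 0, 0)$, as expected for a circle and a disc respectively.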

CW complexes

Computing the homology of a simplicial complex is a straightforward procedure, but often we want more freedom in the construction of our spaces. For this purpose, we consider CW complexes. A space is called an $n$-cell if it is homeomorphic to $B^n = \{x \in \mathbb{R}^n \mid \|x\| \leq 1\}$. It is called an open $n$-cell if it is homeomorphic to the interior of $B^n$. A CW complex is a space $X$ and a collection of disjoint open cells $e_\alpha$ whose union is $X$ such that:

• X is Hausdorff,

• for each open $n$-cell $e^n_\alpha$ there is a continuous map $\psi_\alpha : B^n \to X$ that maps the interior of $B^n$ homeomorphically onto $e^n_\alpha$ and maps the boundary of $B^n$ into a finite union of open cells of lower dimension, and

• a set $U \subseteq X$ is closed in $X$ if and only if $U \cap \bar{e}_\alpha$ is closed in $\bar{e}_\alpha$ (the topological closure of $e_\alpha$) for each $\alpha$.

It is worth noting for our purposes that, although computing the specific boundary maps and chain modules is a complex operation involving relative simplicial homology, in essence the chain modules of CW complexes are free modules generated by oriented cells of the appropriate dimension, and the boundary operator takes a cell to a finite linear combination of the codimension-one cells on its boundary.

3.2 Cohomology and the cup product

Just as we can associate with each space (through singular homology, not covered here) the sequence of homology modules $H_p(X)$, we can associate another, related sequence called the cohomology of the space. The cohomology modules, in fact, can be naturally assembled into a graded ring, an operation that is much more difficult on the homology side.

In the case that $K$ is a simplicial complex, let $G$ be an abelian group and let $\Delta_p(K)$ be the simplicial chain complex with coefficients in $\mathbb{Z}$ for all $p$. Then the group of $p$-dimensional cochains of $K$ with coefficients in $G$ is

$\Delta^p(K; G) = \operatorname{Hom}(\Delta_p(K), G)$, where $\operatorname{Hom}(A, B)$ is the group of homomorphisms from the abelian group $A$ to the abelian group $B$. The coboundary operator $\delta^p$ is defined as the dual of $\partial_{p+1}$, i.e. $\delta^p : \Delta^p(K) \to \Delta^{p+1}(K)$, and if $f \in \Delta^p(K) = \operatorname{Hom}(\Delta_p(K), G)$ then $\delta^p(f) = f \circ \partial_{p+1}$.

Note that if $\{\sigma_\alpha\}$ is the set of $p$-simplices of $K$ then the set of dual functions $\{\sigma^*_\alpha\}$ generates $\Delta^p(K)$, and so we can write any $p$-cochain as the (possibly infinite) formal sum
$$\sum g_\alpha \sigma^*_\alpha. \qquad (3.2.1)$$

Let $R$ be a ring, $C_*(X)$ be an integral (simplicial or singular) chain complex on a space $X$, and $C^*(X; R) = \operatorname{Hom}(C_*, R)$ be the dual complex. If $\varphi \in C^k(X; R)$ and $\psi \in C^\ell(X; R)$ then we can define an element $\varphi \smile \psi$ of $C^{k+\ell}(X; R)$, called the cup product, as follows. If $\sigma = [v_0, \dots, v_{k+\ell}]$ is an element of $C_{k+\ell}(X)$ then
$$(\varphi \smile \psi)(\sigma) = \varphi(\sigma|_{[v_0, \dots, v_k]}) \, \psi(\sigma|_{[v_k, \dots, v_{k+\ell}]}), \qquad (3.2.2)$$
where $\sigma|_{[v_{i_0}, \dots, v_{i_j}]}$ is the $j$-dimensional face of $\sigma$ with the indicated vertices and orientation. This product induces a product on cohomology, $\smile : H^k(X; R) \times H^\ell(X; R) \to H^{k+\ell}(X; R)$, which is bilinear and associative.

The cup product, in general, is not commutative. But it does satisfy something close: if $\varphi \in H^k(X; R)$ and $\psi \in H^\ell(X; R)$ then $\varphi \smile \psi = (-1)^{k\ell} \psi \smile \varphi$. So for fixed $k$ and $\ell$ the product, viewed in this context as a bilinear form, is either symmetric or skew-symmetric.

3.3 Manifolds and Poincaré duality

Let $\mathbb{H}^m$ denote the euclidean half-space
$$\mathbb{H}^m = \{(x_1, \dots, x_m) \in \mathbb{R}^m \mid x_m \geq 0\}. \qquad (3.3.1)$$
Then the boundary of $\mathbb{H}^m$ can be written as $\mathbb{R}^{m-1} \times 0$. A non-empty Hausdorff space $M$ is called an $m$-manifold if every point in $M$ has a neighborhood

homeomorphic to an open subset of $\mathbb{R}^m$. Similarly, $M$ is called an $m$-manifold with boundary if every point has a neighborhood homeomorphic to an open set of either $\mathbb{R}^m$ or $\mathbb{H}^m$. A compact manifold without boundary is said to be closed. If $M$ is an $m$-manifold then for any point $x \in M$ the relative homology groups

$H_i(M, M \smallsetminus \{x\})$ are zero if $i \neq m$ and infinite cyclic if $i = m$. We call a generator $\mu_x$ of that group a local orientation of $M$ at $x$. An orientation of $M$ is a function $x \mapsto \mu_x$ assigning to each point in $M$ a local orientation such that every $x \in M$ has a neighborhood $U$ containing an open ball $B$ of finite radius, such that for every $y \in B \subseteq U \subseteq M$ the associated local orientations $\mu_y$ are the images of one generator $\mu_B$ of $H_m(M, M \smallsetminus B) \cong H_m(\mathbb{R}^m, \mathbb{R}^m \smallsetminus B) \cong \mathbb{Z}$. If such a function exists we call our manifold orientable.

The simplest case of the Poincaré duality theorem, following [Mun84], is as follows.

Theorem 3.3.1. If $M$ is a closed orientable $m$-manifold and $G$ is an arbitrary coefficient group, then for all $p$ there is an isomorphism
$$H^p(M; G) \cong H_{m-p}(M; G). \qquad (3.3.2)$$
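As a quick sanity check (illustrative, not from the text): for a closed orientable surface of genus $g$ with its standard CW structure (one 0-cell, $2g$ 1-cells, one 2-cell) all cellular boundary maps vanish, so the Betti numbers can be read off, and the symmetry predicted by the theorem over $G = \mathbb{R}$ is visible directly.

```python
def surface_betti(g):
    """Betti numbers of the closed orientable genus-g surface, read off from
    the standard CW structure (one 0-cell, 2g 1-cells, one 2-cell); all
    cellular boundary maps are zero, so dim H_p just counts the p-cells."""
    return [1, 2 * g, 1]

for g in range(5):
    b = surface_betti(g)
    # Poincare duality over R: dim H^p = dim H_p = dim H_{2-p},
    # so the Betti sequence reads the same backwards.
    assert b == b[::-1]
print("Betti sequences are palindromic for genus 0..4")
```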

In addition to Poincaré duality, there is another useful dual structure associated with any manifold. If $M$ is an $n$-manifold, $K$ is a simplicial complex structure for $M$, and $\sigma$ is a $k$-simplex of $K$, then there is a dual cell $D\sigma$ of dimension $n - k$: the union of the simplices of the barycentric subdivision that have the barycenter of $\sigma$ as a vertex and are transverse to $\sigma$. The union of all such $D\sigma$ gives a CW structure $K^*$ on $M$. In fact, many CW complexes, for example any CW complex arising from a handle decomposition (see [Nic07] or [Sma62]) or a Morse function (see [Mil63]), have associated dual CW complexes which fulfill all the necessary conditions, i.e.

transversality. For example, one obtains a CW structure for the torus $T^2 = S^1 \times S^1$ by decomposing $\mathbb{R}^2$ with 0-cells at all points of $\mathbb{Z}^2$, 1-cells connecting vertically or horizontally adjacent 0-cells, and 2-cells in the remaining squares, and then taking the quotient modulo the relations $x \sim x + 1$ and $y \sim y + 1$; the dual cell complex comes from almost the same decomposition of $\mathbb{R}^2$, but with every cell shifted by $(\frac{1}{2}, \frac{1}{2})$.

Using such a dual structure and given a spanning $n$-subcomplex $L$ of a $2n$-dimensional simplicial complex $K$, we construct $\overline{L}$ as the spanning $n$-subcomplex of $K^*$ containing all $n$-simplices of $K^*$ except those dual to the $n$-simplices of $L$. Using the notion of handle decompositions of a space, Krushkal, in lemma 3.3 of [KR10], gives us the following.

Lemma 3.3.2. If $M$ is a closed orientable $2n$-dimensional manifold, $K$ is a simplicial structure for $M$, $L$ is a spanning $n$-subcomplex of $K$, and $\overline{L}$ is as above, then $\overline{L}$ is homotopy equivalent to $M \smallsetminus L$.

3.4 Krushkal’s polynomial

In [KR10] Krushkal and Renardy introduced a polynomial invariant for triangulations of an orientable even-dimensional manifold, based on the Tutte polynomial for complexes. Following them we consider any closed oriented $2n$-dimensional manifold, $M$, with an embedded simplicial or CW complex, $K$. We will mostly be concerned with the specific case wherein $K$ is a triangulation of $M$ (or, in the case of a CW complex, that $|K|$, the underlying space of $K$, is $M$).

Let $L$ be a spanning $n$-subcomplex of $K$ and let $i : L \to M$ be the map which comes from the embedding of $K$. With all homology groups being over $\mathbb{R}$, define
$$k(L) = \dim(\ker(i_* : H_n(L) \to H_n(M))). \qquad (3.4.1)$$

Letting $\cdot : H_n(M) \times H_n(M) \to \mathbb{R}$ denote the intersection pairing on $M$, which is the Poincaré dual of the cup product mentioned in section 3.2, we examine two subspaces of the vector space $H_n(M)$:

$$V = V(L) = \operatorname{Im}(i_* : H_n(L) \to H_n(M)), \text{ and} \qquad (3.4.2)$$
$$V^\perp = V^\perp(L) = \{u \in H_n(M) \mid \forall v \in V(L),\ u \cdot v = 0\}. \qquad (3.4.3)$$
Using these we can define corresponding invariants of the embedding $i : L \to M$:
$$s(L) = \dim \frac{V}{V \cap V^\perp}, \text{ and} \qquad (3.4.4)$$
$$s^\perp(L) = \dim \frac{V^\perp}{V \cap V^\perp}. \qquad (3.4.5)$$
In chapter 4 we will see a geometric interpretation of the numbers $s$ and $s^\perp$ in the case that $n = 1$.

Definition 3.4.1. If $M$ is a closed oriented $2n$-manifold and $K$ is a simplicial complex embedded in $M$, then we can define the polynomial
$$P_{K,M}(X, Y, A, B) = \sum_{L \subseteq K^{(n)}} X^{\dim H_{n-1}(L) - \dim H_{n-1}(K)} \, Y^{k(L)} \, A^{s(L)} \, B^{s^\perp(L)}, \qquad (3.4.6)$$
where the sum is taken over all spanning $n$-subcomplexes of $K$.

3.5 Matroid perspectives for chain complexes

In order to construct our second polynomial we will need a way of using matroids to extract data from the algebraic topology. Consider a chain complex

$$\cdots \to C_{n+1} \xrightarrow{\partial_{n+1}} C_n \xrightarrow{\partial_n} C_{n-1} \to \cdots \qquad (3.5.1)$$
of finite-dimensional, based vector spaces. We can truncate this at the $n$-th position to get
$$\cdots \to C_{n+1} \xrightarrow{\partial_{n+1}} C_n \xrightarrow{\partial_n} \operatorname{Im} \partial_n \to 0. \qquad (3.5.2)$$

Note that $\operatorname{Im} \partial_n \cong C_n / \ker \partial_n$. Taking that into account, we consider the map $\pi : C_n / \operatorname{Im} \partial_{n+1} \to C_n / \ker \partial_n$ which takes $c + \operatorname{Im} \partial_{n+1} \in C_n / \operatorname{Im} \partial_{n+1}$ to $c + \ker \partial_n \in C_n / \ker \partial_n$. Since $\operatorname{Im} \partial_{n+1} \subseteq \ker \partial_n$ this map is surjective. And if $\pi(c + \operatorname{Im} \partial_{n+1}) = 0$ then $c \in \ker \partial_n$, so $c + \operatorname{Im} \partial_{n+1} \in \ker \partial_n / \operatorname{Im} \partial_{n+1} = H_n(C_*)$, giving us the short exact sequence
$$0 \to H_n(C_*) \to C_n / \operatorname{Im} \partial_{n+1} \xrightarrow{\pi} C_n / \ker \partial_n \to 0. \qquad (3.5.3)$$

Putting it all together, along with the projection $\rho : C_n \to C_n / \operatorname{Im} \partial_{n+1}$, we get the following commutative diagram.

[Diagram: the bottom row is the truncated complex $C_{n+1} \xrightarrow{\partial_{n+1}} C_n \xrightarrow{\partial_n} \operatorname{Im} \partial_n \to 0$; the column through $C_n/\operatorname{Im}\partial_{n+1}$ is the short exact sequence $0 \to H_n(C_*) \to C_n/\operatorname{Im}\partial_{n+1} \xrightarrow{\pi} \operatorname{Im}\partial_n \to 0$ of (3.5.3); and the diagonal projection $\rho$ satisfies $\pi \circ \rho = \partial_n$.]

Theorem 3.5.1. The map $\pi$ in the diagram induces a matroid perspective $M_\rho \to M_{\partial_n}$.

Proof. Let $E$ be the base of $C_n$. Then we have two matroids with $E$ as their ground set: $M_\rho$ and $M_{\partial_n}$. Let $C = \{e_i\}_{i=1}^{k} \subseteq E$ be a circuit of $M_\rho$. In a vector matroid, this means that there is a set of constants $\{c_i\}_{i=1}^{k}$ such that $c_1 \rho(e_1) + \cdots + c_k \rho(e_k) = 0$ and no proper subset of $C$ has such a set of constants. In other words, $c_1 e_1 + \cdots + c_k e_k \in \operatorname{Im} \partial_{n+1}$. And since $\operatorname{Im} \partial_{n+1} \subseteq \ker \partial_n$ we have $c_1 e_1 + \cdots + c_k e_k \in \ker \partial_n$, i.e. $c_1 \partial_n(e_1) + \cdots + c_k \partial_n(e_k) = 0$, which means that $C$ is a union of circuits in $M_{\partial_n}$.
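The theorem can also be probed numerically: realizing both matroids as vector matroids over $\mathbb{R}$, one can check that $r_{M_{\partial_n}}(L) \leq r_{M_\rho}(L)$ for every subset $L$, a necessary condition for the perspective (it follows from every circuit of $M_\rho$ being a union of circuits of $M_{\partial_n}$). A sketch using the chain complex of a solid triangle with $n = 1$ — the matrices and helper names are our illustrative choices, not the text's:

```python
import numpy as np
from itertools import combinations

# Chain complex of the full triangle (one 2-cell, three edges, three vertices):
# d1 . d2 = 0, n = 1, and the ground set E is the three edges.
d2 = np.array([[1.0], [-1.0], [1.0]])
d1 = np.array([[0.0, -1.0, -1.0],
               [-1.0, 0.0, 1.0],
               [1.0, 1.0, 0.0]])

def rank(m):
    return int(np.linalg.matrix_rank(m)) if m.size else 0

def r_bottom(L):
    """Rank in M_{d_n}: dimension of the span of d1(e), e in L."""
    return rank(d1[:, list(L)])

def r_top(L):
    """Rank in M_rho: dimension of the image of span(L) in C_1 / Im d2."""
    basis = np.eye(3)[:, list(L)]
    return rank(np.hstack([basis, d2])) - rank(d2)

for k in range(4):
    for L in combinations(range(3), k):
        # Rank inequality of the perspective M_rho -> M_{d_n}.
        assert r_bottom(L) <= r_top(L)
print("rank inequality holds for all subsets of E")
```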

CHAPTER 4

GRAPHS ON SURFACES

In [ACE+13] we considered graphs cellularly embedded in closed surfaces. We found that there was a relation between two known polynomials, with the only extra information needed being the genus of the surface, the fundamental invariant of oriented surfaces.

4.1 The Las Vergnas polynomial

In [Whi31], Hassler Whitney showed that a graph has a dual if and only if it is planar, that is, if it can be embedded in $S^2$. But by using information from a specific depiction of a graph, namely the cyclic order in which half-edges attach to vertices, a dual can be constructed for any graph embedded in a surface. Note that two "cyclic" graphs may not be isomorphic even if their underlying graphs are the same. One way to visualize this is with the notion of a ribbon graph (or fat graph). In a ribbon graph each vertex is represented

by a 2-disc in $\mathbb{R}^3$ and each edge is represented as an elongated copy of $I \times I$ attached to the appropriate vertices without overlap. Since we are considering oriented surfaces only, the edges should in addition be attached without twists,

i.e. so that the resulting surface with boundary is orientable. The boundary of this surface is a closed 1-manifold, so it must be a union of circles. We can then attach a topological 2-disc to each boundary component along a homeomorphism of $S^1$ to obtain a closed 2-manifold.

When a graph, $G$, is embedded in a surface, $\Sigma$, so that each connected component of $\Sigma \smallsetminus G$ is homeomorphic to a disc, i.e. cellularly, as in figure 4.1, there is a natural dual graph $G^*$. For each component (or face) of $\Sigma \smallsetminus G$

Figure 4.1: A cellular graph (with loops and parallel edges) on a 2-holed torus.

we choose a specific point. These will be the vertices of $G^*$. Each edge of $G$ separates two (not necessarily distinct) components, and we choose a line transverse to that edge connecting the specified points of the two faces. We can choose the collection of lines so that no two intersect and none pass through a vertex (of $G^*$ or $G$). These are the edges of $G^*$. In essence, we have found the

Figure 4.2: The dual graph of the graph in figure 4.1

dual cellulation of Σ.

We examine the bond matroid of $G^*$, $B(G^*)$, as mentioned in section 2.6. If $\Sigma$ is the sphere, or in other words $G$ is planar, then $B(G^*)$ is isomorphic to $C(G)$.

So the differences between C(G) and B(G∗) can be considered as a measure of the non-planarity of the embedding. To measure the difference, we’ll need the following.

Proposition 4.1.1. The map B(G∗) → C(G) induced by the correspondence of edges between G and G∗ is a matroid perspective.

Proof. Let $X$ be a bond of $G^*$, that is to say a minimal cut set for $G^*$, or a circuit of $B(G^*)$. Cutting these edges to (further) disconnect $G^*$ is the same as cutting $\Sigma$ apart along the corresponding edges of $G$ to (further) disconnect $\Sigma$. The edges on the boundaries of the components of $\Sigma \smallsetminus X$ form a collection of circuits whose union is $X$. So by proposition 2.8.2, $B(G^*) \to C(G)$ is a matroid perspective.

With this and the Tutte polynomial for a matroid perspective from [Las80], [Las78], and [Las99],
$$T_{M \to M'}(x, y, z) = \sum_{L \subseteq E} (x-1)^{r(M') - r_{M'}(L)} (y-1)^{|L| - r_M(L)} z^{(r(M) - r_M(L)) - (r(M') - r_{M'}(L))}, \qquad (4.1.1)$$
we get the Las Vergnas polynomial for a graph on a surface:
$$LV_{G,\Sigma}(x, y, z) = T_{B(G^*) \to C(G)}(x, y, z) = \sum_{L \subseteq E} (x-1)^{r(C(G)) - r_{C(G)}(L)} (y-1)^{|L| - r_{B(G^*)}(L)} z^{(r(B(G^*)) - r_{B(G^*)}(L)) - (r(C(G)) - r_{C(G)}(L))}. \qquad (4.1.2)$$

4.2 Krushkal’s polynomial for graphs in surfaces

In [KR10], Vyacheslav Krushkal defined a polynomial for a graph $G$ embedded in a surface (i.e. a 2-manifold) $\Sigma$,
$$P_{G,\Sigma}(X, Y, A, B) = \sum_{L \subseteq G} X^{c(L) - c(G)} \, Y^{k(L)} \, A^{s(L)/2} \, B^{s^\perp(L)/2}, \qquad (4.2.1)$$
wherein the sum is taken over all spanning subgraphs $L$ of $G$, $c(L)$ is the number of connected components of $L$, $k(L) = \dim \ker(i_* : H_1(L; \mathbb{R}) \to H_1(\Sigma; \mathbb{R}))$, $s(L)$ is twice the genus of the surface obtained by taking a regular neighborhood $N(L)$ of $L$ in $\Sigma$ and attaching disks to every boundary circle of $N(L)$, and $s^\perp(L)$ is similarly twice the genus of the surface obtained by removing $N(L)$ from $\Sigma$.

Notice two things:

• like Krushkal's other polynomial but unlike the Las Vergnas polynomial, $G$ does not have to be cellularly embedded in $\Sigma$, although we will be chiefly concerned with just that situation, and

• there is a crucial difference in the powers of $A$ and $B$ between this polynomial and that from definition 3.4.1.

4.3 Relationship

Theorem 4.3.1. If the graph $G$ is cellularly embedded in an orientable surface $\Sigma$ of genus $g$, then
$$LV_{G,\Sigma}(x, y, z) = z^g \, P_{G,\Sigma}(x - 1, y - 1, z^{-1}, z). \qquad (4.3.1)$$

Proof. If, for each $L$, considered on the left-hand side as a subset of the edge set of $G$ and on the right-hand side as a spanning subgraph of $G$, we have
$$(x-1)^{r(C(G)) - r_{C(G)}(L)} (y-1)^{|L| - r_{B(G^*)}(L)} z^{(r(B(G^*)) - r_{B(G^*)}(L)) - (r(C(G)) - r_{C(G)}(L))} = (x-1)^{c(L) - c(G)} (y-1)^{k(L)} z^{g - s(L)/2 + s^\perp(L)/2}, \qquad (4.3.2)$$
then the theorem is proved. The following lemmas will show that

• $r(C(G)) - r_{C(G)}(L) = c(L) - c(G)$,

• $|L| - r_{B(G^*)}(L) = k(L)$, and

• $(r(B(G^*)) - r_{B(G^*)}(L)) - (r(C(G)) - r_{C(G)}(L)) = g - s(L)/2 + s^\perp(L)/2$,

and so equation (4.3.1) holds.

Throughout the following, L is both a spanning subgraph of G and its edge set.

Lemma 4.3.2. $r(C(G)) - r_{C(G)}(L) = c(L) - c(G)$.

Proof. The rank in the circuit matroid of a graph is the size of a spanning forest, so $r(C(G)) = v(G) - c(G)$, the number of vertices of $G$ minus the number of connected components of $G$. Similarly, for $L$, $r_{C(G)}(L) = v(L) - c(L) = v(G) - c(L)$, since $L$ is spanning. So $r(C(G)) - r_{C(G)}(L) = (v(G) - c(G)) - (v(G) - c(L)) = c(L) - c(G)$.

Lemma 4.3.3. $k(L) = |L| - r_{B(G^*)}(L)$.

Proof. Let $M = B(G^*)$ and let $N = M^* = C(G^*)$. As proposition 2.6.4 indicates,
$$r_M(L) = r_N(E \smallsetminus L) + |L| - r(N). \qquad (4.3.3)$$
So $|L| - r_M(L) = r(N) - r_N(E \smallsetminus L)$. Since $N$ is the circuit matroid of $G^*$, proposition 2.4.4 tells us that
$$|L| - r_M(L) = \left(|V(G^*)| - c(G^*)\right) - \left(|V(G^*[E \smallsetminus L])| - c(G^*[E \smallsetminus L])\right) = c(G^*[E \smallsetminus L]) - c(G^*), \qquad (4.3.4)$$
where $V(H)$ is the vertex set of a graph $H$ and $G^*[E \smallsetminus L]$ is the spanning subgraph of $G^*$ consisting of edges not in $L$. Clearly $c(G^*) = c(G)$ and both are equal to the number of connected components of $\Sigma$, $c(\Sigma)$.

Now consider $L$ as a spanning subgraph of $G$. We will remove its regular neighborhood, $N(L)$, from $\Sigma$ and count the number of connected components, $c(\Sigma \smallsetminus N(L))$. First, we remove small discs around all vertices of $L$ (i.e. all vertices of $G$). These are the faces of the cellular embedding of $G^*$ in $\Sigma$, so all that is left at this stage is a regular neighborhood of $G^*$ in $\Sigma$. Second, removing neighborhoods of edges in $L \subseteq G$ from $\Sigma$ results in the same surface, topologically, as removing neighborhoods of the corresponding edges of $G^*$ from $\Sigma$. This is because corresponding edges are transverse to each other. So the number of connected components of $\Sigma \smallsetminus N(L)$ is the same as that of $G^*[E \smallsetminus L]$, and we get $|L| - r_M(L) = c(\Sigma \smallsetminus N(L)) - c(G)$.

Let us examine the left side of our equation, $k(L)$. As in the definition, denote by $i_* : H_1(L; \mathbb{R}) \to H_1(\Sigma; \mathbb{R})$ the linear map induced by the composition $L \hookrightarrow G \hookrightarrow \Sigma$; $k(L)$ equals the dimension of the kernel of this map. As we have a subspace $N(L) \subseteq \Sigma$, the long exact sequence of the topological pair $(\Sigma, N(L))$,
$$\cdots \to H_2(N(L)) \to H_2(\Sigma) \to H_2(\Sigma, N(L)) \xrightarrow{\delta} H_1(N(L)) \xrightarrow{i_*} H_1(\Sigma) \to \cdots, \qquad (4.3.5)$$
exists. Since $N(L)$ deformation retracts to the one-dimensional $L$, $H_2(N(L))$ is trivial. $H_2(\Sigma)$ has as dimension the number of connected components of $\Sigma$, so it is isomorphic to $\mathbb{R}^{c(\Sigma)}$. And $H_2(\Sigma, N(L))$ has dimension equal to the number of connected components of $\Sigma \smallsetminus N(L)$, i.e. $c(\Sigma \smallsetminus N(L))$. If we pull out the short exact sequence
$$0 \to \mathbb{R}^{c(\Sigma)} \to H_2(\Sigma, N(L)) \to \operatorname{Im} \delta \to 0 \qquad (4.3.6)$$
from the middle of the long exact sequence above, then, noting that $\operatorname{Im} \delta = \ker i_*$, we see that
$$k(L) = \dim \ker i_* = \dim \operatorname{Im} \delta = \dim H_2(\Sigma, N(L)) - c(\Sigma) = c(\Sigma \smallsetminus N(L)) - c(G) = |L| - r_M(L). \qquad (4.3.7)$$
That is to say, $k(L) = |L| - r_{B(G^*)}(L)$.

Lemma 4.3.4. $2g = r(B(G^*)) - r(C(G))$.

Proof. Let $M = B(G^*)$ and $M' = C(G)$. Since $M'$ is a circuit matroid, $r(M') = v(G) - c(G) = e(T)$, where $T$ is a spanning forest in $G$ with the same number of components as $G$. On the other hand, the rank of a bond matroid is the maximal number of edges that can be removed without increasing the number of connected components. So $r(M) = e(G^*) - e(T^*) = e(G) - e(T^*)$, where $T^*$ is a spanning forest of $G^*$, again with the same number of components as $G^*$. As the number of edges in such a spanning forest is equal to the number of vertices minus the number of components, and, for $T^*$, since the vertices of $G^*$ correspond to the faces of the cellulation of $G$ in $\Sigma$,
$$r(M) - r(M') = e(G) - e(T^*) - e(T) = e - (f - c(\Sigma)) - (v - c(\Sigma)) = -v + e - f + 2c(\Sigma) = 2g, \qquad (4.3.8)$$
where $v$, $e$, and $f$ are the numbers of vertices, edges, and faces of $G$ embedded in $\Sigma$ and $g$ is the genus of $\Sigma$.

Lemma 4.3.5. $g + s(L)/2 - s^\perp(L)/2 = r_{B(G^*)}(L) - r_{C(G)}(L)$.

Proof. As before, let $M = B(G^*)$ and $M' = C(G)$. Clearly, $r_M(L) - r_{M'}(L) = (|L| - r_{M'}(L)) - (|L| - r_M(L))$. Lemma 4.3.3 tells us that $|L| - r_M(L) = k(L)$.

As for $|L| - r_{M'}(L)$: since $M'$ is a circuit matroid, the rank of $L$ is the number of edges in a maximal spanning forest, $T$, of $L$. The topological quotient space $L/T$ is a disjoint union (one for each component of $L$) of wedges of circles, with the total number of circles being $|L| - r_{M'}(L)$. This is, of course, the same as the rank of the first homology group of $L$, $H_1(L)$. In his paper [Kru11], Krushkal denotes this $n(L)$, the nullity of $L$. Formula (2.5) of that paper tells us that
$$|L| - r_{M'}(L) = n(L) = k(L) + g + s(L)/2 - s^\perp(L)/2. \qquad (4.3.9)$$
So
$$r_M(L) - r_{M'}(L) = k(L) + g + s(L)/2 - s^\perp(L)/2 - k(L) = g + s(L)/2 - s^\perp(L)/2. \qquad (4.3.10)$$

Lemma 4.3.6. $\left(r(B(G^*)) - r_{B(G^*)}(L)\right) - \left(r(C(G)) - r_{C(G)}(L)\right) = g - s(L)/2 + s^\perp(L)/2$.

Proof. Letting $M$ and $M'$ be as before,
$$\left(r(M) - r_M(L)\right) - \left(r(M') - r_{M'}(L)\right) = \left(r(M) - r(M')\right) - \left(r_M(L) - r_{M'}(L)\right). \qquad (4.3.11)$$
The previous two lemmas tell us that
$$\left(r(M) - r_M(L)\right) - \left(r(M') - r_{M'}(L)\right) = 2g - \left(g + s(L)/2 - s^\perp(L)/2\right) = g - s(L)/2 + s^\perp(L)/2, \qquad (4.3.12)$$
proving this lemma and, hence, the theorem.

4.4 An example

As there are $2^{12} = 4096$ possible subsets of edges for the 12-edge graph of figure 4.1, it would be too cumbersome to calculate its polynomials. Instead we'll consider the simpler graph embedded in the genus-2 oriented surface of figure 4.3.

The calculations from table 4.1 tell us that

$$LV_{G,\Sigma}(x, y, z) = (x-1)z^4 + z^4 + 4(x-1)z^3 + 4z^3 + 6(x-1)z^2 + 6z^2 + 4(x-1)z + 4z + (x-1) + 1 = xz^4 + 4xz^3 + 6xz^2 + 4xz + x = x(z+1)^4, \qquad (4.4.1)$$
and
$$P_{G,\Sigma}(X, Y, A, B) = XB^2 + B^2 + 4XB + 4B + 4X + 2XAB + 4 + 2AB + 4XA + 4A + XA^2 + A^2. \qquad (4.4.2)$$


Figure 4.3: A graph cellularly embedded in the two-holed torus on the right, and its dual, represented as a ribbon graph, on the left.

And note that
$$z^2 P_{G,\Sigma}(x-1, y-1, z^{-1}, z) = z^2 \left( (x-1)z^2 + z^2 + 4(x-1)z + 4z + 6(x-1) + 6 + 4(x-1)z^{-1} + 4z^{-1} + (x-1)z^{-2} + z^{-2} \right) = xz^4 + 4xz^3 + 6xz^2 + 4xz + x = x(z+1)^4. \qquad (4.4.3)$$
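The identity of theorem 4.3.1 can also be checked mechanically for this example. Assuming the rank data of table 4.1 ($e_1$ is the only non-loop edge, so $r_{C(G)}(L)$ is $1$ or $0$ according as $e_1 \in L$, and $r_{B(G^*)}(L) = |L|$), a brute-force evaluation of formula (4.1.2) over all 32 subsets reproduces $x(z+1)^4$ — an illustrative sketch whose helper names are ours:

```python
from itertools import combinations

# Edge set of the graph in figure 4.3; e1 is the only non-loop edge.
E = ['e1', 'e2', 'e3', 'e4', 'e5']
rC_full, rB_full = 1, 5  # r(C(G)) and r(B(G*)); their difference is 2g = 4

def rC(L):  # circuit-matroid rank: only e1 can lie in a spanning forest
    return 1 if 'e1' in L else 0

def rB(L):  # bond-matroid rank of the dual; from table 4.1 it is just |L|
    return len(L)

def LV(x, y, z):
    """Formula (4.1.2) summed over all subsets of E."""
    total = 0
    for k in range(len(E) + 1):
        for L in combinations(E, k):
            total += ((x - 1) ** (rC_full - rC(L))
                      * (y - 1) ** (len(L) - rB(L))
                      * z ** ((rB_full - rB(L)) - (rC_full - rC(L))))
    return total

for x, y, z in [(3, 7, 2), (0, 1, 5), (-1, 4, 3)]:
    assert LV(x, y, z) == x * (z + 1) ** 4  # matches equation (4.4.1)
print("LV agrees with x(z+1)^4 on sample points")
```

Since $r_{B(G^*)}(L) = |L|$ here, the $(y-1)$ factor is always $1$, which is why $y$ does not appear in the final answer.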

4.5 Duality

A fact about duality for a graph G cellularly embedded in a surface Σ, analogous to that for the Tutte polynomial, $T_G(X, Y) = T_{G^*}(Y, X)$, is given in the following restatement of theorem 3.1 of Krushkal's paper [Kru11].

Table 4.1: Calculations by subset of the necessary data for our polynomials.

Subset L          | r_C(G)(L) | r_B(G*)(L) | LV monomial | c(L) | k(L) | s(L) | s⊥(L) | Krushkal monomial
∅                 | 0 | 0 | (x−1)z^4 | 2 | 0 | 0 | 4 | XB^2
{e1}              | 1 | 1 | z^4      | 1 | 0 | 0 | 4 | B^2
{e2}              | 0 | 1 | (x−1)z^3 | 2 | 0 | 0 | 2 | XB
{e3}              | 0 | 1 | (x−1)z^3 | 2 | 0 | 0 | 2 | XB
{e4}              | 0 | 1 | (x−1)z^3 | 2 | 0 | 0 | 2 | XB
{e5}              | 0 | 1 | (x−1)z^3 | 2 | 0 | 0 | 2 | XB
{e1,e2}           | 1 | 2 | z^3      | 1 | 0 | 0 | 2 | B
{e1,e3}           | 1 | 2 | z^3      | 1 | 0 | 0 | 2 | B
{e1,e4}           | 1 | 2 | z^3      | 1 | 0 | 0 | 2 | B
{e1,e5}           | 1 | 2 | z^3      | 1 | 0 | 0 | 2 | B
{e2,e3}           | 0 | 2 | (x−1)z^2 | 2 | 0 | 0 | 0 | X
{e2,e4}           | 0 | 2 | (x−1)z^2 | 2 | 0 | 2 | 2 | XAB
{e2,e5}           | 0 | 2 | (x−1)z^2 | 2 | 0 | 0 | 0 | X
{e3,e4}           | 0 | 2 | (x−1)z^2 | 2 | 0 | 0 | 0 | X
{e3,e5}           | 0 | 2 | (x−1)z^2 | 2 | 0 | 2 | 2 | XAB
{e4,e5}           | 0 | 2 | (x−1)z^2 | 2 | 0 | 0 | 0 | X
{e1,e2,e3}        | 1 | 3 | z^2      | 1 | 0 | 0 | 0 | 1
{e1,e2,e4}        | 1 | 3 | z^2      | 1 | 0 | 2 | 2 | AB
{e1,e2,e5}        | 1 | 3 | z^2      | 1 | 0 | 0 | 0 | 1
{e1,e3,e4}        | 1 | 3 | z^2      | 1 | 0 | 0 | 0 | 1
{e1,e3,e5}        | 1 | 3 | z^2      | 1 | 0 | 2 | 2 | AB
{e1,e4,e5}        | 1 | 3 | z^2      | 1 | 0 | 0 | 0 | 1
{e2,e3,e4}        | 0 | 3 | (x−1)z   | 2 | 0 | 2 | 0 | XA
{e2,e3,e5}        | 0 | 3 | (x−1)z   | 2 | 0 | 2 | 0 | XA
{e2,e4,e5}        | 0 | 3 | (x−1)z   | 2 | 0 | 2 | 0 | XA
{e3,e4,e5}        | 0 | 3 | (x−1)z   | 2 | 0 | 2 | 0 | XA
{e1,e2,e3,e4}     | 1 | 4 | z        | 1 | 0 | 2 | 0 | A
{e1,e2,e3,e5}     | 1 | 4 | z        | 1 | 0 | 2 | 0 | A
{e1,e2,e4,e5}     | 1 | 4 | z        | 1 | 0 | 2 | 0 | A
{e1,e3,e4,e5}     | 1 | 4 | z        | 1 | 0 | 2 | 0 | A
{e2,e3,e4,e5}     | 0 | 4 | (x−1)    | 2 | 0 | 4 | 0 | XA^2
{e1,e2,e3,e4,e5}  | 1 | 5 | 1        | 1 | 0 | 4 | 0 | A^2

Theorem 4.5.1. If a graph G is cellularly embedded in a surface Σ then the

Krushkal polynomials of these graphs are related by

$$P_{G,\Sigma}(X, Y, A, B) = P_{G^*,\Sigma}(Y, X, B, A). \qquad (4.5.1)$$

Applying theorem 4.3.1 we get the following result.

Corollary 4.5.2. If a graph G is cellularly embedded in a surface Σ then the

Las Vergnas polynomials of these graphs are related by

$$LV_{G,\Sigma}(x, y, z) = LV_{G^*,\Sigma}(y, x, z^{-1}). \qquad (4.5.2)$$

CHAPTER 5

HIGHER DIMENSIONS

5.1 Main result

Let $M$ be a closed, oriented $2n$-dimensional manifold with $K$ a simplicial or CW complex for $M$. Let $LV_{K,M}(x, y, z)$ be the Las Vergnas polynomial of the matroid perspective $M_\rho \to M_{\partial_n}$ from theorem 3.5.1, where the chain complex in question is that of $K$ with coefficients in $\mathbb{R}$.

Theorem 5.1.1. The Krushkal polynomial from definition 3.4.1 and the chain matroid polynomial at $n$ are related by the equation
$$LV_{K,M}(x, y, z) = z^{\beta_n / 2} \, P_{K,M}(x - 1, y - 1, z^{-1/2}, z^{1/2}), \qquad (5.1.1)$$
where $\beta_n(M)$ is the $n$-th Betti number of $M$, i.e. the dimension of $H_n(M; \mathbb{R})$.

We will need the following lemmas. Throughout, let L be both a subset of

E = {n-cells of K} and the spanning subcomplex K(n−1) ∪ L. Also, throughout homology will be taken with real coefficients.

Lemma 5.1.2. $r(M_{\partial_n}) - r_{M_{\partial_n}}(L) = \dim H_{n-1}(L) - \dim H_{n-1}(M)$.

Proof. In $M_{\partial_n}$, since $H_{n-1}(L) = \ker(\partial_{n-1}|_L) / \operatorname{Im}(\partial_n|_{\operatorname{span} L})$,
$$r_{M_{\partial_n}}(L) = \dim \operatorname{span} \partial_n(L) = \dim \ker(\partial_{n-1}|_L) - \dim H_{n-1}(L). \qquad (5.1.2)$$
Also, $K$ and $L$ have the same $(n-1)$- and $(n-2)$-cells, so $\ker(\partial_{n-1}|_L) = \ker \partial_{n-1}$. So $r_{M_{\partial_n}}(L) = \dim \ker \partial_{n-1} - \dim H_{n-1}(L)$. With $L = E$ (or, alternatively, $L = K^{(n)}$), this gives us
$$r(M_{\partial_n}) = r_{M_{\partial_n}}(E) = \dim \ker \partial_{n-1} - \dim H_{n-1}(K) = \dim \ker \partial_{n-1} - \dim H_{n-1}(M). \qquad (5.1.3)$$
And so $r(M_{\partial_n}) - r_{M_{\partial_n}}(L) = \dim H_{n-1}(L) - \dim H_{n-1}(M)$.

Lemma 5.1.3. $|L| - r_{M_\rho}(L) = k(L)$.

Recall the diagram that gave us $M_\rho$:

[Diagram: the bottom row is the truncated complex $C_{n+1} \xrightarrow{\partial_{n+1}} C_n \xrightarrow{\partial_n} \operatorname{Im} \partial_n \to 0$; the column through $C_n/\operatorname{Im}\partial_{n+1}$ is the short exact sequence $0 \to H_n(C_*) \to C_n/\operatorname{Im}\partial_{n+1} \xrightarrow{\pi} \operatorname{Im}\partial_n \to 0$; and the diagonal projection $\rho : C_n \to C_n/\operatorname{Im}\partial_{n+1}$ satisfies $\pi \circ \rho = \partial_n$.]

Proof. If we take the crooked sequence from our diagram above and dualize it, we get two exact sequences:
$$C_{n+1} \xrightarrow{\partial_{n+1}} C_n \xrightarrow{\rho} C_n / \operatorname{Im} \partial_{n+1} \to 0, \text{ and} \qquad (5.1.4)$$
$$C^{n+1} \xleftarrow{\delta^{n+1}} C^n \xleftarrow{i} \ker \delta^{n+1} \leftarrow 0, \qquad (5.1.5)$$
since $C_n / \operatorname{Im} \partial_{n+1} = \operatorname{coker} \partial_{n+1} = (\ker \delta^{n+1})^*$. Noting that the set of generators of $C^n = (C_n)^*$ has a natural bijection with $E$, we construct a matroid $M_{\delta^{n+1}}$ which proposition 2.6.9 tells us is dual to $M_\rho$. As in the calculation of $r_{M_{\partial_n}}$ above, we get the following, where $\overline{L}$ is as in the construction for lemma 3.3.2:
$$r_{M_{\delta^{n+1}}}(E \smallsetminus L) = \dim \ker \delta^{n+2} - \dim H^{n+1}(\overline{L}) = \dim \ker \delta^{n+2} - \dim H_{n-1}(\overline{L}). \qquad (5.1.6)$$
The second equality comes from Poincaré duality. Using this, the dual rank formula from proposition 2.6.4, and lemma 5.3 from [KR10], we see that
$$r_{M_\rho}(L) + k(L) = \left( r_{M_{\delta^{n+1}}}(E \smallsetminus L) + |L| - r(M_{\delta^{n+1}}) \right) + \left( \dim H_{n-1}(\overline{L}) - \dim H_{n-1}(K) \right)$$
$$= \left( \dim \ker \delta^{n+2} - \dim H_{n-1}(\overline{L}) \right) + |L| - \dim \ker \delta^{n+2} + \dim H_{n-1}(K) + \dim H_{n-1}(\overline{L}) - \dim H_{n-1}(K) = |L|. \qquad (5.1.7)$$
So $|L| - r_{M_\rho}(L) = k(L)$.

Lemma 5.1.4. $\left(r(M_\rho) - r_{M_\rho}(L)\right) - \left(r(M_{\partial_n}) - r_{M_{\partial_n}}(L)\right) = s^\perp(L)/2 - s(L)/2 + \beta_n / 2$.

Proof. First, note that since we are working over a field,
$$s^\perp(L) - s(L) = \dim \frac{V^\perp}{V \cap V^\perp} - \dim \frac{V}{V \cap V^\perp} = \left(\dim V^\perp - \dim(V \cap V^\perp)\right) - \left(\dim V - \dim(V \cap V^\perp)\right) = \dim V^\perp(L) - \dim V(L). \qquad (5.1.8)$$
Recall that $V(L) = \operatorname{Im}(i_* : H_n(L) \to H_n(M))$, and in fact $V(L) = (\operatorname{span} \rho(L)) \cap H_n(M)$. For if $i_*[a] \in V(L)$ for some $[a] \in H_n(L)$ then, since $L$ has no $(n+1)$-simplices, which means $H_n(L) = \ker(\partial_n|_L)$, we have $\partial_n(a) = 0$. So $[a] \in H_n(M)$. And $[a]$ is clearly in $\operatorname{span} \rho(L)$ since $a \in C_n(L)$. Conversely, if $[b] \in \operatorname{span} \rho(L) \cap H_n(M) = \rho(\operatorname{span} L) \cap H_n(M)$, then $b = \rho(\tilde{b})$ for some $\tilde{b} \in \operatorname{span} L$ with $\partial_n(\tilde{b}) = 0$. Hence $[b] = i_*([\tilde{b}]) \in \operatorname{Im} i_*$, and $V(L) = \operatorname{span} \rho(L) \cap H_n(M)$. And if we consider the vertical exact sequence from our diagram,
$$0 \to H_n(M) \to C_n / \operatorname{Im} \partial_{n+1} \xrightarrow{\pi} \operatorname{Im} \partial_n \to 0, \qquad (5.1.9)$$
in this context, we see that we must have $\dim V(L) = \dim \operatorname{span} \rho(L) - \dim \operatorname{span} \partial_n(L) = r_{M_\rho}(L) - r_{M_{\partial_n}}(L)$.

For $\dim V^\perp(L)$ it is worth noting that the intersection product makes $H_n(M)$ into a nonsingular metric vector space, with the product being either symmetric or skew-symmetric depending on $n$. Theorem 11.8 of [Rom05] tells us, then, that $\dim V(L) + \dim V^\perp(L) = \dim H_n(M)$, so $\dim V^\perp(L) = \beta_n - \dim V(L)$. Hence
$$\tfrac{1}{2}\left(s^\perp(L) - s(L) + \beta_n\right) = \tfrac{1}{2}\left(\dim V^\perp(L) - \dim V(L) + \beta_n\right) = \tfrac{1}{2}\left(\beta_n - 2 \dim V(L) + \beta_n\right) = \beta_n - \dim V(L) = \beta_n - \left(r_{M_\rho}(L) - r_{M_{\partial_n}}(L)\right). \qquad (5.1.10)$$
Finally, again because we are working over a field,
$$\beta_n = \dim H_n(M) = \dim \ker \partial_n - \dim \operatorname{Im} \partial_{n+1} = \dim(C_n / \operatorname{Im} \partial_{n+1}) - \dim(C_n / \ker \partial_n) = r(M_\rho) - r(M_{\partial_n}). \qquad (5.1.11)$$
Together with equation (5.1.10) this gives, after some rearranging of the side with matroid ranks, our claim.

We now proceed with the proof of the main theorem.

Proof of Theorem 5.1.1. If for each $L \subseteq E$ we get the equality
$$(x-1)^{r(M_{\partial_n}) - r_{M_{\partial_n}}(L)} (y-1)^{|L| - r_{M_\rho}(L)} z^{(r(M_\rho) - r_{M_\rho}(L)) - (r(M_{\partial_n}) - r_{M_{\partial_n}}(L))} = z^{\beta_n/2} (x-1)^{\dim H_{n-1}(L) - \dim H_{n-1}(M)} (y-1)^{k(L)} z^{-s(L)/2} z^{s^\perp(L)/2}, \qquad (5.1.12)$$
then the claim will hold. But this is exactly what lemmas 5.1.2 through 5.1.4 show.

5.2 Another simple example

Consider $(T^2)^n = (S^1)^{2n}$. If we decompose $S^1$ cellularly as one 0-cell $e_0$ and one 1-cell $e_1$, then the cells of $(T^2)^n$ are of the form $\prod_{i=1}^{2n} e_{k_i}$ for $k_i \in \mathbb{Z}_2$, and if we denote $\vec{k} = (k_1, \dots, k_{2n})$ and let $|\vec{k}|$ be the number of ones in $\vec{k}$ (i.e. $|\vec{k}| = \sum k_i$), then $|\vec{k}|$ also is the dimension of this cell. The boundary map in the resulting chain complex is the zero map, so the $n$-homology is generated by the $n$-cells of $(T^2)^n$. Denote this cellulation $K$.

Let $E$ be the set of $n$-cells. There are $\binom{2n}{n}$ such cells, and the intersection product of any two is zero if and only if they have an $e_1$ in the same spot, i.e.
$$\prod_{i=1}^{2n} e_{k_i} \cdot \prod_{i=1}^{2n} e_{j_i} = 0 \text{ if and only if } k_i = j_i = 1 \text{ for some } i. \qquad (5.2.1)$$
If $u$ is an $n$-cell $\prod_{i=1}^{2n} e_{k_i}$ then define
$$\bar{u} = \prod_{i=1}^{2n} e_{1 - k_i}, \qquad (5.2.2)$$
and if $L$ is a subset of the set of $n$-cells (and also a spanning $n$-subcomplex), then let $\bar{L}$ be the obvious application of this operation to everything in $L$.

Then $V(L) = \langle L \rangle$ (the subspace generated by $L$), $V^\perp(L) = \langle E \smallsetminus \bar{L} \rangle$, and $V(L) \cap V^\perp(L)$ is spanned by $\{u \in L \mid \bar{u} \notin L\}$. The dimension of this last space is calculable for specific subsets, but as I will be doing a mass calculation, I will denote it, following Krushkal, $\ell(L)$. Note that $k(L) = 0$ and $\dim H_{n-1}(L) = \dim H_{n-1}((T^2)^n)$ for all $L$ because of the trivial boundary maps. Using this we see that
$$P_{K,(T^2)^n}(X, Y, A, B) = \sum_{L \subseteq K^{(n)}} A^{|L| - \ell(L)} \, B^{\binom{2n}{n} - |L| - \ell(L)}. \qquad (5.2.3)$$

The calculation of the Las Vergnas polynomial is much simpler. Since $\partial_n$ is the trivial map (and hence $\operatorname{Im} \partial_n = 0$), we get that $r_{M_{\partial_n}}(L) = 0$ and $r_{M_\rho}(L) = |L|$ for all $L$. So the chain matroid polynomial at $n$ is
$$LV_{K,(T^2)^n}(x, y, z) = \sum_{L \subseteq E} z^{\binom{2n}{n} - |L|} = (z + 1)^{\binom{2n}{n}}. \qquad (5.2.4)$$
And, of course,
$$z^{\frac{1}{2}\binom{2n}{n}} \sum_{L \subseteq K^{(n)}} (z^{-1/2})^{|L| - \ell(L)} \, (z^{1/2})^{\binom{2n}{n} - |L| - \ell(L)} = (z + 1)^{\binom{2n}{n}}. \qquad (5.2.5)$$
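This cancellation of $\ell(L)$ can be watched happening term by term in a brute-force check (an illustrative sketch, not from the text; `krushkal_side` is our name). Tracking exponents in units of $z^{1/2}$, each summand collapses to $z^{\binom{2n}{n} - |L|}$ regardless of $\ell(L)$:

```python
from itertools import combinations
from math import comb

def krushkal_side(n, z):
    """Right side of theorem 5.1.1 for the product cell structure on (T^2)^n:
    z^(N/2) * P_K(A = z^(-1/2), B = z^(1/2)) with N = C(2n, n) middle cells.
    An n-cell is recorded as the set of slots holding an e_1 factor."""
    cells = list(combinations(range(2 * n), n))
    N = comb(2 * n, n)
    comp = {c: tuple(sorted(set(range(2 * n)) - set(c))) for c in cells}
    total = 0
    for k in range(N + 1):
        for L in combinations(cells, k):
            S = set(L)
            l = sum(1 for u in L if comp[u] not in S)  # dim(V cap V_perp)
            s, s_perp = k - l, N - k - l               # exponents of A and B
            half = N - s + s_perp                      # power of z^(1/2) here
            assert half % 2 == 0                       # l cancels out
            total += z ** (half // 2)                  # i.e. z^(N - |L|)
    return total

# For n = 1 (the torus, N = 2) both sides of (5.1.1) equal (z+1)^2.
for z in (2, 3, 10):
    assert krushkal_side(1, z) == (z + 1) ** 2
print("theorem 5.1.1 check passed for the torus")
```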

CHAPTER 6

FURTHER DIRECTIONS

6.1 More than just the middle, more than just one structure

The Krushkal polynomial is only definable in the middle dimension of an even-dimensional manifold, but the chain matroid polynomial does not have that restriction. One of the first things I plan to explore is the question of what happens when we look at all possible perspectives along a chain complex. For example, following Section 5.2, consider $(S^1)^n$ built as the product complex of

$n$ finite cell complexes, each $S^1 = e_0 \cup e_1$. For each $k \in \{1, \ldots, n\}$, $M_{\partial_k}$ is the trivial matroid (i.e. $r_{M_{\partial_k}}(L) = 0$ for all subsets $L$) on the set $E$ of $\binom{n}{k}$ $k$-cells, and $M_{\rho_k}$ is the free matroid (i.e. $r_{M_{\rho_k}}(L) = |L|$ for all $L$) on the same elements, so

$$LV^k_{(e_0 \cup e_1)^n,\,(S^1)^n}(x, y, z) = \sum_{L \subseteq E} z^{\binom{n}{k}-|L|} = \sum_{j=0}^{\binom{n}{k}} \binom{\binom{n}{k}}{j} z^{\binom{n}{k}-j} = (z+1)^{\binom{n}{k}}, \quad (6.1.1)$$

where $LV^k_{K,M}$ is the chain matroid polynomial on the $k$th level of a simplicial or CW complex $K$ for an $n$-manifold $M$.
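This reduction to $(z+1)^{\binom{n}{k}}$ can be spot-checked by evaluating the general Las Vergnas sum directly with a trivial lower rank function and a free upper one. This is a sketch of my own; `las_vergnas` and its rank-function arguments are ad hoc names:

```python
from itertools import combinations
from math import comb, isclose

def las_vergnas(E, r_bot, r_top, x, y, z):
    """Evaluate the Las Vergnas sum for a perspective with rank functions
    r_bot (for M_{∂_k}) and r_top (for M_{ρ_k}):
    sum over L of (x-1)^{r_bot(E)-r_bot(L)} (y-1)^{|L|-r_top(L)}
                  z^{(r_top(E)-r_top(L)) - (r_bot(E)-r_bot(L))}."""
    total = 0.0
    for r in range(len(E) + 1):
        for L in map(frozenset, combinations(E, r)):
            total += ((x - 1) ** (r_bot(E) - r_bot(L))
                      * (y - 1) ** (len(L) - r_top(L))
                      * z ** ((r_top(E) - r_top(L)) - (r_bot(E) - r_bot(L))))
    return total

n, k = 4, 2
E = frozenset(range(comb(n, k)))        # one element per k-cell of (S^1)^n
trivial = lambda L: 0                   # rank function of the trivial matroid
free = len                              # rank function of the free matroid
z = 3.0
assert isclose(las_vergnas(E, trivial, free, 2.0, 2.0, z),
               (z + 1) ** comb(n, k))   # agrees with (6.1.1)
```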

Or consider $S^n$ built from two $k$-cells for each $k \in \{0, \ldots, n\}$. In this case

$M_{\rho_k} = M_{\partial_k} = U_{2,1}$,¹ the uniform matroid of rank one on two elements. So

$$LV^k_{K,S^n}(x, y, z) = (x - 1) + 2 + (y - 1) = x + y \quad (6.1.2)$$
for all $k$.
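A direct check of (6.1.2) from the general sum (again my own sketch, with ad hoc names): since both matroids equal $U_{2,1}$, the $z$-exponent vanishes identically, and the four subsets contribute $(x-1)$, $1$, $1$, and $(y-1)$:

```python
from itertools import combinations

def u21_rank(L):
    """Rank function of U_{2,1}: every nonempty subset has rank 1."""
    return min(len(L), 1)

def lv_two_cells(x, y, z):
    """The level-k Las Vergnas polynomial for S^n built from two k-cells."""
    E = frozenset({"a", "b"})
    total = 0.0
    for r in range(len(E) + 1):
        for L in map(frozenset, combinations(E, r)):
            # both matroids are U_{2,1}, so the z-exponent is identically 0
            total += ((x - 1) ** (u21_rank(E) - u21_rank(L))
                      * (y - 1) ** (len(L) - u21_rank(L))
                      * z ** 0)
    return total

# (x-1) + 1 + 1 + (y-1) = x + y, independently of z
assert lv_two_cells(3.0, 5.0, 7.0) == 3.0 + 5.0
```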

Another thing worth considering is what changes when we change the structure on the manifold. That is to say, what is the relationship between the matroids, perspectives, and polynomials of two different structures on a manifold?

6.2 Orientation

We have so far been studying only oriented manifolds; however, there is a concept of oriented matroids that may yield a more general form for the perspective.

Central to the idea is the concept of a signed set. A signed set is a set $X$ together with a partition of $X$ into the set of positive elements, $X^+$, and the set of negative elements, $X^-$. We say that a signed set $X$ is a restriction of the signed set $Y$ (or conforms to $Y$) if $X^+ \subseteq Y^+$ and $X^- \subseteq Y^-$, and call two signed sets equal if they are restrictions of each other, i.e. if $X^+ = Y^+$ and $X^- = Y^-$.

A signed set $X$ is called positive if $X^- = \emptyset$ and negative if $X^+ = \emptyset$. The empty set gets the obvious and only partition $\emptyset^+ = \emptyset$ and $\emptyset^- = \emptyset$. The opposite of a signed set $X$ is the signed set $Y$ such that $X^+ = Y^-$ and $X^- = Y^+$; it is denoted $-X$.

¹The general uniform matroid $U_{n,k}$ is a matroid on $n$ elements such that a subset $L$ is independent if and only if $|L| \leq k$.

We can now create axiom systems for oriented matroids. For example, a collection $\mathcal{C}$ of signed subsets of a set $E$ is the set of signed circuits of an oriented matroid on $E$ if and only if it satisfies the following axioms from [BLS+99].

(sC0) $\emptyset \notin \mathcal{C}$,

(sC1) if $X \in \mathcal{C}$ then $-X \in \mathcal{C}$,

(sC2) if $X, Y \in \mathcal{C}$ are such that $X \subseteq Y$ as unsigned sets, then $X = \pm Y$, and

(sC3) for all $X, Y \in \mathcal{C}$ with $X \neq -Y$ and $e \in X^+ \cap Y^-$, there is a $Z \in \mathcal{C}$ such that $Z^+ \subseteq (X^+ \cup Y^+) \smallsetminus \{e\}$ and $Z^- \subseteq (X^- \cup Y^-) \smallsetminus \{e\}$.
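These axioms can be verified by brute force on a small example. The sketch below (my own code, not from [BLS+99]; in the elimination axiom I skip the pair $X = -Y$, as in [BLS+99]) checks them for the signed circuits of the directed graph with edges $a\colon 1 \to 2$, $b\colon 2 \to 3$, and parallel edges $c, d\colon 1 \to 3$:

```python
def neg(X):
    """The opposite signed set."""
    return (X[1], X[0])

def support(X):
    """The underlying unsigned set."""
    return X[0] | X[1]

def axioms_hold(C):
    if (frozenset(), frozenset()) in C:                        # (sC0)
        return False
    if any(neg(X) not in C for X in C):                        # (sC1)
        return False
    for X in C:                                                # (sC2)
        for Y in C:
            if support(X) <= support(Y) and X not in (Y, neg(Y)):
                return False
    for X in C:                                                # (sC3)
        for Y in C:
            if X == neg(Y):
                continue
            for e in X[0] & Y[1]:
                if not any(Z[0] <= (X[0] | Y[0]) - {e} and
                           Z[1] <= (X[1] | Y[1]) - {e} for Z in C):
                    return False
    return True

# Signed circuits of the digraph: its directed cycles, signed by traversal.
circuits = set()
for X in [(frozenset("ab"), frozenset("c")),
          (frozenset("ab"), frozenset("d")),
          (frozenset("c"), frozenset("d"))]:
    circuits.add(X)
    circuits.add(neg(X))
assert axioms_hold(circuits)
```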

6.3 Infinite structures

Notice that the only restriction on the construction of $LV^k_{K,M}$ in 6.1 is that the $k$-skeleton of the simplicial complex $K$ for $M$ (or the set of $k$-cells of a

CW decomposition) be finite. Potentially, we could use this tool on infinite-dimensional complexes. For example, we could extend the computation on $S^n$ of that section to $S^\infty$. But if there are an infinite number of $k$-simplices or cells, then, currently, we have no recourse while using standard matroids.

However, work is being done on infinite matroids ([Wel10], [BDK+10]) which could allow us to examine infinite simplicial complexes. To start, we can carry over some of the rules of independence from the finite version. Let S be a set.

If there is a non-empty family $\mathcal{I}$ of subsets of $S$, called independent sets, such that

(I1) if $A \in \mathcal{I}$ and $B \subseteq A$ then $B \in \mathcal{I}$, and

(I2) if $A, B \in \mathcal{I}$ are finite with $|A| = |B| + 1$, then there exists $x \in A \smallsetminus B$ such that $B \cup \{x\} \in \mathcal{I}$,

then we call $(S, \mathcal{I})$ a pre-independence space or pi-space.
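For finite families the two axioms can be checked mechanically. A sketch of my own (hypothetical names), using the independent sets of the uniform matroid $U_{4,2}$ as an example:

```python
from itertools import combinations

def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def is_pi_space(I):
    """Check (I1) and (I2) for a finite family I of frozensets."""
    if not I:
        return False                         # the family must be non-empty
    for A in I:                              # (I1): closed under subsets
        if any(B not in I for B in powerset(A)):
            return False
    for A in I:                              # (I2): finite exchange
        for B in I:
            if len(A) == len(B) + 1 and \
               not any(B | {x} in I for x in A - B):
                return False
    return True

S = range(4)
independents = {L for L in powerset(S) if len(L) <= 2}   # U_{4,2}
assert is_pi_space(independents)
# Dropping the singletons breaks the subset axiom (I1):
assert not is_pi_space({L for L in powerset(S) if len(L) != 1})
```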

These two axioms, by themselves, prove to be insufficient, as we see if we consider the case when S is infinite and I consists of all finite subsets of S.

Notice that we cannot get a basis for this space. So we consider the following axiom.

(m) If $X$ is an independent subset of $S$ then there exists an independent set containing $X$ which is maximal with respect to containment.

We call a pi-space that also satisfies (m) an mpi-space. We can now define a base as a maximal independent subset.

But we may now end up with bases of different cardinalities. For consider two infinite cardinal numbers, $\alpha$ and $\beta$. Let $S$ and $T$ be two disjoint sets of cardinality $\alpha$ and $\beta$, respectively. We can define an mpi-space $(S \cup T, \mathcal{I})$ by letting the maximal members of $\mathcal{I}$ be all sets of the form $(S \smallsetminus A) \cup B$ or $(T \smallsetminus B) \cup A$ for finite subsets $A \subseteq S$ and $B \subseteq T$ such that $|A| = |B|$. This mpi-space has bases of cardinality both $\alpha$ and $\beta$.

An independence space is a pi-space (S, I) that also satisfies:

(FC) if X ⊆ S and every finite subset of X is independent, then X ∈ I.

This is called the finite character axiom. An independence space goes a good distance towards having all the features needed for the study of complexes, e.g. circuits.

6.4 Simple-homotopy

Simple-homotopy equivalence was an attempt to develop a combinatorial topology [Coh73].

It is a refinement of the notion of homotopy equivalence. To remind the reader, two spaces X and Y are homotopy equivalent if there are maps f : X → Y and g : Y → X such that f ◦ g and g ◦ f are homotopic to the appropriate identity maps. In an effort to render this combinatorial, J. W. Alexander and

J. H. C. Whitehead both considered elementary adjustments to existing simplicial complexes to arrive at complexes which can be considered “combinatorially equivalent”.

Following Cohen, we consider Whitehead's method. If $K$ and $L$ are simplicial complexes such that $L$ is a subcomplex of $K$ and $K = L \cup aA$, where $a$ is a vertex of $K$ and both $A$ and $aA$ are simplexes of $K$, then we say there is an elementary simplicial collapse from $K$ to $L$, or an elementary simplicial expansion from $L$ to $K$. If there is a finite chain of elementary simplicial collapses and expansions between two complexes, then they are said to have the same simple-homotopy type.
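Collapses are easy to experiment with on finite complexes. The following sketch (my own code, not from [Coh73]; names are ad hoc) finds a free pair $(A, aA)$, a simplex $A$ properly contained in exactly one simplex of one dimension higher, and deletes the pair; repeated collapses reduce a full 2-simplex to a point:

```python
from itertools import combinations

def faces(s):
    """All nonempty proper faces of a simplex (a frozenset of vertices)."""
    return [frozenset(f) for r in range(1, len(s))
            for f in combinations(s, r)]

def close(top_simplices):
    """The simplicial complex generated by the given simplices."""
    K = set(top_simplices)
    for s in top_simplices:
        K.update(faces(s))
    return K

def elementary_collapse(K):
    """Remove one free pair (A, aA) if possible, else return K unchanged."""
    for A in K:
        cofaces = [s for s in K if A < s]
        if len(cofaces) == 1 and len(cofaces[0]) == len(A) + 1:
            return K - {A, cofaces[0]}
    return K

K = close({frozenset({1, 2, 3})})        # a solid triangle: 7 simplices
while True:
    K_next = elementary_collapse(K)
    if K_next == K:
        break
    K = K_next
assert len(K) == 1                        # collapsed all the way to a vertex
```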

There seems to be the hint of a connection between simple-homotopy and the chain matroid perspective. Whether the perspective is simply an easier way to calculate the differences that simple-homotopy detects, or a sort of complementary notion, with the two together coming closer to a full combinatorial topology, I do not know. Further exploration may answer this question.

BIBLIOGRAPHY

[ACE+13] R. Askanazi, S. Chmutov, C. Estill, J. Michel, and P. Stollenwerk, Polynomial invariants of graphs on surfaces, Quantum Topology 4 (2013), no. 1, 77–90.

[BDK+10] Henning Bruhn, Reinhard Diestel, Matthias Kriesell, Rudi Pendavingh, and Paul Wollan, Axioms for infinite matroids, arXiv preprint arXiv: . . . (2010), 1–33.

[BLS+99] Anders Björner, Michel Las Vergnas, Bernd Sturmfels, Neil White, and Günter M. Ziegler, Oriented matroids, Cambridge University Press, Cambridge, 1999.

[Coh73] Marshall Cohen, A course in simple-homotopy theory, Springer-Verlag, New York, 1973.

[Hat02] Allen Hatcher, Algebraic Topology, Cambridge University Press, Cambridge, 2002.

[KR10] Vyacheslav Krushkal and David Renardy, A polynomial invariant and duality for triangulations, arXiv preprint arXiv:1012.1310 (2010), 1–19.

[Kru11] Vyacheslav Krushkal, Graphs, links, and duality on surfaces, Combinatorics, Probability & Computing (2011), 1–23.

[Las78] Michel Las Vergnas, Eulerian circuits of 4-valent graphs imbedded in surfaces, Colloquia Mathematica Societatis János Bolyai 25: Algebraic Methods in Graph Theory, Szeged (Hungary) (1978), 451–477.

[Las80] Michel Las Vergnas, On the Tutte Polynomial of a Morphism of Matroids, Annals of Discrete Mathematics 8 (1980), 7–20.

[Las99] Michel Las Vergnas, The Tutte Polynomial of a morphism of matroids I. Set-pointed matroids and matroid perspectives, Annales de l'institut Fourier 49 (1999), no. 3, 973–1015.

[Mil63] John Milnor, Morse theory, Princeton University Press, Princeton, N.J, 1963.

[Mun84] James R. Munkres, Elements of Algebraic Topology, Perseus Books, Reading, Mass., 1984.

[Nic07] Liviu Nicolaescu, An invitation to Morse theory, Springer, New York London, 2007.

[Oxl11] James Oxley, Matroid Theory, second ed., Oxford Mathematics, 2011.

[Rom05] Steven Roman, Advanced linear algebra, Springer, New York, 2005.

[Sma62] Stephen Smale, On the structure of manifolds, Amer. J. Math. 84 (1962), 387–399.

[Wel10] D. J. A. Welsh, Matroid Theory, Dover Publications, 2010.

[Whi31] Hassler Whitney, Non-Separable and Planar Graphs, Proceedings of the National Academy of Sciences of the United States of America 17 (1931), no. 2, 125–127.

[Whi87] Hassler Whitney, On the Abstract Properties of Linear Dependence, Classic Papers in Combinatorics 57 (1987), no. 3, 509–533.
