MATROID RELATIONSHIPS: MATROIDS FOR ALGEBRAIC TOPOLOGY
DISSERTATION
Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the Graduate School of The Ohio State University
By
Charles Estill, BS/MS
Graduate Program in Mathematics
The Ohio State University
2013
Dissertation Committee:
Sergei Chmutov, Advisor
Matthew Kahle
Thomas Kerler
Azita Manouchehri

© Copyright by
Charles Estill
2013

ABSTRACT
In [ACE+13] we found a relationship between two polynomials of a graph cellularly embedded in a surface: the Krushkal polynomial, based on the Tutte polynomial of a graph and using data from the algebraic topology of the graph and the surface, and the Las Vergnas polynomial for the matroid perspective from the bond matroid of the dual graph to the circuit matroid of the graph, B(G∗) → C(G). With Vyacheslav Krushkal having (with D. Renardy) expanded his polynomial to the nth dimension of a simplicial or CW decomposition of a 2n-dimensional manifold, a matroid perspective was found whose Las Vergnas polynomial would play a similar role to that in the 2-dimensional case. We hope that these matroids and the perspective will prove useful in the study of complexes.
This is dedicated to my family, whose trust in me has finally been justified.
ACKNOWLEDGMENTS
Thanks are due especially to my two thesis advisors: Ian Leary, who helped me learn so much, even if our wonderful possible result got snatched out from under us by a genius; and Sergei Chmutov, who helped me to the finish line. In addition, my gratitude to everyone associated with the Mathematics department of The Ohio State University is limitless. Finally, without the work of Ross Askanazi, Jonathan Michel, and Patrick Stollenwerk on the paper we wrote with Dr. Chmutov, this result might never have existed.
VITA
1972 ...... Year of birth
2004 ...... B.Sc. in Mathematics
2008 ...... MS in Mathematics
2004-Present ...... Graduate Teaching Associate, The Ohio State University
PUBLICATIONS
Askanazi, Ross; Chmutov, Sergei; Estill, Charles; Michel, Jonathan; Stollenwerk, Patrick. Polynomial invariants of graphs on surfaces.
FIELDS OF STUDY
Major Field: Mathematics
Specialization: Algebraic Topology
TABLE OF CONTENTS

Abstract
Dedication
Acknowledgments
Vita
List of Figures
List of Tables
1 Introduction

2 Matroids
2.1 Axioms of independence
2.2 Bases
2.3 The circuit axioms and graphical matroids
2.4 The rank function
2.5 Closure, hyperplanes, and spanners
2.6 Bond matroids and more general dual matroids
2.7 Minors
2.8 Matroid perspectives

3 Topology
3.1 Cell complexes
3.2 Cohomology and the cup product
3.3 Manifolds and Poincaré duality
3.4 Krushkal’s polynomial
3.5 Matroid perspectives for chain complexes

4 Graphs on Surfaces
4.1 The Las Vergnas polynomial
4.2 Krushkal’s polynomial for graphs in surfaces
4.3 Relationship
4.4 An example
4.5 Duality

5 Higher Dimensions
5.1 Main result
5.2 Another simple example

6 Further Directions
6.1 More than just the middle, more than just one structure
6.2 Orientation
6.3 Infinite structures
6.4 Simple-homotopy

Bibliography
LIST OF FIGURES

2.1 A simple four vertex graph
4.1 A cellular graph (with loops and parallel edges) on a 2-holed torus
4.2 The dual graph of the graph in figure 4.1
4.3 A graph cellularly embedded in the two-holed torus on the right, and its dual, represented as a ribbon graph, on the left
LIST OF TABLES

4.1 Calculations by subset of the necessary data for our polynomials
CHAPTER 1
INTRODUCTION
In the summer of 2010, in a working group on knot theory funded by VIGRE, we considered a possible relation between a polynomial defined by Vyacheslav Krushkal in [Kru11] for graphs embedded in a surface and the Tutte polynomial of the matroid perspective from the bond matroid of the dual of a ribbon graph to the circuit matroid of the graph. This exploration led to our paper [ACE+13]. Subsequently, Krushkal, together with David Renardy, gave us [KR10], which expanded his polynomial to one defined on the nth level of a triangulation of a 2n-manifold.
In chapter 2, I introduce and explain many of the basic concepts concerning matroids. Most of this follows the work in [Oxl11] and [Wel10], which are the main reference works for matroids. There are many axiom systems, all equivalent, for defining matroids. We will need to know several of them. In addition, we will need to know about dual matroids and matroid perspectives, both covered, along with the useful notion of a minor of a matroid, in the later sections of chapter 2.
In chapter 3 I cover some of the basics of algebraic topology needed. Most of
what I cover is well known. I used [Hat02] and [Mun84] as my main references.
It is in this chapter that I also share the definition of Krushkal’s polynomial from [KR10]. I also define here the matroid perspective that will fulfill the role that B(G∗) → C(G) does in the 2-dimensional case.
To help our geometric understanding of the final result, I recapitulate [ACE+13] in chapter 4. This is followed by my main theorem in chapter 5. And we finish in chapter 6 with some associated topics and ideas that might be worth exploring in the future.
CHAPTER 2
MATROIDS
Matroids were first conceived by H. Whitney as an abstract generalization of matrices, with a focus on questions of independence of subsets of the set of column vectors. Having previously defined an independent subgraph as one containing no cycles, he was able to simultaneously generalize graphs. Following his lead, we define a matroid as giving some information on subsets of a fixed set. There are several different ways of indicating this information, from the vector-algebra-influenced notion of independence to the nearly topological closure operator.
2.1 Axioms of independence
A matroid, M, is a finite set E and a collection of subsets I ⊆ P(E) satisfying the following axioms, (i1)-(i3).
(i1) ∅ ∈ I.
(i2) If X ∈ I and Y ⊆ X then Y ∈ I.
(i3) If U and V are in I with |V| < |U|, then there exists an element of E,
x ∈ U r V , such that V ∪ {x} ∈ I.
The subsets in I are called independent and those not in I are called dependent.
Further, we'll call the set E the ground set. Two matroids M and N are called isomorphic if there is a bijection between their ground sets that preserves the structure; that is to say, for example, independent sets are mapped to independent sets in both directions.
The following proposition and many others in this chapter follow the path well-trod by James Oxley in [Oxl11]. Those proofs not guided by Oxley were guided by D. J. A. Welsh’s [Wel10].
Proposition 2.1.1. Let A be an m × n matrix over a field F. Set E as the set of column vectors of A and let I consist of those subsets X ⊆ E which are linearly independent in F^m, the m-dimensional vector space over F. Then (E, I) is a matroid.
Proof. Trivially ∅ ∈ I, as the empty set of vectors is trivially linearly indepen- dent, so (i1) is satisfied. And if X is an independent set of column vectors, so is any subset, which means that (i2) is satisfied.
To show that (i3) is satisfied, let U and V be two independent sets of column vectors of A with |V| < |U|. Let W be the subspace of F^m spanned by U ∪ V. Note that the dimension of W, dim W, is at least |U|. If (i3) weren't true, and hence V ∪ {x} were linearly dependent for every x ∈ U r V, then W would be contained in the span of V, and thus

|U| ≤ dim W ≤ |V| < |U|, (2.1.1)

a contradiction. Hence (i3) is satisfied.
We denote the matroid thus obtained M[A] and call it the vector matroid of A. Any matroid, M, for which we can find a field F and a matrix A such that M is isomorphic to M[A] is called representable over the field F or just representable.
Example 2.1.2. Consider the vector matroid of the following matrix over R.

        0 0 1 0 1 0 0
    A = 0 1 1 0 0 1 1
        1 0 0 0 0 1 1
If we label the column vectors as one to seven from left to right, so that E = {1, 2, 3, 4, 5, 6, 7}, then the set of independent subsets is

I = { ∅, {1}, {2}, {3}, {5}, {6}, {7},
      {1, 2}, {1, 3}, {1, 5}, {1, 6}, {1, 7}, {2, 3}, {2, 5},
      {2, 6}, {2, 7}, {3, 5}, {3, 6}, {3, 7}, {5, 6}, {5, 7},
      {1, 2, 3}, {1, 2, 5}, {1, 3, 5}, {1, 3, 6}, {1, 3, 7}, {1, 5, 6},
      {1, 5, 7}, {2, 3, 6}, {2, 3, 7}, {2, 5, 6}, {2, 5, 7}, {3, 5, 6}, {3, 5, 7} }.
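As a sanity check of mine (not part of the text), the list above can be recomputed mechanically: a subset of columns is independent exactly when the rank of the corresponding submatrix equals the subset's size. A sketch using only the Python standard library, with exact rational arithmetic:

```python
from fractions import Fraction
from itertools import combinations

# Columns of the matrix A from example 2.1.2, labeled 1..7.
COLS = {
    1: (0, 0, 1), 2: (0, 1, 0), 3: (1, 1, 0), 4: (0, 0, 0),
    5: (1, 0, 0), 6: (0, 1, 1), 7: (0, 1, 1),
}

def rank(vectors):
    """Rank of a list of length-3 vectors over Q, by Gaussian elimination."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    rk, col = 0, 0
    while rk < len(rows) and col < 3:
        piv = next((i for i in range(rk, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            col += 1
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(rk + 1, len(rows)):
            if rows[i][col] != 0:
                f = rows[i][col] / rows[rk][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rk])]
        rk += 1
        col += 1
    return rk

def independent(subset):
    return rank([COLS[i] for i in subset]) == len(subset)

indep = [set(s) for k in range(8) for s in combinations(range(1, 8), k)
         if independent(s)]
print(len(indep), sum(1 for s in indep if len(s) == 3))
```

Running this recovers the thirty-four sets displayed above, thirteen of them of the maximal size three.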
Theorem 2.1.3. If U and V are independent sets such that |V| < |U|, then there is a set W ⊆ U r V such that |V ∪ W| = |U| and V ∪ W ∈ I.

Proof. Consider the collection

{ X ⊆ U r V | V ∪ X ∈ I }. (2.1.2)
Choose W to be an element of this collection of maximal (finite) cardinality. To see that W fulfills the necessary condition, assume that it doesn’t: |V ∪ W | <
|U|. Then, by (i3), there is an x ∈ U r (V ∪ W ), such that V ∪ W ∪ {x} is independent. But W ∪ {x} ⊆ U r V and |W ∪ {x}| > |W |, contradicting the maximality of W .
2.2 Bases
If any subset of an independent set is independent, then for many purposes we need only concern ourselves with the maximal independent sets, that is to say, independent sets with no independent proper supersets. We call such a set a basis or a base.
Example 2.2.1. The bases of example 2.1.2 are the three element subsets which are in I.
Proposition 2.2.2. All bases have the same cardinality. That is, if A and B are bases of a matroid M = (E, I), then |A| = |B|.
Proof. Let A, B ⊆ E be bases such that |A| ≤ |B|. If the cardinality of A were strictly less than that of B then by (i3) there would be an element x ∈ B r A such that A ∪ {x} is independent, which would contradict the maximality of A.
Hence |A| = |B|.

For a finite set E and a collection of its subsets B ⊆ P(E), consider the following axioms.
(b1) B is nonempty.
(b2) If B1, B2 ∈ B and x ∈ B1 r B2, then there is an element y ∈ B2 r B1 such that (B1 ∪ {y}) r {x} ∈ B.
Proposition 2.2.3. The maximal independent sets of a matroid satisfy (b1) and (b2).
Proof. Let M = (E, I) be a matroid. By axiom (i1) ∅ is independent. Either
∅ is, itself, a maximal independent subset of E, in which case ∅ ∈ B, or there is some non-empty independent set I, which, since E is finite, must have some maximal independent superset, which would be in B. So (b1) is satisfied.
Let B1 and B2 be distinct maximal independent sets, and let x ∈ B1 r B2.
Then B1 r {x} and B2 are independent sets. And, by the previous proposition,
|B1| = |B2|, so |B1 r {x}| < |B2|. By axiom (i3) there is a
y ∈ B2 r (B1 r {x}) = B2 r B1 (2.2.1)

such that (B1 r {x}) ∪ {y} is independent. But then, being independent, it must be contained in a maximal independent set B′ ⊇ (B1 r {x}) ∪ {y}. The previous proposition comes into play again to tell us that |B′| = |B1|, and since x is in B1 and y is not, |(B1 r {x}) ∪ {y}| = |B1|. So B′ = (B1 r {x}) ∪ {y} is a maximal independent set, which means that (b2) is satisfied.
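Axioms (b1) and (b2) can also be checked exhaustively on a concrete collection. The sketch below (an illustration of mine, not from the text) takes the thirteen three-element independent sets of example 2.1.2, i.e. the bases of example 2.2.1, and verifies the exchange property directly:

```python
from itertools import product

# The thirteen bases of the matroid of example 2.1.2 (cf. example 2.2.1).
BASES = [frozenset(b) for b in (
    {1, 2, 3}, {1, 2, 5}, {1, 3, 5}, {1, 3, 6}, {1, 3, 7}, {1, 5, 6},
    {1, 5, 7}, {2, 3, 6}, {2, 3, 7}, {2, 5, 6}, {2, 5, 7}, {3, 5, 6},
    {3, 5, 7},
)]

def satisfies_b2(bases):
    """(b2): for all B1, B2 and x in B1 - B2, some y in B2 - B1 repairs B1."""
    collection = set(bases)
    return all(
        any((b1 - {x}) | {y} in collection for y in b2 - b1)
        for b1, b2 in product(bases, repeat=2)
        for x in b1 - b2
    )

print(len(BASES) > 0, satisfies_b2(BASES))  # (b1) and (b2)
```

A collection like {{1, 2}, {3, 4}} fails the same check, since no single exchange turns one set into the other.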
Proposition 2.2.4. If (E, B) is a pair satisfying (b1) and (b2), then all elements of B are of the same size.

Proof. Suppose that this were not true, i.e. there were elements of B of different sizes. Then the set
A = {A r B | A, B ∈ B and |B| < |A|} (2.2.2)
and its elements are nonempty. Choose a pair B1, B2 ∈ B with |B2| < |B1| such that B1 r B2 is of minimum size in A. Since B1 r B2 ≠ ∅ we can choose an x ∈ B1 r B2, and (b2) gives us a y ∈ B2 r B1 such that (B1 ∪ {y}) r {x} ∈ B.
Clearly |(B1 ∪ {y}) r {x}| = |B1| > |B2| and since we’ve exchanged x for y,
|((B1 ∪ {y}) r {x}) r B2| < |B1 r B2|, (2.2.3) contradicting the minimality of B1 r B2 in A.
Proposition 2.2.5. If (E, B) is a pair satisfying (b1) and (b2) and I is the set of subsets of elements of B, then (E, I) satisfies (i1)-(i3).
Proof. Since I = {I ⊆ E | I ⊆ B for some B ∈ B}, (i2) is clearly satisfied.
And B is nonempty by (b1), and the empty set is a subset of any element of B, so ∅ ∈ I and (i1) is satisfied.
Let’s assume, in order to reach a contradiction, that I does not satisfy (i3).
That is to say, there are two sets, U and V , in I with |V | < |U|, such that for all x ∈ U rV the set V ∪{x} is not in I. Choose BU ,BV ∈ B such that U ⊆ BU and V ⊆ BV , and such that |BU r (U ∪ BV )| is minimal among all choices of
BU and BV . Since V ⊆ BV , it is obvious that U r BV ⊆ U r V , but further, for our choice of U and V
U r BV = U r V. (2.2.4)
Otherwise there would be an x ∈ U r V with x ∈ BV, i.e. V ∪ {x} ⊆ BV and hence in I.
By (b2), if BU r (U ∪ BV ) is non-empty then for any x ∈ BU r (U ∪ BV ) ⊆
BU r BV , there is a y ∈ BV r BU such that (BU ∪ {y}) r {x} ∈ B. But this new base has the property that
|((BU ∪ {y}) r {x}) r (U ∪ BV )| < |BU r (U ∪ BV )|, (2.2.5) so BU r (U ∪ BV ) is empty. This means that BU r BV = U r BV . And putting this together with equation (2.2.4) above, we get
BU r BV = U r V. (2.2.6)
What about BV r (V ∪ BU)? If there is an element, x, in this set then there is, as above, an element y ∈ BU r BV such that (BV ∪ {y}) r {x} ∈ B. Note that, since x ∉ V, V ∪ {y} ⊆ (BV ∪ {y}) r {x}, hence V ∪ {y} ∈ I. Equation (2.2.6) now tells us that y ∈ BU r BV = U r V, contradicting the assumption that (i3) fails. So BV r (V ∪ BU) is empty and BV r BU = V r BU. And since
V r BU ⊆ V r U we get that
BV r BU ⊆ V r U or, more importantly, |BV r BU | ≤ |V r U|. (2.2.7)
Proposition 2.2.4 tells us that |BU| = |BV|, so |BU r BV| = |BV r BU|. Together with equation (2.2.6) and the above, we get
|U r V | = |BU r BV | = |BV r BU | ≤ |V r U|. (2.2.8)
So |U r V| ≤ |V r U|, or |U| ≤ |V|, contradicting the assumption that |V| < |U|. Thus (i3) is satisfied, and the proposition is proved.

Propositions 2.2.3 and 2.2.5 together mean that we can now define a matroid either by specifying the independent sets, in which case the bases are the maximal ones, or by specifying the bases, in which case the independent sets are the subsets thereof.
2.3 The circuit axioms and graphical matroids
Clearly, if looking at the maximal independent sets is sufficient, then it should be equally sufficient to look at the minimal dependent sets, i.e. dependent sets all of whose proper subsets are independent. Such a set is called a circuit.
Example 2.3.1. Recall the matrix over R from example 2.1.2:

        0 0 1 0 1 0 0
    A = 0 1 1 0 0 1 1
        1 0 0 0 0 1 1
Labeling the column vectors as before, we find the circuits to be
C = { {4}, {6, 7}, {1, 2, 6}, {1, 2, 7}, {2, 3, 5}, {1, 3, 5, 6}, {1, 3, 5, 7} }.
If M is a matroid, we denote the set of circuits of M by C(M) or C.
Proposition 2.3.2. The set of circuits, C, of a matroid M = (E, I) satisfies the following conditions.
(c1) ∅ ∉ C.
(c2) If C and C′ are members of C and C ⊆ C′, then C = C′.
(c3) If C and C′ are distinct elements of C and x ∈ C ∩ C′, then there is an element C″ of C such that C″ ⊆ (C ∪ C′) r {x}.
Proof. Clearly (c1) holds, and the minimality of circuits means that (c2) holds. So let us assume that (C ∪ C′) r {x} does not contain a circuit for some C ≠ C′, both in C, and some x ∈ C ∩ C′; or, in other words, that (C ∪ C′) r {x} is independent. Since (c2) holds, C′ r C cannot be empty, otherwise C′ ⊆ C and hence C′ = C. So choose a y ∈ C′ r C. Since C′ is a circuit, i.e. a minimal dependent set, C′ r {y} is independent. To construct our contradiction, we consider those subsets of C ∪ C′ which are independent and contain C′ r {y}, and choose a maximal one. Call it I. Clearly, y ∉ I, as otherwise C′, a dependent set, would be a subset of I. Since C is a circuit there must also be an element of C that is excluded from I. That is, there is a z ∈ C such that z ∉ I, and since y ∈ C′ r C, z ≠ y. So

|I| ≤ |(C ∪ C′) r {y, z}| = |C ∪ C′| − 2 < |(C ∪ C′) r {x}|. (2.3.1)

But then (C ∪ C′) r {x} and I are two independent sets satisfying the hypotheses of (i3), whose conclusion is a set contradicting the maximality of I. And so (c3) must hold.
This third, less obvious, axiom, (c3), is frequently called the circuit elimination or weak circuit elimination axiom.
Proposition 2.3.3. Let E be a set and let C ⊆ P(E) satisfy (c1)-(c3). If we set I as the collection of subsets of E that contain no element of C, then (E, I) is a matroid, and C is its collection of circuits.

Proof. Note that there are two things to be proved here: that I satisfies (i1)-
(i3), and that C is the collection of minimal dependent sets of this matroid.
But ∅ ∈ I and (i1) holds because (c1) says that ∅ doesn’t contain any element of C. And (i2) holds because if X contains no member of C and Y ⊆ X then
Y contains no member of C.
To prove (i3) we will, in order to again show a contradiction, assume that U, V ⊆ E are such that |V| < |U|, neither contains any element of C, and yet for all x ∈ U r V, V ∪ {x} ∉ I. Consider those elements of I contained in U ∪ V whose cardinality is strictly greater than that of V, such as U. Choose one such, W, for which |V r W| is minimal. Since (i3) fails, W can't contain V (otherwise any x ∈ W r V ⊆ U r V would give V ∪ {x} ⊆ W, hence V ∪ {x} ∈ I), so V r W is nonempty. Let x ∈ V r W. For each element y ∈ W r V, define Sy = (W ∪ {x}) r {y}. Note that Sy ⊆ U ∪ V and |V r Sy| < |V r W| for all y, so no Sy is in I. In other words, each Sy contains a member of C, say Cy. And since Cy ⊆ (W ∪ {x}) r {y}, y ∉ Cy for any y. In addition, x ∈ Cy for all y ∈ W r V, since otherwise Cy ⊆ W ∈ I. Now, if Cy ∩ (W r V) is empty then Cy ⊆ ((V ∩ W) ∪ {x}) r {y} ⊆ V, which is a contradiction, as V ∈ I and so shouldn't contain any element of C. So there is some z ∈ Cy ∩ (W r V), and since z ∉ Cz, Cy ≠ Cz. Also, since x is an element of both, x ∈ Cy ∩ Cz. Axiom (c3) then implies that there is an element, C′, of C such that C′ ⊆ (Cy ∪ Cz) r {x}. But Cy, Cz ⊆ W ∪ {x}, so C′ ⊆ W, which is a contradiction. So we conclude that (i3) holds.
So now we have this ground set E, a collection of subsets thereof C, and a matroid M = (E, I) with the independent sets, I, defined as above. We want
to show that C is the collection of minimally dependent subsets (i.e. circuits) of
M, C(M). If C ∈ C then since it is contained in no element of I, it is dependent.
And since any proper subset of C, not being in C by (c2), is independent (i.e. contains no element of C), C is minimally dependent: a circuit. Now assume that C′ ∈ C(M). Then the fact that C′ is dependent, i.e. C′ ∉ I, and the definition of I mean that C′ contains some element of C as a subset. But since C′ r {x} is independent for all x ∈ C′, this element of C can't be a proper subset, so must be C′ itself. Hence C = C(M).
Propositions 2.3.2 and 2.3.3 together mean that we now have circuits satisfying (c1)-(c3) as a way of defining a matroid, in addition to independent sets, (i1)-(i3), and bases, (b1) and (b2).
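As an illustration of mine (not from the text), the circuit axioms, including weak elimination, can be verified exhaustively for the circuit collection of example 2.3.1:

```python
from itertools import combinations

# Circuits of the matroid of example 2.3.1 (= example 2.1.2).
CIRCUITS = [frozenset(c) for c in (
    {4}, {6, 7}, {1, 2, 6}, {1, 2, 7}, {2, 3, 5},
    {1, 3, 5, 6}, {1, 3, 5, 7},
)]

def satisfies_circuit_axioms(circuits):
    # (c1): the empty set is not a circuit.
    if frozenset() in circuits:
        return False
    # (c2): no circuit contains another.
    if any(c1 <= c2 or c2 <= c1 for c1, c2 in combinations(circuits, 2)):
        return False
    # (c3): weak elimination over every pair and every common element.
    for c1, c2 in combinations(circuits, 2):
        for x in c1 & c2:
            if not any(c <= (c1 | c2) - {x} for c in circuits):
                return False
    return True

print(satisfies_circuit_axioms(CIRCUITS))
```

For instance, taking C = {1, 2, 6}, C′ = {1, 2, 7} and x = 1, the circuit {6, 7} sits inside (C ∪ C′) r {1} = {2, 6, 7}, exactly as (c3) demands.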
Graphical matroids
Remember that a cycle in a graph (sometimes called a circuit) is a connected subgraph all of whose vertices have degree two and that Whitney defined an independent subgraph as one containing no cycles. This leads to the second important group of matroids (following vector matroids as defined in proposition
2.1.1 above) considered in Whitney’s paper and explains the terminology we’ve applied to minimally dependent sets.
Proposition 2.3.4. Let G be a graph with edge set E, and define C as the set of edge sets of cycles of G. C is the collection of circuits of a matroid on E.
Proof. Obviously the empty subgraph is not a cycle, and if one cycle is a subgraph of another then the two are identical cycles, so (c1) and (c2) are satisfied by C. Let C and C′ be distinct cycles of G with an edge, e, in common. Name the endpoints of e, u and v. Let P be the path from u to v contained in C r {e} and P′ the path from u to v contained in C′ r {e}. If you traverse P starting from u, then at some vertex x (which might be u itself) P and P′ stop being the same path, and at some vertex y, before or at v, they share a vertex again. If we conjoin the piece of P from x to y with the piece of P′ from y to x, we get a cycle in G contained in (C ∪ C′) r {e}, so (c3) is satisfied by the set of cycles.
The matroid here defined is called the cycle or circuit matroid of G and denoted M(G) or C(G). Note that for a subset of the edge set of a graph to be independent in the cycle matroid, it must, by definition, contain no cycles. Or in other terms, the spanning subgraph induced by the subset must be a forest.
When it won’t cause confusion, which is most of the time, the subsets and their associated subgraphs will be considered interchangeable.
Example 2.3.5. Let G be the graph shown in figure 2.1 and let M = M(G) be the cycle matroid of G. Then the ground set for the matroid is E = {e1, e2, e3, e4, e5, e6, e7} and

C(M) = { {e4}, {e6, e7}, {e1, e2, e6}, {e1, e2, e7}, {e2, e3, e5}, {e1, e3, e5, e6}, {e1, e3, e5, e7} }.
Notice that, under the bijection ei ↔ i, these are the same circuits as C(M[A]) from example 2.3.1, and so, as the discussion following proposition 2.3.3 indicates, M(G) = M[A]. Further, notice that the bases, which we can read off from example 2.2.1, are

{ {e1, e2, e3}, {e1, e2, e5}, {e1, e3, e5}, {e1, e3, e6}, {e1, e3, e7}, {e1, e5, e6}, {e1, e5, e7}, {e2, e3, e6}, {e2, e3, e7}, {e2, e5, e6}, {e2, e5, e7}, {e3, e5, e6}, {e3, e5, e7} },

which are the spanning trees of G.

Figure 2.1: A simple graph. Notice that loops and parallel edges are allowed.
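The cycle-matroid computation in example 2.3.5 can be checked mechanically. Since the figure is not reproduced here, the sketch below (mine, not from the text) uses one graph realizing these circuits; the vertex labels and the placement of the loop e4 are my reconstruction: a path v1, v2, v3, v4 through e1, e2, e3, with e5 joining v2 to v4, e6 and e7 both joining v1 to v3, and e4 a loop at v1. A subset X of edges contains a cycle exactly when |X| exceeds |V(G[X])| − c(G[X]):

```python
from itertools import combinations

# One graph realizing the circuits of example 2.3.5 (my reconstruction;
# vertex labels and the loop's base vertex are assumptions, not from the
# text): edge number -> (endpoint, endpoint).
EDGES = {1: (1, 2), 2: (2, 3), 3: (3, 4), 4: (1, 1),
         5: (2, 4), 6: (1, 3), 7: (1, 3)}

def graph_rank(subset):
    """r(X) = |V(G[X])| - c(G[X]), computed with union-find."""
    verts = {v for e in subset for v in EDGES[e]}
    parent = {v: v for v in verts}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    comps = len(verts)
    for e in subset:
        a, b = find(EDGES[e][0]), find(EDGES[e][1])
        if a != b:
            parent[a] = b
            comps -= 1
    return len(verts) - comps

def dependent(subset):  # the subset contains a cycle
    return graph_rank(subset) < len(subset)

dep = [frozenset(s) for k in range(1, 8)
       for s in combinations(EDGES, k) if dependent(s)]
circuits = {d for d in dep if not any(other < d for other in dep)}
print(sorted(sorted(c) for c in circuits))
```

The minimal dependent sets recovered this way are exactly the seven circuits listed in the example, matching C(M[A]) under ei ↔ i.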
A matroid that is the cycle matroid of a graph, or is isomorphic to one, is called graphic, and one that is (or is isomorphic to) a vector matroid for a matrix A over the field F is called representable over F, and A is called an F-representation of the matroid.
Theorem 2.3.6. Every graphical matroid is representable over every field.

Proof. Let G be a graph and F a field. Let {e1, . . . , en} be an enumeration of the edge set of G and {v1, . . . , vm} an enumeration of the vertex set. Choose an arbitrary orientation for each ei, so that each edge has a head vertex and a tail vertex (perhaps the same vertex in the case of loops). Let A = (aij) be the matrix over F with

aij = 1 if vertex vi is the head of non-loop edge ej,
aij = −1 if vertex vi is the tail of non-loop edge ej, and
aij = 0 otherwise, (2.3.2)

with columns labeled by their respective edges. If we can show that the circuits of M(G) are dependent in M[A] and that all circuits of M[A] are dependent in M(G), then by the following lemma 2.3.7, C(M(G)) = C(M[A]), and so M(G) = M[A].
Let C be a cycle of G. If C is a loop, then the corresponding column in A is a zero vector, so clearly C is dependent in M[A]. In the case that C is not a loop, there is a sequence of distinct edges, ei1, ei2, . . . , eik, constituting the cycle, with some orientation, perhaps not the same as above. In the cycle, each vertex is the head of one edge and the tail of another, and each vertex is visited exactly once. So define the matrix B as having column vectors bi, with bi either the same as or the negative of the associated column vector of A: negative when the correlated ei is in C and its original orientation is opposite its orientation in the cycle. Then bi1 + bi2 + · · · + bik = 0, since tail ends cancel out head ends. But then, for some choice of βi1, . . . , βik = ±1, we get

βi1 ai1 + βi2 ai2 + · · · + βik aik = 0,

where ai is the ith column vector of A. Hence C is dependent in M[A].

Now assume that D = {ej1, . . . , ejℓ} is a circuit of M[A]. If ℓ = 1, i.e. D = {ej1}, then the associated column vector aj1 is linearly dependent on its own, hence a zero vector, which by the definition of A means that ej1 is a loop of G; so D is the edge set of a cycle of G. Otherwise, there are some non-zero elements λj1, . . . , λjℓ ∈ F such that

λj1 aj1 + λj2 aj2 + · · · + λjℓ ajℓ = 0. (2.3.3)

So if any row of the matrix [aj1 aj2 · · · ajℓ] has a non-zero entry, then by the sum (2.3.3) it must have at least two non-zero entries. The rows of this submatrix of A correspond to vertices of G, hence this is saying that in the subgraph G′ of G induced by {ej1, . . . , ejℓ}, every vertex that isn't isolated has degree at least two. This means that G′ must contain a cycle; thus {ej1, . . . , ejℓ} contains a circuit of M(G), and the hypotheses of lemma 2.3.7 are satisfied, completing the proof.
Lemma 2.3.7. If U and V are collections of subsets of a finite set E such that every element of U contains an element of V and every element of V contains an element of U, then the minimal members of U are the same as those of V.
Proof. Assume that the conclusion is not true; that is, there is a minimal element of U, call it U, that is not a minimal element of V. So either U is not an element of V, or it is not minimal therein. If U is not minimal in V, then there is an element V ∈ V with V ⊊ U. But every element of V contains an element of U, so there is a U′ ∈ U with U′ ⊆ V ⊊ U, and hence U is not minimal. And if U ∉ V, then U at least contains an element V ∈ V, which, since U ∉ V, must satisfy V ⊊ U; V in turn contains an element U′ ∈ U, so that U′ ⊆ V ⊊ U and again U is not minimal.

So if U is a minimal element of U then it is a minimal element of V. A symmetric argument shows that if V is a minimal element of V then it must be a minimal element of U, and the lemma is proved.
Proposition 2.3.8. If B is a basis of a matroid M and x ∈ E r B, then B ∪ {x} contains a unique circuit C(x, B) containing x, which we call the fundamental circuit of x with respect to B.
Proof. Since B ∪ {x} is necessarily dependent, it must contain a circuit, and any such circuit must contain x. Let C1 and C2 be two such. If C1 and C2 are distinct then (c3) tells us that (C1 ∪ C2) r {x} ⊆ B contains another circuit. But this is impossible, as B is independent, so there is a unique such circuit.
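Proposition 2.3.8 can be illustrated on the running example: with the basis B = {1, 2, 3} of the matroid of example 2.3.1, each element x outside B determines its fundamental circuit. A small sketch of mine (not from the text):

```python
# Circuits of the matroid of example 2.3.1; B = {1, 2, 3} is a basis.
CIRCUITS = [frozenset(c) for c in (
    {4}, {6, 7}, {1, 2, 6}, {1, 2, 7}, {2, 3, 5},
    {1, 3, 5, 6}, {1, 3, 5, 7},
)]
B = frozenset({1, 2, 3})

def fundamental_circuit(x, basis):
    """The unique circuit contained in basis + {x} (proposition 2.3.8)."""
    inside = [c for c in CIRCUITS if c <= basis | {x}]
    assert len(inside) == 1  # uniqueness, as the proposition asserts
    return inside[0]

for x in (4, 5, 6, 7):
    print(x, sorted(fundamental_circuit(x, B)))
```

For instance C(5, B) = {2, 3, 5}, and each fundamental circuit does contain x itself, as the proof requires.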
2.4 The rank function
When speaking of vector spaces and their subspaces, the dimension of these spaces comes up quite naturally. The column rank of a matrix, for example, is the dimension of the span of the column vectors of the matrix, that is to say the maximal number of independent vectors found in the matrix.
To apply this notion to matroids let us consider a rather natural construction for matroids. If M is a matroid with ground set E and independent sets I and
X ⊆ E, then if we let I|X = {I ⊆ X | I ∈ I} it is easy to see that (X, I|X) is itself a matroid, called the restriction of M to X, denoted M|X. Since
(X, I|X) is a matroid, all of its bases are the same size. We define the rank of
X – denoted r(X) or, when necessary rM (X) – to be the cardinality of a base
of X in (X, I|X). For simplicity's sake, we will write r(E(M)), the cardinality of a base of M, as r(M).
Two obvious properties of the function r : P(E) → Z≥0 are:
(r1) if X ⊆ E, then 0 ≤ r(X) ≤ |X|, and
(r2) if X ⊆ Y ⊆ E, then r(X) ≤ r(Y ).
There is, in addition, a third property of the rank function, analogous to a formula from the study of vector spaces. Namely, if U and W are subspaces of a finite-dimensional vector space then
dim(U + W ) + dim(U ∩ W ) = dim U + dim W. (2.4.1)
The following property is sometimes called the submodular or semimodular inequality.
Proposition 2.4.1. (r3) If X and Y are subsets of the ground set E of a matroid M with rank function r, then
r(X ∪ Y ) + r(X ∩ Y ) ≤ r(X) + r(Y ). (2.4.2)
Proof. Throughout the proof we will denote by BZ a basis for Z ⊆ E; i.e., BZ is a maximal independent set of the matroid M|Z, and an independent set of M|Z′ for any superset Z′ ⊇ Z. In particular, we can choose bases such that BX∩Y ⊆ BX∪Y. Note that BX∪Y ∩ X is independent in M|X and BX∪Y ∩ Y is similarly independent in M|Y. So r(X) ≥ |BX∪Y ∩ X| and r(Y) ≥ |BX∪Y ∩ Y|, therefore
r(X) + r(Y) ≥ |BX∪Y ∩ X| + |BX∪Y ∩ Y|
            = |(BX∪Y ∩ X) ∪ (BX∪Y ∩ Y)| + |(BX∪Y ∩ X) ∩ (BX∪Y ∩ Y)|
            = |BX∪Y ∩ (X ∪ Y)| + |BX∪Y ∩ (X ∩ Y)|. (2.4.3)

However, BX∪Y ∩ (X ∪ Y) = BX∪Y and BX∪Y ∩ (X ∩ Y) = BX∩Y, so

r(X) + r(Y) ≥ |BX∪Y| + |BX∩Y| = r(X ∪ Y) + r(X ∩ Y). (2.4.4)
Hence, (r3) holds for matroids.
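As a small illustration of mine (not part of the text), axioms (r1)-(r3) can be verified exhaustively for the rank function of the vector matroid of example 2.1.2, with ranks computed in exact rational arithmetic:

```python
from fractions import Fraction
from itertools import combinations

# Columns of the matrix from example 2.1.2; r(X) = rank of those columns.
COLS = {1: (0, 0, 1), 2: (0, 1, 0), 3: (1, 1, 0), 4: (0, 0, 0),
        5: (1, 0, 0), 6: (0, 1, 1), 7: (0, 1, 1)}

def r(subset):
    """Rank over Q of the columns indexed by `subset`."""
    rows = [[Fraction(x) for x in COLS[i]] for i in subset]
    rk, col = 0, 0
    while rk < len(rows) and col < 3:
        piv = next((i for i in range(rk, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            col += 1
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(rk + 1, len(rows)):
            if rows[i][col] != 0:
                f = rows[i][col] / rows[rk][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rk])]
        rk += 1
        col += 1
    return rk

subsets = [frozenset(s) for k in range(8)
           for s in combinations(range(1, 8), k)]
ranks = {s: r(s) for s in subsets}  # memoize all 128 ranks

r1 = all(0 <= ranks[x] <= len(x) for x in subsets)
r2 = all(ranks[x] <= ranks[y] for x in subsets for y in subsets if x <= y)
r3 = all(ranks[x | y] + ranks[x & y] <= ranks[x] + ranks[y]
         for x in subsets for y in subsets)
print(r1, r2, r3)
```

All 128 × 128 pairs of subsets satisfy the submodular inequality, as proposition 2.4.1 guarantees.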
This proposition tells us that matroids satisfy the conditions (r1)-(r3), so we just need to show that a set and a rank function satisfying these axioms determine a matroid.
Theorem 2.4.2. Let E be a finite set. If r : P(E) → Z≥0 satisfies (r1)-(r3) and I is the collection of subsets of E whose rank is equal to their cardinality, i.e.

I = {X ⊆ E | r(X) = |X|}, (2.4.5)

then (E, I) is a matroid with r as its rank function.
Proof. Because of (r1), 0 ≤ r(∅) ≤ |∅| = 0, so ∅ ∈ I and (i1) is satisfied. To show (i2) holds, let Y ⊆ X ∈ I, so r(X) = |X|. By (r3)
r(Y ∪ (X r Y)) + r(Y ∩ (X r Y)) ≤ r(Y) + r(X r Y). (2.4.6)

And, noting that Y ∩ (X r Y) = ∅ and Y ∪ (X r Y) = X, we get
r(X) + r(∅) = |X| ≤ r(Y ) + r(X r Y ). (2.4.7)
Further, (r1) tells us that r(Y ) ≤ |Y | and r(X r Y ) ≤ |X r Y |, so
|X| ≤ r(Y ) + r(X r Y ) ≤ |Y | + |X r Y | = |X|. (2.4.8)
The first and last terms are equal, so equality holds throughout, and r(Y) + r(X r Y) = |Y| + |X r Y|. Finally, (r1) tells us that r(Y) = |Y| (since otherwise we would have r(X r Y) > |X r Y|), so Y ∈ I and (i2) holds.

To see that (i3) is satisfied, we assume the contrary: there are sets U, V ∈ I with |V| < |U| such that for all x ∈ U r V, V ∪ {x} ∉ I. So r(V) = |V|, but r(V ∪ {x}) < |V| + 1; more precisely, |V| = r(V) ≤ r(V ∪ {x}) ≤ |V| for all x ∈ U r V, and trivially r(V ∪ {x}) = r(V) for x ∈ U ∩ V. The lemma that follows, lemma 2.4.3, then tells us that r(V ∪ U) = r(V) = |V|. Also, r(V ∪ U) ≥ r(U) = |U|. Putting these two together, we find that |V| = r(V ∪ U) ≥ |U|, contradicting the assumption that |V| < |U|. So (i3) must hold and (E, I) is a matroid.
We now must show that the function r is the rank function of the matroid, i.e. r = rM. Consider two cases: either X ⊆ E is independent or not. If X ∈ I then r(X) = |X| by definition of I, and rM(X) = |X| since X is independent in M and hence a basis of M|X. So suppose that X ∉ I, and let B be a basis for M|X. This means that rM(X) = |B|. Also, B ∪ {x} ∉ I for any x ∈ X r B, implying that |B| = r(B) ≤ r(B ∪ {x}) < |B ∪ {x}|, so that r(B ∪ {x}) = r(B) for all x ∈ X r B. Lemma 2.4.3 then tells us that r(X) = r(B ∪ X) = r(B) = |B| = rM(X), and r = rM.

Lemma 2.4.3. If E is a set, r : P(E) → Z≥0 is a function satisfying (r2) and (r3), and X, Y ⊆ E are such that r(X ∪ {y}) = r(X) for all y ∈ Y, then r(X ∪ Y) = r(X).
Proof. The argument is by induction on the cardinality of Y. If Y = {y} then the conclusion is immediate. Let Y = {y1, . . . , yn, yn+1} and assume that r(X ∪ {y1, . . . , yn}) = r(X). Then, using the inductive hypothesis and the condition that r(X ∪ {y}) = r(X) for y ∈ Y in the first step, (r3) in the second, and (r2) in the last,

r(X) + r(X) = r(X ∪ {y1, . . . , yn}) + r(X ∪ {yn+1})
            ≥ r((X ∪ {y1, . . . , yn}) ∪ (X ∪ {yn+1})) + r((X ∪ {y1, . . . , yn}) ∩ (X ∪ {yn+1})) (2.4.9)
            = r(X ∪ Y) + r(X)
            ≥ r(X) + r(X).

Note that the first and last lines are equal, so equality holds throughout and r(X ∪ {y1, . . . , yn+1}) = r(X). So, by induction, the lemma is proven.
The meaning of rank.
The rank of a subset of the ground set of a vector matroid is clearly the dimension of the subspace generated by the associated column vectors, but the situation is somewhat less clear for our other main class of matroids, graphical matroids. Let G be a graph with edge set E(G) and vertex set V(G), and let X be a subset of E(G) (or, equivalently, a subset of the ground set of M(G)). Define G[X] to be the subgraph of G induced by X, having X as edge set and all ends of edges in X as vertex set. We also define c(G) to be the number of connected components of G (and similarly for c(G[X])).
Proposition 2.4.4. If G is a graph and X ⊆ E(G), then rM(G)(X) = |V(G[X])| − c(G[X]).
Proof. Consider first the case when G is connected. A basis for M(G) must be independent, and hence must be the set of edges of a spanning forest of
G. Also, it must be maximal, so it is the set of edges of a spanning tree of G, T . A well-known fact about trees is that the cardinality of the set of vertices is one more than the cardinality of the set of edges in the tree. So r(M(G)) = |E(T )| = |V (T )| − 1 = |V (G)| − 1.
If G has more than one connected component, then we can calculate its rank by adding the ranks of each component, so r(M(G)) = |V (G)| − c(G). And similarly rM(G)(X) = |V (G[X])| − c(G[X]).
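Proposition 2.4.4 makes the rank of a graphic matroid directly computable. The following Python sketch is ours, not part of the text (the helper name `graphic_rank` is hypothetical); it computes |V (G[X])| − c(G[X]) with a union-find, which amounts to counting the edges of a spanning forest of G[X]:

```python
def graphic_rank(edges):
    """Rank of an edge set X in the cycle matroid M(G):
    r(X) = |V(G[X])| - c(G[X]), i.e. the number of edges in a
    spanning forest of the subgraph induced by X (proposition 2.4.4)."""
    parent = {}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    rank = 0
    for u, v in edges:
        parent.setdefault(u, u)
        parent.setdefault(v, v)
        ru, rv = find(u), find(v)
        if ru != rv:          # the edge merges two components,
            parent[ru] = rv   # so it extends the spanning forest
            rank += 1
    return rank

# A triangle with a pendant edge: 4 vertices, 1 component, rank 4 - 1 = 3.
print(graphic_rank([(0, 1), (1, 2), (0, 2), (2, 3)]))  # 3
# The triangle alone is a circuit: rank 2 < |X| = 3.
print(graphic_rank([(0, 1), (1, 2), (0, 2)]))  # 2
```

The dependent triangle falls one short of its cardinality, exactly as the formula predicts.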
The connection between rank and the other characteristics of more general matroids is mostly straightforward. In the following propositions, let M be a matroid with ground set E, rank function r, and X ⊆ E.
Proposition 2.4.5. X is independent if and only if |X| = r(X).
Proof. If X is independent then it is a base of itself and so r(X) = |X|. On the other hand, if r(X) = |X| and B ⊆ X is a base of X, then |B| = r(X) = |X| and we must have B = X, that is to say, X is independent.
Proposition 2.4.6. X is a basis of M if and only if |X| = r(X) = r(M).
Proof. X is a basis if and only if it is a maximal independent set. So the previous proposition, proposition 2.4.5, tells us that |X| = r(X) if X is a basis, hence independent. And, by the definition of rank, r(X) = r(M) if X is a basis. Conversely, if |X| = r(X) = r(M) then proposition 2.4.5 tells us that X is independent, and if it were not maximal then r(X) = r(M) would not be possible.
Proposition 2.4.7. X is a circuit if and only if X ≠ ∅ and for all x ∈ X, r(X ∖ {x}) = |X| − 1 = r(X).
Proof. X is a circuit if and only if it is a minimal dependent set, i.e. a set all of whose proper subsets are independent. If X is a circuit then clearly X ≠ ∅ and for all x ∈ X, X ∖ {x} is independent; in fact, each is a basis for X, being maximally independent in X. So by proposition 2.4.6, r(X ∖ {x}) = |X ∖ {x}| = |X| − 1 = r(X). And if X is nonempty and for all x ∈ X, r(X ∖ {x}) = |X| − 1 = r(X), then all of the proper subsets of X are independent: those of size |X| − 1, the X ∖ {x}, by proposition 2.4.5, and all others by (i2). Further, since r(X) ≠ |X|, we know X is dependent. Hence X is a circuit.
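Proposition 2.4.7 gives a rank-only test for circuits. As an illustration (the uniform matroid U_{2,4} and the helper names below are our own assumptions, not from the text), the test can be run against a rank oracle:

```python
def is_circuit(X, r):
    """Proposition 2.4.7: X is a circuit iff X is nonempty and
    r(X ∖ {x}) == |X| - 1 == r(X) for every x in X."""
    X = frozenset(X)
    return bool(X) and all(r(X - {x}) == len(X) - 1 == r(X) for x in X)

# Rank oracle for the uniform matroid U_{2,4}: any 2 elements are independent.
r = lambda X: min(len(X), 2)

print(is_circuit({0, 1, 2}, r))     # True: a minimal dependent set
print(is_circuit({0, 1}, r))        # False: independent
print(is_circuit({0, 1, 2, 3}, r))  # False: dependent but not minimal
```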
2.5 Closure, hyperplanes, and spanners
A vector, ~v, in a vector space, V , over a field, F, is in the span of a set of vectors {~v1, ~v2, . . . , ~vk} if ~v can be written in terms of the ~vi's, i.e. ~v = α1~v1 + ··· + αk~vk for αi ∈ F. Or, equivalently, if ⟨~v1, ~v2, . . . , ~vk⟩ and ⟨~v1, ~v2, . . . , ~vk, ~v⟩ have the same dimension.
Now that we have a concept of rank for a general matroid M, we can extend this idea. Let r be the rank function of M and let E be its ground set; then we define the closure operator cl : P(E) → P(E) by setting
cl(X) = {x ∈ E | r(X ∪ {x}) = r(X)} (2.5.1)
for X ⊆ E. cl(X) is called the closure or span of X, and x is said to be in the span of X if x ∈ cl(X). In addition, we say that X is spanning if cl(X) = E, or that it spans Y if Y ⊆ cl(X).
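Definition (2.5.1) is directly computable from a rank oracle. A minimal sketch follows (the uniform matroid U_{2,4} is our own toy example, not from the text):

```python
def closure(X, E, r):
    """cl(X) = {x in E : r(X ∪ {x}) = r(X)}, definition (2.5.1)."""
    X = frozenset(X)
    return frozenset(x for x in E if r(X | {x}) == r(X))

E = frozenset(range(4))
r = lambda X: min(len(X), 2)  # rank oracle for the uniform matroid U_{2,4}

print(sorted(closure({0}, E, r)))     # [0]: no other element is spanned
print(sorted(closure({0, 1}, E, r)))  # [0, 1, 2, 3]: any 2-set is spanning
```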
Proposition 2.5.1. If X is a subset of the ground set of a matroid M, then r(X) = r(cl(X)).
Proof. Let B be a basis for X, that is to say a subset of X such that r(B) = |B| = r(X). For any x ∈ cl(X) ∖ X,
r(B ∪ {x}) ≥ r(B) = |B| = r(X) = r(X ∪ {x}) ≥ r(B ∪ {x}). (2.5.2)
So B ∪ {x} is dependent for every x ∈ cl(X) ∖ B (for x ∈ X ∖ B because B is a basis for X, and for x ∈ cl(X) ∖ X by (2.5.2)), which means that B is a basis for cl(X) and r(cl(X)) = |B| = r(X).
By the definition, it is clear that
(cl1) for X ⊆ E, X ⊆ cl(X).
Proposition 2.5.2. The closure operator of a matroid M with ground set E satisfies, in addition to (cl1), the following properties:
(cl2) for X ⊆ Y ⊆ E, cl(X) ⊆ cl(Y ),
(cl3) for X ⊆ E, cl(cl(X)) = cl(X) (i.e. cl is idempotent), and
(cl4) for X ⊆ E and x ∈ E, if y ∈ cl(X ∪ {x}) ∖ cl(X), then x ∈ cl(X ∪ {y}).
Proof. To see that the closure operator as defined in (2.5.1) satisfies (cl2), we consider two sets X ⊆ Y ⊆ E and suppose that x ∈ cl(X). Note that if x ∈ X then x ∈ Y and hence x ∈ cl(Y ), as we wish to show for the more general case.
So we consider, in particular, x ∈ cl(X) ∖ X. By definition, this means that r(X ∪ {x}) = r(X). So if BX is a basis for X, then it is a basis for X ∪ {x} as well. Extending that basis, we get a basis, BY∪{x}, of Y ∪ {x} that contains BX , but not x. Since BY∪{x} does not contain x, it is a basis for Y as well, so r(Y ∪ {x}) = |BY∪{x}| = r(Y ), and x ∈ cl(Y ) as desired. Hence cl(X) ⊆ cl(Y ).
By (cl1), cl(X) ⊆ cl(cl(X)), so to prove the closure operator satisfies (cl3) all we need is the reverse inclusion. Let x ∈ cl(cl(X)), that is to say r(cl(X) ∪ {x}) = r(cl(X)). Now, (r2) and (cl1) tell us that
r(cl(X)) = r(cl(X) ∪ {x}) ≥ r(X ∪ {x}) ≥ r(X), (2.5.3)
but proposition 2.5.1 tells us that r(cl(X)) = r(X), so equality holds throughout that inequality, meaning that r(X ∪ {x}) = r(X) and x ∈ cl(X). So cl(cl(X)) = cl(X).
Now suppose X ⊆ E and x ∈ E are such that there is an element y ∈ cl(X ∪ {x}) ∖ cl(X). So r(X ∪ {x, y}) = r(X ∪ {x}) but r(X ∪ {y}) ≠ r(X). We do know, from (r3), that
r(X ∪ {y}) ≤ r(X) + r({y}) − r(X ∩ {y}) ≤ r(X) + 1 − r(∅) = r(X) + 1. (2.5.4)
So r(X ∪ {y}) = r(X) + 1. Hence
r(X) + 1 = r(X ∪ {y}) ≤ r(X ∪ {x, y}) = r(X ∪ {x}) ≤ r(X) + 1, (2.5.5) and thus r(X ∪ {x, y}) = r(X ∪ {y}), i.e. x ∈ cl(X ∪ {y}).
Proposition 2.5.3. If E is a set, cl : P(E) → P(E) satisfies (cl1)-(cl4), and a collection of subsets of E is defined as
I = {X ⊆ E | ∀x ∈ X, x ∉ cl(X ∖ {x})}, (2.5.6)
then (E, I) is a matroid with cl as its closure operator.
To prove this we’ll need the following lemma (and its contrapositive).
Lemma 2.5.4. If I is defined as in (2.5.6) and if X ∈ I but X ∪ {x} is not, then x ∈ cl(X).
Proof. By definition, since X ∪ {x} ∉ I there is an element y ∈ X ∪ {x} such that y ∈ cl((X ∪ {x}) ∖ {y}). If y = x then we're done. If y ≠ x then (X ∪ {x}) ∖ {y} = (X ∖ {y}) ∪ {x} and hence y ∈ cl((X ∖ {y}) ∪ {x}) ∖ cl(X ∖ {y}), where y ∉ cl(X ∖ {y}) because y ∈ X ∈ I. (cl4) then tells us that x ∈ cl((X ∖ {y}) ∪ {y}) = cl(X).
Proof of proposition 2.5.3. The empty set is trivially in I so (i1) is satisfied.
Suppose that X ∈ I and Y ⊆ X. If y ∈ Y , then, since y ∈ X, y ∉ cl(X ∖ {y}). But (cl2) tells us that cl(Y ∖ {y}) ⊆ cl(X ∖ {y}), so y ∉ cl(Y ∖ {y}). Hence Y ∈ I and (i2) is satisfied.
Consider U, V ∈ I such that |V | < |U|. Suppose that (i3) fails for this pair.
That is to say, for all x ∈ U ∖ V , V ∪ {x} ∉ I. Suppose, further, that we have chosen this pair such that |U ∩ V | is maximal. Let z ∈ U ∖ V . Consider the two sets V and cl(U ∖ {z}). If V ⊆ cl(U ∖ {z}) then (cl2) and (cl3) tell us that cl(V ) ⊆ cl(cl(U ∖ {z})) = cl(U ∖ {z}). Now, z ∈ U and U ∈ I, so z ∉ cl(U ∖ {z}), which also means that z ∉ cl(V ). The contrapositive of lemma 2.5.4 then tells us that V ∪ {z} ∈ I, contrary to our assumptions. If V ⊄ cl(U ∖ {z}) then there is an element w ∈ V that is not an element of cl(U ∖ {z}). Clearly, w ∈ V ∖ U.
Since I satisfies (i2), and since U ∖ {z} ⊆ U ∈ I, U ∖ {z} ∈ I. Moreover, since w ∉ cl(U ∖ {z}), lemma 2.5.4 again implies that (U ∖ {z}) ∪ {w} ∈ I. Note that |((U ∖ {z}) ∪ {w}) ∩ V | > |U ∩ V |, so by maximality of the pair (U, V ), (i3) must hold for the new pair of sets in I, ((U ∖ {z}) ∪ {w}, V ). That is to say, there is an x ∈ ((U ∖ {z}) ∪ {w}) ∖ V such that V ∪ {x} ∈ I. But ((U ∖ {z}) ∪ {w}) ∖ V ⊆ U ∖ V , so (U, V ) satisfies (i3), once again contradicting our assumption. Thus, (E, I) = M is a matroid.
We must now verify that the original closure operator, cl, coincides with the one that arises from the matroid, clM . Let X ⊆ E, and, if one such exists, take an element x ∈ clM (X) ∖ X. So r(X ∪ {x}) = r(X) and if B is a basis for X then it is one for X ∪ {x} as well. But while B ∈ I, B ∪ {x} is not, which means, by lemma 2.5.4, that x ∈ cl(X). Hence clM (X) ⊆ cl(X).
Now suppose that x ∈ cl(X) ∖ X and that B is a basis for X. For any y ∈ X ∖ B, B ∪ {y} ∉ I, so, by lemma 2.5.4 again, y ∈ cl(B); in other words, X ⊆ cl(B). So by (cl2) and (cl3), cl(X) ⊆ cl(B) and hence x ∈ cl(B). The definition of I then tells us that B ∪ {x} ∉ I, which implies that B is a basis for X ∪ {x}.
Hence r(X ∪ {x}) = |B| = r(X) and x ∈ clM (X). Thus, cl(X) = clM (X).
These last two propositions show that it is possible to define a matroid using the axioms for a closure operator, (cl1)-(cl4), in addition to those for independent sets, bases, circuits, or the rank function. And as with all the axiom systems, this one comes with its own terminology.
A subset, X, of the ground set of a matroid is called closed or a flat if
X = cl(X). A flat of rank one less than the rank of the matroid is called a hyperplane, and a set whose closure is the whole ground set is called a spanning set. In addition, we say that X spans Y if Y ⊆ cl(X).
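These definitions can be made concrete by enumerating the closures of all subsets of a small matroid. The sketch below (again using the uniform matroid U_{2,4} as our own assumed toy example) collects the flats, picks out the hyperplanes, and finds the spanning sets:

```python
from itertools import combinations

E = frozenset(range(4))
r = lambda X: min(len(X), 2)  # rank oracle for U_{2,4}; r(M) = 2

def closure(X):
    X = frozenset(X)
    return frozenset(x for x in E if r(X | {x}) == r(X))

# A flat is a set equal to its own closure; every closure is a flat.
flats = {closure(S) for k in range(len(E) + 1) for S in combinations(E, k)}
hyperplanes = [F for F in flats if r(F) == r(E) - 1]
spanning = [frozenset(S) for k in range(len(E) + 1)
            for S in combinations(E, k) if closure(S) == E]

print(len(flats))        # 6: the empty set, the 4 singletons, and E itself
print(len(hyperplanes))  # 4: the rank-1 flats, i.e. the singletons
print(min(len(S) for S in spanning))  # 2: the smallest spanning sets
```

Note that the empty flat (rank r(M) − 2) is indeed an intersection of two distinct hyperplanes here, as theorem 2.5.6 below requires.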
Lemma 2.5.5. If U and V are flats of a matroid M with ground set E, with
V ⊆ U and r(V ) = r(U) − 1, then there exists a hyperplane H such that
V = U ∩ H.
Proof. Let B be a basis for V (a.k.a. a maximal independent subset of V ) and choose an x ∈ U ∖ V = cl(U) ∖ cl(V ). Then by the contrapositive of lemma 2.5.4, B ∪ {x} is independent as well, and since |B ∪ {x}| = r(V ) + 1 = r(U), B ∪ {x} is a basis for U. Extend B ∪ {x} to a basis for M, A ⊇ B ∪ {x}, and define H = cl(A ∖ {x}).
Note that H is a flat of rank r(H) = |A ∖ {x}| = |A| − 1 = r(M) − 1, so it is a hyperplane. Also, if y ∈ V then y ∈ cl(B), since lemma 2.5.4 says that otherwise B ∪ {y} would be an independent subset of V . So V ⊆ cl(B) ⊆ cl(A ∖ {x}) = H,
or, more importantly, V ⊆ U ∩ H, so r(V ) ≤ r(U ∩ H). Finally, (r3) and the fact that A ⊆ U ∪ H tell us that
r(U ∩H) ≤ r(U)+r(H)−r(U ∪H) = r(U)+(r(M)−1)−|A| = r(U)−1. (2.5.7)
Since r(V ) = r(U) − 1, r(U ∩ H) = r(V ) and hence U ∩ H ⊆ cl(V ) = V .
Theorem 2.5.6. If X is a flat of M with rank r(X) = r(M) − k < r(M), then there are distinct hyperplanes H1, . . . , Hk such that
X = H1 ∩ H2 ∩ ··· ∩ Hk. (2.5.8)
Proof. When k = 1, X is the single "distinct" hyperplane needed. Using induction on k, let X have rank r(M) − k and let A = {a1, a2, . . . , ar(M)−k} be a basis for X. Theorem 2.1.3 allows us to extend A to a basis B = {a1, a2, . . . , ar(M)−k, . . . , ar(M)} of M. Let X′ = cl({a1, a2, . . . , ar(M)−k+1}). Since the rank of X′ is r(M) − k + 1 = r(M) − (k − 1), the inductive hypothesis allows us to find k − 1 distinct hyperplanes H1, . . . , Hk−1 such that X′ = H1 ∩ ··· ∩ Hk−1. And by lemma 2.5.5 above, there exists a hyperplane H such that X = X′ ∩ H. Clearly H ≠ Hi for any 1 ≤ i ≤ k − 1, as otherwise X = X′ ∩ H = X′.
Proposition 2.5.7. A subset of the ground set of a matroid is a spanning set if and only if it has full rank.
Proof. Let X be a spanning set of M, i.e. cl(X) = E. Proposition 2.5.1 tells us that r(X) = r(cl(X)) = r(E) = r(M). Conversely, if r(X) = r(M) then (r2) tells us that for all x ∈ E, r(X ∪ {x}) = r(X), i.e. x ∈ cl(X). So E ⊆ cl(X) and X is spanning.
Proposition 2.5.8. The following are equivalent:
(1) X is a basis of M,
(2) X is both spanning and independent, and
(3) X is a minimal spanning set.
Proof. If X is a basis of M then, by proposition 2.4.6, r(X) = r(M), so X is spanning. Choose x ∈ X and consider the set X′ = X ∖ {x}. X′, of course, is in I. Proposition 2.4.5 tells us that r(X′) = r(X) − 1 < r(M), implying that X′ is not spanning. Hence X is a minimal spanning set and (1) =⇒ (3).
Assume that X is a minimal spanning set. Then, since it is spanning, r(X) = r(M). Let x ∈ X. Since X is minimal, X′ = X ∖ {x} is not spanning, in other words cl(X′) ⊊ cl(X) = E. If x ∈ cl(X′) then r(M) = r(X) = r(X′ ∪ {x}) = r(X′), and X′ would be spanning. So x ∉ cl(X ∖ {x}) for any x in X, meaning that X is independent and (3) =⇒ (2).
If X is spanning and independent, then proposition 2.4.5 tells us that r(X) =
|X| and proposition 2.5.7 above tells us that r(X) = r(M). Putting those together with proposition 2.4.6 tells us that X is a basis. So (2) =⇒ (1).
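The three characterizations in proposition 2.5.8 can be cross-checked exhaustively in a small example. The sketch below (U_{2,4} assumed as before; helper names are ours) confirms they pick out the same subsets:

```python
from itertools import combinations

E = frozenset(range(4))
r = lambda X: min(len(X), 2)  # rank oracle for U_{2,4}

def closure(X):
    X = frozenset(X)
    return frozenset(x for x in E if r(X | {x}) == r(X))

subsets = [frozenset(S) for k in range(len(E) + 1) for S in combinations(E, k)]

def independent(X):             # proposition 2.4.5
    return r(X) == len(X)

def spanning(X):                # cl(X) = E
    return closure(X) == E

def minimal_spanning(X):
    return spanning(X) and not any(spanning(X - {x}) for x in X)

basis = lambda X: r(X) == len(X) == r(E)   # proposition 2.4.6

for X in subsets:
    assert basis(X) == (spanning(X) and independent(X)) == minimal_spanning(X)
print("all three characterizations agree")
```

In U_{2,4} all three conditions single out exactly the 2-element subsets.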
Having seen relationships between the closure operator and the notions of independence and bases, we now turn to circuits.
Proposition 2.5.9. If M is a matroid with ground set E and X ⊆ E, then X is a circuit if and only if X is a minimal non-empty set such that x ∈ cl(X ∖ {x}) for all x ∈ X. Also, for general subsets X,
cl(X) = X ∪ {x ∈ E | M has a circuit C such that x ∈ C ⊆ X ∪ {x}}. (2.5.9)
Proof. If X is a circuit of M and x ∈ X then, since X is dependent and X ∖ {x} is independent, lemma 2.5.4 tells us that x ∈ cl(X ∖ {x}). And X is, being a circuit, a minimal non-empty set. Conversely, assume that X is a minimal non-empty set such that x ∈ cl(X ∖ {x}) for every x ∈ X. This means that X is dependent. And, since it is minimally so, X ∖ {x} is independent. So X is a minimal dependent set, i.e. a circuit.
Suppose that x ∈ cl(X) ∖ X. Then r(X ∪ {x}) = r(X). So if B is a basis for X, then B ∪ {x} is dependent. By proposition 2.3.8, there is a circuit C such that x ∈ C ⊆ B ∪ {x} ⊆ X ∪ {x}. So the closure is contained in the union above.
Conversely, if x ∈ E ∖ X and there is a circuit C such that x ∈ C ⊆ X ∪ {x}, then, by the characterization of circuits that started this proposition and (cl2), x ∈ cl(C ∖ {x}) ⊆ cl(X) and the equality holds.
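Equation (2.5.9) can likewise be verified exhaustively. The following sketch (U_{2,4} assumed, as in the earlier illustrations) computes cl(X) both from the rank and from circuits and compares the results:

```python
from itertools import combinations

E = frozenset(range(4))
r = lambda X: min(len(X), 2)  # rank oracle for U_{2,4}

subsets = [frozenset(S) for k in range(len(E) + 1) for S in combinations(E, k)]

def dependent(X):
    return r(X) < len(X)

# Circuits: minimal dependent sets (in U_{2,4}, exactly the 3-element subsets).
circuits = [C for C in subsets
            if dependent(C) and not any(dependent(C - {x}) for x in C)]

def closure_by_rank(X):       # definition (2.5.1)
    return frozenset(x for x in E if r(X | {x}) == r(X))

def closure_by_circuits(X):   # equation (2.5.9)
    return X | frozenset(x for x in E
                         if any(x in C and C <= (X | {x}) for C in circuits))

for X in subsets:
    assert closure_by_rank(X) == closure_by_circuits(X)
print("equation (2.5.9) holds in U_{2,4}")
```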
2.6 Bond matroids and more general dual matroids
The dual matroid is a concept introduced by Whitney [Whi87] to extend two ideas: orthogonal vector spaces and planar duals of planar graphs.
We will need the following lemma to show that the dual of a matroid is a matroid as well.
Lemma 2.6.1. If M is a matroid and B is its set of bases, then
(b2*) If B1, B2 ∈ B and x ∈ B2 ∖ B1, then there is an element y ∈ B1 ∖ B2 such that (B1 ∖ {y}) ∪ {x} ∈ B.
Note that this is genuinely different from (b2), not simply a relabeling.
Proof. Proposition 2.3.8 indicates that there is a unique circuit C(x, B1) contained in B1 ∪ {x}. This circuit is dependent and B2 is independent, so C(x, B1) ∖ B2 must contain at least one element. Let y be such an element. Clearly y ∈ B1, since otherwise y = x ∈ B2, which contradicts our assumptions. So y ∈ B1 ∖ B2.
Further, C(x, B1) ⊄ (B1 ∖ {y}) ∪ {x} since y ∈ C(x, B1). As C(x, B1) is the unique circuit contained in B1 ∪ {x}, the set (B1 ∖ {y}) ∪ {x} ⊆ B1 ∪ {x} contains no circuit and is therefore independent. Since |(B1 ∖ {y}) ∪ {x}| = |B1| = r(M), it is a basis, that is, (B1 ∖ {y}) ∪ {x} ∈ B.
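Property (b2*) can also be checked by brute force in a small matroid. The sketch below (U_{2,4} assumed as before; in it every 2-subset of a 4-element ground set is a basis) tests the exchange over all pairs of bases:

```python
from itertools import combinations

E = frozenset(range(4))
bases = [frozenset(B) for B in combinations(E, 2)]  # bases of U_{2,4}

# (b2*): for bases B1, B2 and x in B2 ∖ B1, some y in B1 ∖ B2
# has (B1 ∖ {y}) ∪ {x} a basis.
for B1 in bases:
    for B2 in bases:
        for x in B2 - B1:
            assert any(((B1 - {y}) | {x}) in bases for y in B1 - B2)
print("(b2*) holds in U_{2,4}")
```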
Theorem 2.6.2. Let M be a matroid with ground set E and let B be its set of bases. If we define B∗ as {E ∖ B | B ∈ B}, then B∗ is the set of bases of a matroid on E.
Proof. Since B is nonempty, so is B∗, meaning that B∗ satisfies (b1). Take two sets B1∗, B2∗ ∈ B∗, and let Bi = E ∖ Bi∗ for i = 1 or 2. This means that B1, B2 ∈ B and
B1∗ ∖ B2∗ = B1∗ ∩ (E ∖ B2∗) = (E ∖ B1) ∩ B2 = B2 ∖ B1. (2.6.1)
If we let x ∈ B1∗ ∖ B2∗ = B2 ∖ B1 then (b2*) tells us that there is an element y ∈ B1 ∖ B2 = B2∗ ∖ B1∗ such that (B1 ∖ {y}) ∪ {x} ∈ B. Note that E ∖ ((B1 ∖ {y}) ∪ {x}) ∈ B∗ and that