Distance-Transitive Graphs

Submitted for the module MATH4081

Robert F. Bailey (4MH) Supervisor: Prof. H.D. Macpherson

May 10, 2002

Robert Bailey
Department of Pure Mathematics
University of Leeds
Leeds, LS2 9JT

May 10, 2002

The cover illustration is a diagram of the Biggs-Smith graph, a cubic distance-transitive graph described in section 11.2.

Foreword

A graph is distance-transitive if, for any two arbitrarily-chosen pairs of vertices at the same distance, there is some automorphism of the graph taking the first pair onto the second.

This project studies some of the properties of these graphs, beginning with some relatively simple combinatorial properties (chapter 2), and moving on to discuss more advanced ones, such as the adjacency algebra (chapter 7), and Smith's Theorem on primitive and imprimitive graphs (chapter 8).

We describe four infinite families of distance-transitive graphs, these being the Johnson graphs, odd graphs (chapter 3), Hamming graphs (chapter 5) and Grassmann graphs (chapter 6). Some theory used in describing the last two of these families is developed in chapter 4.

There is a chapter (chapter 9) on methods for constructing a new graph from an existing one; this concentrates mainly on line graphs and their properties.

Finally (chapter 10), we demonstrate some of the ideas used in proving that for a given integer k > 2, there are only finitely many distance-transitive graphs of valency k, concentrating in particular on the cases k = 3 and k = 4. We also (chapter 11) present complete classifications of all distance-transitive graphs with these specific valencies.

Acknowledgements

I would like to thank my supervisor, Prof. H.D. Macpherson, for his assistance and encouragement throughout the duration of the project, and for suggesting the topic in the first place. Thanks are also due to my tutor, Dr. R.B.J.T. Allenby, for (voluntarily!) reading a preliminary version of the project, to Profs. J.K. Truss and J.C. McConnell for answering my “Do you know anything about....” questions, and to the assessor for his positive comments during the preliminary assessments. I should also thank Prof. E.R. Vrscay of the University of Waterloo for supplying [26].

Contents

1 Introduction
   1.1 Basic Definitions
   1.2 A Little Group Theory
   1.3 Automorphisms
   1.4 Different Kinds of 'Transitive'

2 Introducing the Distance-Transitive Graph
   2.1 Basic Properties
   2.2 Distance Partitions
   2.3 Intersection Numbers and Intersection Arrays
   2.4 Distance-Regular Graphs

3 Uniform Subset Graphs
   3.1 Introduction
   3.2 The Johnson Graphs J(n,2,1)
   3.3 The Johnson Graphs J(n,k,k−1)
   3.4 Pretty Pictures
   3.5 The Odd Graphs

4 Some Permutation Group Theory
   4.1 Primitive and Imprimitive Actions
   4.2 Direct and Semi-direct Products
   4.3 Wreath Products
   4.4 Projective Groups

5 Hamming Graphs
   5.1 Introduction
   5.2 Distance-Transitivity
   5.3 The k-Cube

6 Grassmann Graphs
   6.1 Introduction
   6.2 Distance-Transitivity
   6.3 Intersection Arrays


   6.4 Linking the Grassmann and Johnson Graphs

7 Algebra and Distance-Transitive Graphs
   7.1 The Spectrum and the Adjacency Algebra
   7.2 Distance Matrices
   7.3 The Intersection Matrix
   7.4 Algebraic Constraints on the Intersection Array

8 Primitive and Imprimitive Graphs
   8.1 Introduction
   8.2 Antipodal Graphs
   8.3 Bipartite Distance-Transitive Graphs
   8.4 Smith's Theorem

9 New Graphs from Old
   9.1 Line Graphs
   9.2 Automorphisms of Line Graphs
   9.3 Eigenvalues of Line Graphs
   9.4 Distance-Transitive Line Graphs
   9.5 Bipartite Doubles

10 Bounding the Diameter
   10.1 Introduction
   10.2 Cubic Graphs
   10.3 Tetravalent Graphs
   10.4 Extending to Higher Valencies

11 Graphs of Low Valency
   11.1 Smith's Program
   11.2 Cubic Distance-Transitive Graphs
   11.3 Tetravalent Distance-Transitive Graphs

Chapter 1

Introduction

1.1 Basic Definitions

There are many simple definitions from graph theory that the reader is probably familiar with already (if not, consult an introductory text such as Wilson [40]). However, we will include these, if only to demonstrate our terminology and notation, which varies considerably between texts.

Definitions 1.1.1 – Graph Theory Definitions

• Let Γ be a graph, with VΓ denoting the set of vertices of Γ and EΓ the set of edges of Γ.

• A graph is simple if all edges join two distinct vertices, and between any pair of vertices u, v there is at most one edge. In this project, we will always be considering simple graphs.

• Two vertices u, v ∈ VΓ are adjacent if there is a single edge joining them. We write u ∼ v to denote this. (Note that this relation ∼ is symmetric, but not reflexive or transitive.) Also, two edges are adjacent if they are incident with a common vertex.

• The degree, or valency, of a vertex v is the number of edges incident with v, denoted deg(v).

• A graph Γ is said to be regular if deg(u) = deg(v) for all u, v ∈ VΓ. It is k-regular if deg(v) = k for all v ∈ VΓ. In this case, we refer to the valency of Γ. A 3-regular graph is frequently described as a cubic graph, or sometimes as a trivalent graph.

• A path π in Γ is a finite sequence of edges from vertex u to vertex v where all the intermediate vertices are distinct.


• Γ is said to be connected if for any u, v ∈ VΓ there exists a path π from u to v. Otherwise, we say Γ is disconnected. A maximal connected subgraph of a disconnected graph Γ is called a component of Γ.

• A geodesic in Γ is a path from u to v containing the least number of edges.

• The distance from u ∈ VΓ to v ∈ VΓ is the least number of edges in a path from u to v. This is denoted by d(u,v).

• The maximum distance in a graph Γ is called the diameter of Γ.

• A circuit in Γ is a path from v to v.

• The girth of Γ is the length of the shortest circuit in Γ.

• A graph is bipartite if VΓ = V1 ∪̇ V2 and each edge of Γ has one end in V1 and the other end in V2. It can be shown that Γ is bipartite if and only if it has no circuits of odd length.

• An isomorphism from a graph Γ to a graph ∆ is a bijective function ϕ : VΓ → V∆ such that for u, v ∈ VΓ, ϕ(u) ∼ ϕ(v) (in ∆) if and only if u ∼ v (in Γ). In this case we say Γ and ∆ are isomorphic, denoted by Γ ≅ ∆.

• An isomorphism from a graph Γ to itself is called an automorphism of Γ.

1.2 A Little Group Theory

It is assumed that the reader is already familiar with basic group theory, for example groups, Abelian groups, subgroups, direct products, cosets, Lagrange's Theorem, homomorphisms, isomorphisms and factor groups. (If not, see a book such as Allenby [2] or Gallian [18].) In this project, we are particularly concerned with the concept of a group action on a set (specifically on the set of vertices of a graph). Formally, this is defined as follows:

Definition 1.2.1
A group action of a group G on a set X is a function ρ : X × G → X satisfying

• for all x ∈ X and for all g, h ∈ G, x(gh) = (xg)h;

• for all x ∈ X, xe = x (where e is the identity element of G).

An action is said to be faithful if the only element of G fixing all elements of X is the identity.

(The basics of group actions are covered well in Slomson [31].) This definition leads us straight away to a number of others:

Definition 1.2.2
The orbit of an element x ∈ X is the set OrbG(x) = { xg : g ∈ G } (i.e. all elements of X that are the image of x under some permutation g ∈ G). OrbG(x) is a subset of X.

Definition 1.2.3
Suppose G is a permutation group acting on a set X. We say G is transitive on X if, for all x, y ∈ X, there exists g ∈ G such that xg = y.

We can also characterise transitivity in this way:

Proposition 1.2.4
G is transitive on X if and only if, for all x ∈ X, OrbG(x) = X.

Proof:
Suppose G is transitive on X. Then, by definition 1.2.3, for any x ∈ X and any y ∈ X there exists some g ∈ G such that xg = y. Thus y ∈ OrbG(x) for all y ∈ X, so OrbG(x) = X.
Conversely, suppose OrbG(x) = X for all x ∈ X. Then, by definition 1.2.2, for any x, y ∈ X, there exists g ∈ G such that xg = y. So G is transitive on X. □

Definition 1.2.5
The stabiliser of an element x ∈ X is the set StabG(x) = { g ∈ G : xg = x } (i.e. all elements of G that map x to itself).

It is easy to show the following:

Proposition 1.2.6
StabG(x) is a subgroup of G.

Proof:
StabG(x) is closed, as for any g, h ∈ StabG(x), we have x(gh) = (xg)h = xh = x. Associativity is automatic, as StabG(x) is a subset of G. The identity is obviously in StabG(x), as xe = x is always true. Every element of StabG(x) clearly has an inverse in StabG(x), as if xg = x, then xg⁻¹ = x. Hence StabG(x) ≤ G. □

We can now determine the orbit and stabiliser of an element under the action of a small permutation group, and determine whether this action is transitive.

Example 1.2.7
Consider the permutation group S4 acting on the set X = {1, 2, 3, 4}. The orbit of 1 ∈ X is OrbS4(1) = {1, 2, 3, 4}, as S4 contains permutations that map any element of X to any other element. Thus we can see S4 is transitive on X (but not on {1, 2, 3, 4, 5}, for example). The stabiliser of 1 is the group of permutations StabS4(1) = {e, (2 3), (2 4), (3 4), (2 3 4), (2 4 3)}.

You might have noticed that, in the above example, |OrbS4(1)| = 4 and |StabS4(1)| = 6, while |S4| = 24 = 4 × 6. This is explained by the following theorem.

Theorem 1.2.8 – The Orbit-Stabiliser Theorem
Let G be a group acting on a set X. Then for any x ∈ X,

|OrbG(x)| × |StabG(x)| = |G|.

In particular, the size of an orbit divides |G|.

Proof:
Lagrange's Theorem tells us that, for any subgroup H ≤ G,

|G| = |H| × |G : H|

where |G : H| is the size of [G : H], the set of right cosets of H in G. So to prove the Orbit-Stabiliser Theorem, we just have to establish a one-to-one correspondence between OrbG(x) and [G : StabG(x)]. So suppose g1, g2 ∈ G, and let H = StabG(x) for clarity. Then

Hg1 = Hg2 ⇔ H = Hg2g1⁻¹      (by standard group theory)
          ⇔ g2g1⁻¹ ∈ H
          ⇔ x(g2g1⁻¹) = x    (since H is the stabiliser of x)
          ⇔ (xg2)g1⁻¹ = x    (by the definition of a group action)
          ⇔ xg2 = xg1.

So we have the one-to-one correspondence we require, so |OrbG(x)| = |G : StabG(x)|, and the result follows. □
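The theorem is easy to check directly for the action in example 1.2.7. The following Python sketch (illustrative only, not part of the original project; the helper name `act` is my own) enumerates S4 as the set of all permutations of four points and verifies the counting:

```python
from itertools import permutations

# S4 realised as all permutations of (1,2,3,4); the permutation g sends
# the point x to g[x-1] (Python tuples are 0-indexed).
S4 = list(permutations((1, 2, 3, 4)))

def act(x, g):
    return g[x - 1]

orbit = {act(1, g) for g in S4}
stabiliser = [g for g in S4 if act(1, g) == 1]

assert orbit == {1, 2, 3, 4}                    # S4 is transitive on X
assert len(stabiliser) == 6                     # as in example 1.2.7
assert len(orbit) * len(stabiliser) == len(S4)  # 4 x 6 = 24
```

The same brute-force check works for any small permutation group given as an explicit list of permutations.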

In the following section, we bring groups and graphs together.

1.3 Automorphisms

We have already defined an automorphism of Γ to be an isomorphism from Γ to itself. This can also be thought of as follows: an automorphism of Γ is a permutation of the vertices of Γ that maps edges to edges (and non-edges to non-edges). There is an identity automorphism (put simply: doing nothing) and each automorphism has an inverse. We can therefore see quite easily that the set of all automorphisms of Γ forms a group.

Definition 1.3.1
The group of all automorphisms of a graph Γ is called the automorphism group of Γ, denoted by Aut(Γ).

For a graph on n vertices, Aut(Γ) is a subgroup of Sn.

Examples 1.3.2 – Some graphs with familiar groups as their automorphism groups

Figure 1.1: The complete graph K4

Aut(K4) ≅ S4, as every permutation of VK4 is an automorphism.

Figure 1.2: The square, or 4-circuit C4

Aut(C4) ≅ D4 (the dihedral group on 4 vertices), as we can rotate or reflect C4.

Figure 1.3: A graph with Aut(Γ) = {e}

This graph Γ has only one automorphism, the identity, so Aut(Γ) = {e}.

1.4 Different Kinds of ‘Transitive’

In this project, we are going to consider a class of graphs that have special condi- tions on their automorphism groups. Such a condition is defined below:

Definition 1.4.1
We say that Γ is vertex-transitive if Aut(Γ) acts transitively on VΓ.

A similar definition is as follows:

Definition 1.4.2
Γ is said to be edge-transitive if Aut(Γ) acts transitively on EΓ.

These two properties are not interchangeable; there exist graphs that are vertex-transitive but not edge-transitive, and vice versa. There are also graphs that are both vertex- and edge-transitive, and graphs that satisfy neither property. We now give examples of each.

Examples 1.4.3

Figure 1.4: A vertex- and edge-transitive graph

Figure 1.5: A graph that is neither vertex- nor edge-transitive

Examples 1.4.4 – Graphs with only one of the two properties

Figure 1.6: ∆ (left) and K1,4 (right)

∆ is clearly vertex-transitive, but is not edge-transitive. (Consult Godsil & Royle [22] for more on graphs of this kind.) K1,4 (a complete bipartite graph) is clearly edge-transitive, but is not vertex-transitive as it is not regular (it has one vertex of degree 4, and four of degree 1).

Examples 1.4.5 – Two graphs with the same automorphism group

Figure 1.7: C5 (left) and W5 (right)

Both C5 and W5 (the W stands for "wheel") have D5 as their automorphism group, but it does not act vertex-transitively on W5, as the vertex v is fixed by all elements of D5. This is because it is the only vertex of degree 5, while all other vertices have degree 3, so OrbD5(v) = {v}. It does not act edge-transitively on W5 either, as we can partition EW5 into two orbits under D5: the "rim" of the wheel and the "spokes". No automorphism moves an edge between these two sets.

A stronger condition than either of the above is that of s-arc-transitivity. To explain this, we first need to know what an s-arc is.

Definition 1.4.6
An s-arc in a graph Γ is a sequence of s + 1 vertices v0, ..., vs, such that

• vi ∼ vi−1 for 0 < i ≤ s, and

• vi−1 ≠ vi+1 for 0 < i < s.

Notice the subtle difference between the definitions of s-arc and path: an s-arc is not necessarily a path. Consider the following example:

Example 1.4.7
In figure 1.8, abdehgi, abdgi and abdfi are all paths, of which the last two are geodesics (so d(a,i) = 4). The first path is also a 6-arc, and the last two are 4-arcs. abdcfdgi is a 7-arc, but not a path, as the vertex d is repeated.

Figure 1.8: Examples of paths, geodesics and arcs

The next definition is a natural one, bearing in mind the previous one.

Definition 1.4.8
A graph Γ is s-arc-transitive if Aut(Γ) acts transitively on the set of all s-arcs of Γ.

A 0-arc-transitive graph is just another name for a vertex-transitive graph. A 1-arc-transitive graph is both vertex- and edge-transitive and is often known, according to Biggs [8], as a symmetric graph. It is clear that an s-arc-transitive graph is also (s−1)-arc-transitive, and thus is (s−2)-arc-transitive, and so on inductively. Going the other way, however, we require the following definition:

Definition 1.4.9
Γ is strictly s-arc-transitive if it is s-arc-transitive but not (s + 1)-arc-transitive.

We are going to consider in detail a class of graphs with a condition that is stronger than any of the above, namely distance-transitive graphs. They are defined as follows:

Definition 1.4.10
Γ is distance-transitive if, for any vertices u, v, u′, v′ ∈ VΓ satisfying d(u,v) = d(u′,v′), there exists some g ∈ Aut(Γ) satisfying ug = u′ and vg = v′.
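For small graphs this definition can be tested mechanically: enumerate all vertex permutations to find Aut(Γ), then check every pair of equidistant vertex pairs. The following Python sketch (illustrative only, not part of the original project; the function names are my own) confirms that the 5-circuit C5 is distance-transitive:

```python
from itertools import permutations

def dist(adj, u, v):
    """Graph distance by breadth-first search."""
    seen, frontier, d = {u}, {u}, 0
    while v not in seen:
        frontier = {w for x in frontier for w in adj[x]} - seen
        seen |= frontier
        d += 1
    return d

# The 5-circuit C5.
n = 5
adj = {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}

# An automorphism is a permutation of the vertices preserving adjacency.
autos = [g for g in permutations(range(n))
         if all((g[u] in adj[g[v]]) == (u in adj[v])
                for u in range(n) for v in range(n))]

def is_distance_transitive(adj, autos):
    V = list(adj)
    pairs = [(u, v) for u in V for v in V]
    return all(any(g[u] == x and g[v] == y for g in autos)
               for (u, v) in pairs for (x, y) in pairs
               if dist(adj, u, v) == dist(adj, x, y))

assert len(autos) == 10          # Aut(C5) is the dihedral group D5
assert is_distance_transitive(adj, autos)
```

This brute force is only practical for a handful of vertices, since it searches all n! permutations; it is meant to make the definition concrete, not to be an efficient algorithm.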

Examples of distance-transitive graphs include the complete graphs Kn, the n-circuits Cn, the platonic graphs and many others, as we will discover.

Chapter 2

Introducing the Distance-Transitive Graph

2.1 Basic Properties

In this chapter, we begin to determine the properties of distance-transitive graphs that we are going to use. We start by finding relationships between the different notions of transitivity we saw in 1.4, specifically concerning distance-transitive graphs. However, we also need some properties of s-arc-transitive graphs, which we can then relate to the distance-transitive case. We start with a fairly obvious property.

Proposition 2.1.1
Any distance-transitive graph is vertex-transitive.

Proof:
Let Γ be a distance-transitive graph. Take vertices u = v and u′ = v′ ∈ VΓ (so that d(u,v) = d(u′,v′) = 0). We know there exists g ∈ Aut(Γ) such that ug = u′ and vg = v′ (as Γ is distance-transitive). Hence Γ is also vertex-transitive. □

However, the converse is not true in general; there exist vertex-transitive graphs that are not distance-transitive. Consider the following counterexample:

Example 2.1.2
The cyclic n-ladder Ln (L6 is shown in Figure 2.1) is clearly vertex-transitive – we can rotate and reflect Ln (so that Dn ≤ Aut(Γ)); we can also move any vertex of the 'inner ring' to anywhere on the 'outer ring'. In fact, Aut(Γ) = ⟨r, v, s : r^n = v^2 = s^2 = e, vr^(n−1) = rv, rs = sr, vs = sv⟩. However, Γ is not distance-transitive: consider u, v, u′, v′ as shown. Clearly d(u,v) = d(u′,v′) = 2. But there is no automorphism that moves {u,v} to {u′,v′}, as there is only one geodesic from u to v, while there are two from u′ to v′. □


Figure 2.1: A vertex-transitive graph that is not distance-transitive

We now take a detour into the theory of s-arc-transitive graphs, with the aim of finding when such a graph is also distance-transitive. The following two lemmas are both due to W.T. Tutte [37], although Godsil & Royle [22] explain them using more modern terminology.

Lemma 2.1.3 (Tutte 1947)
Suppose Γ is s-arc-transitive, with valency ≥ 3 and girth g. Then g ≥ 2s − 2.

Proof:
By assumption, Γ contains a circuit of length g, say (v0, v1, ..., vg−1, v0). This is, in particular, a g-arc. Since each vertex of Γ has degree ≥ 3, we can change the terminal vertex to obtain a different g-arc (v0, v1, ..., vg−1, vg). Clearly, no automorphism of Γ can map the first g-arc onto the second, so s < g. Any circuit in Γ contains an s-arc, so by the s-arc-transitivity of Γ, any s-arc lies in some circuit of length g. (⋆)
Now let α = (v0, ..., vs) be an s-arc in Γ. By (⋆) above, α lies in some circuit of length g, C1 say. Since all vertices of Γ have degree ≥ 3, vs−1 is adjacent to some vertex w, which is not one of vs−2 or vs. (In fact, w cannot be any of the vertices in α, as that would give a circuit of length < s, contradicting s < g.) So β = (v0, ..., vs−1, w) is another s-arc of Γ, with α ∩ β an (s−1)-arc. By (⋆) above, β is in some other circuit of length g, C2 say. So we have the following situation:

Figure 2.2: Schematic for 2.1.3

We can construct a new circuit, C = (C1 \ α) ∪̇ (vs vs−1) ∪̇ (vs−1 w) ∪̇ (C2 \ β), which has length (g − s) + 1 + 1 + (g − s) = 2g − 2s + 2. Since the girth of Γ is g, we have 2g − 2s + 2 ≥ g, and so g ≥ 2s − 2. □

An obvious follow-up question is: what happens when the girth of an s-arc-transitive graph is at its minimum value of 2s − 2? The next lemma helps in answering this.

Lemma 2.1.4 (Tutte)
Suppose Γ is s-arc-transitive with girth 2s − 2. Then diam(Γ) = s − 1, and Γ is bipartite.

Proof:
Since the girth of Γ is 2s − 2, any s-arc lies in at most one circuit of length 2s − 2. Since Γ is s-arc-transitive, every s-arc lies in such a circuit; also any such circuit must contain an s-arc. (∗)
Now diam(Γ) ≥ s − 1, the distance between opposite vertices in such a circuit. Suppose we have vertices u, v with d(u,v) = s. By definition, there is an s-arc from u to v, so these lie in a circuit of length 2s − 2 (from (∗) above). Hence there is a path of length s − 2 from u to v, contradicting d(u,v) = s. Hence we have diam(Γ) ≤ s − 1, and so diam(Γ) = s − 1.
Now suppose Γ is not bipartite (so that Γ contains an odd circuit) and that diam(Γ) = s − 1. Let C be an odd circuit of minimal length. Since the girth of Γ is 2s − 2, which is even, C must have length 2s − 1.

Figure 2.3: Schematic for 2.1.4

Now let u, v, v′ be as shown, so we have an s-arc α = (u, ..., v, v′). From (∗) above, this must lie in a circuit C′ of length 2s − 2. So (C \ α) ∪̇ (C′ \ α) is a circuit of length (2s − 1 − s) + (2s − 2 − s) = 2s − 3, contradicting the girth of Γ being 2s − 2. Hence Γ must be bipartite. □

We now have the tools to show that certain s-arc-transitive graphs are also distance-transitive.

Theorem 2.1.5
A connected s-arc-transitive graph with girth 2s − 2 is distance-transitive.

Proof:
Let Γ be s-arc-transitive with girth 2s − 2, and choose (u, u′) and (v, v′) to be two pairs of vertices at distance i. By Lemma 2.1.4, diam(Γ) = s − 1, so we have i ≤ s − 1. By assumption, there is a path of length i, and thus an i-arc, from u to u′ and from v to v′. Since Γ is s-arc-transitive, it is also i-arc-transitive (for i ≤ s), so there is some automorphism of Γ mapping the first arc to the second, and thus mapping (u, u′) to (v, v′). Hence Γ is distance-transitive. □

2.2 Distance Partitions

Definition 2.2.1
Let Γ be any connected graph. Then for some vertex v ∈ VΓ, we define

Γi(v) = { u ∈ VΓ : d(u,v) = i },

known as the cells of Γ. The set of all these cells forms a distance partition of Γ. (Note that Γ0(v) = {v} for all v ∈ VΓ.)

If Γ has a fairly small number of vertices, we can easily draw Γ by putting the vertices of Γ0(v), Γ1(v), etc. in columns from left to right. For example:

Example 2.2.2 Consider the graph Γ shown in the figure below:

Figure 2.4: A distance partition of Γ

Observation 2.2.3
If Γ is vertex-transitive, the Γi(v) are independent of v. So in this case we can just write Γi. For example, consider the cyclic 6-ladder we saw in example 2.1.2:

Figure 2.5: A distance partition of L6

These distance partitions give us an alternative characterisation of distance- transitivity.

Lemma 2.2.4
Suppose Γ is connected, has diam(Γ) = d and automorphism group Aut(Γ) = G. Then Γ is distance-transitive if and only if it is vertex-transitive and StabG(v) is transitive on Γi(v) for i = 1, ..., d and for all v ∈ VΓ.

Proof:
First, suppose Γ is distance-transitive. By 2.1.1, Γ is also vertex-transitive. Consider u, u′ ∈ Γi(v), i.e. with d(u,v) = d(u′,v) = i. Then there exists an automorphism g ∈ G with vg = v and ug = u′. Thus g ∈ StabG(v), and StabG(v) is transitive on Γi(v).
Conversely, suppose Γ is vertex-transitive and that StabG(v) is transitive on Γi(v). Consider u, w, u′, w′ ∈ VΓ, such that d(u,w) = d(u′,w′) = i. Let g ∈ G be such that wg = w′ and choose h ∈ StabG(w′) so that (ug)h = u′. Then for the composition gh, we get u(gh) = u′ and w(gh) = (w′)h = w′. So gh is an automorphism taking {u,w} to {u′,w′}. Hence Γ is distance-transitive. □

2.3 Intersection Numbers and Intersection Arrays

Definition 2.3.1 – Intersection Numbers
For any connected graph Γ, any vertices u, v ∈ VΓ and for h, i ∈ ℕ, define

Shi(u,v) = { w ∈ VΓ : d(u,w) = h, d(v,w) = i }.

That is, Shi(u,v) is the set of all vertices w that are distance h from u and distance i from v. We denote the number of such w by shi(u,v) = |Shi(u,v)|.

In a distance-transitive graph, these shi are not dependent directly on the vertices u, v but only on the distance d(u,v) = j. This should be obvious from the definition of distance-transitivity, as any pair of vertices distance j apart is equivalent to any other such pair. So in this case, we just write shij. These are called the intersection numbers of a distance-transitive graph.

As h, i, j are all distances, we have h, i, j ∈ {0, 1, ..., d}, where d = diam(Γ). From this observation, we can see that there are (d + 1)³ intersection numbers. However, some are more interesting than others.

Definition 2.3.2 – The Intersection Array
Suppose Γ is distance-transitive. Fix h = 1 and choose two vertices u, v such that d(u,v) = j. Now s1ij is the number of vertices w that are distance 1 from u and distance i from v, i.e. w ∼ u and d(w,v) = i. So we have the following possible values for i:

i = j − 1,   i = j,   or   i = j + 1.

(This can easily be seen on a distance partition of a distance-transitive graph.) Thus all the intersection numbers s1ij other than s1(j−1)j, s1jj and s1(j+1)j must be zero. To simplify our notation, denote s1(j−1)j = cj, s1jj = aj and s1(j+1)j = bj. The intersection array of a distance-transitive graph Γ is as follows:

         ( ∗   c1  ···  cd−1  cd )
ι(Γ) =   ( a0  a1  ···  ad−1  ad )
         ( b0  b1  ···  bd−1  ∗  )

Note that c0 and bd are not defined (that is what the ∗ represent), as having pairs of vertices distance −1 or d + 1 apart is nonsense.
Alternatively, we can define cj, aj, bj as follows. Fix v ∈ VΓ and u ∈ Γj(v). Then

cj = |Cj|, where Cj = { w ∈ VΓ : d(u,w) = 1, d(v,w) = j − 1 }
aj = |Aj|, where Aj = { w ∈ VΓ : d(u,w) = 1, d(v,w) = j }
bj = |Bj|, where Bj = { w ∈ VΓ : d(u,w) = 1, d(v,w) = j + 1 }

Observation 2.3.3
ι(Γ) can be obtained from a distance partition of Γ: if we fix v to be the single vertex of Γ0, StabG(v) acts transitively on each of Γ1, ..., Γd (see 2.2.4), thus giving the same cj, aj, bj for any vertex in Γj. (However, the converse of this is not true in general: it is possible for two non-isomorphic graphs to have the same intersection array.)

Now we have an easy way to determine the intersection array ι(Γ) for some fairly small graphs, by looking at their distance partitions.

Examples 2.3.4

         ( ∗  1 )
ι(K4) =  ( 0  2 )
         ( 3  ∗ )

          ( ∗  1  4 )
ι(Oct) =  ( 0  2  0 )
          ( 4  1  ∗ )

         ( ∗  1  1 )
ι(O3) =  ( 0  0  2 )
         ( 3  2  ∗ )

Figure 2.6: The complete graph K4, the octahedron Oct and the odd graph O3
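These arrays can be read off mechanically from a distance partition. The Python sketch below (illustrative only, not part of the original project; it assumes the input graph really is distance-transitive, so that one base vertex and one representative per cell suffice) recovers the arrays for the octahedron and for O3, the Petersen graph:

```python
from itertools import combinations
from collections import deque

def bfs_dist(adj, v):
    dist = {v: 0}
    queue = deque([v])
    while queue:
        x = queue.popleft()
        for w in adj[x]:
            if w not in dist:
                dist[w] = dist[x] + 1
                queue.append(w)
    return dist

def intersection_array(adj):
    """Read off {k, b1, ..., b_{d-1}; 1, c2, ..., cd} from one base vertex."""
    v = next(iter(adj))
    dist = bfs_dist(adj, v)
    d = max(dist.values())
    b, c = [], []
    for j in range(d + 1):
        u = next(x for x in adj if dist[x] == j)     # one vertex per cell
        if j > 0:
            c.append(sum(1 for w in adj[u] if dist[w] == j - 1))
        if j < d:
            b.append(sum(1 for w in adj[u] if dist[w] == j + 1))
    return b, c

# Octahedron: vertices 0..5, adjacent unless antipodal (indices differ by 3).
oct_adj = {v: {u for u in range(6) if u != v and abs(u - v) != 3}
           for v in range(6)}
# Petersen graph O3: 2-subsets of a 5-set, adjacent when disjoint.
V = [frozenset(s) for s in combinations(range(5), 2)]
pet_adj = {u: {v for v in V if not u & v} for u in V}

assert intersection_array(oct_adj) == ([4, 1], [1, 4])
assert intersection_array(pet_adj) == ([3, 2], [1, 1])
```

The returned pairs match the arrays ι(Oct) and ι(O3) displayed above, written in the shorthand introduced in observations 2.3.5.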

Observations 2.3.5
In general, for any distance-transitive graph:

• The sum of the entries of each column of ι(Γ) is always k, as any distance-transitive graph is k-regular for some k (since it is also vertex-transitive, by 2.1.1).

• a0 = 0, b0 = k (for k as above).

• c1 = 1 – this should be obvious.

This means that if we know the top and bottom rows, we can calculate the middle row from these, by subtracting them from k. So we can rewrite the intersection array as

{k, b1, ..., bd−1; 1, c2, ..., cd}.

We now identify some properties of the entries of the intersection array.

Theorem 2.3.6
Suppose Γ is a distance-transitive graph, with diam(Γ) = d and deg(v) = k for all v ∈ VΓ. Let Γ have intersection array ι(Γ) = {k, b1, ..., bd−1; 1, c2, ..., cd} and with ki vertices in Γi(v) for any v ∈ VΓ. Then we have the following:

1. ki−1 bi−1 = ki ci (for 1 ≤ i ≤ d);

2. 1 ≤ c2 ≤ ··· ≤ cd;

3. k ≥ b1 ≥ ··· ≥ bd−1.

Proof:

1. Each vertex in Γi−1(v) is adjacent to bi−1 vertices in Γi(v). Also each vertex in Γi(v) is adjacent to ci vertices in Γi−1(v). Thus the total number of edges joining Γi−1(v) and Γi(v) is ki−1 bi−1 = ki ci.

2. Fix some v ∈ VΓ. Choose u ∈ Γi+1(v), so that d(u,v) = i + 1 (for i = 1, ..., d − 1). Choose some path v, x, ..., u of length i + 1, so that d(x,u) = i. Then choose some w ∈ Γi−1(x) ∩ Γ1(u). (See figure 2.7.) Hence w ∈ Γi(v) ∩ Γ1(u) also, so we have

(Γi−1(x) ∩ Γ1(u)) ⊆ (Γi(v) ∩ Γ1(u))

and thus |Γi−1(x) ∩ Γ1(u)| ≤ |Γi(v) ∩ Γ1(u)|.

As Γ is distance-transitive, we have

|Γi−1(x) ∩ Γ1(u)| = ci−1 and |Γi(v) ∩ Γ1(u)| = ci.

Hence ci−1 ≤ ci for all i. We already know that c0 is undefined and c1 = 1, so we have 1 ≤ c2 ≤ ··· ≤ cd.

Figure 2.7: Schematic for 2.3.6, part 2

3. Fix some v ∈ VΓ. Choose w ∈ Γi(v) (for i = 0, ..., d − 2) and some z adjacent to v. Then for any u ∈ Γi+1(v) ∩ Γ1(w), by 2.3.2 we have either

d(z,u) = i + 2,   d(z,u) = i + 1,   or   d(z,u) = i.

Suppose d(z,u) = i + 2. Any z–u geodesic must take in v, since otherwise a path from z ∈ Γ1(v) to u ∈ Γi+1(v) could only pass through Γ1(v), ..., Γi+1(v), giving a shorter path from z to u, which we assumed didn't exist. So we have

(Γi+2(z) ∩ Γ1(w)) ⊆ (Γi+1(v) ∩ Γ1(w))

and thus |Γi+2(z) ∩ Γ1(w)| ≤ |Γi+1(v) ∩ Γ1(w)|.

As Γ is distance-transitive, we have

|Γi+2(z) ∩ Γ1(w)| = bi+1 and |Γi+1(v) ∩ Γ1(w)| = bi.

Hence bi ≥ bi+1 for all i. We already know that b0 = k and bd is undefined, so we have

k ≥ b1 ≥ ··· ≥ bd−1.

This concludes the proof. □

Figure 2.8: Schematic for 2.3.6, part 3

Corollary 2.3.7
Assuming we know ι(Γ), a recursion formula for ki (the number of vertices in Γi) is

ki = ki−1 bi−1 / ci

with initial condition k0 = 1.

Proof:
For the formula, just rearrange 2.3.6, part 1. For the initial condition, observe that Γ0 has only one vertex by definition, so k0 = 1. □
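The recursion is easy to run in a few lines. The following Python sketch (illustrative only, not part of the original project; the function name is my own) recovers the cell sizes of the Petersen graph and the octahedron from their intersection arrays:

```python
def cell_sizes(b, c):
    """k_i via the recursion k_i = k_{i-1} * b_{i-1} / c_i, with k_0 = 1.
    Here b = [b0, ..., b_{d-1}] and c = [c1, ..., cd], as in the shorthand
    {b0, ..., b_{d-1}; c1, ..., cd} for the intersection array."""
    k = [1]
    for bi, ci in zip(b, c):
        assert (k[-1] * bi) % ci == 0    # must divide for a genuine graph
        k.append(k[-1] * bi // ci)
    return k

# Petersen graph, iota = {3, 2; 1, 1}:
assert cell_sizes([3, 2], [1, 1]) == [1, 3, 6]   # 1 + 3 + 6 = 10 vertices
# Octahedron, iota = {4, 1; 1, 4}:
assert cell_sizes([4, 1], [1, 4]) == [1, 4, 1]   # 6 vertices in total
```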

The two previous results give us some combinatorial constraints on whether an arbitrary array can be the intersection array of a distance-transitive graph. Further such constraints are given below.

Proposition 2.3.8
Suppose we have an array of integers of the form

      ( ∗   1   c2  ···  cd−1  cd )
ι =   ( a0  a1  a2  ···  ad−1  ad ) .
      ( k   b1  b2  ···  bd−1  ∗  )

Then if ι is the intersection array of a distance-transitive graph, the following hold:

1. For 2 ≤ j ≤ d, the numbers

   kj = (k b1 ··· bj−1) / (1 · c2 ··· cj)

   are positive integers.

2. If part 1 holds and n = 1 + k + k2 + ··· + kd, then nk is even.

Proof:

1. In a distance-transitive graph Γ of valency k with intersection array ι(Γ), the number kj denotes the number of vertices in Γj(v) (for some v ∈ VΓ and 0 ≤ j ≤ d). By repeated application of the recursion formula in 2.3.7, we obtain

kj = (k b1 ··· bj−1) / (1 · c2 ··· cj).

These numbers must be positive integers.

2. Clearly, since k0 = 1 and k1 = k, |VΓ| = 1 + k + k2 + ··· + kd = n, say. Since Γ is k-regular, the sum of the degrees of the vertices of Γ is nk. By the handshaking lemma from basic graph theory (Wilson [40], p.12), this must be an even integer. □

Thus we now have some necessary conditions for an arbitrary array to be the intersection array of a distance-transitive graph. We will return to this problem in section 7.4.
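The conditions of 2.3.6–2.3.8 can be packaged into a simple feasibility test. The sketch below (an illustrative Python helper of my own; it implements only these necessary conditions, which are far from sufficient) accepts the Petersen array and rejects an array that fails the parity condition:

```python
from fractions import Fraction

def feasible(b, c):
    """Necessary (but not sufficient) conditions for {b; c} to be the
    intersection array of a distance-transitive graph."""
    k = b[0]
    if c[0] != 1:                         # c1 = 1 always
        return False
    if list(b) != sorted(b, reverse=True) or list(c) != sorted(c):
        return False                      # monotonicity (theorem 2.3.6)
    sizes, kj = [1], Fraction(1)
    for bi, ci in zip(b, c):
        kj = kj * bi / ci                 # recursion of corollary 2.3.7
        if kj.denominator != 1 or kj <= 0:
            return False                  # each k_j must be a positive integer
        sizes.append(int(kj))
    n = sum(sizes)
    return (n * k) % 2 == 0               # handshaking: nk must be even

assert feasible([3, 2], [1, 1])           # the Petersen graph
assert not feasible([3, 1], [1, 3])       # n*k = 5*3 = 15 is odd
```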

2.4 Distance-Regular Graphs

Definition 2.4.1
We can always find a distance partition for any graph (e.g. example 2.2.2). Suppose that it happens that, for some graph Γ say, the i-th cell Γi(v) has the same size for any v ∈ VΓ and for any i, and that each vertex in Γi(v) is adjacent to the same number of vertices in Γi−1(v), Γi(v) and Γi+1(v). Clearly we can write down an intersection array for Γ, as all the numerical conditions in the previous section are satisfied. Under these circumstances, Γ is said to be a distance-regular graph.

Distance-regularity is a purely combinatorial condition: nowhere in the above definition do we consider any automorphisms of Γ. Clearly, any distance-transitive graph is distance-regular, but the converse is certainly not true. Unlike example 2.1.2, however, there isn't such a simple counterexample. The following one is taken from Adel'son-Vel'skii et al [1].

Example 2.4.2 – The Adel'son-Vel'skii Graph


Figure 2.9: The Adel'son-Vel'skii Graph (drawn using MAPLE)

Let VΓ = {a0, ..., a12, b0, ..., b12}. Then let the following vertices be adjacent:

• ai ∼ aj ⇔ |i − j| = 1, 3 or 4

• bi ∼ bj ⇔ |i − j| = 2, 5 or 6

• ai ∼ bj ⇔ i − j = 0, 1, 3 or 9

(all taken modulo 13). Then the graph in figure 2.9 is obtained. (Not all edges appear in the figure, due to limitations in the drawing software – it draws edges in the same cell over the top of each other!) It is distance-regular, with intersection array

         ( ∗   1  4 )
ι(Γ) =   ( 0   3  6 ) .
         ( 10  6  ∗ )

However, it is not distance-transitive or even vertex-transitive: there is no automorphism taking any ai to any bj. For a proof, see [1]; it uses some quite advanced group theory.
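Distance-regularity of this graph is easy to verify computationally, even though distance-transitivity fails. The Python sketch below (illustrative only, not part of the original project; vertex a_i is encoded as i and b_i as 13 + i) builds the graph from the three adjacency rules and checks that every base vertex yields the same local parameters (j, c_j, a_j, b_j):

```python
from collections import deque

# Vertices 0..12 are a_0..a_12; vertices 13..25 are b_0..b_12.
adj = {v: set() for v in range(26)}
for i in range(13):
    for j in range(13):
        d = (i - j) % 13
        if d in {1, 3, 4, 9, 10, 12}:        # |i-j| = 1, 3 or 4 (mod 13)
            adj[i].add(j); adj[j].add(i)
        if d in {2, 5, 6, 7, 8, 11}:         # |i-j| = 2, 5 or 6 (mod 13)
            adj[13 + i].add(13 + j); adj[13 + j].add(13 + i)
        if d in {0, 1, 3, 9}:                # a_i ~ b_j iff i-j = 0, 1, 3, 9
            adj[i].add(13 + j); adj[13 + j].add(i)

def local_parameters(v):
    """All tuples (j, c_j, a_j, b_j) seen from base vertex v."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        x = queue.popleft()
        for w in adj[x]:
            if w not in dist:
                dist[w] = dist[x] + 1
                queue.append(w)
    return {(dist[u],
             sum(1 for w in adj[u] if dist[w] == dist[u] - 1),   # c_j
             sum(1 for w in adj[u] if dist[w] == dist[u]),       # a_j
             sum(1 for w in adj[u] if dist[w] == dist[u] + 1))   # b_j
            for u in adj}

# The same parameters from every base vertex: distance-regular, with
# intersection array {10, 6; 1, 4} as displayed above.
expected = {(0, 0, 0, 10), (1, 1, 3, 6), (2, 4, 6, 0)}
assert all(len(adj[v]) == 10 for v in adj)
assert all(local_parameters(v) == expected for v in adj)
```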

We conclude this chapter with a diagram relating most of the different kinds of graph we have seen so far:

DISTANCE-TRANSITIVE =⇒ Distance-Regular
        ⇓
Symmetric (1-arc-transitive) =⇒ Edge-Transitive
        ⇓
Vertex-Transitive

Figure 2.10: A hierarchy of conditions

Chapter 3

Uniform Subset Graphs

3.1 Introduction

These graphs form our first large class of examples. They are defined as follows:

Definition 3.1.1
Consider the set Ω = {1, ..., n}. Let Xn,k be the set of all subsets of Ω of size k (which we shall call k-subsets). We define the uniform subset graph J(n,k,i) to have vertex set Xn,k, with two vertices u = {i1, ..., ik}, v = {j1, ..., jk} adjacent if and only if |u ∩ v| = i.

First, we make some observations about the uniform subset graphs that follow fairly quickly from the definition.

Proposition 3.1.2
1. J(n,k,i) has (n choose k) vertices.
2. J(n,k,i) is regular, each vertex having valency (k choose i)·(n−k choose k−i).

Proof:
1. The number of vertices of J(n,k,i) is just the number of subsets of size k of Ω, which has size n; this number is (n choose k).
2. Suppose u ∼ v. Then (regarding u, v as k-sets) v contains i elements from u (which can be chosen in (k choose i) ways) and also k − i elements that are not in u (which can be chosen in (n−k choose k−i) ways). □

This is clearly a very large family of graphs. However, it is not as large as it first appears, as we can assume that n ≥ 2k from the following result.


Lemma 3.1.3
For n ≥ k ≥ i, we have J(n,k,i) ≅ J(n, n−k, n−2k+i).

Proof:
Suppose u = {j1, ..., jk} ∈ Xn,k. Define a function

ψ : Xn,k → Xn,n−k, where ψ(u) = Ω \ u = u^c.

Suppose u ∼ v in J(n,k,i). Then, by definition, |u ∩ v| = i. So we have

|ψ(u) ∩ ψ(v)| = |u^c ∩ v^c|
             = |(u ∪ v)^c|   (by De Morgan's Laws)
             = n − |u ∪ v|
             = n − (2k − i)
             = n − 2k + i.

Therefore ψ(u) ∼ ψ(v) in J(n, n−k, n−2k+i), so ψ preserves adjacency. Also, ψ is clearly a bijection. So ψ is an isomorphism from J(n,k,i) to J(n, n−k, n−2k+i), hence J(n,k,i) ≅ J(n, n−k, n−2k+i). □

For n ≥ 2k, the graphs J(n,k,k−1) are known as Johnson graphs, the graphs J(n,k,0) are known as Kneser graphs and the graphs J(2k−1, k−1, 0) are known as the odd graphs. These families had been investigated before the general definition of a uniform subset graph was given by Chen and Lih [15] in 1987. The Johnson and odd graphs are the ones to which this chapter is devoted, as we shall see that they are distance-transitive.

3.2 The Johnson Graphs J(n,2,1)

Let us first consider a simple example.

Example 3.2.1

Figure 3.1: J(4,2,1)

J(4,2,1) is the smallest Johnson graph. We have

X4,2 = {{1,2}, {1,3}, {1,4}, {2,3}, {2,4}, {3,4}}.

In fact, J(4,2,1) is isomorphic to the octahedron.

As a preview of the next section, we have this result:

Theorem 3.2.2 The Johnson graph J(n,2,1) is distance-transitive.

Proof:
First, note that any permutation of Ω = {1, ..., n} immediately induces a permutation on Xn,2. For example, with n = 4 and σ = (1 2 3 4) ∈ S4, we get

({1,2})σ = {(1)σ, (2)σ} = {2,3}
({1,3})σ = {2,4}
({1,4})σ = {2,1} = {1,2}
({2,3})σ = {3,4}
({2,4})σ = {3,1} = {1,3}
({3,4})σ = {4,1} = {1,4}

Clearly, Sn ≤ Aut(J(n,2,1)) and acts transitively on Xn,2. Hence J(n,2,1) is vertex-transitive.

Secondly, notice that diam(J(n,2,1)) = 2: if {i1,i2} ≁ {j1,j2}, we know that between them lie the vertices {i1,j1}, {i1,j2}, {i2,j1} and {i2,j2}, since these are all adjacent to both {i1,i2} and {j1,j2}. Thus to determine distance-transitivity, we only have two distances to consider: 1 and 2.

1. Suppose d({i1,i2}, {j1,j2}) = 1.
That is, {i1,i2} ∼ {j1,j2}
⇔ |{i1,i2} ∩ {j1,j2}| = 1.
Assume WLOG¹ that i1 = j1 and i2 ≠ j2.
Then, for any σ ∈ Sn, (i1)σ = (j1)σ and (i2)σ ≠ (j2)σ.
So |({i1,i2})σ ∩ ({j1,j2})σ| = 1
⇔ ({i1,i2})σ ∼ ({j1,j2})σ
⇔ d(({i1,i2})σ, ({j1,j2})σ) = 1.

2. Suppose d({i1,i2}, {j1,j2}) = 2.
Then {i1,i2} ≁ {j1,j2}, so i1, i2, j1, j2 are all distinct.
So for any σ ∈ Sn, (i1)σ, (i2)σ, (j1)σ, (j2)σ are also all distinct.
Hence ({i1,i2})σ ≁ ({j1,j2})σ, so d(({i1,i2})σ, ({j1,j2})σ) = 2.

So for any two adjacent vertices (i.e. vertices at distance 1) we can find a permutation σ ∈ Sn which maps them onto any other two adjacent vertices. Likewise, any two non-adjacent vertices (i.e. vertices at distance 2) can be mapped onto any other two non-adjacent vertices. Hence J(n,2,1) is distance-transitive. □

¹Without Loss Of Generality

Note that the graphs J(n,2,1) are sometimes known as the triangle graphs, denoted ∆n.

3.3 The Johnson Graphs J(n,k,k−1)

Now we want to generalise from 2-subsets of Ω to k-subsets, adjacent as vertices when intersecting in a (k−1)-set; that is, the Johnson graphs J(n,k,k−1) (for n ≥ 2k, by 3.1.3). We start with a weaker result than the one we actually want.

Proposition 3.3.1
J(n,k,k−1) is vertex-transitive.

Proof:
As with 2-subsets, Sn has an induced action on k-subsets: for all σ ∈ Sn, ({i1, ..., ik})σ = {(i1)σ, ..., (ik)σ}, and all the (iα)σ are distinct. So for vertices u = {i1, ..., ik} and v = {j1, ..., jk}, any permutation σ sending i1 ↦ j1, ..., ik ↦ jk takes vertex u to vertex v. So J(n,k,k−1) is vertex-transitive. □

We now outline our strategy for showing distance-transitivity:

• Show that permutations fix the size of the intersection of two vertices (when viewed as k-sets);
• Show that there is a 1-1 correspondence between the intersection size and the distance of two vertices;
• Conclude that permutations fix distance.

So we start with this lemma.

Lemma 3.3.2
For any permutation σ ∈ Sn and any pair u, v ∈ Xn,k, |u ∩ v| = |(u)σ ∩ (v)σ|.

Proof:
Suppose |u ∩ v| = |{i1, ..., ik} ∩ {j1, ..., jk}| = m, say. Then WLOG assume that iα = jα for 1 ≤ α ≤ m, and iα ≠ jα otherwise. So for any σ ∈ Sn,

(iα)σ = (jα)σ for 1 ≤ α ≤ m,
(iα)σ ≠ (jα)σ otherwise.

Hence |({i1, ..., ik})σ ∩ ({j1, ..., jk})σ| = m also. So σ preserves the "intersection size" for all σ ∈ Sn. □

The key step is this next one:

Lemma 3.3.3
For any two vertices u, v of J(n,k,k−1), d(u,v) = m if and only if |u ∩ v| = k − m (when regarding u, v as k-sets).

Proof – by induction on m:
Basis: Suppose d(u,v) = 1, i.e. u ∼ v, so by definition we have |u ∩ v| = k − 1.
Induction Hypothesis (IH): Suppose the theorem holds for m ≤ r, i.e. for each m ≤ r, d(u,v) = m if and only if |u ∩ v| = k − m.
Choose vertices u, v, w such that d(u,v) = r and d(v,w) = 1. By the IH, we have |u ∩ v| = k − r, so

u = {s1, ..., s_{k−r}, i1, ..., ir}
v = {s1, ..., s_{k−r}, j1, ..., jr}

where the i's and j's are all distinct. Also, by definition, we have |v ∩ w| = k − 1. So we must have one of the following possibilities for w:

(i) w = {s1, ..., s_{k−r}, j1, ..., j_{r−1}, iα} for some iα ∈ {i1, ..., ir};
(ii) w = {s1, ..., s_{k−r}, j1, ..., j_{r−1}, a} where a ∈ Ω \ (u ∪ v);
(iii) w = {s1, ..., s_{k−r−1}, iα, j1, ..., jr} for some iα ∈ {i1, ..., ir};
(iv) w = {s1, ..., s_{k−r−1}, a, j1, ..., jr} where a ∈ Ω \ (u ∪ v).

In case (i), we have |u ∩ w| = k − (r−1), so by the IH, d(u,w) = r − 1. In cases (ii) and (iii), |u ∩ w| = k − r, so again by the IH we have d(u,w) = r. Thus it remains to check case (iv), where |u ∩ w| = k − (r+1).
We know that there exists a path of length r + 1 from u to w, as there is a path of length r from u to v and because v ∼ w. Also, we know that there is no path of length ≤ r, as by the IH that would give |u ∩ w| ≥ k − r, contradicting |u ∩ w| = k − (r+1). Hence the shortest path from u to w is of length r + 1, so d(u,w) = r + 1, and the result follows by induction. □
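For a small case such as J(6,3,2), Lemma 3.3.3 can be verified directly by comparing breadth-first-search distances with k − |u ∩ v| (an illustrative Python sketch; the helper names are ours):

```python
from collections import deque
from itertools import combinations

# Johnson graph J(n, k, k-1): k-subsets meeting in a (k-1)-set.
def johnson_graph(n, k):
    verts = [frozenset(c) for c in combinations(range(1, n + 1), k)]
    return {u: [v for v in verts if len(u & v) == k - 1] for u in verts}

def bfs_distances(adj, src):
    # Breadth-first search distances from src.
    dist = {src: 0}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

# Lemma 3.3.3 in J(6,3,2): graph distance equals k - |u ∩ v| for every pair.
n, k = 6, 3
adj = johnson_graph(n, k)
lemma_holds = all(dv == k - len(u & v)
                  for u in adj
                  for v, dv in bfs_distances(adj, u).items())
```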

Thus we can now prove our main result of the section:

Theorem 3.3.4
The Johnson graph J(n,k,k−1) is distance-transitive.

Proof:
By Lemmas 3.3.2 and 3.3.3, any permutation σ of {1, ..., n} takes two vertices at distance m to some other pair at distance m. Moreover, for any two pairs of vertices u, v and u′, v′ with d(u,v) = d(u′,v′), the intersections u ∩ v and u′ ∩ v′ have the same size, so there is some permutation σ that takes u, v to u′, v′. Hence J(n,k,k−1) is distance-transitive. □

As the Johnson graph is distance-transitive, it has an intersection array, which we shall now determine.

Theorem 3.3.5
The intersection array of J(n,k,k−1) is given by

ι(J(n,k,k−1)) =
    *        1²             ···   i²              ···   (k−1)²     k²
    0        ···            ···   ···             ···   ···        nk − 2k²
    k(n−k)   (k−1)(n−k−1)   ···   (k−i)(n−k−i)    ···   n−2k+1     *

Proof:
We know from 2.3.5 that it is sufficient to determine bi for i = 0, 1, ..., d−1 and ci for i = 1, ..., d. First, we'll find the ci. Recall that for a fixed v ∈ VΓ, and some u ∈ VΓ such that d(u,v) = i,

ci = |{w ∈ VΓ : d(v,w) = i−1, d(u,w) = 1}|.

Regarding vertices as k-sets, d(u,v) = i ⇔ |u ∩ v| = k − i (by 3.3.3), so

v = {α1, ..., α_{k−i}, β1, ..., βi}, and
u = {α1, ..., α_{k−i}, γ1, ..., γi}.

We want w such that d(v,w) = i − 1 and d(u,w) = 1, i.e. |v ∩ w| = k − (i−1) and |u ∩ w| = k − 1. Then w must contain one of β1, ..., βi in place of one of γ1, ..., γi. There are i choices of βj, and also i choices for which γj it replaces.
Hence ci = i², for i = 1, ..., d.
Now we'll find the bi. Recall that (for u, v as above),

bi = |{w ∈ VΓ : d(v,w) = i+1, d(u,w) = 1}|.

This time we want w such that d(v,w) = i + 1 and d(u,w) = 1, i.e. |v ∩ w| = k − (i+1) and |u ∩ w| = k − 1. So we must have w = {α1, ..., α_{k−i−1}, δ, γ1, ..., γi}, where we've replaced one of the k − i αj's with some δ chosen from outside of u and v, of which there are n − (2k − (k−i)) = n − k − i.
Hence bi = (k−i)(n−k−i), for i = 0, ..., d−1. A quick check shows that b0 = k(n−k), which is the valency of J(n,k,k−1) as required (see 2.3.5 and 3.1.2), and that c1 = 1 as required (2.3.5 again). So we have the result we require. □

So, for example, the intersection array of J(5,2,1) is

ι(J(5,2,1)) =
    *   1   4
    0   3   2
    6   2   *
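The formulas ci = i² and bi = (k−i)(n−k−i) of Theorem 3.3.5 can be cross-checked by brute force. A sketch (plain Python; the helper names are ours) that recovers the array of J(5,2,1) above:

```python
from collections import deque
from itertools import combinations

def johnson_graph(n, k):
    # J(n, k, k-1): k-subsets meeting in a (k-1)-set.
    verts = [frozenset(c) for c in combinations(range(1, n + 1), k)]
    return {u: [v for v in verts if len(u & v) == k - 1] for u in verts}

def bfs_distances(adj, src):
    dist = {src: 0}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def intersection_numbers(adj):
    # Read off b_i and c_i, insisting they are the same for every base pair
    # (this is exactly the distance-regularity condition).
    params = {}
    for u in adj:
        du = bfs_distances(adj, u)
        for v in adj:
            i = du[v]
            ci = sum(1 for w in adj[v] if du[w] == i - 1)
            bi = sum(1 for w in adj[v] if du[w] == i + 1)
            assert params.setdefault(i, (ci, bi)) == (ci, bi)
    d = max(params)
    return ([params[i][1] for i in range(d)],
            [params[i][0] for i in range(1, d + 1)])

n, k = 5, 2
bs, cs = intersection_numbers(johnson_graph(n, k))
```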

3.4 Pretty Pictures

The notation J(n,k,k−1) is a very succinct notation for a collection of objects that get very complicated very quickly. For example, J(6,3,2) is only the fourth Johnson graph (after J(4,2,1), J(5,2,1) and J(6,2,1)), but it has 20 vertices and 90 edges, as the diagram shows.

Figure 3.2: J(6,3,2)

Its diameter is 3, and it has intersection array

ι(J(6,3,2)) =
    *   1   4   9
    0   4   4   0
    9   4   1   *

3.5 The Odd Graphs

The odd graphs are another family of uniform subset graphs: using familiar notation, they are the graphs J(2k−1, k−1, 0). That is, their vertices are all the (k−1)-subsets of a set Ω of size 2k−1, two vertices being adjacent if the two sets are disjoint. By 3.1.2, they have valency (k−1 choose 0)·(k choose k−1) = k. We shall use the less cumbersome symbol Ok to denote J(2k−1, k−1, 0). O3 is the Petersen graph, shown below.

Example 3.5.1

Figure 3.3: The Petersen graph O3

A (realistic?) construction of O6 was given by Biggs [5]: “In the little English hamlet of Croam the consuming passion of the inhabitants is Association Football. In fact, the members of the village football team have become so ruthless in their will to win that no other team will play against them. “Thus the eleven footballers of Croam (who are, incidentally, the only able-bodied men in the village) are forced to arrange their own matches between two teams of five, with the eleventh man as referee. Further, such is the bitterness of recrimination which follows even these matches, that it has proved necessary to rule that only one match can be played with the same teams and the same referee. This rule was originally regarded with some misgiving, as it was felt that it might seriously limit the number of matches which could be played. However, a villager who has a head for figures worked out that there are 1386 different ways of splitting the eleven men into two teams of five and a referee. This number is thought to be adequate but not generous, for the footballers of Croam are dedicated men.”
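The villager's arithmetic checks out: choosing the referee first and then an unordered pair of teams gives 11 · C(10,5)/2 matches, and this is exactly the number of edges of O6. A brief check using Python's math.comb:

```python
from math import comb

# Choose the referee (11 ways), then split the other ten men into two
# unordered teams of five: C(10,5)/2 ways.
matches = 11 * comb(10, 5) // 2

# The same number counted as edges of O6: C(11,5) vertices, each of valency 6,
# and each edge {A, B} of disjoint teams leaves exactly one man as referee.
edges_of_O6 = comb(11, 5) * 6 // 2
```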

The odd graphs have several interesting properties, such as whether Ok is hamiltonian, or how many colours are needed for an edge-colouring (which was the motivation for Biggs’ description of O6 above). See Holton & Sheehan [29] for more details on these problems. We shall prove, of course, that Ok is distance- transitive.

We have completed a great deal of the work for this already in this chapter. Just as with the Johnson graphs, the group Sym(Ω) ≅ S2k−1 has an induced action on the vertices of Ok when these are regarded as subsets of Ω, so S2k−1 ≤ Aut(Ok). (Using the Erdős-Ko-Rado Theorem, it can be shown that Aut(Ok) ≅ S2k−1: see [29].) By 3.3.2, we have that all elements of S2k−1 fix the size of |u ∩ v| for any two vertices (i.e. (k−1)-sets) u and v. So it remains to show that there is a direct relationship between d(u,v) and |u ∩ v|.
Analogous to 3.3.3, we have:

Lemma 3.5.2
Let u, v be vertices of Ok. Then for m ≥ 0,

d(u,v) = 2m ⇔ |u ∩ v| = (k−1) − m, and
d(u,v) = 2m+1 ⇔ |u ∩ v| = m.

(Note that this is only well-defined if m ≠ (k−1) − m for all m, i.e. if (k−1) ≠ 2m for all m. This is achieved when 2m < k − 1.)

Proof:
We proceed by induction on m in the two cases separately.

1. First suppose d(u,v) is even, so d(u,v) = 2m for some m.
Basis: suppose m = 0. Then d(u,v) = 0, i.e. u = v, so |u ∩ v| = |u| = k − 1 = (k−1) − 0.
Induction Hypothesis (IH): suppose the theorem holds for m ≤ r. So d(u,v) = 2r if and only if |u ∩ v| = (k−1) − r.
Now choose a vertex w such that d(u,w) = 2(r+1) and d(v,w) = 2. (Additionally, in the case r = 1 we require explicitly that u ≠ w.) By the IH, |u ∩ v| = (k−1) − r. Also, there exists a vertex x such that d(v,x) = d(x,w) = 1. So by construction, |v ∩ x| = |x ∩ w| = 0, so we have v ⊂ Ω \ x and w ⊂ Ω \ x. Since |v| = |w| = k − 1 and |Ω \ x| = (2k−1) − (k−1) = k, it must be that |v ∩ w| = (k−1) − 1.
So what about |u ∩ w|? We know (by the IH) that |u ∩ v| = (k−1) − r and (by the above) that |v ∩ w| = (k−1) − 1. So w has exactly one element different from those in v, i.e. one element chosen from Ω \ v replacing an element chosen from v. This can be done in four ways. Suppose (for clarity) that |u ∩ v| = s, that u = {x1, ..., xs, u_{s+1}, ..., u_{k−1}} and v = {x1, ..., xs, v_{s+1}, ..., v_{k−1}}. Then we have the following possibilities:

(a) w = {x1, ..., xs, v_{s+1}, ..., v_{k−2}, uj} for some j. Then |u ∩ w| = |u ∩ v| + 1.
(b) w = {x1, ..., xs, v_{s+1}, ..., v_{k−2}, α} for some α ∈ Ω \ (u ∪ v). Then |u ∩ w| = |u ∩ v|.
(c) w = {x1, ..., x_{s−1}, uj, v_{s+1}, ..., v_{k−1}} for some j. Then |u ∩ w| = |u ∩ v| − 1 + 1 = |u ∩ v|.
(d) w = {x1, ..., x_{s−1}, α, v_{s+1}, ..., v_{k−1}} for some α ∈ Ω \ (u ∪ v). Then |u ∩ w| = |u ∩ v| − 1.

In case (a), we have |u ∩ w| = (k−1) − r + 1 = (k−1) − (r−1), so by the IH we have d(u,w) = 2(r−1), contradicting our assumption that d(u,w) = 2(r+1). Similarly, in cases (b) and (c) we obtain |u ∩ w| = (k−1) − r, giving d(u,w) = 2r, another contradiction. So we are left with case (d), where |u ∩ w| = (k−1) − r − 1 = (k−1) − (r+1). Hence d(u,w) = 2(r+1) if and only if |u ∩ w| = (k−1) − (r+1), and the result follows by induction.

2. Now suppose d(u,v) is odd, so it equals 2m + 1 for some m.
Basis: suppose m = 0. Then d(u,v) = 1, i.e. u ∼ v, so by construction |u ∩ v| = 0.
Induction Hypothesis (IH): Suppose the theorem holds for m ≤ r. In particular, this implies d(u,v) = 2r + 1 if and only if |u ∩ v| = r.
Now choose w such that d(u,w) = 2(r+1) + 1 and d(v,w) = 2. By the same arguments as in the first part of the proof, we know that |v ∩ w| = (k−1) − 1, and by the IH we have |u ∩ v| = r. So we have to check the same four cases (a, b, c, d) as in part 1. (Only this time, we'll do them backwards!)
In case (d), we have |u ∩ w| = |u ∩ v| − 1 = r − 1. But by the IH, this gives d(u,w) = 2(r−1) + 1, contradicting the assumption that d(u,w) = 2(r+1) + 1. In cases (c) and (b) we obtain |u ∩ w| = r, giving d(u,w) = 2r + 1, another contradiction. Thus the only way out is case (a), where |u ∩ w| = r + 1. Therefore d(u,w) = 2(r+1) + 1 if and only if |u ∩ w| = r + 1, and the result follows by induction. □

A consequence of the requirement that 2m < k − 1 is that we obtain a bound for the distance between two vertices: d(u,v) < k − 1 if the distance is even, and d(u,v) − 1 < k − 1, so d(u,v) ≤ k − 1, if the distance is odd. Hence the diameter of Ok is k − 1.
We now have the facts we require to prove the next theorem.

Theorem 3.5.3 The odd graph Ok is distance-transitive.

Proof:
We know from Lemma 3.5.2 that if d(u,v) = d(w,x), then |u ∩ v| = |w ∩ x| (regarding u, v, w, x as (k−1)-sets). As with the Johnson graphs, from Lemma 3.3.2 we know that, for any two such pairs, there exists g ∈ S2k−1 such that (u)g = w and (v)g = x. Hence Ok is distance-transitive. □
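For small k, both Lemma 3.5.2 and the basic parameters of Ok can be confirmed by brute force (an illustrative Python sketch; the function names are ours):

```python
from collections import deque
from itertools import combinations

# The odd graph O_k: (k-1)-subsets of a (2k-1)-set, adjacent when disjoint.
def odd_graph(k):
    verts = [frozenset(c) for c in combinations(range(2 * k - 1), k - 1)]
    return {u: [v for v in verts if not (u & v)] for u in verts}

def bfs_distances(adj, src):
    dist = {src: 0}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def lemma_352_holds(k):
    # Even distance 2m should match |u ∩ v| = (k-1) - m,
    # odd distance 2m+1 should match |u ∩ v| = m.
    adj = odd_graph(k)
    for u in adj:
        du = bfs_distances(adj, u)
        for v in adj:
            d, s = du[v], len(u & v)
            if d % 2 == 0 and d != 2 * ((k - 1) - s):
                return False
            if d % 2 == 1 and d != 2 * s + 1:
                return False
    return True
```

For k = 3 this reproduces the Petersen graph (10 vertices of valency 3), and the lemma holds for both O3 and O4.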

All that remains in this chapter is to calculate the intersection array of Ok.

Theorem 3.5.4
The intersection array of Ok is given by

ι(Ok) =
    *   1     1     ···   m     m+1     ···   (k−1)/2   (k−1)/2
    0   0     0     ···   0     0       ···   0         (k+1)/2
    k   k−1   k−1   ···   k−m   k−m−1   ···   (k+1)/2   *

for k odd, and

ι(Ok) =
    *   1     1     ···   m     m+1     ···   k/2 − 1   k/2 − 1   k/2
    0   0     0     ···   0     0       ···   0         0         k/2
    k   k−1   k−1   ···   k−m   k−m−1   ···   k/2 + 1   k/2 + 1   *

for k even.

Proof: As before, we only calculate ci and bi as these determine the intersection array completely. However, we have to consider the cases where i is even and i is odd separately.

1. Suppose i is even, so i = 2m for some m. Fix u, v ∈ VOk such that d(u,v) = 2m, so by Lemma 3.5.2, |u ∩ v| = k − m − 1. Recall that

c2m = |{w ∈ VOk : d(v,w) = 2m − 1, d(u,w) = 1}|
    = |{w ∈ VOk : |v ∩ w| = m − 1, |u ∩ w| = 0}|.

Now w contains m − 1 elements of v and 0 elements of u. So these m − 1 elements of v must be chosen from those elements of v not in u, of which there are (k−1) − ((k−1) − m) = m. Hence there are (m choose m−1) = m choices. The remaining (k−1) − (m−1) = k − m elements of w must be chosen from those outside of u and v, of which there are (2k−1) − (k−1+m) = k − m, so we must include all of them. Hence c2m = m. Now recall that

b2m = |{w ∈ VOk : d(v,w) = 2m + 1, d(u,w) = 1}|
    = |{w ∈ VOk : |v ∩ w| = m, |u ∩ w| = 0}|.

This time, w must include all m elements of v not in u, and k − m − 1 of the k − m elements outside of u and v. So we have (k−m choose k−m−1) = k − m choices, and so b2m = k − m.

2. Now suppose i is odd, so i = 2m + 1 for some m. Fix u, v ∈ VOk such that d(u,v) = 2m + 1, so by Lemma 3.5.2, |u ∩ v| = m. Hence we have

c2m+1 = |{w ∈ VOk : |v ∩ w| = k − m − 1, |u ∩ w| = 0}|.

So w contains k − m − 1 elements of v and no elements of u. But v only contains k − m − 1 elements not in u, so we must include all of these in w. The remaining m elements of w can be chosen from the m + 1 elements outside of u and v in (m+1 choose m) = m + 1 ways. Hence c2m+1 = m + 1. Finally, we have

b2m+1 = |{w ∈ VOk : |v ∩ w| = (k−1) − (m+1), |u ∩ w| = 0}|.

This time, w contains k − m − 2 of the k − m − 1 elements of v that aren't in u, so we have (k−m−1 choose k−m−2) = k − m − 1 choices. We must also include all of the m + 1 elements from outside u and v. Hence b2m+1 = k − m − 1.

So if k is odd, diam(Ok) = k − 1 is even, so we have k − 1 = 2m and so c_{k−1} = (k−1)/2. If k is even, diam(Ok) is odd, so then we have k − 1 = 2m + 1 and then c_{k−1} = (k−2)/2 + 1 = k/2. Hence we have proved the required result. □

Chapter 4

Some Permutation Group Theory

4.1 Primitive and Imprimitive Actions

In this section, we always suppose G is a permutation group acting transitively on a set X.

Definition 4.1.1 A G-congruence on X is a G-invariant equivalence relation on X.

That is, for all g ∈ G and for all x, y ∈ X, we have an equivalence relation ≡ such that x ≡ y if and only if (xg) ≡ (yg). The equivalence classes of ≡ have a special name.

Definition 4.1.2 The equivalence classes of ≡ are called blocks. The set of all blocks is referred to as a block system of X.

Informally, a block is a subset of X whose elements are 'equivalent' in some way that is unaffected by the action of G on X. Every set always has the following two G-congruences:

• the trivial G-congruence, where the blocks are all singletons;
• the universal G-congruence, where there is only one block, namely the whole of X, so x ≡ y for all x, y ∈ X.

In addition, the empty set Ø is always a block. These three types of block are called the trivial blocks.

Definition 4.1.3 If the only blocks are the trivial blocks listed above, we say G acts primitively on X. If there is a non-trivial block system, we say G acts imprimitively on X.


We will encounter examples of primitive and imprimitive groups in chapter 8, where we meet the concept of primitive and imprimitive graphs; there, G is the automorphism group of a distance-transitive graph.
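As a concrete illustration (an example of our own, not from the text): the rotation group C6 acting on six points admits the non-trivial congruences "x ≡ y mod 2" and "x ≡ y mod 3", so it acts imprimitively, whereas the full symmetric group admits neither. A sketch in Python:

```python
from itertools import permutations

# X = {0,...,5}; a G-congruence must satisfy x ≡ y  ⇔  xg ≡ yg for all g in G.
X = range(6)

def is_congruence(equiv, group):
    return all(equiv(x, y) == equiv(g[x], g[y])
               for g in group for x in X for y in X)

rotations = [tuple((x + r) % 6 for x in X) for r in range(6)]   # the cyclic group C6
full_sym = list(permutations(X))                                # the symmetric group S6

mod2 = lambda x, y: x % 2 == y % 2   # blocks {0,2,4}, {1,3,5}
mod3 = lambda x, y: x % 3 == y % 3   # blocks {0,3}, {1,4}, {2,5}
```

So C6 acts imprimitively on six points, with two different non-trivial block systems, while the 2-transitive group S6 acts primitively.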

4.2 Direct and Semi-direct Products

Recall the direct product of two groups G and H is

G × H = {(g,h) | g ∈ G, h ∈ H},

with binary operation (g,h)(g′,h′) = (gg′, hh′).

The semi-direct product can be viewed as a generalisation of this.

Definition 4.2.1 Let H, K be groups, with H acting on K in such a way that the group structure of K is preserved, so for each x ∈ H the map u ↦ u^x (u ∈ K) is an automorphism of K. (Note that the action of H on K is not specified directly.) We define the semi-direct product of K by H as

K ⋊ H = {(u,x) | u ∈ K, x ∈ H}

with binary operation (u,x)(v,y) = (u·v^(x⁻¹), xy).

Check that this is a group:

• Associativity is the hardest axiom to check – for (u,x), (v,y), (w,z) ∈ K ⋊ H, we get

((u,x)(v,y))(w,z) = (u·v^(x⁻¹), xy)(w,z)
                  = (u·v^(x⁻¹)·w^((xy)⁻¹), xyz)
                  = (u·(v·w^(y⁻¹))^(x⁻¹), xyz)
                  = (u,x)(v·w^(y⁻¹), yz)
                  = (u,x)((v,y)(w,z)).

• The identity is obviously (eK, eH), as we have

(u,x)(eK,eH) = (u·eK^(x⁻¹), x·eH) = (u,x)
and (eK,eH)(u,x) = (eK·u^(eH⁻¹), eH·x) = (u,x).

• The inverse of (u,x) ∈ K ⋊ H is ((u⁻¹)^x, x⁻¹), since

(u,x)((u⁻¹)^x, x⁻¹) = (u·(u⁻¹)^(x·x⁻¹), x·x⁻¹) = (u·u⁻¹, x·x⁻¹) = (eK, eH)

and similarly for ((u⁻¹)^x, x⁻¹)(u,x).

Observation 4.2.2 The direct product is a special case of the semi-direct product, when each x ∈ H acts as the identity map on K, i.e. for all u ∈ K, u^x = u.

4.3 Wreath Products

The wreath product is an example of the semi-direct product.

Definition 4.3.1 Suppose we have a group G acting on a set X. Then consider the cartesian product Y = X1 × X2 × ··· × Xn, where each Xi = X (the subscripts are just labels). Clearly the direct product G^n = G1 × G2 × ··· × Gn (where each Gi = G, with Gj being the copy of G acting on Xj) has an induced action on Y. Now let H be a group of permutations acting on the labels {1, ..., n}, so H has an induced action on both Y (so H ≤ Sym(Y)) and G^n. We can form a semi-direct product of G^n by H, namely G^n ⋊ H, where the action of H on G^n is as above. This is called the wreath product of G by H, written G Wr H. It has two actions on Y: the "G" part acts on each of the Xi in turn (which is an imprimitive action) and the "H" part moves around the Xi, taking Xi to Xj, where j = ih for some h ∈ H (a primitive action).

A classic example of a wreath product is the automorphism group of the Ham- ming graph, which we will meet in the next chapter. For more on semi-direct and wreath products in general, consult Cameron [13] or Dixon & Mortimer [17].
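As a sanity check on these definitions (an illustrative sketch of ours), the wreath product Z3 Wr S3 can be realised concretely by its imprimitive action on Y = (Z3)³: shift each co-ordinate, then permute the co-ordinate positions. Its order should be |G|^n · |H| = 3³ · 6 = 162:

```python
from itertools import product, permutations

# Z3 Wr S3 in its imprimitive action on Y = (Z3)^3: an element (u, x) adds u_i
# to the i-th co-ordinate and then permutes the three co-ordinate positions.
Y = list(product(range(3), repeat=3))
index = {t: i for i, t in enumerate(Y)}

def as_permutation(u, x):
    # Return the induced permutation of Y, encoded as a tuple of indices.
    def image(t):
        shifted = tuple((t[i] + u[i]) % 3 for i in range(3))
        return tuple(shifted[x[j]] for j in range(3))
    return tuple(index[image(t)] for t in Y)

elements = {as_permutation(u, x)
            for u in product(range(3), repeat=3)
            for x in permutations(range(3))}

# The set of induced permutations is closed under composition (so it is a
# subgroup of Sym(Y)) and has order |G|^n * |H| = 3^3 * 6.
closed = all(tuple(q[p[i]] for i in range(27)) in elements
             for p in elements for q in elements)
```

The 162 induced permutations form a group: the action is faithful, and composing a shift-then-permute with another shift-then-permute gives a shift-then-permute again, exactly as in the semi-direct product multiplication above.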

4.4 Projective Groups

When dealing with a vector space V, the first group one should think of is the general linear group, GL(V). In particular, if V is an n-dimensional vector space over Fq, the finite field with q elements, this would be GLn(Fq), which we shall denote by GL(n,q). This group acts on V; the action is not transitive on the whole space, but it is transitive on V \ {0}. (The zero vector is fixed by all linear transformations of V.)

This group has an induced action on the k-dimensional subspaces of V. More precisely, it acts transitively on ordered bases of a k-space. However, this action is not faithful: matrices of the form λIn, i.e. scalar multiples of the identity matrix, each have the effect of multiplying all basis vectors by the constant λ, thus keeping the space spanned by that basis fixed. These matrices form the centre, Z, of GL(n,q), which is a normal subgroup. So we have:

Definition 4.4.1 The factor group GL(n,q)/Z is known as the projective general linear group, denoted by PGL(n,q).

Recall that GL(V) has a normal subgroup SL(V), the special linear group of V. In terms of matrices, this is the set of linear transformations of V that have determinant 1. Again, if V = Fq^n, we denote this group by SL(n,q). As with GL(n,q), the centre of SL(n,q) consists of diagonal matrices, but only those with determinant 1. Because the determinant of a diagonal matrix is the product of the diagonal elements, these are the matrices λIn such that λ^n = 1. That is, Z(SL(n,q)) = SL(n,q) ∩ Z = N, say (where Z is the centre of GL(n,q) as above). Hence the next definition:

Definition 4.4.2 The factor group SL(n,q)/N is called the projective special linear group, denoted by PSL(n,q).
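The orders of these groups are easy to compute (a sketch of ours; it uses the standard facts |Z| = q − 1 and |Z(SL(n,q))| = gcd(n, q−1), the number of solutions of λ^n = 1 in Fq*):

```python
from math import gcd, prod

# |GL(n,q)| = (q^n - 1)(q^n - q)...(q^n - q^(n-1)); the centre Z has order q - 1.
def gl_order(n, q):
    return prod(q**n - q**i for i in range(n))

def pgl_order(n, q):
    return gl_order(n, q) // (q - 1)

def psl_order(n, q):
    sl = gl_order(n, q) // (q - 1)           # |SL(n,q)|
    return sl // gcd(n, q - 1)               # divide by |Z(SL(n,q))|
```

For instance, PGL(2,3) has order 24 and PSL(2,3) has order 12, while PSL(2,5) has order 60 and PSL(3,2) has order 168.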

We will encounter these groups later, as the automorphism groups of the Grassmann graphs (chapter 6) and also as the groups of some other 'sporadic' distance-transitive graphs in chapter 11.

Chapter 5

Hamming Graphs

5.1 Introduction

The Hamming graphs are defined in a similar way to the Johnson graphs we saw in chapter 3. Again, we consider a set Ω of size n (typically, we think of Zn, the set of integers modulo n, or the finite field with q elements Fq), but this time we consider Ωd, the cartesian product of d copies of Ω.

Definition 5.1.1 The Hamming graph, denoted H(d,n), has vertex set Ωd, with two vertices being adjacent if, when regarded as ordered d-tuples, they differ in exactly one co-ordinate.

It is clear that two vertices are at distance k if and only if they differ in exactly k co-ordinates. (Recall that the Hamming distance between two vectors is defined as the number of co-ordinates in which they differ.) Let us consider the following example:

Figure 5.1: The Hamming graph H(2,3)


Two properties of the Hamming graphs are immediately apparent. First, the diameter of H(d,n) is d, as the maximum distance occurs when the two vertices (regarded as ordered d-tuples) differ in all d co-ordinates. Secondly, the valency of H(d,n) is d(n−1). So for our example H(2,3), the diameter is 2 and the valency is 2(3−1) = 4.

To show that the Hamming graphs are distance-transitive, we first need to know their automorphism groups. As was mentioned in section 4.3, we have:

Lemma 5.2.1

Sn Wr Sd ≤ Aut(H(d,n)).

Proof:
It is quite clear that the direct product of d copies of Sn, i.e. Sn^d = Sn × Sn × ··· × Sn, acts on Ωd in a way that preserves adjacency. Suppose we have u, v ∈ Ωd with u ∼ v. Regarding u and v as d-tuples, we have u = (u1, ..., ud) and v = (v1, ..., vd), with

ui ≠ vi for i = j,
ui = vi for i ≠ j,

for 1 ≤ i ≤ d and some particular j. Take some σ = (σ1, ..., σd) ∈ Sn^d. Then we get

uσ = (u1, u2, ..., ud)(σ1, σ2, ..., σd) = (u1σ1, u2σ2, ..., udσd)

and likewise for vσ. As the σi are all permutations, we then have

uiσi ≠ viσi for i = j,
uiσi = viσi for i ≠ j,

and so adjacency is preserved. At first it may seem that these are the only automorphisms. (These are sufficient to show transitivity.) However, consider a permutation ρ on d symbols, permuting the order of the d co-ordinates. So we have

uiρ ≠ viρ for iρ = jρ,
uiρ = viρ for iρ ≠ jρ,

for 1 ≤ i ≤ d. So Sd also acts on Ωd, also preserving adjacency. Hence we get

Sn^d ⋊ Sd ≅ Sn Wr Sd ≤ Aut(H(d,n)). □

It is clear from the above discussion that Sn Wr Sd acts vertex-transitively on H(d,n). To show the action is distance-transitive is also fairly straightforward. Formally, we have:

Theorem 5.2.2 The Hamming graphs H(d,n) are distance-transitive.

Proof:
We just have to recall that two vertices are at distance k if and only if they differ in exactly k co-ordinates. Then we can proceed using exactly the same argument as above, with (for some σ ∈ Sn^d)

uiσi ≠ viσi for i = j1, ..., jk,
uiσi = viσi for i ≠ j1, ..., jk,

for 1 ≤ i ≤ d, and some j1, ..., jk ∈ {1, ..., d} with 1 ≤ k ≤ d. (We also get a similar result for ρ ∈ Sd.) So Sn Wr Sd acts transitively on pairs of vertices at distance k. Hence H(d,n) is distance-transitive. □

As with the Johnson graphs and odd graphs, we know that H(d,n) has an in- tersection array, which is determined below.

Theorem 5.2.3
The intersection array of H(d,n) is given by

ι(H(d,n)) =
    *         1            ···   j             ···   d−1    d
    0         ···          ···   ···           ···   ···    d(n−2)
    d(n−1)    (d−1)(n−1)   ···   (d−j)(n−1)    ···   n−1    *

Proof:
Consider two vertices u, v of H(d,n), with d(u,v) = j, so u = (u1, ..., ud) and v = (v1, ..., vd) differ in exactly j co-ordinates. Suppose WLOG that u1 ≠ v1, ..., uj ≠ vj and u_{j+1} = v_{j+1}, ..., ud = vd.

Then consider some w ∈ Γ_{j−1}(v) ∩ Γ1(u), i.e. w differs in one co-ordinate (the mth, say) from u, and in j − 1 co-ordinates from v. Hence that mth co-ordinate must be one of u1, ..., uj, so there are j choices for it. Once that co-ordinate m is chosen, what is put there is fixed; it must be vm, so that we have d(v,w) = j − 1. Hence there are j choices for w, so cj = |Γ_{j−1}(v) ∩ Γ1(u)| = j.

Now consider some x ∈ Γ_{j+1}(v) ∩ Γ1(u), i.e. x differs in one co-ordinate (the lth, say) from u, and in j + 1 co-ordinates from v. This time, the lth co-ordinate must be one of u_{j+1} = v_{j+1}, ..., ud = vd, so there are d − j choices for this; it can be changed to anything, so there are n − 1 choices of new entry. Hence there are (d−j)(n−1) choices for x (d − j places, n − 1 choices of what to put there). So bj = |Γ_{j+1}(v) ∩ Γ1(u)| = (d−j)(n−1).

Combining these two facts, we obtain the required result. □

For example, we have

ι(H(2,3)) =
    *   1   2
    0   1   2
    4   2   *

5.3 The k-Cubes

The k-cubes, Qk, are a well-known and frequently studied family of graphs. They are in fact the Hamming graphs H(k,2), so their vertices can be regarded as binary k-tuples, adjacent if their Hamming distance is 1. By the preceding sections, we already know that they are distance-transitive, have both diameter and valency k, and their intersection arrays are given by

ι(Qk) =
    *   1     ···   k−1   k
    0   0     ···   0     0
    k   k−1   ···   1     *

The more common alternative construction for Qk is given recursively as follows: starting with Q1 ≅ K2, take two copies of Qk−1 (which we'll label Qk−1 and Q′k−1). Then for each vertex v ∈ VQk−1, join it to the corresponding vertex v′ ∈ VQ′k−1.
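A quick computational check of these claims for Q4 (illustrative Python; the helper names are ours):

```python
from collections import deque
from itertools import product

# Q_k as the Hamming graph H(k,2): binary k-tuples at Hamming distance 1.
def cube(k):
    verts = list(product((0, 1), repeat=k))
    return {u: [v for v in verts if sum(a != b for a, b in zip(u, v)) == 1]
            for u in verts}

def bfs_distances(adj, src):
    dist = {src: 0}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

k = 4
adj = cube(k)
origin = (0,) * k
dist = bfs_distances(adj, origin)

# Check c_i = i, a_i = 0 (Q_k is bipartite) and b_i = k - i from `origin`.
array_ok = all(
    sum(1 for w in adj[v] if dist[w] == dist[v] - 1) == dist[v] and
    sum(1 for w in adj[v] if dist[w] == dist[v]) == 0 and
    sum(1 for w in adj[v] if dist[w] == dist[v] + 1) == k - dist[v]
    for v in adj)
```

The same brute-force check also confirms that graph distance in Qk is exactly Hamming distance, and that the diameter is k.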

Thus Q2 is just the square, Q3 is what would normally be referred to as the cube, and Q4 is the graph shown below.

ι(Q4) =
    *   1   2   3   4
    0   0   0   0   0
    4   3   2   1   *

Figure 5.2: The 4-cube Q4

We’ll meet Q3 and Q4 again in chapter 11. Chapter 6

Grassmann Graphs

6.1 Introduction

The Grassmann graphs are another family of graphs defined similarly to the Johnson graphs. This time, however, we are concerned with subspaces of a vector space, rather than with subsets of a set.

Definition 6.1.1 Let V be an n-dimensional vector space over Fq, the finite field with q elements, where q is a power of a prime. Then let Vk be the set of all k-dimensional subspaces of V. We define the Grassmann graph, denoted by G(q,n,k), as having vertex set Vk, with two vertices (i.e. subspaces) U, W being adjacent if and only if dim(U ∩ W) = k − 1.

There are numerous similarities between the Grassmann and Johnson graphs (more so than with the Hamming graphs). For example, analogous to Lemma 3.1.3, we have the following:

Lemma 6.1.2
The graphs G(q,n,k) and G(q,n,n−k) are isomorphic.

Proof:
Let ⟨ , ⟩ be some inner product on V. Then for any subspace W ⊆ V, define W⊥ = {v ∈ V | ⟨v,w⟩ = 0 for all w ∈ W}, the orthogonal complement of W in V. By standard linear algebra, dim(W⊥) = dim(V) − dim(W) and dim(W⊥ ∩ U⊥) = dim(V) − dim(W + U). Now suppose dim(V) = n, dim(U) = dim(W) = k and dim(U ∩ W) = k − 1. Then we have dim(W⊥) = n − k and dim(W⊥ ∩ U⊥) = (n−k) − k + (k−1) = n − k − 1.

So we have shown that the function

ψ : Vk → Vn−k, where ψ(W) = W⊥,

is a bijection which preserves adjacency between G(q,n,k) and G(q,n,n−k). Hence G(q,n,k) ≅ G(q,n,n−k). □

As a consequence of the above lemma, we need only consider the cases where n ≥ 2k. Again, this is analogous to what we had for the Johnson graphs. As with the Johnson graphs, we first want to determine properties such as the number of vertices and the degree of each vertex for the Grassmann graphs. To do this, we make use of the following definition:

Definition 6.1.3 The q-ary Gaussian binomial coefficient is given by

[n choose k]_q = ((q^n − 1)(q^(n−1) − 1) ··· (q^(n−k+1) − 1)) / ((q^k − 1)(q^(k−1) − 1) ··· (q − 1)).

For a deeper discussion, and further examples, of this, see Goulden & Jackson [23].

Theorem 6.1.4
1. The number of vertices of G(q,n,k) is [n choose k]_q.
2. The degree of each vertex of G(q,n,k) is q [n−k choose 1]_q [k choose k−1]_q.

Proof:
1. The number of vertices of G(q,n,k) is, by definition, the number of k-dimensional subspaces (to be succinct, we shall call these k-spaces) of V (which has dimension n). This is given by the number of ordered k-tuples of linearly independent vectors in V (?), divided by the number of possible ordered bases for a k-space, which is the number of ordered k-tuples of linearly independent vectors in (F_q)^k (†).

(?) is given by (q^n − 1)(q^n − q)(q^n − q^2) ··· (q^n − q^{k−1}). (For our first vector we are allowed anything except the zero vector; for the second we are allowed anything except the q scalar multiples of the first; for the third we are allowed anything except the q^2 linear combinations of the first two, and so on.)

(†) is given by (q^k − 1)(q^k − q)(q^k − q^2) ··· (q^k − q^{k−1}), by exactly the same argument as with (?) above.

Dividing (?) by (†), we get

((q^n − 1)(q^n − q)(q^n − q^2) ··· (q^n − q^{k−1})) / ((q^k − 1)(q^k − q)(q^k − q^2) ··· (q^k − q^{k−1})),

then by cancelling various powers of q, we obtain

((q^n − 1)(q^{n−1} − 1)(q^{n−2} − 1) ··· (q^{n−k+1} − 1)) / ((q^k − 1)(q^{k−1} − 1)(q^{k−2} − 1) ··· (q − 1)) = [n choose k]_q.

Hence there are [n choose k]_q k-spaces in V, so G(q,n,k) has [n choose k]_q vertices.

2. The degree of a vertex U of G(q,n,k) is the number of k-spaces W (with U ≠ W) such that dim(U ∩ W) = k − 1.

Let Z = U ∩ W. By the above, there are [k choose k−1]_q choices of Z. We can then generate a k-space W by adjoining any element of V \ U to a basis for Z. There are q^n − q^k such elements. However, several of these elements will generate the same k-space; in fact, any of the q^k − q^{k−1} elements of W \ Z will do this. So we divide through by q^k − q^{k−1}, giving us

(q^n − q^k) / (q^k − q^{k−1}) = (q^{n−k+1} − q) / (q − 1) = q (q^{n−k} − 1)/(q − 1) = q [n−k choose 1]_q.

Hence there are q [n−k choose 1]_q [k choose k−1]_q k-spaces W such that U ∩ W is a (k−1)-space, and so this number is the degree of the vertex U. □

The aim of this chapter is to show that the Grassmann graphs are distance-transitive. We will do this using the same broad strategy as in section 3.3, where we dealt with the Johnson graphs.
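The counts in Theorem 6.1.4 can be verified by brute force in a small case. The sketch below (all names ours) enumerates the 2-spaces of (F_2)^4 directly and checks the vertex count and degree of G(2,4,2) against the two formulas.

```python
from itertools import product

def gauss(n, k, q):
    """q-ary Gaussian binomial coefficient [n choose k]_q."""
    num = den = 1
    for i in range(k):
        num *= q**(n - i) - 1
        den *= q**(i + 1) - 1
    return num // den

# A 2-space of (F_2)^4 is {0, a, b, a+b} for independent a, b; over F_2,
# any two distinct non-zero vectors are automatically independent.
zero = (0, 0, 0, 0)
nonzero = [v for v in product((0, 1), repeat=4) if v != zero]
add = lambda x, y: tuple((s + t) % 2 for s, t in zip(x, y))

subspaces = {frozenset({zero, a, b, add(a, b)})
             for a in nonzero for b in nonzero if a != b}

U = next(iter(subspaces))
# U, W adjacent iff dim(U ∩ W) = 1, i.e. the intersection is {0, z}
degree = sum(1 for W in subspaces if W != U and len(U & W) == 2)

print(len(subspaces), degree)      # 35 18
```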

6.2 Distance-Transitivity

To begin with, we need to know some details about the automorphism group of the Grassmann graph G(q,n,k). From section 4.4, this should be quite straightforward.

Lemma 6.2.1

PGL(n,q) ≤ Aut(G(q,n,k)).

Proof: In section 4.4, we saw that PGL(n,q) acts faithfully on bases of k-dimensional subspaces of V. Hence it has an induced action on the vertices of G(q,n,k). □

Because the action of PGL(n,q) is transitive on V_k (the set of all k-spaces in V), we can see that G(q,n,k) is vertex-transitive.

As with the Johnson graphs, we prove the distance-transitivity of the Grassmann graph by showing that there is a direct relationship between the distance between two vertices and the dimension of their intersection when regarded as k-spaces. The following lemma is the most crucial step.

Lemma 6.2.2 Let U, W be two vertices of G(q,n,k) such that d(U,W) = m. Then, regarding U, W as k-spaces, dim(U ∩ W) = k − m.

Proof – by induction on m:
Basis: suppose m = 0. That is, d(U,W) = 0, i.e. U = W, so dim(U ∩ W) = dim(U) = k = k − 0.
Induction Hypothesis (IH): suppose the theorem holds for m ≤ r; that is, for each m ≤ r, d(U,W) = m if and only if dim(U ∩ W) = k − m.
Choose U, W, X such that d(U,W) = r, d(W,X) = 1 and d(U,X) = r + 1. By the IH, dim(U ∩ W) = k − r and, by definition, dim(W ∩ X) = k − 1. We want to find dim(U ∩ X). Clearly, we have dim(U ∩ X) < k − r, as otherwise the IH would contradict d(U,X) = r + 1.
Recall that for any vector spaces A, B over the same field,

dim(A + B) = dim(A) + dim(B) − dim(A ∩ B).   (?)

Consider the space U + W. By (?), we have

dim(U + W) = dim(U) + dim(W) − dim(U ∩ W) = k + k − (k − r) = k + r.

Also by (?), dim(W + X) = k + k − (k − 1) = k + 1. Take a basis {w_1,...,w_k} for W and extend it to bases for U + W and W + X:

U + W = span{w_1,...,w_k,u_1,...,u_r},  W + X = span{w_1,...,w_k,x}.

Because x is linearly independent of u_1,...,u_r (if x lay in U + W we would have dim(U + X) ≤ k + r and hence dim(U ∩ X) ≥ k − r, contradicting the above), we have

U + W + X = span{w_1,...,w_k,u_1,...,u_r,x},

so dim(U + W + X) = k + r + 1. Now (U + X) ⊆ (U + W + X), so therefore dim(U + X) ≤ k + r + 1. Again using (?), we have

dim(U ∩ X) = dim(U) + dim(X) − dim(U + X) ≥ k + k − (k + r + 1) = k − (r + 1).

But we know already that dim(U ∩ X) < k − r, i.e. dim(U ∩ X) ≤ k − (r + 1). Hence it must be that dim(U ∩ X) = k − (r + 1), and the result follows by induction. □

A consequence of this result is that we now know the diameter of G(q,n,k):

Corollary 6.2.3 The diameter of G(q,n,k) is k.

Proof: From Lemma 6.2.2, the maximum distance between two vertices U, W occurs when the dimension of U ∩ W is at its minimum, i.e. when k − m = 0, so m = k. □

It is now a relatively straightforward matter to prove the following:

Theorem 6.2.4 G(q,n,k) is distance-transitive.

Proof: From Lemma 6.2.2, it follows immediately that for any vertices U, W, X, Y of G(q,n,k), d(U,W) = d(X,Y) if and only if dim(U ∩ W) = dim(X ∩ Y).

Take a basis {v_1,...,v_{k−m}} for U ∩ W, and extend it to bases

{v_1,...,v_{k−m},u_{k−m+1},...,u_k} and {v_1,...,v_{k−m},w_{k−m+1},...,w_k}

for U and W respectively. Combine these to get a basis for U + W, and then extend this to obtain a basis for V,

{v_1,...,v_{k−m}, u_{k−m+1},...,u_k, w_{k−m+1},...,w_k, a_{k+m+1},...,a_n}.

Similarly, we take a basis {z_1,...,z_{k−m}} for X ∩ Y, and extend it to bases

{z_1,...,z_{k−m},x_{k−m+1},...,x_k} and {z_1,...,z_{k−m},y_{k−m+1},...,y_k}

for X and Y, and then obtain another basis for V,

{z_1,...,z_{k−m}, x_{k−m+1},...,x_k, y_{k−m+1},...,y_k, b_{k+m+1},...,b_n}.

Now, PGL(n,q) acts transitively on ordered bases of V. Therefore there is an element g ∈ PGL(n,q) that takes the first set of basis vectors onto the second. In particular,

({v_1,...,v_{k−m},u_{k−m+1},...,u_k})g = {z_1,...,z_{k−m},x_{k−m+1},...,x_k}
and ({v_1,...,v_{k−m},w_{k−m+1},...,w_k})g = {z_1,...,z_{k−m},y_{k−m+1},...,y_k}.

That is, (U)g = X and (W)g = Y. Hence, for any U, W, X, Y with d(U,W) = d(X,Y), there exists g ∈ PGL(n,q) ≤ Aut(G(q,n,k)) such that (U)g = X and (W)g = Y. So G(q,n,k) is distance-transitive. □

6.3 Intersection Arrays

Of course, the next step is to calculate the intersection array of G(q,n,k). As before, we only need calculate b_j and c_j, as these determine the whole array.

Theorem 6.3.1 For the Grassmann graph G(q,n,k), the intersection array ι(G(q,n,k)) is given by:

1. c_j = ([j choose 1]_q)^2, and

2. b_j = q^{2j+1} [n−k−j choose 1]_q [k−j choose 1]_q.

Proof:
1. Recall that the vertices of G(q,n,k) are the k-dimensional subspaces of an n-dimensional vector space V over F_q, two vertices being adjacent if as subspaces they intersect in a (k−1)-space. Fix a vertex (i.e. k-space) U, and suppose we have a vertex W at distance j from U; that is, dim(U ∩ W) = k − j. Then c_j is precisely the number of k-spaces X such that dim(U ∩ X) = k − (j − 1) and dim(X ∩ W) = k − 1.

Let Z = U ∩ W, and take a basis {z_1,...,z_{k−j}} for Z. Extend this to bases {z_1,...,z_{k−j},u_1,...,u_j} and {z_1,...,z_{k−j},w_1,...,w_j} for U and W respectively. We construct X from W by "throwing away" a 1-dimensional subspace of W not contained in Z and replacing it with a 1-space of U not contained in Z. Now the complements of Z in W and in U each have dimension j, so by 6.1.4, part 1, there are [j choose 1]_q choices for the 1-space we discard, and [j choose 1]_q choices for its replacement.

Hence c_j = ([j choose 1]_q)^2.

2. Fix a k-space U as above. Let k_j be the number of vertices W such that d(U,W) = j (in the sense of 2.3.6). We shall calculate b_j by calculating k_j and using the formula (cf. 2.3.6)

b_j = k_{j+1} c_{j+1} / k_j.   (‡)

So we want to find the number of k-spaces W such that dim(U ∩ W) = k − j. Let U ∩ W = Z_0. By 6.1.4, part 1, there are [k choose k−j]_q = [k choose j]_q choices for Z_0. We construct W by adjoining to Z_0 an ordered j-tuple of linearly independent vectors in V \ U. There are (q^n − q^k) choices for the first vector, which we

adjoin to Z_0 to obtain Z_1, say. For the second vector, we choose one of the (q^n − q^{k+1}) vectors outside U + Z_1. We continue this until we have adjoined j vectors. So altogether, there are

(q^n − q^k)(q^n − q^{k+1}) ··· (q^n − q^{k+j−1})

possible j-tuples of linearly independent vectors in V \ U. However, having generated a particular k-space W, we now have to count the number of different j-tuples which give rise to that particular space. Any j-tuple of linearly independent vectors in W \ Z_0 will do this, and there are

(q^k − q^{k−j})(q^k − q^{k−j+1}) ··· (q^k − q^{k−1})

of these. Hence there are

((q^n − q^k)(q^n − q^{k+1}) ··· (q^n − q^{k+j−1})) / ((q^k − q^{k−j})(q^k − q^{k−j+1}) ··· (q^k − q^{k−1}))

choices for extending Z_0 to a k-space W.

Rearranging this, we obtain

(q^k(q^{n−k} − 1) · q^{k+1}(q^{n−k−1} − 1) ··· q^{k+j−1}(q^{n−k−j+1} − 1)) / (q^{k−j}(q^j − 1) · q^{k−j+1}(q^{j−1} − 1) ··· q^{k−1}(q − 1))
= q^{j^2} ((q^{n−k} − 1)(q^{n−k−1} − 1) ··· (q^{n−k−j+1} − 1)) / ((q^j − 1)(q^{j−1} − 1) ··· (q − 1))
= q^{j^2} [n−k choose j]_q

and hence k_j = q^{j^2} [n−k choose j]_q [k choose j]_q.

All we have to do now is apply the formula (‡) to calculate b_j. (This is easier said than done!) We have

k_{j+1} = q^{(j+1)^2} [n−k choose j+1]_q [k choose j+1]_q,  c_{j+1} = ([j+1 choose 1]_q)^2

and k_j = q^{j^2} [n−k choose j]_q [k choose j]_q.

So we have

b_j = (q^{(j+1)^2} [n−k choose j+1]_q [k choose j+1]_q ([j+1 choose 1]_q)^2) / (q^{j^2} [n−k choose j]_q [k choose j]_q)

= q^{2j+1} (∏_{i=0}^{j}(q^{n−k−i} − 1) · ∏_{i=0}^{j}(q^{k−i} − 1) · (q^{j+1} − 1)^2 · ∏_{i=1}^{j}(q^i − 1)^2) / (∏_{i=1}^{j+1}(q^i − 1)^2 · (q − 1)^2 · ∏_{i=0}^{j−1}(q^{n−k−i} − 1) · ∏_{i=0}^{j−1}(q^{k−i} − 1))

= q^{2j+1} (q^{n−k−j} − 1)(q^{k−j} − 1) / (q − 1)^2

= q^{2j+1} [n−k−j choose 1]_q [k−j choose 1]_q.  □

(I’ll bet you’re glad that’s over with!)
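Fortunately, Theorem 6.3.1 can be checked by machine. The sketch below (helper names ours) builds G(2,4,2) explicitly, reads off its intersection numbers by breadth-first search, and compares them with the formulas for b_j and c_j.

```python
from itertools import product
from collections import deque

def gauss(n, k, q):
    num = den = 1
    for i in range(k):
        num *= q**(n - i) - 1
        den *= q**(i + 1) - 1
    return num // den

def formula_array(q, n, k):
    """(b_0,...,b_{k-1}) and (c_1,...,c_k) as in Theorems 6.1.4 and 6.3.1."""
    b = [q**(2*j + 1) * gauss(n - k - j, 1, q) * gauss(k - j, 1, q)
         for j in range(k)]
    c = [gauss(j, 1, q)**2 for j in range(1, k + 1)]
    return b, c

# Build G(2,4,2): vertices are the 2-spaces of (F_2)^4
zero = (0, 0, 0, 0)
nonzero = [v for v in product((0, 1), repeat=4) if v != zero]
add = lambda x, y: tuple((s + t) % 2 for s, t in zip(x, y))
verts = list({frozenset({zero, a, b, add(a, b)})
              for a in nonzero for b in nonzero if a != b})
adj = {U: [W for W in verts if W != U and len(U & W) == 2] for U in verts}

dist = {verts[0]: 0}
queue = deque([verts[0]])
while queue:
    v = queue.popleft()
    for w in adj[v]:
        if w not in dist:
            dist[w] = dist[v] + 1
            queue.append(w)

d = max(dist.values())
reps = {j: next(v for v in verts if dist[v] == j) for j in range(d + 1)}
b_emp = [sum(1 for w in adj[reps[j]] if dist[w] == j + 1) for j in range(d)]
c_emp = [sum(1 for w in adj[reps[j]] if dist[w] == j - 1) for j in range(1, d + 1)]

print(b_emp, c_emp)    # [18, 8] [1, 9]
```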

6.4 Linking the Grassmann and Johnson Graphs

Throughout this chapter, we have been reminded of the analogy between the Grassmann graphs and the Johnson graphs we met in chapter 3. In fact, the Johnson graphs can be thought of as a limiting case (or the 'thin case' – see [11] §9.1) of the Grassmann graphs. This is in part explained by the following proposition¹:

Proposition 6.4.1

lim_{q→1} [n choose k]_q = (n choose k).

Proof: Recall that

[n choose k]_q = ∏_{i=1}^{k} (q^{n−i+1} − 1)/(q^i − 1),

so therefore we have

lim_{q→1} [n choose k]_q = lim_{q→1} ∏_{i=1}^{k} (q^{n−i+1} − 1)/(q^i − 1)
= ∏_{i=1}^{k} lim_{q→1} (q^{n−i+1} − 1)/(q^i − 1)

¹I had to put some Real Analysis in somewhere, didn't I(!)?

= ∏_{i=1}^{k} lim_{q→1} [(d/dq)(q^{n−i+1} − 1)] / [(d/dq)(q^i − 1)]   (using l'Hôpital's rule)
= ∏_{i=1}^{k} lim_{q→1} ((n−i+1) q^{n−i}) / (i q^{i−1})
= ∏_{i=1}^{k} (n−i+1)/i
= n(n−1) ··· (n−k+1) / k!
= n! / (k!(n−k)!)
= (n choose k).  □

So by taking the limit as q tends to 1 in all our formulas involving [n choose k]_q (e.g. in the intersection array) associated with the Grassmann graphs G(q,n,k), we obtain exactly those we had for the Johnson graphs J(n,k,k−1). Although considering a vector space over a field with one element is nonsense (as such a field cannot exist: a field must have at least two elements), we can (non-rigorously) think of this n-dimensional object as merely being a set with n elements, and its subspaces as subsets. In other words, we have exactly the construction of a Johnson graph. One could probably investigate a so-called q-analogue of the odd graphs, too.
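The limit in Proposition 6.4.1 can also be observed numerically; in this sketch (function name ours) we evaluate the defining product at a value of q close to 1 and compare with the ordinary binomial coefficient.

```python
from math import comb, prod

def gauss_real(n, k, q):
    """[n choose k]_q as a product; makes sense for any real q != 1."""
    return prod((q**(n - i + 1) - 1) / (q**i - 1) for i in range(1, k + 1))

# as q -> 1, [7 choose 3]_q approaches C(7,3) = 35
assert abs(gauss_real(7, 3, 1.000001) - comb(7, 3)) < 0.01
```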

Brouwer, Cohen & Neumaier [11] refer to the Johnson, odd, Hamming and Grassmann graphs as examples of families of graphs with classical parameters. They give many more properties and related families than we have here; in effect, we have only covered the 'bare essentials'.

Chapter 7

Linear Algebra and Distance-Transitive Graphs

7.1 The Spectrum and the Adjacency Algebra

In the previous chapter, we defined a family of graphs using linear algebra. In this chapter, we go the other way round: we start with a graph, and obtain some linear algebra from it. Suppose Γ is simple with vertex set VΓ = {v_1,...,v_n}.

Definition 7.1.1 The adjacency matrix of Γ is the n × n matrix A(Γ), with entries

A_{ij} = 1 if v_i ∼ v_j, and A_{ij} = 0 otherwise.

Clearly, A(Γ) is a symmetric matrix. We will regard A as a matrix over R, but in some contexts it also makes sense to regard A as a matrix over F2, as the entries are always 0 or 1. (See Godsil & Royle [22] for more on this.)

Examples 7.1.2

0 1 1 1  1 0 1 1  A(K ) = 4  1 1 0 1     1 1 1 0 

Figure 7.1: The complete graph K4


A(Oct) =
( 0 1 1 1 1 0 )
( 1 0 1 0 1 1 )
( 1 1 0 1 0 1 )
( 1 0 1 0 1 1 )
( 1 1 0 1 0 1 )
( 0 1 1 1 1 0 )

Figure 7.2: The octahedron Oct

The next definitions follow naturally from the above.

Definition 7.1.3 The eigenvalues of Γ are defined to be the eigenvalues of A(Γ). Similarly, the eigenvectors of Γ are the eigenvectors of A(Γ).

Since A(Γ) is symmetric, Γ has n real eigenvalues, but these are not necessarily distinct. Hence the following definition is non-trivial.

Definition 7.1.4 The spectrum of Γ, denoted by Spec(Γ), is the set of distinct eigenvalues of A(Γ), together with their (algebraic) multiplicities as roots of the characteristic polynomial χ(Γ) = det(λI − A(Γ)).

Examples 7.1.5 We can now calculate the spectra of our two examples in 7.1.2:

• In the case of K_4, we find that χ(K_4) = (λ − 3)(λ + 1)^3, so

Spec(K_4) = ( 3  −1 )
            ( 1   3 ).

• In the case of the octahedron, we obtain χ(Oct) = (λ − 4)(λ + 2)^2 λ^3, so

Spec(Oct) = ( 4  −2  0 )
            ( 1   2  3 ).

Note: to calculate the spectrum of a graph on more than three vertices, it is useful to have access to a computer algebra package, such as MAPLE¹. It's not much fun trying to calculate eigenvalues of a matrix bigger than 3 by 3 by hand!

¹MAPLE is a registered trade mark of Waterloo Maple Inc., 57 Erb St. West, Waterloo, Ontario, Canada N2L 6C2.
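A computer algebra package is not strictly necessary here: the characteristic polynomial of an integer matrix can be computed exactly with the Faddeev-LeVerrier recursion, as in the following Python sketch (all names ours; exact arithmetic via `fractions`).

```python
from fractions import Fraction

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def char_poly(A):
    """Coefficients [1, c_{n-1}, ..., c_0] of det(lam*I - A),
    computed by the Faddeev-LeVerrier recursion."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    M = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    coeffs = [Fraction(1)]
    for k in range(1, n + 1):
        AM = mat_mul(A, M)
        c = -sum(AM[i][i] for i in range(n)) / k
        coeffs.append(c)
        M = [[AM[i][j] + (c if i == j else 0) for j in range(n)]
             for i in range(n)]
    return [int(x) for x in coeffs]

A_K4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
antipode = [5, 3, 4, 1, 2, 0]   # octahedron: adjacent to all but the antipode
A_Oct = [[0 if j in (i, antipode[i]) else 1 for j in range(6)]
         for i in range(6)]

print(char_poly(A_K4))    # [1, 0, -6, -8, -3], i.e. (lam-3)(lam+1)^3
print(char_poly(A_Oct))   # [1, 0, -12, -16, 0, 0, 0], i.e. (lam-4)(lam+2)^2 lam^3
```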

In the case where Γ is connected and k-regular, we have the following spectral properties.

Proposition 7.1.6 Let Γ be a connected, k-regular graph on n vertices. Then the following hold:

1. k ∈ Spec(Γ);

2. k has multiplicity 1;

3. for all λ ∈ Spec(Γ), |λ| ≤ k.

Proof:
1. Let v = (1,1,...,1)^T. Since Γ is k-regular, the row sum of each row of A(Γ) is k. By inspection, we see that Av = (k,k,...,k)^T = kv. Hence k ∈ Spec(Γ).

2. Suppose Ax = kx, and let x_j be an entry of x with largest absolute value. Then

(Ax)_j = kx_j,  i.e.  ∑_{i=1}^{n} a_{ji} x_i = kx_j.

Now let v_1,...,v_k be the k vertices adjacent to the vertex v_j. By the definition of A, we have

∑_{i=1}^{n} a_{ji} x_i = x_{v_1} + x_{v_2} + ··· + x_{v_k} = kx_j.

Since x_j is maximal, we have x_{v_i} ≤ x_j for each 1 ≤ i ≤ k, and it follows that x_{v_1} = x_{v_2} = ··· = x_{v_k} = x_j. We then apply this technique to all vertices adjacent to each of the v_i, repeating until all vertices of Γ are covered. (We can do this since Γ is connected.) In all cases, we can deduce that x_h = x_j for 1 ≤ h ≤ n. Thus x is a scalar multiple of v = (1,1,...,1)^T, so the multiplicity of k is 1.

3. Let λ be an eigenvalue of Γ and x a corresponding eigenvector, so Ax = λx. As before, let x_j be an entry of x with largest absolute value. Similarly to above, we have λx_j = x_{v_1} + x_{v_2} + ··· + x_{v_k}. Thus

|λ||x_j| = |x_{v_1} + x_{v_2} + ··· + x_{v_k}|
≤ |x_{v_1}| + |x_{v_2}| + ··· + |x_{v_k}|
≤ |x_j| + |x_j| + ··· + |x_j|
= k|x_j|.

Hence |λ| ≤ k. □

Things become slightly more abstract with this next definition.

Definition 7.1.7 The adjacency algebra, A(Γ), is the algebra of polynomials in A(Γ).

In other words, A(Γ) is the vector space of such polynomials, but with an addi- tional operation of multiplication, so there is a ring structure as well as the vector space structure (this is what is meant by an algebra). As a vector space, A(Γ) is finite-dimensional; by the Cayley-Hamilton Theorem, A satisfies χ(A) = 0, so this dimension is at most n.

Every element of A(Γ) is a linear combination of powers of A. Because of this, the algebra is commutative under multiplication. Also, these powers of A have a graph-theoretical interpretation, as the following theorem shows.

Theorem 7.1.8 The number of paths from v_i to v_j of length l in Γ is the (i,j)th entry in A^l.

Proof – by induction on l:
Basis: if l = 0, we have A^0 = I, so the result holds (as we define there to be a path of length 0 from v_i to v_i for all i, and no v_i v_j path of length 0 for i ≠ j). If l = 1, we have A^1 = A(Γ), the adjacency matrix of Γ, so the result holds by the definition of this.
Induction step: assume the result holds for l = k; that is, the number of paths of length k from v_i to v_j is given by the (i,j)th entry of A^k. Now consider the set of paths of length k + 1 from v_i to v_j. There is a 1-1 correspondence between this set and the set of all paths of length k from v_i to v_h, where v_h ∼ v_j. So the number of such paths is given by:

∑_{v_h ∼ v_j} (A^k)_{ih} = ∑_{h=1}^{n} (A^k)_{ih} a_{hj}   (since a_{hj} = 1 if v_h ∼ v_j, and a_{hj} = 0 otherwise)
= (A^{k+1})_{ij}, the (i,j)th entry of A^{k+1}.

Hence, by induction, the number of paths of length l from v_i to v_j is given by the (i,j)th entry of A^l. □
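Theorem 7.1.8 is easy to test by exhaustion on a small graph. The sketch below (names ours) compares the (0,1) entry of A^3 for K_4 with a direct enumeration of the length-3 paths from v_0 to v_1 (sequences of consecutive edges; vertices may repeat).

```python
from itertools import product

A = [[0 if i == j else 1 for j in range(4)] for i in range(4)]  # K_4

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(X, l):
    n = len(X)
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(l):
        R = mat_mul(R, X)
    return R

def count_paths(i, j, l):
    """Enumerate every vertex sequence of length l+1 from v_i to v_j
    and keep those whose consecutive pairs are all edges."""
    return sum(1 for mid in product(range(4), repeat=l - 1)
               if all(A[a][b] for a, b in zip((i,) + mid, mid + (j,))))

print(mat_pow(A, 3)[0][1], count_paths(0, 1, 3))   # 7 7
```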

We now use this combinatorial condition to determine a lower bound for the dimension of A(Γ).

Proposition 7.1.9 Suppose Γ is connected, with diam(Γ) = d. Then the dimension of A(Γ) is at least d + 1.

Proof: Choose x, y ∈ VΓ such that d(x,y) = d. Then there exists a geodesic w_0 w_1 ··· w_d (where w_0 = x and w_d = y). (We also use these labels in A(Γ).) Then for each i (1 ≤ i ≤ d) there is at least one path of length i from w_0 to w_i and no shorter path. By 7.1.8 above, this implies there is a non-zero entry in the (0,i)th position of A^i, while the (0,i)th entries of A^0 = I, A, A^2, ..., A^{i−1} are all zero. Consequently, A^i is not linearly dependent on A^0, ..., A^{i−1}, and hence {I, A, A^2, ..., A^d} is a linearly independent set of size d + 1 in A(Γ). Thus dim(A(Γ)) ≥ d + 1. □

Corollary 7.1.10 A has at least d + 1 distinct eigenvalues.

Proof: Since A is symmetric, it is diagonalisable, so the dimension of A(Γ) equals the number of distinct eigenvalues of A. The result then follows immediately from 7.1.9. □

In the next section, we will see that if Γ is distance-regular (and thus also if Γ is distance-transitive), then the dimension of the adjacency algebra is exactly d +1.

7.2 Distance Matrices

In this section, we will construct a basis for the adjacency algebra of a distance- regular graph in terms of the distance matrices, as found by Damerell [16] in 1973. These are defined as follows:

Definition 7.2.1 Let Γ be a connected graph with diameter d. Then, for i = 0,...,d, the ith distance matrix A_i has entries

(A_i)_{rs} = 1 if d(v_r,v_s) = i, and (A_i)_{rs} = 0 otherwise,

for all v_r, v_s ∈ VΓ.

It is clear from the definition that A_0 = I and A_1 = A(Γ), the adjacency matrix. Also, we have A_0 + A_1 + ··· + A_d = J, where J is the all-ones matrix. It is clear that {A_0, A_1, ..., A_d} is a linearly independent set, as exactly one of the matrices has a non-zero entry in the (r,s)th position for each pair r, s. We now determine a useful property of the distance matrices of a distance-regular graph.
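For a concrete example, the following sketch (names ours) computes the distance matrices of the octahedron by breadth-first search and confirms that A_0 = I and that A_0 + A_1 + A_2 = J.

```python
from collections import deque

# Octahedron: vertex v is adjacent to every vertex except its antipode
antipode = [5, 3, 4, 1, 2, 0]
adj = [[w for w in range(6) if w not in (v, antipode[v])] for v in range(6)]

def distances(src):
    dist = {src: 0}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

D = [distances(v) for v in range(6)]
A_i = [[[1 if D[r][s] == i else 0 for s in range(6)] for r in range(6)]
       for i in range(3)]                 # diameter is 2, so A_0, A_1, A_2

# A_0 = I, and the distance matrices sum to the all-ones matrix J
assert A_i[0] == [[1 if r == s else 0 for s in range(6)] for r in range(6)]
assert all(A_i[0][r][s] + A_i[1][r][s] + A_i[2][r][s] == 1
           for r in range(6) for s in range(6))
```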

Lemma 7.2.2 (Damerell 1973) Suppose Γ is distance-regular, with adjacency matrix A(Γ). Then

AA_i = b_{i−1}A_{i−1} + a_iA_i + c_{i+1}A_{i+1},  for 1 ≤ i ≤ d − 1;

also AA_0 = a_0A_0 + c_1A_1 and AA_d = b_{d−1}A_{d−1} + a_dA_d.

Proof: Consider the (r,s)th entry of the matrices on each side. (AA_i)_{rs} is the number of vertices w such that d(v_r,w) = 1 and d(v_s,w) = i, i.e. |Γ_1(v_r) ∩ Γ_i(v_s)|. So there are three possibilities for d(v_r,v_s): i − 1, i or i + 1. Hence

(AA_i)_{rs} = b_{i−1} if d(v_r,v_s) = i − 1; a_i if d(v_r,v_s) = i; c_{i+1} if d(v_r,v_s) = i + 1; and 0 otherwise
= b_{i−1}(A_{i−1})_{rs} + a_i(A_i)_{rs} + c_{i+1}(A_{i+1})_{rs}
= (b_{i−1}A_{i−1} + a_iA_i + c_{i+1}A_{i+1})_{rs},

and so the result follows. For AA_0 and AA_d, the proof is almost identical. □

(Note that a_0A_0 + c_1A_1 = 0·A_0 + 1·A_1 = A = AI = AA_0, as required.)
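Lemma 7.2.2 can be checked directly on a small distance-regular graph. Here we use the Petersen graph, whose intersection array is {3, 2; 1, 1} (so b_0 = 3, a_1 = 0, c_2 = 1); the code and names are ours.

```python
from itertools import combinations
from collections import deque

# Petersen graph: vertices are the 2-subsets of {0,...,4}, adjacent when disjoint
verts = [frozenset(p) for p in combinations(range(5), 2)]
n = len(verts)
A = [[1 if not (verts[r] & verts[s]) else 0 for s in range(n)] for r in range(n)]

def dist_row(r):
    dist = {r: 0}
    queue = deque([r])
    while queue:
        v = queue.popleft()
        for w in range(n):
            if A[v][w] and w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return [dist[s] for s in range(n)]

D = [dist_row(r) for r in range(n)]
A_ = [[[1 if D[r][s] == i else 0 for s in range(n)] for r in range(n)]
      for i in range(3)]                       # A_0, A_1, A_2

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Lemma 7.2.2 with i = 1:  A A_1 = b_0 A_0 + a_1 A_1 + c_2 A_2
b0, a1, c2 = 3, 0, 1
lhs = mat_mul(A, A_[1])
rhs = [[b0 * A_[0][r][s] + a1 * A_[1][r][s] + c2 * A_[2][r][s]
        for s in range(n)] for r in range(n)]
assert lhs == rhs
```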

A consequence of the lemma above is the following:

Lemma 7.2.3 (Damerell 1973) The distance matrix A_i (for 0 ≤ i ≤ d) of a distance-regular graph Γ is a polynomial p_i(A) of degree i in A, and so is an element of A(Γ).

Proof – by induction on i:
By definition (see 7.2.1), A_0 = I = A^0 and A_1 = A are polynomials in A of degree 0 and 1 respectively, and are elements of A(Γ) (so p_0(A) = I and p_1(A) = A). Now suppose A_0 = p_0(A), ..., A_j = p_j(A) ∈ A(Γ). Then, by 7.2.2 above,

A_{j+1} = (1/c_{j+1}) (AA_j − a_jA_j − b_{j−1}A_{j−1})
= (1/c_{j+1}) (A p_j(A) − a_j p_j(A) − b_{j−1} p_{j−1}(A)),

which is a polynomial in A of degree j + 1. Hence A_{j+1} = p_{j+1}(A) ∈ A(Γ), and the result follows by induction. □

We now have the tools we need to prove the main theorem of this section.

Theorem 7.2.4 (Damerell 1973) Suppose Γ is a distance-regular graph of degree k and diameter d. Then A(Γ) has dimension d + 1 and has basis {A_0, ..., A_d}.

Proof: First consider the matrix A − kI. Since the row sums of A are all k and the diagonal elements are all 0, the row sums of A − kI are all 0. Thus we have

(A − kI)(A_0 + A_1 + ··· + A_d) = (A − kI)J = 0.   (†)

By the lemma above, each A_i is a polynomial in A of degree i, so the left-hand side of (†) is a polynomial in A of degree d + 1. However, any polynomial in A of degree ≤ d is non-zero, as I, A, ..., A^d are all linearly independent (by 7.1.9), so the polynomial min(A) = (A − kI)(A_0 + A_1 + ··· + A_d) is the minimum polynomial of A in A(Γ). Hence the dimension of A(Γ) is d + 1. Since {A_0, A_1, ..., A_d} is a linearly independent set of size d + 1 in A(Γ), it forms a basis. □

However, this isn’t the only basis for A(Γ) that we know.

Corollary 7.2.5 {I, A, A^2, ..., A^d} is a basis for A(Γ).

Proof: By 7.1.9, {I, A, A^2, ..., A^d} is a linearly independent set of size d + 1 in A(Γ), and since dim(A(Γ)) = d + 1 by the theorem above, it must form a basis for A(Γ). □

7.3 The Intersection Matrix

Let X be an element of A(Γ). Because we know two bases, {I, A, A^2, ..., A^d} and {A_0, A_1, A_2, ..., A_d}, for A(Γ), we can write X in two ways,

X = ∑_{i=0}^{d} r_i A^i = ∑_{i=0}^{d} s_i A_i,

where the r_i, s_i are some constants. It is the second of these that interests us here.

Recall from Lemma 7.2.2 that A_iA = b_{i−1}A_{i−1} + a_iA_i + c_{i+1}A_{i+1} (†). (Using the first basis, it is clear that A(Γ) is commutative.) Consider the linear transformation τ of A(Γ) which sends X to XA. By (†), we have

XA = (∑_{i=0}^{d} s_i A_i) A = ∑_{i=0}^{d} s_i (A_iA) = ∑_{i=0}^{d} s_i (b_{i−1}A_{i−1} + a_iA_i + c_{i+1}A_{i+1}).

Thus we have represented τ by the matrix

B(Γ) =
( 0   1                           )
( k   a_1  c_2                    )
(     b_1  a_2  c_3               )
(          b_2   ·    ·           )
(                ·    ·    c_d    )
(                   b_{d−1}  a_d  )

with respect to the second basis, {A_0, A_1, ..., A_d}.

The matrix B(Γ) bears an uncanny resemblance to the intersection array ι(Γ) of Γ. The next definition is therefore unsurprising.

Definition 7.3.1 The matrix B(Γ), as shown above, is called the intersection matrix of Γ.
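Assembling B(Γ) from an intersection array is mechanical; a sketch (names ours), using the Petersen graph's array {3, 2; 1, 1}:

```python
# Intersection array {3, 2; 1, 1} of the Petersen graph: k = 3, d = 2
b, c = [3, 2], [1, 1]              # b_0, b_1 and c_1, c_2
k, d = b[0], len(c)
a = [0, k - b[1] - c[0], k - c[1]] # a_0 = 0, a_1 = 0, a_2 = 2

B = [[0] * (d + 1) for _ in range(d + 1)]
for j in range(d + 1):
    B[j][j] = a[j]                 # diagonal: a_0, ..., a_d
for j in range(d):
    B[j + 1][j] = b[j]             # subdiagonal: b_0 = k, ..., b_{d-1}
    B[j][j + 1] = c[j]             # superdiagonal: c_1, ..., c_d

print(B)                           # [[0, 1, 0], [3, 0, 1], [0, 2, 2]]
# every column sums to k: (1,...,1) is a left eigenvector for k,
# as used in the proof of Proposition 7.3.4 below
assert all(sum(B[i][j] for i in range(d + 1)) == k for j in range(d + 1))
```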

Since A and B both correspond to the same linear transformation τ of A(Γ), they must therefore have the same eigenvalues. A consequence of Theorem 7.2.4 is that A has exactly d + 1 distinct eigenvalues, and as B has at most d + 1, each eigenvalue of B has multiplicity 1. However, as eigenvalues of A, they have multiplicities ≥ 1, as the multiplicities must sum to n. We will show that it is possible to calculate these multiplicities directly from B.

Let λ be an unspecified eigenvalue of B. Then there is a right eigenvector v(λ) = (v_0(λ), v_1(λ), ..., v_d(λ)) corresponding to λ. That is, v(λ) is a solution of the system Bv(λ) = λv(λ). Assume that v(λ) is the unique standard eigenvector corresponding to λ (a vector is standard if its first entry is 1), i.e. the unique solution with v_0(λ) = 1. (We can do this because B is tridiagonal, where v_0 = 0 would imply v(λ) = 0.) So we have

Bv(λ) = ( v_1(λ), k + a_1v_1(λ) + c_2v_2(λ), b_1v_1(λ) + a_2v_2(λ) + c_3v_3(λ), ..., b_{i−1}v_{i−1}(λ) + a_iv_i(λ) + c_{i+1}v_{i+1}(λ), ..., b_{d−1}v_{d−1}(λ) + a_dv_d(λ) )^T
= λ ( 1, v_1(λ), v_2(λ), ..., v_d(λ) )^T.

So we have a sequence {v_i(λ)} as follows:

v_0(λ) = 1   (by assumption, v(λ) is standard)
v_1(λ) = λ
...
λv_i(λ) = b_{i−1}v_{i−1}(λ) + a_iv_i(λ) + c_{i+1}v_{i+1}(λ)

and so on. Note that the ith term of the sequence {v_i(λ)} is a polynomial in λ of degree i. This gives us the following lemma:

Lemma 7.3.2

A_i = v_i(A).

Proof: Substituting A for λ in the definition of vi(λ) above, we obtain

Av_i(A) = b_{i−1}v_{i−1}(A) + a_iv_i(A) + c_{i+1}v_{i+1}(A).

Comparing this with Lemma 7.2.2, we have A_i = v_i(A). □

There are also d + 1 left eigenvectors of B: corresponding to each eigenvalue λ of B we have u(λ)^T satisfying u(λ)^T B = λu(λ)^T.

Let us now give names to the specific eigenvalues of B. Since Γ is k-regular, by Proposition 7.1.6 k is the largest eigenvalue of A and thus also of B. Then label the remaining eigenvalues as follows:

k = λ_0 > λ_1 > λ_2 > ··· > λ_d.

Similarly, let v_0,...,v_d and u_0,...,u_d be the corresponding standard right and left eigenvectors.

It seems natural that these ui and vi are related in some way. In fact, we have the following result.

Lemma 7.3.3 For 0 ≤ j ≤ d, (v_i)_j = k_j(u_i)_j, where k_j is as in 2.3.6.

Proof: Define K to be the diagonal matrix

K = diag(k_0, k_1, k_2, ..., k_d).

From Theorem 2.3.6, we have b_{j−1}k_{j−1} = c_jk_j, so the matrix BK is symmetric. Hence

BK = (BK)^T = K^T B^T = KB^T.   (∗)

Suppose v_i and u_i^T are corresponding right and left eigenvectors of B, that is,

Bv_i = λ_iv_i and u_i^T B = λ_iu_i^T.   (†)

Hence

(BK)u_i = (KB^T)u_i   (from (∗))
= K(u_i^T B)^T
= K(λ_iu_i^T)^T   (from (†))
= λ_iKu_i,

i.e. B(Ku_i) = λ_i(Ku_i). In other words, Ku_i is a right eigenvector of B corresponding to λ_i. Since k_0 = 1, we get that (Ku_i)_0 = 1 also, so Ku_i is standard. But v_i is the unique standard right eigenvector corresponding to λ_i, so it follows that v_i = Ku_i, and hence (v_i)_j = (Ku_i)_j = k_j(u_i)_j. □

Now suppose ⟨ , ⟩ is the usual inner product on R^{d+1}. We now prove two useful results about the inner products of our eigenvectors u_i, v_i.

Proposition 7.3.4

1. For i ≠ h, ⟨u_i, v_h⟩ = 0.

2. ⟨u_0, v_0⟩ = n, the number of vertices of Γ.

Proof:

1. We have λ_i⟨u_i,v_h⟩ = (λ_iu_i^T)v_h = (u_i^T B)v_h = u_i^T(Bv_h) = u_i^T(λ_hv_h) = λ_h⟨u_i,v_h⟩. But for i ≠ h, λ_i ≠ λ_h, as the eigenvalues are all distinct, so the only way out is to have ⟨u_i,v_h⟩ = 0.

2. First, we notice that u_0^T = (1,1,1,...,1):

(1,1,1,...,1)B = (k, 1 + a_1 + b_1, c_2 + a_2 + b_2, ..., c_d + a_d) = (k,k,k,...,k) = ku_0^T.

By Lemma 7.3.3 above, (v_0)_j = k_j(u_0)_j, so v_0 = (1, k, k_2, ..., k_d), and thus ⟨u_0,v_0⟩ = 1 + k + k_2 + ··· + k_d = n. □

We can now move on to one of the most important results of the project. It gives us a formula for the multiplicities of the eigenvalues of A, in terms of information provided solely by B.

Theorem 7.3.5 (Biggs 1970) Suppose k = λ_0 > λ_1 > ··· > λ_d are the eigenvalues of both A and B, and u_i, v_i are the standard left and right eigenvectors of B corresponding to λ_i. Then the multiplicity m(λ_i) of λ_i as an eigenvalue of A is given by the formula

m(λ_i) = ⟨u_0,v_0⟩ / ⟨u_i,v_i⟩.

Proof: Define elements L_i ∈ A(Γ) as follows:

L_i = ∑_{j=0}^{d} (u_i)_j A_j.

We will calculate the trace of L_i in two ways.

1. We have

tr(L_i) = tr(∑_{j=0}^{d} (u_i)_j A_j) = ∑_{j=0}^{d} (u_i)_j tr(A_j).

Now, for j = 1,...,d, tr(A_j) = 0, as all diagonal entries are 0, and tr(A_0) = tr(I) = n. Hence tr(L_i) = (u_i)_0 n.

But each u_i is standard, so (u_i)_0 = 1, and thus tr(L_i) = n.

2. Recall from Lemma 7.3.2 that A_j = v_j(A). Thus the eigenvalues of A_j are the eigenvalues of v_j(A), which are v_j(λ_h) for 0 ≤ h ≤ d. Clearly (and crucially!), the multiplicity of v_j(λ_h) as an eigenvalue of A_j is equal to the multiplicity of λ_h as an eigenvalue of A.

Also, recall that the sum of the eigenvalues of any square matrix is precisely the trace of that matrix. Hence we have

tr(A_j) = ∑_{h=0}^{d} m(λ_h) v_j(λ_h) = ∑_{h=0}^{d} m(λ_h)(v_h)_j,

and so

tr(L_i) = ∑_{j=0}^{d} (u_i)_j (∑_{h=0}^{d} m(λ_h)(v_h)_j) = ∑_{h=0}^{d} m(λ_h) (∑_{j=0}^{d} (u_i)_j(v_h)_j) = ∑_{h=0}^{d} m(λ_h) ⟨u_i,v_h⟩.

But (by 7.3.4), for i ≠ h, ⟨u_i,v_h⟩ = 0, so

tr(L_i) = m(λ_i) ⟨u_i,v_i⟩.

By equating parts (1) and (2), and by using 7.3.4, we have

tr(L_i) = m(λ_i) ⟨u_i,v_i⟩ = n = ⟨u_0,v_0⟩,

so we obtain

m(λ_i) = ⟨u_0,v_0⟩ / ⟨u_i,v_i⟩,

as required. □
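Theorem 7.3.5 turns multiplicity computation into exact rational arithmetic. The sketch below (names ours) runs the tridiagonal recurrence for the standard right eigenvectors of B for the Petersen graph (array {3, 2; 1, 1}, eigenvalues 3, 1, −2), uses (u_i)_j = (v_i)_j / k_j from Lemma 7.3.3, and recovers the multiplicities 1, 5 and 4.

```python
from fractions import Fraction as F

b, c = [3, 2], [1, 1]          # Petersen: b_0, b_1 and c_1, c_2
k, d = b[0], len(c)
a = [0] + [k - b[i] - c[i - 1] for i in range(1, d)] + [k - c[d - 1]]

ks = [F(1)]                    # k_0, k_1, ..., k_d
for j in range(d):
    ks.append(ks[-1] * b[j] / c[j])
n = sum(ks)                    # number of vertices: 1 + 3 + 6 = 10

def v(lam):
    """Standard right eigenvector (v_0(lam), ..., v_d(lam)) of B."""
    vec = [F(1), F(lam)]
    for i in range(1, d):
        vec.append(((lam - a[i]) * vec[i] - b[i - 1] * vec[i - 1]) / c[i])
    return vec

def multiplicity(lam):
    # m(lam) = <u_0,v_0> / <u,v>, with (u)_j = (v)_j / k_j
    return n / sum(x * x / kj for x, kj in zip(v(lam), ks))

print([int(multiplicity(l)) for l in (3, 1, -2)])   # [1, 5, 4]
```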

The above result was obtained by Biggs around 1970 (as he puts it in [3]), “sneaking in at the back door of a theory developed by J.S. Frame, H. Wielandt and D.G. Higman, for which a basic reference is [27]”. The treatment given here is an amalgam of those found in [3], [4], [8] and [30].

7.4 Algebraic Constraints on the Intersection Array

The result obtained in Theorem 7.3.5 has the following interpretation. Suppose we have an arbitrary array of integers

ι = ( ∗    1    c_2  ···  c_{d−1}  c_d )
    ( a_0  a_1  a_2  ···  a_{d−1}  a_d )
    ( k    b_1  b_2  ···  b_{d−1}  ∗   )

We can rewrite this array to form the matrix

B =
( 0   1                           )
( k   a_1  c_2                    )
(     b_1  a_2  c_3               )
(          b_2   ·    ·           )
(                ·    ·    c_d    )
(                   b_{d−1}  a_d  )

If ι is the intersection array of a distance-regular graph Γ, then B is its intersection matrix, and thus the numbers

⟨u_0,v_0⟩ / ⟨u_i,v_i⟩

are the multiplicities of the eigenvalues of Γ, so must be positive integers. This gives an important necessary condition for ι to be an intersection array. Combining this with the results of 2.3.6 and 2.3.8, we have the following definition:

Definition 7.4.1 A (d + 1) × (d + 1) tridiagonal matrix B (with non-negative integer entries) of the form

B =
( 0   1                           )
( k   a_1  c_2                    )
(     b_1  a_2  c_3               )
(          b_2   ·    ·           )
(                ·    ·    c_d    )
(                   b_{d−1}  a_d  )

is said to be feasible if the following conditions are satisfied:

1. c_i + a_i + b_i = k (for 1 ≤ i ≤ d − 1), and c_d + a_d = k;

2. 1 ≤ c_2 ≤ ··· ≤ c_d and k ≥ b_1 ≥ ··· ≥ b_{d−1};

3. for 2 ≤ j ≤ d, the numbers

k_j = (k b_1 ··· b_{j−1}) / (1 · c_2 ··· c_j)

are positive integers;

4. for n = 1 + k + k_2 + ··· + k_d, nk is even;

5. for u_i, v_i as given in 7.3.5, the numbers

⟨u_0,v_0⟩ / ⟨u_i,v_i⟩

are positive integers.

Similarly, an array of positive integers

ι = ( ∗    1    c_2  ···  c_{d−1}  c_d )
    ( a_0  a_1  a_2  ···  a_{d−1}  a_d )
    ( k    b_1  b_2  ···  b_{d−1}  ∗   )

is said to be feasible if it corresponds to a feasible matrix.
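Conditions (1)–(4) are purely arithmetic, so they are easy to automate; condition (5) additionally needs the eigenvalues of B and is not attempted in this sketch (all names ours).

```python
from fractions import Fraction

def feasible_1_to_4(b, c):
    """Check conditions (1)-(4) of Definition 7.4.1 for the array
    {b_0 = k, b_1, ..., b_{d-1}; c_1 = 1, c_2, ..., c_d}.
    Condition (5), integral multiplicities, is not checked here."""
    k, d = b[0], len(c)
    a = [0] + [k - b[i] - c[i - 1] for i in range(1, d)] + [k - c[d - 1]]
    if any(x < 0 for x in a):                # (1): the a_i defined by the
        return False                         #      column sums are >= 0
    if c[0] != 1 or list(c) != sorted(c):    # (2): 1 = c_1 <= c_2 <= ...
        return False
    if list(b) != sorted(b, reverse=True):   # (2): k >= b_1 >= ...
        return False
    ks = [Fraction(1)]
    for j in range(d):                       # (3): k_j positive integers
        ks.append(ks[-1] * b[j] / c[j])
        if ks[-1].denominator != 1 or ks[-1] <= 0:
            return False
    n = sum(ks)
    return (n * k) % 2 == 0                  # (4): nk is even

print(feasible_1_to_4([3, 2, 1], [1, 1, 3]))   # True: Example 7.4.3 below
                                               # fails only at condition (5)
```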

It should be emphasised that feasibility is not a sufficient condition for ι to be the intersection array of some distance-regular graph. Hence the following defini- tions are non-trivial.

Definitions 7.4.2

• A feasible array ι is said to be realisable if it is the intersection array of some distance-regular graph.

• An array is called redundant if it is feasible but not realisable.

We now give some examples.

Example 7.4.3 The array

ι = ( ∗  1  1  3 )
    ( 0  0  1  0 )
    ( 3  2  1  ∗ )

is not feasible: we have k = 3, so clearly (1) and (2) are satisfied. We obtain k_2 = 6, k_3 = 2 and n = 1 + 3 + 6 + 2 = 12, so (3) and (4) are also satisfied. It remains to check (5). The characteristic polynomial of the tridiagonal matrix B corresponding to ι is (λ − 3)(λ + 1)(λ^2 + λ − 3), so the eigenvalues are 3, −1 and (1/2)(−1 ± √13). Corresponding to λ_0 = 3, we obtain u_0^T = (1,1,1,1) and v_0^T = (1,3,6,2), so ⟨u_0,v_0⟩ = 1 + 3 + 6 + 2 = 12.

Now let λ_1 = (1/2)(−1 + √13), and note that λ_1^2 = 3 − λ_1. After several lines of calculation, we obtain

v_1 = (1, λ_1, −λ_1, −1)^T and u_1 = (1, λ_1/3, −λ_1/6, −1/2)^T,

so therefore

⟨u_1,v_1⟩ = 1 + λ_1^2/3 + λ_1^2/6 + 1/2 = 3/2 + (3 − λ_1)/2 = 3 − λ_1/2 = (1/4)(13 − √13),

and so

⟨u_0,v_0⟩ / ⟨u_1,v_1⟩ = 48 / (13 − √13),

which is clearly not an integer. Hence the array ι is not feasible, so there can be no graph corresponding to it.

Virtually every source (e.g. [10]) mentions that there are very few redundant arrays. However, without using a computer to enumerate large numbers of possible examples, this is very difficult to verify. Nevertheless, there are some examples of feasible arrays (or matrices) that are not realisable. The following example was found by Smith [36].

Example 7.4.4 The array
\[ \iota = \begin{pmatrix} * & 1 & 2 & 2 & 4 \\ 0 & 0 & 0 & 0 & 0 \\ 4 & 3 & 2 & 2 & * \end{pmatrix} \]
is redundant: it is feasible, with eigenvalues $4, \sqrt{6}, 0, -\sqrt{6}, -4$, left eigenvectors
\[ \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ \frac{\sqrt{6}}{4} \\ \frac{1}{6} \\ -\frac{\sqrt{6}}{6} \\ -\frac{2}{3} \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ -\frac{1}{3} \\ 0 \\ \frac{1}{3} \end{pmatrix}, \begin{pmatrix} 1 \\ -\frac{\sqrt{6}}{4} \\ \frac{1}{6} \\ \frac{\sqrt{6}}{6} \\ -\frac{2}{3} \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \\ 1 \\ -1 \\ 1 \end{pmatrix}, \]
right eigenvectors
\[ \begin{pmatrix} 1 \\ 4 \\ 6 \\ 6 \\ 3 \end{pmatrix}, \begin{pmatrix} 1 \\ \sqrt{6} \\ 1 \\ -\sqrt{6} \\ -2 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ -2 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ -\sqrt{6} \\ 1 \\ \sqrt{6} \\ -2 \end{pmatrix}, \begin{pmatrix} 1 \\ -4 \\ 6 \\ -6 \\ 3 \end{pmatrix}, \]
and the quotients

\[ \frac{\langle u_0, v_0 \rangle}{\langle u_i, v_i \rangle} \]
are calculated to be 4, 10, 4 and 1.

Any graph $\Gamma$ corresponding to $\iota$ would have girth 4, so any two vertices $x, y$ of $\Gamma$ such that $d(x,y) = 2$ must lie on exactly one circuit of length 4. Fix $u \in V\Gamma$. Now, because $c_2 = 2$, each vertex of $\Gamma_2(u)$ is adjacent to two vertices of $\Gamma_1(u)$. Thus we have the following situation:

Figure 7.3: Schematic for example 7.4.4

Because $d(v_1, v_2) = 2$, $v_1$ and $v_2$ must lie on a circuit of length 4, so we join $v_2$ to $w_1$ to create this. Likewise, we join $v_3$ to $w_2$. Now, since $d(v_2, v_3) = 2$, $v_2$ and $v_3$ must also lie on a 4-circuit, so we join them both to $w_3$. We also have $d(v_1, v_4) = 2$, so these two vertices must also lie on a 4-circuit. However, we have $c_3 = 2$, so we cannot join $v_4$ to either $w_1$ or $w_2$, as both of these are already adjacent to two vertices of $\Gamma_2(u)$. Hence $v_1$ and $v_4$ cannot lie on a 4-circuit. So a graph corresponding to $\iota$ cannot exist.

Chapter 8

Primitive and Imprimitive Graphs

8.1 Introduction

In chapter 4, we gave the definitions of primitive and imprimitive group actions for permutation groups in general. We now relate this to the study of distance- transitive graphs. Our first definitions are very natural.

Definition 8.1.1 A block of a graph Γ is a block of the automorphism group Aut(Γ) acting on the vertex set VΓ.

As always, we have the trivial blocks $\emptyset$, $\{v\}$ and $V\Gamma$.

Definition 8.1.2 If a distance-transitive graph $\Gamma$ has only trivial blocks, we say $\Gamma$ is a primitive graph. Otherwise, $\Gamma$ has a non-trivial block system and is called an imprimitive graph.

We now give examples of each of these.

Example 8.1.3
The complete graphs $K_n$ are all primitive distance-transitive graphs. ($K_5$ is illustrated overleaf.) This is because $\operatorname{Aut}(K_n) \cong S_n$, so every possible permutation of the vertices is an automorphism. Thus there is no non-trivial subset of $VK_n$ that is fixed under the action of $S_n$. Hence $K_n$ has no non-trivial blocks, so is a primitive graph.

Some more sophisticated examples of primitive graphs will be shown later, in section 8.4.


Figure 8.1: K5, a primitive graph

Example 8.1.4
The complete bipartite graphs $K_{n,n}$ are imprimitive distance-transitive graphs. ($K_{3,3}$ is illustrated.)

Figure 8.2: K3,3, an imprimitive graph

The relation $\equiv$, where $u \equiv v$ if and only if $u, v \in V_1$ or $u, v \in V_2$, is clearly an equivalence relation on $VK_{n,n}$ that is invariant under the action of $\operatorname{Aut}(\Gamma)$, with the bipartition $V_1, V_2$ forming the block system. For example, as labelled in figure 8.2, we have $u \equiv v$ but $v \not\equiv w$. As the $K_{n,n}$ are clearly distance-transitive and have a non-trivial block system, they are imprimitive.

The complete characterisation of imprimitive graphs was one of the first major results obtained in the study of distance-transitive graphs. It is due to Smith [32] in 1971. Proving this result, and some applications of it, forms the rest of this chapter.

We now prove some basic lemmas which start to give us an idea of what a non-trivial block system can look like.

Lemma 8.1.5 (Smith 1971)
Suppose $B$ is a block of a distance-transitive graph $\Gamma$ and let $u \in B$. Then if $B$ contains a vertex of $\Gamma_j(u)$, we have $\Gamma_j(u) \subseteq B$.

Proof:
Let $v \in B \cap \Gamma_j(u)$, and choose some $w \in \Gamma_j(u)$ with $v \ne w$. By distance-transitivity, there exists some $g \in \operatorname{Aut}(\Gamma)$ such that $(u)g = u$ and $(v)g = w$. By the definition of a block, there is some equivalence relation $\equiv$ on $V\Gamma$ such that $u \equiv v$. But $\equiv$ is invariant under the action of $\operatorname{Aut}(\Gamma)$, so $(u)g \equiv (v)g$, i.e. $u \equiv w$, and thus $w \in B$. Hence $\Gamma_j(u) \subseteq B$. $\square$

An alternative way of stating the result above is to say that "a block $B$ of a distance-transitive graph $\Gamma$ is a disjoint union of cells of a distance partition of $\Gamma$". This is a very useful result, as it narrows down the possibilities for a non-trivial block system. It also gives us an alternative way of showing that $K_n$ is primitive.

Example 8.1.6
Choose a vertex $u$ of $\Gamma = K_n$. If $u$ is in some non-trivial block $B$, then there must be some other vertex $v \in B$. Since $\operatorname{diam}(\Gamma) = 1$, we have $v \in \Gamma_1(u)$, so by Lemma 8.1.5, $\Gamma_1(u) \subseteq B$. But $\{u\} \cup \Gamma_1(u) = V\Gamma$, so $B = V\Gamma$, which is also a trivial block. Hence $\Gamma = K_n$ has no non-trivial blocks, so is primitive.

Another useful result is the following.

Lemma 8.1.7 (Smith 1971)
Suppose $\Gamma$ is distance-transitive, $B$ a block of $\Gamma$, with $u, v \in B$ and $d(u,v) = 1$. Then $B = V\Gamma$.

Proof:
By assumption, $v \in B \cap \Gamma_1(u)$, so by Lemma 8.1.5, we have $\Gamma_1(u) \subseteq B$. Also, $u \in B \cap \Gamma_1(v)$, so $\Gamma_1(v) \subseteq B$. Now choose $w \in \Gamma_2(u) \cap \Gamma_1(v)$ (this exists by distance-transitivity). Since $w \in \Gamma_1(v)$, we have $w \in B$, so therefore $w \in \Gamma_2(u) \cap B$ and by the lemma, $\Gamma_2(u) \subseteq B$. Repeating this process, we see that $\Gamma_3(u) \subseteq B, \ldots, \Gamma_d(u) \subseteq B$. Hence every vertex of $\Gamma$ is contained in $B$, so therefore $B = V\Gamma$. $\square$

The next lemma is a similar result. First we let

\[ \eta = \begin{cases} d & \text{if } d \text{ is even} \\ d - 1 & \text{if } d \text{ is odd} \end{cases} \]
(in other words, $\eta$ is the longest possible even length of a geodesic path in $\Gamma$). Similarly, we let

\[ \theta = \begin{cases} d - 1 & \text{if } d \text{ is even} \\ d & \text{if } d \text{ is odd} \end{cases} \]

(the longest possible odd length of a geodesic path in Γ).

Lemma 8.1.8
Suppose $\Gamma$ is distance-transitive, $B$ a block of $\Gamma$, with $u, v \in B$ and $d(u,v) = 2$. Then $\Gamma_0(u) \cup \Gamma_2(u) \cup \cdots \cup \Gamma_\eta(u) \subseteq B$.

Proof:
By assumption, $v \in B \cap \Gamma_2(u)$, so by Lemma 8.1.5, we have $\Gamma_2(u) \subseteq B$, and likewise $\Gamma_2(v) \subseteq B$. Now choose some $w \in \Gamma_4(u) \cap \Gamma_2(v)$ (again, this exists by distance-transitivity). Because $w \in \Gamma_2(v)$, $w \in B$, so $w \in B \cap \Gamma_4(u)$ and by Lemma 8.1.5 we have $\Gamma_4(u) \subseteq B$. Repeating this process, we obtain the required result. $\square$

8.2 Antipodal Graphs

Antipodal graphs are a class of imprimitive distance-transitive graphs. They are defined as follows:

Definition 8.2.1
A distance-transitive graph $\Gamma$ with diameter $d$ is said to be antipodal if, for all distinct $v, w \in \Gamma_0(u) \cup \Gamma_d(u)$, we have $d(v,w) = d$.

We now clarify this definition with some examples.

Example 8.2.2

Figure 8.3: K3,3, an antipodal graph

$\Gamma = K_{3,3}$ is antipodal: for $v, w \in \Gamma_2(u)$, we have $d(v,w) = 2$. (Clearly, this extends to $K_{n,n}$.)

Example 8.2.3
The Heawood graph, $H$, is not antipodal: the diameter of $H$ is 3, but the vertices $v_{10}, v_{12} \in H_3(v_1)$ are at distance 2.

Definition 8.2.4
Notice that if $\Gamma_d(u)$ consists of a single vertex $v$, then the graph is automatically antipodal, because we have $\Gamma_0(u) \cup \Gamma_d(u) = \{u, v\}$, where $d(u,v) = d$. In this case, we call $\Gamma$ trivially antipodal.

Figure 8.4: The Heawood graph, a non-antipodal graph

Examples familiar to us include the $k$-cubes $Q_k$ (see section 5.4), the octahedron (figure 2.6) and our 'pretty picture', $J(6,3,2)$ (figure 3.2).

Our next result is another step in the characterisation of imprimitive graphs we are aiming for.

Lemma 8.2.5 (Smith 1971)
A distance-transitive graph $\Gamma$ is antipodal if and only if $B = \Gamma_0(u) \cup \Gamma_d(u)$ is a block of $\Gamma$.

Proof:
Suppose $B = \Gamma_0(u) \cup \Gamma_d(u)$ is a block of $\Gamma$, and choose vertices $v, w \in B$ with $d(v,w) = j$. By distance-transitivity, there exists $g \in \operatorname{Aut}(\Gamma)$ such that $(v)g = u$ and $(w)g = x$, where $x \in \Gamma_j(u)$. By the definition of a block, $Bg = \{(y)g \mid y \in B\} \subseteq B$, so $u, x \in B$ and consequently (by Lemma 8.1.5) $\Gamma_j(u) \subseteq B$. Therefore, because $u, x$ are distinct and $d(v,w) = d(u,x)$, the only possibility for this distance is $d$. Hence $\Gamma$ is antipodal.

Conversely, suppose $\Gamma$ is antipodal. Consequently, for any $x, y \in B = \Gamma_0(u) \cup \Gamma_d(u)$, $d(x,y) = 0$ or $d$, so $B = \Gamma_0(x) \cup \Gamma_d(x)$. Therefore, for any $g \in \operatorname{Aut}(\Gamma)$, if $(v)g \in B \cap Bg$, then $B = \Gamma_0((v)g) \cup \Gamma_d((v)g)$. But
\[ Bg = (\Gamma_0(v) \cup \Gamma_d(v))g = \Gamma_0((v)g) \cup \Gamma_d((v)g), \]
because $\Gamma$ is distance-transitive and $g$ preserves distances. Thus $B = Bg$, so $B$ is invariant under the action of $\operatorname{Aut}(\Gamma)$ and so is a block of $\Gamma$. $\square$

The second part of the above proof tells us that any antipodal graph is imprimitive. The converse is partially true, as we shall see later.

8.3 Bipartite Distance-Transitive Graphs

It is assumed that the reader is familiar with the concept of a bipartite graph (as defined in 1.1.1). So we just state the following result (Theorem 5.1 in Wilson [40]):

Proposition 8.3.1 A graph is bipartite if and only if it contains no odd circuits. 

A consequence of this is the following:

Proposition 8.3.2 Suppose Γ is distance-transitive, with intersection array

\[ \iota(\Gamma) = \begin{pmatrix} * & c_1 & \cdots & c_{d-1} & c_d \\ a_0 & a_1 & \cdots & a_{d-1} & a_d \\ b_0 & b_1 & \cdots & b_{d-1} & * \end{pmatrix}. \]
Then $\Gamma$ is bipartite if and only if $a_i = 0$ for $1 \le i \le d$.

Proof:
We prove the contrapositive in both directions. First, suppose $a_j \ne 0$ for some $j$, so there are adjacent vertices $v, w \in \Gamma_j(u)$. Also, there are paths $\pi, \rho$ of length $j$ from $u$ to $v$ and to $w$ respectively. Let $x$ be the last vertex where $\pi$ and $\rho$ meet and let $m = d(x,v) = d(x,w)$. Then we can form a circuit of length $2m + 1$, an odd number. So $\Gamma$ is not bipartite.

Conversely, suppose $\Gamma$ is not bipartite. Then $\Gamma$ contains some odd circuit $\sigma$, of length $2\delta + 1$ say. Choose some vertex $u$ in $\sigma$. Then there exist two adjacent vertices $v, w$ in $\sigma$ such that $d(u,v) = d(u,w) = \delta$, so $v, w \in \Gamma_\delta(u)$ as shown.

Figure 8.5: An odd circuit in a non-bipartite graph Γ

Therefore there is at least one $a_i$ that is non-zero. $\square$

The above proposition is important in the next result, which is another major step in our quest to classify all imprimitive distance-transitive graphs.

Lemma 8.3.3 (Smith 1971)
Suppose $\Gamma$ is distance-transitive with diameter $d > 2$ and valency $k > 2$. Then $\Gamma_0(u) \cup \Gamma_2(u) \cup \cdots \cup \Gamma_\eta(u)$ is a block of $\Gamma$ if and only if $\Gamma$ is bipartite.

Proof:
Suppose $\Gamma$ is bipartite, and choose $v, w \in \Gamma_0(u) \cup \Gamma_2(u) \cup \cdots \cup \Gamma_\eta(u)$. Because $\Gamma$ is bipartite, it follows that $d(u,v)$ must be even: let $\pi$ be a geodesic path from $u$ to $v$. By Proposition 8.3.2, $a_j = 0$ for all $j$, so there are no edges within $\Gamma_j(u)$. Therefore $\pi$ is composed of pairs of edges as follows:

Figure 8.6: Three types of path

(because $v, w \in \Gamma_0(u) \cup \Gamma_2(u) \cup \cdots \cup \Gamma_\eta(u)$). Thus $\pi$ contains an even number of edges and so $d(u,v)$ is even.

Now let $g \in \operatorname{Aut}(\Gamma)$, and suppose towards a contradiction that $(u)g \in \Gamma_0(u) \cup \Gamma_2(u) \cup \cdots \cup \Gamma_\eta(u)$ and that $(v)g \in \Gamma_1(u) \cup \Gamma_3(u) \cup \cdots \cup \Gamma_\theta(u)$. Let $\rho$ be a geodesic path from $(u)g$ to $(v)g$. By Proposition 8.3.2, $a_j = 0$ for all $j$, so since $(v)g \in \Gamma_1(u) \cup \Gamma_3(u) \cup \cdots \cup \Gamma_\theta(u)$, the penultimate vertex $x$ of $\rho$ must lie in $\Gamma_0(u) \cup \Gamma_2(u) \cup \cdots \cup \Gamma_\eta(u)$. By the above arguments, $d((u)g, x)$ is even, so $d((u)g, (v)g)$ must be odd. However, by distance-transitivity, $d((u)g, (v)g) = d(u,v)$, which is even. A number cannot be both odd and even at the same time, so we have a contradiction. Thus $\Gamma_0(u) \cup \Gamma_2(u) \cup \cdots \cup \Gamma_\eta(u)$ is a subset of $V\Gamma$ that is invariant under the action of $\operatorname{Aut}(\Gamma)$, so it forms a block of $\Gamma$.

Conversely, suppose that $\Gamma_0(u) \cup \Gamma_2(u) \cup \cdots \cup \Gamma_\eta(u)$ is a block of $\Gamma$, say $B$, and that $\Gamma$ is not bipartite. Then by 8.3.1, $\Gamma$ contains an odd circuit and by 8.3.2, there exists some $j$ such that $a_j \ne 0$. However, this $j$ must be odd, as if $j$ were even there would be two adjacent vertices in $\Gamma_j(u) \subseteq B$, implying (by 8.1.7) that $B = V\Gamma$, contradicting our assumption that $B$ is non-trivial.

Let $j = 2m + 1$ be the smallest integer such that $a_{2m+1} \ne 0$. Then there exist $v, v' \in \Gamma_{2m+1}(u)$ such that $d(v, v') = 1$. This gives us two cases to consider:

1. The case where $2m + 1 \ge 3$. Choose $w, w' \in \Gamma_{2m}(u)$ satisfying $d(v,w) =$

$d(v', w') = 1$, and also choose $x, x' \in \Gamma_1(u)$ satisfying $d(w,x) = d(w',x') = 2m - 1$. So we have the following situation:

Figure 8.7: Schematic for case 1

Since $d(u,w) = d(x,v) = 2m$, by distance-transitivity there exists $g \in \operatorname{Aut}(\Gamma)$ such that $(u)g = x$ and $(w)g = v$. Similarly, since $d(u,w') = d(x',v') = 2m$, there exists $h \in \operatorname{Aut}(\Gamma)$ such that $(u)h = x'$ and $(w)h = v'$. By the induced action of $g$ and $h$ on $B$, we have that $x, v \in Bg$ and $x', v' \in Bh$. $Bg$ is the image of $B$ under some automorphism, so for any vertex in $Bg$ (for example, $x$), it contains all vertices of $\Gamma$ at distance 2 from it. But because $a_1 = 0$, we have $d(x, x') = 2$, giving $x' \in Bg$ and consequently $Bh \subseteq Bg$. Similarly, $x \in Bh$, so $Bg \subseteq Bh$. Therefore we have $Bg = Bh = B_0$, say. However, $v, v' \in B_0$ and $d(v,v') = 1$, so by Lemma 8.1.7, $B_0 = V\Gamma$. But this is absurd, as it contradicts $B_0$ being the image of the non-trivial block $B$ under the automorphisms $g$ and $h$. Hence our assumption that $\Gamma$ is not bipartite must be false.

2. The case where $2m + 1 = 1$. That is, $a_1 \ne 0$, so any vertex $y \in \Gamma_1(u)$ is adjacent to some other vertex $z \in \Gamma_1(u)$, with the edges $uy, yz, zu$ forming a triangle. By the distance-transitivity of $\Gamma$, any edge of $\Gamma$ must lie in a triangle.

Fix $v \in \Gamma_1(u)$ and choose $w \in \Gamma_2(u)$ such that $d(v,w) = 1$. Thus the edge $vw$ must lie in a triangle. Since $a_2 = 0$, there exists $v' \in \Gamma_1(u)$ with $d(v,v') = d(v',w) = 1$, so that the edges $vw, wv', v'v$ form the required triangle. Finally, choose $x \in \Gamma_3(u)$ (this is where we use the assumption that $d > 2$) with $d(w,x) = 1$. The following diagram should clarify the above description.

Figure 8.8: Schematic for case 2

Since $d(u,w) = d(x,v) = 2$, by distance-transitivity there exists $g \in \operatorname{Aut}(\Gamma)$ satisfying $(u)g = x$ and $(w)g = v$. Because $v \in \Gamma_2(x) \cap Bg$, by Lemma 8.1.5 we have $\Gamma_2(x) \subseteq Bg$. However, this implies that the adjacent vertices $v, v'$ are also in $Bg$, so by Lemma 8.1.7, $Bg = V\Gamma$, which contradicts the non-triviality of $Bg$.

In both cases we obtain a contradiction, so it must be the case that $\Gamma$ is bipartite. $\square$

8.4 Smith’s Theorem

We now arrive at the primary goal of this chapter: the complete characterisation of imprimitive distance-transitive graphs, which is known as Smith’s Theorem.

Theorem 8.4.1 (Smith 1971) Let Γ be a distance-transitive graph of valency k > 2 and diameter d > 2, with intersection array

\[ \iota(\Gamma) = \begin{pmatrix} * & c_1 & \cdots & c_{d-1} & c_d \\ a_0 & a_1 & \cdots & a_{d-1} & a_d \\ b_0 & b_1 & \cdots & b_{d-1} & * \end{pmatrix}. \]
Then $\Gamma$ is imprimitive if and only if it is bipartite or antipodal.

Proof:
We'll show that the only possible non-trivial blocks of $\Gamma$ are $\Gamma_0(u) \cup \Gamma_d(u)$ or $\Gamma_0(u) \cup \Gamma_2(u) \cup \cdots \cup \Gamma_\eta(u)$, for some $u \in V\Gamma$.

Let $B$ be a non-trivial block of $\Gamma$, with $u \in B$. By Lemma 8.1.5, $B$ is the disjoint union of some of the $\Gamma_j(u)$. Obviously, $\Gamma_0(u) \subseteq B$. Then either

(i) $B = \Gamma_0(u) \cup \Gamma_d(u)$, or

(ii) $\Gamma_j(u) \subseteq B$ for some $j$ where $0 < j < d$.

Now consider the intersection array $\iota(\Gamma)$. If $a_j \ne 0$, then there exist $v, w \in B$ with $d(v,w) = 1$, so by Lemma 8.1.7 $B = V\Gamma$, contradicting $B$ being non-trivial. Therefore $a_j = 0$ if $\Gamma_j(u) \subseteq B$.

If $b_{j-1} \ge 2$ or $c_{j+1} \ge 2$, then there exist $x, y \in \Gamma_j(u) \subseteq B$ with $d(x,y) = 2$, as shown:

If $b_{j-1} \ge 2$:  If $c_{j+1} \ge 2$:

Therefore, by Lemma 8.1.8, we have $\Gamma_0(u) \cup \Gamma_2(u) \cup \cdots \cup \Gamma_\eta(u) \subseteq B$. But no vertex in $\Gamma_1(u) \cup \Gamma_3(u) \cup \cdots \cup \Gamma_\theta(u)$ is in $B$, as it would be adjacent to a vertex in $\Gamma_0(u) \cup \Gamma_2(u) \cup \cdots \cup \Gamma_\eta(u) \subseteq B$, which would imply $B = V\Gamma$, again contradicting the non-triviality of $B$. Hence $B = \Gamma_0(u) \cup \Gamma_2(u) \cup \cdots \cup \Gamma_\eta(u)$.

The remaining case to check is when $b_{j-1} = 1$ and $c_{j+1} = 1$. However, by Theorem 2.3.6, $c_j \le c_{j+1}$ and $b_{j-1} \ge b_j$, so therefore $c_j = b_j = 1$. From above, $a_j = 0$, so this gives $k = c_j + a_j + b_j = 2$, contradicting our initial hypothesis that $k > 2$.

Now, using Lemmas 8.2.5 and 8.3.3, we know that $\Gamma_0(u) \cup \Gamma_d(u)$ is a block if and only if $\Gamma$ is antipodal, and that $\Gamma_0(u) \cup \Gamma_2(u) \cup \cdots \cup \Gamma_\eta(u)$ is a block if and only if $\Gamma$ is bipartite (for $d > 2$ and $k > 2$). By the argument above, these are the only possible non-trivial block systems that can occur in a distance-transitive graph. Hence a distance-transitive graph (of diameter $> 2$, valency $> 2$) is imprimitive if and only if it is bipartite or antipodal. $\square$

The most useful property of Smith's Theorem is that it tells us a group-theoretic property of $\Gamma$ without making any reference to the automorphism group of $\Gamma$ whatsoever. We finish this chapter by using it to ascertain whether certain graphs are primitive or imprimitive.

Proposition 8.4.2 The Hamming graphs H(d,n) are primitive for n > 2.

Proof: We check under what circumstances H(d,n) is bipartite or antipodal. Recall from section 5.3 that the intersection array of H(d,n) is

\[ \begin{pmatrix} * & 1 & \cdots & j & \cdots & d-1 & d \\ 0 & n-2 & \cdots & j(n-2) & \cdots & (d-1)(n-2) & d(n-2) \\ d(n-1) & (d-1)(n-1) & \cdots & (d-j)(n-1) & \cdots & n-1 & * \end{pmatrix}. \]
By 8.3.2, $H(d,n)$ is bipartite if and only if the middle row of $\iota(H(d,n))$ consists only of zeroes. That is, for all $j \in \{0, \ldots, d\}$, we have
\begin{align*} d(n-1) - j - (d-j)(n-1) = 0 &\iff nd - d - j - nd + d + nj - j = 0 \\ &\iff nj - 2j = 0 \\ &\iff n = 2. \end{align*}

Hence the only bipartite Hamming graphs are $H(d,2)$, the $d$-dimensional cubes from section 5.4.

Checking antipodality is slightly more tricky. First, we notice that $H(d,2)$ is trivially antipodal (in the sense of 8.2.4) for every $d$. By 2.3.8, we have

\begin{align*} k_d &= \frac{b_0 b_1 \cdots b_{d-1}}{c_1 c_2 \cdots c_d} \\ &= \frac{d(n-1) \cdot (d-1)(n-1) \cdots 1(n-1)}{1 \times 2 \times \cdots \times d} \\ &= \frac{d!\,(n-1)^d}{d!} = (n-1)^d. \end{align*}
So $k_d = 1$ when $(n-1)^d = 1$, that is, when $n - 1 = 1$ and thus $n = 2$. Therefore the $d$-dimensional cubes are all trivially antipodal. However, no other Hamming graphs are antipodal: for $n \ge 3$, we have vertices $u, v, w$ with $u = 00\cdots0$, $v = 11\cdots1$ and $w = 21\cdots1$. Clearly $d(u,v) = d(u,w) = d$, but $d(v,w) = 1$. Thus these graphs are not antipodal.

Consequently, by Smith's Theorem the only imprimitive Hamming graphs are the $H(d,2)$ (they are both bipartite and antipodal). Therefore all other Hamming graphs, $H(d,n)$ where $n > 2$, must be primitive. $\square$
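For small cases, Smith's two criteria can also be verified by brute force. The Python sketch below (function names and conventions are my own, not from the text) builds $H(d,n)$ directly, tests bipartiteness by looking for an edge inside a BFS level (equivalently, some $a_i \ne 0$), and tests antipodality from a single base vertex, which suffices here because the Hamming graphs are vertex-transitive.

```python
from itertools import product
from collections import deque

def hamming_graph(d, n):
    """Vertices: d-tuples over {0,...,n-1}; edges join tuples that
    differ in exactly one coordinate."""
    verts = list(product(range(n), repeat=d))
    adj = {v: [w for w in verts if sum(a != b for a, b in zip(v, w)) == 1]
           for v in verts}
    return verts, adj

def distances_from(u, adj):
    """Breadth-first search distances from u."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def is_bipartite_and_antipodal(verts, adj):
    u = verts[0]
    dist = distances_from(u, adj)
    diam = max(dist.values())
    # bipartite <=> no edge joins two vertices equidistant from u
    # (i.e. a_i = 0 for all i, as in Proposition 8.3.2)
    bipartite = all(dist[v] != dist[w] for v in verts for w in adj[v])
    # antipodal <=> the vertices of {u} ∪ Γ_d(u) are pairwise at distance d
    # (checked from one base vertex only: enough for vertex-transitive graphs)
    far = [v for v in verts if dist[v] in (0, diam)]
    antipodal = all(v == w or distances_from(v, adj)[w] == diam
                    for v in far for w in far)
    return bipartite, antipodal
```

For example, `is_bipartite_and_antipodal(*hamming_graph(3, 2))` reports the 3-cube as both bipartite and antipodal, while $H(2,3)$ is found to be neither, in agreement with the proposition above.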

Proposition 8.4.3
The Johnson graphs $J(n,k,k-1)$ are primitive for $n \ne 2k$.

Proof:
As with the Hamming graphs, we check if $J(n,k,k-1)$ is bipartite and/or antipodal. Recall that $\iota(J(n,k,k-1))$ is given by
\[ \begin{pmatrix} * & 1^2 & \cdots & i^2 & \cdots & (k-1)^2 & k^2 \\ 0 & \cdots & \cdots & \cdots & \cdots & \cdots & nk - 2k^2 \\ k(n-k) & (k-1)(n-k-1) & \cdots & (k-i)(n-k-i) & \cdots & n-2k+1 & * \end{pmatrix}. \]
For $J(n,k,k-1)$ to be bipartite, we need the middle row of the intersection array to be all zeroes. That is, for each $i \in \{1, \ldots, k-1\}$ we need
\[ k(n-k) - i^2 - (k-i)(n-k-i) = 0 \]
and (from the last column) $nk - 2k^2 = 0$. This last condition implies $n = 2k$, so substituting this gives us, for all $i$,

\begin{align*} k(2k-k) - i^2 - (k-i)(2k-k-i) = 0 &\iff k^2 - i^2 - (k-i)^2 = 0 \\ &\iff 2ki - 2i^2 = 0 \\ &\iff i = k, \end{align*}
which is absurd. Therefore $J(n,k,k-1)$ is never bipartite.

In figure 3.2, we see that $J(6,3,2)$ is trivially antipodal. The obvious question is: does this apply to all Johnson graphs? The answer, in fact, is no. Consider

two vertices $u = \{u_1, \ldots, u_k\}$ and $v = \{v_1, \ldots, v_k\}$ of $J(n,k,k-1)$ and suppose that they are at the maximum possible distance, which is $k$. This occurs when the two sets are disjoint (so $u_i \ne v_j$ for all $i, j$). If $n = 2k$, then $u$ and $v$ form a partition of the set $\Omega$, so $v$ is the unique vertex at distance $k$ from $u$ and consequently the graph is trivially antipodal. However, if $n > 2k$ we have the vertex $w = \{v_1, \ldots, v_{k-1}, x\}$ (where $x \ne u_i$ and $x \ne v_j$ for all $i, j$). Clearly, $d(u,v) = k$ and $d(u,w) = k$, but $d(v,w) = 1$, so the graph is not antipodal.

Using Smith's Theorem, we can see that the only imprimitive Johnson graphs are $J(2k,k,k-1)$ (as they are antipodal), so all others must be primitive. $\square$

Chapter 9

New Graphs from Old

In this chapter, we shall investigate some methods for constructing a new graph from a given one, and consider whether these new graphs are distance-transitive.

9.1 Line Graphs

The basic construction is simple enough:

Definition 9.1.1
The line graph, $L(\Gamma)$, of a connected graph $\Gamma$ is the graph whose vertices correspond to the edges of $\Gamma$, with two vertices of $L(\Gamma)$ being adjacent if the corresponding edges of $\Gamma$ are adjacent, i.e. if they are both incident with some vertex of $\Gamma$. A graph $\Gamma$ is a line graph if it is the line graph of some other graph.

This is best illustrated with an example.

Example 9.1.2

Figure 9.1: Constructing the line graph of L6


We start by deducing some basic properties of line graphs.

Proposition 9.1.3
Let $\Gamma$ be a graph and $L(\Gamma)$ its line graph. Then:

1. $|VL(\Gamma)| = |E\Gamma|$;

2. $|EL(\Gamma)| = \sum_{v \in V\Gamma} \binom{d(v)}{2}$;

3. If $\Gamma$ is $k$-regular, then $L(\Gamma)$ is $(2k-2)$-regular.

Proof:
1. This is immediate from the definition of $L(\Gamma)$.

2. The number of edges of $L(\Gamma)$ is the number of pairs of adjacent vertices of $L(\Gamma)$, which is the number of pairs of adjacent edges of $\Gamma$. Each vertex $v \in V\Gamma$ has $d(v)$ edges incident with it, giving $\binom{d(v)}{2}$ pairs of mutually adjacent edges passing through $v$. Hence the result follows.

3. Suppose $\Gamma$ is $k$-regular, and let $uv$ be an edge of $\Gamma$. Then there are $k-1$ other edges incident with $u$ and also with $v$. Thus there are $2(k-1)$ edges of $\Gamma$ adjacent to $uv$, so the degree of $uv$ as a vertex of $L(\Gamma)$ is $2(k-1) = 2k-2$. Hence $L(\Gamma)$ is $(2k-2)$-regular. $\square$

It should be noted that $\Gamma$ cannot be uniquely determined by $L(\Gamma)$. That is, there exist non-isomorphic graphs that have isomorphic line graphs. Consequently, the line graph is a "one-way" construction. An example of this is as follows:

Example 9.1.4
The graphs $K_3$ and $K_{1,3}$ are clearly not isomorphic. However, we have $L(K_3) \cong L(K_{1,3}) \cong K_3$.

Figure 9.2: Two non-isomorphic graphs with the same line graph
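The construction of Definition 9.1.1, and the coincidence $L(K_3) \cong L(K_{1,3})$, are easy to verify computationally. In the Python sketch below (the function name is mine, not from the text), a graph is represented simply by its edge list, and each edge becomes a vertex of the line graph.

```python
from itertools import combinations

def line_graph(edges):
    """Vertices of L(G) are the edges of G (as frozensets); two are
    adjacent exactly when they share an endpoint."""
    verts = [frozenset(e) for e in edges]
    adj = {(e, f) for e, f in combinations(verts, 2) if e & f}
    return verts, adj

# K3 and K1,3 have isomorphic line graphs (both are triangles):
k3 = [(0, 1), (1, 2), (0, 2)]
k13 = [(0, 1), (0, 2), (0, 3)]
v1, a1 = line_graph(k3)
v2, a2 = line_graph(k13)
assert len(v1) == len(v2) == 3 and len(a1) == len(a2) == 3
```

The counts of Proposition 9.1.3 can be checked the same way: for $K_4$ (which is 3-regular), the line graph has $6$ vertices, $\sum_v \binom{3}{2} = 12$ edges, and every vertex of $L(K_4)$ has degree $2 \cdot 3 - 2 = 4$.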

However, all is not lost: Whitney showed that this is the only such pair. We state this formally below:

Theorem 9.1.5 (Whitney 1932)
Suppose we have non-isomorphic graphs $\Gamma$ and $\Delta$ such that $L(\Gamma) \cong L(\Delta)$. Then $\Gamma \cong K_3$ and $\Delta \cong K_{1,3}$ (or vice-versa).

Proof: See Harary [24], Theorem 8.3. 

9.2 Automorphisms of Line Graphs

In this project, an underlying theme has been investigating the automorphism groups of graphs, so it is inevitable that we now consider the automorphism groups of line graphs. It seems obvious that $\operatorname{Aut}(L(\Gamma))$ should be related to $\operatorname{Aut}(\Gamma)$ in some way: as $\operatorname{Aut}(\Gamma)$ acts on the edges of $\Gamma$, it has an induced action on the vertices of $L(\Gamma)$. In fact it turns out that, with a few small exceptional cases, the two groups $\operatorname{Aut}(\Gamma)$ and $\operatorname{Aut}(L(\Gamma))$ are isomorphic. To prove this, we first need a definition and a lemma.

Definition 9.2.1
A star in $\Gamma$ is a subset $S \subseteq E\Gamma$ such that each edge in $S$ is incident with a common vertex.

Lemma 9.2.2 (Hemminger 1972)
Suppose $\sigma$ is a permutation of the edges of $\Gamma$ (i.e. of the vertices of $L(\Gamma)$). Then $\sigma$ is induced by an automorphism $\sigma^*$ of $\Gamma$ if and only if $\sigma$ and $\sigma^{-1}$ preserve stars.

Proof: See Hemminger [25] for the details. 

We can now prove the desired result:

Theorem 9.2.3
Let $\Gamma$ be a connected, simple graph with automorphism group $\operatorname{Aut}(\Gamma)$, $|V\Gamma| \ge 3$ and with line graph $L(\Gamma)$. Then

\[ \operatorname{Aut}(\Gamma) \le \operatorname{Aut}(L(\Gamma)). \]

Furthermore, if $\Gamma$ is not $K_4$, $K_4$ with an edge deleted, or $K_4$ with two adjacent edges deleted, then we have

\[ \operatorname{Aut}(\Gamma) \cong \operatorname{Aut}(L(\Gamma)). \]

Proof:
By construction, the vertices of $L(\Gamma)$ correspond to edges of $\Gamma$, so can be labelled as unordered pairs of vertices of $\Gamma$. For clarity, let $G = \operatorname{Aut}(\Gamma)$ and $H = \operatorname{Aut}(L(\Gamma))$, and define a function
\[ \theta : G \to H, \quad g \mapsto \theta(g), \]
where $\theta(g)$ has the following action on pairs $\{u,v\}$:
\[ \theta(g) : \{u,v\} \mapsto \{(u)g, (v)g\}. \]
First, we will show that this is an injective homomorphism from $G$ to $H$.

• $\theta$ is well-defined: the pair $\{(u)g, (v)g\}$ exists as an edge of $\Gamma$ (and thus as a vertex of $L(\Gamma)$) since $g$ is an automorphism of $\Gamma$ and thus preserves adjacency.

• $\theta$ preserves the group operation: choose $g, h \in G$. Then
\begin{align*} (\{u,v\})\theta(gh) &= \{(u)gh, (v)gh\} \\ &= \{((u)g)h, ((v)g)h\} \\ &= (\{(u)g, (v)g\})\theta(h) \\ &= (\{u,v\})\theta(g)\theta(h). \end{align*}
Hence $\theta(gh) = \theta(g)\theta(h)$.

• To show $\theta$ is injective is where we need $|V\Gamma| \ge 3$ (so that $|VL(\Gamma)| = |E\Gamma| \ge 2$). We'll show that $\operatorname{Ker}(\theta) = \{e_G\}$. Suppose $h \in \operatorname{Ker}(\theta)$. Then for all $u, v \in V\Gamma$, $(\{u,v\})\theta(h) = (\{u,v\})e_H = \{u,v\}$. (Note that if we had only two vertices, then a non-identity element of $G$ would fix the only possible pair: this is why we require $|V\Gamma| \ge 3$.) Hence, by the faithfulness of $H$, we have that $h = e_G$, so $\operatorname{Ker}(\theta) = \{e_G\}$. By the First Isomorphism Theorem (Gallian [18] p.199), $G \cong \operatorname{Im}(\theta) \le H$.

We now want to determine when $\theta$ is also surjective, thus becoming an isomorphism. Suppose that $\operatorname{Im}(\theta) = \{\theta(g) \mid g \in G\} \ne H$. That is, there exists some $\sigma \in H$ that is not induced by an element of $G$. By Lemma 9.2.2 above, $\sigma$ must not preserve stars in $\Gamma$. But because $\sigma$ is an automorphism of $L(\Gamma)$, it preserves adjacency of vertices in $L(\Gamma)$ (i.e. of edges of $\Gamma$), so this can only happen if $\sigma$ maps the edges of a 3-star $K_{1,3}$ to those of a triangle $K_3$ (as subgraphs of $\Gamma$). Now, any edge $uv$ of $\Gamma$ incident with the subgraph $K_3$ must be adjacent to exactly two edges of $K_3$. So because $\sigma$ is an automorphism, $\sigma^{-1}(uv)$ must be incident with exactly two edges of $K_{1,3}$, as shown below:

This gives the first of our three exceptional graphs. Furthermore, no edge can be incident with the other end of $uv$ if it is not incident with $K_3$, as under the automorphism $\sigma^{-1}$ it would have nowhere to go:

Hence we can only add two more edges; adding one and then the other gives the second and third of the exceptional graphs.

Figure 9.3: The three exceptional graphs

So therefore, if $\operatorname{Im}(\theta) \ne \operatorname{Aut}(L(\Gamma))$, it must be that (given $|V\Gamma| \ge 3$) $\Gamma$ is one of the three graphs: $K_4$, $K_4$ with an edge deleted, or $K_4$ with two adjacent edges deleted. So in all other cases, $\operatorname{Aut}(\Gamma) \cong \operatorname{Im}(\theta) = \operatorname{Aut}(L(\Gamma))$. $\square$

Theorem 9.2.3 was proved by G. Sabidussi in 1961; our proof follows that given by Hemminger [26] in 1975.

9.3 Eigenvalues of Line Graphs

Line graphs also have interesting spectral properties. Recall (from 7.1.1) the definition of the adjacency matrix of a graph. A related definition is this one.

Definition 9.3.1
Suppose $\Gamma$ is a graph with vertex set $V\Gamma = \{v_1, \ldots, v_n\}$ and edge set $E\Gamma = \{e_1, \ldots, e_m\}$. Then the incidence matrix $M(\Gamma)$ is the $n \times m$ matrix with entries
\[ M_{ij} = \begin{cases} 1 & \text{if the edge } e_j \text{ is incident with the vertex } v_i \\ 0 & \text{otherwise.} \end{cases} \]

This is useful when working with the line graph of Γ, as it involves edges as well as vertices. In fact, we have the following result.

Lemma 9.3.2
Let $\Gamma$ be a simple graph with $n$ vertices, $m$ edges and incidence matrix $M(\Gamma) = M$. Then the adjacency matrix of $L(\Gamma)$ is $A(L(\Gamma)) = M^T M - 2I$.

Proof:
We have
\[ (M^T M)_{ij} = \sum_{a=1}^{n} (M^T)_{ia} (M)_{aj} = \sum_{a=1}^{n} (M)_{ai} (M)_{aj}. \]
Now, for the summand we have

\[ (M)_{ai}(M)_{aj} = \begin{cases} 1 & \text{if the edges } e_i, e_j \text{ are both incident with the vertex } v_a \\ 0 & \text{otherwise.} \end{cases} \]

If a given pair of edges of $\Gamma$ are adjacent, they can meet at exactly one vertex; therefore if $i \ne j$ we have
\[ (M^T M)_{ij} = \begin{cases} 1 & \text{if } e_i \text{ is adjacent to } e_j \\ 0 & \text{otherwise.} \end{cases} \]

However, if $i = j$, we are only dealing with a single edge, which is incident with two vertices. Hence $(M^T M)_{ii} = 2$.

Now let $A = A(L(\Gamma))$ be the adjacency matrix of $L(\Gamma)$. By definition, we have $A_{ij} = 1$ if the vertices $w_i, w_j$ of $L(\Gamma)$ are adjacent, that is, if the corresponding edges $e_i, e_j$ of $\Gamma$ are adjacent, i.e. if $(M^T M)_{ij} = 1$. Thus for $i \ne j$, we have $A_{ij} = (M^T M)_{ij}$. Also, $A$ must have zeroes in all the diagonal entries, whereas $M^T M$ has all diagonal entries equal to 2. So to allow for this, we just have to subtract $2I$ from $M^T M$ to obtain $A$. Hence $A(L(\Gamma)) = M^T M - 2I$. $\square$

A consequence of this is the following theorem about the spectrum of a line graph.
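Lemma 9.3.2 is easy to confirm numerically on a small example. The sketch below (helper names mine; $K_4$ chosen as the test graph, an assumption not made in the text) builds the incidence matrix $M$ of $K_4$, forms $M^T M - 2I$, and compares it with the adjacency matrix of $L(K_4)$ computed directly from the definition; the smallest eigenvalue found is exactly $-2$, consistent with the bound proved next.

```python
import numpy as np
from itertools import combinations

def incidence_matrix(n, edges):
    """n x m matrix: M[i, j] = 1 iff vertex i lies on edge j."""
    M = np.zeros((n, len(edges)))
    for j, (u, v) in enumerate(edges):
        M[u, j] = M[v, j] = 1
    return M

def line_graph_adjacency(edges):
    """Adjacency matrix of L(G) straight from Definition 9.1.1."""
    m = len(edges)
    A = np.zeros((m, m))
    for i, j in combinations(range(m), 2):
        if set(edges[i]) & set(edges[j]):
            A[i, j] = A[j, i] = 1
    return A

edges = list(combinations(range(4), 2))   # the six edges of K4
M = incidence_matrix(4, edges)
A = line_graph_adjacency(edges)
assert np.allclose(M.T @ M - 2 * np.eye(len(edges)), A)
```

Here $L(K_4)$ is the octahedron, whose spectrum is $\{4, 0, 0, 0, -2, -2\}$, so the least eigenvalue meets the bound $\lambda \ge -2$ with equality.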

Theorem 9.3.3
If $\lambda$ is an eigenvalue of $L(\Gamma)$, then $\lambda \ge -2$.

Proof:
Let $\langle \cdot, \cdot \rangle$ denote the usual inner product in $\mathbb{R}^m$ and let $x$ be an eigenvector of $M^T M$ with corresponding eigenvalue $\mu$. That is,

\[ (M^T M)x = \mu x. \]

So we have
\begin{align*} x^T (M^T M) x = x^T (\mu x) &\iff (Mx)^T Mx = \mu (x^T x) \\ &\iff \langle Mx, Mx \rangle = \mu \langle x, x \rangle. \end{align*}
Since the inner product of a vector with itself is always non-negative, it follows that $\mu \ge 0$. So we know that the eigenvalues of $M^T M$, i.e. the solutions of $\det(M^T M - \mu I) = 0$, are all $\ge 0$. $(*)$

Now, if $\lambda$ is an eigenvalue of $L(\Gamma)$, i.e. an eigenvalue of $M^T M - 2I$, it is a solution of $\det(M^T M - 2I - \lambda I) = \det(M^T M - (\lambda + 2)I) = 0$. So if we let $\mu = \lambda + 2$, then from $(*)$ we have $\mu \ge 0$; therefore $\lambda + 2 \ge 0$ and hence $\lambda \ge -2$. $\square$

The study of which graphs of least eigenvalue $-2$ are the line graphs of other graphs is quite extensive: for example, Godsil & Royle [22] devote an entire chapter to the subject and its generalisations.

9.4 Distance-Transitive Line Graphs

The problem of “when is a line graph distance-transitive?” was addressed by Biggs [7] and is summarised by Holton & Sheehan [29]. Distance-transitive line graphs are very rare, as we shall see. First, we need some definitions.

Definition 9.4.1
A $(k,g)$-cage is a simple graph of valency $k \ge 3$ and girth $g \ge 3$ with the minimum possible number of vertices.

We require $g \ge 3$ because we want the graph to be simple. Also, we require $k \ge 3$ because the case $k = 2$ is not very interesting and is usually ignored: we can always construct a graph of degree 2 and any girth $g \ge 3$ with the minimal number of vertices, namely the $g$-circuit $C_g$. The notion of a cage was introduced by Tutte [37], when he investigated cubic cages. The table below gives a list of $(3,g)$-cages known to be unique. Many of these are graphs that are familiar to us already.

Graph         Common name/symbol      |VΓ|    Reference
(3,3)-cage    $K_4$                     4
(3,4)-cage    $K_{3,3}$                 6
(3,5)-cage    Petersen graph $O_3$     10     3.5.1
(3,6)-cage    Heawood graph            14     11.2.1
(3,7)-cage    McGee graph              24     [38]
(3,8)-cage    Tutte's 8-cage           30     11.2.1
(3,12)-cage   Tutte's 12-cage         126     [37]

Figure 9.4: Unique (3,g)-cages

The next result gives an idea of the size of a particular (k,g)-cage.

Proposition 9.4.2
For any $k$-regular graph $\Gamma$ of girth $g$ ($k, g \ge 3$), define $n_0(k,g)$ to be

\[ n_0(k,g) = \begin{cases} 1 + \dfrac{k\left((k-1)^{(g-1)/2} - 1\right)}{k-2} & \text{when } g \text{ is odd} \\[2ex] \dfrac{2\left((k-1)^{g/2} - 1\right)}{k-2} & \text{when } g \text{ is even.} \end{cases} \]

Then $n_0(k,g)$ is a lower bound for $|V\Gamma|$.

Proof:
First, suppose $g$ is odd, so that $g = 2d+1$ for some $d \ge 1$. Now, for some $v \in V\Gamma$, we can start to form a distance partition of $\Gamma$. Since the girth of $\Gamma$ is $2d+1$ and the valency of $\Gamma$ is $k$, we have $|\Gamma_0(v)| = 1$, $|\Gamma_1(v)| = k$, $|\Gamma_2(v)| = k(k-1)$ and so on, until $|\Gamma_d(v)| = k(k-1)^{d-1}$. (See the diagram, which illustrates the case $k = 3$, $g = 7$.)

Figure 9.5: Part of a (3,7)-cage

Hence we have $|V\Gamma| \ge 1 + k + k(k-1) + \cdots + k(k-1)^{d-1}$. On the right-hand side

is a finite geometric series, so we have
\begin{align*} |V\Gamma| &\ge 1 + k + k(k-1) + \cdots + k(k-1)^{d-1} \\ &= 1 + \sum_{i=0}^{d-1} k(k-1)^i \\ &= 1 + \frac{k\left((k-1)^d - 1\right)}{k-2}. \end{align*}
Substituting $d = \frac{1}{2}(g-1)$, we obtain $|V\Gamma| \ge n_0(k,g)$.

Secondly, suppose $g$ is even, so $g = 2d$ for some $d \ge 2$. Choose some edge $uv \in E\Gamma$. Then at the vertices $u$ and $v$, we can construct a tree in a similar fashion as above ($k = 3$, $g = 8$ is shown below), as far as the vertices at distance $d-1$ from $u$ and $v$.

Figure 9.6: Part of a (3,8)-cage

So we have $|V\Gamma| \ge 2 + 2(k-1) + 2(k-1)^2 + \cdots + 2(k-1)^{d-1}$, another finite geometric series. Thus

\[ |V\Gamma| \ge 2 \sum_{i=0}^{d-1} (k-1)^i = \frac{2\left((k-1)^d - 1\right)}{k-2}, \]
so substituting $d = \frac{1}{2}g$, we again obtain $|V\Gamma| \ge n_0(k,g)$, as required. $\square$
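The bound $n_0(k,g)$ is easy to tabulate. A minimal Python sketch (the function name is mine), which reproduces the orders of the cubic cages listed in the table above:

```python
def n0(k, g):
    """Moore lower bound on |V| for a k-regular graph of girth g >= 3."""
    if g % 2:  # odd girth: count a tree of depth (g-1)/2 grown from a vertex
        return 1 + k * ((k - 1) ** ((g - 1) // 2) - 1) // (k - 2)
    # even girth: grow trees of depth g/2 - 1 from both ends of an edge
    return 2 * ((k - 1) ** (g // 2) - 1) // (k - 2)

# orders of the unique cubic cages; only the (3,7)-cage exceeds the bound
print([n0(3, g) for g in (3, 4, 5, 6, 7, 8, 12)])
```

Both branches use exact integer division, which is valid because $(k-1)^m \equiv 1 \pmod{k-2}$, so the numerators are always divisible by $k-2$.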

Consequently, a (k,g)-cage must have at least n0(k,g) vertices. In the case where this lower bound is actually achieved, we have the following:

Definition 9.4.3
A $(k,g)$-cage with exactly $n_0(k,g)$ vertices is called a $(k,g)$-Moore graph.¹

¹Some texts, e.g. [8], [22], only use the term Moore graph when $g$ is odd; if $g$ is even, the term generalised polygon is used instead.

(For example, of the $(3,g)$-cages listed in figure 9.4, only the (3,7)-cage is not a Moore graph, since $n_0(3,7) = 22 < 24$.)

There has been an extensive study of Moore graphs, especially concerning for which values of k and g a Moore graph exists and whether such a graph is unique. We summarise various authors’ work in the next theorem.

Theorem 9.4.4

1. If a Moore graph with even girth $g$ exists, then $g \in \{4, 6, 8, 12\}$. Furthermore, such graphs exist when:

(a) $g = 4$, $k \ge 3$: these are the complete bipartite graphs $K_{k,k}$ and are unique;
(b) $g = 6, 8, 12$, $k = q + 1$ for some prime power $q$.

2. If a Moore graph with odd girth $g$ exists, then $g \in \{3, 5\}$. Furthermore, such graphs exist when:

(a) $g = 3$, $k \ge 3$: these are the complete graphs $K_{k+1}$ and are unique;
(b) $g = 5$, $k = 3$: the Petersen graph $O_3$, which is unique;
(c) $g = 5$, $k = 7$: the Hoffman-Singleton graph (see [13] or [28]), which is unique;
(d) $g = 5$, $k = 57$: ?

Proof: See Holton & Sheehan [29] for references. 

The big question mark in 2(d) is not a typographical error! Whether a (57,5)-Moore graph exists is a long-standing and quite famous open problem in graph theory. (Cases 2(b), (c), (d) are rather sporadic.) Note that the (3,6)-cage, (3,8)-cage and (3,12)-cage all fall into category 1(b), where the graphs are all related to projective geometry; see [22] or [29] for an explanation.

This is all very interesting. But what does it have to do with line graphs or distance-transitivity?

Theorem 9.4.5 (Biggs 1974) Let $\Gamma$ be a connected graph of valency $k \ge 3$. Then if its line graph $L(\Gamma)$ is distance-transitive, $\Gamma$ is a $(k,g)$-Moore graph for some $k \ge 3$ and $k \ge g$.

Proof: See [7]. □

Consequently, all we have to do to identify a distance-transitive line graph is to consider each (k,g)-Moore graph, look at its line graph and check whether it is distance-transitive. The next two theorems are examples of when the answer is positive.

Theorem 9.4.6 The line graphs of $K_n$, for $n \ge 3$, are the Johnson graphs $J(n,2,1)$.

Proof: Label the vertices of $K_n$ as $1, 2, \ldots, n$. The edges of $K_n$ are all the possible unordered pairs of distinct integers chosen from $1, 2, \ldots, n$. Two edges are adjacent if and only if both are incident with a common vertex, i.e. if the pairs have an element in common.

Therefore the vertices of $L(K_n)$ are all the possible 2-subsets chosen from $\{1, 2, \ldots, n\}$, two subsets $\{x,y\}$, $\{z,w\}$ being adjacent if and only if $|\{x,y\} \cap \{z,w\}| = 1$. This is precisely the definition of $J(n,2,1)$ (see chapter 3), which we know from section 3.2 to be distance-transitive. □
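The identification can be checked directly for a small case; a minimal Python sketch (an illustration, not part of the original argument), with $n = 5$:

```python
from itertools import combinations

n = 5
# Vertices of L(K_n): the edges of K_n, i.e. the 2-subsets of {1,...,n}.
verts = [frozenset(p) for p in combinations(range(1, n + 1), 2)]

def line_adjacent(x, y):
    # Two edges of K_n are adjacent in L(K_n) iff they share an endpoint.
    return x != y and len(x & y) >= 1

def johnson_adjacent(x, y):
    # Two 2-subsets are adjacent in J(n,2,1) iff they meet in exactly one point.
    return x != y and len(x & y) == 1

# For distinct 2-subsets the two conditions coincide, so L(K_n) = J(n,2,1).
assert all(line_adjacent(x, y) == johnson_adjacent(x, y)
           for x in verts for y in verts)

# Each vertex has valency 2(n - 2), i.e. 6 when n = 5.
degrees = {sum(johnson_adjacent(x, y) for y in verts) for x in verts}
assert degrees == {6}
```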

As a special case, we see that the line graph of the tetrahedron (isomorphic to $K_4$) is the octahedron (isomorphic to $L(K_4) \cong J(4,2,1)$), a standard exercise about line graphs.

Theorem 9.4.7 The line graphs of $K_{n,n}$ are the Hamming graphs $H(2,n)$.

WARNING: Don’t confuse H(2,n) (known as the lattice graphs) with H(k,2) (the k-cubes)!!

Proof: The vertex set of $K_{n,n}$ has bipartition $V_1, V_2$. Label the vertices of $K_{n,n}$ in $V_1$ as $1_1, 2_1, \ldots, n_1$ and those in $V_2$ as $1_2, 2_2, \ldots, n_2$. The edges of $K_{n,n}$ are all the possible pairs of the form $i_1 j_2$, that is, all possible ordered pairs of integers chosen from $1, 2, \ldots, n$. Two edges of $K_{n,n}$ are adjacent if and only if they are incident with a common vertex, that is, if the two pairs agree in the first co-ordinate or the second co-ordinate.

Therefore the vertices of $L(K_{n,n})$ are all the possible ordered 2-tuples of elements of the set $\{1, 2, \ldots, n\}$, adjacent if and only if they agree in exactly one co-ordinate (equivalently, if they differ in exactly one co-ordinate). This is precisely the definition (see 5.1.1) of the Hamming graph $H(d,n)$ with $d = 2$, which we know to be distance-transitive (by section 5.2). □

For example, $H(2,3)$ (see figure 5.1) is the line graph of $K_{3,3}$.
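This case, too, can be verified computationally; a small sketch (illustrative code, not from the text), with $n = 3$:

```python
from itertools import product

n = 3
# Edges of K_{n,n} correspond to ordered pairs (i, j): vertex i_1 joined to j_2.
verts = list(product(range(n), repeat=2))

def adjacent(x, y):
    # Agree in exactly one co-ordinate, i.e. Hamming distance 1:
    # the defining adjacency of H(2, n).
    return sum(a != b for a, b in zip(x, y)) == 1

# H(2,3) has n^2 = 9 vertices, each of valency 2(n - 1) = 4.
degrees = {sum(adjacent(x, y) for y in verts) for x in verts}
assert len(verts) == n * n and degrees == {2 * (n - 1)}
```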

9.5 Bipartite Doubles

This is another straightforward construction:

Definition 9.5.1 The bipartite double, $D(\Gamma)$, of a graph $\Gamma$ has vertex set $V_1 \,\dot\cup\, V_2$, where $V_1, V_2$ are two copies of $V\Gamma$, and a vertex $v_1 \in V_1$ is adjacent to a vertex $w_2 \in V_2$ if and only if the corresponding vertices $v, w \in V\Gamma$ are adjacent in $\Gamma$.

By construction, this $D(\Gamma)$ is bipartite and has double the number of vertices and edges of $\Gamma$ (hence the name). Also, if $v_1, v_2 \in VD(\Gamma)$ are the vertices of $D(\Gamma)$ corresponding to $v \in V\Gamma$, then $\deg(v_1) = \deg(v_2) = \deg(v)$. Thus if $\Gamma$ is $k$-regular then so is $D(\Gamma)$. Enough theory; now for an example.
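The construction is easy to express in code; a minimal Python sketch (an illustration, representing graphs as adjacency dictionaries):

```python
def bipartite_double(adj):
    """Bipartite double of Definition 9.5.1: vertices are tagged copies
    (v, 1) and (v, 2), with (v, 1) ~ (w, 2) iff v ~ w in the original."""
    double = {(v, i): set() for v in adj for i in (1, 2)}
    for v, nbrs in adj.items():
        for w in nbrs:
            double[(v, 1)].add((w, 2))
            double[(v, 2)].add((w, 1))
    return double

# K4 (the tetrahedron): every vertex adjacent to the other three.
k4 = {v: {w for w in range(4) if w != v} for v in range(4)}
d = bipartite_double(k4)

# Twice the vertices and edges, and every degree is preserved (here 3),
# consistent with D(K4) being the cube Q3.
assert len(d) == 8
assert all(len(nbrs) == 3 for nbrs in d.values())
assert sum(len(nbrs) for nbrs in d.values()) // 2 == 12
```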

Examples 9.5.2 (i) The bipartite double of the tetrahedron (K4) is the cube (Q3). (The labels are used to help us keep track of what’s happening.)

(a) K4 (b) D(K4)

(c) Q3

Figure 9.7: K4 and its bipartite double, Q3 96 New Graphs from Old

(ii) The bipartite double of K3,3 is two copies of K3,3.

Figure 9.8: K3,3 (left) and D(K3,3) (right)

What we observed in the example of K3,3 is not unusual, as we have this next result.

Proposition 9.5.3 The bipartite double, D(Γ), of a connected graph Γ is connected if and only if Γ is not bipartite. Furthermore, if Γ is bipartite, then D(Γ) is isomorphic to two copies of Γ.

Proof: Suppose $\Gamma$ is bipartite. Then $V\Gamma = U \,\dot\cup\, W$, so $VD(\Gamma) = (U_1 \,\dot\cup\, W_1) \,\dot\cup\, (U_2 \,\dot\cup\, W_2)$, where $U_1 \cong U_2 \cong U$ and likewise for $W$. The only edges of $\Gamma$ are between $U$ and $W$, so the only edges of $D(\Gamma)$ are between $U_1$ and $W_2$ and between $U_2$ and $W_1$, as shown in the diagram.

Figure 9.9: Schematic for 9.5.3, part 1

Therefore $D(\Gamma)$ has two components, as there is no path from any vertex in $U_1$ to $W_1$, or from $U_2$ to $W_2$, etc. Because $U_1 \cong U_2$ and $W_1 \cong W_2$, it is clear that these two components are each isomorphic to $\Gamma$.

Now suppose $\Gamma$ is not bipartite, and that $VD(\Gamma) = V_1 \,\dot\cup\, V_2$. Then $\Gamma$ contains an odd circuit, say $C$, of length $2m+1$. Hence $D(\Gamma)$ contains a circuit $\bar{C}$ of length $4m+2$, which contains vertices in both $V_1$ and $V_2$. (The case $m = 1$ is shown below.)

Figure 9.10: Schematic for 9.5.3, part 2

If $\Gamma$ contains no other vertices than those in $C$, then we are done. So suppose not, i.e. that there exists $x \in V\Gamma$ such that $x \notin C$. Because $\Gamma$ is connected, there is a path $\pi = xab\cdots v$ that joins $x$ to a vertex of $C$, say (WLOG) $v$. Then in $D(\Gamma)$ we have the path

$$\pi_1 = \begin{cases} x_1 a_2 b_1 \cdots v_2 & \text{if } \pi \text{ has odd length,} \\ x_1 a_2 b_1 \cdots v_1 & \text{if } \pi \text{ has even length.} \end{cases}$$

Therefore any vertex in V1 is connected to the circuit C¯, and by symmetry every vertex in V2 is also. Thus given any two vertices of D(Γ), we can form a path from one to the other (via C¯, although there may well be other, shorter ones). Hence D(Γ) is connected. 
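The dichotomy in Proposition 9.5.3 can be checked by machine on small examples; a short sketch (illustrative code, using the adjacency-dictionary representation as an assumption):

```python
from collections import deque

def bipartite_double(adj):
    # As in Definition 9.5.1: (v, 1) ~ (w, 2) iff v ~ w.
    double = {(v, i): set() for v in adj for i in (1, 2)}
    for v, nbrs in adj.items():
        for w in nbrs:
            double[(v, 1)].add((w, 2))
            double[(v, 2)].add((w, 1))
    return double

def num_components(adj):
    """Count connected components by breadth-first search."""
    seen, count = set(), 0
    for start in adj:
        if start in seen:
            continue
        count += 1
        queue = deque([start])
        seen.add(start)
        while queue:
            v = queue.popleft()
            for w in adj[v] - seen:
                seen.add(w)
                queue.append(w)
    return count

k4 = {v: {w for w in range(4) if w != v} for v in range(4)}            # not bipartite
k33 = {v: {w for w in range(6) if (w < 3) != (v < 3)} for v in range(6)}  # bipartite

assert num_components(bipartite_double(k4)) == 1   # connected (the cube Q3)
assert num_components(bipartite_double(k33)) == 2  # two copies of K_{3,3}
```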

Relating this to distance-transitive graphs, a slightly negative consequence of the above result is that the bipartite double of a distance-transitive graph need not itself be distance-transitive: it may not even be connected. Worse still, even some non-bipartite distance-transitive graphs have non-distance-transitive bipartite dou- bles, as we see below.

Example 9.5.4 The bipartite double of the octahedron is not distance-transitive, as it does not have a well-defined intersection array.

Figure 9.11: The octahedron Oct and its bipartite double D(Oct) 98 New Graphs from Old

If $\Gamma = D(\mathrm{Oct})$, then we have $|\Gamma_1(1) \cap \Gamma_1(6)| = 4$, but $|\Gamma_1(1) \cap \Gamma_1(2)| = 2$, so the intersection numbers are dependent on the choice of vertices and we cannot write down an intersection array. Hence $D(\mathrm{Oct})$ cannot be distance-transitive, or even distance-regular.
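This failure can be verified mechanically. In the sketch below (illustrative Python; the vertex labels are mine, not those of figure 9.11), the octahedron is taken on $\{0, \ldots, 5\}$ with antipodal pairs $(0,3)$, $(1,4)$, $(2,5)$:

```python
from collections import deque

# Octahedron: every vertex adjacent to all others except its antipode.
oct_adj = {v: {w for w in range(6) if w != v and abs(w - v) != 3}
           for v in range(6)}

# Bipartite double, as in Definition 9.5.1.
dbl = {(v, i): set() for v in range(6) for i in (1, 2)}
for v, nbrs in oct_adj.items():
    for w in nbrs:
        dbl[(v, 1)].add((w, 2))
        dbl[(v, 2)].add((w, 1))

def distances_from(adj, s):
    """Breadth-first distances from s."""
    d, queue = {s: 0}, deque([s])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in d:
                d[w] = d[v] + 1
                queue.append(w)
    return d

# Collect |Γ1(u) ∩ Γ1(v)| over all v at distance 2 from a fixed u.
u = (0, 1)
c2_values = {len(dbl[u] & dbl[v])
             for v, dist in distances_from(dbl, u).items() if dist == 2}
assert c2_values == {2, 4}   # both values occur, so c_2 is not well defined
```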

To end this chapter on a positive note, there is one family of graphs familiar to us whose bipartite doubles are all distance-transitive. These are the odd graphs we met in chapter 3. Brouwer, Cohen & Neumaier [11] claim that to show these graphs are distance-transitive is “trivial” and “straightforward”. I have to disagree.

Chapter 10

Bounding the Diameter

10.1 Introduction

For any integer $k > 2$, there are only finitely many finite distance-transitive graphs of valency $k$. This was proved by Cameron [12] and Weiss [39], who showed that for any such $k$, the diameter of a $k$-valent distance-transitive graph is bounded by some function of $k$. However, for small values of $k$ (such as $k = 3$ or $4$), the earlier case-specific work produced more manageable bounds. It is these that we will investigate first.

10.2 Cubic Graphs

In this section, we always assume that Γ is cubic.

Now, any distance-transitive graph is $s$-arc-transitive for some $s \ge 1$. Tutte [37] proved a number of results regarding cubic $s$-arc-transitive graphs, which can then be used to place a bound on the diameter of cubic distance-transitive graphs, as we shall see. We roughly follow the treatment given in Holton & Sheehan [29]. First, we need the following definition.

Definition 10.2.1 Let $\pi = (u_0, \ldots, u_s)$ be an $s$-arc of $\Gamma$, and choose any vertex $w$ adjacent to $u_s$ other than $u_{s-1}$. Then the $s$-arc $\pi^* = (u_1, \ldots, u_s, w)$ is called a successor of $\pi$.

In general, it is not particularly straightforward to tell if a given graph is $s$-arc-transitive or not. However, using successors we have the following result.

Lemma 10.2.2 (Tutte 1947) Let $\pi = (u_0, u_1, \ldots, u_s)$ be an $s$-arc of a connected graph $\Gamma$. Suppose that for every successor $\pi^*$ of $\pi$, there is some $g \in \operatorname{Aut}(\Gamma)$ such that $(\pi)g = \pi^*$. Then $\Gamma$ is $s$-arc-transitive.


Proof: Let $A_\pi$ be the set of all $s$-arcs of $\Gamma$ that are the image of $\pi$ under an automorphism of $\Gamma$, i.e. $A_\pi = \{(\pi)g \mid g \in \operatorname{Aut}(\Gamma)\}$. Choose some $\rho = (v_0, v_1, \ldots, v_s) \in A_\pi$, so $\rho = (\pi)h$ for some $h \in \operatorname{Aut}(\Gamma)$, and let $\rho^* = (v_1, v_2, \ldots, v_s, v_{s+1})$ be some successor of $\rho$.

Then because $(\rho)h^{-1} = \pi$, we have $(\rho^*)h^{-1} = (u_1, u_2, \ldots, u_s, (v_{s+1})h^{-1})$, a successor of $\pi$. So by assumption there exists some $g \in \operatorname{Aut}(\Gamma)$ satisfying $(\pi)g = (\rho^*)h^{-1}$. Hence $\rho^* = (\pi)gh$, so $\rho^* \in A_\pi$.

We can then repeat this argument to show that a successor of $\pi^*$ is in $A_\pi$, and a successor of that $s$-arc is in $A_\pi$ also, and so on. Because $\Gamma$ is connected, this will eventually cover all $s$-arcs of $\Gamma$, so they must all be elements of $A_\pi$. Hence every $s$-arc of $\Gamma$ is the image of $\pi$ under an automorphism of $\Gamma$, so $\Gamma$ is $s$-arc-transitive. □

This result allows us to prove the next one, which applies specifically to the cubic case.

Lemma 10.2.3 (Tutte 1947) Let $\Gamma$ be a strictly $s$-arc-transitive, cubic graph (for some $s$), with $\pi, \rho$ some $s$-arcs in $\Gamma$. Then there is a unique $g \in \operatorname{Aut}(\Gamma)$ such that $(\pi)g = \rho$.

Proof: Suppose, for a contradiction, that we have $g_1, g_2 \in \operatorname{Aut}(\Gamma)$, $g_1 \ne g_2$, such that $(\pi)g_1 = \rho = (\pi)g_2$. Then $(\pi)g_1 g_2^{-1} = \pi$, and since $g_1 g_2^{-1} \ne e$, we have a non-identity element, say $h$, of $\operatorname{Aut}(\Gamma)$ that fixes $\pi$.

Let $\pi = (u_0, u_1, \ldots, u_s)$. Since $\Gamma$ is cubic, $\pi$ has exactly two successors, say $(u_1, \ldots, u_s, x)$ and $(u_1, \ldots, u_s, y)$. By symmetry, $\pi$ is itself the successor of the two $s$-arcs $(z, u_0, \ldots, u_{s-1})$ and $(w, u_0, \ldots, u_{s-1})$. So we have the following situation:

Figure 10.1: Schematic for 10.2.3

We can assume that $h$ does not fix the successors of $\pi$, so WLOG assume that $(x)h = y$. Now, because $\Gamma$ is $s$-arc-transitive, there exists $f \in \operatorname{Aut}(\Gamma)$ such that $(z, u_0, \ldots, u_{s-1})f = (u_0, \ldots, u_s)$. Hence $(u_s)f \in \{x, y\}$. Again WLOG, let $(u_s)f = x$. So we now have $(z, u_0, \ldots, u_s)f = (u_0, \ldots, u_s, x)$, and also $(z, u_0, \ldots, u_s)fh = (u_0, \ldots, u_s, x)h = (u_0, \ldots, u_s, y)$. That is, the group elements $f$ and $fh$ map an $(s+1)$-arc to both of its successors. Hence by 10.2.2 above, $\Gamma$ is $(s+1)$-arc-transitive, contradicting our supposition that $\Gamma$ was strictly $s$-arc-transitive.

Therefore our initial assumption must be false, so there must be a unique $g \in \operatorname{Aut}(\Gamma)$ satisfying $(\pi)g = \rho$. □

We can now use this to determine the size of the automorphism group of a cubic graph $\Gamma$.

Lemma 10.2.4 (Tutte 1947) Suppose $\Gamma$ is a connected, strictly $s$-arc-transitive cubic graph (for $s \ge 1$), with girth $g \ge 3$. Then $\Gamma$ contains exactly $|V\Gamma| \cdot 3 \cdot 2^{s-1}$ $s$-arcs and $|\operatorname{Aut}(\Gamma)| = |V\Gamma| \cdot 3 \cdot 2^{s-1}$.

Proof: Since $g \ge 2s - 2$ (by 2.1.3) and $g \ge 3$, we get $s \le \frac{1}{2}(g+2) < g$, so any $s$-arc is an $s$-path (it can't have any repeated vertices, as that would give a circuit of length $< g$). Now let $S$ be the set of all $s$-arcs in $\Gamma$. By 10.2.3 above, $|S| = |\operatorname{Aut}(\Gamma)|$ (i.e. there are as many automorphisms of $\Gamma$ as there are $s$-arcs in $\Gamma$). Since $\Gamma$ is cubic, from each vertex we can form $3 \cdot 2^{s-1}$ $s$-arcs (which are all $s$-paths). Also, $\Gamma$ is vertex-transitive, so we can move each of these paths to start from any of the $|V\Gamma|$ vertices. Thus $|S| = |V\Gamma| \cdot 3 \cdot 2^{s-1} = |\operatorname{Aut}(\Gamma)|$. □

The next lemma is a direct consequence of the previous one.
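The count $|V\Gamma| \cdot 3 \cdot 2^{s-1}$ of $s$-arcs in Lemma 10.2.4 can be confirmed by brute force on, say, the Petersen graph; a Python sketch (the arc enumeration is an illustration of the counting, not Tutte's argument):

```python
from itertools import combinations

# Petersen graph O3 as the Kneser graph: 2-subsets of {0,...,4},
# adjacent iff disjoint.
verts = [frozenset(p) for p in combinations(range(5), 2)]
adj = {v: {w for w in verts if not v & w} for v in verts}

def count_s_arcs(adj, s):
    # An s-arc is a walk (u_0,...,u_s) with u_{i+1} != u_{i-1}.
    arcs = [(u, w) for u in adj for w in adj[u]]          # the 1-arcs
    for _ in range(s - 1):
        arcs = [a + (w,) for a in arcs for w in adj[a[-1]] if w != a[-2]]
    return len(arcs)

# A cubic graph on n vertices has n * 3 * 2^(s-1) s-arcs.
for s in (1, 2, 3):
    assert count_s_arcs(adj, s) == 10 * 3 * 2**(s - 1)
```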

Lemma 10.2.5 (Tutte 1947) Suppose $\Gamma$ is as in 10.2.4 and let $G = \operatorname{Aut}(\Gamma)$. Then for any vertex $v \in V\Gamma$, $|\operatorname{Stab}_G(v)| = 3 \cdot 2^{s-1}$.

Proof: Since $\Gamma$ is $s$-arc-transitive, it is also vertex-transitive, so for any $v \in V\Gamma$, $\operatorname{Orb}_G(v) = V\Gamma$ (from 1.2.4). By the Orbit-Stabiliser Theorem (1.2.8), we have

$$|\operatorname{Stab}_G(v)| = \frac{|G|}{|\operatorname{Orb}_G(v)|} = \frac{|V\Gamma| \cdot 3 \cdot 2^{s-1}}{|V\Gamma|} = 3 \cdot 2^{s-1}.$$

The next step is the crucial one. It is another of Tutte's results [37]; the proof is quite long, so we shall just state the result here.

Theorem 10.2.6 (Tutte 1947) For any $s$-arc-transitive cubic graph, $s \le 5$.

Proof: See [37]; a version using more modern terminology can be found in Biggs [8]. □

We return to the domain of distance-transitive graphs with this corollary to Tutte’s theorem, due to Biggs & Smith [10].

Corollary 10.2.7 (Biggs & Smith 1971) For any cubic distance-transitive graph $\Gamma$, each $k_i$ (the size of $\Gamma_i$, a cell of a distance partition) is a divisor of 48.

Proof: Fix some $v \in V\Gamma$, and let $H = \operatorname{Stab}_G(v)$. By 2.2.4, $H$ acts transitively on $\Gamma_i$, so each $\Gamma_i$ is an orbit of $H$. By the Orbit-Stabiliser Theorem (1.2.8), the size of an orbit of $H$ divides the order of $H$, i.e. $|\Gamma_i|$ divides $|H|$, or $k_i \mid 3 \cdot 2^{s-1}$, for $s = 1, 2, 3, 4, 5$. Hence $k_i \mid 48$. □

So we have a bound on each of the $k_i$. We now convert this into a bound on the diameter of $\Gamma$, which we shall denote by $d$. If $\Gamma$ is distance-transitive, we can obviously find its intersection array $\iota(\Gamma)$. Because $\Gamma$ is cubic, by 2.3.5 the sum of each column of $\iota(\Gamma)$ is 3. Thus these columns (apart from the first and the last) must take one of these three forms:

Type A: $\begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix}$, Type B: $\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$, Type C: $\begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix}$

We'll denote the $i$th column of $\iota(\Gamma)$ by $p_i$. By 2.3.6, we know that the top row of the array must be increasing and the bottom row must be decreasing, so we can only have so many columns of type A, then so many of type B and finally some of type C.

Lemma 10.2.8 (Biggs & Smith 1971) $p_1, \ldots, p_6$ cannot all be of type A. Also, $p_1, \ldots, p_5$ cannot all be of type A with $p_6$ of type B.

Proof: By the recurrence relation in 2.3.7, we have

$$k_i = \frac{k_{i-1} b_{i-1}}{c_i}.$$

If $p_1, \ldots, p_6$ are all of type A, or if $p_1, \ldots, p_5$ are of type A with $p_6$ of type B (note that $k_0 = 1$ and $b_0 = 3$), we have $k_1 = 3$, $k_2 = 6$, $k_3 = 12$, $k_4 = 24$ and $k_5 = 48$ (all divisors of 48), but $k_6 = 96$, which does not divide 48. Hence the result follows. □
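The recurrence makes such computations mechanical; a short sketch (illustrative only):

```python
from fractions import Fraction

def cell_sizes(b, c):
    """Cell sizes k_i from an intersection array, via k_i = k_{i-1} b_{i-1} / c_i.
    Here b = (b_0, ..., b_{d-1}) and c = (c_1, ..., c_d)."""
    k = [Fraction(1)]
    for b_i, c_i in zip(b, c):
        k.append(k[-1] * b_i / c_i)
    return k

# Six leading type-A columns would force k_6 = 96, which does not divide 48:
assert cell_sizes([3, 2, 2, 2, 2, 2], [1, 1, 1, 1, 1, 1]) == [1, 3, 6, 12, 24, 48, 96]

# For comparison, the Petersen graph's array {3, 2; 1, 1} gives k = 1, 3, 6
# (10 vertices in total).
assert cell_sizes([3, 2], [1, 1]) == [1, 3, 6]
```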

Suppose $p_j$ is the first column that is not of type A. Then we have two possible cases to consider:

1. $p_j$ is of type C, with $1 \le j \le 6$;

2. $p_j$ is of type B, with $1 \le j \le 5$.

Lemma 10.2.9 (Biggs & Smith 1971) In case 1, $d < 2j$. In case 2, $d < 3j$.

Proof:

1. Suppose $d \ge 2j$. Then choose vertices $v \in \Gamma_j(u)$ and $w \in \Gamma_{2j}(u)$, so we have $d(u,v) = d(v,w) = j$.

Figure 10.2: Schematic for 10.2.9, part 1

Since $p_j$ is of type C, we can form two paths $\pi, \rho$ of length $j$ from $u$ to $v$, so we can form a circuit of length $2j$ containing $u$ and $v$. Since $\Gamma$ is distance-transitive and $d(u,v) = d(v,w)$, $v, w$ must also lie in a circuit of length $2j$. But this is impossible, as there is only one vertex in $\Gamma_{j+1}(u)$ adjacent to $v$. So $d < 2j$.

2. Suppose $d \ge 3j$. Then choose vertices $v \in \Gamma_j(u)$, $w \in \Gamma_{2j}(u)$ and $x \in \Gamma_{3j}(u)$, so we have $d(u,v) = d(v,w) = d(w,x) = j$.

Figure 10.3: Schematic for 10.2.9, part 2

Now $p_j$ is of type B, so $u, v$ lie in a circuit $\sigma$ of length $2j+1$. By distance-transitivity, $v, w$ must also lie in such a circuit, say $\tau$. Thus $p_{2j}$ must be of type C (as $w$ is adjacent to two vertices in $\Gamma_{2j-1}(u)$). Also, $w, x$ should lie in a circuit of length $2j+1$, but by the same arguments as in part 1 above, this is impossible. Hence $d < 3j$. □

So we have laid all the groundwork for this next result, the target of this section.

Theorem 10.2.10 (Biggs & Smith 1971) In all cases, $d < 15$.

Proof: In case 1, $d < 2j$ and $1 \le j \le 6$, so $d < 12 < 15$. In case 2, $d < 3j$ and $1 \le j \le 5$, so $d < 15$. □

We are now in a position to carry out what Cameron ([12], [13]) refers to as Smith's program for the case of cubic distance-transitive graphs. We will do this in the next chapter.

10.3 Tetravalent Graphs

A tetravalent graph is a graph of valency 4.

After classifying all distance-transitive graphs of valency 3 (cubic graphs), this is the obvious next step. However, things aren't quite so straightforward, first because Tutte's results on cubic $s$-arc-transitive graphs don't generalise easily to higher valencies (for example, 10.2.3 relies on $\Gamma$ being cubic). A number of researchers in the late 1960s and early 1970s worked on this problem, culminating in the following results of Gardiner [19], which are analogous to 10.2.6; see also Smith [34].

Theorem 10.3.1 (Gardiner 1973) Let $p$ be a prime, and let $\Gamma$ be a connected $s$-arc-transitive graph of valency $p + 1$. Then $s \in \{1, 2, 3, 4, 5, 7\}$. □

Also, this time analogous to 10.2.5, we have:

Theorem 10.3.2 (Gardiner 1973) Let $\Gamma$ be as above, and let $G = \operatorname{Aut}(\Gamma)$. Then for any $v \in V\Gamma$,

$$|\operatorname{Stab}_G(v)| \le (p+1)\,p^{s-1}(p-1)^2 \quad \text{for } s \in \{4, 5, 7\},$$
$$|\operatorname{Stab}_G(v)| \le (p+1)!\,p! \quad \text{for } s \in \{1, 2, 3\}. \quad \Box$$

Consequently, in the special case we're interested in (where $p + 1 = 4$, so $p = 3$), we have:

Corollary 10.3.3 For a tetravalent, $s$-arc-transitive graph $\Gamma$, with automorphism group $G = \operatorname{Aut}(\Gamma)$ and any $v \in V\Gamma$, $|\operatorname{Stab}_G(v)| \le 2^4 \cdot 3^6$.

Proof: For the case $p = 3$, by 10.3.2 we have $(p+1)!\,p! = 4! \cdot 3! = 2^4 \cdot 3^2$, and $(p+1)p^{s-1}(p-1)^2 = 4 \cdot 3^{s-1} \cdot 2^2 = 2^4 \cdot 3^{s-1}$. By 10.3.1 we have $s \le 7$, so combining these two results, we obtain $|\operatorname{Stab}_G(v)| \le 2^4 \cdot 3^{s-1} \le 2^4 \cdot 3^6$. □

From now on in this section, we always assume that Γ is tetravalent.

We now want to convert this bound on the order of the stabiliser of a vertex into a bound on the diameter, as we did in the previous section in the cubic case. Analogous to 10.2.7, we again quote the Orbit-Stabiliser Theorem (1.2.8) to show that $k_i$ divides $|\operatorname{Stab}_G(v)|$, so therefore $k_i \le 2^4 \cdot 3^6$. Because $\Gamma$ is tetravalent, the sum of each column of $\iota(\Gamma)$ is 4, so (apart from the first and last column) the columns of $\iota(\Gamma)$ are of the following types:

Type A: $\begin{pmatrix} 1 \\ 0 \\ 3 \end{pmatrix}$, Type B: $\begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}$, Type C: $\begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}$, Type D: $\begin{pmatrix} 2 \\ 0 \\ 2 \end{pmatrix}$, Type E: $\begin{pmatrix} 2 \\ 1 \\ 1 \end{pmatrix}$, Type F: $\begin{pmatrix} 3 \\ 0 \\ 1 \end{pmatrix}$

There are some immediate restrictions on the way the columns appear in an inter- section array. For example, the top row must be increasing and the bottom row decreasing (by Theorem 2.3.6), so the columns must appear in the order above. Also, not all of these column types can occur in the same graph: again because of Theorem 2.3.6, we cannot have both of types C and D in the same array.

First, we’ll consider the case when the graph has girth 3 (as Smith did in [33]).

Theorem 10.3.4 (Smith 1973) Suppose $\Gamma$ is a tetravalent distance-transitive graph with girth 3 and diameter $d$. Then $d \le 8$.

Proof: We know that the entry $a_1$ is non-zero. If $a_1 = 3$, we must have the complete graph $K_5$, which has diameter 1. If $a_1 = 2$, we can construct the following:

Figure 10.4: Schematic for 10.3.4

Because each edge must lie in exactly two triangles, it follows that we have to include edges $v_3 v_6$, $v_4 v_6$ and $v_5 v_6$, in which case the graph obtained is the octahedron, which has diameter 2. The case $a_1 = 1$ is more tricky; however, Smith [33] showed that in this case the diameter of the graph is $d \le 8$. □

Now suppose the girth is greater than 3. Then $a_1 = 0$, so the first column must be of type A. As in section 10.2, let $p_i$ denote the $i$th column of $\iota(\Gamma)$. By the formula $k_i b_i = k_{i+1} c_{i+1}$, we have the following:

1. If $p_i$ is of type A and $p_{i+1}$ is of types A, B or C, then $k_{i+1} = 3k_i$.

2. If $p_i$ is of type B and $p_{i+1}$ is of types B or C, then $k_{i+1} = 2k_i$.

3. If $p_i$ is of type A and $p_{i+1}$ is of types D or E, then $k_{i+1} = \frac{3}{2}k_i$.

4. If $p_i$ is of type A and $p_{i+1}$ is of type F, if $p_i$ is of type B and $p_{i+1}$ is of types D or E, if $p_i$ and $p_{i+1}$ are both of types C or D, or if $p_i$ is of type D and $p_{i+1}$ is of type E, then $k_{i+1} = k_i$.

5. If $p_i$ is of type C or E and $p_{i+1}$ is of type E, or if $p_i$ is of types B, C, D, E or F and $p_{i+1}$ is of type F, then $k_{i+1} < k_i$.

Loosely, this means that for column types A and B, the graph is ‘growing’ (i.e. the numbers ki are increasing), for column types C and D, the graph is ‘static’ (i.e. the numbers ki are constant) and for column types E and F, the graph is ‘contract- ing’ (i.e. the numbers ki are decreasing). Thus it is sufficient to find an upper bound for the number j, where p j is the last column of type C or D, as this will enforce an upper bound on the diameter.

Now, the intersection array is one of the following four types:

1. $\iota(\Gamma) = \begin{Bmatrix} * & 1 & \cdots & 1 & 2 & \cdots & 2 & \cdots \\ 0 & 0 & \cdots & 0 & 0 & \cdots & 0 & \cdots \\ 4 & 3 & \cdots & 3 & 2 & \cdots & 2 & \cdots & * \end{Bmatrix}$

2. $\iota(\Gamma) = \begin{Bmatrix} * & 1 & \cdots & 1 & 1 & \cdots & 1 & 2 & \cdots & 2 & \cdots \\ 0 & 0 & \cdots & 0 & 1 & \cdots & 1 & 0 & \cdots & 0 & \cdots \\ 4 & 3 & \cdots & 3 & 2 & \cdots & 2 & 2 & \cdots & 2 & \cdots & * \end{Bmatrix}$

3. $\iota(\Gamma) = \begin{Bmatrix} * & 1 & \cdots & 1 & 1 & \cdots & 1 & \cdots \\ 0 & 0 & \cdots & 0 & 2 & \cdots & 2 & \cdots \\ 4 & 3 & \cdots & 3 & 1 & \cdots & 1 & \cdots & * \end{Bmatrix}$

4. $\iota(\Gamma) = \begin{Bmatrix} * & 1 & \cdots & 1 & 1 & \cdots & 1 & 1 & \cdots & 1 & \cdots \\ 0 & 0 & \cdots & 0 & 1 & \cdots & 1 & 2 & \cdots & 2 & \cdots \\ 4 & 3 & \cdots & 3 & 2 & \cdots & 2 & 1 & \cdots & 1 & \cdots & * \end{Bmatrix}$

Note that although there is necessarily a column of type A if the girth is greater than 3, the other types do not necessarily occur. So we now have:

Theorem 10.3.5 (Smith 1974) Suppose $\Gamma$ is a tetravalent distance-transitive graph with girth $> 3$ and diameter $d$.

Then $d \le 29$.

Proof: We consider the four cases listed above.

1. Let $p_s$ and $p_{s+t}$ be the first and last columns of type D. By 10.3.3 above, we have $k_s \le 2^4 \cdot 3^6$, so this implies $s \le 8$. Smith [35] used quite a lengthy graph-theoretic argument to show $t \le s$, so therefore $s + t \le 16$, and $k_{s+t+1}$ must divide $2 \cdot 3^7$.

Now, for $s+t+1 \le i \le d-1$, we have $k_{i+1} = \frac{1}{2}k_i$, $k_{i+1} = \frac{1}{3}k_i$ or $k_{i+1} = \frac{1}{4}k_i$, so $d - (s+t+1) \le 8$ and therefore $d \le 25$.

2. Let $p_r$ be the first column of type B, and $p_s$, $p_{s+t}$ be as in case 1. Again by 10.3.3, we have $k_r \le 2^4 \cdot 3^6$, so because there must be at least one column of type A before $p_r$, we have $2 \le r \le 8$. Using similar graph-theoretic arguments to those in case 1, Smith [35] showed that $s + t < 17$.

Again, we have that for $s+t+1 \le i \le d-1$, $k_{i+1} = \frac{1}{2}k_i$, $k_{i+1} = \frac{1}{3}k_i$ or $k_{i+1} = \frac{1}{4}k_i$. This then leads on to imply that $d < 29$.

3. This time, let $p_s$ and $p_{s+t}$ be the first and last columns of type C. Similar to case 1, due to Smith [35] we have $s \le 8$ and $t < s$, so $s + t \le 16$. This time, however, we have that $k_{s+t+1}$ divides $2^2 \cdot 3^7$, so it follows that $d \le 26$.

4. Similar to case 2, we let $p_r$ be the first column of type B, and $p_s$, $p_{s+t}$ be as in case 3. Again we have $2 \le r \le 8$ and $t < r$. By further graph-theoretic arguments, we have $s + t < 16$, and that for $s+t+1 \le i \le d-1$, $k_{i+1} = \frac{1}{2}k_i$, $k_{i+1} = \frac{1}{3}k_i$ or $k_{i+1} = \frac{1}{4}k_i$. Consequently, this gives $d < 27$.

Combining all these cases together, we can conclude that in all cases, $d \le 29$. □

So we can now carry out Smith's program for the case of tetravalent graphs, which we will do in the next chapter.
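The final counting step in each case above is simple arithmetic: once $k_{s+t+1}$ divides a known number, each later column divides $k_i$ by 2, 3 or 4, so the number of remaining columns is at most the total number of prime factors of that number. A quick sketch (illustrative only):

```python
def omega(n):
    """Number of prime factors of n counted with multiplicity: an upper
    bound on how many times n can be divided by 2, 3 or 4 before reaching 1."""
    count, p = 0, 2
    while n > 1:
        while n % p == 0:
            n //= p
            count += 1
        p += 1
    return count

# Case 1: k_{s+t+1} divides 2 * 3^7, so at most 8 further columns (d <= 25).
assert omega(2 * 3**7) == 8
# Case 3: k_{s+t+1} divides 2^2 * 3^7, so at most 9 further columns (d <= 26).
assert omega(2**2 * 3**7) == 9
```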

10.4 Extending to Higher Valencies

The major stumbling block in extending the arguments we have seen in the previous two sections to graphs of arbitrary valency $k$ was finding a bound on the order of the stabiliser of a vertex. However, in 1983 Cameron et al. [14] published a proof of the following result.

Theorem 10.4.1 (The Sims Conjecture¹) Suppose $G$ is a finite permutation group acting primitively on a set $\Omega$, and for some $x \in \Omega$, $\operatorname{Stab}_G(x)$ has an orbit of size $k$ (other than $\{x\}$). Then $|\operatorname{Stab}_G(x)| \le f(k)$, where $f(k)$ is an integer.

¹It's still referred to as a conjecture, even though it has been proved for nearly 20 years!

Proof: See [14]. 

This proof is very deep, as it is dependent on the celebrated Classification of Finite Simple Groups and uses the O'Nan-Scott Theorem (see [13]), which is another major achievement of late-20th-century group theory. Translated into our terms, we have $\Omega = V\Gamma$, $G = \operatorname{Aut}(\Gamma)$, $x$ a vertex of $\Gamma$ and the orbit of size $k$ being $\Gamma_1(x)$, whose size is, of course, the valency of $\Gamma$. It is also important to notice that the theorem only refers to primitive permutation groups. As we know from chapter 8, many distance-transitive graphs are imprimitive², so the next theorem does not follow immediately.

Theorem 10.4.2 (Cameron 1982) There are only finitely many finite distance-transitive graphs of given valency k > 2.

Proof: See [12]. 

Cameron's argument roughly follows those in the previous two sections, initially making use of the assumption that $\Gamma$ is primitive to show that, because the order of the stabiliser is bounded (by 10.4.1), the diameter of $\Gamma$ is bounded. Consequently there can only be a finite number of such graphs $\Gamma$. However, if $\Gamma$ is imprimitive, then it is bipartite or antipodal (recall Theorem 8.4.1). Using a construction of Smith called the derived graph (see [32]) for the antipodal case, and another construction known as the distance-two graph for the bipartite case, it is possible to reduce an imprimitive graph to a primitive one. Cameron then goes on to argue that if there were infinitely many finite imprimitive distance-transitive graphs, then there would have to be infinitely many primitive ones, which concludes the proof.

The other important missing detail is what the value of the bound on the diameter actually is. Unfortunately, it's not a very practical number for performing calculations with. This result can be found in Brouwer, Cohen & Neumaier [11]:

Theorem 10.4.3 The diameter of a distance-transitive graph of valency $k \ge 3$ is at most $(k^6)!\,2^{2k}$.

Proof: See [11]. □

²In fact, in the cases $k = 3$ and $k = 4$, most are imprimitive: see chapter 11.

The order of magnitude of this bound is beyond comprehension. For example, in the case $k = 3$, we have $(3^6)!\,2^6$, which fills up twenty lines of MAPLE output! Obviously, this doesn't compare very favourably with Biggs & Smith's original bound (see 10.2.10) of 15.

Chapter 11

Graphs of Low Valency

11.1 Smith’s Program

Smith's program is the method originally used to determine all distance-transitive graphs of a given valency, first used by Biggs and Smith [10] to find all cubic distance-transitive graphs, and then by Smith ([33], [35], [36]) to find the tetravalent ones. It is something of a 'brute force' method; later methods, such as those of Gardiner [20], use more algebraic techniques.

Smith's program works like this. First, use the required valency $k$ to find a bound for the order of the stabiliser of a vertex. Second, use this bound to find an upper bound $D$ for the diameter. (This, of course, is what we did in chapter 10.) Next, use a computer to find all the $(D+1) \times (D+1)$ tridiagonal matrices that are feasible in the sense of 7.4.1. Finally, determine which of these matrices are realisable (as defined in 7.4.2). The number of redundant matrices is, apparently, very small.
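For the cubic case, a tiny illustration of the kind of filter involved (only a few of the necessary conditions from 7.4.1 are sketched here; the function name and details are mine, not Smith's):

```python
from fractions import Fraction

def feasible_cubic(b, c, stab_bound=48):
    """Crude feasibility filter for a cubic intersection array
    {b_0,...,b_{d-1}; c_1,...,c_d} -- a small subset of the 7.4.1 conditions."""
    d = len(c)
    if len(b) != d or b[0] != 3:
        return False
    # Top row (the c_i) non-decreasing, bottom row (the b_i) non-increasing
    # (Theorem 2.3.6), and each interior column sums to at most the valency.
    if any(b[i] < b[i + 1] for i in range(d - 1)):
        return False
    if any(c[i] > c[i + 1] for i in range(d - 1)):
        return False
    if any(b[i] + c[i - 1] > 3 for i in range(1, d)):
        return False
    # Every k_i must be a positive integer dividing |Stab(v)| <= 48 (10.2.7).
    k = Fraction(1)
    for b_i, c_i in zip(b, c):
        k = k * b_i / c_i
        if k.denominator != 1 or stab_bound % k != 0:
            return False
    return True

# The Petersen graph {3, 2; 1, 1} and the Biggs-Smith array both pass:
assert feasible_cubic([3, 2], [1, 1])
assert feasible_cubic([3, 2, 2, 2, 1, 1, 1], [1, 1, 1, 1, 1, 1, 3])
# Six leading type-A columns fail, since k_6 = 96 does not divide 48:
assert not feasible_cubic([3, 2, 2, 2, 2, 2, 2], [1, 1, 1, 1, 1, 1, 1])
```

Running such a filter over all arrays up to the diameter bound is exactly the sort of computer search the program describes; deciding realisability is the genuinely hard step.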

In this chapter, we give the results of applying Smith’s program to the cases of cubic and tetravalent graphs.

11.2 Cubic Distance-Transitive Graphs

Using Theorem 10.2.10, we know that the diameter of a cubic distance-transitive graph is less than 15. Biggs and Smith [10] then applied Smith’s program, obtain- ing the twelve graphs listed below. Gardiner [20] later obtained this classification using group-theoretic methods and without using a computer. (This presentation contains material from a number of sources, such as Smith’s 1974 survey article [34], Biggs’ investigation of the primitive examples [6] and Gardiner’s classifica- tion [20]).

110 11.2 Cubic Distance-Transitive Graphs 111

Theorem 11.2.1 (Biggs & Smith 1971, Gardiner 1975) The only cubic distance-transitive graphs are the twelve listed below. 

1. The complete graph $K_4$
Number of vertices: 4
Diameter: 1
Girth: 3
Automorphism group: $S_4$
Primitive
Intersection array: $\begin{Bmatrix} * & 1 \\ 0 & 2 \\ 3 & * \end{Bmatrix}$
(See figure 1.1)

2. The complete bipartite graph $K_{3,3}$
Number of vertices: 6
Diameter: 2
Girth: 4
Automorphism group: $S_3 \operatorname{Wr} Z_2$
Bipartite and antipodal
Intersection array: $\begin{Bmatrix} * & 1 & 3 \\ 0 & 0 & 0 \\ 3 & 2 & * \end{Bmatrix}$
(See figure 8.3)

3. The Petersen graph $O_3$ (also known as: (3,5)-cage)
Number of vertices: 10
Diameter: 2
Girth: 5
Automorphism group: $S_5$
Primitive
Intersection array: $\begin{Bmatrix} * & 1 & 1 \\ 0 & 0 & 2 \\ 3 & 2 & * \end{Bmatrix}$
(See figure 3.3)

4. The cube $Q_3$
Number of vertices: 8
Diameter: 3
Girth: 4
Automorphism group: $Z_2 \operatorname{Wr} S_3$
Bipartite and antipodal
Intersection array: $\begin{Bmatrix} * & 1 & 2 & 3 \\ 0 & 0 & 0 & 0 \\ 3 & 2 & 1 & * \end{Bmatrix}$

Figure 11.1: The cube Q3

5. The Heawood graph $H$ (also known as: (3,6)-cage)
Number of vertices: 14
Diameter: 3
Girth: 6
Automorphism group: $PGL(2,7)$, see [20]
Bipartite, not antipodal (see 8.2.3)
Intersection array: $\begin{Bmatrix} * & 1 & 1 & 3 \\ 0 & 0 & 0 & 0 \\ 3 & 2 & 2 & * \end{Bmatrix}$

Figure 11.2: The Heawood graph

6. The Pappus graph
Number of vertices: 18
Diameter: 4
Girth: 6
Bipartite and antipodal
Intersection array: $\begin{Bmatrix} * & 1 & 1 & 2 & 3 \\ 0 & 0 & 0 & 0 & 0 \\ 3 & 2 & 2 & 1 & * \end{Bmatrix}$

Figure 11.3: The Pappus graph

7. The Coxeter graph
Number of vertices: 28
Diameter: 4
Girth: 7
Automorphism group: $PGL(2,7)$, see [6]
Primitive
Intersection array: $\begin{Bmatrix} * & 1 & 1 & 1 & 2 \\ 0 & 0 & 0 & 1 & 1 \\ 3 & 2 & 2 & 1 & * \end{Bmatrix}$

Figure 11.4: The Coxeter graph

8. Tutte's 8-cage $T$
Number of vertices: 30
Diameter: 4
Girth: 8
Automorphism group: $PGL(2,9)$, see [20]
Bipartite
Intersection array: $\begin{Bmatrix} * & 1 & 1 & 1 & 3 \\ 0 & 0 & 0 & 0 & 0 \\ 3 & 2 & 2 & 2 & * \end{Bmatrix}$

Figure 11.5: Tutte’s 8-cage

9. The dodecahedron
Number of vertices: 20
Diameter: 5
Girth: 5
Automorphism group: $S_5$
Antipodal
Intersection array: $\begin{Bmatrix} * & 1 & 1 & 1 & 2 & 3 \\ 0 & 0 & 1 & 1 & 0 & 0 \\ 3 & 2 & 1 & 1 & 1 & * \end{Bmatrix}$

Figure 11.6: The dodecahedron

10. The Desargues graph $D(O_3)$: the bipartite double of the Petersen graph
Number of vertices: 20
Diameter: 5
Girth: 6
Automorphism group: $S_5 \times Z_2$
Bipartite and antipodal
Intersection array: $\begin{Bmatrix} * & 1 & 1 & 2 & 2 & 3 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 3 & 2 & 2 & 1 & 1 & * \end{Bmatrix}$

Figure 11.7: The Desargues graph

11. The Biggs-Smith graph
Number of vertices: 102
Diameter: 7
Girth: 9
Automorphism group: $PSL(2,17)$, see [6]
Primitive
Intersection array: $\begin{Bmatrix} * & 1 & 1 & 1 & 1 & 1 & 1 & 3 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 \\ 3 & 2 & 2 & 2 & 1 & 1 & 1 & * \end{Bmatrix}$
(See below.)

12. The Foster graph
Number of vertices: 90
Diameter: 8
Girth: 10
Automorphism group: see [32]
Bipartite and antipodal
Intersection array: $\begin{Bmatrix} * & 1 & 1 & 1 & 1 & 2 & 2 & 2 & 3 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 3 & 2 & 2 & 2 & 2 & 1 & 1 & 1 & * \end{Bmatrix}$
This graph is described in Smith [32]: it can be constructed from Tutte's 8-cage.

Excluding $K_4$, the three primitive graphs (3, 7 and 11 above) were singled out by Biggs [6] as worthy of special attention: he describes them as being “remarkable”. (Whether this is a formal definition rather than just an adjective is unclear.) They can each be constructed in a similar way.

We now give the construction for the Biggs-Smith graph, as originally given in Biggs & Smith [10]. Start with four 17-gons with vertices $a_0, \ldots, a_{16}$, $b_0, \ldots, b_{16}$, $c_0, \ldots, c_{16}$, $d_0, \ldots, d_{16}$ and edges $a_i a_{i+1}$, $b_i b_{i+2}$, $c_i c_{i+4}$, $d_i d_{i+8}$ (all subscripts modulo 17). Then take seventeen copies of the “H”-configuration below:

Figure 11.8: “H”-configuration

The vertices $a_i, b_i, c_i, d_i$ are identified with those in the four 17-gons. Then the graph below is obtained:

Figure 11.9: The Biggs-Smith graph

Similar constructions can be given for the Coxeter graph (using three heptagons and seven copies of a “Y”-configuration), and for the Petersen graph (using two pentagons and five copies of K2). These can be found in Biggs & Smith [10] or Holton & Sheehan [29]. Biggs [6] investigated several properties of these three remarkable graphs, such as vertex- and edge-colourings, embeddings in orientable surfaces and spectral properties.

In 1986, Biggs, Boshier & Shawe-Taylor [9] extended the classification to determine all cubic distance-regular graphs. They showed that there is only one cubic graph that is distance-regular but not distance-transitive, this being Tutte's (3,12)-cage (see [37]).

11.3 Tetravalent Distance-Transitive Graphs

Using the results of section 10.3, the maximum diameter of a tetravalent distance- transitive graph is 29. Applying Smith’s program to all tridiagonal matrices with the appropriate column types, and analysing which feasible matrices are realisable, yields the fifteen graphs listed below. This was done by Smith: see [33], [35], [36]. Later, an algebraic proof of this classification was obtained: see [11].

Theorem 11.3.1 (Smith 1974) The only tetravalent distance-transitive graphs are the fifteen listed below. 

1. The complete graph K5
Number of vertices: 5
Diameter: 1
Girth: 3
Automorphism group: S5
Primitive
Intersection array:
    ∗ 1
    0 3
    4 ∗
(See figure 8.1)

2. The complete bipartite graph K4,4
Number of vertices: 8
Diameter: 2
Girth: 4
Automorphism group: S4 Wr Z2
Bipartite and antipodal
Intersection array:
    ∗ 1 4
    0 0 0
    4 3 ∗

Figure 11.10: The complete bipartite graph K4,4

3. The octahedron (also known as: the Johnson graph J(4,2,1), the line graph of K4)
Number of vertices: 6
Diameter: 2
Girth: 3
Automorphism group: S4 × Z2
Antipodal
Intersection array:
    ∗ 1 4
    0 2 0
    4 1 ∗
(See figure 2.6)

4. L(K3,3): the line graph of K3,3 (also known as: the Hamming graph H(2,3))
Number of vertices: 9
Diameter: 2
Girth: 3
Automorphism group: S3 Wr Z2
Primitive
Intersection array:
    ∗ 1 2
    0 1 2
    4 2 ∗
(See figure 5.1)

5. The line graph of the Petersen graph, L(O3)
Number of vertices: 15
Diameter: 3
Girth: 3
Automorphism group: S5
Antipodal
Intersection array:
    ∗ 1 1 4
    0 1 2 0
    4 2 1 ∗

Figure 11.11: The line graph of the Petersen graph, L(O3)

6. The line graph of the Heawood graph, L(H)
Number of vertices: 21
Diameter: 3
Girth: 3
Automorphism group: PGL(2,7)
Primitive
Intersection array:
    ∗ 1 1 2
    0 1 1 2
    4 2 2 ∗

Figure 11.12: The line graph of the Heawood graph, L(H)

7. K5,5 minus a perfect matching: see [36].
Number of vertices: 10
Diameter: 3
Girth: 4
Automorphism group: S5 × Z2
Bipartite and antipodal
Intersection array:
    ∗ 1 3 4
    0 0 0 0
    4 3 1 ∗

Figure 11.13: K5,5 minus a perfect matching

8. The distance-three graph of the Heawood graph H: see [36].
Number of vertices: 14
Diameter: 3
Girth: 4
Bipartite
Intersection array:
    ∗ 1 2 4
    0 0 0 0
    4 3 2 ∗
This graph, denoted by H3, is constructed from the Heawood graph by taking the same 14 vertices as H, and joining x, y in H3 if d(x,y) = 3 in H.

Figure 11.14: H3: the distance-three graph of H
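The construction of H3 is easy to verify computationally. The sketch below builds the Heawood graph from its standard LCF description [5,-5]^7 (a 14-cycle with alternating chords, an encoding not taken from the text), computes shortest-path distances by breadth-first search, and joins the pairs at distance 3; the result should be 4-regular on the same 14 vertices.

```python
# Build the Heawood graph H via its standard LCF description
# [5,-5]^7, then form the distance-three graph H3.
from collections import deque

n = 14
adj = {v: set() for v in range(n)}
for v in range(n):
    adj[v].add((v + 1) % n)                     # the 14-cycle
    adj[(v + 1) % n].add(v)
    w = (v + (5 if v % 2 == 0 else -5)) % n     # alternating chords
    adj[v].add(w)
    adj[w].add(v)

def distances(src):
    # Breadth-first search distances from src.
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

# H3: same vertices, joined exactly when their distance in H is 3.
h3 = {v: {w for w, d in distances(v).items() if d == 3} for v in range(n)}
print(all(len(nbrs) == 4 for nbrs in h3.values()))   # True: H3 is tetravalent
```

The valency 4 agrees with the intersection array of the Heawood graph: the number of vertices at distance 3 from a fixed vertex is 3·2·2/(1·1·3) = 4.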

9. The (4,6)-cage
Number of vertices: 26
Diameter: 3
Girth: 6
Bipartite
Intersection array:
    ∗ 1 1 4
    0 0 0 0
    4 3 3 ∗

Figure 11.15: The (4,6)-cage

10. The odd graph O4 (see [21])
Number of vertices: 35
Diameter: 3
Girth: 6
Automorphism group: S7
Primitive
Intersection array:
    ∗ 1 1 2
    0 0 0 2
    4 3 3 ∗

11. The line graph of Tutte's 8-cage, L(T)
Number of vertices: 45
Diameter: 4
Girth: 3
Automorphism group: PGL(2,9)
Primitive
Intersection array:
    ∗ 1 1 1 2
    0 1 1 1 2
    4 2 2 2 ∗

Figure 11.16: The line graph of Tutte’s 8-cage, L(T)

12. The 4-cube Q4
Number of vertices: 16
Diameter: 4
Girth: 4
Automorphism group: Z2 Wr S4
Bipartite and antipodal
Intersection array:
    ∗ 1 2 3 4
    0 0 0 0 0
    4 3 2 1 ∗
(See figure 5.2)

13. The 4-fold covering of K4,4 (see [36])
Number of vertices: 32
Diameter: 4
Girth: 6
Bipartite and antipodal
Intersection array:
    ∗ 1 1 3 4
    0 0 0 0 0
    4 3 3 1 ∗

14. The (4,12)-cage
Number of vertices: 728
Diameter: 6
Girth: 12
Bipartite
Intersection array:
    ∗ 1 1 1 1 1 4
    0 0 0 0 0 0 0
    4 3 3 3 3 3 ∗

15. D(O4): the bipartite double of O4
Number of vertices: 70
Diameter: 7
Girth: 6
Automorphism group: S7 × Z2
Bipartite and antipodal
Intersection array:
    ∗ 1 1 2 2 3 3 4
    0 0 0 0 0 0 0 0
    4 3 3 2 2 1 1 ∗

Note that to calculate the automorphism groups of the graphs L(O3), L(H) and L(T), we can use Theorem 9.2.3, as we know (from the previous section) the automorphism groups of O3, H and T.

Bibliography

[1] G.M. Adel’son-Vel’skii, B. Ju. Veisfeiler, A.A. Leman & I.A. Faradzev (1969), Example of a graph without a transitive automorphism group, Soviet Math. Dokl., 10, 440-441. Translated from the Russian by M.L. Glasser.

[2] R.B.J.T. Allenby (1991), Rings, Fields and Groups (2nd Edition), Edward Arnold.

[3] N.L. Biggs (1971), Intersection matrices for linear graphs, in Combinatorial Mathematics and its Applications (ed. D.J.A. Welsh), Academic Press.

[4] N.L. Biggs (1971), Finite Groups of Automorphisms, London Mathematical Society Lecture Notes Series (6), Cambridge University Press.

[5] N.L. Biggs (1972), An edge-colouring problem, Amer. Math. Monthly 79, 1018-1020.

[6] N.L. Biggs (1973), Three remarkable graphs, Canad. J. Math. 25, 397-411.

[7] N.L. Biggs (1974), The symmetry of line graphs, Utilitas Math. 5, 113-121.

[8] N.L. Biggs (1993), Algebraic Graph Theory (2nd Edition), Cambridge University Press.

[9] N.L. Biggs, A.G. Boshier & J. Shawe-Taylor (1986), Cubic distance-regular graphs, J. London Math. Soc. (2) 33, 385-394.

[10] N.L. Biggs & D.H. Smith (1971), On trivalent graphs, Bull. London Math. Soc. 3, 155-158.

[11] A.E. Brouwer, A.M. Cohen & A. Neumaier (1989), Distance-Regular Graphs, Springer-Verlag.

[12] P.J. Cameron (1982), There are only finitely many finite distance-transitive graphs of given valency greater than two, Combinatorica 2, 9-13.

[13] P.J. Cameron (1999), Permutation Groups, London Mathematical Society Student Texts (45), Cambridge University Press.


[14] P.J. Cameron, C.E. Praeger, J. Saxl & G.M. Seitz (1983), On the Sims con- jecture and distance-transitive graphs, Bull. London Math. Soc. 15, 499-506.

[15] B-L. Chen & K-W. Lih (1987), Hamiltonian uniform subset graphs, J. Combin. Theory (B) 42, 257-263.

[16] R.M. Damerell (1973), On Moore graphs, Proc. Cambridge Philos. Soc. 74, 227-236.

[17] J.D. Dixon & B. Mortimer (1996), Permutation Groups, Graduate Texts in Mathematics (163), Springer.

[18] J.A. Gallian (1998), Contemporary Abstract Algebra (4th Edition), Houghton Mifflin.

[19] A. Gardiner (1973), Arc transitivity in graphs, Quart. J. Math. Oxford (2) 24, 399-407.

[20] A. Gardiner (1975), On trivalent graphs, J. London Math. Soc. (2) 10, 507-512.

[21] A. Gewirtz (1969), Graphs with maximal even girth, Canad. J. Math. 21, 915-934.

[22] C.D. Godsil & G.F. Royle (2001), Algebraic Graph Theory, Graduate Texts in Mathematics (207), Springer.

[23] I.P. Goulden & D.M. Jackson (1983), Combinatorial Enumeration, Wiley-Interscience Series in Discrete Mathematics, John Wiley & Sons.

[24] F. Harary (1969), Graph Theory, Addison-Wesley.

[25] R.L. Hemminger (1972), On Whitney’s line graph theorem, Amer. Math. Monthly 79, 374-378.

[26] R.L. Hemminger (1975), On the automorphism group of a line graph, Congr. Numer. 14 (Proceedings of the Sixth Southeastern Conference on Combinatorics, Graph Theory and Computing), 415-418.

[27] D.G. Higman (1967), Intersection matrices for finite permutation groups, J. Algebra 6, 22-42.

[28] A.J. Hoffman & R.R. Singleton (1960), On Moore graphs with diameters 2 and 3, IBM J. Res. Develop. 4, 497-504.

[29] D.A. Holton & J. Sheehan (1993), The Petersen Graph, Australian Mathematical Society Lecture Series (7), Cambridge University Press.

[30] P. Rowlinson (1997), Linear Algebra, in Graph Connections: Relationships Between Graph Theory and other Areas of Mathematics, (eds. L.W. Beineke & R.J. Wilson), Oxford Lecture Series in Mathematics and its Applications (5), Oxford University Press.

[31] A.B. Slomson (1991), Introduction to Combinatorics, Chapman & Hall.

[32] D.H. Smith (1971), Primitive and imprimitive graphs, Quart. J. Math. Oxford (2) 22, 551-557.

[33] D.H. Smith (1973), On tetravalent graphs, J. London Math. Soc. (2) 6, 659-662.

[34] D.H. Smith (1974), Distance-transitive graphs, in Combinatorics: Proceed- ings of the British Combinatorial Conference, Aberystwyth 1973, (eds. T.P. McDonough & V.C. Mavron), London Mathematical Society Lecture Notes Series (13), Cambridge University Press.

[35] D.H. Smith (1974), Distance-transitive graphs of valency four, J. London Math. Soc. (2) 8, 377-384.

[36] D.H. Smith (1974), On bipartite tetravalent graphs, Discrete Math. 10, 167-172.

[37] W.T. Tutte (1947), A family of cubical graphs, Proc. Cambridge Philos. Soc. 45, 459-474.

[38] W.T. Tutte (1966), Connectivity in Graphs, Mathematical Expositions (15), University of Toronto Press.

[39] R. Weiss (1985), On distance-transitive graphs, Bull. London Math. Soc. 17, 253-256.

[40] R.J. Wilson (1996), Introduction to Graph Theory (4th Edition), Prentice Hall.