A brief introduction to Combinatorial Games

Asaf Ferber ∗

September 26, 2019

In these notes we give a brief introduction to combinatorial games. I used these notes a few years ago for a mini-course that I gave to undergrads at MIT during the winter break (three lectures). Feel free to use/distribute them, and if you find errors or improve the presentation I would love to get an updated TEX file! (asaff@uci.edu).

1 Introduction

How can one analyze a game? Any attempt to analyze games like Chess, Go, Checkers, Tic-Tac-Toe, Hex, etc. leads to the same conclusion: there are enormously many cases to analyze and it seems quite hopeless. The only positive side is that this keeps the game interesting for a competition. As opposed to traditional Game Theory, which focuses on incomplete information games (like Poker), here we consider perfect information games, which are more similar in nature to Chess and Go. The term "Combinatorial Game" means a skill game, with no chance moves, in which each player has complete information about the current and past positions and can, in principle, analyze all possible future positions. The payoff function has 3 values: win, draw and loss. The first question one should ask: if these games are all deterministic, why can't we just use a computer to analyze them? The simple answer is that even though in theory you can, in practice there are just too many possibilities for any computer to analyze in a reasonable time. For example, consider a 3-dimensional version of the kids' game Tic-Tac-Toe, played on a board of size 5 × 5 × 5. This game has roughly 3^125 positions (each of the 125 cells can be marked "Player I", "Player II" or "Unoccupied"), which, even though it is only a finite number, is so large that we would be afraid to meet it in a dark street at night. To summarize: traditional Game Theory is not useful for us, and a brute force computer search does not help even for very simple games such as the 3-dimensional Tic-Tac-Toe mentioned above. What else can we do?

2 Examples and basic techniques

In this section we discuss some examples and illustrate some useful techniques in analyzing certain games.

∗Department of Mathematics, UCI. Email: asaff@uci.edu.

2.1 Solitaire army

The common feature of Solitaire puzzles is that each one is played with a board and soldiers; the board contains a number of holes, each of which can hold one soldier. Each move consists of a jump by one soldier over one or more other soldiers, and the soldiers jumped over are removed from the board. Each move therefore reduces the number of soldiers on the board. The Solitaire army is played on the infinite plane and the holes are the lattice points of Z × Z. The permitted moves are to jump horizontally or vertically. Suppose we start with all soldiers in the lower half plane. How many soldiers are needed to send one soldier forward 1, 2, 3, 4 or 5 holes into the upper half plane? Obviously, two soldiers are enough to send one soldier 1 step forward. Indeed, starting with the two soldiers standing consecutively on the same vertical line, the "lower" one can jump over the other and move 1 step forward. It is also relatively easy to see that 4 soldiers are enough to send one soldier 2 holes forward (try it!). Less obvious is that 8 soldiers are enough to push one soldier 3 holes forward and 20 are enough to push a soldier 4 holes (exercise!). Before reading the next theorem, try to test your intuition by guessing the answer to the following question:

Question 2.1. How many soldiers are needed to send one of them 5 holes forwards?

To answer this question, probably the first thing I would do is to look at the On-Line Encyclopedia of Integer Sequences at https://oeis.org/. Based on that, the first search result which does not start at 1 suggests the answer 52. It would not be very surprising if the answer were not 52 but some other, relatively small, number. But it is very surprising to learn that:

Theorem 2.2 (Conway, 1961). Impossible!

Proof. The proof of this theorem pioneered the use of potential functions in combinatorics. The basic plan is quite simple: we wish to assign a weight to each hole with the condition that if A, B, C are three consecutive holes on a vertical/horizontal line, with ω(A), ω(B), ω(C) being their weights, then

ω(A) + ω(B) ≥ ω(C).    (1)

Suppose we have such a "weight" function ω : Z × Z → R. We evaluate a position in the game by the sum of the weights of all holes occupied by soldiers; this sum is called the value of the position. Note that by (1), no legal move can increase the value (prove it formally!). Therefore, such a weight function shows that no matter how we play, we can never reach a position of larger value than the initial position. The desired punch line will be to find such a function ω for which the position having a soldier 5 steps forward has larger value than any initial position.

Sounds easy? Let's do it! Let ω be the positive number satisfying ω^2 + ω = 1 (the golden section, ω = (√5 − 1)/2). Now, assume that we have managed to find a configuration of finitely many soldiers from which one soldier can be moved 5 holes forward into the upper half plane. Assign weight 1 to this target hole in the upper half plane, and extend the weights as follows: on the vertical line through the target, going downwards, assign the weights 1, ω, ω^2, . . .. For each i ≥ 0, extend the horizontal line carrying ω^i to the left and to the right symmetrically by assigning the weights . . . , ω^{i+2}, ω^{i+1}, ω^i, ω^{i+1}, ω^{i+2}, . . . (the exponent grows by 1 with each step away from the central column). Now, a simple calculation gives us that the value of the top line of the lower half plane is

ω^5 + 2ω^6 + 2ω^7 + · · · = ω^2.

Similarly, the line j rows below the top one has value ω^{2+j}, and therefore the value of the whole lower half plane is ∑_{i=2}^∞ ω^i = ω^2 · 1/(1 − ω) = 1. In particular, since every hole in the lower half plane has strictly positive weight, any finite set of soldiers has value strictly smaller than 1, while the target hole alone has weight 1. Hence no finite number of soldiers will suffice to send a man 5 holes forward! This completes the proof.
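A quick numerical sanity check of the two identities used above (a sketch; the truncation level N is an arbitrary choice of mine):

```python
# Numerical sanity check for Conway's weighting argument.
# omega is the positive root of x^2 + x = 1, i.e. (sqrt(5) - 1) / 2.
omega = (5 ** 0.5 - 1) / 2
assert abs(omega ** 2 + omega - 1) < 1e-12

N = 200  # truncation level for the infinite sums (large enough for double precision)

# Value of the top row of the lower half plane: omega^5 + 2*(omega^6 + omega^7 + ...)
top_row = omega ** 5 + 2 * sum(omega ** j for j in range(6, N))
print(top_row, omega ** 2)          # both ~ 0.381966...

# Value of the whole lower half plane: sum_{i >= 2} omega^i
half_plane = sum(omega ** i for i in range(2, N))
print(half_plane)                   # ~ 1.0, the weight of the target hole alone
```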

2.2 Strategy Stealing

Strategy stealing is an existence argument which is often used to determine the winner in a game between two perfect players. Unfortunately, it gives no clue as to what a winning/drawing strategy should look like. The strategy stealing argument is based on a symmetry argument, and therefore it works for symmetric games (whatever that means...). The class of games we are interested in in these notes are the so-called Positional Games. In a positional game there are two players, I and II, alternating turns in claiming previously unclaimed elements of some board V, with I going first. There is also a predetermined family of subsets of V, denoted by F, which is considered as the collection of winning sets (the pair (V, F) is considered as the hypergraph of the game). The winner of the game is the first player to fully occupy all the elements of one of the winning sets in F. If there is no winner by the time all the elements of V have been claimed, the game is declared a draw. A complicated definition? Not so... Let us illustrate the definition with a simple example, namely the traditional children's game Tic-Tac-Toe. This game is played on a 3 × 3 board, where the players alternate turns in marking O/X in previously unmarked spots. The winner is the first player to fully mark some combinatorial line (that is, a horizontal, vertical, or diagonal line). This game can easily be described as a positional game! (Do it; see also the sketch after the proof below.) It sounds quite intuitive that the first player to move always has an advantage, and this is proven formally by the strategy stealing argument:

Theorem 2.3 (Strategy stealing). Let (V, F) be any hypergraph of a positional game. Then, the first player can force at least a draw.

Sketch of proof. Assume, towards a contradiction, that the first player cannot force a draw; since the game is finite, this means the second player has a winning strategy. The basic idea is to analyze what happens if the first player steals this strategy and plays accordingly. A winning strategy is a list of instructions telling the player how to answer each move of the opponent. Now, Player I can play as follows: her first move is arbitrary. From then on she pretends to be the second player (ignoring her own first move) and follows the stolen strategy. If at some point the strategy tells her to claim an element she has already claimed, she simply plays another arbitrary move instead. The crucial observation here is that an extra move cannot harm a player in a positional game, so the first player wins every play of the game, contradicting the assumption that the second player's strategy is winning.
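To make the definitions concrete, here is a minimal sketch (the function and variable names are my own) that encodes 3 × 3 Tic-Tac-Toe as a positional game hypergraph and solves it by exhaustive search; as explained in the introduction, this brute-force approach is feasible only for tiny boards.

```python
from functools import lru_cache

# Board: cells 0..8 of the 3x3 grid. Winning sets: the 8 combinatorial lines.
LINES = frozenset(
    frozenset(line) for line in
    [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
     (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
     (0, 4, 8), (2, 4, 6)]                 # diagonals
)

def solve(board_size, winning_sets):
    """Return +1 / 0 / -1: win for Player I / draw / win for Player II,
    assuming both players play the positional game (V, F) perfectly."""

    @lru_cache(maxsize=None)
    def value(p1, p2):
        # p1, p2: frozensets of board elements claimed by Players I and II.
        if any(w <= p1 for w in winning_sets):
            return 1
        if any(w <= p2 for w in winning_sets):
            return -1
        free = [v for v in range(board_size) if v not in p1 and v not in p2]
        if not free:
            return 0                         # board full, nobody won: a draw
        first_to_move = len(p1) == len(p2)   # Player I moves first
        outcomes = [value(p1 | {v}, p2) if first_to_move else value(p1, p2 | {v})
                    for v in free]
        return max(outcomes) if first_to_move else min(outcomes)

    return value(frozenset(), frozenset())

print(solve(9, LINES))  # prints 0: with perfect play, 3x3 Tic-Tac-Toe is a draw
```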

Let us now consider the following example. Consider a game with board V = E(K_n) (that is, the board elements are the edges of the complete graph on n vertices). Suppose that the winning sets consist of all subsets of edges that form a copy of K_k in K_n. We refer to this game as the k-clique game played on E(K_n). A famous theorem of Ramsey asserts:

Theorem 2.4 (Ramsey's theorem). For every k there exists R(k) for which the following holds: for every graph G on n ≥ R(k) vertices, either G contains a copy of K_k or its complement (that is, the subgraph of K_n consisting of all the non-edges of G) contains a copy of K_k.

Note that, by Ramsey's theorem, if n ≥ R(k) then there are no drawing positions in the k-clique game played on E(K_n)! Therefore, by the strategy stealing argument the first player can force at least a draw, and since no drawing position exists we conclude that she has a winning strategy! (For a tiny illustration of the "no drawing positions" statement, see the sketch below.)
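Here is a tiny brute-force illustration (a sketch, with my own naming) of the "no drawing position" statement in the smallest interesting case k = 3 and n = 6 (recall that R(3) = 6): every red/blue coloring of the 15 edges of K_6 contains a monochromatic triangle, so in particular no final position of the 3-clique game on E(K_6) is a draw.

```python
from itertools import combinations

n, k = 6, 3
edges = list(combinations(range(n), 2))                       # the 15 edges of K_6
triangles = [set(combinations(t, 2)) for t in combinations(range(n), k)]

def has_mono_triangle(red_mask):
    red = {e for i, e in enumerate(edges) if red_mask >> i & 1}
    blue = set(edges) - red
    return any(t <= red or t <= blue for t in triangles)

# Check all 2^15 red/blue colorings (a superset of the actual end positions).
print(all(has_mono_triangle(m) for m in range(1 << len(edges))))  # True
```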

The following simple-looking problems are embarrassingly open:

Problem 2.5. Find an explicit winning strategy for the first player in the k-clique game for every k ≥ 5.

Problem 2.6. Determine whether the k-clique game played on the countably infinite board, namely E(K_ℕ) (the edge set of the complete graph on the natural numbers), is a draw or a first player's win.

2.3 The game of HEX

The game of HEX was invented in the early 1940s by Piet Hein and has been very popular ever since. The board of the game is an n × n rhombus of hexagons and there are two players: Red (who starts the game) and Blue. Each player is assigned two opposite sides of the board, and the players alternately claim previously unclaimed hexagons. A player wins if her hexagons connect her two opposite sides of the board. In the late 1940s John Nash proved that HEX is a first player's win. The biggest open problem remaining for this game is to find an explicit winning strategy. In order to prove Nash's result, we show the following:

Theorem 2.7. There is no drawing position in HEX.

Clearly, as extra marks cannot harm a player, the above theorem together with the strategy stealing argument gives the desired result (note that this game does not quite fit the definition of a positional game from the previous section; show that one can still use the strategy stealing argument). In order to prove the theorem, we need the following simple lemma:

Lemma 2.8. A finite graph whose vertices have degree at most two is the union of disjoint subgraphs, each of which is either (i) an isolated vertex, (ii) a simple cycle, (iii) a simple path.

Proof. Draw a couple of pictures for yourself to understand the statement, think about it for two minutes, and then prove this easy claim as an exercise.

We represent the board of HEX as a graph G = (V, E). Each corner of a hexagon is a vertex of G, and each side of a hexagon is an edge. The two board sides belonging to Red are regarded as two large Red faces, and Blue's two sides as two large Blue faces. We add four additional vertices u_1, u_2, u_3, u_4, one attached to each of the four corners of the board, together with the four edges e_1, e_2, e_3, e_4 connecting them to the core graph. Note that each e_i lies between a Red face and a Blue face. We show that, at the end of the game, two of the u_i's are connected by a simple path in a suitable subgraph, and the existence of a winner will follow.

Theorem 2.9. In every final position of the game there is either a Red path connecting the two Red regions or a Blue path connecting the two Blue regions.

Proof. First, construct a subgraph G′ of G with the same vertex set but only a subset of the edges: an edge e is added to E′ if and only if it lies between a Red face and a Blue face. In particular, the e_i's are edges of E′, so the nodes u_i all have degree 1 in G′. Now consider a vertex of the core graph and the three faces around it. If all three faces have the same color, the vertex is isolated in G′ and has degree 0. Otherwise, two of the faces have one color and the third has the other color, and the vertex has degree exactly 2 in G′. Therefore, all degrees in the core graph are either 0 or 2. Applying the previous lemma to G′, we obtain that G′ is a union of paths, cycles and isolated vertices. The only candidates for endpoints of paths are the u_i's, as all other vertices have degree 0 or 2. Therefore, there must exist a path in G′ with two distinct u_i's as its endpoints. Walking along such a path, the hexagons on one of its sides are all of the same color, and this monochromatic chain connects the corresponding two regions of the board (check!).
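As a small sanity check (certainly not a replacement for the proof), Theorem 2.9 can be verified by brute force on tiny boards. The sketch below uses one standard coordinate convention: Red connects top to bottom, Blue connects left to right, and cell (r, c) neighbours (r±1, c), (r, c±1), (r−1, c+1), (r+1, c−1); these conventions and names are my own choices.

```python
from itertools import product

NEIGHBOURS = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, 1), (1, -1)]

def connected(cells, sources, targets):
    """Depth-first search inside `cells`: is some source connected to some target?"""
    stack = [c for c in sources if c in cells]
    seen = set(stack)
    while stack:
        r, c = stack.pop()
        if (r, c) in targets:
            return True
        for dr, dc in NEIGHBOURS:
            nxt = (r + dr, c + dc)
            if nxt in cells and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False

def has_winner(coloring, n):
    red = {cell for cell, col in coloring.items() if col == 'R'}
    blue = {cell for cell, col in coloring.items() if col == 'B'}
    red_wins = connected(red, {(0, c) for c in range(n)}, {(n - 1, c) for c in range(n)})
    blue_wins = connected(blue, {(r, 0) for r in range(n)}, {(r, n - 1) for r in range(n)})
    return red_wins or blue_wins

n = 3
cells = [(r, c) for r in range(n) for c in range(n)]
# Every fully colored 3x3 HEX board has a winner (2^9 = 512 cases).
print(all(has_winner(dict(zip(cells, cols)), n) for cols in product('RB', repeat=n * n)))
```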

2.4 Shannon's switching game

This game is played on the edge set of a multigraph G, that is, a graph in which multiple edges between the same pair of vertices are allowed. There are two players, Maker and Breaker, taking turns in claiming previously unclaimed elements of E(G), with Breaker going first. Maker's aim is to build a spanning tree; Breaker's goal is to prevent Maker from doing so. In 1964 Lehman solved this problem with the following simple and super elegant criterion.

Theorem 2.10 (Lehman). Maker (as a second player) has a winning strategy if and only if G contains two edge disjoint spanning trees.

Proof. If Maker, as the second player, has a winning strategy, then by a strategy stealing argument the first player can build a spanning tree as well; playing the two strategies against each other produces two edge-disjoint spanning trees in G. The other implication goes as follows. Suppose that G contains two edge-disjoint spanning trees. Whenever Breaker claims an edge that disconnects one of the two trees into two parts, Maker claims an edge of the other tree connecting these two parts. Contracting the claimed edge (that is, identifying its two endpoints), we obtain a multigraph with one vertex fewer which again contains two edge-disjoint spanning trees. Therefore, we can complete the proof by a simple induction. (A sketch of this strategy in code is given below.)
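The induction above translates directly into code. The following sketch (plain Python; the class and function names are my own, and edges are assumed to be given as pairs (u, v) in a consistent orientation) maintains the two edge-disjoint spanning trees together with a union-find structure that records the contractions.

```python
class DSU:
    """Union-find over the original vertices; classes play the role of contracted vertices."""
    def __init__(self, vertices):
        self.parent = {v: v for v in vertices}
    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]   # path halving
            v = self.parent[v]
        return v
    def union(self, u, v):
        self.parent[self.find(u)] = self.find(v)

def components(edge_set, dsu, vertices):
    """Connected components (dict: representative -> label) of the contracted graph on `edge_set`."""
    adj = {dsu.find(v): set() for v in vertices}
    for u, v in edge_set:
        ru, rv = dsu.find(u), dsu.find(v)
        adj[ru].add(rv)
        adj[rv].add(ru)
    comp, label = {}, 0
    for start in adj:
        if start in comp:
            continue
        comp[start] = label
        stack = [start]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in comp:
                    comp[y] = label
                    stack.append(y)
        label += 1
    return comp

class MakerStrategy:
    """Maker's strategy in Shannon's switching game, given two edge-disjoint spanning trees."""
    def __init__(self, vertices, tree1, tree2):
        self.vertices = list(vertices)
        self.trees = [set(tree1), set(tree2)]
        self.dsu = DSU(vertices)
        self.claimed = set()

    def respond(self, breaker_edge):
        """Maker's answer to Breaker claiming `breaker_edge` (None means: any free edge will do)."""
        for a in (0, 1):
            if breaker_edge in self.trees[a]:
                b = 1 - a
                self.trees[a].discard(breaker_edge)
                # Removing the edge splits tree `a` into two parts of the contracted graph.
                comp = components(self.trees[a], self.dsu, self.vertices)
                reconnecting = None
                for f in self.trees[b]:
                    if comp[self.dsu.find(f[0])] != comp[self.dsu.find(f[1])]:
                        reconnecting = f        # f reconnects the two parts
                        break
                # Such an f exists because tree `b` spans the contracted graph.
                self.trees[b].discard(reconnecting)
                self.dsu.union(*reconnecting)    # contract Maker's newly claimed edge
                self.claimed.add(reconnecting)
                return reconnecting
        return None
```

At the end of the game the edges collected in `claimed` (plus arbitrary answers to Breaker moves outside the trees) connect all the vertices, exactly as in the induction.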

2.5 Pairing strategy

Suppose that H = (V, F) is the hypergraph of a game. Moreover, suppose that there exists a collection P of pairwise disjoint (unordered) pairs of vertices such that for each E ∈ F there exists X ∈ P with X ⊆ E. In such a scenario, whenever Player I claims an element of some pair X ∈ P, Player II responds by claiming the other element of X (and otherwise plays arbitrarily). Clearly, such a strategy is a drawing strategy for Player II; it is known as a pairing strategy. The pairing strategy is the simplest possible way to force a draw and is one of the most common techniques in the subject. It applies whenever no winning set avoids all the pairs. Note that if the winning sets are "almost disjoint" (whatever that means...), then the question of whether a pairing strategy exists is in fact a standard graph-theoretic matching problem. That is, a pairing strategy exists if and only if one can find a family of disjoint 2-element representatives of the family F. To this end we can use the so-called Hall's Marriage Theorem. The following theorem was first published by Hales and Jewett in 1963.

Theorem 2.11 (Pairing Strategy Draw). Suppose that for every subfamily H ⊆ F we have

|⋃_{A∈H} A| ≥ 2|H|.

Then either player can force a Pairing Strategy Draw.

To prove this theorem, recall Hall’s theorem:

Theorem 2.12 (Hall’s theorem). Let G = (A ∪ B,E) be a bipartite graph. Then, G contains a matching saturating A (that is, a matching where each vertex in A is being matched to some vertex in B) if and only if the following holds:

For all X ⊆ A we have |N(X)| ≥ |X|.

For completeness we prove Theorem 2.11, as it contains a small and neat trick that is useful to know.

Proof. Define a bipartite graph G = (A ∪ B, E) with A = F ⊎ F (that is, two disjoint copies of the family F) and B = V. For a copy of a set F ∈ F and a vertex v ∈ V, we add the edge Fv to E if and only if v ∈ F. The crucial observation is that if G contains a matching saturating A, then the two vertices matched to the two copies of each F ∈ F form the required collection of disjoint pairs (why?). Now all that is left to do is to check that Hall's condition is satisfied (exercise!).
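A computational rendering of this trick (a sketch; the matching routine is the standard augmenting-path algorithm and all names are mine): duplicate every winning set, match set-copies to vertices, and read off the pairs. As an illustration, it finds a pairing for the 12 lines of the 5 × 5 Tic-Tac-Toe board (rows, columns and the two main diagonals).

```python
def find_pairing(winning_sets):
    """Return disjoint pairs, one inside each winning set, or None if no pairing exists."""
    sets = [frozenset(s) for s in winning_sets]
    left = [(i, c) for i in range(len(sets)) for c in (0, 1)]   # two copies of each set
    match_of_vertex = {}                # vertex -> set-copy currently matched to it

    def augment(u, visited):
        for v in sets[u[0]]:            # neighbours of a set-copy are its vertices
            if v in visited:
                continue
            visited.add(v)
            if v not in match_of_vertex or augment(match_of_vertex[v], visited):
                match_of_vertex[v] = u
                return True
        return False

    for u in left:
        if not augment(u, set()):
            return None                  # Hall's condition fails: no pairing strategy
    pair_of = {}
    for v, (i, _) in match_of_vertex.items():
        pair_of.setdefault(i, []).append(v)
    return [tuple(p) for p in pair_of.values()]

# Example: the 12 lines of 5x5 Tic-Tac-Toe (cells 0..24) admit a pairing strategy draw.
lines = ([range(5 * r, 5 * r + 5) for r in range(5)] +          # rows
         [range(c, 25, 5) for c in range(5)] +                  # columns
         [[0, 6, 12, 18, 24], [4, 8, 12, 16, 20]])              # diagonals
print(find_pairing(lines))
```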

The following theorem is also quite useful and its proof is left as an exercise:

Theorem 2.13 (Degree Condition for a Pairing Strategy). Let F be an n-uniform hypergraph and suppose that the maximum degree of F is at most n/2. Then either player can force a Pairing Strategy Draw.

Proof. Hint: Use Hall’s theorem.

2.6 Erdős–Selfridge – the method of conditional expectations

If F is a hypergraph and A ∈ F, then the F-neighborhood of A is

F_A = {B ∈ F : B ∩ A ≠ ∅}.

The Maximum Neighborhood Size of F is defined as the maximum of |F_A| over all A ∈ F. The following conjecture is the most important open problem in this area:

Problem 2.14 (Neighborhood Conjecture). Assume that F is an n-uniform hypergraph, and assume that its maximum neighborhood size is less than 2^{n−1}. Is it true that, playing on F, the second player has a Strong Draw? (A strong draw means that the second player can prevent the first player from ever fully occupying a winning set.)

If true, the 2^{n−1} bound would be best possible (can you think of an example?). It is quite remarkable that no bound better than n/2 (from the exercise in the previous section) is known! The pioneering application of the potential technique to 2-player games is a theorem of Erdős and Selfridge from 1973. The impact of their result completely changed the subject: it shifted the emphasis from Ramsey Theory and Matching Theory to the Probabilistic Method. Unlike the conditions above, which are local, they gave a global criterion for a strong draw. Their result is the following:

Theorem 2.15 (Erdős–Selfridge). Let F be an n-uniform hypergraph, and assume that

|F| + MaxDeg(F) < 2^n.

Then, playing on F the second player can force a strong draw.

Remark 2.16. Beck managed to extend this theorem to the biased case, that is, to the setting where Player I claims a elements per turn and Player II claims b elements per turn.

Remark 2.17. Note that if the second player can force a draw then so can the first.

Proof. Let F = {A_1, . . . , A_M}. Assume that at some round of the game the first player has already occupied the elements x_1, . . . , x_i and the second player has occupied y_1, . . . , y_{i−1}. Note that by choosing an element y_i, the second player "turns off" all the hyperedges containing y_i; we refer to those sets as "dead sets". The winning sets which are not "dead" are referred to as "survivors", and each of them still has a chance to be fully occupied by the first player. We wish to define a potential function measuring the "danger" of each survivor, and thereby the "danger" of the whole position. To do so we define

D_i = ∑_{s∈S_i} 2^{−u_s},

where u_s is the number of unoccupied elements of s and S_i is the set of all survivors at the current round. A natural choice for the second player is thus an element y_i which minimizes the danger function D_{i+1}. How to do so? Suppose y_i and x_{i+1} are the next two moves of the players, and let us measure their effect on the danger function. Clearly,

D_{i+1} ≤ D_i − ∑_{s∈S_i} 2^{−u_s} 1_{y_i∈s} + ∑_{s∈S_i} 2^{−u_s} 1_{x_{i+1}∈s}.

All in all, a natural choice for y_i is an unoccupied element z for which ∑_{s∈S_i} 2^{−u_s} 1_{z∈s} attains its maximum; then the gain caused by x_{i+1} cannot exceed the loss caused by y_i, and we obtain D_{i+1} ≤ D_i. Therefore, the second player can force

D_1 ≥ D_2 ≥ · · · ≥ D_last.

Note that if the first player wins, then there exists an s ∈ S_last with u_s = 0 and therefore D_last ≥ 1. On the other hand, by assumption we have

D_1 = ∑_{s∈S_1} 2^{−n+1} 1_{x_1∈s} + ∑_{s∈S_1} 2^{−n} 1_{x_1∉s} ≤ (|F| + MaxDeg(F)) · 2^{−n} < 1.

This completes the proof.
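The proof is constructive, and the resulting drawing strategy is easy to code. Below is a sketch (the function name and interface are my own) of the second player's move: claim an unoccupied element that maximizes the total danger of the survivors containing it, exactly as in the proof.

```python
def erdos_selfridge_move(board, winning_sets, first_claimed, second_claimed):
    """Second player's move in the Erdos-Selfridge drawing strategy.

    board          -- iterable of all board elements
    winning_sets   -- iterable of sets of board elements
    first_claimed  -- elements already claimed by the first player (a set)
    second_claimed -- elements already claimed by the second player (a set)
    Returns an unoccupied element z maximizing the sum of 2^{-u_s} over the
    surviving sets s containing z, where u_s is the number of unoccupied
    elements of s.
    """
    occupied = first_claimed | second_claimed
    free = [v for v in board if v not in occupied]
    survivors = [set(s) for s in winning_sets if not (set(s) & second_claimed)]

    def danger_through(z):
        return sum(2.0 ** -len(s - occupied) for s in survivors if z in s)

    return max(free, key=danger_through) if free else None
```

Playing this move after every move of the first player keeps the danger D_i non-increasing, as shown above.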

Remark 2.18. Note that if the two players play completely at random, then the expected number of winning sets fully occupied by one of the players is 2^{−n+1}|F|. Now, if this expectation is smaller than 1 then there exists a drawing position (which does not mean that either of the players can actually force such a position!). This explains the beauty of the argument: the proof is a "derandomization" of this observation; it "upgrades" the existence of a drawing position to a drawing strategy! Moreover, note that the condition is tight: the 2^{n−1} root-to-leaf branches of a binary tree with n levels form an n-uniform hypergraph on which the first player can clearly occupy a full branch.

Remark 2.19. A non-uniform version can be stated as follows: if

∑_{A∈F} 2^{−|A|} < 1/2,

then the second player can force a strong draw. (Exercise!)

2.7 Applications

Theorem 2.20. In both the 4^2 and the 8^3 Tic-Tac-Toe games, the second player can force a strong draw.

Proof. In the 4^2 game there are 10 winning sets, each of size 4, and the maximum degree is 3. Since 10 + 3 < 2^4, the Erdős–Selfridge theorem gives the desired result. The 8^3 game is left as an exercise (see also the quick check below).
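Here is a quick script (a sketch; the line-generation routine is my own and should be double-checked against the exercise) that recomputes these parameters for the 4^2 board and also for the 8^3 board.

```python
from itertools import product

def lines(k, d):
    """All winning lines of the k^d Tic-Tac-Toe board."""
    dirs = [v for v in product((-1, 0, 1), repeat=d) if any(v)]
    # Keep one representative of each pair {v, -v}: first nonzero coordinate equal to +1.
    dirs = [v for v in dirs if v[next(i for i, x in enumerate(v) if x)] == 1]
    out = []
    for v in dirs:
        fixed_coords = [i for i in range(d) if v[i] == 0]
        for fixed in product(range(k), repeat=len(fixed_coords)):
            fx = dict(zip(fixed_coords, fixed))
            out.append(frozenset(
                tuple(t if v[i] == 1 else k - 1 - t if v[i] == -1 else fx[i]
                      for i in range(d))
                for t in range(k)))
    return out

for k, d in [(4, 2), (8, 3)]:
    F = lines(k, d)
    deg = {}
    for line in F:
        for cell in line:
            deg[cell] = deg.get(cell, 0) + 1
    print(k, d, len(F), max(deg.values()), len(F) + max(deg.values()) < 2 ** k)
    # expected: 4 2 10 3 True   and   8 3 244 7 True
```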

Theorem 2.21. Playing on the edge set of the complete graph K_n (with n sufficiently large), Maker, as the second player, can build a spanning connected graph.

Proof. Consider the hypergraph whose winning sets are the (edge sets of the) cuts of K_n, that is, all sets of the form S = E(A, V \ A) with ∅ ≠ A ⊊ V. Note that there are exactly n cuts of size n − 1 (those with |A| = 1), while all the other cuts have size at least 2n − 4, and the total number of cuts is at most 2^{n−1}. All in all, for n sufficiently large,

∑_S 2^{−|S|} ≤ n · 2^{−n+1} + 2^{n−1} · 2^{−2n+4} < 1/2.

By Remark 2.19, the second player can therefore prevent the first player from fully occupying any cut. In other words, Maker (as the second player) claims at least one edge in every cut, which means precisely that his graph is connected and spanning. This completes the proof. (A small numerical check appears below.)
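A tiny numerical check of the estimate above (a sketch; it enumerates all cuts explicitly, so keep n small):

```python
from itertools import combinations

def cut_sum(n):
    """Exact value of the sum of 2^{-|S|} over all cuts S of K_n."""
    total = 0.0
    for a in range(1, n // 2 + 1):                 # |A| = a; count each cut once
        for A in combinations(range(n), a):
            if 2 * a == n and 0 not in A:          # avoid double counting A vs V \ A
                continue
            total += 2.0 ** (-a * (n - a))         # |E(A, V \ A)| = a(n - a)
    return total

for n in range(6, 13):
    s = cut_sum(n)
    bound = n * 2.0 ** (-n + 1) + 2.0 ** (n - 1) * 2.0 ** (-2 * n + 4)
    print(n, round(s, 6), round(bound, 6), s < 0.5)    # the last column is always True
```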

Exercise 2.22. Perfect Matching game.

3 Biased Games

Many unbiased Maker-Breaker games are drastically in favor of Maker, and therefore it is natural to give Breaker a bit more power. Formally, let m and b be two natural numbers and let F be a hypergraph. The biased (m, b)-game is the same as the Maker-Breaker game, except that in each turn Maker claims m elements and Breaker claims b. As an illustration, let us consider the triangle game played on E(K_n). As was proven by Chvátal and Erdős in 1978, for b ≤ √(2n) − C (for a suitable constant C) Maker wins, by first accumulating enough edges at a vertex v and then closing a triangle containing v. For b ≥ 2√n the game is Breaker's win: in response to each move uv of Maker, Breaker claims b/2 free edges incident to u and b/2 free edges incident to v, also blocking immediate threats of Maker. The critical bias of this game is still not known exactly; the best bound is due to Balogh and Samotij and stands at (2 − 1/24)√n.

In order to get a feeling for biased games, it is probably best to start with the simplest setting, where all the winning sets are disjoint. This is the box game, introduced by Chvátal and Erdős in 1978. In the game Box(p, q; a_1, . . . , a_n) the board is a union of n pairwise disjoint sets ("boxes") of sizes a_1, . . . , a_n. In each move Maker claims p elements of the board and Breaker claims q. Maker wins if at the end of the game he fully occupies some box. In the case where all boxes have the same size s we use the notation Box(p, q; n × s).

Theorem 3.1. If s ≤ (p − 1) ∑_{i=1}^{n−1} 1/i, then Maker, as the first or the second player, wins Box(p, 1; n × s).

Proof. Let a_1 ≤ . . . ≤ a_n ≤ a_1 + 1. Define f(n, p) by the following recursion: f(1, p) = 0 and f(n, p) = ⌊n(f(n − 1, p) + p)/(n − 1)⌋ for n ≥ 2. One shows that if ∑_{i=1}^{n} a_i ≤ f(n, p), then Maker, as the second player, wins the game. We prove this by induction, where in the inductive step Maker in his current turn claims p elements so as to keep the surviving boxes leveled. One can easily show that f(n, p) ≥ (p − 1) n ∑_{i=1}^{n−1} 1/i, which implies the theorem.
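A quick sketch (my own code) checking the recursion f(n, p) against the harmonic-sum lower bound used above:

```python
def f(n, p):
    """The recursion f(1, p) = 0, f(m, p) = floor(m * (f(m - 1, p) + p) / (m - 1))."""
    value = 0
    for m in range(2, n + 1):
        value = m * (value + p) // (m - 1)
    return value

def harmonic_bound(n, p):
    return (p - 1) * n * sum(1.0 / i for i in range(1, n))

for n, p in [(5, 3), (10, 4), (50, 10)]:
    print(n, p, f(n, p), round(harmonic_bound(n, p), 2),
          f(n, p) >= harmonic_bound(n, p))     # expected: True in every case
```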

Theorem 3.2. If s > p ∑_{i=1}^{n} 1/i, then Breaker wins Box(p, 1; n × s).

Proof. At any point during the game, denote the set of surviving boxes by S, and for a box i ∈ S let c_i denote its current remaining (unclaimed) size. Breaker's strategy is to always destroy a surviving box of minimum remaining size. Suppose, towards a contradiction, that Maker wins at move k. WLOG assume that Breaker destroys box i in his ith move and that in his kth move Maker fully claims box k. Define now a potential function Ψ by

Ψ(j) := (1/(k − j + 1)) ∑_{i=j}^{k} c_i.

This is the potential just before Maker's jth move. Then Ψ(k) = c_k ≤ p, as Maker completes box k at this move, while Ψ(1) = s. In his jth move Maker decreases Ψ by at most p/(k − j + 1), while Breaker destroys the smallest surviving box, which can only increase the average. Thus, Ψ(j + 1) ≥ Ψ(j) − p/(k − j + 1). It follows that

Ψ(k) ≥ s − (p/k + p/(k − 1) + · · · + p/2) ≥ s − p(∑_{i=1}^{n} 1/i − 1) > p,

a contradiction.

As an immediate corollary we see that, for the uniform box game Box(p, 1; n × s), the outcome changes around p ≈ s/ln n. A biased version of Erdős–Selfridge was obtained by Beck:

Theorem 3.3. Let p and q be positive integers. If

∑_{A∈F} (1 + q)^{−|A|/p} < 1/(1 + q),

then Breaker has a winning strategy in the biased (p, q) game played on F.

4 Random Strategies

Here we discuss games where a random strategy can give us non-trivial answers. At first glance this might seem strange, as positional games are deterministic, but it makes sense in the following way. Suppose one of the players has a randomized strategy which wins with positive probability against every fixed strategy of the opponent. Since the game is finite and has perfect information, either this player has a deterministic winning strategy or the opponent has a strategy guaranteeing at least a draw; the latter is impossible, since against such a strategy the randomized strategy would never win. It thus follows that this player indeed has a (deterministic) winning strategy (even though it may be very difficult to find one!). A simple example is the following minimum degree game.

Theorem 4.1. For every ε > 0 there exists a constant C = C(ε) such that the following holds. Suppose that G is a graph on n vertices with minimum degree d ≥ C log n. Then Maker has a strategy to build a spanning subgraph M with δ(M) ≥ (1 − ε)d/3.

Remark 4.2. For constant d one can get roughly d/4 for free. For d = Ω(log n), Beck showed that one can obtain roughly d/2 (harder!).

Proof. Maker's strategy goes as follows: suppose that in round i Breaker claims an edge uv. Then Maker tosses a fair coin to decide whether he claims an arbitrary free edge of the form ux or one of the form vx. Fix a vertex v ∈ V(G); we show that with probability o(1/n) Maker ends up with fewer than (1 − ε)d/3 edges touching v. Indeed, for this to happen Breaker must have claimed at least (1 + ε/2) · 2d/3 edges touching v (as v has degree at least d in G). Each time Breaker claims such an edge, Maker claims an edge of the form vx with probability 1/2, independently of all other coin tosses. Therefore, in expectation, Maker's degree at v should be at least (1 − ε/2)d/3. As the coin tosses are independent, one can use Chernoff's bounds to obtain

Pr[d_M(v) ≤ (1 − ε)d/3] ≤ e^{−2 ln n} = o(1/n).

Union bound does the rest of the job.
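For intuition, here is a small simulation sketch of this randomized strategy. Breaker below plays a naive heuristic (persistently attacking one target vertex) rather than an optimal adversary, and the graph model, parameters and function names are all my own choices; the point is only to watch Maker's degree at the attacked vertex stay comfortably large.

```python
import random

def simulate(n=200, d=30, seed=0):
    """Random strategy for the minimum degree game; returns Maker's final degree at vertex 0."""
    rng = random.Random(seed)
    # Random graph G(n, p) with p = 2d/n: typical degrees around 2d, so the minimum
    # degree is typically at least d (an assumption of the sketch, not enforced).
    edges = {frozenset((u, v)) for u in range(n) for v in range(u + 1, n)
             if rng.random() < 2 * d / n}
    at = {v: set() for v in range(n)}
    for e in edges:
        u, v = tuple(e)
        at[u].add(e)
        at[v].add(e)

    maker_deg0 = 0
    free = set(edges)
    while free:
        # Breaker: claim a free edge at vertex 0 if possible, otherwise any free edge.
        b = next(iter(at[0] & free), None) or next(iter(free))
        free.discard(b)
        u, v = tuple(b)
        # Maker: fair coin between the two endpoints of Breaker's edge.
        x = u if rng.random() < 0.5 else v
        reply = next(iter(at[x] & free), None) or (next(iter(free)) if free else None)
        if reply is None:
            break
        free.discard(reply)
        if 0 in reply:
            maker_deg0 += 1
    return maker_deg0

print(simulate())   # typically well above d/3 = 10
```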
