
FIXED POINT THEOREMS AND APPLICATIONS TO GAME THEORY

ALLEN YUAN

Abstract. This paper serves as an expository introduction to fixed point theorems on subsets of $\mathbb{R}^m$ that are applicable in game theoretic contexts. We prove Sperner's Lemma, Brouwer's Fixed Point Theorem, and Kakutani's Fixed Point Theorem, and apply these theorems to demonstrate the conditions for existence of Nash equilibria in strategic games.

Contents
1. Introduction
2. Convexity and Simplices
3. Sperner's Lemma
4. Brouwer's Fixed Point Theorem
5. Kakutani's Fixed Point Theorem
6. Nash Equilibria of Pure Strategic Games
7. Nash Equilibria of Finite Mixed Strategic Games
Acknowledgments

1. Introduction

Game theory is a subfield of economics that describes how decision-makers interact. Although it makes many of the same assumptions as traditional models of economics (for example, rationality — the assumption that decision-makers pursue well-defined objectives), game theory differs from traditional economic models in that decision-makers attempt to obtain information about what other decision-makers will choose and take this information, or the expectation of this information, into account in making their own decision. We call this assumption reasoning strategically. In other traditional economic models (a competitive equilibrium model, for example), decision-makers generally only consider a set of parameters (e.g. prices) as given when making their decision, while in game theory, decision-makers seek to optimize their strategy based on whether their decision is optimal given what other decision-makers are expected to do.

Game theory is commonly described using mathematics, which offers a flexible and abstract model to describe a variety of situations. A game is a description of a strategic interaction between different parties, called players, that describes constraints on what the players can do but not what they actually do. A solution systematically describes an outcome that might occur in a game — game theory predicts reasonable solutions for games.

In this paper, we will investigate a basic solution concept in game theory, the Nash equilibrium. A Nash equilibrium is a solution in which each player correctly predicts what the other players will do and responds optimally, so that no player can improve their position by choosing differently. We seek to understand the conditions under which a Nash equilibrium is guaranteed to exist. To do this, we must return to mathematics to describe our conditions concisely and to prove that these conditions truly will yield a Nash equilibrium. To begin with, we define a mathematical structure, the simplex, in section 2 and explore some properties of this structure. In section 3, we prove Sperner's Lemma, a key result about simplices and their subdivisions that is crucial to the proof of Brouwer's Fixed Point Theorem in section 4. We then extend Brouwer's Theorem for point-valued functions to Kakutani's Theorem for set-valued functions in section 5. In section 6, we apply the mathematics we have covered to game theory by defining the basic components of game theory mathematically. We then establish a set of conditions for the existence of a Nash equilibrium and prove that these conditions are sufficient. Finally, in section 7, we consider a non-deterministic extension of strategic games and show that any such extension of a game with a finite number of outcomes must have a Nash equilibrium. For this paper, we only assume some familiarity with introductory real analysis and microeconomic theory.

2. Convexity and Simplices

We first define the notion of convexity and use convexity to describe the set of points that make up a simplex. We then describe simplicial subdivisions, which will be important in the proof of Sperner's Lemma.

Definition 2.1. A set $S \subseteq \mathbb{R}^m$ is convex if for all $x, y \in S$ and $\lambda \in [0, 1]$ we have $\lambda x + (1 - \lambda)y \in S$.

One way to think about this definition is that if $S$ is convex, then we can take any two vectors in $S$ and connect their tips with a straight line segment, and all the vectors with tips on that line segment will also be in $S$.

Notation 2.2. $x^i$ will denote the $i$th vector in a set of vectors $x^1, \dots, x^n$, while $x_i$ will denote the $i$th component of the vector $x$. $[n]$ denotes the set $\{1, \dots, n\}$.

Definition 2.3. $\sum_{i=1}^{n} \lambda_i x^i$ is a convex combination of $x^1, \dots, x^n$ if $\lambda_i \geq 0$ for all $i \in [n]$ and $\sum_{i=1}^{n} \lambda_i = 1$. We call $\sum_{i=1}^{n} \lambda_i x^i$ strictly positive if $\lambda_i > 0$ for all $i \in [n]$.

Definition 2.4. For $A \subseteq \mathbb{R}^m$, the convex hull of $A$, denoted $\mathrm{co}(A)$, is the set of all finite convex combinations of points in $A$:
$$\mathrm{co}(A) = \left\{ \sum_{i=1}^{n} \lambda_i x^i \;\middle|\; x^i \in A,\ \lambda_i \geq 0\ \forall i \in [n],\ \sum_{i=1}^{n} \lambda_i = 1 \right\}$$

Definition 2.5. $x^1, \dots, x^n \in \mathbb{R}^m$ are affinely independent if $\sum_{i=1}^{n} \lambda_i x^i = 0$ and $\sum_{i=1}^{n} \lambda_i = 0$ imply that $\lambda_1 = \cdots = \lambda_n = 0$. In other words, any zero linear combination of these vectors with coefficients that sum to 0 must be trivial.
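Affine independence reduces to a linear-algebra check: the condition in Definition 2.5 says exactly that the augmented vectors $(x^i, 1) \in \mathbb{R}^{m+1}$ are linearly independent. The following sketch is an illustration of mine, not part of the original text (it assumes NumPy); it tests this via a rank computation and also forms convex combinations as in Definition 2.3.

```python
import numpy as np

def affinely_independent(points):
    """Definition 2.5: x^1, ..., x^n are affinely independent iff the only solution of
    sum(l_i x^i) = 0 with sum(l_i) = 0 is l = 0, i.e. the vectors (x^i, 1) in R^(m+1)
    are linearly independent."""
    pts = np.asarray(points, dtype=float)                    # shape (n, m)
    augmented = np.hstack([pts, np.ones((len(pts), 1))])     # append a 1 to each x^i
    return np.linalg.matrix_rank(augmented) == len(pts)

def convex_combination(points, weights):
    """Form sum(l_i x^i) after checking the weights satisfy Definition 2.3."""
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0), "not a convex combination"
    return w @ np.asarray(points, dtype=float)

# The standard basis of R^3 gives the vertices of the (closed) standard 2-simplex:
e = np.eye(3)
print(affinely_independent(e))                    # True
print(convex_combination(e, [0.2, 0.3, 0.5]))     # a point of the closed standard 2-simplex
```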


Figure 1. A 2-simplex on the left and a closed 2-simplex on the right.

Definition 2.6. An n-simplex is the set of all strictly positive convex combinations of an $(n + 1)$-element affinely independent set. An n-simplex $T$ with affinely independent vertices $x^0, \dots, x^n$ is defined by
$$T = x^0 \cdots x^n = \left\{ \sum_{i=0}^{n} \lambda_i x^i \;\middle|\; \lambda_i > 0\ \forall i \in \{0, \dots, n\},\ \sum_{i=0}^{n} \lambda_i = 1 \right\}$$
The standard n-simplex is the n-simplex formed by the $n + 1$ standard basis vectors in $\mathbb{R}^{n+1}$.

Example 2.7. A 0-simplex is a single point. A 1-simplex is a line segment (minus the endpoints). A 2-simplex is a triangle (minus the boundary). A 3-simplex is a tetrahedron (minus the boundary). See Figure 1 for an example of a 2-simplex.

Definition 2.8. A closed n-simplex with vertices at the affinely independent vectors $x^0, \dots, x^n$ is the convex hull of the set of vectors $\{x^0, \dots, x^n\}$:

$$T = x^0 \cdots x^n = \left\{ \sum_{i=0}^{n} \lambda_i x^i \;\middle|\; \lambda_i \geq 0\ \forall i \in \{0, \dots, n\},\ \sum_{i=0}^{n} \lambda_i = 1 \right\}$$
Note that the closure of an n-simplex is a closed n-simplex. The closure of the standard n-simplex is denoted $\Delta_n$, or $\Delta$ if the dimension is evident.

Definition 2.9. If $k \leq n$, then the k-simplex $x^{i_0} \cdots x^{i_k}$ is a face of $x^0 \cdots x^n$, where $i_0, \dots, i_k \in \{0, \dots, n\}$ and $i_0 < \cdots < i_k$. Note that a closed n-simplex is also the union of all faces of an n-simplex with the same vertices.

Definition 2.10. Let $\mathcal{P}(S)$ denote the power set of $S$. For $y \in \mathrm{co}(\{x^0, \dots, x^n\})$ such that $y = \sum_{i=0}^{n} \lambda_i x^i$, let the set-valued function $\chi : \mathrm{co}(\{x^0, \dots, x^n\}) \to \mathcal{P}(\{0, \dots, n\})$ be defined by $\chi(y) = \{i \mid \lambda_i > 0\}$. Note that if $\chi(y) = \{i_0, \dots, i_k\}$, then $y \in x^{i_0} \cdots x^{i_k}$. This face is called the carrier of $y$.

Definition 2.11. If $T = x^0 \cdots x^n$ is an n-simplex, a simplicial subdivision of $T$ is a finite collection of simplices $\{T_i \mid i \in [m]\}$ (called subsimplices) such that $\bigcup_{i=1}^{m} T_i = T$ and for all $i, j \in [m]$ we have $T_i \cap T_j = \emptyset$ or $T_i \cap T_j = T_{ij}$, where $T_{ij}$ is a common face of $T_i$ and $T_j$. The mesh of a simplicial subdivision is the diameter of the largest subsimplex in the subdivision. See Figure 2 for an example.
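Since the vertices are affinely independent, the coefficients $\lambda_i$ in Definition 2.10 are uniquely determined by $y$, so the carrier can be computed by solving a small linear system. A minimal sketch of mine (assuming NumPy; not part of the paper):

```python
import numpy as np

def barycentric_coordinates(vertices, y):
    """Solve sum(l_i x^i) = y with sum(l_i) = 1 for affinely independent vertices x^i."""
    V = np.asarray(vertices, dtype=float)          # shape (n+1, m)
    A = np.vstack([V.T, np.ones(len(V))])          # the m coordinate equations plus the sum-to-1 row
    b = np.append(np.asarray(y, dtype=float), 1.0)
    lam, *_ = np.linalg.lstsq(A, b, rcond=None)    # unique solution when the x^i are affinely independent
    return lam

def carrier(vertices, y, tol=1e-12):
    """chi(y): the indices i with l_i > 0, naming the face that carries y (Definition 2.10)."""
    lam = barycentric_coordinates(vertices, y)
    return {i for i, l in enumerate(lam) if l > tol}

# y lies on the edge x^0 x^2 of the closed 2-simplex spanned by the standard basis of R^3:
verts = np.eye(3)
print(carrier(verts, [0.5, 0.0, 0.5]))             # {0, 2}
```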


Figure 2. The closed 2-simplex to the left is not simplicially subdivided because the subsimplices $x^0x^1x^4$ and $x^0x^2x^3$ do not have a common face as the intersection of their closures. The closed 2-simplex to the right is simplicially subdivided.

Definition 2.12. For any $m \in \mathbb{N}$, the set of vertices
$$V = \left\{ v \in \mathbb{R}^{n+1} \;\middle|\; v_i = \frac{k_i}{m} \text{ where } k_i \in \mathbb{N}_0 \text{ for all } i \in \{0, \dots, n\} \text{ and } \sum_{i=0}^{n} k_i = m \right\}$$
forms a simplicial subdivision of $\Delta_n$. (Note here that $\mathbb{N}_0$ is the set of natural numbers and 0.) This subdivision is called an equilateral subdivision. By increasing $m$, we can make the mesh of this subdivision arbitrarily small. See Figure 3.

Definition 2.13. Given a simplex $T = x^0 \cdots x^n$, the barycenter of $T$ is given by $b(T) = \frac{1}{n+1} \sum_{i=0}^{n} x^i$. For simplices $T_1, T_2$, we say that $T_1 > T_2$ if $T_2$ is a face of $T_1$ and $T_1 \neq T_2$. Given a simplex $T$, the family of all simplices $b(T_0) \cdots b(T_k)$ such that $T \geq T_0 > T_1 > \cdots > T_k$ is a simplicial subdivision of $T$ called the first barycentric subdivision of $T$. Further barycentric subdivisions are defined recursively, and barycentric subdivisions can also have arbitrarily small mesh. See Figure 3.
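The vertex set in Definition 2.12 is easy to enumerate for small $n$ and $m$. The sketch below is my own illustration (using exact fractions so the coordinates $k_i/m$ are not rounded):

```python
from itertools import product
from fractions import Fraction

def equilateral_vertices(n, m):
    """Vertices v with v_i = k_i / m, k_i nonnegative integers, sum k_i = m (Definition 2.12)."""
    verts = []
    for ks in product(range(m + 1), repeat=n + 1):
        if sum(ks) == m:
            verts.append(tuple(Fraction(k, m) for k in ks))
    return verts

V = equilateral_vertices(2, 4)        # the vertex set drawn in the left diagram of Figure 3
print(len(V))                         # 15 vertices
print(V[0])                           # the vertex (0, 0, 1), stored as exact fractions
```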

3. Sperner's Lemma

Definition 3.1. Given a simplicially subdivided closed n-simplex $T = x^0 \cdots x^n$, let $V$ be the set of all vertices of all subsimplices. A labeling function $f : V \to \{0, \dots, n\}$ is called a proper labeling of this subdivision if $f(v) \in \chi(v)$ for all $v \in V$ (where $\chi$ is defined as in Definition 2.10). An n-subsimplex is completely labeled by $f$ if $f$ takes on all values $0, \dots, n$ on its vertices.

Sperner's Lemma guarantees the existence of a completely-labeled n-subsimplex for any properly-labeled, simplicially-subdivided simplex. In fact, it proves something stronger: the number of completely-labeled n-subsimplices is odd. To prove this, we construct a graph out of a subset of the subsimplices, where the subsimplices are vertices and two of them are connected if they share a completely-labeled face (which serves as an edge). We then use cardinality and parity to finish the inductive argument.


Figure 3. The left diagram shows an equilateral subdivision of $\Delta_2$ with $m = 4$. The right diagram shows the first barycentric subdivision of the closed 2-simplex $x^0x^1x^2$.

Theorem 3.2 (Sperner’s Lemma). If T is an n-dimensional closed simplex that is simplicially subdivided and properly labeled by f, then there are an odd number of completely-labeled n-subsimplices.

Proof. By induction on $n$, the dimension of the simplex. If $n = 0$, then $T$ is a single point — call it $x^0$. By definition of the labeling function, the single point in $T$ can only be assigned a label of 0, and since the face containing it consists only of that point ($x^0$), this face is a completely-labeled 0-subsimplex. There are no other possible nonempty subsimplices or simplicial subdivisions, so there are an odd number of completely-labeled 0-subsimplices for $n = 0$.

We now assume the result for $n - 1$ and will show that it holds for $n$. Consider an arbitrary n-dimensional closed simplex that is simplicially subdivided and properly labeled by $f$ — call it $T$, where $T = x^0 \cdots x^n$. We will begin by defining a few sets. Let:
• $C$ be the set of all completely-labeled n-subsimplices,
• $A$ be the set of all almost completely-labeled n-subsimplices (i.e. those n-subsimplices that achieve $\{0, \dots, n-1\}$ on their vertices but not $n$),
• $B$ be the set of all completely-labeled $(n-1)$-subsimplices on the face $x^0 \cdots x^{n-1}$, and
• $E$ be the set of all completely-labeled $(n-1)$-subsimplices. (Note that $B \subseteq E$.)

We will construct a graph $G = (V, E)$ from these sets. Let the set of vertices be $V = C \cup A \cup B$ and let $E$ be the set of edges. An edge $e \in E$ is incident to a vertex $v \in V$ if:
(1) $v = e \in B$, or
(2) $v \in A \cup C$ and $e$ is a face of $v$.

We begin by showing that $G$ is well-defined, i.e. that every edge is incident to exactly two distinct vertices. Consider an arbitrary $e \in E$. We know either $e \in B$ or $e \in E \setminus B$:

• If $e \in B$, then $e$ is incident to itself and is a face of another n-subsimplex $T_n$, and since $e$ is completely labeled it forces $T_n$ to be in either $A$ or $C$ (since $T_n$ must attain at least $\{0, \dots, n-1\}$ on its vertices). Thus $e$ is incident to two distinct vertices.
• If $e \in E \setminus B$, then $e$ must be a face of two distinct n-subsimplices (it cannot lie on any face of $T$ other than $x^0 \cdots x^{n-1}$, because then it would not be able to attain $\{0, \dots, n-1\}$), and it forces those two n-subsimplices to be in $A$ or $C$ for the same reason as above, so $e$ will be incident to two distinct vertices.

Thus $G$ is well-defined as a graph. We now calculate the degree of the vertices $v \in V$. We consider cases:
• If $v \in B$, then $\deg v = 1$. Exactly one edge is incident to $v$ — itself (since $v \in B \subset E$) — and $v$ cannot have any other edges incident to it since it is not in $A \cup C$.
• If $v \in C$, then $\deg v = 1$. Since $v$ achieves $\{0, \dots, n\}$ on its vertices, only one $(n-1)$-face of $v$ can attain $\{0, \dots, n-1\}$. All other $(n-1)$-faces must contain $n$ and thus will be missing an element from $\{0, \dots, n-1\}$.
• If $v \in A$, then $\deg v = 2$. Since $v$ achieves $\{0, \dots, n-1\}$ on its vertices but not $n$, and $v$ is an n-subsimplex, we know by the Pigeonhole Principle that $v$ must achieve exactly one duplicated value in $\{0, \dots, n-1\}$ on its vertices. Thus there are exactly two completely-labeled $(n-1)$-faces of $v$, obtained by omitting one or the other of the two vertices carrying the duplicated label, so there must be two edges incident to $v$.

By the Handshake Theorem, we know $\sum_{v \in V} \deg v = 2|E|$ (since each edge is counted twice in the sum of the degrees), but from above we can also see that $\sum_{v \in V} \deg v = 2|A| + |B| + |C|$. Thus $2|E| = 2|A| + |B| + |C|$. We know $2|E|$ and $2|A|$ must be even, so $|B| + |C|$ must be even as well. We can see that $|B|$ must be odd. Consider $T' = x^0 \cdots x^{n-1}$, subdivided in the same way as $T$ and labeled by the same function $f$. This is a simplicially subdivided and properly labeled $(n-1)$-dimensional closed simplex, so by the inductive hypothesis it has an odd number of completely-labeled $(n-1)$-subsimplices; but these completely-labeled $(n-1)$-subsimplices are simply the elements of $B$, so $|B|$ must be odd. Thus it follows that $|C|$ must be odd as well, and thus an n-dimensional closed simplex that is simplicially subdivided and properly labeled must have an odd number of completely-labeled n-subsimplices.

We will use the existence of a completely-labeled n-subsimplex in the proof of Brouwer's Theorem.
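In dimension one the parity statement is easy to observe computationally: a closed 1-simplex is a segment, a subdivision cuts it into subsegments, and a proper labeling must give the left endpoint label 0, the right endpoint label 1, and either label to interior vertices. The sketch below (my own illustration, using a random proper labeling) counts the completely-labeled subsegments and confirms that the count is always odd.

```python
import random

def count_completely_labeled(num_subsegments, seed=0):
    """Count subsegments of a subdivided closed 1-simplex whose endpoints carry labels {0, 1}.

    A proper labeling (Definition 3.1) forces label 0 at the left vertex and label 1 at the
    right vertex; interior vertices may take either label."""
    rng = random.Random(seed)
    labels = [0] + [rng.choice([0, 1]) for _ in range(num_subsegments - 1)] + [1]
    return sum(1 for a, b in zip(labels, labels[1:]) if {a, b} == {0, 1})

for n in (1, 5, 20, 100):
    c = count_completely_labeled(n, seed=n)
    print(n, c, c % 2 == 1)   # the count is always odd, as Sperner's Lemma guarantees
```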

4. Brouwer's Fixed Point Theorem

Definition 4.1. Given a set $X$ that is a subset of a Euclidean space and a function $f : X \to X$, $x^* \in X$ is a fixed point of $f$ if $f(x^*) = x^*$.

We will now prove Brouwer's Fixed Point Theorem for simplices, namely, that any continuous function that maps the standard closed m-simplex to itself must have a fixed point.

Notation 4.2. Given vectors $x, y \in \mathbb{R}^m$, we notate $x_i \leq y_i$ for all $i \in [m]$ by $x \leq y$. We notate $x_i < y_i$ for all $i \in [m]$ by $x < y$.

Lemma 4.3. If $x, y \in \Delta_m$ and $x \leq y$, then $x = y$.

Proof. Since $x, y \in \Delta_m$, we know that $\sum_{i=0}^{m} x_i = \sum_{i=0}^{m} y_i = 1$. Assume that $x_j < y_j$ for some $j \in \{0, \dots, m\}$. Since $x \leq y$, it follows that $x_i \leq y_i$ for all $i \in \{0, \dots, m\}$, so then $1 = \sum_{i=0}^{m} x_i < \sum_{i=0}^{m} y_i = 1$, a contradiction. Thus $x_i \geq y_i$ for all $i \in \{0, \dots, m\}$, but we also know that $x_i \leq y_i$ for all $i \in \{0, \dots, m\}$, so it follows that $x_i = y_i$ for all $i \in \{0, \dots, m\}$ and $x = y$.

Theorem 4.4. If f : ∆m → ∆m is continuous, then it has a fixed point.

Proof. Fix $n \in \mathbb{N}$ and take a simplicial subdivision of $\Delta_m$ such that the mesh is less than or equal to $\frac{1}{n}$. (We know such a subdivision must exist; for example, take an equilateral or barycentric subdivision.) Let $V$ be the set of all vertices of all subsimplices and construct a labeling function $\lambda : V \to \{0, \dots, m\}$ such that if $v \in V$ satisfies $\chi(v) = \{i_0, \dots, i_k\}$ (so that $v \in e^{i_0} \cdots e^{i_k}$), then $\lambda(v)$ is the smallest element of $\chi(v) \cap \{i \mid f_i(v) \leq v_i\}$, which must exist because otherwise
$$1 = \sum_{i=0}^{m} f_i(v) \geq \sum_{j=0}^{k} f_{i_j}(v) > \sum_{j=0}^{k} v_{i_j} = \sum_{i=0}^{m} v_i = 1.$$

We can see that $\lambda$ is a proper labeling of this simplicial subdivision of $\Delta_m$ because $\lambda(v) \in \chi(v)$ for all $v \in V$. Thus, by Sperner's Lemma, there must exist a completely-labeled m-subsimplex. Denote this subsimplex by ${}^{n}p^0 \cdots {}^{n}p^m$ (where $0, \dots, m$ are the labels of the respective vertices). We can repeat this process for all $n \in \mathbb{N}$ to get a sequence of completely-labeled m-subsimplices indexed by $n$. Now consider the sequence of vertices with label 0, $({}^{n}p^0)$. Since $\Delta_m$ is compact (closed and bounded) and ${}^{n}p^0 \in \Delta_m$ for all $n \in \mathbb{N}$, we know by the Bolzano-Weierstrass Theorem that there exists a subsequence of indices $(n_0)$ such that $({}^{n_0}p^0)$ converges to some point $p^0 \in \Delta_m$. By Bolzano-Weierstrass again, we can take a subsequence of $(n_0)$ — call it $(n_1)$ — such that $({}^{n_1}p^1)$ converges to $p^1 \in \Delta_m$. We can continue in this manner until arriving at a subsequence $(n_m)$ such that $({}^{n_m}p^m)$ converges to $p^m \in \Delta_m$. All subsequences of a convergent sequence must converge to the same value, so $({}^{n_m}p^0), \dots, ({}^{n_m}p^m)$ converge to $p^0, \dots, p^m$. However, as $n \to \infty$, the mesh approaches 0, so the vertices of ${}^{n}p^0 \cdots {}^{n}p^m$ must approach each other, and so $p^0 = \cdots = p^m = p$, where $p \in \Delta_m$. We also know from above that $f_i({}^{n}p^i) \leq ({}^{n}p^i)_i$ for all $n \in \mathbb{N}$ and $i \in \{0, \dots, m\}$, so since $f$ is continuous we have that $f_i$ is continuous for all $i \in \{0, \dots, m\}$ and $f_i(p) \leq p_i$ for all $i \in \{0, \dots, m\}$. Then it follows that $f(p) \leq p$, but then by Lemma 4.3 it follows that $f(p) = p$ (since $p, f(p) \in \Delta_m$), so $p$ must be a fixed point of $f$.

The general version of Brouwer's Theorem describes the conditions on a set $X$ such that any continuous function $f : X \to X$ will have a fixed point. To prove the general version of Brouwer's Theorem, we utilize homeomorphisms to show that any nonempty, compact, convex set in $m$ dimensions is homeomorphic to the standard closed $(m-1)$-simplex, and thus will share this "fixed-point property" that we proved the standard closed $(m-1)$-simplex to have in Theorem 4.4 (since this result holds in every dimension).
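The proof of Theorem 4.4 is constructive enough to run. The following sketch (mine, not from the paper) carries it out in the simplest nontrivial case, $\Delta_1 \subset \mathbb{R}^2$: it applies the labeling rule $\lambda(v) = \min\,(\chi(v) \cap \{i \mid f_i(v) \le v_i\})$ on an equilateral subdivision of mesh $1/n$, finds a completely-labeled subsimplex (which Sperner's Lemma guarantees), and returns its barycenter as an approximate fixed point; the example map $f$ is an arbitrary choice for illustration.

```python
def approximate_fixed_point(f, n=10_000):
    """Approximate a fixed point of a continuous f : Delta_1 -> Delta_1 (Delta_1 in R^2)
    by the construction in the proof of Theorem 4.4, using mesh 1/n."""
    def label(k):
        v = (k / n, 1 - k / n)
        chi = [i for i in (0, 1) if v[i] > 0]            # carrier of v (Definition 2.10)
        return min(i for i in chi if f(v)[i] <= v[i])    # the labeling rule from the proof

    prev = label(0)                                      # the vertex (0, 1) is forced to carry label 1
    for k in range(1, n + 1):
        cur = label(k)
        if {prev, cur} == {0, 1}:                        # a completely-labeled 1-subsimplex
            mid = (k - 0.5) / n
            return (mid, 1 - mid)                        # its barycenter
        prev = cur
    raise AssertionError("unreachable: Sperner's Lemma guarantees a completely-labeled subsimplex")

# A continuous self-map of Delta_1 whose nonzero fixed point is (2/3, 1/3):
f = lambda v: ((2 * v[0] / 3) ** 0.5, 1 - (2 * v[0] / 3) ** 0.5)
p = approximate_fixed_point(f)
print(p, f(p))       # p and f(p) agree to within roughly the mesh size
```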

Definition 4.5. B is homeomorphic to A if there exists a continuous bijection f : A → B such that f −1 is also continuous (called a homeomorphism).

Corollary 4.6. For all n ∈ N, if K is homeomorphic to ∆n and f : K → K is continuous, then f has a fixed point.

Proof. Consider an arbitrary $n \in \mathbb{N}$. Since $K$ is homeomorphic to $\Delta_n$, there exists some continuous bijection $h : \Delta_n \to K$ such that $h^{-1}$ is continuous. We know the composition of continuous functions is continuous, so $h^{-1} \circ f \circ h$ must be continuous. However, we know $h^{-1} \circ f \circ h : \Delta_n \to \Delta_n$, so by Theorem 4.4 $h^{-1} \circ f \circ h$ must have a fixed point — call it $z$. Thus $h^{-1}(f(h(z))) = z$. Since $h$ is bijective, we know $h^{-1}$ is also bijective, so it follows that $f(h(z)) = h(z)$. Then $h(z) \in K$ is a fixed point of $f$.

To prove that any nonempty, compact, convex set in $\mathbb{R}^m$ is homeomorphic to the standard closed $(m-1)$-simplex, we will first show that any nonempty, compact, convex set in $\mathbb{R}^m$ is homeomorphic to a closed unit ball in $\mathbb{R}^m$. Since the standard closed $(m-1)$-simplex also has these properties, it will also be homeomorphic to a closed unit ball in $\mathbb{R}^m$, and we can compose these homeomorphisms to prove what we originally desired. To show that any nonempty, compact, convex set in $\mathbb{R}^m$ is homeomorphic to a closed unit ball in $\mathbb{R}^m$, we will need a helper function which we will call $k$, defined below, and a set of points called a ray of $x \in \mathbb{R}^m$, which is defined to be the set $r(x) = \{\alpha x \mid \alpha \in \mathbb{R}^+\}$.

Given $K$, an arbitrary compact, convex subset of $\mathbb{R}^m$ with nonempty interior, let $k : \mathbb{R}^m \setminus \{0\} \to K$ be the function defined by $k(x) = y$ such that $y \in r(x) \cap K$ but $\alpha y \notin K$ for all $\alpha > 1$. This function maps a point $x \in \mathbb{R}^m$ to the point in $K$ farthest from the origin along $r(x)$. We assume 0 to be in the interior of $K$ without loss of generality, and thus we can find points of $K$ in any direction from 0 (choose an arbitrarily small $\epsilon$-ball around 0 that is fully contained in $K$). For the following proofs, we assume $K$ retains its properties of being compact and convex with nonempty interior.

Lemma 4.7. For all $K$, $k$ is well-defined and bounded.

Proof. Let $N = \{\|y\| \mid y \in r(x) \cap K\}$ and $r_x = \sup N$. Since $K$ is bounded and its interior is nonempty, we know $r(x) \cap K$ and $N$ are bounded and nonempty, so $\sup N$ must exist. In addition, since $\sup N \in \overline{N}$, there must exist some sequence $y_1, y_2, \dots$ such that $y_n \in r(x) \cap K$ for all $n \in \mathbb{N}$ and $\lim_{n \to \infty} \|y_n\| = \sup N = r_x$. Since $r(x) \cap K$ is compact and $y_n \in r(x) \cap K$ for all $n \in \mathbb{N}$, by Bolzano-Weierstrass we can find a convergent subsequence of $(y_n)$ that converges to $y \in r(x) \cap K$. Since $\lim_{n \to \infty} \|y_n\| = r_x$, it follows that $\|y\| = r_x$. By the convexity of $K$, we know that all points between 0 and $y$ must be in $K$. In addition, it is not possible to have some $y' \in r(x) \cap K$ such that $y' = \alpha y$ with $\alpha > 1$, because then $\|y'\| = \alpha\|y\| > \|y\|$, which contradicts $\|y\| = \sup N$. Thus $k$ is well-defined. We can see that $\|k(x)\|$ is bounded below by 0 and bounded above because $K$ is compact.

Lemma 4.8. For all $K$, $k$ is continuous.

Proof. Consider $x \in \mathbb{R}^m \setminus \{0\}$, and suppose that $k$ is not continuous at $x$. Then there exists some $\epsilon > 0$ such that for all $\delta > 0$ there exists some $y \in \mathbb{R}^m \setminus \{0\}$ such that $\|x - y\| < \delta$ but $\|k(x) - k(y)\| \geq \epsilon$. Since 0 is in the interior of $K$, there exists a ball around 0 with radius $\epsilon_0$ such that $B(0, \epsilon_0) \subset K$. Since $K$ is convex, we know that $\mathrm{co}(B(0, \epsilon_0) \cup \{k(x)\}) \subset K$, since $k(x) \in K$. Let $C = \mathrm{co}(B(0, \epsilon_0) \cup \{k(x)\})$ and let $T = C \cap \partial B(k(x), \epsilon)$. (If $0 \in B(k(x), \epsilon)$ we can make $\epsilon$ smaller so that $0 \notin B(k(x), \epsilon)$.) We can construct a sequence of points $y_1, y_2, \dots$ such that $\lim_{n \to \infty} y_n = x$ and $k(y_n) \notin B(k(x), \epsilon)$ for all $n \in \mathbb{N}$ by choosing $y_n$ to be the $y$ given by $\delta = \frac{1}{n}$. Since $y_n$ approaches $x$, for sufficiently large $n$, $r(y_n)$ will intersect $T$ in its interior. We can also see that $k(y_n)$ must be on the "farther side" of $B(k(x), \epsilon)$ from the origin for sufficiently large $n$, since if $k(y_n)$ were closer to the origin than $B(k(x), \epsilon)$, then there would exist some $\alpha > 1$ such that $\alpha k(y_n) \in K$, because $T \subset K$ and $r(y_n)$ intersects $T$ farther away from the origin than $k(y_n)$. Since $k(y_1), k(y_2), \dots \in K$ and $K$ is compact, by Bolzano-Weierstrass we can choose a convergent subsequence of $(k(y_n))$ that converges to $y' \in K$. Since $k(y_n) \in r(y_n)$ for all $n \in \mathbb{N}$ and $\lim_{n \to \infty} y_n = x$, we have that the direction of $r(y_n)$ approaches the direction of $r(x)$ and $y' \in r(x)$. Then since $k(x) \in r(x)$ we have that $y' = \|y'\| \frac{k(x)}{\|k(x)\|} = \frac{\|y'\|}{\|k(x)\|} k(x)$, but $y' \in K$, and since $y'$ is on the "farther side" of $B(k(x), \epsilon)$ from the origin it follows that $\frac{\|y'\|}{\|k(x)\|} > 1$, a contradiction to the definition of $k$. Thus $k$ must be continuous at $x$, and we can generalize to all $x \in \mathbb{R}^m \setminus \{0\}$ to say that $k$ is continuous on its domain.
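For a concrete feel for $k$, take $K = [-1, 1]^2$ with 0 in its interior: the ray $r(x)$ leaves the square where the larger coordinate magnitude reaches 1, so $k(x) = x / \max(|x_1|, |x_2|)$. The sketch below is a worked example of mine (the square and the closed-form formula are illustrative assumptions, not from the paper); it also evaluates the map $x \mapsto x / \|k(x)\|$ that the next theorem builds from $k$.

```python
import math

def k_square(x):
    """k(x) for K = [-1, 1]^2: the farthest point of K from the origin along the ray r(x)."""
    m = max(abs(x[0]), abs(x[1]))              # the ray exits the square when this reaches 1
    return (x[0] / m, x[1] / m)

def f_square(x):
    """The map x -> x / ||k(x)|| used in Theorem 4.9 below (with 0 mapped to 0)."""
    if x == (0.0, 0.0):
        return (0.0, 0.0)
    norm_k = math.hypot(*k_square(x))
    return (x[0] / norm_k, x[1] / norm_k)

print(k_square((0.2, -0.1)))                       # (1.0, -0.5): the boundary point of K on that ray
print(f_square((1.0, 1.0)))                        # a corner of K lands on the unit circle
print(math.hypot(*f_square((0.3, 0.4))) <= 1.0)    # True: points of K land in the closed unit ball
```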

We will now apply k to construct our desired homeomorphism:

Theorem 4.9. Every compact, convex subset of Rm with nonempty interior is homeomorphic to a closed unit ball in Rm.

Proof. Let $K$ be an arbitrary compact, convex subset of $\mathbb{R}^m$ with nonempty interior. Consider the function $f : \mathbb{R}^m \to \mathbb{R}^m$ given by
$$f(x) = \begin{cases} \dfrac{x}{\|k(x)\|} & \text{if } x \neq 0 \\ 0 & \text{if } x = 0 \end{cases}$$
Since $k$ is continuous and the identity and norm functions are continuous on $\mathbb{R}^m \setminus \{0\}$, we know that $\frac{x}{\|k(x)\|}$ is continuous for $x \neq 0$. We now show that $f$ is continuous at 0. Let $\epsilon > 0$; we seek $\delta > 0$ such that for all $y \in \mathbb{R}^m$ with $\|y\| < \delta$ we have $\left\| \frac{y}{\|k(y)\|} \right\| < \epsilon$. (Note that if $y = 0$ then $\|f(y)\| = 0 < \epsilon$.) Since $k$ sends a point to the boundary of $K$ (which is in $K$ because $K$ is closed) and we showed above that $K$ contains some open ball $B(0, \epsilon_0)$ in its interior, it follows that $\|k(y)\| \geq \epsilon_0$ for all $y \in \mathbb{R}^m \setminus \{0\}$. Let $\delta = \epsilon_0\epsilon$, and consider $y \in \mathbb{R}^m \setminus \{0\}$ such that $\|y\| < \epsilon_0\epsilon$. Since $\epsilon_0 > 0$, we know $\frac{\|y\|}{\epsilon_0} < \epsilon$, but since $\|k(y)\| \geq \epsilon_0$ we have that
$$\left\| \frac{y}{\|k(y)\|} \right\| = \frac{\|y\|}{\|k(y)\|} \leq \frac{\|y\|}{\epsilon_0} < \epsilon.$$
Thus $f$ is continuous at 0.

Now consider the function $g : \mathbb{R}^m \to \mathbb{R}^m$ given by
$$g(x) = \begin{cases} \|k(x)\|\, x & \text{if } x \neq 0 \\ 0 & \text{if } x = 0 \end{cases}$$
We will show that $f$ and $g$ are inverses. To do this, we first note that if $x$ and $y$ are positively collinear (i.e. $x = cy$ where $c \in \mathbb{R}^+$), they share a ray ($r(x) = r(y)$), so $k(x) = k(y)$ as well. If $x = 0$, then $f(g(0)) = 0$ and $g(f(0)) = 0$. If $x \neq 0$, then $f$ and $g$ will never map a nonzero value to zero by the way $k$ is defined, so we can see that
$$f(g(x)) = f(\|k(x)\|\, x) = \frac{\|k(x)\|\, x}{\|k(\|k(x)\|\, x)\|}.$$
We can see that $x$ and $\|k(x)\|\, x$ must be positively collinear, so $k(x) = k(\|k(x)\|\, x)$ and $\|k(x)\| = \|k(\|k(x)\|\, x)\|$, so it follows that $f(g(x)) = x$. In addition,
$$g(f(x)) = g\!\left(\frac{x}{\|k(x)\|}\right) = \left\| k\!\left(\frac{x}{\|k(x)\|}\right) \right\| \frac{x}{\|k(x)\|},$$
but $x$ and $\frac{x}{\|k(x)\|}$ must be positively collinear, so $k(x) = k\!\left(\frac{x}{\|k(x)\|}\right)$ and $\|k(x)\| = \left\| k\!\left(\frac{x}{\|k(x)\|}\right) \right\|$, so it follows that $g(f(x)) = x$. Thus $f$ and $g$ are inverses. We can quickly prove that $g$ must also be continuous at 0. Since $K$ is bounded, we know that there must exist some $M > 0$ such that $\|k(y)\| \leq M$ for all $y \in \mathbb{R}^m \setminus \{0\}$, so if we let $\delta = \frac{\epsilon}{M}$ we can see for all $y \in \mathbb{R}^m$ such that $\|y\| < \delta$ that
$$\|\,\|k(y)\|\, y\,\| = \|k(y)\|\, \|y\| \leq M\|y\| < \epsilon.$$
Note that if $x \in K$, then $k(x) = \alpha x$ where $\alpha \geq 1$, because otherwise we would have a contradiction to the definition of $k$. Thus $\frac{1}{\alpha} \in (0, 1]$ and $f(x) = \frac{x}{\alpha\|x\|}$ is in $B(0, 1)$. (If $x = 0$ then $f(0) = 0 \in B(0, 1)$ trivially, and $g(x) = 0 \in K$.) Conversely, $x \in B(0, 1)$ gets mapped to
$$g(x) = \|k(x)\|\, x = \|k(x)\|\, \|x\|\, u_x$$
(where $u_x$ is the unit vector in the direction of $x$). Since $\|k(x)\| = \sup\{\|y\| \mid y \in K \cap r(x)\}$ and $\|x\| \leq 1$, by convexity we know that $g(x) \in K$, because $0, k(x) \in K$.

We now define $f' : K \to B(0, 1)$ and $g' : B(0, 1) \to K$ in the same manner as $f$ and $g$ above, respectively. From above, we know both $f'$ and $g'$ are continuous and they are inverses, so it follows that $f'$ is bijective. Thus, $f'$ is a continuous bijection such that its inverse, $g'$, is continuous, so it follows that $K$ and $B(0, 1)$ must be homeomorphic.

Finally, we prove the general version of Brouwer's Fixed Point Theorem.

Corollary 4.10 (Brouwer's Fixed Point Theorem). If $K$ is a nonempty, compact, convex subset of $\mathbb{R}^m$ and $f : K \to K$ is continuous, then $f$ has a fixed point.

Proof. We can prove this by strong induction on $m$. For the case of 0, $K$ is a single point $x^*$, and so by definition of $f : K \to K$, we have that $f(x^*) = x^*$, a fixed point. Now assume the cases of $0, 1, 2, \dots, m-1$. If $K$ has at most $n$ affinely independent vectors where $0 \leq n < m$, we can use the inductive hypothesis in $n$ dimensions to show that $f$ must have a fixed point (i.e. we can embed $K$ in a lower-dimensional space that will satisfy the conditions to apply Theorem 4.9). If $K$ has $m$ affinely independent vectors, then $K$ must have nonempty interior in $\mathbb{R}^m$ by convexity, so by Theorem 4.9, we know that $K$ is homeomorphic to a closed unit ball $B(0, 1)$ in $\mathbb{R}^m$ (and thus $B(0, 1)$ is also homeomorphic to $K$), so there must exist some continuous bijection $g : K \to B(0, 1)$ such that $g^{-1}$ is continuous. However, $\Delta_{m-1}$ is also a compact, convex subset of $\mathbb{R}^m$ with nonempty interior, so $\Delta_{m-1}$ is homeomorphic to $B(0, 1)$ as well, and there must exist some continuous bijection $h : B(0, 1) \to \Delta_{m-1}$ such that $h^{-1}$ is continuous. Composing $h \circ g$ gives a continuous bijection $h \circ g : K \to \Delta_{m-1}$ such that $(h \circ g)^{-1} = g^{-1} \circ h^{-1}$ is also continuous. Thus $K$ is homeomorphic to $\Delta_{m-1}$, so by Corollary 4.6 it follows that $f$ has a fixed point.

5. Kakutani's Fixed Point Theorem

Kakutani's Fixed Point Theorem extends Brouwer's Theorem to set-valued functions. To work with these functions, we need a new notion of continuity and of fixed points.

Definition 5.1. Let $\mathcal{P}(X)$ denote the set of all nonempty, closed, convex subsets of $X$. If $S$ is nonempty, compact, and convex, then the set-valued function $\Phi : S \to \mathcal{P}(S)$ is upper semi-continuous if for arbitrary sequences $(x_n), (y_n)$ in $S$, we have that $\lim_{n \to \infty} x_n = x_0$, $\lim_{n \to \infty} y_n = y_0$, and $y_n \in \Phi(x_n)$ for all $n \in \mathbb{N}$ imply $y_0 \in \Phi(x_0)$.

Definition 5.2. A fixed point of a set-valued function $\Phi : S \to \mathcal{P}(S)$ is a point $x^* \in S$ such that $x^* \in \Phi(x^*)$.

Kakutani's Theorem is very similar to Brouwer's Theorem, but for set-valued functions: it imposes the same restrictions (nonempty, compact, convex) on a set $X$ to ensure that any upper semi-continuous set-valued function mapping $X$ into closed, convex subsets of itself will have a fixed point. We now prove Kakutani's Theorem for simplices.

Theorem 5.3. If $S$ is an $r$-dimensional closed simplex in a Euclidean space and $\Phi : S \to \mathcal{P}(S)$ is upper semi-continuous, then $\Phi$ has a fixed point.

Proof. Consider an arbitrary simplicial subdivision $S_n$ of $S$ with mesh less than or equal to $\frac{1}{n}$. Let $V$ be the set of vertices of this subdivision, and for each vertex $x \in V$ let $y_x$ be an arbitrary point from $\Phi(x)$. We can extend this mapping linearly within each subsimplex (i.e. take the coefficients from the convex combination of the vertices and apply them to the points $y_x$ instead of the vertices $x$), and this will create a continuous point-valued function $\varphi_n : S \to S$, since convex combinations are continuous and $S$ is convex. By Brouwer's Fixed Point Theorem, since $S$ is nonempty, compact, and convex and $\varphi_n : S \to S$ is continuous, there must exist some fixed point $x_n \in S$ such that $\varphi_n(x_n) = x_n$. We can then take the sequence of points $x_1, x_2, \dots \in S$, and since $S$ is compact, by Bolzano-Weierstrass there must exist a convergent subsequence $(x_{n_v})$ of $(x_n)$ such that $\lim_{v \to \infty} x_{n_v} = x_0 \in S$.

We will now show that $x_0$ is a fixed point of $\Phi$. Let $R_n$ be an $r$-dimensional closed subsimplex of $S_n$ that contains $x_n$ (note that if $x_n$ falls on a common face of two closed subsimplices, choose one arbitrarily). Let ${}^{n}z^0, {}^{n}z^1, \dots, {}^{n}z^r$ be the vertices of $R_n$. If we consider each vertex individually, by a similar argument as in Theorem 4.4, compactness gives each sequence of vertices a convergent subsequence, and since the mesh approaches 0 and $x_n$ approaches $x_0$, each such subsequence of vertices must converge to $x_0$. Since $x_n \in R_n$ we know $x_n = \sum_{i=0}^{r} {}^{n}\lambda_i \, {}^{n}z^i$, with ${}^{n}\lambda_i \geq 0$ for all $n \in \mathbb{N}$, $i \in \{0, 1, \dots, r\}$ and $\sum_{i=0}^{r} {}^{n}\lambda_i = 1$ for all $n \in \mathbb{N}$. Let ${}^{n}y^i = \varphi_n({}^{n}z^i)$ for all $i \in \{0, 1, \dots, r\}$ and $n \in \mathbb{N}$. Since the ${}^{n}z^i$ are vertices of the simplicial subdivisions, by definition of $\varphi_n$ we have ${}^{n}y^i \in \Phi({}^{n}z^i)$ for all $i \in \{0, 1, \dots, r\}$ and $n \in \mathbb{N}$. In addition, since $\varphi_n$ is linear on each subsimplex for all $n \in \mathbb{N}$, we have that
$$x_n = \varphi_n(x_n) = \varphi_n\!\left(\sum_{i=0}^{r} {}^{n}\lambda_i \, {}^{n}z^i\right) = \sum_{i=0}^{r} {}^{n}\lambda_i \, \varphi_n({}^{n}z^i) = \sum_{i=0}^{r} {}^{n}\lambda_i \, {}^{n}y^i$$
for all $n \in \mathbb{N}$. Since ${}^{n}\lambda_i \in [0, 1]$ and ${}^{n}y^i \in S$ for all $n \in \mathbb{N}$ and $i \in \{0, 1, \dots, r\}$, with $S$ compact, we can take a subsequence of $(n_v)$, labeled $(n'_v)$, such that $({}^{n'_v}\lambda_i)$ and $({}^{n'_v}y^i)$ both converge for all $i \in \{0, 1, \dots, r\}$; i.e. $\lim_{v \to \infty} {}^{n'_v}\lambda_i = \lambda^i_0$ and $\lim_{v \to \infty} {}^{n'_v}y^i = y^i_0$ for all $i \in \{0, 1, \dots, r\}$. From above, we have that ${}^{n}\lambda_i \geq 0$ for all $n \in \mathbb{N}$ and $i \in \{0, 1, \dots, r\}$, $\sum_{i=0}^{r} {}^{n}\lambda_i = 1$ for all $n \in \mathbb{N}$, and $x_n = \sum_{i=0}^{r} {}^{n}\lambda_i \, {}^{n}y^i$ for all $n \in \mathbb{N}$, and since these properties are preserved under limits it follows that $\lambda^i_0 \geq 0$ for all $i \in \{0, 1, \dots, r\}$, $\sum_{i=0}^{r} \lambda^i_0 = 1$, and $x_0 = \sum_{i=0}^{r} \lambda^i_0 y^i_0$. However, since $(n'_v)$ is a subsequence of $(n_v)$, $({}^{n'_v}z^i)$ must converge to the same limit as $({}^{n_v}z^i)$, so $\lim_{v \to \infty} {}^{n'_v}z^i = x_0$ for all $i \in \{0, 1, \dots, r\}$. In addition, $\lim_{v \to \infty} {}^{n'_v}y^i = y^i_0$ for all $i \in \{0, 1, \dots, r\}$ and ${}^{n'_v}y^i \in \Phi({}^{n'_v}z^i)$ for all $v \in \mathbb{N}$ and $i \in \{0, 1, \dots, r\}$, so by the upper semi-continuity of $\Phi$ we have that $y^i_0 \in \Phi(x_0)$ for all $i \in \{0, 1, \dots, r\}$. However, since $\Phi(x_0)$ is convex, $\sum_{i=0}^{r} \lambda^i_0 y^i_0$ must also be in $\Phi(x_0)$, so it follows that $x_0 \in \Phi(x_0)$ and $x_0$ is a fixed point of $\Phi$.

We will now generalize Kakutani's Theorem to nonempty, compact, convex sets, as we did with Brouwer's Theorem.

Lemma 5.4. If $f : S \to S'$ is a continuous point-valued function and $g : S' \to \mathcal{P}(S')$ is an upper semi-continuous set-valued function such that $g \circ f : S \to \mathcal{P}(S)$, then $g \circ f$ is upper semi-continuous.

Proof. Consider sequences $(x_n), (y_n)$ in $S$ such that $\lim_{n \to \infty} x_n = x_0$, $\lim_{n \to \infty} y_n = y_0$, and $y_n \in g(f(x_n))$ for all $n \in \mathbb{N}$. We will show that $y_0 \in g(f(x_0))$. By the continuity of $f$, we know that the sequence $(f(x_n))$ converges to $f(x_0)$ in $S'$, so $\lim_{n \to \infty} f(x_n) = f(x_0)$. By the upper semi-continuity of $g$, we know then that $y_0 \in g(f(x_0))$, so $g \circ f$ is upper semi-continuous, as desired.

Definition 5.5. A function $\psi : X \to Y$ where $Y \subset X$ is retracting if $\psi(y) = y$ for all $y \in Y$.

Corollary 5.6 (Kakutani's Fixed Point Theorem). If $S$ is a nonempty, compact, convex set in a Euclidean space and $\Phi : S \to \mathcal{P}(S)$ is upper semi-continuous, then $\Phi$ has a fixed point.

Proof. Consider an arbitrary closed simplex $S'$ that contains $S$ as a subset (such a simplex must exist because $S$ is compact and convex). We can construct a continuous retracting function $\psi : S' \to S$ (for example, the identity within $S$, together with a function that flattens $S' \setminus S$ onto the boundary of $S$). We know that since $\Phi \circ \psi : S' \to \mathcal{P}(S)$ and $\mathcal{P}(S) \subset \mathcal{P}(S')$, we have that $\Phi \circ \psi$ maps points in $S'$ to sets in $\mathcal{P}(S')$. Since $\psi$ is continuous and $\Phi$ is upper semi-continuous, by Lemma 5.4 we have that $\Phi \circ \psi$ is upper semi-continuous. By Theorem 5.3, it then follows that $\Phi \circ \psi$ must have a fixed point — call it $x_0 \in S'$. We know $x_0 \in \Phi(\psi(x_0))$ by definition of fixed point; however, since $\Phi(\psi(x_0)) \subset S$, we know that $x_0 \in S$. Since $\psi$ is retracting, it follows that $\psi(x_0) = x_0$, and thus $\Phi(\psi(x_0)) = \Phi(x_0)$, so $x_0 \in \Phi(x_0)$ and $x_0$ is a fixed point of $\Phi$.

Kakutani's Fixed Point Theorem is the capstone of our work thus far and will be the key result needed in the proof of the existence of a Nash equilibrium.
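To see the machinery of Theorem 5.3 in action, the following sketch (my own illustration; the particular $\Phi$ and the rule for selecting a point of $\Phi(x)$ at each vertex are assumptions made for the example) works on the closed 1-simplex $S = [0, 1]$: it samples one point of $\Phi$ at every grid vertex, interpolates linearly to obtain a continuous $\varphi_n$, locates a fixed point of $\varphi_n$ from a sign change of $\varphi_n(x) - x$, and lets the mesh shrink.

```python
def kakutani_step(selection, n):
    """One stage of the Theorem 5.3 construction on S = [0, 1]: build the piecewise-linear
    phi_n from the selected values at the vertices k/n and return one of its fixed points."""
    grid = [k / n for k in range(n + 1)]
    vals = [selection(v) for v in grid]                # one point of Phi(v) per vertex
    for k in range(n):
        g0, g1 = vals[k] - grid[k], vals[k + 1] - grid[k + 1]
        if g0 == 0:
            return grid[k]                             # a grid vertex is already fixed
        if g0 > 0 >= g1:                               # phi_n(x) - x changes sign in this cell
            slope = (vals[k + 1] - vals[k]) * n
            return (vals[k] - slope * grid[k]) / (1 - slope)   # solve phi_n(x) = x on the cell
    return 1.0                                         # not reached: a sign change always occurs

# An upper semi-continuous Phi with no continuous single-valued selection:
# Phi(x) = {1} for x < 1/3, Phi(1/3) = [0, 1], Phi(x) = {0} for x > 1/3; its fixed point is 1/3.
pick = lambda x: 1.0 if x < 1 / 3 else 0.0             # an admissible choice from Phi(x)
for n in (3, 30, 300, 3000):
    print(n, kakutani_step(pick, n))                   # approaches 1/3, which lies in Phi(1/3)
```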

6. Nash Equilibria of Pure Strategic Games

We now apply the mathematics covered above to rigorously define game theory and establish a set of conditions for the existence of a Nash equilibrium. However, we must first look at how players in a game describe their preferences over a set of outcomes.

Definition 6.1. A binary relation on a set $A$ is a subset of $A \times A$ consisting of all pairs of elements related to each other. For $a, b \in A$, if $a$ is related to $b$, we denote this by $R(a, b)$. We assume that the relation is defined on the set $A$ unless otherwise noted.

Definition 6.2. Binary relations can have several important properties, including:
• Completeness: For all $a, b \in A$, we have $R(a, b)$, $R(b, a)$, or both.
• Reflexivity: For all $a \in A$, $R(a, a)$.
• Transitivity: For all $a, b, c \in A$, if $R(a, b)$ and $R(b, c)$, then $R(a, c)$.

Definition 6.3. A preference relation is a complete, reflexive, transitive binary relation. For $a, b \in A$, we denote $a$ related to $b$ by $a \succsim b$. If $a \succsim b$ but $b \not\succsim a$, then we denote this by $a \succ b$, and if $a \succsim b$ and $b \succsim a$, we denote this by $a \sim b$.

In the context of economics, for $a, b \in A$, $a \succsim b$ means that $a$ is weakly preferred to (at least as good as) $b$, $a \sim b$ means that the consumer is indifferent between $a$ and $b$, and $a \succ b$ means that $a$ is strongly preferred to (better than) $b$. Since a preference relation is complete, we are able to establish a preference ordering between any two elements of $A$, and transitivity prevents cycles in preferences.

Definition 6.4. A preference relation is continuous if $a \succsim b$ whenever there exist sequences $(a^k)$ and $(b^k)$ in $A$ such that $\lim_{k \to \infty} a^k = a$, $\lim_{k \to \infty} b^k = b$, and $a^k \succsim b^k$ for all $k \in \mathbb{N}$.

Notation 6.5. For a set of sets $\{A_i\}_{i \in N}$, $\prod_{i \in N} A_i$ denotes the Cartesian product of the $A_i$ for all $i \in N$.

We can now mathematically define a game:

Definition 6.6. A strategic game is a tuple $\langle N, (A_i), (\succsim_i) \rangle$ consisting of:
• a finite set of players $N$,
• for each player $i \in N$, a nonempty set of actions $A_i$, and
• for each player $i \in N$, a preference relation $\succsim_i$ on $A = \prod_{j \in N} A_j$.

A strategic game is called finite if Ai is finite for all i ∈ N.

Note that each player's preference relation $\succsim_i$ is not defined on his own set of actions $A_i$ but instead on the set of all actions $A$, indicating that each player not only cares about his own actions but also about the actions of the other players when evaluating the outcome.
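As a concrete illustration of Definition 6.6 (my own hypothetical encoding, using the conventional Prisoner's Dilemma payoffs as a utility representation of the preferences), a finite two-player strategic game can be written down directly:

```python
# A finite strategic game <N, (A_i), (>=_i)> with N = {0, 1} and A_0 = A_1 = {"C", "D"}.
# u[i][(a_0, a_1)] is a payoff representing player i's preferences over action profiles.
N = (0, 1)
A = (("C", "D"), ("C", "D"))
u = (
    {("C", "C"): -1, ("C", "D"): -3, ("D", "C"):  0, ("D", "D"): -2},   # player 0
    {("C", "C"): -1, ("C", "D"):  0, ("D", "C"): -3, ("D", "D"): -2},   # player 1
)

def prefers(i, a, b):
    """Player i's preference relation over profiles in A: does i weakly prefer a to b?"""
    return u[i][a] >= u[i][b]

print(prefers(0, ("D", "C"), ("C", "C")))   # True: player 0 weakly prefers defecting against "C"
```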

Notation 6.7. A collection of values of some variable $x_i$, one for each player $i \in N$, is called a profile and is denoted by $(x_i)_{i \in N}$, or $(x_i)$ if the group of players is clear. Given a profile $x = (x_i)_{i \in N}$ and $i \in N$, we let $x_{-i}$ denote the list $(x_j)_{j \in N \setminus \{i\}}$ of elements of the profile $x$ not including the choice of player $i$, and given this definition we let $(x_{-i}, x_i) = x$.

We now define a Nash equilibrium mathematically as well:

Definition 6.8. A Nash equilibrium of a strategic game $\langle N, (A_i), (\succsim_i) \rangle$ is a profile $a^* \in A$ such that for all $i \in N$ we have
$$(a^*_{-i}, a^*_i) \succsim_i (a^*_{-i}, a_i) \text{ for all } a_i \in A_i.$$
The intuitive interpretation of this definition is that given optimal play $a^*_{-i}$ by all other players, any individual player $i$ cannot make himself better off by changing his strategy. To establish the conditions of existence for a Nash equilibrium, we will define it in an alternative way.

Definition 6.9. The best-response function of player $i$,
$$B_i : \prod_{j \in N \setminus \{i\}} A_j \to \mathcal{P}(A_i),$$
is given by
$$B_i(a_{-i}) = \{a_i \in A_i \mid (a_{-i}, a_i) \succsim_i (a_{-i}, a'_i) \text{ for all } a'_i \in A_i\}.$$

Note that $B_i$ is set-valued. The best-response function gives, for fixed actions of the other players, the set of actions of player $i$ that maximize his payoff. Intuitively, each action in the profile of a Nash equilibrium must be a best response to the rest of the profile. This idea justifies our alternative definition of a Nash equilibrium:

Definition 6.10. A Nash equilibrium of a strategic game $\langle N, (A_i), (\succsim_i) \rangle$ is a profile $a^* \in A$ such that $a^*_i \in B_i(a^*_{-i})$ for all $i \in N$.

Thus, to prove the existence of a Nash equilibrium for a given strategic game $\langle N, (A_i), (\succsim_i) \rangle$, it suffices to show that there exists some profile $a^* \in A$ such that for all $i \in N$ we have $a^*_i \in B_i(a^*_{-i})$. If we let $B : A \to \mathcal{P}(A)$ be defined by $B(a) = \prod_{i \in N} B_i(a_{-i})$, then we seek some $a^* \in A$ such that $a^* \in B(a^*)$. We can use Kakutani's Fixed Point Theorem to show that such a point exists, but we need to verify the conditions under which Kakutani's Fixed Point Theorem holds.
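When the game is finite, Definitions 6.9 and 6.10 can be checked by brute force. The sketch below (mine; it reuses the illustrative Prisoner's Dilemma payoffs from the earlier example) computes each $B_i(a_{-i})$ and collects the profiles $a^*$ with $a^*_i \in B_i(a^*_{-i})$ for every $i$:

```python
from itertools import product

# The same illustrative two-player game as in the earlier sketch.
A = (("C", "D"), ("C", "D"))
u = (
    {("C", "C"): -1, ("C", "D"): -3, ("D", "C"): 0, ("D", "D"): -2},
    {("C", "C"): -1, ("C", "D"): 0, ("D", "C"): -3, ("D", "D"): -2},
)

def best_responses(i, a_minus_i):
    """B_i(a_{-i}): the actions of player i with maximal payoff against a_{-i} (Definition 6.9)."""
    def profile(ai):                     # reinsert player i's action into the profile
        a = list(a_minus_i)
        a.insert(i, ai)
        return tuple(a)
    best = max(u[i][profile(ai)] for ai in A[i])
    return {ai for ai in A[i] if u[i][profile(ai)] == best}

def pure_nash_equilibria():
    """Profiles a* with a*_i in B_i(a*_{-i}) for all i (Definition 6.10)."""
    eq = []
    for a in product(*A):
        if all(a[i] in best_responses(i, a[:i] + a[i + 1:]) for i in range(len(A))):
            eq.append(a)
    return eq

print(best_responses(0, ("C",)))        # {'D'}
print(pure_nash_equilibria())           # [('D', 'D')]
```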

Definition 6.11. A preference relation $\succsim_i$ over $A$ is quasi-concave on $A_i$ if for all $a \in A$ the set $\{a'_i \in A_i \mid (a_{-i}, a'_i) \succsim_i (a_{-i}, a_i)\}$ is convex.

Theorem 6.12. The strategic game $\langle N, (A_i), (\succsim_i) \rangle$ has a Nash equilibrium if, for all $i \in N$, $A_i$ is a nonempty, compact, convex subset of a Euclidean space and $\succsim_i$ is continuous and quasi-concave on $A_i$.

Proof. Utilizing $B$ as defined above, we will show that $A$ and $B$ satisfy the conditions needed to apply Kakutani's Fixed Point Theorem. Since the $A_i$ are nonempty, compact, and convex for all $i \in N$, it follows that their Cartesian product must also be nonempty, compact, and convex, so $A$ is nonempty, compact, and convex.

For arbitrary $i \in N$, we will now show that $B_i(a_{-i})$ is nonempty, closed, and convex for all $a_{-i} \in \prod_{j \in N \setminus \{i\}} A_j$. (Fix an arbitrary $a_{-i}$ throughout.) To show that $B_i(a_{-i})$ is nonempty, we assume that we can construct a continuous function $u_i : A_i \to \mathbb{R}$ such that for $a_i, a'_i \in A_i$, $(a_{-i}, a_i) \succsim_i (a_{-i}, a'_i)$ if and only if $u_i(a_i) \geq u_i(a'_i)$. (This type of function is called a utility function.) Then since $A_i$ is compact and $u_i$ is continuous, it follows that $u_i(A_i)$ is compact as well, so by the Extreme Value Theorem there must exist some $a^*_i \in A_i$ such that $u_i(a^*_i) \geq u_i(a_i)$ for all $a_i \in A_i$. By definition of $u_i$, it follows that $(a_{-i}, a^*_i) \succsim_i (a_{-i}, a_i)$ for all $a_i \in A_i$, and thus $a^*_i \in B_i(a_{-i})$, so $B_i(a_{-i})$ is nonempty.

To show that $B_i(a_{-i})$ is closed, take an arbitrary $p \in \overline{B_i(a_{-i})}$. Then there must exist some sequence $(p_k)$ such that $p_k \in B_i(a_{-i})$ for all $k \in \mathbb{N}$ and $\lim_{k \to \infty} p_k = p$. By definition of $B_i(a_{-i})$, we know that $(a_{-i}, p_k) \succsim_i (a_{-i}, a_i)$ for all $a_i \in A_i$. For each $a_i \in A_i$ we can construct a sequence $((a_{-i}, p_k))$ such that $\lim_{k \to \infty} (a_{-i}, p_k) = (a_{-i}, p)$ and a constant sequence $((a_{-i}, a_i))$ with limit $(a_{-i}, a_i)$. From above we know that $(a_{-i}, p_k) \succsim_i (a_{-i}, a_i)$ for all $k \in \mathbb{N}$, so by the continuity of $\succsim_i$ it follows that $(a_{-i}, p) \succsim_i (a_{-i}, a_i)$ for all $a_i \in A_i$. Then $p \in B_i(a_{-i})$, so it follows that $B_i(a_{-i})$ is closed.

We now show that $B_i(a_{-i})$ is convex. Consider $a_i \in B_i(a_{-i})$. Since $\succsim_i$ is quasi-concave on $A_i$, we have that $S = \{a'_i \in A_i \mid (a_{-i}, a'_i) \succsim_i (a_{-i}, a_i)\}$ is convex. However, since $a_i$ is a best response, the only responses weakly preferable to $a_i$ must be best responses, so $S \subset B_i(a_{-i})$. In addition, any other best response $a^*_i \in B_i(a_{-i})$ must be at least as good as $a_i$, otherwise $a^*_i$ could not be a best response because $a_i$ would be better. Then $B_i(a_{-i}) \subset S$, so it follows that $B_i(a_{-i}) = S$ and $B_i(a_{-i})$ is convex.

Since $B_i(a_{-i})$ is nonempty, closed, and convex for all $i \in N$ and $a_{-i} \in \prod_{j \in N \setminus \{i\}} A_j$, it follows that their Cartesian product $B(a)$ must be nonempty, closed, and convex as well for all $a \in A$. Thus $B : A \to \mathcal{P}(A)$. We now show that $B$ is upper semi-continuous. Consider sequences $(x^k), (y^k)$ in $A$ such that $\lim_{k \to \infty} x^k = x^0$, $\lim_{k \to \infty} y^k = y^0$, and $y^k \in B(x^k)$ for all $k \in \mathbb{N}$. Then it follows that $y^k_i \in B_i(x^k_{-i})$ for all $i \in N$, $k \in \mathbb{N}$. Consider an arbitrary $i \in N$. Then by definition of $B_i$, we have that $(x^k_{-i}, y^k_i) \succsim_i (x^k_{-i}, a_i)$ for all $a_i \in A_i$ and $k \in \mathbb{N}$. For each $a_i \in A_i$, we can construct a sequence $((x^k_{-i}, y^k_i))$ such that $\lim_{k \to \infty} (x^k_{-i}, y^k_i) = (x^0_{-i}, y^0_i)$ and a sequence $((x^k_{-i}, a_i))$ such that $\lim_{k \to \infty} (x^k_{-i}, a_i) = (x^0_{-i}, a_i)$, and we know from above that $(x^k_{-i}, y^k_i) \succsim_i (x^k_{-i}, a_i)$ for all $k \in \mathbb{N}$, so by continuity of $\succsim_i$ it follows that $(x^0_{-i}, y^0_i) \succsim_i (x^0_{-i}, a_i)$ for all $a_i \in A_i$. Then we know that $y^0_i \in B_i(x^0_{-i})$ for all $i \in N$, so $y^0 \in B(x^0)$. Thus $B$ is upper semi-continuous, and by Kakutani's Fixed Point Theorem there exists some $a^* \in A$ such that $a^* \in B(a^*)$. By Definition 6.10 it follows that $a^*$ is a Nash equilibrium of the strategic game $\langle N, (A_i), (\succsim_i) \rangle$.

Thus, we have a set of conditions under which a Nash equilibrium is guaranteed to exist. However, these conditions are somewhat limited — for example, no finite game can satisfy them, since a finite, nonempty $A_i$ containing more than one action cannot be convex (and we assume the game is non-degenerate, i.e. players actually have a decision to make). By extending finite games to allow non-deterministic strategies, we will see in the next section that we can prove the existence of a Nash equilibrium for any finite strategic game.

7. Nash Equilibria of Finite Mixed Strategic Games

To begin with, we will examine the economic principles behind evaluating non-deterministic outcomes and define non-deterministic strategies.

Definition 7.1. A lottery is a set of probabilities of realizing certain states of the world, with associated payoffs attached to each state. A probability distribution over a set $A$ coupled with a payoff function $f : A \to \mathbb{R}$ creates a lottery over $A$.

Remark 7.2. Throughout this section, for the strategic game $\langle N, (A_i), (\succsim_i) \rangle$, we assume each player $i \in N$'s preference relation $\succsim_i$ satisfies the von Neumann-Morgenstern utility axioms, so we can construct a utility function $u_i : A \to \mathbb{R}$ (where $A = \prod_{i \in N} A_i$) for each player whose expected value, when coupled with the set of probability distributions over $A$, describes player $i$'s preferences over the set of lotteries on $A$. We now represent this strategic game by $\langle N, (A_i), (u_i) \rangle$.

Notation 7.3. Let $\Delta(X)$ denote the set of probability distributions over $X$. If $X$ is finite and $\delta \in \Delta(X)$, then $\delta(x)$ is the probability that $\delta$ assigns to $x \in X$. The support of $\delta$ is the set $\chi(\delta) = \{x \in X \mid \delta(x) > 0\}$.

Definition 7.4. Given a strategic game $\langle N, (A_i), (u_i) \rangle$, we call an element $\alpha_i \in \Delta(A_i)$ a mixed strategy and an $a_i \in A_i$ a pure strategy. A profile of mixed strategies $\alpha = (\alpha_j)_{j \in N}$ induces a probability distribution over $A$. The probability of $a = (a_j)_{j \in N}$ under $\alpha$ is given by $\alpha(a) = \prod_{j \in N} \alpha_j(a_j)$,¹ assuming that $A_j$ is finite for all $j \in N$ and that each player's strategy is resolved independently.

Using mixed strategies, we can extend a deterministic strategic game by allowing each player to assign a probability distribution to how they will act — for example, we can create a mixed strategy by choosing Strategy A 30% of the time, Strategy B 40% of the time, and Strategy C 30% of the time. The outcomes are evaluated based on the expected value over lotteries on $A$, which are determined by the probability distribution over $A$ arising from each player's choice of mixed strategy and each player's individual utility function.

¹ Note that this is a normal product, not a Cartesian product.

Definition 7.5. The mixed extension of the strategic game $\langle N, (A_i), (u_i) \rangle$ is the strategic game $\langle N, (\Delta(A_i)), (U_i) \rangle$, where $U_i : \prod_{i \in N} \Delta(A_i) \to \mathbb{R}$ gives the expected value of the lottery over $A$ induced by $\alpha \in \prod_{i \in N} \Delta(A_i)$. If $A_j$ is finite for all $j \in N$, then
$$U_i(\alpha) = \sum_{a \in A} \left( \prod_{j \in N} \alpha_j(a_j) \right) u_i(a).$$
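For a finite game the formula for $U_i$ is a finite sum that can be evaluated directly. A minimal sketch of mine (again with the illustrative Prisoner's Dilemma payoffs; $\alpha$ is a hypothetical mixed profile):

```python
from itertools import product

A = (("C", "D"), ("C", "D"))
u = (
    {("C", "C"): -1, ("C", "D"): -3, ("D", "C"): 0, ("D", "D"): -2},
    {("C", "C"): -1, ("C", "D"): 0, ("D", "C"): -3, ("D", "D"): -2},
)

def expected_utility(i, alpha):
    """U_i(alpha) = sum over profiles a of (prod_j alpha_j(a_j)) * u_i(a) (Definition 7.5).

    alpha[j] is a dict mapping each pure strategy of player j to its probability."""
    total = 0.0
    for a in product(*A):
        prob = 1.0
        for j, aj in enumerate(a):
            prob *= alpha[j][aj]          # players randomize independently
        total += prob * u[i][a]
    return total

alpha = ({"C": 0.5, "D": 0.5}, {"C": 0.25, "D": 0.75})
print(expected_utility(0, alpha))         # -2.0 for player 0 under this mixed profile
```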

We state without proof that $U_i$ is multilinear for all $i \in N$ (that is, $U_i(\alpha)$ is linear in each player's mixed strategy separately, holding the others fixed), which arises mainly from the fact that, given an arbitrary $\lambda \in [0, 1]$, $\lambda\beta_j + (1 - \lambda)\gamma_j$ is the mixed strategy given by applying the mixed strategy $\beta_j$ with probability $\lambda$ and the mixed strategy $\gamma_j$ with probability $1 - \lambda$. We now extend the concept of Nash equilibrium to mixed strategy games.

Definition 7.6. A mixed strategy Nash equilibrium of a strategic game is a Nash equilibrium of its mixed extension. We refer to the Nash equilibria of a strategic game itself as its pure strategy Nash equilibria.

We first prove two results that link a game's pure strategy Nash equilibria and mixed strategy Nash equilibria.

Proposition 7.7. The pure strategy Nash equilibria of a strategic game $\langle N, (A_i), (u_i) \rangle$ are a subset of its mixed strategy Nash equilibria.

Proof. Consider a pure strategy Nash equilibrium $a^* \in A$. We can represent this as a profile of mixed strategies by considering the profile $\alpha^* = (e(a^*_j))_{j \in N}$, where $e(a_j) \in \Delta(A_j)$ represents the degenerate mixed strategy that assigns a probability of 1 to $a_j$ and a probability of 0 to all $a'_j \in A_j$ such that $a'_j \neq a_j$. By definition of Nash equilibrium, for all $i \in N$, we have that $u_i(a^*_{-i}, a^*_i) \geq u_i(a^*_{-i}, a_i)$ for all $a_i \in A_i$, so by the multilinearity of $U_i$, introducing any other pure strategy into the support of $\alpha^*_i$ cannot increase $U_i$. (If some $a_i$ were a better response to $a^*_{-i}$ than $a^*_i$, player $i$ could gain by shifting probability over to $a_i$ from $a^*_i$. If the player is indifferent between them, she will gain nothing by shifting probability.) Thus it holds for all $i \in N$ that $U_i(\alpha^*_{-i}, \alpha^*_i) \geq U_i(\alpha^*_{-i}, \alpha_i)$ for all $\alpha_i \in \Delta(A_i)$, so $\alpha^*$ must be a Nash equilibrium of the mixed extension of our strategic game $\langle N, (A_i), (u_i) \rangle$. Thus the pure strategy Nash equilibria, when represented as mixed strategies, are also mixed strategy Nash equilibria.

Proposition 7.8. Given a finite strategic game $\langle N, (A_i), (u_i) \rangle$, a mixed strategy profile $\alpha^* \in \prod_{i \in N} \Delta(A_i)$ is a mixed strategy Nash equilibrium of $\langle N, (A_i), (u_i) \rangle$ if and only if for all $i \in N$ every pure strategy in the support of $\alpha^*_i$ is a best response to $\alpha^*_{-i}$.

Proof. We first show that if $\alpha^*$ is a mixed strategy Nash equilibrium, then for all $i \in N$ every pure strategy in the support of $\alpha^*_i$ is a best response to $\alpha^*_{-i}$. Suppose that there exists some $i \in N$ and some pure strategy $a_i \in A_i$ in the support of $\alpha^*_i$ such that $a_i$ is not a best response to $\alpha^*_{-i}$. Then by the multilinearity of $U_i$, player $i$ could increase her payoff by creating a new mixed strategy that shifts probability from $a_i$ to a pure strategy that is a best response to $\alpha^*_{-i}$ (which must exist because the game is finite). Then $\alpha^*_i$ is not a best response to $\alpha^*_{-i}$, so $\alpha^*$ cannot be a Nash equilibrium, a contradiction. Thus our original assumption was incorrect, and for all $i \in N$ every pure strategy in the support of $\alpha^*_i$ is a best response to $\alpha^*_{-i}$.

We now show that if for all $i \in N$ every pure strategy in the support of $\alpha^*_i$ is a best response to $\alpha^*_{-i}$, then $\alpha^*$ is a mixed strategy Nash equilibrium. Consider an arbitrary $i \in N$, and suppose that there exists some mixed strategy $\alpha'_i \in \Delta(A_i)$ such that $U_i(\alpha^*_{-i}, \alpha'_i) > U_i(\alpha^*_{-i}, \alpha^*_i)$. By multilinearity of $U_i$, $\alpha'_i$ must then have a pure strategy in its support that gives a better payoff in response to $\alpha^*_{-i}$ than at least one pure strategy in the support of $\alpha^*_i$, so at least one pure strategy in the support of $\alpha^*_i$ is not a best response to $\alpha^*_{-i}$. This contradicts the given information, so it must follow that $U_i(\alpha^*_{-i}, \alpha'_i) \leq U_i(\alpha^*_{-i}, \alpha^*_i)$ for all $\alpha'_i \in \Delta(A_i)$. This holds for all $i \in N$, so $\alpha^*$ must be a mixed strategy Nash equilibrium.

Finally, we prove our main result of this section.

Theorem 7.9. Every finite strategic game has a mixed strategy Nash equilibrium.

Proof. Consider an arbitrary finite strategic game $\langle N, (A_i), (u_i) \rangle$, and let $m_i = |A_i|$ for all $i \in N$. Then we can represent each element of $\Delta(A_i)$ as a vector $p^i = (p_1, p_2, \dots, p_{m_i})$ where $p_k \geq 0$ for all $k \in [m_i]$ and $\sum_{k=1}^{m_i} p_k = 1$. Then $\Delta(A_i)$ is a closed standard $(m_i - 1)$-simplex for all $i \in N$, so it is nonempty, compact, and convex for all $i \in N$. We know that $U_i$ is continuous because it is multilinear, so it remains to show that $U_i$ is quasi-concave in $\Delta(A_i)$. Consider $\alpha \in \prod_{i \in N} \Delta(A_i)$; we will show that $S = \{\alpha'_i \in \Delta(A_i) \mid U_i(\alpha_{-i}, \alpha'_i) \geq U_i(\alpha_{-i}, \alpha_i)\}$ is convex. Take $\beta_i, \gamma_i \in S$; we will show that for arbitrary $\lambda \in [0, 1]$, $\lambda\beta_i + (1 - \lambda)\gamma_i \in S$. By definition of $S$, we have that $U_i(\alpha_{-i}, \beta_i) \geq U_i(\alpha_{-i}, \alpha_i)$ and $U_i(\alpha_{-i}, \gamma_i) \geq U_i(\alpha_{-i}, \alpha_i)$, so it follows that $\lambda U_i(\alpha_{-i}, \beta_i) \geq \lambda U_i(\alpha_{-i}, \alpha_i)$ and $(1 - \lambda) U_i(\alpha_{-i}, \gamma_i) \geq (1 - \lambda) U_i(\alpha_{-i}, \alpha_i)$. If we add these inequalities together, we get that

$$\lambda U_i(\alpha_{-i}, \beta_i) + (1 - \lambda) U_i(\alpha_{-i}, \gamma_i) \geq \lambda U_i(\alpha_{-i}, \alpha_i) + (1 - \lambda) U_i(\alpha_{-i}, \alpha_i) = U_i(\alpha_{-i}, \alpha_i).$$

However, by the multilinearity of $U_i$, we have that

$$\lambda U_i(\alpha_{-i}, \beta_i) + (1 - \lambda) U_i(\alpha_{-i}, \gamma_i) = U_i(\alpha_{-i}, \lambda\beta_i + (1 - \lambda)\gamma_i),$$
so then $U_i(\alpha_{-i}, \lambda\beta_i + (1 - \lambda)\gamma_i) \geq U_i(\alpha_{-i}, \alpha_i)$ and $\lambda\beta_i + (1 - \lambda)\gamma_i \in S$. It follows that $S$ is convex, so $U_i$ is quasi-concave in $\Delta(A_i)$. Thus the mixed extension of $\langle N, (A_i), (u_i) \rangle$, given by $\langle N, (\Delta(A_i)), (U_i) \rangle$, satisfies the conditions given by Theorem 6.12, so it must have a Nash equilibrium. By definition, this is a mixed strategy Nash equilibrium of $\langle N, (A_i), (u_i) \rangle$.
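As a concrete instance of Theorem 7.9, and of the support characterization in Proposition 7.8, consider the standard Matching Pennies game (my own worked example; the payoffs are the conventional ones). It has no pure strategy Nash equilibrium, but each player mixing so as to leave the opponent indifferent between the pure strategies in her support yields a mixed strategy Nash equilibrium:

```python
# Matching Pennies: player 0 wins (+1) on a match, player 1 wins on a mismatch.
H, T = "H", "T"
u0 = {(H, H): 1, (H, T): -1, (T, H): -1, (T, T): 1}
u1 = {a: -v for a, v in u0.items()}                 # zero-sum: player 1 gets the negative payoff

# Proposition 7.8: in a full-support equilibrium, each player's mix must leave the
# other player indifferent between the pure strategies in her support.
# Player 0 plays H with probability p; player 1's payoffs from H and from T must be equal:
#   p*u1[(H,H)] + (1-p)*u1[(T,H)] = p*u1[(H,T)] + (1-p)*u1[(T,T)]
p = (u1[(T, T)] - u1[(T, H)]) / (u1[(H, H)] - u1[(T, H)] - u1[(H, T)] + u1[(T, T)])
# Player 1 plays H with probability q; player 0 must be indifferent between H and T:
#   q*u0[(H,H)] + (1-q)*u0[(H,T)] = q*u0[(T,H)] + (1-q)*u0[(T,T)]
q = (u0[(T, T)] - u0[(H, T)]) / (u0[(H, H)] - u0[(H, T)] - u0[(T, H)] + u0[(T, T)])

print(p, q)   # 0.5 0.5: both players mix evenly, the unique mixed strategy Nash equilibrium
```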

Acknowledgments

It is a pleasure to thank my mentor, Dylan Quintana, for suggesting this topic for research and for guiding me through the research process. I greatly appreciate his help in patiently explaining the proofs I had difficulty understanding and in providing comments on the first few drafts of this paper. In addition, I would like to thank Professor Peter May for organizing the University of Chicago's Mathematics REU and for offering me the opportunity to begin to investigate the field of mathematical economics.
