ABSTRACT

DETERMINACY IN THE LOW LEVELS OF THE PROJECTIVE HIERARCHY

by Michael R. Cotton

We give an expository introduction to determinacy for sets of reals. If we are to assume the Axiom of Choice, then not all sets of reals can be determined. So we instead establish determinacy for sets according to their levels in the usual hierarchies of complexity. At a certain point ZFC is no longer strong enough, and we must begin to assume large cardinals. The primary results of this paper are Martin's theorems that Borel sets are determined, that homogeneously Suslin sets are determined, and that if a measurable cardinal κ exists then the complements of analytic sets are κ-homogeneously Suslin. We provide the necessary preliminaries, prove Borel determinacy, introduce measurable cardinals and ultrapowers, and then prove analytic determinacy from the existence of a measurable cardinal. Finally, we give a very brief overview of some further results which provide higher levels of determinacy relative to larger cardinals.

DETERMINACY IN THE LOW LEVELS

OF THE PROJECTIVE HIERARCHY

A Thesis

Submitted to the

Faculty of Miami University

in partial fulfillment of

the requirements for the degree of

Master of Arts

Department of Mathematics

by

Michael R. Cotton

Miami University

Oxford, Ohio

2012

Advisor: Dr. Paul Larson

Reader: Dr. Dennis K. Burke

Reader: Dr. Tetsuya Ishiu

Contents

Introduction
    0.1 Some notation and conventions

1 Reals, trees, and determinacy
    1.1 The reals as ^ω ω
    1.2 Determinacy
    1.3 Borel sets
    1.4 Projective sets
    1.5 Tree representations

2 Borel determinacy
    2.1 Games with a tree of legal positions
    2.2 Coverings of trees and Borel determinacy
    2.3 Proofs of the lemmas

3 Measurable cardinals
    3.1 Filters, ultrapowers, and elementary embeddings
    3.2 Normal measures

4 Det(Π^1_1) given a measurable cardinal
    4.1 Towers of measures and homogeneously Suslin sets
    4.2 Determinacy of homogeneously Suslin sets
    4.3 Effect of a measurable cardinal

5 Remarks and further results

Bibliography

Acknowledgments

The author would like to thank the Miami University Department of Mathematics for financial support and quality education.

Also, Paul Larson gave many hours of an already busy year to our independent meetings and should be thanked for an excellent introduction to the world of set theory.

The author thanks the readers Dennis Burke and Tetsuya Ishiu for their interest and suggestions. Also to Patrick Dowling and Dennis Burke again for a great deal of guidance and support.

Most importantly, the author is grateful to his parents William and Roxanne Cotton. It has been an eventful six years, and this would never have been possible without them.

Introduction

In descriptive set theory, we study the definable sets in spaces that are nice enough to behave like the real numbers, and one area of investigation is the structural consequences of determinacy. However, we cannot always be certain which sets are determined. So then the study of determinacy becomes two-sided and leads to many connections between descriptive set theory, large cardinals, combinatorial set theory, inner models, and even forcing. The first side is exactly how things began, meaning the exploration of the consequences of certain sets being determined (for example, if Π^1_n sets are determined, then all Σ^1_{n+1} sets of reals are Lebesgue measurable, have the Baire property, and have the perfect set property). The second side is to see how much determinacy is actually derivable.

This paper is an expository introduction to the second. It is easily shown that, if we are to assume the Axiom of Choice, then not all sets of reals can be determined. So the natural direction then is to try to work our way up the sets in the usual hierarchies of complexity. At a certain point ZFC is no longer strong enough, and we must begin to assume large cardinals.

In Section 1 we introduce the concepts necessary to understand the determinacy proofs presented, and in Section 2 we prove D.A. Martin’s theorem that in ZFC all Borel sets are determined. But this is as far as ZFC can take us. So in Section 3 we introduce measurable cardinals, and since measurable cardinals are important for further studies in determinacy, we provide a more thorough introduction than is immediately necessary. Finally, Section 4 is devoted to Martin’s theorem that analytic sets are determined if a measurable cardinal exists. The important results which we explore in detail are:

· Borel sets are determined.

· Homogeneously Suslin sets are determined.

· If there is a measurable cardinal κ, then complements of analytic sets are κ-homogeneously Suslin.

None of the theorems here are due to the author. The results in Sections 2 and 4 are Martin’s, with the proofs in Section 4 being the author’s solutions to a sequence of exercises given in Neeman [5]. Also, the work in Section 3 should be credited to Scott, Keisler, and Tarski.

Finally, while some preliminary materials are provided, we assume a beginning to

intermediate level of set theory as well as basic topology, logic, and perhaps a bit of measure theory. However, no specific background in descriptive set theory or large cardinals is needed. When large cardinal hypotheses are mentioned that we do not define, this is merely to help provide some context, and it is not necessary to know what they are in order to continue reading.

0.1 Some notation and conventions

Since we assume the Axiom of Choice throughout the paper (although it is not necessary for many of the results), we use ordinal notation rather than alephs. So when referring to cardinality, ℵ_0 = ω, ℵ_1 = ω_1, 2^{ℵ_0} = 2^ω, etc.

Given sets X and Y, ^X Y is the set of functions from X into Y. In the case of ordinals, we use ^α X interchangeably to mean the set of functions f : α → X or the set of ordered subsets of X of length α, and where both sets are ordinals the usual superscript notation is reserved for arithmetic. For example, 2^ω is the cardinality of the reals while ^ω 2 is the set of infinite binary sequences.

The usual interval notation for ordinals is used (i.e. if α < β then (α, β] = {ζ : α < ζ ≤ β}, [α, β) = {ζ : α ≤ ζ < β}, etc.), as well as interval notation on R. And to avoid confusion between intervals and ordered pairs, we usually write ordered pairs and sequences with the angled brackets ⟨ ⟩.

If κ is a cardinal and X is a set, then [X]^κ denotes the set of unordered subsets of X of size κ, and [X]^{<κ} denotes the subsets of X of size less than κ. However, in the ordered case we continue to use the left superscripts, so that ^{<ω}X means the set of finite sequences from X.

Also, the usual |X| is used to mean the cardinality of a set X. So since a finite sequence s ∈ ^{<ω}X may be equivalently thought of as a function {⟨0, x_0⟩, ⟨1, x_1⟩, ..., ⟨n, x_n⟩}, its length (in this case n + 1) is also denoted |s|.

While working with ultrapowers and embeddings in Section 3, we often quantify over proper classes (for example, "there exists an elementary embedding j : V ≺ M"), which is not formally proper since it cannot be a theorem of ZFC (the collection ^V M is not a set from which to consider existence). So such a thing should technically be considered a schema of theorems, by isolating each embedding or restricting to initial segments V_α, M_α of the models. However, we make no effort to do so, and it should just be understood that formalization is possible whenever these issues might be considered.

1 Reals, trees, and determinacy

In this section we give a brief introduction to the space ^ω ω and its topology, determined sets, the Borel and projective hierarchies, and the use of tree structures to describe certain subsets of ^ω ω.

1.1 The reals as ^ω ω

From a course in general topology, one might already be familiar with the fact that, when ω is given the discrete topology, the Tychonoff product space ^ω ω, consisting of all sequences of natural numbers, is homeomorphic to the irrationals. So it is a good set-theoretic representation of the real numbers.

We usually consider the base B for a topology on ^ω ω where

B = {B ⊆ ^ω ω : ∃s ∈ ^{<ω}ω ∀x ∈ B (x ↾ |s| = s)}.

In other words, we want a basic open set to be all extensions of some fixed initial segment (or cones if we picture a tree structure), which produces precisely the same topology as the base obtained by fixing finitely many coordinates. To see this, notice that the base B is contained in the usual base. So it suffices for us to show that any basic open set in the Tychonoff topology can be generated by elements of B.

For each s ∈ ^{<ω}ω, we let B_s be the basic open set {x ∈ ^ω ω : (x ↾ |s|) = s}.

Now suppose F is some finite subset of ω and f : F → ω is a function giving the coordinates at each point of F. Then the basic open set U = {x ∈ ^ω ω : ∀k ∈ F (x(k) = f(k))} is equal to the set

⋃ {B_s : |s| > sup{k : k ∈ F} ∧ ∀k ∈ F (s(k) = f(k))},

which is a union of sets from B, and since any basic open set of the Tychonoff topology can be expressed in this way, this shows that our base generated by fixing initial segments is sufficient for handling ^ω ω.

An actual homeomorphism between ^ω ω and R \ Q can be tricky to define. One way is to instead map from ^N N onto (0, 1) \ Q by sending ⟨a_i : i ∈ N⟩ to the continued fraction

1 / (a_0 + 1 / (a_1 + 1 / (a_2 + ···)))

and combining this with the more obvious mappings from ^ω ω onto ^N N and from (0, 1) \ Q onto R \ Q.
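As a quick numerical sketch of this map (ours, not the author's; the helper `cf_value` is a hypothetical name), finite truncations of a sequence can be evaluated with exact rational arithmetic, and longer prefixes converge to the irrational the sequence codes:

```python
from fractions import Fraction

def cf_value(prefix):
    """Value of the finite continued fraction 1/(a_0 + 1/(a_1 + ... + 1/a_k))
    for a prefix <a_0, ..., a_k> of positive integers."""
    acc = Fraction(prefix[-1])
    for a in reversed(prefix[:-1]):
        acc = a + 1 / acc
    return 1 / acc

# Truncations of <1, 1, 1, ...> give ratios of consecutive Fibonacci
# numbers, converging to (sqrt(5) - 1)/2, an irrational in (0, 1).
approximations = [cf_value([1] * n) for n in range(1, 12)]
```

Distinct infinite sequences give distinct irrationals, which is what makes the infinite version of this map injective.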

In descriptive set theory, we deal primarily with the class of Polish spaces. A Polish space is a separable, completely metrizable space, and the definitions and proofs we give throughout this paper regarding certain classes of subsets of ^ω ω can be easily generalized to all Polish spaces. However, we are primarily concerned with ^ω ω because it is a particularly important space of this type, and the following theorem should help clarify what we mean.

Theorem 1.1. For every Polish space Y , there is a continuous surjection f : ωω → Y .

Proof.

Let (Y, ρ) be any complete separable metric space, and let D = {d_0, d_1, d_2, ...} be a countable dense subset of Y. Then for each x = ⟨x(n) : n ∈ ω⟩ ∈ ^ω ω we will define a Cauchy sequence y^x = ⟨y^x_n : n ∈ ω⟩ in Y as follows:

Let y^x_0 = d_{x(0)}. Then for each n ∈ ω let:

y^x_{n+1} = d_{x(n+1)} if ρ(y^x_n, d_{x(n+1)}) < 1/2^n, and y^x_{n+1} = y^x_n otherwise.

This indeed produces a Cauchy sequence since, if m < n ∈ ω, we have

ρ(y^x_m, y^x_n) ≤ ρ(y^x_m, y^x_{m+1}) + ρ(y^x_{m+1}, y^x_{m+2}) + ... + ρ(y^x_{n-1}, y^x_n) ≤ 1/2^m + 1/2^{m+1} + ... + 1/2^{n-1} ≤ 1/2^{m-1}.

So given any ε > 0, we just let N ∈ ω be large enough so that 1/2^{N-1} < ε, and we'll have that ρ(y^x_m, y^x_n) < ε for all m, n ≥ N.

By the completeness of Y, these Cauchy sequences (defined for each x ∈ ^ω ω) converge, and we can define the function f : ^ω ω → Y by f(x) = lim_{n→∞} y^x_n.
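A small numerical sketch of this construction (entirely ours, taking Y = [0, 1] under the usual metric, an arbitrary dense enumeration d, and only finitely many steps of y^x):

```python
def cauchy_prefix(x, d, rho):
    """First len(x) terms of the sequence y^x from the proof: start at
    d[x[0]], and move to d[x[n+1]] only when it is within 1/2^n."""
    ys = [d[x[0]]]
    for n in range(len(x) - 1):
        cand = d[x[n + 1]]
        ys.append(cand if rho(ys[-1], cand) < 2 ** (-n) else ys[-1])
    return ys

# A dense subset of [0, 1]: the dyadic rationals i/2^j, level by level.
d = [i / 2 ** j for j in range(12) for i in range(2 ** j + 1)]
rho = lambda a, b: abs(a - b)

ys = cauchy_prefix([5, 40, 300, 7, 1000, 2], d, rho)
# The proof's estimate: rho(y_m, y_n) <= 1/2^(m-1) for all m < n.
ok = all(rho(ys[m], ys[n]) <= 2 ** (-(m - 1))
         for m in range(len(ys)) for n in range(m + 1, len(ys)))
```

The point of the update rule is visible here: each step moves the sequence by less than 1/2^n, so the telescoping bound from the proof holds automatically.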

To see that f is surjective, let y ∈ Y. Then we can build an x ∈ ^ω ω so that ρ(y, d_{x(0)}) < 1/2^3, ρ(y, d_{x(1)}) < 1/2^4, ρ(y, d_{x(2)}) < 1/2^5, ..., etc. Then we will have an x ∈ ^ω ω such that f(x) = y.

To see that f is continuous, notice that if we let σ(x_0, x_1) = 1/2^{n+1}, where n is least such that x_0(n) ≠ x_1(n), then σ is a metric on ^ω ω compatible with our topology. Now let ε > 0 and then let n be large enough so that 1/2^{n-2} < ε. So then whenever σ(x_0, x_1) < 1/2^{n+1},

so that x_0 ↾ (n+1) = x_1 ↾ (n+1), we'll have that y^{x_0}_0 = y^{x_1}_0, ..., y^{x_0}_n = y^{x_1}_n, and in particular ρ(f(x_0), f(x_1)) ≤ ρ(f(x_0), y^{x_0}_n) + ρ(f(x_1), y^{x_1}_n) ≤ 1/2^{n-1} + 1/2^{n-1} = 1/2^{n-2} < ε. □

Another important consideration about our representation ^ω ω of the reals is that we do not need to be concerned with any differences between ^n(^ω ω) and ^m(^ω ω) for different n, m ∈ ω. In other words, dimension doesn't really matter. This is a result of the following theorem:

Theorem 1.2. ^ω ω is homeomorphic to ^k(^ω ω) for any k ∈ ω.

Proof (sketch).
Take any bijection f : ω → k × ω; then we can define a homeomorphism g : ^{k×ω}ω → ^ω ω by g(x)(α) = x(f(α)) for each α ∈ ω. □
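The proof sketch of Theorem 1.2 can be tried out on finite prefixes (our own illustration, using the particular bijection f(n) = (n mod k, n div k)):

```python
def interleave(seqs):
    """Merge k equal-length sequences into one: coordinate n of the
    output is seqs[n % k][n // k], mirroring g(x)(n) = x(f(n))."""
    k = len(seqs)
    return [seqs[n % k][n // k] for n in range(k * len(seqs[0]))]

def deinterleave(seq, k):
    """The inverse map: split one sequence back into k sequences."""
    return [seq[i::k] for i in range(k)]
```

Both maps send prefixes to prefixes (coordinate n of the output depends only on coordinate f(n) of the input), which is exactly why g and its inverse are continuous.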



1.2 Determinacy

For a set X and a set of sequences A ⊆ ^ω X, the game G_X(A) is played as follows:

I    x_0      x_2      x_4    ...
II       x_1      x_3      x_5    ...

And with the rules:

· Each x_n ∈ X.

· Player I wins if x = ⟨x_n : n < ω⟩ ∈ A.

· Player II wins otherwise.

In other words, the two players take turns picking elements of the set and the game continues for ω many moves. If the resulting sequence is in the set A then it is a win for I, and if not it is a win for II. We sometimes refer to the resulting sequence x as a play of the game.

Definition 1.3. We say a player has a winning strategy in the game if that player can somehow guarantee that they will win.∗

We say that a game is determined if one of the players has a winning strategy, and we say that a set A ⊆ ^ω X is determined if the game G_X(A) is determined.

∗ Hence a winning strategy for I would be a way of picking the x_{2i}'s so that, no matter what II's moves are, the sequence will be in the set A, and a winning strategy for II is similar except that II makes the odd moves and wants the sequence to be in ^ω X \ A.

Definition 1.4. A pointclass of a topological space is some collection of subsets of the space, and for a pointclass Γ we write Det(Γ) to say that every set in Γ is determined.

We write ¬Γ to denote the pointclass consisting of all complements of sets in Γ.

And we say that a pointclass Γ of a topological space Y is closed under continuous preimages if, for any A ∈ Γ and any continuous function f : Y → Y, we have that f^{-1}[A] ∈ Γ.

Theorem 1.5. If Γ is a pointclass of ^ω X which is closed under continuous preimages,† then

Det(Γ) =⇒ Det(¬Γ).

Proof.
The function f : ^ω X → ^ω X defined by f(⟨x_n : n < ω⟩) = ⟨x_{n+1} : n < ω⟩ (the function that cuts off the first coordinate) is continuous. So if A ∈ Γ then f^{-1}(A) will be determined.

Suppose that I has a winning strategy in G_X(f^{-1}(A)). Then II will have a winning strategy in G_X(^ω X \ A): II simply 'steals' the strategy that I would play in G_X(f^{-1}(A)). More specifically, II pretends to play whatever I's first move would be for the extra coordinate in the preimage game. Then, since I's strategy in this auxiliary game will guarantee that the sequence in the preimage is in f^{-1}(A), if II continues according to that strategy then x will be in A; a win for II in G_X(^ω X \ A).

Now suppose that II has a winning strategy in G_X(f^{-1}(A)). Then I will have a winning strategy in G_X(^ω X \ A), since I can pretend that II already made the first move in G_X(f^{-1}(A)) and then steal whatever II's strategy is for the remainder of the preimage game. Then I guarantees that x ∉ A and wins G_X(^ω X \ A).

□

Now, we present the Gale-Stewart theorem, which marks the beginning of our journey:

Theorem 1.6. Open and closed sets are determined.

Proof.
Since the open sets and the closed sets are pointclasses which are closed under continuous

†Continuity is with respect to the discrete topology on X.

preimages, it suffices by Theorem 1.5 to show determinacy for the closed sets. So assume A ⊆ ^ω X is closed (where X is given the discrete topology). We will show that if

II does not have a winning strategy in G_X(A), then I has a winning strategy in G_X(A).

So suppose II does not have a winning strategy in G_X(A). Given any partial play ⟨x_0, ..., x_{2i-1}⟩ where I is to play next, we will call it a non-losing position for I if II has no winning strategy from then on. In particular, our assumption is that ∅ is a non-losing position for I. Also, this means that given any non-losing position ⟨x_0, ..., x_{2i-1}⟩, there exists some x_{2i} that I can play‡ in order for ⟨x_0, ..., x_{2i-1}, x_{2i}, x_{2i+1}⟩ to remain non-losing no matter what II plays for x_{2i+1}.

We use this to produce a winning strategy for I. More specifically, ∅ is a non-losing position for I, so I plays an x_0 so that I's next turn will again be non-losing. Then I simply chooses every x_{2i} in this way in order to guarantee that every partial play remains a non-losing position. This indeed gives a winning strategy for I since it means that, for any play x = ⟨x_n : n < ω⟩ ∈ ^ω X of the game where I employs this strategy, it cannot be that x ∈ ^ω X \ A: because ^ω X \ A is open and therefore a union of basic open sets, if x ∈ ^ω X \ A then we would have x ∈ B_{⟨x_0,...,x_{2i-1}⟩} ⊆ ^ω X \ A for some i, but this would be a losing position for I, which contradicts the strategy. Hence x ∈ A and I wins.

□

Sometimes it will be convenient to adopt additional notation for strategies. Since I goes first and makes the even moves, a strategy for I is a function σ : ⋃_{i∈ω} ^{2i}X → X that outputs whatever move I should make given the previous moves. Then whenever y ∈ ^ω X is the sequence of moves made by II and σ is the strategy employed by I, we denote the resulting play of the game by σ ∗ y. Similarly, a strategy for II is a function τ : ⋃_{i∈ω} ^{2i+1}X → X, and whenever z ∈ ^ω X is the sequence of moves made by I and τ is the strategy employed by II, we denote the resulting play of the game by z ∗ τ.

So a winning strategy for I in the game G_X(A) is a strategy σ so that {σ ∗ y : y ∈ ^ω X} ⊆ A, and a winning strategy for II in the game G_X(A) is a strategy τ so that {z ∗ τ : z ∈ ^ω X} ⊆ ^ω X \ A.
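Neither game can be played out for ω moves, of course, but both the ∗-notation and the non-losing-position idea from the proof of Theorem 1.6 can be sketched for games of a fixed finite length (an illustration of ours; all names are hypothetical):

```python
def play(sigma, y_moves):
    """A finite prefix of sigma * y: sigma maps a position (tuple of
    moves so far) to I's next move; y_moves lists II's replies."""
    pos = []
    for y in y_moves:
        pos.append(sigma(tuple(pos)))  # I moves according to the strategy
        pos.append(y)                  # then II moves
    return pos

def I_can_win(pos, length, A, moves):
    """Backward induction: can I force the final sequence into A?  At
    even positions I needs SOME good move; at odd positions the position
    must survive EVERY reply by II.  A position where this returns True
    is 'non-losing' for I in the sense of the Gale-Stewart proof."""
    if len(pos) == length:
        return tuple(pos) in A
    options = (I_can_win(pos + [m], length, A, moves) for m in moves)
    return any(options) if len(pos) % 2 == 0 else all(options)

# In a 2-move game over {0, 1}, I wins the payoff set {(0,0), (0,1)} by
# opening with 0, but cannot win the diagonal {(0,0), (1,1)}: II can
# always mismatch I's move.
```

For clopen payoff sets that are decided by a fixed finite stage, this backward induction is exactly the non-losing-position argument made finite.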

Theorem 1.7 (AC). There is a set of reals which is not determined.

Proof.
I and II each have |{f : ^{<ω}ω → ω}| = ω^{|^{<ω}ω|} = ω^ω = 2^ω many strategies. So we can let

‡ Here, we are making use of the Axiom of Choice in order for I to choose such an x_{2i}. However, choice is not necessary for the result.

⟨σ_α : α < 2^ω⟩ and ⟨τ_α : α < 2^ω⟩ enumerate the strategies for I and II, respectively. Then we recursively pick a_α and b_α for each α < 2^ω as follows:

Having chosen a_β and b_β for all β < α,

· Choose b_α so that b_α = σ_α ∗ y for some y ∈ ^ω ω and b_α ∉ {a_β : β < α}.

· Choose a_α so that a_α = z ∗ τ_α for some z ∈ ^ω ω and a_α ∉ {b_β : β ≤ α}.

This is possible since there are always 2^ω many options for σ_α ∗ y and z ∗ τ_α: the maps y ↦ σ_α ∗ y and z ↦ z ∗ τ_α are one-to-one, and we have only specified α < 2^ω many things that they cannot be.

Now, if we let A = {a_α : α < 2^ω} and B = {b_α : α < 2^ω}, it follows that B ⊆ ^ω ω \ A.

And neither I nor II can have a winning strategy in G_ω(A). To see this, suppose that I has some winning strategy σ_γ ∈ {σ_α : α < 2^ω}. Then II could play whichever sequence y of moves is such that σ_γ ∗ y = b_γ ∈ B ⊆ ^ω ω \ A, a contradiction. Also, suppose that II has some winning strategy τ_γ ∈ {τ_α : α < 2^ω}. Then I could play whichever sequence z of moves is such that z ∗ τ_γ = a_γ ∈ A, a contradiction.

□

The Axiom of Determinacy is the statement that all subsets of ^ω ω are determined. This, however, contradicts the Axiom of Choice, since we have just shown that the use of AC to enumerate the strategies allows us to build sets for which neither player has a winning strategy in the game. Thus, if we want to do our set theory in ZFC, the question becomes: Which sets of reals are determined? And it turns out that, unless we make extra assumptions, the Borel sets are the sharpest answer that we can give.

1.3 Borel sets

A σ-field of subsets of ^ω ω is a pointclass containing ∅ which is closed under countable unions and complementation, and the Borel sets are the subsets of ^ω ω which form the smallest σ-field containing all of the open sets.‡

Equivalently, we can say that the Borel sets are the σ-field generated by the open sets, and this leads us to the following more constructive description:

Definition 1.8. For each 1 ≤ α < ω_1 we define by recursion the classes Σ^0_α, Π^0_α, and ∆^0_α of subsets of ^ω ω as follows:

‡ The Borel sets of any ^ω X, where X is given the discrete topology, are defined in exactly the same way. Actually, we'll prove determinacy for the Borel sets in this more general setting.

· Let Σ^0_1 = the open sets.

· Once Σ^0_α is defined, let Π^0_α = ¬Σ^0_α (i.e. the complements of sets in Σ^0_α).

· Once Π^0_β is defined for all β < α, let Σ^0_α = {⋃_{n<ω} A_n : each A_n ∈ Π^0_{β_n} and each β_n < α}.

· And finally, let ∆^0_α = Σ^0_α ∩ Π^0_α.

In other words, we begin with the open sets. Then we take their complements. Then we take all countable unions of sets available so far. Then we take their complements. Then we take all countable unions. Then complements... etc.

In particular, Π^0_1 = ¬Σ^0_1 is the closed sets. So then Σ^0_2 is the F_σ sets (countable unions of closed sets). And Π^0_2 = ¬Σ^0_2 is the G_δ sets (complements of the F_σ's).
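For a concrete example one level up (our own illustration, not from the text), the set of eventually-zero sequences is a countable union of closed sets:

```latex
E \;=\; \{\, x \in {}^{\omega}\omega : \exists n\,\forall m \ge n\;\, x(m) = 0 \,\}
  \;=\; \bigcup_{n<\omega} C_n ,
\qquad
C_n \;=\; \{\, x \in {}^{\omega}\omega : \forall m \ge n\;\, x(m) = 0 \,\} .
```

Each C_n is closed (if x ∉ C_n, say x(m) ≠ 0 with m ≥ n, then the basic open set B_{x↾(m+1)} misses C_n), so E is F_σ, i.e. Σ^0_2, and its complement, the set of sequences with infinitely many nonzero entries, is G_δ, i.e. Π^0_2.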

Theorem 1.9. For all 1 ≤ β < α < ω_1 we have Σ^0_β ⊆ ∆^0_α, Π^0_β ⊆ ∆^0_α, ∆^0_β ⊆ Σ^0_α, and ∆^0_β ⊆ Π^0_α.

Proof.
∆^0_β ⊆ Σ^0_α and ∆^0_β ⊆ Π^0_α are immediate. And to show that Σ^0_β ⊆ ∆^0_α and Π^0_β ⊆ ∆^0_α, it suffices to show only Σ^0_β ⊆ ∆^0_α, since any ∆^0_α is clearly closed under complements. In fact, all we need is that Σ^0_β ⊆ ∆^0_{β+1}, since it follows from what we already have that γ ≤ α =⇒ ∆^0_γ ⊆ ∆^0_α.

First, notice that if A ∈ Σ^0_β then ^ω ω \ A ∈ Π^0_β. But this then gives that ^ω ω \ A ∈ Σ^0_{β+1}, and hence A ∈ Π^0_{β+1}. So we have Σ^0_β ⊆ Π^0_{β+1}, and now it only remains to show that Σ^0_β ⊆ Σ^0_{β+1} in order to have Σ^0_β ⊆ Σ^0_{β+1} ∩ Π^0_{β+1} = ∆^0_{β+1}.

This takes a quick induction. The space ^ω ω is nice enough for us to have that open sets are F_σ, so Σ^0_1 ⊆ Σ^0_2. Then if we have Σ^0_γ ⊆ Σ^0_{γ+1} for all γ < β, we also have that Π^0_γ ⊆ Π^0_{γ+1} for all γ < β. Hence Σ^0_{β+1} = {⋃_{n<ω} A_n : each A_n ∈ Π^0_{γ_n} and each γ_n ≤ β} ⊇ {⋃_{n<ω} A_n : each A_n ∈ Π^0_{γ_n} and each γ_n < β} = Σ^0_β.
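The step "open sets are F_σ" used above deserves a one-line justification (ours): each basic set B_s is clopen, since its complement is itself a union of basic sets,

```latex
{}^{\omega}\omega \setminus B_s \;=\; \bigcup \,\{\, B_t : t \in {}^{|s|}\omega,\ t \neq s \,\} .
```

And since ^{<ω}ω is countable, there are only countably many basic sets in total, so every open set is a countable union of the (closed) sets B_s.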

This means that the Borel sets, in the way that we have constructed them, form a hierarchy. In fact, we refer to it as the Borel hierarchy, and we can sum up the situation with the following diagram:

          Σ^0_1          Σ^0_2                     Σ^0_α
∆^0_1            ∆^0_2            · · ·    ∆^0_α            · · ·
          Π^0_1          Π^0_2                     Π^0_α

where it follows from Theorem 1.9 that each pointclass is contained in any other that occurs further to the right of it.

Theorem 1.10. Each pointclass in the Borel hierarchy is closed under continuous preimages.

Proof.
We prove this by induction on the hierarchy. Since Σ^0_1 is the open sets, it is obviously closed under continuous preimages. Also, if Σ^0_α is closed under continuous preimages then Π^0_α is closed under continuous preimages since, if we let f : ^ω ω → ^ω ω be any continuous function and A ∈ Π^0_α, it follows that ^ω ω \ A ∈ Σ^0_α. So then f^{-1}(^ω ω \ A) = ^ω ω \ f^{-1}(A) ∈ Σ^0_α, and therefore f^{-1}(A) ∈ Π^0_α. And if Π^0_β is closed under continuous preimages for all β < α, then Σ^0_α must be as well, since then we have that A ∈ Σ^0_α =⇒ A = ⋃_{n<ω} A_n where each A_n ∈ Π^0_{β_n} and each β_n < α. So f^{-1}(A) = f^{-1}(⋃_{n<ω} A_n) = ⋃_{n<ω} f^{-1}(A_n) ∈ Σ^0_α, since each f^{-1}(A_n) is again in Π^0_{β_n}.



1.4 Projective sets

The projective sets are the subsets of ^ω ω obtainable by taking continuous images and complements.

Just as we did for the Borel sets, the projective sets can also be described in a hierarchy of complexity:

Definition 1.11. For each n ∈ N, we define by recursion the classes Σ^1_n, Π^1_n, and ∆^1_n of subsets of ^ω ω as follows:

· Let Σ^1_1 = the continuous images of ^ω ω.

· Once Σ^1_n is defined, let Π^1_n = ¬Σ^1_n.

· Once Π^1_n is defined, let Σ^1_{n+1} = the continuous images of the Π^1_n sets.

· And finally, let ∆^1_n = Σ^1_n ∩ Π^1_n.

In particular, we call the Σ^1_1 sets the analytic subsets of ^ω ω, and their complements, the Π^1_1 sets, are sometimes referred to as the coanalytic sets.

Theorem 1.12. For all n ∈ N we have Σ^1_n ⊆ ∆^1_{n+1}, Π^1_n ⊆ ∆^1_{n+1}, ∆^1_n ⊆ Σ^1_{n+1}, and ∆^1_n ⊆ Π^1_{n+1}.

Proof.
Since the identity function is continuous, it follows from the definition that, for all n ∈ N, we have Σ^1_{n+1} ⊇ Π^1_n and Π^1_{n+1} = ¬Σ^1_{n+1} ⊇ ¬Π^1_n = Σ^1_n. Then the containments in the theorem are immediate.

 So the projective hierarchy can also be viewed in a diagram:

          Σ^1_1          Σ^1_2                     Σ^1_n
∆^1_1            ∆^1_2            · · ·    ∆^1_n            · · ·
          Π^1_1          Π^1_2                     Π^1_n

where again we have that each pointclass is contained in any other which occurs further to the right of it.

We finish our introduction to projective sets with a couple more properties of analytic sets that we will use again later.

Theorem 1.13. A ⊆ ωω is analytic iff it is the continuous image of a closed set.

Proof. If A ⊆ ωω is analytic, then A = f( ωω) for some continuous function f : ωω → ωω. Hence A is a continuous image of a closed set since the whole space ωω is closed. Also, if A is the continuous image of a closed set C, then since a closed subspace of a Polish space is again Polish, and we proved for any Polish space there must exist a continuous surjection from ωω onto it, it follows that A is a continuous image of ωω and is analytic.



Theorem 1.14. Σ^1_1 and Π^1_1 are closed under continuous preimages.

Proof.
To see that Σ^1_1 is closed under continuous preimages, let A ∈ Σ^1_1 and suppose f : ^ω ω → ^ω ω is some continuous function. Then A = h(C) for some continuous h and closed set C, so in particular the set C* = {⟨h(x), x⟩ : x ∈ C} is closed. Now consider the continuous function F : ^ω ω × ^ω ω → ^ω ω × ^ω ω defined by F(x, y) = ⟨f(x), y⟩. Then F^{-1}(C*) must be closed, and we can see that f^{-1}(A) is then the projection onto the first coordinate of F^{-1}(C*). Hence f^{-1}(A) is the continuous image of a closed set in ^2(^ω ω), and as we did in the previous proof, we can simply use a homeomorphism to show that this means f^{-1}(A) is a continuous image of a closed set of ^ω ω.

Then, if we let f : ^ω ω → ^ω ω be any continuous function and A ∈ Π^1_1, it follows that ^ω ω \ A ∈ Σ^1_1. So then f^{-1}(^ω ω \ A) = ^ω ω \ f^{-1}(A) ∈ Σ^1_1, and therefore f^{-1}(A) ∈ Π^1_1.

□

It's actually true that all of the projective pointclasses are closed under continuous preimages, but we omit the rest of the induction up the hierarchy.

1.5 Tree representations

Definition 1.15. For a set Y, we say that T is a tree on Y iff T ⊆ ^{<ω}Y and T is closed under initial segments,

i.e. for any tuple s ∈ ^{<ω}Y we have s ∈ T =⇒ s ↾ n ∈ T for all n ≤ |s|.

We write [T ] to denote the set of all infinite branches through T , or more precisely, the set of infinite branches through T is

[T] = {x ∈ ^ω Y : x ↾ n ∈ T for all n < ω}.

The next theorem provides us with our first useful application of the tree structures:

Theorem 1.16. A ⊆ ^ω Y is closed iff A = [T] for some tree T on Y.

Proof.
If A = [T] for a tree T ⊆ ^{<ω}Y and x ∉ A, then there must be some n ∈ ω such that s = x ↾ n ∉ T. Then it follows that the basic open set B_s = {y ∈ ^ω Y : y ↾ n = s} about x must be disjoint from [T], and hence A is closed. Conversely, if A is closed then we can let the tree T = {s ∈ ^{<ω}Y : s ⊂ x for some x ∈ A}. Obviously then we have A ⊆ [T]. Also, if y ∈ [T] we have y ↾ n ∈ T for each n ∈ ω, which by the definition of the tree means that for each n there exists an x ∈ A such that x ↾ n = y ↾ n. Since A is closed this means it must be that y ∈ A, and hence A = [T].
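Both directions of this correspondence are easy to experiment with on finite data (a sketch of ours; tuples stand for finite sequences, and the helper names are hypothetical):

```python
def is_tree(T):
    """Is the finite set of tuples T closed under initial segments?"""
    return all(s[:n] in T for s in T for n in range(len(s)))

def tree_of(sequences):
    """All initial segments of the given sequences -- the tree
    T = {s : s ⊂ x for some x in A} from the proof, truncated."""
    return {tuple(x[:n]) for x in sequences for n in range(len(x) + 1)}
```

For example, `tree_of` applied to the two branches (0, 1, 2) and (0, 2) produces a five-node tree containing the empty sequence, and `is_tree` confirms it is closed under initial segments.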

 Trees of the type already defined will be sufficient for handling the Borel sets, but we need a little more in order to apply this kind of technique in the projective case where we must deal with continuous images.

Definition 1.17. For sets X and Y, we say that T is a tree on X × Y iff T is a set of pairs ⟨s, t⟩ ∈ ^{<ω}X × ^{<ω}Y such that |s| = |t| and T is closed under initial segments, i.e. we have ⟨s, t⟩ ∈ T =⇒ ⟨s ↾ n, t ↾ n⟩ ∈ T for all n ≤ |s|.

So then we write the set of infinite branches through T as

[T] = {⟨x, y⟩ ∈ ^ω X × ^ω Y : ⟨x ↾ n, y ↾ n⟩ ∈ T for all n < ω}.

Definition 1.18. The projection of T, denoted p[T], is the subset of ^ω X defined by

p[T] = {x ∈ ^ω X : ∃y ∈ ^ω Y s.t. ⟨x, y⟩ ∈ [T]}.

In other words, p[T ] is the projection into the first coordinate of the set [T ]. In Section 4, we will only really be interested in trees on ω × Y for some set Y (trees that code projections onto subsets of ωω), so we will go ahead and focus our notation in that way.

So let T ⊆ ^{<ω}ω × ^{<ω}Y be a tree on ω × Y.

Then, given some finite sequence s ∈ ^{<ω}ω, let

T_s = {t ∈ ^{|s|}Y : ⟨s, t⟩ ∈ T}.

And given an infinite sequence x ∈ ^ω ω, let

T_x = {t ∈ ^{<ω}Y : ⟨x ↾ |t|, t⟩ ∈ T}, which is the same as saying T_x = ⋃_{n<ω} T_{x↾n}.

So T_s is the set of tuples in ^{<ω}Y that can be paired with s so that ⟨s, t⟩ is in the tree, and given an infinite sequence x ∈ ^ω ω, T_x is the tree on Y consisting of tuples in ^{<ω}Y that are paired with some initial segment of x as a node in the tree T. In particular, x ∈ p[T] ⟺ [T_x] ≠ ∅.
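For a finite tree of pairs, the sections T_s and the part of T_x determined by a finite prefix of x can be computed directly (helper names are ours):

```python
def section(T, s):
    """T_s = {t : <s, t> in T}, for T a set of pairs of
    equal-length tuples."""
    return {t for (u, t) in T if u == s}

def along(T, x_prefix):
    """The part of T_x = {t : <x restricted to len(t), t> in T} that a
    finite prefix of x already determines."""
    x = tuple(x_prefix)
    return {t for (u, t) in T if len(t) <= len(x) and u == x[:len(t)]}

# A small tree on ω × ω with two incompatible first coordinates:
T = {((), ()), ((1,), (5,)), ((1, 2), (5, 7)), ((3,), (9,))}
```

Deciding whether x ∈ p[T], i.e. whether [T_x] has an infinite branch, is of course not a finite computation; the helpers only show how the sections are assembled.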

One of the primary uses for these tree structures is that they allow us to code continuous images in a way that we no longer really need to pay attention to the topology. Recall that the analytic sets are the images of the reals under continuous functions. By Theorems 1.13 and 1.16, this can be seen to be equivalent to the following:

Definition 1.19. A ⊆ ωω is analytic iff A = p[T ] for some tree T on ω × ω.

This is because, if A = p[T], then since [T] must be a closed set we have that A is the projection of a closed set. Projections are continuous. So then A is analytic. Conversely, if A is analytic, then A = f(C) for a closed set C. Thus A is the projection into the first coordinate of the closed set {⟨f(x), x⟩ : x ∈ C}. And since this set is closed it must be equal to [T] for some tree on ω × ω. Hence A = p[T] for a tree T on ω × ω.

2 Borel determinacy

We first introduce a slightly new version of the determinacy game, G(T, A), involving trees. Then we will give Martin's inductive proof of Borel determinacy.

2.1 Games with a tree of legal positions

For a set A ⊆ ^ω X, we modify the game G_X(A).

Let T be a tree on X and A ⊆ [T ]. Then the game G(T,A) is played as follows:

I    x_0      x_2      x_4    ...
II       x_1      x_3      x_5    ...

with the rules:

· Each x_n ∈ X.

· Player I wins if x = ⟨x_n : n < ω⟩ ∈ A.

· Player II wins otherwise.

but with the additional rule that:

· ⟨x_0, x_1, ..., x_n⟩ ∈ T for each n < ω.

Also, we need to make sure that the game doesn't get stuck at some finite stage, so we assume that our trees are pruned. We say that T is a pruned tree on X if it is a tree on X such that every node has a proper extension (i.e. s ∈ T =⇒ ∃t ∈ T such that s ⊊ t).

Then, we can also think of a strategy σ for I as a pruned subtree of T such that:

· If ⟨x_0, ..., x_{2i}⟩ ∈ σ, then ⟨x_0, ..., x_{2i}, x_{2i+1}⟩ ∈ σ for any choice of x_{2i+1}.

· For any ⟨x_0, ..., x_{2i-1}⟩ ∈ σ, there is a unique x_{2i} s.t. ⟨x_0, ..., x_{2i-1}, x_{2i}⟩ ∈ σ.

And then σ is a winning strategy for I iff [σ] ⊆ A.

We can define a strategy τ for II similarly, and then τ is a winning strategy for II iff [τ] ∩ A = ∅.
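To make the subtree picture concrete, the two conditions can be checked mechanically on a finite tree of legal positions (entirely our own illustration, with plays truncated at the leaves of T; all names are hypothetical):

```python
from itertools import product

def is_strategy_for_I(sigma, T, moves):
    """Check the subtree conditions: where II is to move (odd length),
    sigma keeps every legal continuation; where I is to move (even
    length), sigma keeps exactly one."""
    for s in sigma:
        legal = [m for m in moves if s + (m,) in T]
        if not legal:
            continue  # s is a leaf of the (truncated) tree T
        kept = [m for m in legal if s + (m,) in sigma]
        if len(s) % 2 == 1 and kept != legal:
            return False  # II's options must all remain available
        if len(s) % 2 == 0 and len(kept) != 1:
            return False  # I's move must exist and be unique
    return True

# All legal positions of length <= 2 over the move set {0, 1}:
T = {t for n in range(3) for t in product((0, 1), repeat=n)}
sigma = {(), (0,), (0, 0), (0, 1)}  # the strategy "I always plays 0"
```

Dropping one of II's continuations, say (0, 1), violates the first condition, and the check reports that the remaining set is no longer a strategy.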

It's easy to see that G(^{<ω}X, A) is still just the game G_X(A). So for a set A ⊆ ^ω X, if we can prove that, whenever T is a pruned tree on X such that A ⊆ [T], the game G(T, A) is determined, it follows that A is a determined set.

2.2 Coverings of trees and Borel determinacy

We continue thinking of strategies as subtrees.

Also, for a tree R and n < ω, let R|n = {s ∈ R : |s| ≤ n}.

Definition 2.1. Given a pruned tree T on X, a covering of T is a triple (T̃, π, ϕ) such that:

· T̃ is a (nonempty) pruned tree on some X̃.

· π : T̃ → T is monotone (i.e. s ⊆ t =⇒ π(s) ⊆ π(t)) with |π(s)| = |s|. Hence we can extend π to be a continuous function from [T̃] into [T] (denoted π : [T̃] → [T]).

· ϕ maps strategies for I and II in T̃ to strategies for I and II in T, respectively. Moreover, ϕ(σ̃) restricted to positions of length ≤ n depends only on σ̃ restricted to positions of length ≤ n,

i.e., m ≤ n =⇒ ϕ(σ̃|m) = ϕ(σ̃|n)|m.

· If σ̃ is a strategy for I (resp. II) in T̃ and x ∈ [ϕ(σ̃)], then ∃x̃ ∈ [σ̃] such that π(x̃) = x.

For k < ω, we say that a covering (T̃, π, ϕ) is a k-covering if T̃|2k = T|2k and π ↾ (T̃|2k) is the identity. In particular, this gives that ϕ(σ̃)|2k = σ̃|2k for any strategy σ̃ in T̃. (So intuitively we can think of a k-covering as being one that doesn't affect each player's first k moves.)

Finally, we say that a covering (T̃, π, ϕ) unravels A ⊆ [T] if Ã = π^{-1}(A) is clopen, i.e., Ã = π^{-1}(A) ∈ ∆^0_1.‡

First notice that, if (T̃, π, ϕ) is a covering of T and A ⊆ [T], then whenever σ̃ is a winning strategy for I in G(T̃, Ã), ϕ(σ̃) is a winning strategy for I in G(T, A): if not, there would be an x ∈ [ϕ(σ̃)] such that x ∉ A. But then the last part of the definition gives us that ∃x̃ ∈ [σ̃] such that π(x̃) = x. In particular, we would have x̃ ∈ Ã since [σ̃] ⊆ Ã. But then we have x ∈ A, a contradiction. Similarly, if τ̃ is a winning strategy for II in G(T̃, Ã), then ϕ(τ̃) is a winning strategy for II in G(T, A).

‡Note that X̃ doesn’t have to be the same set as X. So, when we say that Ã is Δ⁰1, we mean that Ã is a Δ⁰1 subset of [T̃].

So if a covering (T̃, π, ϕ) unravels A ⊆ [T], then Ã is clopen and therefore G(T̃, Ã) is determined by Theorem 1.6. By the previous remarks, it then follows that G(T,A) is determined.
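The transfer of determinacy through a covering can be summarized in one chain of implications (a schematic restatement of the remarks above, not an additional argument):

```latex
% How an unraveling yields determinacy of G(T,A):
(\tilde{T},\pi,\varphi)\ \text{unravels}\ A
  \;\Longrightarrow\; \tilde{A}=\pi^{-1}(A)\ \text{is clopen in}\ [\tilde{T}]
  \;\Longrightarrow\; G(\tilde{T},\tilde{A})\ \text{is determined (Theorem 1.6)}
  \;\Longrightarrow\; G(T,A)\ \text{is determined,}
```

where the last step uses ϕ: a winning strategy σ̃ for either player in G(T̃, Ã) is mapped to a winning strategy ϕ(σ̃) for the same player in G(T,A).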

The main result of this section will be the following theorem:

Theorem 2.2. If T is a pruned tree on X and A ⊆ [T] is a Borel subset of [T], then for every k < ω, there is a k-covering of T which unravels A.

Then, by considering T = <ωX and therefore making G(T,A) = GX(A), we can see that this gives the corollary:

Corollary 2.3. Borel sets are determined.

To prove the theorem, we will need two lemmas which, for the moment, we will take as given:

Lemma 2.4. Let k < ω, and let (Ti+1, πi+1, ϕi+1) be a (k + i)-covering of Ti for each i < ω. Then there is a pruned tree T∞ and maps π∞,i, ϕ∞,i such that each (T∞, π∞,i, ϕ∞,i) is a (k + i)-covering of Ti where π∞,i = πi+1 ◦ π∞,i+1 and ϕ∞,i = ϕi+1 ◦ ϕ∞,i+1.

Lemma 2.5. If T is a pruned tree and A ⊆ [T ] is closed, then for every k < ω, there is a k-covering of T which unravels A.

Proof of Theorem 2.2. Let T be a pruned tree on X, and k ∈ ω. First note that, whenever a covering (T̃, π, ϕ) unravels a set A, it also unravels its complement, since π is continuous and Δ⁰1 is closed under complementation.

So it suffices to prove by induction that, for all 1 ≤ α < ω1, if A ⊆ [T] is a Σ⁰α subset of [T] then there is a k-covering of T which unravels A. By the previous remark and Lemma 2.5, we already have the result for Σ⁰1. So suppose we have the result for all Σ⁰β with β < α. Then again, since the coverings unravel complements, it follows that we have the result for each Π⁰β with β < α.

Now, let A be a Σ⁰α subset of [T], and let k ∈ ω. Then A = ⋃i<ω Ai where each Ai is a Π⁰βi subset of [T] and each βi < α. Let (T1, π1, ϕ1) be a k-covering of T0 = T which unravels A0. Then, since the Borel pointclasses are closed under continuous preimages, we have that π1⁻¹(Aj) is a Π⁰βj subset of [T1] for each j > 0, and π1⁻¹(A0) is a Δ⁰1 subset of [T1]. Then, let (T2, π2, ϕ2) be a (k+1)-covering of T1 which unravels π1⁻¹(A1). Again by closure under continuous preimages, we have that π2⁻¹(π1⁻¹(Aj)) = (π1 ◦ π2)⁻¹(Aj) is a Π⁰βj subset of [T2] for each j > 1, and π2⁻¹(π1⁻¹(Aj)) = (π1 ◦ π2)⁻¹(Aj) is a Δ⁰1 subset of [T2] for each j ≤ 1.

Continuing in this way, we recursively define a (k + i)-covering (Ti+1, πi+1, ϕi+1) of Ti which unravels πi⁻¹(πi−1⁻¹(... (π1⁻¹(Ai)))) = (π1 ◦ ... ◦ πi−1 ◦ πi)⁻¹(Ai), so that πi+1⁻¹(πi⁻¹(... (π1⁻¹(Aj)))) = (π1 ◦ ... ◦ πi ◦ πi+1)⁻¹(Aj) is a Π⁰βj subset of [Ti+1] for each j > i and a Δ⁰1 subset of [Ti+1] for each j ≤ i. Then we can apply Lemma 2.4 to get a (k + i)-covering

(T∞, π∞,i, ϕ∞,i) of each Ti where π∞,i = πi+1 ◦ π∞,i+1 and ϕ∞,i = ϕi+1 ◦ ϕ∞,i+1.

But by the recursive construction, we can see that (T∞, π∞,0, ϕ∞,0) unravels each Ai, so that π∞,0⁻¹(Ai) is a Δ⁰1 subset of [T∞] for each i < ω. Hence, π∞,0⁻¹(A) = π∞,0⁻¹(⋃i<ω Ai) = ⋃i<ω π∞,0⁻¹(Ai) is then a Σ⁰1 subset of [T∞]. So finally, we can reapply the case for Σ⁰1 and let (T̃, π, ϕ) be a k-covering of T∞ which unravels π∞,0⁻¹(A). Thus, π⁻¹(π∞,0⁻¹(A)) = (π∞,0 ◦ π)⁻¹(A) is a Δ⁰1 subset of [T̃], and we can see that (T̃, π∞,0 ◦ π, ϕ∞,0 ◦ ϕ) is a k-covering of T which unravels A. □



2.3 Proofs of the lemmas

The difficult part of the proof of Borel determinacy in the last section is establishing the two lemmas that we initially passed over. We’ll begin by proving Lemma 2.4 before we introduce some additional notation and definitions needed for Lemma 2.5.

The first of the two lemmas gave us the existence of the inverse limits needed for the inductive step. It is also worth noticing that the proof of this lemma is precisely why we used k-coverings rather than arbitrary coverings.

Proof of Lemma 2.4.

Let k < ω, and let (Ti+1, πi+1, ϕi+1) be a (k + i)-covering of Ti for each i < ω.

We want to show that there exists a pruned tree T∞ and maps π∞,i, ϕ∞,i such that each (T∞, π∞,i, ϕ∞,i) is a (k + i)-covering of Ti where π∞,i = πi+1 ◦ π∞,i+1 and ϕ∞,i = ϕi+1 ◦ ϕ∞,i+1.

17 Define the tree T∞ by:

s ∈ T∞ ⇐⇒ ∃i < ω s.t. s ∈ Ti and |s| ≤ 2(k + i).

Then T∞ is pruned. To see this note that, even when s ∈ Ti is such that |s| = 2(k + i), s has an extension in Ti+1, since Ti+1 is pruned and belongs to a (k + i)-covering of Ti. Hence Ti+1|2(k + i) = Ti|2(k + i), and then there is an extension t ⊋ s in Ti+1 with |t| = 2(k + i) + 1 ≤ 2(k + (i + 1)), so that t ∈ T∞. Also, it is clear from this definition that we will have T∞|2(k + i) = Ti|2(k + i). So our covering will indeed be a (k + i)-covering of the Ti’s.
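The claim that T∞|2(k + i) = Ti|2(k + i) can be checked by chaining the restriction identities of the individual coverings (a routine verification, spelled out here for convenience):

```latex
% Each (T_{m+1},\pi_{m+1},\varphi_{m+1}) is a (k+m)-covering, so
T_{m+1}|2(k+m) \;=\; T_m|2(k+m) \qquad (m < \omega).
% One inclusion is immediate: s \in T_i with |s| \le 2(k+i) gives s \in T_\infty.
% Conversely, take s \in T_\infty with |s| \le 2(k+i), say s \in T_j, |s| \le 2(k+j).
% If j > i, then |s| \le 2(k+i) \le 2(k+j-1), so
s \in T_j|2(k+j-1) = T_{j-1}|2(k+j-1) \subseteq T_{j-1},
% and iterating down from j to i gives s \in T_i.
% If j < i, then s \in T_j|2(k+j) = T_{j+1}|2(k+j) \subseteq T_{j+1},
% and iterating up gives s \in T_i. Hence T_\infty|2(k+i) = T_i|2(k+i).
```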

Now define each π∞,i by:

π∞,i(s) = s, if |s| ≤ 2(k + i);
π∞,i(s) = (πi+1 ◦ πi+2 ◦ ... ◦ πj)(s), if 2(k + i) < |s| ≤ 2(k + j).

Then π∞,i is well defined, since in the second case the choice of π∞,i(s) is independent of j. Also, since each πi is monotone with |πi(s)| = |s|, we can see that this is also true for π∞,i, and it’s clear from this definition that we have π∞,i = πi+1 ◦ π∞,i+1.
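The identity π∞,i = πi+1 ◦ π∞,i+1 can be verified case by case from the definition (again a routine check, using that πi+1 belongs to a (k + i)-covering):

```latex
% Case |s| \le 2(k+i): \pi_{\infty,i+1}(s) = s, and since
% \pi_{i+1} \restriction (T_{i+1}|2(k+i)) is the identity,
(\pi_{i+1} \circ \pi_{\infty,i+1})(s) = \pi_{i+1}(s) = s = \pi_{\infty,i}(s).
% Case 2(k+i) < |s| \le 2(k+i+1): \pi_{\infty,i+1}(s) = s, so
(\pi_{i+1} \circ \pi_{\infty,i+1})(s) = \pi_{i+1}(s) = \pi_{\infty,i}(s) \quad (j = i+1).
% Case 2(k+i+1) < |s| \le 2(k+j) with j > i+1:
% \pi_{\infty,i+1}(s) = (\pi_{i+2} \circ \cdots \circ \pi_j)(s), so
(\pi_{i+1} \circ \pi_{\infty,i+1})(s)
  = (\pi_{i+1} \circ \cdots \circ \pi_j)(s) = \pi_{\infty,i}(s).
```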

Finally, define each ϕ∞,i by:

· ϕ∞,i(σ∞)|2(k + i) = σ∞|2(k + i).

· For j > i, ϕ∞,i(σ∞)|2(k + j) = (ϕi+1 ◦ ϕi+2 ◦ ... ◦ ϕj)(σ∞|2(k + j)).

It’s clear then that ϕ∞,i(σ∞) restricted to positions of length ≤ n depends only on σ∞ restricted to positions of length ≤ n. We can also see from this definition that ϕ∞,i = ϕi+1 ◦ ϕ∞,i+1. So all that’s left to verify is the last condition in the definition of a covering.

So suppose σ∞ is a strategy in T∞ and xi ∈ [ϕ∞,i(σ∞)]. We want to show that

∃ x∞ ∈ [σ∞] such that π∞,i(x∞) = xi.

Let xi+1 ∈ [ϕ∞,i+1(σ∞)], xi+2 ∈ [ϕ∞,i+2(σ∞)], etc., come from the last covering condition for each of (Ti+1, πi+1, ϕi+1), (Ti+2, πi+2, ϕi+2), etc. And consider that we also have ϕj+1(ϕ∞,j+1(σ∞)) = ϕ∞,j(σ∞) for all j ≥ i from our definition. Then πj+1(xj+1) = xj for all j ≥ i, and since πj+1 is the identity on sequences of length ≤ 2(k + j), it follows that xi, xi+1, ... must converge to a sequence x∞ which is defined by x∞ ↾ 2(k + j) = xj ↾ 2(k + j) for all j ≥ i.

So finally, we have σ∞|2(k + j) = ϕ∞,j(σ∞)|2(k + j) for j ≥ i, and in particular this means xj ∈ [ϕ∞,j(σ∞)] for j ≥ i, and it follows that x∞ ∈ [σ∞] and π∞,i(x∞) = xi.

□

Definition 2.6. Let T be a tree on a set X. Then we say that Σ is a quasistrategy for I in T if it is a pruned subtree of T such that:

· If ⟨x0, . . . , x2i⟩ ∈ Σ, then ⟨x0, . . . , x2i, x2i+1⟩ ∈ Σ for any choice of x2i+1.

· For any ⟨x0, . . . , x2i−1⟩ ∈ Σ, there is an x2i s.t. ⟨x0, . . . , x2i−1, x2i⟩ ∈ Σ.

A quasistrategy for II is defined similarly.

We say that a quasistrategy Σ is a winning quasistrategy for I in G(T,A) if [Σ] ⊆ A, and we say it is a winning quasistrategy for II in G(T,A) if [Σ] ∩ A = ∅.

Note that our definition of quasistrategy differs from that of a strategy only in that we do not require uniqueness of the players’ moves. In other words, a quasistrategy doesn’t tell the player which move to make; it merely narrows down their choices. Also note that, if a quasistrategy is a winning quasistrategy, then it contains a winning strategy.‡

Recall from the proof of Theorem 1.6 the definition of a non-losing position. We extend this definition somewhat and also say that, at any odd turn when I has already played, ⟨x0, . . . , x2i−1, x2i⟩ is a non-losing position for I if II has no winning strategy from then on. So then, if I has a winning quasistrategy in the game G(T,A) and we let Σ = {p ∈ T : p is a non-losing position for I}, this also gives a winning quasistrategy for I, which we will call the canonical quasistrategy for I in the game G(T,A). If II has a winning quasistrategy, we define the canonical quasistrategy for II in the same way. Also, note that the canonical quasistrategy will contain any other winning quasistrategy.

For the next proof (which is really the meat of the argument as it demystifies the notion of a covering), we’ll introduce a bit more notation as well:

For a tree T on a set X and a finite sequence s ∈ T, let Ts = {v ∈ <ωX : s⌢v ∈ T}, and if A ⊆ [T], let As = {x ∈ ωX : s⌢x ∈ A} (where s⌢v denotes concatenation).

‡Note that getting a winning strategy from a winning quasistrategy requires the Axiom of Choice.

Proof of Lemma 2.5. Let T be a (nonempty) pruned tree on X, let A ⊆ [T] be closed in [T], and fix k ∈ ω.

We want to show that there is a k-covering of T which unravels A.

Let TA = {x ↾ n : x ∈ A and n ∈ ω}. Since A ⊆ [T] and trees are closed under initial segments, we have TA ⊆ T.

Recall that the original game G(T,A) is played

I:  x0    x2    x4   ...
II:    x1    x3    x5   ...

so that each ⟨x0, x1, . . . , xn⟩ ∈ T and I wins if x = ⟨xn : n < ω⟩ ∈ A.

We will define a k-covering (T̃, π, ϕ) via an auxiliary game in which the players make additional moves on top of the moves for G(T,A). The set of all possible positions in this new game will be the definition of the tree T̃.

The new game is played as follows:

The first 2k moves are played in T as before. (We want this to be a k-covering.)

I:  x0   . . .   x2k−2
II:    x1   . . .   x2k−1

But then I’s next move is a pair ⟨x2k, ΣI⟩ where ⟨x0, . . . , x2k⟩ ∈ T and ΣI is a quasistrategy for I in the tree T⟨x0,...,x2k⟩ (where II would move first in a game on T⟨x0,...,x2k⟩). The quasistrategy played by I will serve as a restriction on the rest of the players’ moves in G(T,A). (The purpose of an unraveling is to simplify the game.)

I:  x0  ···  x2k−2  ⟨x2k, ΣI⟩
II:    x1  ···  x2k−1

Now, II has two options for how to play the next move:

Option 1:

I:  x0  ···  x2k−2  ⟨x2k, ΣI⟩
II:    x1  ···  x2k−1  ⟨x2k+1, s⟩

II plays a pair ⟨x2k+1, s⟩ where ⟨x0, . . . , x2k+1⟩ ∈ T, s ∈ T⟨x0,...,x2k+1⟩, |s| is even, and s ∈ (ΣI)⟨x2k+1⟩ \ (TA)⟨x0,...,x2k+1⟩. And if II goes with this option then I and II continue playing x2k+2, x2k+3, ... as usual so that ⟨x0, . . . , xn⟩ ∈ T for all n, but they are required to make sure s ⊆ ⟨x2k+2, x2k+3, . . .⟩. In other words, this option implies that II goes ahead and chooses the next |s| many (an even number of) moves of the game G(T,A) in a way that still agrees with I’s restricting quasistrategy but at the same time will ensure that I does not win G(T,A) with the play ⟨xn : n < ω⟩.

Option 2:

I:  x0  ···  x2k−2  ⟨x2k, ΣI⟩
II:    x1  ···  x2k−1  ⟨x2k+1, ΣII⟩

II plays a pair ⟨x2k+1, ΣII⟩ where ⟨x0, . . . , x2k+1⟩ ∈ T and ΣII is a quasistrategy for player II in (ΣI)⟨x2k+1⟩ ∩ (TA)⟨x0,...,x2k+1⟩. And if II goes with this option then I and II continue playing x2k+2, x2k+3, ... as usual so that ⟨x2k+2, x2k+3, . . . , xi⟩ ∈ ΣII for all i ≥ 2k + 2. In other words, this option implies that II agrees with I’s restricting quasistrategy but offers another quasistrategy to restrict the game further.

Again, the tree T̃ is defined as the set of all possible (legal) positions in this game.

Also, the map π is obvious:

π(⟨x0, . . . , x2k−1, ⟨x2k, ΣI⟩, ⟨x2k+1, ∗⟩, x2k+2, . . . , xi⟩) = ⟨x0, . . . , xi⟩.

This clearly induces a continuous map π : [T̃] → [T], and we can also see that

x̃ ∈ π⁻¹(A) ⇐⇒ x̃(2k + 1) is of the form ⟨x2k+1, ΣII⟩

since the sequence x ∉ A iff II chooses Option 1. In particular, this means that π⁻¹(A) is a clopen subset of [T̃]: by continuity, π⁻¹(A) is closed in [T̃], and π⁻¹(A) is also open in [T̃] since for any x̃ ∈ π⁻¹(A) the cone B⟨x0,...,⟨x2k+1,ΣII⟩⟩ contains x̃ and is a subset of π⁻¹(A).

Now, the tricky part is describing the map ϕ. To do so we will show that, given a strategy σ̃ in T̃, we can find a strategy σ = ϕ(σ̃) in T such that whenever x ∈ [σ] there exists an x̃ ∈ [σ̃] where π(x̃) = x. (It will be clear that ϕ(σ̃) restricted to positions of length ≤ n depends only on σ̃ restricted to positions of length ≤ n.)

Case 1: Suppose σ̃ is a strategy for I in T̃.

We let σ|(2k − 1) = σ̃|(2k − 1). Then, σ̃ gives the next ⟨x2k, ΣI⟩, and we let σ give this x2k. Next, II plays an x2k+1, and (by determinacy for open games) there are two things that can happen:

Subcase 1A:

If I has a winning strategy in the game G((ΣI)⟨x2k+1⟩, [(ΣI)⟨x2k+1⟩] \ A⟨x0,...,x2k+1⟩), then we require σ to play the moves of this strategy. After finitely many more moves there will be a shortest even-length s so that s ∉ (TA)⟨x0,...,x2k+1⟩ (and hence is a winning position for I in this game). Let ⟨x2k+2, . . . , xi⟩ = s, and it follows (by II playing Option 1) that ⟨x0, . . . , x2k−1, ⟨x2k, ΣI⟩, ⟨x2k+1, s⟩, x2k+2, . . . , xi⟩ is a legal position in T̃. Then we let σ just require I to play according to σ̃ from xi+1 on. It’s clear then that if x ∈ [σ] this gives an x̃ ∈ [σ̃] s.t. π(x̃) = x (the x̃ where II played this ⟨x2k+1, s⟩).

Subcase 1B:

If II has a winning strategy in the game G((ΣI)⟨x2k+1⟩, [(ΣI)⟨x2k+1⟩] \ A⟨x0,...,x2k+1⟩), then we let ΣII be II’s canonical quasistrategy in this game. In this case, we can assume II plays ⟨x2k+1, ΣII⟩, and I can continue by letting σ give the same moves as σ̃ as long as II continues by playing moves in T that are legal in T̃ (meaning II plays so that each ⟨x2k+2, . . . , xi⟩ ∈ ΣII). If II doesn’t do so and plays so that some ⟨x2k+2, . . . , xi⟩ ∉ ΣII, then since ΣII was the canonical quasistrategy, it follows that this would be a winning position for I in this game, and I could simply continue as in Subcase 1A. Again, if x ∈ [σ] this gives an x̃ ∈ [σ̃] s.t. π(x̃) = x (the x̃ where II played this ⟨x2k+1, ΣII⟩).

In the case where II plays so that some ⟨x2k+2, . . . , xi⟩ ∉ ΣII, what we mean by I continuing ‘as in Subcase 1A’ is that ⟨x2k+2, . . . , xi⟩ = s is an even-length tuple such that ⟨x0, . . . , x2k−1, ⟨x2k, ΣI⟩, ⟨x2k+1, s⟩, x2k+2, . . . , xi⟩ is a legal position in T̃, and we again let σ require I to play according to σ̃ from xi+1 on.

Case 2: Suppose σ̃ is a strategy for II in T̃.

We let σ|(2k − 1) = σ̃|(2k − 1). Then I plays some x2k.

Define S = {ΣI : ΣI is a quasistrategy for I in T⟨x0,...,x2k⟩} and U = {⟨x2k+1⟩⌢s ∈ T⟨x0,...,x2k⟩ : |s| is even and ∃ ΣI ∈ S s.t. σ̃ requires II to play ⟨x2k+1, s⟩ if I plays ⟨x2k, ΣI⟩}. So then we have that U = {x ∈ [T⟨x0,...,x2k⟩] : ∃ ⟨x2k+1⟩⌢s ∈ U s.t. ⟨x2k+1⟩⌢s ⊆ x} is an open set in [T⟨x0,...,x2k⟩].

Now, we consider the game G(T⟨x0,...,x2k⟩, U)

I:    x2k+2    x2k+4   ...
II:  x2k+1    x2k+3   ...

where II plays first and wins iff ⟨x2k+1, x2k+2, . . .⟩ ∈ U. Again by determinacy for open games, there are two things that can happen:

Subcase 2A:

If II has a winning strategy in this game, then (in T) σ continues from x2k by playing this winning strategy until a position ⟨x2k+1, x2k+2, . . . , xi⟩ is reached such that ⟨x2k+1, x2k+2, . . . , xi⟩ ∈ U. Then we can let s = ⟨x2k+2, . . . , xi⟩, and it follows from the definition of U that there exists a ΣI witnessing that this sequence is indeed in U. Then we let σ continue on from xi according to the plays of σ̃ that follow ⟨x0, . . . , x2k−1, ⟨x2k, ΣI⟩, ⟨x2k+1, s⟩, x2k+2, . . . , xi⟩. Again, if x ∈ [σ] this gives an x̃ ∈ [σ̃] s.t. π(x̃) = x (the x̃ where I played this ⟨x2k, ΣI⟩ and II played this ⟨x2k+1, s⟩).

Subcase 2B:

If I has a winning strategy in this game, then let ΣI be I’s canonical quasistrategy in this game. Then ΣI is a winning quasistrategy for I, so we have [ΣI] ⊆ [T⟨x0,...,x2k⟩] \ U, which means that ΣI ⊆ T⟨x0,...,x2k⟩ \ U. So, if I played ⟨x2k, ΣI⟩ in T̃, then σ̃ must tell II to respond with Option 2. If not, an Option 1 play ⟨x2k+1, s⟩ would mean ⟨x2k+1⟩⌢s ∈ U, and the rules of T̃ would also imply ⟨x2k+1⟩⌢s ∈ ΣI, which is a contradiction since ΣI ⊆ T⟨x0,...,x2k⟩ \ U. So if I played ⟨x2k, ΣI⟩ in T̃ then II must respond with some ⟨x2k+1, ΣII⟩ according to σ̃. Then we let σ play this same x2k+1 and then follow σ̃ as long as I plays according to the rules of T̃. If for any move I doesn’t play according to T̃ (meaning some ⟨x2k+2, . . . , x2i⟩ ∉ ΣII), then it follows from the definition of a quasistrategy that, since ΣII is a quasistrategy for II in (ΣI)⟨x2k+1⟩, I’s moves are not any more restricted by ΣII than they already were in (ΣI)⟨x2k+1⟩. Hence we would have that ⟨x2k+2, . . . , x2i⟩ ∉ (ΣI)⟨x2k+1⟩, so ⟨x2k+2, . . . , x2i⟩ is a losing position for I in this game, and we would be back to Subcase 2A: II had the winning strategy. Again, if x ∈ [σ] this gives an x̃ ∈ [σ̃] s.t. π(x̃) = x (the x̃ where I played this ⟨x2k, ΣI⟩ and II played this ⟨x2k+1, ΣII⟩).

In the case where I doesn’t play according to T̃ and ⟨x2k+2, . . . , x2i⟩ ∉ (ΣI)⟨x2k+1⟩, what we mean by being ‘back to Subcase 2A’ is that we can then let s = ⟨x2k+2, . . . , xi⟩ and again let σ continue on from xi according to the plays of σ̃ that follow ⟨x0, . . . , x2k−1, ⟨x2k, ΣI⟩, ⟨x2k+1, s⟩, x2k+2, . . . , xi⟩.

Finally, we can see that if we define ϕ by mapping each strategy σ̃ to the strategy σ built in this way, then ϕ meets the appropriate definition. Thus, we’ve finished constructing a covering (T̃, π, ϕ) which unravels A and is such that T̃|2k = T|2k, π ↾ (T̃|2k) is the identity, and ϕ(σ̃)|2k = σ̃|2k. So we have the result. □

3 Measurable cardinals

In this section we introduce measurable cardinals and ultrapowers, and we discuss the relationship between them. We make an attempt to suggest that measurable cardinals are large in a way that should be considered much more substantial than many weaker large cardinal hypotheses.

3.1 Filters, ultrapowers, and elementary embeddings

Since we are only interested in two-valued measures, it is often convenient to treat them instead as filters. A filter on a set X is a collection of nonempty subsets of X which is closed under supersets and finite intersections.

We call a filter nonprincipal if it does not include any singletons.

An ultrafilter is a maximal filter. Equivalently, a filter µ on a set X is an ultrafilter iff for every A ⊆ X either A ∈ µ or X \ A ∈ µ.

A filter µ is λ-complete if whenever δ < λ and {Aα : α < δ} ⊆ µ, we have

⋂α<δ Aα ∈ µ.

Notice then that all filters are at least ω-complete, and ω1-completeness is the same as closure under countable intersections. We usually say countably complete to mean ω1-complete.
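As a concrete illustration of these completeness notions (an example, not needed later), the Fréchet filter of cofinite sets shows that ω-completeness does not imply ω1-completeness:

```latex
% The Fr\'echet filter on \omega:
F = \{A \subseteq \omega : \omega \setminus A \text{ is finite}\}.
% F is a filter, and it is \omega-complete: a finite intersection of
% cofinite sets is cofinite. But it is not \omega_1-complete, since with
A_n = \omega \setminus \{0, 1, \dots, n\} \in F \qquad (n < \omega),
% we get
\bigcap_{n<\omega} A_n = \emptyset \notin F.
```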

Definition 3.1. A cardinal κ is measurable if there exists a κ-complete nonprincipal ultrafilter on κ.

For the remainder of this paper, we will use the words ‘ultrafilter’ and ‘measure’ interchangeably. So we mean the same thing by µ(A) = 1 as by A ∈ µ, and by µ(A) = 0 as by A ∉ µ.

Definition 3.2. Let M and N be two structures over the same language. Then an elementary embedding from M into N is a map j : M → N such that, for every

first-order formula φ in the language and every a0, . . . , an ∈ M, we have that:

M |= φ(a0, . . . , an) ⇐⇒ N |= φ(j(a0), . . . , j(an)).

If j is an elementary embedding from M into N we denote this by j : M ≺ N.

Given a measurable cardinal κ, we can use the measure on κ to generate a nontrivial elementary embedding from V into a transitive inner model. We will also show that the existence of a measurable is in fact equivalent to the existence of such an embedding.

Definition 3.3. Let µ be a κ-complete nonprincipal measure on κ. Then consider the equivalence relation on functions in κV defined by

f =µ g ⇐⇒ {α < κ : f(α) = g(α)} ∈ µ.

We write the collection {[f] : f ∈ κV} of =µ-equivalence classes as κV/µ.

Then we define the relation ∈µ on κV/µ by

[f] ∈µ [g] ⇐⇒ {α < κ : f(α) ∈ g(α)} ∈ µ.

We call the structure (κV/µ, ∈µ) the µ-ultrapower of V and denote it by Ult(V, µ).

Now, we need the following version of Łoś’s Theorem:

Theorem 3.4. Ult(V, µ) |= φ([f1],..., [fn]) ⇐⇒ {α ∈ κ : φ(f1(α), . . . , fn(α))} ∈ µ.

Proof. We proceed by induction on formulas. The result for = and ∈ is clear from the definitions of =µ and ∈µ. The result for a conjunction, φ1 ∧ φ2, follows from the intersection property of the filter. And we get the result for negations, ¬φ, since the filter is ‘ultra’. Now, the result for ∃xφ is a bit trickier and requires the use of the Axiom of Choice. The forward direction is trivial, so suppose we have the result for φ and

S = {α < κ : ∃x φ(x, f1(α), . . . , fn(α))} ∈ µ.

Then choose such an xα for each α ∈ S, and define the function x : κ → V by

x(α) = xα if α ∈ S, and x(α) = 0 otherwise.

Then we have φ(x(α), f1(α), . . . , fn(α)) for every α ∈ S. So it follows that

{α < κ : φ(x(α), f1(α), . . . , fn(α))} ∈ µ,

which by the result for φ gives Ult(V, µ) |= φ([x], [f1],..., [fn]), and therefore

Ult(V, µ) |= ∃[x]φ([x], [f1],..., [fn]).



Corollary 3.5. j : V ≺ Ult(V, µ), where each j(x) = [fx] and each fx is the constant function fx : κ → {x}.
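To see how the corollary falls out of the definitions, one can check the atomic formulas for constant functions directly (a sanity check; Theorem 3.4 handles general formulas):

```latex
[f_x] \in_\mu [f_y]
  \iff \{\alpha < \kappa : f_x(\alpha) \in f_y(\alpha)\} \in \mu
  \iff \{\alpha < \kappa : x \in y\} \in \mu.
% The last set is \kappa if x \in y and \emptyset otherwise; since
% \kappa \in \mu and \emptyset \notin \mu, this holds iff x \in y.
% The same computation with = in place of \in handles equality.
```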

Lemma 3.6. Ult(V, µ) is wellfounded.

Proof. Suppose for contradiction that Ult(V, µ) is illfounded. Then there exists some infinite descending sequence [f0] ∋µ [f1] ∋µ [f2] ∋µ ... in Ult(V, µ), which means An = {α < κ : fn+1(α) ∈ fn(α)} ∈ µ for all n < ω, and since µ is countably complete it must follow that

⋂n<ω An = {α < κ : f0(α) ∋ f1(α) ∋ f2(α) ∋ ...}

is in µ and is therefore nonempty. But this would mean there exists an infinite descending sequence in V. So we have a contradiction. −→←−



So now that we know ∈µ is wellfounded, we can apply Mostowski’s transitive collapse to obtain a transitive inner model M such that M ≅ Ult(V, µ).

It follows then that j : V → M ≅ Ult(V, µ), defined by taking the ultrapower and then applying the transitive collapse, is an elementary embedding into an inner model† M ⊂ V. We call this the embedding induced by µ, and to sum up this construction we write j : V ≺ M ≅ Ult(V, µ).

But it remains to show that this embedding is non-trivial, which follows from the next lemma:

Definition 3.7. Given a nontrivial elementary embedding j : V ≺ M, the critical point of j is the least ordinal δ such that j(δ) > δ.‡

Lemma 3.8. Let µ be a κ-complete nonprincipal measure on κ and suppose that j : V ≺ M ≅ Ult(V, µ). Then κ is the critical point of j.

†By inner model we mean a transitive model of ZF which contains the ordinals. ‡That for every ordinal α we have j(α) ≥ α is an easy induction.

Proof. Let id : κ → κ be the identity map on κ. Then we clearly have

[id] < j(κ) = [fκ].

But we also know that every interval (α, κ) is in µ. To see this notice that, since µ is an ultrafilter, (α, κ) ∉ µ would imply [0, α] ∈ µ. This cannot happen since, by the κ-completeness of µ, we would then have [0, α] = ⋃β≤α{β} ∈ µ =⇒ ∃β ≤ α s.t. {β} ∈ µ, which is a contradiction since µ is nonprincipal. This gives that α < [id] for each α < κ. Hence κ ≤ [id], and we have κ < j(κ).

Now suppose for contradiction that some α < κ is the critical point of j. Then if we let [f] = α we have {β < κ : f(β) < α} ∈ µ. But {β < κ : f(β) < α} = ⋃ξ<α{β < κ : f(β) = ξ}, so it would follow from the κ-completeness of µ that ∃ ξ < α s.t. {β < κ : f(β) = ξ} ∈ µ, which is a contradiction since that would mean [f] = ξ. −→←−

□

We can get much more, however. And the following theorem provides us with a useful and interesting characterization of measurability. But first we’ll prove a couple of lemmas:

Lemma 3.9. Any nontrivial elementary embedding j : M ≺ N between inner models M and N with N ⊂ M has a critical point.

Proof. It suffices to show that j cannot be the identity on the ordinals. Since j is nontrivial, ∃x ∈ M such that j(x) ≠ x. Let x be of least rank among such sets, and let δ = rank(x). Now y ∈ x =⇒ j(y) ∈ j(x). Also y ∈ x means we have rank(y) < rank(x). Since x is of least rank with j(x) ≠ x, this gives us j(y) = y and hence y ∈ j(x). So we have that x ⊂ j(x), and since also x ≠ j(x) it follows that ∃z such that z ∈ j(x) \ x.

Now suppose for contradiction that rank(j(x)) = rank(x) = δ. If this were true it would mean rank(z) < δ = rank(x) and hence we’d have j(z) = z ∈ j(x). And by elementarity, j(z) ∈ j(x) =⇒ z ∈ x, a contradiction. −→←−

Thus, rank(j(x)) ≠ rank(x). But by elementarity rank(j(x)) = j(rank(x)) = j(δ), so we have j(δ) ≠ δ.



Lemma 3.10. If there exists a nontrivial elementary embedding j : V ≺ M for some inner model M, then its critical point is a measurable cardinal.

Proof. Let κ be the critical point of j, which exists by Lemma 3.9. Then define µ on κ by:

∀X ⊆ κ (X ∈ µ ⇐⇒ κ ∈ j(X)).

First, we will show that µ is a κ-complete nonprincipal ultrafilter.

Since κ is the critical point, we have κ < j(κ) and hence κ ∈ µ. And that µ is a filter is immediate from the definition. Also, since ∀α < κ we have j({α}) = {α}, we get that {α} ∈ µ iff κ ∈ j({α}) = {α}, i.e. iff κ = α, which is impossible for α < κ. So µ must be nonprincipal.

To see that µ is an ultrafilter, notice that κ ∈ j(κ) since κ is the critical point. So, as j(X) ∪ j(κ \ X) = j(κ), if κ ∉ j(X) it must be that κ ∈ j(κ \ X).

To show the κ-completeness of µ, let λ < κ and suppose Xα ∈ µ for each α ∈ λ. We can rethink this index then as a function f : λ → µ s.t. ∀α ∈ λ we have κ ∈ j(f(α)). But since λ is less than the critical point, we know that j(α) = α for every α in the domain. So by elementarity we have

hα, f(α)i ∈ f ⇐⇒ hj(α), j(f(α))i ∈ j(f) ⇐⇒ hα, j(f(α))i ∈ j(f).

In particular, this says that ∀α ∈ λ we have j(f)(α) = j(f(α)). So since we have

that κ ∈ j(Xα) = j(f(α)) for each α, it follows that

κ ∈ ⋂α<λ j(Xα) = ⋂α<λ j(f(α)) = ⋂α<λ j(f)(α) = j(⋂α<λ f(α)) = j(⋂α<λ Xα).

Now, it remains to show that κ is regular and therefore a cardinal.

To see that κ is regular, suppose for contradiction that λ < κ but λ is cofinal in κ. As in the proof of Lemma 3.8, we know that each (α, κ) ∈ µ where α ∈ λ. But then by the κ-completeness of µ, it follows that ⋂α<λ(α, κ) ∈ µ. However, ⋂α<λ(α, κ) = ∅ since λ is cofinal in κ. So it would follow that ∅ ∈ µ, a contradiction. −→←−

□

Theorem 3.11. A cardinal κ is measurable iff there exists an elementary embedding j : V ≺ M for some inner model M with κ as its critical point.

Proof. This follows from the previous lemma and the construction of the embedding induced by µ for any ultrafilter µ on κ which witnesses its measurability.

□

The previous theorem is why, in the hierarchy of large cardinals, one might consider measurables to be the smallest of the ‘really large’ ones. The existence of a measurable is equivalent to having nontrivial elementary embeddings from V into a proper transitive subclass which contains the ordinals. In other words, if we have measurables then our set-theoretic universe is big enough to contain first-order copies of itself.

The following theorem, due to Scott, is one example of the kind of influence this can have on our set theory:

Theorem 3.12. If there exists a measurable cardinal, then V ≠ L.

Proof. Let κ be the least measurable cardinal and µ a κ-complete nonprincipal ultrafilter on κ. Then suppose for contradiction that V = L, and consider the induced embedding j : V ≺ M ≅ Ult(V, µ).

Then we have

V |= κ is the least measurable cardinal,

and we have

M |= j(κ) is the least measurable cardinal.

But since M ⊂ V , and every inner model has to contain the constructible sets (i.e. L ⊂ M), it would follow from V = L that M = V and hence V |= j(κ) is the least measurable cardinal. This says that j(κ) = κ, which is a contradiction since κ is the critical point of the induced embedding and so we should have j(κ) > κ. −→←−

□

So measurable cardinals are large enough that if we assume one exists then we transcend constructibility. It is worth noting, however, that measurables are not the weakest large cardinals to imply V ≠ L. In fact, every Ramsey cardinal is Rowbottom, every Rowbottom cardinal is Jónsson, and the existence of a Jónsson cardinal already implies V ≠ L. We will soon show that a measurable cardinal is Ramsey, as a version of this fact will be needed in our proof of Det(Π¹1).

3.2 Normal measures

We can actually get a little more out of measurable cardinals if we are particular about which measure on the cardinal we use.

Definition 3.13. An ultrafilter µ is normal on κ if it is closed under diagonal intersections. This means that, if we have a sequence of sets ⟨Xα : α < κ⟩ with each Xα ∈ µ, then

△α<κ Xα = {β < κ : β ∈ ⋂α<β Xα} ∈ µ.

Definition 3.14. We say that a function f is regressive on a set Z of ordinals if f(α) < α for all α ∈ Z.
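To see that closure under diagonal intersections is a genuine strengthening of κ-completeness, consider the tail sets (an illustrative example; that each tail set lies in any κ-complete nonprincipal ultrafilter was shown in the proof of Lemma 3.8):

```latex
% Let X_\alpha = (\alpha, \kappa) = \{\beta < \kappa : \beta > \alpha\}.
% The full intersection is empty:
\bigcap_{\alpha<\kappa} X_\alpha = \emptyset,
% but the diagonal intersection is everything:
\mathop{\triangle}_{\alpha<\kappa} X_\alpha
  = \{\beta < \kappa : \forall \alpha < \beta\ (\beta > \alpha)\} = \kappa.
% So normality asks for closure under an operation where taking all
% \kappa-many intersections outright would fail for any proper filter.
```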

The following characterization of normality will be useful:

Theorem 3.15. An ultrafilter µ is normal on κ iff every function (in κκ) which is regressive on a set from µ is constant on some set from µ.

Proof. First, assume that µ is normal on κ and {α < κ : f(α) < α} ∈ µ, but suppose for contradiction that ∀β ∈ κ we have {α < κ : f(α) = β} ∉ µ. Then this would mean that ∀β < κ we have {α < κ : f(α) ≠ β} ∈ µ. By normality it would follow that △β<κ {α < κ : f(α) ≠ β} = {α < κ : f(α) ≥ α} ∈ µ, which is a contradiction since we would have {α < κ : f(α) < α} ∈ µ and {α < κ : f(α) ≥ α} ∈ µ, and hence {α < κ : f(α) < α} ∩ {α < κ : f(α) ≥ α} = ∅ ∈ µ. −→←−

Now, assume that every function which is regressive on a set in µ is constant on a set in µ, and let ⟨Xα : α < κ⟩ be a sequence of sets from µ. Suppose for contradiction that △α<κ Xα ∉ µ. Then we have κ \ △α<κ Xα ∈ µ. So let f : κ \ △α<κ Xα → κ be defined by f(β) = min{α < κ : β ∉ Xα}. Then f is regressive on its domain since f(β) ≥ β =⇒ β ∈ ⋂α<β Xα =⇒ β ∈ △α<κ Xα =⇒ β ∉ dom(f). But then ∀α < κ we have f⁻¹({α}) ∩ Xα = ∅ and each Xα ∈ µ. Hence no f⁻¹({α}) can be in µ, and this contradicts the assumption. −→←−

 Lemma 3.16. If µ is a normal ultrafilter on κ then it is κ-complete.

Proof.

Suppose {Xα : α < λ} ⊂ µ where λ < κ. Then we can let Xγ = κ for every γ ≥ λ. By normality we have

△α<κ Xα = {β < κ : β ∈ ⋂α<β Xα} ∈ µ.

But the choice of Xγ for each γ ≥ λ gives {β < κ : β ∈ ⋂α<β Xα} = ⋂α<λ Xα. So ⋂α<λ Xα ∈ µ, and this shows κ-completeness.

□

Theorem 3.17. If κ is measurable, then there exists a normal measure on κ.

Proof. Since κ is measurable, there is a nontrivial elementary embedding j : V ≺ M with κ as its critical point. Now consider the same measure µ on κ that we defined for Lemma 3.10, i.e., ∀X ⊆ κ (X ∈ µ ⇐⇒ κ ∈ j(X)). In the proof of Lemma 3.10 we already showed that µ is a κ-complete nonprincipal ultrafilter. So it only remains to show that µ is normal. We will appeal to Theorem 3.15 and show that if a function is regressive on a set in µ then it is constant on a set in µ. So suppose that f ∈ κκ is such that {α < κ : f(α) < α} ∈ µ. By the definition of the measure, this means that κ ∈ j({α < κ : f(α) < α}) = {α < j(κ) : j(f)(α) < α}. In particular, we have j(f)(κ) < κ.

Now, let β = j(f)(κ) and let X = {α < κ : f(α) = β}. Then by elementarity we have j(X) = {α < j(κ): j(f)(α) = j(β)}, and since β < κ we have j(β) = β. Hence j(X) = {α < j(κ): j(f)(α) = β} and we can see then that κ ∈ j(X). Thus, X ∈ µ and f is constant on X, so this shows normality.

□

Definition 3.18. A cardinal κ is Ramsey iff for every function f : [κ]<ω → 2 there exists a set X ⊂ κ such that |X| = κ and f is constant on [X]ⁿ for each n < ω.

Rather than merely show that measurable cardinals are Ramsey, we will prove a stronger result which is due to Rowbottom and will be useful in the next section:

Theorem 3.19. Suppose κ is measurable and µ is a normal measure on κ. Then if n ∈ ω and f : [κ]ⁿ → 2, there exists X ∈ µ s.t. f is constant on [X]ⁿ.

Proof. Since µ is an ultrafilter, the case for n = 1 is immediate. We will first prove the case for n = 2, and then proceed by induction.

So let f : [κ]² → 2. Again by using the ultrafilter property, we have that for every α < κ there exists iα ∈ {0, 1} s.t. Xα = {β < κ : f({α, β}) = iα} ∈ µ. We also get that there is an i* ∈ {0, 1} and X* ∈ µ s.t. iα = i* whenever α ∈ X*. Now, consider the diagonal intersection

△α<κ Xα = {β < κ : f({α, β}) = iα ∀α < β} ∈ µ.

And let X = X* ∩ △α<κ Xα. To see that f is constant on [X]², let {α, β} ∈ [X]² and WLOG say α < β. Then α ∈ X*, β ∈ Xα, and so f({α, β}) = i*.

Now suppose we have it for n, and we will show (using essentially the same argument) that it follows for n + 1. So let f :[κ]n+1 → 2. Then that µ is an ultrafilter gives us n that for every a ∈ [κ] there exists ia ∈ {0, 1} s.t. Xa = {β < κ : f(a∪{β}) = ia} ∈ µ, ∗ and then the result for n and the map a 7→ ia gives us that there is an i ∈ {0, 1} and ∗ ∗ ∗ n X ∈ µ s.t. ia = i whenever a ∈ [X ] . Now, consider the “diagonal intersection” \ 4Xa = {β < κ : β ∈ Xa} = {β < κ : f(a∪{β}) = ia whenever β = max(a∪{β})}. a∈[β]n

∗ n+1 n+1 And let X = X ∩ 4Xa. To see that f is constant on [X] , let x ∈ [X] , β = max(x), and a = x \ β. Then, a ∈ [X∗]n, β = max(a ∪ {β}), and so f(x) = f(a ∪ {β}) = i∗.
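The combinatorial core of this argument already shows up in the finite setting, where it can be run mechanically. The sketch below is only a finite analogue (majorities stand in for the ultrafilter, and "keep the larger half" stands in for "measure one"); the function name and the example coloring are our own illustration, not anything from the text:

```python
def homogeneous_subset(points, color):
    """Finite analogue of the i_alpha / X_alpha argument in Theorem 3.19:
    repeatedly pick the least remaining point a, split the rest according
    to the color of pairs containing a, and keep the larger half."""
    chosen = []                      # pairs (a, i_a): a point and its tag
    pool = sorted(points)
    while pool:
        a, rest = pool[0], pool[1:]
        split = {0: [], 1: []}
        for b in rest:
            split[color(a, b)].append(b)
        i_a = 0 if len(split[0]) >= len(split[1]) else 1
        chosen.append((a, i_a))
        pool = split[i_a]            # the finite stand-in for X_alpha
    # the i* step: the majority tag gives a monochromatic set, since every
    # later chosen point lies inside each earlier point's kept half
    i_star = 0 if 2 * sum(1 for _, i in chosen if i == 0) >= len(chosen) else 1
    return i_star, [a for a, i in chosen if i == i_star]

# example coloring: a pair gets color 0 iff its sum is even
i_star, H = homogeneous_subset(range(16), lambda a, b: (a + b) % 2)
print(i_star, H)
```

Every pair from the returned set H receives the single color i_star, by the same "each later point lies in every earlier point's kept half" reasoning used in the proof above.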

Corollary 3.20. Suppose κ is measurable and µ is a normal measure on κ. Then for every f : [κ]<ω → 2 there exists X ∈ µ such that f is constant on [X]n for each n < ω.

Proof.

For each n ≥ 1 the theorem says that we can find Xn ∈ µ such that f is constant on [Xn]n. Let X = ⋂n<ω Xn; it follows from countable completeness that X ∈ µ, and f is constant on [X]n for each n since [X]n ⊆ [Xn]n.



Corollary 3.21. Measurable cardinals are Ramsey.

Actually much more can be said, and to reinforce that a measurable cardinal is indeed a stronger assumption than a Ramsey cardinal, we give the following theorem without proof:

Theorem 3.22. If κ is a measurable cardinal, and µ is a normal measure on κ, then {α < κ : α is Ramsey} ∈ µ.

So not only are measurable cardinals Ramsey, but any measurable cardinal must sit above a measure-one set of Ramsey cardinals; in particular, the least measurable cardinal is strictly larger than the least Ramsey cardinal.

4 Det(Π¹₁) given a measurable cardinal

Our goal is to prove Martin's result that the existence of a measurable cardinal implies that Π¹₁ (coanalytic) sets are determined. We will do this by first defining homogeneously Suslin sets and showing that they are determined. Then it will follow from the existence of a measurable cardinal that all Π¹₁ sets are homogeneously Suslin.

4.1 Towers of measures and homogeneously Suslin sets

Let m(<ωY) be the set of countably complete measures on <ωY.§

Then for any i ≤ j in ω, if µi, µj ∈ m(<ωY) with µi(iY) = 1 and µj(jY) = 1 (i.e. iY ∈ µi and jY ∈ µj), we say that µj projects to µi if for every Z ⊆ iY, we have:

Z ∈ µi ⇐⇒ {s ∈ jY : s ↾ i ∈ Z} ∈ µj

Here, we can also say that µi is the projection of µj to iY.

Definition 4.1. A tower of measures on Y is a sequence ⟨µk : k < ω⟩ where we have that:

· For every k < ω, µk(kY) = 1 (i.e. kY ∈ µk).

· For every i < j, µi is the projection of µj to iY.

We call the tower wellfounded if for every sequence ⟨Zk : k < ω⟩ such that each Zk ∈ µk, there exists an f ∈ ωY where f ↾ k ∈ Zk for all k < ω.

We often refer to such a function f as a thread through the Zk’s.

Definition 4.2. For κ ≥ ω1, we say a tree T on ω × Y is κ-homogeneous if there exists a partial function π : <ωω −→ m(<ωY) where we have the following:¶

· For every s ∈ dom(π), π(s) is a κ-complete measure with π(s)(Ts) = 1.†

· For every x ∈ ωω, x ∈ p[T] ⇐⇒ x ↾ k ∈ dom(π) for each k < ω, and ⟨π(x ↾ k) : k < ω⟩ is a wellfounded tower.

We simply say homogeneous when we mean ω1-homogeneous.

§We include principal measures in the set m(<ωY). ¶From here we can see that, for δ ≤ κ, a κ-homogeneous tree is δ-homogeneous. †Recall that Ts = {t ∈ |s|Y : ⟨s, t⟩ ∈ T}.

Definition 4.3. A set A ⊆ ωω is called κ-homogeneously Suslin if it is the projection of some κ-homogeneous tree T;‡

i.e. A = p[T] = {x ∈ ωω : ∃y ∈ ωY s.t. ⟨x, y⟩ ∈ [T]}‖

To show that these sets are determined, we will consider a new game G∗ which is always determined. Then we will show that, for homogeneously Suslin sets, the determinacy of Gω(A) follows from the determinacy of G∗.

4.2 Determinacy of homogeneously Suslin sets

Given a tree T on ω × Y with A = p[T], the game G∗(T, A) is played as follows:

I:  x0, y0        y1, x2, y2        y3, . . .
II:           x1                x3        . . .

and with the rules:

· For every i < ω, xi ∈ ω and yi ∈ Y.

· I wins if ⟨x, y⟩ ∈ [T] where x = ⟨xi : i < ω⟩ and y = ⟨yi : i < ω⟩.

· II wins otherwise.

Lemma 4.4. G∗(T, A) is determined.

Proof. Thinking of T as a subset of <ω(ω × Y) rather than <ωω × <ωY, the game can also be formulated as:

I:  ⟨x0, y0⟩        ⟨x2, y2⟩        . . .
II:           ⟨x1, y1⟩        ⟨x3, y3⟩        . . .

where I still gets to choose the yi's on II's moves. Now, this is a game played on ω × Y where the payoff set is [T], a closed set in ω(ω × Y). So by Theorem 1.6 the game is determined.



‡So for δ ≤ κ, a κ-homogeneously Suslin set is δ-homogeneously Suslin. ‖Recall that [T] is the set of infinite branches through the tree T.

Lemma 4.5. Suppose A ⊆ ωω and A = p[T] for a tree T on ω × Y. Then I has a winning strategy in Gω(A) whenever I has a winning strategy in G∗(T, A).

Proof. Assume I has a winning strategy in G∗(T, A). Now in the game Gω(A),

I:  x0        x2        x4        . . .
II:      x1        x3        x5        . . .

I can look at the winning strategy from G∗ and play precisely the same x2i's. In other words, as the play of Gω progresses, I can be playing G∗ at the same time. Beginning with the move x0 in Gω and ⟨x0, y0⟩ in G∗, I then responds to any x2i+1 in Gω by completing the ⟨x2i+1, y2i+1⟩ move in G∗ and then choosing x2i+2 according to the G∗ strategy. So then I wins G∗, which means that at the end of the play it must be that ⟨x, y⟩ ∈ [T] where x = ⟨xi : i < ω⟩ and y = ⟨yi : i < ω⟩. Hence x = ⟨xi : i < ω⟩ ∈ p[T] = A, and we have that I is guaranteed a win in Gω.

We still haven't needed A to be homogeneously Suslin. The following lemma is precisely where this becomes necessary.

Lemma 4.6. Suppose A ⊆ ωω is homogeneously Suslin with T a corresponding homogeneous tree. Then II has a winning strategy in Gω(A) whenever II has a winning strategy in G∗(T, A).

Proof. Assume II has a winning strategy in G∗(T, A), and let π : <ωω −→ m(<ωY) be a partial function witnessing the homogeneity of T. At some stage of a play in Gω(A), suppose I makes a move x2i, and let s = ⟨xk : k ≤ 2i⟩.

II's strategy is to respond by first considering every tuple

t = ⟨yk : k ≤ 2i⟩ ∈ Ts = {t ∈ |s|Y : ⟨s, t⟩ ∈ T}.

Then II looks at the partial plays ⟨x0, y0, x1, y1, . . . , x2i, y2i⟩ of G∗ corresponding to each t ∈ Ts. So let x^t_{2i+1} be II's next move corresponding to t according to the winning strategy for G∗. Then there must exist an x^∗_{2i+1} such that

π(s)({t ∈ Ts : x^t_{2i+1} = x^∗_{2i+1}}) = 1.∗∗

After finding such an x^∗_{2i+1}, this is II's next move in Gω.

∗∗This is because we have 1 = π(s)(Ts) = π(s)(⋃n<ω{t ∈ Ts : x^t_{2i+1} = n}), and by the countable completeness of π(s), if the union of countably many sets has measure 1 then at least one of them must have measure 1.

To see this is a winning strategy for II in Gω, suppose for contradiction that it is not.

Then we would have a play of Gω where II follows this strategy yet it still happens that ⟨xi : i < ω⟩ = x ∈ A = p[T] (a win for I). By Definition 4.2, the homogeneity of the tree gives us that ⟨π(x ↾ k) : k < ω⟩ must be a wellfounded tower. So consider the sequence ⟨Zk : k < ω⟩ where each Z2i+1 ⊆ Tx↾(2i+1) is the set {t ∈ Tx↾(2i+1) : x^t_{2i+1} = x^∗_{2i+1}} corresponding to II's moves, and Zk = Tx↾k for even k. Since each of these sets has measure 1, there must be a thread f through the Z's. But by the choice of the Z's, it follows that f would belong to [Tx].† This would give an infinite branch ⟨x, f⟩ through T which can occur while II plays the winning strategy in G∗. But in G∗, an infinite branch through T is a win for I! So we have a contradiction.

Theorem 4.7. Homogeneously Suslin sets are determined.

Proof. Combining Lemmas 4.4, 4.5, and 4.6 gives us the result.



4.3 Effect of a measurable cardinal

For the rest of this section, we suppose a measurable cardinal exists and call it κ. Also, let A ⊆ ωω be some fixed Π¹₁ set; we will show that having the measurable κ implies that A must also be κ-homogeneously Suslin.

Definition 4.8. Let Y be an ordered set. The Kleene-Brouwer order <KB on <ωY is defined by: s <KB t iff either s is a proper extension of t, or s(n) < t(n) at the least n where s(n) ≠ t(n).

[Diagram: a finite tree on ω, drawn with extensions below their predecessors.]

Then, listing the nodes of such a tree in <KB-increasing order visits them depth-first and left to right, with each node coming after all of its proper extensions.

[Diagram: the same tree with its nodes numbered 1 through 8 in Kleene-Brouwer order, the root numbered last.]

Note that the Kleene-Brouwer order is a strict linear order since, whenever s ≠ t, either one is an extension of the other or there must be some least n so that s(n) ≠ t(n).

†Recall that Tx = {t ∈ <ωY : ⟨x ↾ |t|, t⟩ ∈ T}.
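Restricted to finite sequences of naturals, the Kleene-Brouwer comparison is directly computable. A small sketch (the function name is ours):

```python
from functools import cmp_to_key

def kb_cmp(s, t):
    """Kleene-Brouwer comparison on finite tuples of naturals:
    s <KB t iff s properly extends t, or s is smaller at the least
    coordinate where the two sequences differ."""
    for a, b in zip(s, t):
        if a != b:
            return -1 if a < b else 1
    if len(s) == len(t):
        return 0
    return -1 if len(s) > len(t) else 1   # proper extensions come first

# the finite tree {<>, <0>, <0,0>, <0,1>, <1>} in KB-increasing order
order = sorted([(), (0,), (0, 0), (0, 1), (1,)], key=cmp_to_key(kb_cmp))
print(order)   # [(0, 0), (0, 1), (0,), (1,), ()]
```

Note how the empty sequence (the root) comes last, after all of its extensions, matching the numbering in the picture above.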

Lemma 4.9. Let Y be wellordered and let R be a tree on Y. Then [R] = ∅ iff the Kleene-Brouwer order restricted to R is wellfounded.

Proof. For one direction, if x ∈ [R] is an infinite branch then (x ↾ 1) >KB (x ↾ 2) >KB . . . is an infinite <KB-descending sequence in R, since each x ↾ (k + 1) properly extends x ↾ k.

For the other direction, suppose that ⟨sk : k < ω⟩ ∈ ωR is such that sk+1 <KB sk for every k; we construct an infinite branch of R. Since the sequence is <KB-decreasing, the first coordinates ⟨sk(0) : k < ω⟩ are nonincreasing and hence eventually constant, so we may choose n0 such that sn0(0) is least and sk(0) = sn0(0) for all k > n0. Then ⟨sk(1) : n0 < k < ω⟩ is again eventually constant. So we extend the tuple and can construct the branch inductively. Once we have ⟨sn0(0), . . . , sni(i)⟩, with each snk+1 an extension of ⟨sn0(0), . . . , snk(k)⟩ and each nk+1 chosen so that snk+1(k + 1) is least for the next coordinate, we know sk(i + 1) exists and extends the sequence for all sufficiently large k, and that ⟨sk(i + 1) : ni < k < ω⟩ is eventually constant. So we choose an ni+1 where sni+1(i + 1) is least and extend the sequence to ⟨sn0(0), . . . , sni(i), sni+1(i + 1)⟩. Continuing in this way produces an infinite branch of R.



Lemma 4.10. Given a coanalytic set A, there is a map s ↦ ≺s, for s ∈ <ωω, where:

(i) ≺s is a linear order on |s|.

(ii) s ⊆ t =⇒ ≺s ⊆ ≺t.

(iii) x ∈ A ⇐⇒ ≺x is wellfounded, where ≺x = ⋃k<ω ≺x↾k.

Proof. Since A is Π¹₁, its complement is analytic and by Definition 1.19 must be the projection of a tree on ω × ω. So we may let R ⊆ <ω(ω × ω) be a tree where p[R] = ωω \ A.

Then let ⟨sn : n < ω⟩ be an enumeration of <ωω with the property that |sn| ≤ n for each n, and for each s ∈ <ωω let R∗s = {si : i < |s| and ⟨s ↾ |si|, si⟩ ∈ R}.

Now, for each s define the ordering ≺s on |s| by letting i ≺s j iff:

· si, sj ∈ R∗s and si <KB sj, or

· si ∈ R∗s and sj ∉ R∗s, or

· si, sj ∉ R∗s and i < j.

We have (i) since, if i and j are < |s|, then |sn| ≤ n for each n gives us that |si| < |s| and |sj| < |s|; hence one of the three cases in the definition of the ordering will apply. We have (ii) since s ⊆ t =⇒ R∗s ⊆ R∗t, and |sn| ≤ n gives us that si ∈ R∗s ⇐⇒ si ∈ R∗t for each i < |s|. Hence the ordering on |s| does not change from ≺s to ≺t.

Also, for any x ∈ ωω, ≺x is a linear ordering of ω whose restriction to {i : si ∈ Rx} is isomorphic to the Kleene-Brouwer order on Rx = ⋃n<ω R∗x↾n (the remaining indices come ≺x-after these, in their usual order), which by Lemma 4.9 is wellfounded iff Rx has no infinite branch. But x ∈ A precisely when [Rx] = ∅ since p[R] = ωω \ A. So we have (iii).
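The construction in this proof can be carried out mechanically on finite data. The sketch below fixes one concrete (illustrative) choice of enumeration ⟨sn⟩, namely by length and then lexicographically over a bounded alphabet, which satisfies |sn| ≤ n, and brute-force checks properties (i) and (ii) against a toy tree R; all names are our own:

```python
from itertools import product

def enum_seqs(count, base=3):
    """s_0, s_1, ...: tuples over {0,...,base-1} ordered by length then
    lexicographically; for base >= 2 this satisfies |s_n| <= n."""
    out, k = [], 0
    while len(out) < count:
        out.extend(product(range(base), repeat=k))
        k += 1
    return out[:count]

def kb_less(s, t):
    # s <KB t iff s properly extends t, or s is smaller at the least
    # coordinate where they differ
    for a, b in zip(s, t):
        if a != b:
            return a < b
    return len(s) > len(t)

def prec(s, R):
    """The ordering on {0,...,|s|-1} from Lemma 4.10, as a predicate.
    R is a finite set of nodes (u, t): pairs of equal-length tuples."""
    seqs = enum_seqs(len(s))
    member = [(s[:len(seqs[i])], seqs[i]) in R for i in range(len(s))]
    def lt(i, j):
        if member[i] and member[j]:
            return kb_less(seqs[i], seqs[j])
        if member[i] != member[j]:
            return member[i]          # members precede non-members
        return i < j                  # non-members in index order
    return lt

# toy tree R on omega x omega: nodes whose two coordinates agree
R = {(t, t) for k in range(3) for t in product(range(3), repeat=k)}
s, t = (0, 1, 2, 0), (0, 1, 2, 0, 1)
lt_s, lt_t = prec(s, R), prec(t, R)
for i in range(len(s)):
    for j in range(len(s)):
        if i != j:
            assert lt_s(i, j) != lt_s(j, i)   # (i): linearity
            assert lt_s(i, j) == lt_t(i, j)   # (ii): coherence under extension
```

Property (iii), of course, is about the infinite object ≺x and cannot be checked by finite brute force.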



Now we can construct the tree:

Let T ⊆ <ω(ω × κ) be the tree consisting of nodes ⟨s, t⟩ so that t has the form ⟨α0, . . . , α|s|−1⟩ with αi ∈ κ for each i, and αi < αj ⇐⇒ i ≺s j.

Notice then that, since there cannot be an infinite descending sequence of ordinals, for any x ∈ ωω we can find a sequence ⟨αn : n < ω⟩ ∈ ωκ where αi < αj ⇐⇒ i ≺x j iff ≺x is wellfounded. Hence [Tx] ≠ ∅ ⇐⇒ ≺x is wellfounded ⇐⇒ x ∈ A, and we have

p[T] = {x ∈ ωω : ∃f ∈ ωκ s.t. ⟨x, f⟩ ∈ [T]} = {x ∈ ωω : [Tx] ≠ ∅} = A.

So A is the projection of the tree T , and it remains to show that T is in fact κ-homogeneous.

For each s ∈ <ωω and each C ⊆ κ let Cs be the set of tuples ⟨α0, . . . , α|s|−1⟩ with αi ∈ C for each i and αi < αj ⇐⇒ i ≺s j (i.e. Cs = |s|C ∩ Ts).

By Theorem 3.17 we may let µ be a normal measure on κ. Then, for each s ∈ <ωω, define a filter πs on |s|κ by

Z ∈ πs ⇐⇒ ∃C ∈ µ s.t. Cs ⊆ Z.

Lemma 4.11. For each s ∈ <ωω the filter πs on |s|κ is a κ-complete ultrafilter.

Proof. For κ-completeness, suppose λ < κ and Zi ∈ πs for each i < λ. Then for each i there exists a Ci ∈ µ with (Ci)s ⊆ Zi. By the κ-completeness of µ, we know ⋂i<λ Ci ∈ µ, and hence ⋂i<λ Zi ⊇ ⋂i<λ (Ci)s = ⋂i<λ (|s|Ci ∩ Ts) = Ts ∩ |s|(⋂i<λ Ci) = (⋂i<λ Ci)s shows that ⋂i<λ Zi is also in πs.

To see that πs is an ultrafilter, notice that by the definition of the tree T there is exactly one element of Ts corresponding to each element of [κ]|s|. In fact, there is a one-to-one correspondence between Cs and [C]|s| for each C ⊆ κ, since there is exactly one way to order the elements of any t ∈ [C]|s| so that (t, <) matches (|s|, ≺s) on its indices.
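This one-to-one correspondence can be checked by brute force in a finite analogue, replacing κ by a finite set of ordinals; the code and names below are illustrative only:

```python
from itertools import combinations, permutations

def realizations(subset, prec, n):
    """Tuples t of n distinct elements of `subset` with
    t[i] < t[j] exactly when i precedes j (prec is a set of pairs
    encoding a linear order on {0,...,n-1})."""
    return [t for t in permutations(subset, n)
            if all((t[i] < t[j]) == ((i, j) in prec)
                   for i in range(n) for j in range(n) if i != j)]

# a linear order on {0,1,2}: 2 before 0 before 1
prec = {(2, 0), (2, 1), (0, 1)}
C = [3, 7, 11, 20]
counts = [len(realizations(c, prec, 3)) for c in combinations(C, 3)]
print(counts)   # every 3-element subset of C has exactly one realization
```

Each 3-element subset admits exactly one arrangement matching the fixed order, which is the finite content of the bijection between Cs and [C]|s|.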

Now suppose Z ⊆ |s|κ. Since the measure µ on κ is normal, we can apply Theorem 3.19. So consider the function f : [κ]|s| → 2 where, for each t ∈ [κ]|s|, f(t) = 1 iff the corresponding element of Ts is in Z. By the theorem, ∃C ∈ µ s.t. f is constant on [C]|s|. Thus, either Cs ⊆ Z (if the constant value is 1) or Cs ⊆ |s|κ \ Z (if it is 0), which means either Z ∈ πs or |s|κ \ Z ∈ πs. So we have that πs is an ultrafilter.



Lemma 4.12. x ∈ A ⇐⇒ ⟨πx↾k : k < ω⟩ is a wellfounded tower.

Proof. That, for any i ≤ j, πx↾i is the projection of πx↾j to iκ follows from the correspondence between Ts and [κ]|s| discussed in the previous proof. We have Z ∈ πx↾i =⇒ ∃C ∈ µ s.t. Cx↾i ⊆ Z, which gives Cx↾j ⊆ {t ∈ jκ : t ↾ i ∈ Z} and hence {t ∈ jκ : t ↾ i ∈ Z} ∈ πx↾j. Also, {t ∈ jκ : t ↾ i ∈ Z} ∈ πx↾j implies ∃C ∈ µ s.t. Cx↾j ⊆ {t ∈ jκ : t ↾ i ∈ Z}, which gives Cx↾i ⊆ Z and hence Z ∈ πx↾i.

Now suppose x ∈ A and ⟨Zk : k < ω⟩ is a sequence such that each Zk ∈ πx↾k. For each k we can fix a Ck ∈ µ s.t. (Ck)x↾k ⊆ Zk, and let C = ⋂k<ω Ck so that Cx↾k ⊆ Zk for every k and C ∈ µ by the completeness of µ. Since x ∈ A, we have that ≺x is wellfounded, and it follows that since C must be uncountable we can choose a sequence ⟨αi : i < ω⟩ = f ∈ ωC where αi < αj ⇐⇒ i ≺x j. Then each f ↾ k ∈ Cx↾k ⊆ Zk, and f is a thread through the Zk's.

Conversely, suppose the tower is wellfounded. Taking Zk = Tx↾k (each of which belongs to πx↾k), a thread through the Zk's is a sequence ⟨αi : i < ω⟩ in ωκ with αi < αj ⇐⇒ i ≺x j, which is only possible if ≺x is wellfounded; therefore x must be in A.



Theorem 4.13. If κ is measurable, then all Π¹₁ sets are κ-homogeneously Suslin.

Proof. For any coanalytic set A we have shown that we can construct the tree T such that p[T] = A where, by Lemmas 4.11 and 4.12, the function π : <ωω −→ m(<ωκ) defined by π(s) = πs witnesses the κ-homogeneity of T.



Corollary 4.14. If there exists a measurable cardinal, then Det(Π¹₁).

Proof. By the above theorem, the presence of a measurable cardinal κ implies that Π¹₁ sets are κ-homogeneously Suslin, and by Theorem 4.7 such sets are determined.



Corollary 4.15. If there exists a measurable cardinal, then Det(Σ¹₁).

Proof. Since by Theorem 1.14 the pointclass Π¹₁ is closed under continuous preimages and Σ¹₁ = ¬Π¹₁, this follows from Corollary 4.14 and Theorem 1.5.



5 Remarks and further results

Martin proved that analytic determinacy follows from the existence of a measurable cardinal in 1969, and it wasn't until 1974 that he proved Borel determinacy in ZFC. He then made things a bit easier in 1982 with the inductive proof of Borel determinacy that we give in this paper.

Although proving Borel determinacy doesn't take any large cardinals, Harvey Friedman was able to show, before Martin actually proved Borel determinacy, that any proof would have to require a great deal of set theory, in the sense that we have to work with significantly large sets. If we look back to the inductive proof we gave and suppose we wanted to unravel some closed set in ω2, then the quasistrategies are subsets of <ω2, which we can, by enumeration, identify with ω. So the unraveling required us to play a game involving moves that are in essence from P(ω). Then, when we inducted up the hierarchy, we had to perform α many unravelings for each α < ω1. So we needed to take advantage of the existence of uncountably many iterations of the powerset of ω, and Friedman's 1971 result that the Axiom of Replacement is necessary for any proof of Borel determinacy shows that this is unavoidable.

We made a point that measurable cardinals are Ramsey, and that the existence of a measurable is a significantly stronger assumption. Martin actually proved that analytic determinacy follows from the existence of a Ramsey cardinal (more specifically, that if a Ramsey cardinal exists then Π¹₁ sets are homogeneously Suslin†), which is a much sharper result. Later on, Martin was able to improve this even further and showed that analytic determinacy follows from the existence of sharps for all reals. And in 1978, Harrington was able to show the converse of this, so we now know that analytic determinacy is equivalent to the existence of sharps for all reals. So in order to establish determinacy up the Borel hierarchy we already have to use very large sets, but to take even the first step above the Borel sets into the projective hierarchy requires large cardinals. At this point, it should be no surprise that in order to establish determinacy higher and higher in the hierarchy, we employ larger and larger cardinals.

Definition 5.1. δ is a Woodin cardinal if for every f ∈ δδ, there exists an elementary embedding j : V ≺ M with critical point α < δ such that Vj(f)(α) ⊆ M and β < α =⇒ f(β) < α (i.e. α is closed under f).

While Woodin cardinals are not necessarily measurable, the existence of a Woodin cardinal is a much stronger assumption. For instance, one can show that a Woodin cardinal δ is regular and {κ < δ : κ is measurable} is stationary in δ.

†However, a measurable cardinal κ implies Π¹₁ sets are κ-homogeneously Suslin rather than merely homogeneously Suslin, and we prove the theorem from a measurable because both measurables and this κ-homogeneity are important in later results.

Kechris and Martin developed a notion of weakly homogeneous trees and weakly homogeneously Suslin sets, similar to (but not as strong as) the homogeneity we have already seen, and the weakly homogeneously Suslin sets turn out to be the projections of the homogeneously Suslin sets.

So since we have already seen that if a measurable cardinal κ exists then Π¹₁ sets are κ-homogeneously Suslin, it follows that the Σ¹₂ sets are κ-weakly homogeneously Suslin. Then, in 1985, Martin and Steel established the result that allows us an inductive step to work up the hierarchy and derive projective determinacy:

Theorem 5.2. If δ is a Woodin cardinal and A ⊆ ωω is δ⁺-weakly homogeneously Suslin, then for any α < δ, ωω \ A is α-homogeneously Suslin.

This gives the following theorem.

Theorem 5.3. If there are n Woodin cardinals with a measurable cardinal above them, then Π¹ₙ₊₁ sets are determined.

To see how this follows from Theorem 5.2, we can apply Theorem 4.13 so that the measurable κ above the Woodin cardinals δ1 < . . . < δn gives that Π¹₁ sets are κ-homogeneously Suslin. So then Σ¹₂ sets are κ-weakly homogeneously Suslin and therefore δn⁺-weakly homogeneously Suslin. And it follows from Theorem 5.2 that Π¹₂ sets are δn−1⁺-homogeneously Suslin. Continuing in this way, we get that Π¹ₖ sets are δn−(k−1)⁺-homogeneously Suslin for 1 ≤ k ≤ n. Hence Π¹ₙ sets are δ1⁺-homogeneously Suslin. Then Σ¹ₙ₊₁ sets are δ1⁺-weakly homogeneously Suslin, and one last application of Theorem 5.2 then says that Π¹ₙ₊₁ sets are homogeneously Suslin and are therefore determined (by Theorem 4.7).
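Writing HS for homogeneously Suslin and wHS for weakly homogeneously Suslin (our abbreviations), the induction just described can be displayed schematically:

```latex
% Chain of implications behind Theorem 5.3, with n Woodin cardinals
% \delta_1 < \dots < \delta_n and a measurable \kappa above them:
\[
\Pi^1_1\ \kappa\text{-HS}
\;\Rightarrow\; \Sigma^1_2\ \delta_n^{+}\text{-wHS}
\;\Rightarrow\; \Pi^1_2\ \delta_{n-1}^{+}\text{-HS}
\;\Rightarrow\; \cdots
\;\Rightarrow\; \Pi^1_n\ \delta_1^{+}\text{-HS}
\;\Rightarrow\; \Sigma^1_{n+1}\ \delta_1^{+}\text{-wHS}
\;\Rightarrow\; \Pi^1_{n+1}\ \text{HS}.
\]
```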

Martin and Steel were also able to show, for the previous theorem, that having the measurable cardinal sitting above the n Woodins is necessary. But since each Woodin cardinal has a stationary set of measurables below it, we can see that the previous theorem also gives:

Theorem 5.4. If there are infinitely many Woodin cardinals, then all projective sets are determined.

Almost immediately after Martin and Steel achieved this, Hugh Woodin was able to take it a step further and proved:

Theorem 5.5. If there are infinitely many Woodin cardinals with a measurable cardinal above them, then AD holds in L(R).

This means that, relative to large cardinals, all definable sets of reals (not just the projective ones) are determined. Also, Woodin was able to justify our immediate use of much larger cardinals to get determinacy above the analytic sets by showing in 1989 that ZFC + Det(Π¹₂) and ZFC + "there exists a Woodin cardinal" are equiconsistent.
