
Theory of Probability Notes

These are notes made in preparation for oral exams on the following topics in probability: random walks, martingales, and Markov chains. Textbook used: "Probability: Theory and Examples," Durrett.

Durrett Chapter 4 (Random Walks). Definition: Let X_1, X_2, … be iid taking values in R^d and let S_n = X_1 + … + X_n. Then S_n is a random walk.

This section is concerned with the properties of the sequence S_1(ω), S_2(ω), …

Stopping Times

Stopping Time: Let (Ω, F, (F_n)_{n≥0}, P) be a filtered probability space. A stopping time T : Ω → Z_+ ∪ {+∞} is a random variable such that {T ≤ n} ∈ F_n for all n ≥ 0, or equivalently {T = n} ∈ F_n for all n ≥ 0. Constant times (e.g., T ≡ 10) are always stopping times.

Let (X_n)_{n≥0} be an adapted process and fix A ∈ B(R). Then the first entry time into A, notated T_A := inf{n ≥ 0 : X_n ∈ A} with the convention inf ∅ = +∞, is a stopping time.
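As a quick illustration, the first entry time can be computed path by path; a minimal sketch for a simple symmetric walk on Z (the function name and the cutoff max_n are illustrative choices, not from Durrett):

```python
import random

def first_entry_time(walk_steps, A, max_n=10_000):
    """T_A = inf{n >= 0 : X_n in A} along one sampled path of the walk."""
    x = 0                       # X_0 = 0
    if x in A:
        return 0
    for n in range(1, max_n + 1):
        x += next(walk_steps)   # X_n = X_{n-1} + next step
        if x in A:
            return n
    return None                 # stands in for the convention inf(empty set) = +infinity

rng = random.Random(0)
steps = iter(lambda: rng.choice([-1, 1]), None)   # iid fair +/-1 steps
T = first_entry_time(steps, A={3})
```

Since the walk starts at 0, any finite first entry time into {3} is at least 3.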

Associated with each stopping time N is a σ-field F_N: the information known at time N. F_N is the collection of sets A satisfying A ∩ {N = n} ∈ F_n for all n < ∞. On {N = n}, A must be measurable with respect to the information known at time n.

Stopping Times Lemma: Let S, T, (T_n)_{n≥0} be stopping times on the filtered probability space (Ω, F, (F_n)_{n≥0}, P). Then the following are stopping times: i) S ∧ T := min(S, T), ii) S ∨ T := max(S, T), iii) S + T, iv) lim inf_n T_n and inf_n T_n, v) lim sup_n T_n and sup_n T_n.

Proof of i): {S ∧ T ≤ n} = {S ≤ n} ∪ {T ≤ n} ∈ F_n. ∎

Notational Conventions for a Measurable Space (S, 𝒮): Given the random sequence S_1(ω), S_2(ω), … defined above, let Ω := {(ω_1, ω_2, …) : ω_i ∈ S}, F := 𝒮 × 𝒮 × …, P := μ × μ × …, where μ is the distribution of the X_i, and X_n(ω) := ω_n.

Finite Permutation of N: a map π from N onto N such that π(i) ≠ i for only finitely many i.

Permutable Event A: an event A ∈ F is permutable if π^{-1}A ≡ {ω : πω ∈ A} = A for any finite permutation π.

Symmetric Function: a function f : R^n → R is said to be symmetric if f(x_1, x_2, …, x_n) = f(x_{π(1)}, x_{π(2)}, …, x_{π(n)}) for each (x_1, …, x_n) ∈ R^n and each permutation π of {1, 2, …, n}.

Exchangeable σ-field E: Let X_1, X_2, … be a sequence of r.v.s on (Ω, F, P). Let F_n := {f : R^n → R symmetric, measurable} and E_n := σ(F_n, X_{n+1}, X_{n+2}, …). The exchangeable σ-field is E := ∩_{n=1}^∞ E_n. The next result shows that for an iid sequence there is no difference between E and the tail σ-field T: both are trivial.

Hewitt-Savage 0-1 Law (generalization of Kolmogorov's 0-1 law) D4.1.1: Let E be the exchangeable σ-field of the iid sequence X_1, X_2, …. Then P(A) ∈ {0, 1} for any A ∈ E.

Random Walk Possibilities (consequence of the Hewitt-Savage law) D4.1.2: For a random walk on R, there are four possibilities, one of which has probability 1: i) S_n = 0 for all n; ii/iii) S_n → +∞ or S_n → -∞; iv) -∞ = lim inf S_n < lim sup S_n = ∞.

Proof: Theorem D4.1.1 implies lim sup S_n is a constant c ∈ [-∞, ∞]. Let S_n' = S_{n+1} - X_1. Since (S_n') has the same distribution as (S_n), it follows that c = c - X_1. If c is finite, subtracting c from both sides we conclude X_1 ≡ 0 and i) occurs. Turning the last statement around, we see that if X_1 is not identically zero, then c = -∞ or ∞. The same analysis applies to the lim inf. Discarding the impossible combination lim sup S_n = -∞ with lim inf S_n = +∞, we have proved the result. ∎

Exercises D4.1.1-7

Theorem D4.1.3: Let X_1, X_2, … be iid, F_n = σ(X_1, …, X_n), and let T be a stopping time with P(T < ∞) > 0. Conditional on {T < ∞}, (X_{T+n})_{n≥1} is independent of F_T and has the same distribution as the original sequence.

4/22/2020 JodinMorey

Let T be a stopping time, T_0 := 0, T_n(ω) := T_{n-1}(ω) + T(θ^{T_{n-1}}(ω)) for n ≥ 1, and t_n(ω) := T(θ^{T_{n-1}}(ω)), where θ is the shift operator. We now extend D4.1.3:

Theorem D4.1.4: Suppose P(T < ∞) = 1. Then the "random vectors" V_n = (t_n, X_{T_{n-1}+1}, …, X_{T_n}) are independent and identically distributed.

Proof: It is clear from Theorem D4.1.3 that V_n and V_1 have the same distribution. The independence also follows from Theorem D4.1.3 and induction, since V_1, …, V_{n-1} ∈ F(T_{n-1}). ∎

Exercises D4.1.8-11

Wald's Identity: Let ξ_1, ξ_2, … be iid with finite mean μ := E[ξ_n]. Set S_0 = 0 and let S_n = ξ_1 + … + ξ_n. Let T be a stopping time with E[T] < ∞. Then E[S_T] = μ E[T]. Exercises D4.1.12-14

Theorem D4.1.6 (Wald's 2nd Equation): Let ξ_1, ξ_2, … be iid with μ := E[ξ_n] = 0 and E[ξ_n^2] =: σ^2 < ∞, and let S_n = ξ_1 + … + ξ_n. If T is a stopping time with E[T] < ∞, then E[S_T^2] = σ^2 E[T].
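Wald's identity lends itself to a Monte Carlo sanity check; a sketch assuming iid steps uniform on {1, 2} (so μ = 1.5) and the stopping time T = inf{n : S_n ≥ 10}, which has E[T] < ∞ (all names here are illustrative):

```python
import random

rng = random.Random(42)
mu = 1.5                               # E[step] for steps uniform on {1, 2}

def run_until(target=10):
    """One walk with iid steps in {1, 2}; stop at T = inf{n : S_n >= target}."""
    s = n = 0
    while s < target:
        s += rng.choice([1, 2])
        n += 1
    return s, n                        # (S_T, T) for this path

trials = [run_until() for _ in range(20_000)]
mean_ST = sum(s for s, _ in trials) / len(trials)
mean_T = sum(n for _, n in trials) / len(trials)
# Wald's identity predicts mean_ST ~= mu * mean_T
```

With 20,000 trials the two sides should agree to within ordinary sampling error.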

Theorem D4.1.7 (used in the proof of D4.1.6): Let X_1, X_2, … be iid with E[X_n] = 0 and E[X_n^2] = 1, and let T_c = inf{n ≥ 1 : |S_n| > c n^{1/2}}. Then E[T_c] < ∞ for c < 1 and E[T_c] = ∞ for c ≥ 1.

Lemma D4.1.8: If T is a stopping time with E[T] = ∞, then E[X_{T∧n}^2]/E[T∧n] → 0.

Recurrence

Recurrent Value: x ∈ S is considered recurrent if, for every ε > 0, we have P(|S_n - x| < ε i.o.) = 1.

Possible Value (of a random walk): x ∈ S is a possible value if, for any ε > 0, there exists n such that P(|S_n - x| < ε) > 0. Let V be the set of all recurrent values and U the set of all possible values. Then:

Theorem D4.2.1: V is either ∅ or a closed subgroup of R^d. In the latter case, V = U.

Transient/Recurrent: If V = ∅, the random walk is said to be transient; otherwise it is called recurrent.

Let τ_0 = 0 and τ_n = inf{m > τ_{n-1} : S_m = 0} be the time of the nth return to 0.

Theorem D4.2.2: For any random walk, the following are equivalent: i) P(τ_1 < ∞) = 1; ii) P(S_m = 0 i.o.) = 1; iii) Σ_{m=0}^∞ P(S_m = 0) = ∞.

Proof: If P(τ_1 < ∞) = 1, then P(τ_n < ∞) = 1 for all n and P(S_m = 0 i.o.) = 1. Let V := Σ_{m=0}^∞ 1_{S_m=0} = Σ_{n=0}^∞ 1_{τ_n<∞} be the number of visits to 0, counting the visit at time 0. Taking expected values and using Fubini's theorem to put the expected value inside the sum: E[V] = Σ_{m=0}^∞ P(S_m = 0) = Σ_{n=0}^∞ P(τ_n < ∞) = Σ_{n=0}^∞ P(τ_1 < ∞)^n = 1/(1 - P(τ_1 < ∞)). The 2nd equality shows that ii ⇔ iii and, in combination with the last two, shows that if i) is false then iii) is false (i.e., iii ⇒ i). ∎

Theorem D4.2.3 (6.33): The simple (lattice) symmetric random walk (SSRW) S_n on Z^d is recurrent for d = 1, 2 and transient for d ≥ 3. Examples 6.27-6.32.

Exercise 6.34: Consider SSRW on Z. Show that E_0[T_0] = +∞. Prove the same for SSRW on Z^2.
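The dichotomy in Theorem D4.2.3 can be seen numerically by estimating the chance that the SSRW revisits the origin within a fixed horizon in d = 1 versus d = 3; a rough simulation sketch (the horizon and trial counts are arbitrary choices):

```python
import random

rng = random.Random(0)

def returned(d, n_steps=2000):
    """Does the SSRW on Z^d revisit the origin within n_steps?"""
    pos = [0] * d
    for _ in range(n_steps):
        i = rng.randrange(d)           # pick a coordinate direction
        pos[i] += rng.choice([-1, 1])  # step +/-1 along it
        if all(c == 0 for c in pos):
            return True
    return False

p1 = sum(returned(1) for _ in range(500)) / 500   # d = 1: recurrent
p3 = sum(returned(3) for _ in range(500)) / 500   # d = 3: transient
```

In d = 1 the estimate should be near 1, while in d = 3 the total return probability is well below 1 (about 0.34), so the two estimates separate clearly.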

Theorem D4.2.6: The convergence (divergence) of Σ_n P(|S_n| < ε) for any single value of ε > 0 is sufficient to determine the transience (recurrence) of S_n. The proof of this theorem uses the following lemmas:

Lemma D4.2.4 (generalization of D4.2.2): If Σ_{n=1}^∞ P(|S_n| < ε) < ∞, then P(|S_n| < ε i.o.) = 0. If Σ_{n=1}^∞ P(|S_n| < ε) = ∞, then P(|S_n| < 2ε i.o.) = 1.

Lemma D4.2.5: Let m ≥ 2 be an integer. Then Σ_{n=0}^∞ P(|S_n| < mε) ≤ (2m)^d Σ_{n=0}^∞ P(|S_n| < ε).

Theorem D4.2.7 (Chung-Fuchs Theorem): Suppose d = 1. If the weak law of large numbers holds in the form S_n/n → 0 in probability, then S_n is recurrent.

Theorem D4.2.8: If S_n is a random walk in R^2 and S_n/n^{1/2} ⇒ a non-degenerate normal distribution (non-degenerate means dim support(lim_n S_n/n^{1/2}) = 2), then S_n is recurrent. Remark: the conclusion is also true if the limit is degenerate, but in that case the random walk is essentially one- (or zero-) dimensional, and the result follows from the Chung-Fuchs theorem.

Let φ(t) := E[e^{itX_j}] be the characteristic function of one step of the random walk. Then we have:

Theorem D4.2.9: Let δ > 0. S_n is recurrent if and only if ∫_{(-δ,δ)^d} Re(1/(1 - φ(y))) dy = ∞.

Theorem D4.2.10: Let δ > 0. S_n is recurrent if and only if sup_{r<1} ∫_{(-δ,δ)^d} Re(1/(1 - rφ(y))) dy = ∞.

The next two lemmas are used in the proof of D4.2.10:

Lemma D4.2.11 (Parseval Relation): Let μ and ν be probability measures on R^d with characteristic functions φ and ψ. Then ∫ ψ(t) μ(dt) = ∫ φ(x) ν(dx). Proof: Since e^{it·x} is bounded, by Fubini: ∫ ψ(t) μ(dt) = ∫∫ e^{itx} ν(dx) μ(dt) = ∫∫ e^{itx} μ(dt) ν(dx) = ∫ φ(x) ν(dx). ∎

Lemma D4.2.12: If |x| ≤ π/3, then 1 - cos x ≥ x^2/4. Proof: It suffices to prove the result for x > 0. If z ≤ π/3, then cos z ≥ 1/2, so sin y = ∫_0^y cos z dz ≥ y/2, and 1 - cos x = ∫_0^x sin y dy ≥ ∫_0^x (y/2) dy = x^2/4. ∎

Truly Three-Dimensional Random Walk: A random walk S_n in R^3 is considered truly three-dimensional if the distribution of X_1 satisfies P(X_1 · θ ≠ 0) > 0 for all nonzero vectors θ.

Theorem D4.2.13: No truly three-dimensional random walk is recurrent.

In conclusion:
- S_n is recurrent in d = 1 if S_n/n → 0 in probability.
- S_n is recurrent in d = 2 if S_n/n^{1/2} ⇒ a non-degenerate normal distribution in two dimensions.
- S_n is transient in d ≥ 3 if it is "truly three-dimensional."

Durrett Chapter 5 (Martingales).

Conditional Expectation: Consider a probability space (Ω, F, P) and a random variable X ∈ L^1(Ω, F, P). Let G ⊂ F be a sub-σ-field. We define E[X|G], the conditional expectation of X given G, as a random variable Y such that:

i) Y is G-measurable and E|Y| < ∞.

ii) E[E[X|G] 1_A] = E[Y 1_A] = E[X 1_A] for any A ∈ G.

Y defined in this way is also unique (up to a.s. equality).

Conditional Expectation Characterizations:
- E[X|A], where A is an event, is a scalar: the expected value of X given that A occurs.
- E[X|Y], where Y is a r.v., is a random variable whose value at ω ∈ Ω is E[X|A], where A is the event {Y = Y(ω)}.
- E[X|1_A] is the case Y = 1_A, where 1_A(ω) is 1 if ω ∈ A and 0 otherwise. This is the random variable that returns E[X|A] if ω ∈ A and E[X|A^c] if ω ∉ A.

Exercise D5.1.1: Generalize the last argument (not shown in this document; see Durrett) to show that if X_1 = X_2 on B ∈ G, then E[X_1|G] = E[X_2|G] a.s. on B.

Absolute Continuity: Let ν and μ be two σ-finite measures on (Ω, F). We say ν is absolutely continuous with respect to μ, and write ν ≪ μ, if μ(A) = 0 ⇒ ν(A) = 0 for each A ∈ F.

Radon-Nikodym Lemma: Let ν and μ be two σ-finite measures on (Ω, F). Then ν ≪ μ if and only if there exists an F-measurable function f : Ω → [0, ∞) such that ν(B) = ∫_B f dμ for all B ∈ F.

Existence of Conditional Expectation: Let X ∈ L^1(Ω, F, P) and let G be a sub-σ-field. Then E[X|G] exists.

Exercise D5.1.2 (Bayes' formula): Let G ∈ G and show that P(G|A) = ∫_G P(A|G) dP / ∫_Ω P(A|G) dP.

Properties of Conditional Expectation

Trivial Conditional Expectations: a) If X ∈ G (i.e., X is G-measurable), then E[X|G] = X a.s. b) If G = {∅, Ω}, then E[X|G] = E[X]. c) If X is independent of G, then E[X|G] = E[X] a.s. To prove c), observe that E[X] is G-measurable (the preimage of a constant is in {∅, Ω} ⊂ G) and for any A ∈ G we have E[X 1_A] = E[X] E[1_A] = E[E[X] 1_A].

Lemma 6.2.1: Suppose X and Z are independent random variables. Let φ : R^2 → R be a measurable function such that E|φ(X, Z)| < ∞, and let g(z) = E[φ(X, z)]. Then E[φ(X, Z)|Z] = g(Z) a.s.

Linearity of Conditional Expectation: holds for L^1 random variables.

Monotonicity: If X ≤ Y and X, Y ∈ L^1, then E[X|G] ≤ E[Y|G] a.s.

Pre-Tower Property: If F ⊂ G and E[X|G] is F-measurable, then E[X|F] = E[X|G].

Tower Property: Let H ⊂ G be sub-σ-fields of F. Then E[E[X|G]|H] = E[X|H] a.s.

Take out what is known: If X is G-measurable, then for any random variable Y such that E|Y| < ∞ and E|XY| < ∞, we have E[XY|G] = X E[Y|G] a.s.
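On a finite probability space, conditional expectation given a partition-generated σ-field is just a block average, so the tower property can be verified exactly; a small sketch (the space, random variable, and σ-fields are illustrative):

```python
from fractions import Fraction as F

omega = [0, 1, 2, 3]                  # uniform measure: P({w}) = 1/4 for each point
X = {w: F(w * w) for w in omega}      # X(w) = w^2

def cond_exp(f, partition):
    """E[f | sigma(partition)]: under the uniform measure, the average of f
    over each block of the partition, constant on that block."""
    out = {}
    for block in partition:
        avg = sum(f[w] for w in block) / F(len(block))
        for w in block:
            out[w] = avg
    return out

G = [{0, 1}, {2, 3}]                  # finer sub-sigma-field
H = [{0, 1, 2, 3}]                    # coarser (trivial) sub-sigma-field, H in G
lhs = cond_exp(cond_exp(X, G), H)     # E[E[X|G] | H]
rhs = cond_exp(X, H)                  # E[X|H]
```

Both sides come out to the constant E[X] = 7/2 here, since H is trivial.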

Conditional MCT: Let X, X_n ≥ 0 be integrable random variables with X_n ↑ X. Then E[X_n|G] ↑ E[X|G] a.s.

Conditional Jensen's Inequality: If φ : R → R is convex and E|X| < ∞ and E|φ(X)| < ∞, then E[φ(X)|G] ≥ φ(E[X|G]) a.s.

L^p Contraction: For p ≥ 1, E[|E[X|G]|^p] ≤ E[|X|^p].

Conditional Fatou's Lemma: Let X_n ≥ 0 be integrable random variables with lim inf_n X_n integrable. Then E[lim inf_n X_n|G] ≤ lim inf_n E[X_n|G] a.s.

Conditional DCT: If X_n → X a.s. and |X_n| ≤ Y for some integrable random variable Y, then E[X_n|G] → E[X|G] a.s.

Chebyshev's Inequality: If a > 0, then P(|X| ≥ a|F) ≤ a^{-2} E[X^2|F]. Exercises D5.1.3-7

Mean Square Error: Suppose X ∈ L^2(Ω, F, P). Then for any Y ∈ L^2(Ω, G, P): E[(X - Y)^2] ≥ E[(X - E[X|G])^2], with equality if and only if Y = E[X|G] a.s. Exercises D5.1.8-11

Symmetrization by Conditional Expectation: Suppose X_1, X_2, … is an iid sequence. Then for any n ≥ k: E[f(X_1, X_2, …, X_k)|E_n] = (1/(n)_k) Σ_i f(X_{i_1}, X_{i_2}, …, X_{i_k}), where the sum is over all k-tuples of distinct integers 1 ≤ i_1, …, i_k ≤ n and (n)_k = n(n-1)…(n-k+1) is the number of such k-tuples.

5.2 Martingales, Almost Sure Convergence

Discrete-Time Martingales

Filtration: Let (Ω, F, P) be a probability space. A filtration (F_n)_{n≥0} is an increasing sequence of sub-σ-fields F_0 ⊂ F_1 ⊂ … ⊂ F.

Adapted: A sequence of random variables (X_n)_{n≥0} defined on (Ω, F, P) is said to be adapted to the filtration (F_n)_{n≥0} if X_n is F_n-measurable for each n ≥ 0.

Martingale: A sequence of random variables (X_n)_{n≥0} defined on (Ω, F, P) is said to be a martingale with respect to the filtration (F_n)_{n≥0} if: a) (X_n)_{n≥0} is adapted to (F_n)_{n≥0}; b) E|X_n| < ∞ for each n; c) E[X_{n+1}|F_n] = X_n a.s. for each n.

In c), if "=" is replaced by "≤" or "≥", then (X_n)_{n≥0} is said to be a super-martingale or sub-martingale, respectively.

Exercise 5.2: Show that if (X_n)_{n≥0} is a martingale with respect to the filtration (F_n)_{n≥0}, then it is also a martingale with respect to its canonical filtration (σ(X_0, X_1, …, X_n))_{n≥0}.

Theorem D5.2.1: If X_n is a super-martingale, then for n > m, E[X_n|F_m] ≤ X_m.
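Property c) can be checked exhaustively for the fair ±1 walk S_n: conditioning on F_n amounts to fixing the first n steps, so average S_{n+1} over the next coin. A minimal sketch:

```python
from itertools import product

# For each length-n path (an atom of F_n), E[S_{n+1} | F_n] is the average of
# S_n + step over the fair coin; the martingale property says this equals S_n.
n = 5
violations = []
for path in product([-1, 1], repeat=n):
    s_n = sum(path)                                   # S_n, F_n-measurable
    e_next = sum(s_n + step for step in (-1, 1)) / 2  # E[S_{n+1} | F_n]
    if e_next != s_n:
        violations.append(path)
```

The check is exact (no sampling), since all 2^n paths are enumerated.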

Theorem D5.2.2: i) If X_n is a sub-martingale, then for n > m, E[X_n|F_m] ≥ X_m. ii) If X_n is a martingale, then for n > m, E[X_n|F_m] = X_m.

Theorem D5.2.3: If X_n is a martingale with respect to F_n and φ is a convex function with E|φ(X_n)| < ∞ for all n, then φ(X_n) is a sub-martingale with respect to F_n. Consequently, if p ≥ 1 and E|X_n|^p < ∞ for all n, then |X_n|^p is a sub-martingale with respect to F_n.

Theorem D5.2.4: If X_n is a sub-martingale with respect to F_n and φ is a convex, non-decreasing function with E|φ(X_n)| < ∞ for all n, then φ(X_n) is a sub-martingale with respect to F_n. Consequently: i) if X_n is a sub-martingale, then (X_n - a)^+ is a sub-martingale; ii) if X_n is a super-martingale, then X_n ∧ a is a super-martingale.

Exercise D5.2.3: Give an example of a sub-martingale X_n so that X_n^2 is a super-martingale. Hint: X_n does not have to be random.

Doob's Martingale Transform: Call a sequence of random variables (H_n)_{n≥1} predictable with respect to a filtration (F_n)_{n≥0} if H_n is F_{n-1}-measurable for each n ≥ 1. Let (X_n)_{n≥0} be an (F_n)_{n≥0}-martingale. Define (H · X)_0 = 0 and (H · X)_n = Σ_{k=1}^n H_k (X_k - X_{k-1}).

Doob's Martingale Transform Lemma: Assume that X_n is a martingale and (H · X)_n ∈ L^1 for each n. Then (H · X)_n is an (F_n)_{n≥0}-martingale.
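A sketch of why the transform stays a martingale, for the fair ±1 walk with an arbitrary predictable betting rule H (bet 2 after a net loss, else 1; an illustrative rule, not from Durrett): given the first n steps, H_{n+1} is fixed, so the next increment of (H · X) averages to zero.

```python
from itertools import product

# Exhaustively verify E[(H.X)_{n+1} - (H.X)_n | F_n] = 0 for the fair +/-1 walk.
n = 4
bad = []
for path in product([-1, 1], repeat=n):
    s = sum(path)                       # walk position X_n after the first n steps
    h_next = 2 if s < 0 else 1          # H_{n+1}: F_n-measurable (predictable)
    # increment of (H.X) at time n+1 is h_next * step; average over the fair coin
    avg_gain = sum(h_next * step for step in (-1, 1)) / 2
    if avg_gain != 0:
        bad.append(path)
```

This is the "you can't beat a fair game with a predictable betting system" reading of the lemma.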

Theorem D5.2.5: Let X_n be a super-martingale. If H_n ≥ 0 is predictable and each H_n is bounded, then (H · X)_n is a super-martingale (similarly for sub-martingales; for martingales the sign condition on H_n is not needed).

Doob's Decomposition: Any sub-martingale X_n with respect to F_n can be written uniquely as the sum of a martingale M_n with respect to F_n and an increasing predictable process A_n with A_0 = 0.

Explicitly, let D_0 = X_0 and D_j = X_j - E[X_j|F_{j-1}] ∈ F_j for j ≥ 1. Set M_n = D_0 + D_1 + … + D_n ∈ F_n, A_0 = 0, and A_n = X_n - M_n = E[X_n|F_{n-1}] - (D_0 + … + D_{n-1}) ∈ F_{n-1} for n ≥ 1.
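For the fair ±1 walk, the sub-martingale X_n = S_n^2 satisfies E[X_n|F_{n-1}] = X_{n-1} + 1, so the decomposition is explicit: A_n = n and M_n = S_n^2 - n. A sketch that builds both pieces along one sampled path:

```python
import random

rng = random.Random(1)

# Doob decomposition of X_n = S_n^2 for a fair +/-1 walk:
# predictable increasing part A_n = n, martingale part M_n = S_n^2 - n.
steps = [rng.choice([-1, 1]) for _ in range(20)]
S = [0]
for u in steps:
    S.append(S[-1] + u)

X = [s * s for s in S]               # the sub-martingale X_n
A = list(range(len(S)))              # A_n = n: deterministic, hence predictable
M = [x - a for x, a in zip(X, A)]    # M_n = X_n - A_n
```

By construction X_n = M_n + A_n pathwise, with A increasing and A_0 = 0.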

Stopping Time Martingale Proposition: If T is a stopping time and (X_n)_{n≥0} is a super-martingale, then (X_{T∧n})_{n≥0} is a super-martingale.

Stopped Martingale Corollary: If T is a stopping time and (X_n)_{n≥0} is a martingale, then (X_{T∧n})_{n≥0} is a martingale.

Let T be a stopping time with E[T] < ∞; then E[T] = Σ_{i=1}^∞ P(T ≥ i).

Martingale Convergence

Let (Ω, F, (F_n)_{n≥0}, P) be a filtered probability space and let (X_n)_{n≥0} be any adapted process. Let a < b in R. Denote by U_n[a, b] the number of upcrossings from a to b by time n, i.e., the largest k ≥ 0 such that there are (random) times 0 ≤ s_1 < t_1 < … < s_k < t_k ≤ n with X_{s_i} ≤ a and X_{t_i} ≥ b for each i.

Doob's Upcrossing Inequality: If (X_n)_{n≥0} is a sub-martingale, then E[U_n[a, b]] ≤ (E[(X_n - a)^+] - E[(X_0 - a)^+]) / (b - a).

We use the upcrossing inequality in the following theorem to show that the sub-martingale makes only finitely many upcrossings of each interval, and therefore converges.

Martingale Convergence: Suppose that (X_n)_{n≥0} is a sub-martingale with sup_n E[X_n^+] < ∞. Then X_n → X a.s. for some X with E|X| < ∞. This gives us the following two corollaries:

L^1-Bounded Martingale Convergence: If (X_n)_{n≥0} is a martingale with sup_n E|X_n| < ∞, then X_n → X a.s. and E|X| < ∞.

Non-negative Super-Martingale Convergence: If (X_n)_{n≥0} is a super-martingale with X_n ≥ 0, then X_n → X a.s. and E[X] ≤ E[X_0]. Exercises D5.2.4-D5.2.14

Exercise D5.2.5: Let X_n = Σ_{m≤n} 1_{B_m} and suppose B_n ∈ F_n. What is the Doob decomposition for X_n?

Exercise D5.2.13 (the switching principle): Suppose X_n^1 and X_n^2 are super-martingales with respect to F_n, and N is a stopping time so that X_N^1 ≥ X_N^2. Show that Y_n = X_n^1 1_{N>n} + X_n^2 1_{N≤n} is a super-martingale.

Examples

Theorem D5.3.1 (Bounded Increments): Let X_1, X_2, … be a martingale with |X_{n+1} - X_n| ≤ M < ∞. Let C := {lim X_n exists and is finite} and D := {lim sup X_n = +∞ and lim inf X_n = -∞}. Then P(C ∪ D) = 1. Exercises D5.3.1-D5.3.4

Theorem D5.3.2 (2nd Borel-Cantelli Lemma): Let (F_n)_{n≥0} be a filtration with F_0 = {∅, Ω} and (A_n)_{n≥1} a sequence of events with A_n ∈ F_n. Then {A_n i.o.} = {Σ_{n=1}^∞ P(A_n|F_{n-1}) = ∞}. Exercises D5.3.5-6

Polya's Urn Scheme

Radon-Nikodym Derivatives

Let μ be a finite measure and ν a probability measure on (Ω, F). Let F_n ↑ F be σ-fields (i.e., σ(∪ F_n) = F). Let μ_n and ν_n be the restrictions of μ and ν to F_n.

Theorem D5.3.3: Suppose μ_n ≪ ν_n for all n. Let X_n = dμ_n/dν_n and let X = lim sup X_n. Then μ(A) = ∫_A X dν + μ(A ∩ {X = ∞}).

Lemma D5.3.4: X_n = dμ_n/dν_n is a martingale with respect to F_n on (Ω, F, ν).

Branching Processes: Galton-Watson

Let ξ_i^n, i ≥ 1, n ≥ 0, be iid nonnegative integer-valued random variables with common mean μ := E[ξ_i^n] ∈ (0, ∞). Define Z_0 = 1 and Z_{n+1} = ξ_1^n + … + ξ_{Z_n}^n if Z_n > 0, and Z_{n+1} = 0 if Z_n = 0.

Then (Z_n/μ^n)_{n≥0} is a martingale with respect to F_n = σ(ξ_i^m : i ≥ 1, 0 ≤ m < n).

Theorem D5.3.7: If μ < 1, then Z_n = 0 for all n sufficiently large, so Z_n/μ^n → 0.

Theorem D5.3.8: If μ = 1 and P(ξ_i^m = 1) < 1, then Z_n = 0 for all n sufficiently large.

Generating Function for the Offspring Distribution p_k: φ(s) := Σ_{k≥0} p_k s^k on s ∈ [0, 1], where p_k = P(ξ_i^m = k). This generating function is used in the proof of the following theorem:

Theorem D5.3.9 (probability of extinction): P(Z_n = 0 for some n) = P(∪_{n=0}^∞ {Z_n = 0}) = ρ, the unique fixed point of φ in [0, 1).
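A short simulation of the chain and the associated martingale Z_n/μ^n; the offspring law (0, 1, 2 with probabilities 1/4, 1/2, 1/4, so μ = 1) is an illustrative choice, not one fixed by the text:

```python
import random

rng = random.Random(7)
offspring, probs = (0, 1, 2), (0.25, 0.5, 0.25)
mu = sum(k * p for k, p in zip(offspring, probs))   # mean offspring, here 1.0

Z = [1]                                             # Z_0 = 1
while Z[-1] > 0 and len(Z) < 50:
    # each of the Z_n current individuals has an iid number of children
    Z.append(sum(rng.choices(offspring, probs)[0] for _ in range(Z[-1])))

W = [z / mu ** n for n, z in enumerate(Z)]          # the martingale Z_n / mu^n
```

Since μ = 1 here (the critical case) and P(ξ = 1) < 1, Theorem D5.3.8 says the simulated population dies out eventually with probability one.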

So if μ > 1, then ρ < 1; that is, P(Z_n > 0 for all n) > 0.

Theorem D5.3.10: W := lim Z_n/μ^n is not ≡ 0 if and only if Σ_k p_k k log k < ∞.

Exercises D5.3.10-12. Exercise D5.3.12: Show that if P(lim Z_n/μ^n = 0) < 1, then it is = ρ and hence {lim Z_n/μ^n > 0} = {Z_n > 0 for all n} a.s.

Martingale Inequalities

Stopping Time Submartingale Inequality (Proposition 5.24): If (X_m)_{m≥0} is a sub-martingale and T is a stopping time with P(T ≤ k) = 1 for some k ∈ Z_+, then E[X_0] ≤ E[X_T] ≤ E[X_k].

Corollary: If (X_m)_{m≥0} is a martingale and T is a stopping time with P(T ≤ k) = 1 for some k ∈ Z_+, then E[X_0] = E[X_T] = E[X_k]. Exercises D5.4.1-3

Doob's Maximal Inequality: Let (X_m)_{m≥0} be a nonnegative sub-martingale, X_n* := max_{0≤m≤n} X_m, λ > 0, and A := {X_n* ≥ λ}. Then P(A) ≤ (1/λ) E[X_n 1_A] ≤ (1/λ) E[X_n].

Relatedly: if (X_n) is a sub-martingale, then (X_n^+) is a nonnegative sub-martingale, and if (X_n) is a martingale, then (|X_n|) is a nonnegative sub-martingale.

Observe that E[X_n|F_{n-1}] = X_{n-1} ⇒ E[X_n 1_A] = E[X_{n-1} 1_A] for all A ∈ F_{n-1}. Exercise D5.4.4.

Exercise D5.4.5: Let X_n be a martingale with X_0 = 0 and E[X_n^2] < ∞. Show that P(max_{1≤m≤n} X_m ≥ λ) ≤ E[X_n^2]/(E[X_n^2] + λ^2). Hint: use the fact that (X_n + c)^2 is a submartingale and optimize over c.

L^p-Maximal Inequality: Let (X_n)_{n≥0} be a nonnegative sub-martingale, X_n* = max_{0≤m≤n} X_m. Fix p > 1. Then E[(X_n*)^p] ≤ (p/(p-1))^p E[X_n^p].

Theorem D5.4.4: Let X_n be a sub-martingale and log^+ x := max(log x, 0). Then E[X_n*] ≤ (1 - e^{-1})^{-1} (1 + E[X_n^+ log^+(X_n^+)]).

Exercise D5.4.6: Prove Theorem D5.4.4 by carrying out the following steps: i) imitate the proof of D5.4.2 but use the trivial bound P(A) ≤ 1 for λ ≤ 1 to show that E[X_n* ∧ M] ≤ 1 + ∫ X_n^+ log^+(X_n* ∧ M) dP.

L^p-Convergence Theorem for Martingales (see the L^1-bounded convergence theorem): Suppose (X_n)_{n≥0} is a martingale with sup_n E[|X_n|^p] < ∞ for some p > 1. Then X_n → X a.s. and in L^p.

Theorem D5.4.6 (Orthogonality of Martingale Increments): Let X_n be a martingale with E[X_n^2] < ∞ for all n. If m ≤ n and Y ∈ F_m has E[Y^2] < ∞, then E[(X_n - X_m)Y] = 0.
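Doob's maximal inequality can be checked exactly for the nonnegative sub-martingale |S_m| (fair ±1 walk) by enumerating all 2^n paths; n and λ below are arbitrary choices:

```python
from itertools import product

# Exact check of P(max_m |S_m| >= lam) <= E[|S_n|] / lam by full enumeration.
n, lam = 10, 4
paths = list(product([-1, 1], repeat=n))

def partial_sums(p):
    s, out = 0, [0]
    for u in p:
        s += u
        out.append(s)
    return out                    # (S_0, S_1, ..., S_n) along path p

p_max = sum(max(abs(x) for x in partial_sums(p)) >= lam
            for p in paths) / len(paths)
bound = sum(abs(partial_sums(p)[-1]) for p in paths) / (lam * len(paths))
```

Because every path is enumerated, both sides are exact probabilities/expectations rather than Monte Carlo estimates.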

Theorem D5.4.7 (Conditional Variance Formula): If X_n is a martingale with E[X_n^2] < ∞ for all n, then E[(X_n - X_m)^2|F_m] = E[X_n^2|F_m] - X_m^2. Exercises D5.4.7-5.4.9

Uniform Integrability and L^1-Convergence of Martingales

Uniform Integrability: A family of random variables (X_α)_{α∈A} is said to be uniformly integrable (UI) if sup_{α∈A} E[|X_α| 1_{|X_α|>M}] → 0 as M → ∞.

Remark: Since E|X_α| ≤ M + E[|X_α| 1_{|X_α|>M}], we have that (X_α)_{α∈A} UI ⇒ (X_α)_{α∈A} is L^1-bounded, i.e., sup_{α∈A} E|X_α| < ∞.

Sub-σ-field UI Lemma: Let X ∈ L^1(Ω, F, P). Then {E[X|G] : G a σ-field ⊂ F} is uniformly integrable.

Exercise D5.5.1: Let φ ≥ 0 be any function with φ(x)/x → ∞ as x → ∞, for example φ(x) = x^p with p > 1 or φ(x) = x log^+ x. If E[φ(|X_i|)] ≤ C for all i ∈ I, then {X_i : i ∈ I} is uniformly integrable.

Convergence in Probability Equivalency Theorem: If X_n → X in probability, then TFAE:
- {X_n : n ≥ 0} is uniformly integrable.
- X_n → X in L^1, i.e., E|X_n - X| → 0.
- E|X_n| → E|X| < ∞.

Convergence in Probability Corollary: As a consequence of the previous theorem, X_n → X in probability with (X_n)_{n≥0} UI iff X_n → X in L^1. In particular, if X_n → X in probability and |X_n| ≤ Y for some Y ∈ L^1, then X_n → X in L^1.

Sub-martingale Equivalencies Theorem: For a sub-martingale (X_n)_{n≥0}, TFAE:
- (X_n)_{n≥0} is UI;
- X_n converges a.s. and in L^1;
- X_n converges in L^1.

If (X_n)_{n≥0} is a martingale, then these are also equivalent to:

- there exists an integrable random variable X so that X_n = E[X|F_n].

Lemma D5.5.4: If integrable random variables X_n → X in L^1, then E[X_n 1_A] → E[X 1_A]. Exercises D5.5.2-4.

Theorem D5.5.8 (Levy's 0-1 Law): Suppose that F_n ↑ F_∞ := σ(∪_n F_n) and A ∈ F_∞. Then E[1_A|F_n] → 1_A a.s.

Levy's Forward Law: Suppose that F_n ↑ F_∞ := σ(∪_n F_n). If X ∈ L^1, then E[X|F_n] → E[X|F_∞] a.s. and in L^1.

Kolmogorov's 0-1 Law: Let ξ_1, ξ_2, … be independent random variables and let F_n = σ(ξ_1, …, ξ_n) for each n. Let T := ∩_{k=1}^∞ σ(ξ_k, ξ_{k+1}, …) be the tail σ-field. Then for any A ∈ T, P(A) ∈ {0, 1}. Exercises D5.5.5-7

Theorem D5.5.9 (Dominated Convergence for Converging σ-Algebras): Suppose Y_n → Y a.s. and |Y_n| ≤ Z for all n, where E[Z] < ∞. If F_n ↑ F_∞, then E[Y_n|F_n] → E[Y|F_∞] a.s.

Exercise D5.5.8: Show that if F_n ↑ F_∞ and Y_n → Y in L^1, then E[Y_n|F_n] → E[Y|F_∞] in L^1.

Backward Martingale

Let (F_{-n})_{n≥0} be a sequence of sub-σ-fields with … ⊂ F_{-2} ⊂ F_{-1} ⊂ F_0. A sequence of random variables (X_{-n})_{n≥0} is said to be a backward (or reverse) martingale if:
- X_{-n} ∈ F_{-n} for each n ∈ Z_+;
- X_{-n} ∈ L^1 for each n ∈ Z_+;
- E[X_{-n}|F_{-(n+1)}] = X_{-(n+1)} for each n ∈ Z_+.

Clearly E[X_0|F_{-n}] = X_{-n} for each n ∈ Z_+. Hence, if (X_{-n})_{n∈Z_+} is a reverse martingale, then it is UI.

Convergence of Reverse Martingales Theorem: Let (X_{-n})_{n≥0} be a reverse martingale. Then X_{-n} → X_{-∞} a.s. and in L^1 as n → ∞. Moreover, E[X_0|F_{-∞}] = X_{-∞}, where F_{-∞} = ∩_{n∈Z_+} F_{-n}.

Exercise D5.6.1: Show that if X_0 ∈ L^p, the convergence X_{-n} → X_{-∞} occurs in L^p.

Levy's Backward Law: Let Y ∈ L^1. Suppose that there is a decreasing sequence of σ-fields G_0 ⊃ G_1 ⊃ G_2 ⊃ … and G_∞ := ∩_{n=0}^∞ G_n. Then E[Y|G_n] → E[Y|G_∞] a.s. and in L^1.

Exercise D5.6.2: Prove the backwards analog of Theorem D5.5.9: suppose Y_n → Y_{-∞} a.s. as n → -∞ and |Y_n| ≤ Z a.s. where E[Z] < ∞; if F_n ↓ F_{-∞}, then E[Y_n|F_n] → E[Y_{-∞}|F_{-∞}] a.s.

Exchangeable Sequence: We say a sequence of random variables X_1, X_2, … is exchangeable if for each n ≥ 1 we have (X_1, X_2, …, X_n) =_d (X_{π(1)}, X_{π(2)}, …, X_{π(n)}) for all permutations π of {1, 2, …, n}, i.e., their joint law is invariant under any finite permutation of coordinates.
- Exchangeable ⇒ identically distributed (but not the reverse).
- IID ⇒ exchangeable (but not the reverse).

de Finetti's Theorem: If X_1, X_2, … are exchangeable, then conditional on the exchangeable σ-field E, the X_1, X_2, … are iid.

Lemma D5.6.4 (Consistency of U-Statistics, used in proof of the HW law): Suppose X_1, X_2, … are i.i.d. and f : R^k → R is bounded. Then A_n(f) := (1/(n)_k) Σ_i f(X_{i_1}, X_{i_2}, …, X_{i_k}) → E[f(X_1, X_2, …, X_k)] a.s. and in L^1. For example, if X_1, X_2, … is an iid sequence with finite mean, then (1/(n(n-1))) Σ_{i≠j≤n} |X_i - X_j| → E|X_1 - X_2| a.s. and in L^1.

Theorem D5.6.6: If X_1, X_2, … are exchangeable and take values in {0, 1}, then there is a probability distribution F(θ) on [0, 1] so that P(X_1 = 1, …, X_k = 1, X_{k+1} = 0, …, X_n = 0) = ∫_0^1 θ^k (1 - θ)^{n-k} dF(θ). Exercises D5.6.3-5

Optional Stopping Theorem

Let (Ω, F, (F_n)_{n≥0}, P) be a filtered probability space and let T be a stopping time. We denote by F_T the σ-field of "events which have occurred prior to time T." In symbols: F_T := {A ∈ F : A ∩ {T ≤ n} ∈ F_n, ∀n ≥ 0}. In this definition, the event {T ≤ n} can be replaced by {T = n}.

Optional Stopping Proposition:

i) If T is a stopping time, then F_T is a σ-field and T is F_T-measurable.

ii) If S ≤ T are stopping times, then F_S ⊂ F_T.

iii) Let T be a stopping time with P(T < ∞) = 1 and let (X_n)_{n≥0} be an adapted sequence. Then X_T ∈ F_T.

UI Sub-martingale Stopping Time Closure: If (X_n)_{n≥0} is a UI sub-martingale, then for any stopping time T, (X_{T∧n})_{n≥0} is UI.

Theorem D5.7.2: If E|X_T| < ∞ and X_n 1_{T>n} is UI, then X_{T∧n} is UI.

Theorem D5.7.3 (see Prop. 5.24): If X_n is a uniformly integrable sub-martingale, then for any stopping time T ≤ ∞ we have E[X_0] ≤ E[X_T] ≤ E[X_∞], where X_∞ = lim X_n.

Optional Stopping Theorem for Submartingales: If S and T are stopping times with P(S ≤ T < ∞) = 1 and (X_{T∧n})_{n≥0} is a uniformly integrable sub-martingale, then E[X_T|F_S] ≥ X_S a.s. Consequently, E[X_S] ≤ E[X_T].

Optional Stopping Theorem for Martingales: If S and T are stopping times with P(S ≤ T < ∞) = 1 and (X_{T∧n})_{n≥0} is a uniformly integrable martingale, then E[X_T|F_S] = X_S a.s. Consequently, E[X_S] = E[X_T].

Theorem D5.7.5 (a generalization of Wald): Suppose X_n is a sub-martingale and E[|X_{n+1} - X_n| | F_n] ≤ B a.s. If T is a stopping time with E[T] < ∞, then X_{T∧n} is uniformly integrable and hence E[X_T] ≥ E[X_0].

Nonnegative Supermartingale Stopping Time Theorem D5.7.6: If X_n is a nonnegative super-martingale and T ≤ ∞ is a stopping time, then E[X_0] ≥ E[X_T], where X_∞ = lim X_n, which exists by Theorem D5.2.9.

Theorem D5.7.7 (Asymmetric Simple Random Walk): Let ξ_1, ξ_2, … be iid random variables with P(ξ_i = 1) = p and P(ξ_i = -1) = q ≡ 1 - p, p ≠ q, and let S_n := ξ_1 + … + ξ_n. Without loss of generality, assume 1/2 < p < 1.
a) If f(x) = ((1-p)/p)^x, then f(S_n) is a martingale.
b) If we let T_x = inf{n : S_n = x}, then for a < 0 < b: P(T_a < T_b) = (f(b) - f(0))/(f(b) - f(a)).
c) If a < 0, then P(min_n S_n ≤ a) = P(T_a < ∞) = ((1-p)/p)^{-a}.
d) If b > 0, then P(T_b < ∞) = 1 and E[T_b] = b/(2p - 1).

Exercise D5.7.2: Let S_n be an asymmetric simple random walk with 1/2 < p < 1 and let σ^2 = 4pq. Use the fact that X_n = (S_n - (p - q)n)^2 - σ^2 n is a martingale to show Var(T_b) = b σ^2 / (p - q)^3. Exercises D5.7.3-9

Random Durrett Exercises

D5.1.3: Prove Chebyshev's inequality for conditional expectations.
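Part b) is easy to test by simulation; a sketch with p = 0.6 and the barriers a = -3, b = 3 (illustrative values):

```python
import random

rng = random.Random(3)
p, a, b = 0.6, -3, 3
f = lambda x: ((1 - p) / p) ** x          # f(S_n) is a martingale when p != 1/2

def hits_a_first():
    """Run the walk from 0 until it hits a or b; report whether a came first."""
    s = 0
    while a < s < b:
        s += 1 if rng.random() < p else -1
    return s == a

est = sum(hits_a_first() for _ in range(20_000)) / 20_000
exact = (f(b) - f(0)) / (f(b) - f(a))     # P(T_a < T_b) from part b)
```

With these values the exact probability is about 0.229, and the Monte Carlo estimate should match it to within a couple of standard errors.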

D5.1.4: Suppose X ≥ 0 and E[X] = ∞. (There is nothing to prove when E[X] < ∞.) Show there is a unique F-measurable Y with 0 ≤ Y ≤ ∞ such that ∫_A X dP = ∫_A Y dP for all A ∈ F. Hint: let X_M = X ∧ M, Y_M = E[X_M|F], and let M → ∞.

D5.1.5: Imitate the proof in the remark after Theorem 1.5.2 to prove the conditional Cauchy-Schwarz inequality: E[XY|G]^2 ≤ E[X^2|G] E[Y^2|G].

D5.1.6: Give an example on Ω = {a, b, c} in which E[E[X|F_1]|F_2] ≠ E[E[X|F_2]|F_1].

D5.1.7: Show that when E|X|, E|Y|, and E|XY| are finite, each statement implies the next one, and give examples with X, Y ∈ {-1, 0, 1} a.s. that show the reverse implications are false: i) X and Y are independent; ii) E[Y|X] = E[Y]; iii) E[XY] = E[X]E[Y].

D5.1.8: Show that if G ⊂ F and E[X^2] < ∞, then E[{X - E[X|F]}^2] + E[{E[X|F] - E[X|G]}^2] = E[{X - E[X|G]}^2]. [If we drop the 2nd term on the left, we get an inequality that says, geometrically, the larger the subspace the closer the projection is, or, statistically, more information means a smaller mean square error.]

D5.1.9: Let Var(X|F) = E[X^2|F] - E[X|F]^2. Show that Var(X) = E[Var(X|F)] + Var(E[X|F]).

D5.1.10: Let Y_1, Y_2, … be iid with mean μ and variance σ^2, let N be an independent positive integer-valued random variable with E[N^2] < ∞, and let X = Y_1 + … + Y_N. Show that Var(X) = σ^2 E[N] + μ^2 Var(N). To understand and help remember the formula, think about the two special cases in which N or Y is constant.

D5.1.11: Show that if X and Y are random variables with E[Y|G] = X and E[Y^2] = E[X^2] < ∞, then X = Y a.s.

D5.2.4: Give an example of a martingale X_n with X_n → -∞ a.s. Hint: let X_n = ξ_1 + … + ξ_n, where the ξ_i are independent (but not identically distributed) with E[ξ_i] = 0.

D5.3.5: Let p_m ∈ [0, 1). Use the Borel-Cantelli lemmas to show that ∏_{m=1}^∞ (1 - p_m) = 0 if and only if Σ_{m=1}^∞ p_m = ∞.

D5.3.6: Show that Σ_{n=2}^∞ P(A_n | ∩_{m=1}^{n-1} A_m^c) = ∞ implies P(∩_{m=1}^∞ A_m^c) = 0.

D5.3.10: Galton and Watson, who invented the process that bears their names, were interested in the survival of family names. Suppose each family has exactly 3 children, but coin flips determine their sex. In the eighteen hundreds, only male children kept the family name, so following the male offspring leads to a branching process with p_0 = 1/8, p_1 = 3/8, p_2 = 3/8, p_3 = 1/8. Compute the probability ρ that the family name will die out when Z_0 = 1.
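For this offspring law the generating function is φ(s) = (1 + 3s + 3s^2 + s^3)/8 = ((1+s)/2)^3, and ρ is its smallest fixed point in [0, 1]. Iterating s ← φ(s) from 0 converges to it; the closed form comes from factoring s^3 + 3s^2 - 5s + 1 = (s - 1)(s^2 + 4s - 1), which gives ρ = √5 - 2 ≈ 0.236. A sketch:

```python
# Extinction probability for p0 = 1/8, p1 = 3/8, p2 = 3/8, p3 = 1/8.
def phi(s):
    """Offspring generating function phi(s) = ((1 + s)/2)^3."""
    return (1 + 3 * s + 3 * s ** 2 + s ** 3) / 8

rho = 0.0
for _ in range(200):        # s <- phi(s) converges to the smallest fixed point
    rho = phi(rho)
# closed form: rho = 5**0.5 - 2
```

The iteration converges monotonically upward because φ is increasing and φ(0) > 0.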

D5.4.1: Show that if j ≤ k, then E[X_j 1_{T=j}] ≤ E[X_k 1_{T=j}], and sum over j to get a second proof of E[X_T] ≤ E[X_k].

D5.4.2: Generalize the proof of Theorem 5.4.1 to show that if X_n is a sub-martingale and M ≤ T are stopping times with P(T ≤ k) = 1, then E[X_M] ≤ E[X_T].

D5.4.3: Use the stopping times from Exercise 4.1.7 to strengthen the conclusion of the previous exercise to E[X_T|F_M] ≥ X_M.

Durrett Chapter 6 (Markov Chains)

Markov Chain: Given a filtration (F_n)_{n≥0}, an (F_n)_{n≥0}-adapted sequence (X_n)_{n≥0} taking values in (S, 𝒮) is called a Markov chain if it has the Markov property:

P(X_{n+1} ∈ B|F_n) = P(X_{n+1} ∈ B|X_n) a.s. for each B ∈ 𝒮, n ≥ 0.

Markov Chain Transition Probability: A set function p : S × 𝒮 → [0, 1] is said to be a transition probability if: i) for each x ∈ S, B ↦ p(x, B) is a probability measure on (S, 𝒮); ii) for each B ∈ 𝒮, x ↦ p(x, B) is an 𝒮-measurable function. We say a Markov chain (X_n)_{n≥0} has transition probabilities (p_n)_{n≥0} if P(X_{n+1} ∈ B|F_n) = p_n(X_n, B) almost surely for each n ≥ 0 and B ∈ 𝒮.

Transition Matrix: On a countable state space, the probability of moving from i to j in one time step is P(j|i) =: p_{ij}; putting these into a matrix gives the transition matrix p = [p_{ij}].

Time-Homogeneous Markov Chain: A Markov chain in which the transition probabilities are all the same: p_n = p for all n ≥ 0.

Exercise 6.4: Let p be a transition probability on S × 𝒮 and μ a probability measure on (T, 𝒯). Then there exists a unique probability measure λ on (T × S, 𝒯 ⊗ 𝒮) such that λ(A × B) = ∫_A μ(dx_0) p(x_0, B) for all A ∈ 𝒯, B ∈ 𝒮.

Theorem 6.5 (Existence of Markov Chain ): Let Ωpn ænì0 be a sequence of transition probabilities and S be a probability Z+ Z+ Z+ measure on ΩS, SSSæ. Then there exists a unique probability measure PS on ΩS , SSS æ such that if X 5 S , the coordinate Z+ maps XnΩxæ : xn : S î S form a Markov chain with respect to its canonical ]-field ΩFFFn ænì0 and such that for each B 5 SSS:

i) PSΩX0 5 Bæ : SΩBæ, ii) PSΩXn+1 5 B|FFFn æ : pnΩXn,Bæ PS- a.s.

Proposition 6.6: If (X_n)_{n≥0} is a Markov chain with transition probabilities (p_n)_{n≥0} and initial distribution μ, then the finite dimensional distributions are given by EQ 37 in the class notes, i.e.,
P(X₀ ∈ A₀, X₁ ∈ A₁, …, X_k ∈ A_k) = ∫_{A₀} μ(dx₀) ∫_{A₁} p₀(x₀, dx₁) … ∫_{A_k} p_{k−1}(x_{k−1}, dx_k).

Useful in proving this proposition is the following lemma. Lemma 6.7: Under the Markov setup, for each n ≥ 0 and for any bounded measurable function f : S → ℝ, we have E[f(X_{n+1}) | 𝓕_n] = ∫ p_n(X_n, dy) f(y) a.s.

Theorem D6.1.3 (Monotone Class Theorem): Let 𝒜 be a π-system that contains Ω, and let ℋ be a collection of real-valued functions that satisfies:

i) If A ∈ 𝒜, then 1_A ∈ ℋ. ii) If f, g ∈ ℋ, then f + g ∈ ℋ and cf ∈ ℋ for any real number c.

iii) If f_n ∈ ℋ are nonnegative and increase to a bounded function f, then f ∈ ℋ. Then ℋ contains all bounded functions measurable with respect to σ(𝒜).

Proposition 6.8: Let (X_n)_{n≥0} be a Markov chain on a countable set S with transition matrix p and initial distribution μ. Then:

a) P(X₀ = i₀, X₁ = i₁, …, X_n = i_n) = μ(i₀) p₀(i₀, i₁) … p_{n−1}(i_{n−1}, i_n),
b) P(X_n = j | X₀ = i) = (p^n)(i, j),
c) P(X_n = j) = ∑_{i∈S} μ(i) (p^n)(i, j).
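Parts b) and c) are plain matrix algebra and can be checked directly: p^n is the n-fold matrix product and the distribution of X_n is the row vector μp^n. A small sketch, again with a hypothetical 2-state matrix (`mat_pow` and `mu` are our names):

```python
def mat_mul(a, b):
    """Multiply two matrices given as nested lists."""
    n, m, k = len(a), len(b[0]), len(b)
    return [[sum(a[i][t] * b[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def mat_pow(p, n):
    """Start from the identity and multiply n times: p^n."""
    r = [[float(i == j) for j in range(len(p))] for i in range(len(p))]
    for _ in range(n):
        r = mat_mul(r, p)
    return r

p = [[0.9, 0.1], [0.5, 0.5]]   # hypothetical transition matrix
mu = [1.0, 0.0]                # initial distribution concentrated at state 0

p3 = mat_pow(p, 3)             # p^3(i, j) = P(X_3 = j | X_0 = i)
dist3 = [sum(mu[i] * p3[i][j] for i in range(2)) for j in range(2)]
# dist3 = [0.844, 0.156], the distribution of X_3 started from state 0
```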

Section D6.2 (Examples): Examples and Exercises D6.2.2-9.

Exercise D6.2.4: Let U₀, U₁, … be iid ∈ {H, T}, taking each value with probability 1/2. Show that X_n = (U_n, U_{n+1}) is a Markov chain and compute its transition probability p. What is p²?

Exercise D6.2.8: Let U₀, U₁, … be iid ∈ {−1, 1}, taking each value with probability 1/2. Let S₀ = 0, S_n = U₁ + … + U_n, and X_n = max{S_m : 0 ≤ m ≤ n}. Show that X_n is not a Markov chain.

D6.3 - Strong Markov Property

Let (X_n)_{n≥0} be an S-valued (time homogeneous) Markov chain with initial distribution μ. By Theorem 6.5, we may assume the X_n are the coordinate maps on the space (S^{Z_+}, 𝒮^{Z_+}, P_μ). Let 𝓕_n be the σ-field generated by X₀, …, X_n. Define the shift operator θ : S^{Z_+} → S^{Z_+} by θ(x₀, x₁, …) = (x₁, x₂, …). So: (θ^k(x))_n = x_{k+n} for n, k ≥ 0.

Theorem 6.14 (Strengthened Markov Property): Let (X_n)_{n≥0} be as above. For any bounded measurable function f : S^{Z_+} → ℝ and any k ≥ 0, E_μ[f ∘ θ^k | 𝓕_k] = E_{X_k}[f] P_μ-a.s. Above, E_μ is the expectation operator associated with the probability measure P_μ, and E_{x₀} denotes the case μ = δ_{x₀}.

Exercise 6.15: From the above theorem, deduce that:
P((X_{k+1}, X_{k+2}, …) ∈ B, (X₀, X₁, …, X_{k−1}) ∈ A | X_k) = P((X_{k+1}, X_{k+2}, …) ∈ B | X_k) · P((X₀, X₁, …, X_{k−1}) ∈ A | X_k).

Exercise D6.3.1: Use the Markov property to show that if A ∈ σ(X₀, …, X_n) and B ∈ σ(X_n, X_{n+1}, …), then for any initial distribution μ we have P_μ(A ∩ B | X_n) = P_μ(A | X_n) P_μ(B | X_n). In words, the past and the future are conditionally independent given the present. Hint: write the left-hand side as E_μ(E_μ(1_A 1_B | 𝓕_n) | X_n).

Theorem D6.3.2 (Chapman-Kolmogorov Equation): P_x(X_{m+n} = z) = ∑_y P_x(X_m = y) P_y(X_n = z). (Short proof.)

Theorem D6.3.3: Let X_n be a Markov chain and suppose P(∪_{m=n+1}^∞ {X_m ∈ B_m} | X_n) ≥ δ > 0 on {X_n ∈ A_n}. Then P({X_n ∈ A_n i.o.} − {X_n ∈ B_n i.o.}) = 0. Intuitive meaning of this theorem: "If the chance of a pedestrian being run over is greater than δ > 0 each time he crosses a certain street, then he will not be crossing it indefinitely (since he will be killed first)!" Exercise D6.3.2.

Absorbing: A state a is called absorbing if P_a(X₁ = a) = 1.

Theorem 6.16 (Strong Markov Property): Let (X_n)_{n≥0} be as above. For any bounded measurable function f : S^{Z_+} → ℝ and for any stopping time T, E_μ[f ∘ θ^T | 𝓕_T] = E_{X_T}[f] on {T < ∞}, P_μ-a.s.

Theorem D6.3.5 (Reflection Principle): Let U₁, U₂, … be independent and identically distributed with a distribution that is symmetric about 0. Let S_n = U₁ + … + U_n. If a > 0, then P(sup_{m≤n} S_m > a) ≤ 2 P(S_n > a). Exercises D6.3.3-12.

Exercise D6.3.7: Let X_n be a Markov chain with S = {0, 1, …, N} and suppose that X_n is a martingale and P_x(τ₀ ∧ τ_N < ∞) > 0 for all x. i) Show that 0 and N are absorbing states, that is, p(0,0) = p(N,N) = 1.

Exercise D6.3.10: Let τ_A = inf{n ≥ 0 : X_n ∈ A} and g(x) = E_x[τ_A]. Suppose that S − A is finite and for each x ∈ S − A, P_x(τ_A < ∞) > 0. i) Show that g(x) = 1 + ∑_y p(x, y) g(y) for all x ∉ A. (D)
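The reflection principle bound is easy to check by Monte Carlo for the simple ±1 walk, which is a special case of the symmetric step distributions in Theorem D6.3.5 (the walk length, level a, and trial count below are arbitrary choices):

```python
import random

# Monte Carlo check of P(sup_{m<=n} S_m > a) <= 2 P(S_n > a) for a
# symmetric +-1 walk; n, a, and the trial count are arbitrary choices.
rng = random.Random(1)
n, a, trials = 100, 5, 10_000
hit_max = hit_end = 0
for _ in range(trials):
    s, smax = 0, 0
    for _ in range(n):
        s += rng.choice((-1, 1))
        smax = max(smax, s)
    hit_max += smax > a   # running maximum exceeded a
    hit_end += s > a      # endpoint exceeded a
# Empirically, hit_max / trials stays below 2 * hit_end / trials.
```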

Recurrence and Transience

Let X₀ be any state. Let T_y^0 := 0 and, for k ≥ 1, T_y^k := inf{n > T_y^{k−1} : X_n = y}, the time of the kth visit to y (not counting a possible visit at time 0). Let ρ_yz := P_y(T_z < ∞), the probability you'll get to z from y in finite time.

Theorem 6.17: For k ≥ 1, P_y(T_z^k < ∞) = ρ_yz ρ_zz^{k−1}.

Exercise 6.18 (iid cycles): Suppose y ∈ S is such that ρ_yy = 1. Let R_k = T_y^k be the time of the kth return to y, and for k ≥ 1 let r_k = R_k − R_{k−1} be the kth inter-arrival time. Use the strong Markov property to conclude that under P_y, the vectors v_k = (r_k, X_{R_{k−1}}, …, X_{R_k − 1}), k ≥ 1, are i.i.d.

Recurrent: A state y ∈ S is called recurrent if ρ_yy = 1, and transient if ρ_yy < 1.

By Theorem 6.17: If y is recurrent, then P_y(X_n = y i.o.) = lim_{k→∞} P_y(T_y^k < ∞) = lim_k ρ_yy^k = 1. If y is transient, then P_y(X_n = y i.o.) = lim_k ρ_yy^k = 0.

Total Visits N(y): Let the total number of visits to y by the Markov chain X_n be notated as N(y) := ∑_{n=1}^∞ 1_{X_n = y}.

Lemma 6.20: For any x, y ∈ S, we have:
i) P_x(N(y) = k) = ρ_xy ρ_yy^{k−1} (1 − ρ_yy),
ii) E_x[N(y)] = ρ_xy / (1 − ρ_yy) = ∑_{n=1}^∞ p^n(x, y) (where we interpret 0/0 = 0 and c/0 = +∞ for c > 0).

Recurrence Corollary: A state x ∈ S is recurrent if and only if E_x[N(x)] = ∑_{n=1}^∞ p^n(x, x) = ∞. Exercises D6.4.2-3.

Communication

A state x leads to another state y ≠ x (equivalently, y is accessible from x), denoted by x → y, if ρ_xy > 0 (or equivalently, p^n(x, y) > 0 for some n ≥ 1). Formally, state y is accessible from state x if there exists an integer n_xy ≥ 0 such that P(X_{n_xy} = y | X₀ = x) = p_xy^{(n_xy)} > 0. This integer is allowed to be different for each pair of states, hence the subscripts. Allowing n to be zero means that every state is accessible from itself by definition: x → x. The accessibility relation is reflexive and transitive, but not necessarily symmetric. A pair of states x and y are said to communicate, denoted by x ↔ y, if x → y and y → x.

Communicating Class: "↔" is an equivalence relation. Therefore, there is a partition C₁, C₂, … of S, with each block C_i being referred to as a communicating class.

Irreducible Subset: A subset A ⊆ S is called irreducible if x ↔ y for all x, y ∈ A. By definition, each class is irreducible. A Markov chain is said to be irreducible if it is possible to get to any state from any state. More formally, a Markov chain is said to be irreducible if its state space is a single communicating class, i.e., x ↔ y for all x, y ∈ S.
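Communicating classes can be computed mechanically from a transition matrix: y belongs to the class of x iff each state is reachable from the other through positive-probability edges. A sketch (the 4-state matrix is a hypothetical example, not from Durrett):

```python
from collections import deque

def reachable(adj, s):
    """States accessible from s (n = 0 allowed, so s reaches itself)."""
    seen, queue = {s}, deque([s])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return seen

def communicating_classes(p):
    n = len(p)
    adj = [[j for j in range(n) if p[i][j] > 0] for i in range(n)]
    fwd = [reachable(adj, x) for x in range(n)]
    classes, assigned = [], set()
    for x in range(n):
        if x in assigned:
            continue
        cls = {y for y in fwd[x] if x in fwd[y]}  # x -> y and y -> x
        classes.append(sorted(cls))
        assigned |= cls
    return classes

# Hypothetical 4-state chain: {0,1} communicate, and so do {2,3};
# mass leaks from {0,1} into {2,3} but never comes back.
p = [[0.5, 0.5, 0.0, 0.0],
     [0.5, 0.0, 0.5, 0.0],
     [0.0, 0.0, 0.0, 1.0],
     [0.0, 0.0, 1.0, 0.0]]
```

Here `communicating_classes(p)` returns [[0, 1], [2, 3]]; the class {2, 3} is closed, while {0, 1} is not.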

Proposition 6.23: If x is recurrent and ρ_xy > 0, then: i) ρ_yx = 1, ii) y is recurrent, iii) ρ_xy = 1.

Exercise D6.4.4: Use the strong Markov property to show that ρ_xz ≥ ρ_xy ρ_yz.

Closed Subset of States: We call a subset of states A ⊆ S closed if ρ_xy = 0 for all x ∈ A and y ∉ A.

Lemma 6.24: A recurrent class C is closed. Lemma 6.25: If C is a finite closed set, then it contains at least one recurrent state; in particular, a finite closed class C is recurrent. Lemma 6.26: In a finite-state Markov chain, a class is recurrent (respectively transient) if and only if it is closed (respectively not closed). Exercise D6.4.5.

Theorem D6.4.5 (Decomposition Theorem, motivated by Example D6.4.1): Let R = {x : ρ_xx = 1} be the recurrent states of a Markov chain. R can be written as ∪_i R_i, where each R_i is closed and irreducible. [This result shows that for the study of recurrent states we can, without loss of generality, consider a single irreducible closed set.]

Birth and Death Chains on ℕ

Let N := inf{n : X_n = 0}. Let p(i, i+1) := p_i, p(i, i−1) := q_i, p(i, i) := r_i, where q₀ = 0. Define the function f so that f(X_{N∧n}) is a martingale. Set f(0) = 0 and f(1) = 1, and note that for the martingale property to hold when X_n = k ≥ 1, we need f(k) = p_k f(k+1) + r_k f(k) + q_k f(k−1). Using r_k = 1 − (p_k + q_k), we rewrite this as f(k+1) − f(k) = (q_k / p_k)(f(k) − f(k−1)). Suppose that p_k, q_k > 0 for k ≥ 1, so that the chain is irreducible. We find that f(m+1) − f(m) = ∏_{j=1}^m (q_j / p_j) for m ≥ 1 and f(n) = ∑_{m=0}^{n−1} ∏_{j=1}^m (q_j / p_j) for n ≥ 1.

Theorem D6.4.6: Let T_c = inf{n ≥ 1 : X_n = c}. If a < x < b, then P_x(T_a < T_b) = (f(b) − f(x)) / (f(b) − f(a)) and P_x(T_b < T_a) = (f(x) − f(a)) / (f(b) − f(a)).

Theorem D6.4.7: 0 is recurrent if and only if f(M) → ∞ as M → ∞, that is, f(∞) ≡ ∑_{m=0}^∞ ∏_{j=1}^m (q_j / p_j) = ∞. If f(∞) < ∞, then P_x(T₀ = ∞) = f(x) / f(∞).
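The function f from the birth-and-death computation is straightforward to evaluate numerically. A sketch using the symmetric case p_k = q_k = 1/2, where f(n) = n and Theorem D6.4.6 reduces to the familiar ratio (b − x)/(b − a):

```python
def f(n, p, q):
    """f(n) = sum_{m=0}^{n-1} prod_{j=1}^{m} q_j/p_j, with f(0)=0, f(1)=1."""
    total, prod = 0.0, 1.0
    for m in range(n):
        if m >= 1:
            prod *= q(m) / p(m)
        total += prod
    return total

# Symmetric walk p_k = q_k = 1/2: f(n) = n, so
# P_x(T_a < T_b) = (f(b) - f(x)) / (f(b) - f(a)) = (b - x) / (b - a).
p_sym = lambda k: 0.5
q_sym = lambda k: 0.5
fa, fx, fb = f(2, p_sym, q_sym), f(5, p_sym, q_sym), f(10, p_sym, q_sym)
prob_hit_a_first = (fb - fx) / (fb - fa)   # (10 - 5) / (10 - 2) = 0.625
```

Plugging in asymmetric p_k, q_k gives the general hitting probabilities of Theorem D6.4.6 with no extra work.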

Theorem D6.4.8: Suppose S is irreducible, and f ≥ 0 with E_x f(X₁) ≤ f(x) for x ∉ F, a finite set, and f(x) → ∞ as x → ∞ (that is, {x : f(x) ≤ M} is finite for any M < ∞). Then the chain X_n is recurrent. Exercises D6.4.7-10.

Stationary Measures

Stationary/Invariant Measure μ: (invariant) μp = μ, i.e., μ(y) = ∑_{x∈S} μ(x) p(x, y). (That is, μ is a left eigenvector of p with eigenvalue 1.)

The last equation says P_μ(X₁ = y) = μ(y). Using the Markov property and induction, it follows that P_μ(X_n = y) = μ(y) for all n ≥ 1. If μ is a probability measure, we call μ a stationary distribution, and it represents a possible equilibrium for the chain.

Stationary/Invariant Distribution π:

i) (probability distribution) ∑_{x∈S} π(x) = 1,

ii) (invariant) πp = π, i.e., π(y) = ∑_{x∈S} π(x) p(x, y). (That is, π is a left eigenvector of p.) Exercises D6.5.1-2.

Reversible Measure: A measure μ that satisfies μ(x) p(x, y) = μ(y) p(y, x) for all x, y.

Theorem D6.5.1: Suppose p is irreducible. A necessary and sufficient condition for the existence of a reversible measure is that i) p(x, y) > 0 implies p(y, x) > 0, and ii) for any loop x₀, x₁, …, x_n = x₀ with ∏_{1≤i≤n} p(x_i, x_{i−1}) > 0, we have ∏_{1≤i≤n} p(x_{i−1}, x_i) / p(x_i, x_{i−1}) = 1.

Theorem D6.5.2: Suppose p is irreducible and recurrent. Then μ_x(y) := E_x[∑_{n=0}^{T_x − 1} 1_{X_n = y}] = ∑_{n=0}^∞ P_x(X_n = y, T_x > n) is a stationary measure. Exercise D6.5.3.
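For a finite chain, a stationary distribution can be approximated by simply iterating π ← πp (power iteration). A sketch with a hypothetical 2-state matrix, where the exact answer also follows from detailed balance (every 2-state chain is reversible):

```python
def stationary(p, iters=500):
    """Approximate the stationary distribution by iterating pi <- pi p."""
    n = len(p)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * p[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical 2-state chain; detailed balance pi(0) p(0,1) = pi(1) p(1,0)
# gives pi(0) * 0.1 = pi(1) * 0.5, so pi = (5/6, 1/6) exactly.
p = [[0.9, 0.1],
     [0.5, 0.5]]
pi = stationary(p)
```

Power iteration converges here because the second eigenvalue of p has modulus strictly less than 1.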

Time in y Between Returns to x: Define μ_x(y) as the expected time spent in y between consecutive visits to x.

Exercise 6.42 (Irreducible Transient Chain with no Stationary Measure): Consider a Markov chain on Z_+ with transition matrix p given by p(0, 1) = 1; p(i, i+1) = 1 − (1/2)^i for all i ≥ 1; p(1, 0) = 1/2; and p(i, i−1) = (1/4)^i, p(i, 0) = (1/2)^i − (1/4)^i for all i ≥ 2. Show that this Markov chain is irreducible and transient, and does not have any nontrivial stationary measure.

Lemma 6.43: Suppose that a Markov chain is irreducible and recurrent. Let μ be a stationary measure with μ(y) > 0 for all y ∈ S. If ν is another stationary measure, then μ = cν for some c > 0. Exercises D6.5.4-7.

Exercise D6.5.5: Show that if p is irreducible and recurrent, then μ_x(y) μ_y(z) = μ_x(z).

Positive Recurrent: E_x[T_x] = ∑_{n=1}^∞ n P(T_x = n) = ∑_{y∈S} μ_x(y) < ∞, which in particular implies P_x(T_x < ∞) = 1.

Null-Recurrent: A state x ∈ S is said to be null recurrent if P_x(T_x < ∞) = 1, but E_x[T_x] = ∞.

If {X_n} is recurrent but not null recurrent, then it is called positive recurrent. A state j is called positive recurrent if the expected time to return to j, given that the chain started in state j, is finite: E[τ_jj] < ∞. A positive recurrent state j is always recurrent: if E[τ_jj] < ∞, then f_j = P(τ_jj < ∞) = 1. But the converse is not true: a recurrent state need not be positive recurrent. A recurrent state j for which E[τ_jj] = ∞ is called null recurrent.

To test whether a state is positive-recurrent or null-recurrent, we compute the mean return time: if E_x[T_x] = ∑_{n=1}^∞ n P_x(T_x = n) = ∞, the state is null-recurrent. If a chain is finite and irreducible, then there exists a unique stationary/invariant distribution π, and the chain is positive recurrent.

If {X_n} is irreducible, aperiodic, and positive recurrent, then for every x, y ∈ S, lim_{n→∞} p^n(x, y) = π(y) > 0, where π : S → [0, 1] is the stationary/invariant distribution.

Uniqueness Corollary: An irreducible, positive recurrent Markov chain has a unique stationary/invariant distribution π.

Stationary/Invariant Corollaries: For an irreducible and recurrent chain, the following are true. a) The stationary/invariant measures are unique up to constant multiples. b) If μ is a stationary/invariant measure, then μ(x) > 0 for all x. c) The stationary/invariant distribution π, if it exists, is unique. d) If a stationary/invariant measure has infinite mass, then the stationary/invariant distribution π cannot exist.

Recurrence from Stationary/Invariant Distributions: If π is a stationary/invariant distribution of a Markov chain and π(x) > 0, then x is recurrent.

Calculating the Stationary/Invariant Distribution: If p is irreducible and has a stationary distribution π, then π(x) = 1 / E_x[T_x].

Exercise D6.5.8: Compute the expected number of moves it takes a knight to return to its initial position if it starts in a corner of the chessboard, assuming there are no other pieces on the board, and each time it chooses a move at random from its legal moves. (Note: a chessboard is {0, 1, …, 7}². A knight's move is L-shaped: 2 steps in one direction, followed by one step in a perpendicular direction.)

Lemma 6.51 (Thm D6.5.6): For an irreducible Markov chain, the following are equivalent. i) There exists x ∈ S that is positive recurrent. ii) There exists a stationary distribution π. iii) Every state is positive recurrent.

Exercise D6.5.9: Suppose p is irreducible and positive recurrent. Then E_x[T_y] < ∞ for all x, y. Exercises D6.5.10-13.

Theorem D6.5.7: If p is irreducible and has a stationary distribution π, then any other stationary measure is a multiple of π.

Doubly Stochastic: A probability transition matrix p_ij = P(X_{n+1} = j | X_n = i) is called doubly stochastic if ∑_i p_ij = 1 for all j and ∑_j p_ij = 1 for all i. The uniform distribution is a stationary distribution of p if and only if the probability transition matrix is doubly stochastic.

Stationary Sequence: A sequence of random variables (X_n)_{n≥0} is said to be stationary if (X_n, X_{n+1}, …) =_d (X₀, X₁, …) for each n ≥ 0, or equivalently, (X_n, X_{n+1}, …, X_{n+m}) =_d (X₀, X₁, …, X_m) for each n, m ≥ 0. Exchangeable sequences are stationary.
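The doubly stochastic criterion is a quick check in code: pushing the uniform distribution through a doubly stochastic p returns the uniform distribution, because every column sums to 1. A sketch with a hypothetical symmetric 3-state matrix:

```python
def is_doubly_stochastic(p, tol=1e-12):
    """Check that every row and every column of p sums to 1."""
    n = len(p)
    rows_ok = all(abs(sum(p[i]) - 1.0) < tol for i in range(n))
    cols_ok = all(abs(sum(p[i][j] for i in range(n)) - 1.0) < tol
                  for j in range(n))
    return rows_ok and cols_ok

# A hypothetical symmetric (hence doubly stochastic) 3-state matrix.
p = [[0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]
uniform = [1.0 / 3] * 3
image = [sum(uniform[i] * p[i][j] for i in range(3)) for j in range(3)]
# image equals uniform, since each entry is (1/3) times a column sum.
```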

A reversible measure is always stationary, since ∑_{x∈S} μ(x) p(x, y) = ∑_{x∈S} μ(y) p(y, x) = μ(y) ∑_x p(y, x) = μ(y) · 1 = μ(y).

Lemma 6.60: Consider a Markov chain (X_n)_{n≥0} started from a stationary distribution π with transition matrix p. Fix N ≥ 1 and define Y_n = X_{N−n} for n = 0, 1, …, N. Then (Y_n)_{0≤n≤N} is a time-homogeneous Markov chain with initial distribution π and transition matrix q given by q(x, y) = π(y) p(y, x) / π(x).

Convergence of Markov Chains

Number of Visits to y: Let N_n(y) := ∑_{i=1}^n 1_{X_i = y} be the number of times the chain visits y during the times {1, …, n}.

Theorem 6.62 (Asymptotic Density of Returns): Let y ∈ S be recurrent. Then lim_{n→∞} N_n(y)/n = (1 / E_y[T_y]) 1_{T_y < ∞} P_x-a.s.

Aperiodic: For a state x, let I_x := {n ≥ 1 : p^n(x, x) > 0}. Let d_x be the greatest common divisor of I_x (if I_x = ∅, then d_x := +∞). We say x has period d_x. If every state of a Markov chain has period 1, then we call the chain aperiodic.

Lemma 6.67: If x ↔ y, then d_x = d_y, i.e., periodicity is a class property. Lemma 6.68: If d_x = 1, then there exists n₀ ≥ 1 such that p^n(x, x) > 0 for all n ≥ n₀. Lemma 6.69: An irreducible aperiodic Markov chain has the following property: for each x, y ∈ S, there exists n₀ = n₀(x, y) ≥ 1 such that p^n(x, y) > 0 for all n ≥ n₀.

An irreducible aperiodic chain {X_n} is null recurrent if it is recurrent and lim_{n→∞} p^n(x, y) = 0 for all x, y ∈ S.

Theorem 6.70 (Markov Chain Convergence Theorem): Consider an irreducible, aperiodic Markov chain with stationary distribution π. Then p^n(x, y) → π(y) as n → ∞, for all x, y ∈ S. Exercises D6.6.1-7. Exercise D6.6.1: Show that if S is finite and p is irreducible and aperiodic, then there is an m so that p^m(x, y) > 0 for all x, y.

Convergence Theorem

Total Variation Distance: For two probability measures μ, ν on S, their total variation distance is given by d_TV(μ, ν) := (1/2) ∑_{x∈S} |μ(x) − ν(x)| = sup_{A⊆S} |μ(A) − ν(A)|.

Exercise 6.73: Let μ, ν be two probability measures on a countable space S. Then d_TV(μ, ν) = inf{P(X ≠ Y) : (X, Y) has a joint distribution with X ~ μ, Y ~ ν}.
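Theorem 6.70 can be watched numerically: iterate the distribution of X_n and track its total variation distance to π. A sketch reusing a hypothetical 2-state chain (its eigenvalues are 1 and 0.4, so the distance shrinks geometrically):

```python
def row_times(p, dist):
    """One step of the chain in distribution: dist <- dist p."""
    n = len(p)
    return [sum(dist[i] * p[i][j] for i in range(n)) for j in range(n)]

def tv(mu, nu):
    """Total variation distance: half the l1 distance."""
    return 0.5 * sum(abs(a - b) for a, b in zip(mu, nu))

p = [[0.9, 0.1], [0.5, 0.5]]     # hypothetical irreducible aperiodic chain
pi = [5.0 / 6, 1.0 / 6]          # its stationary distribution
dist = [1.0, 0.0]                # start at state 0
d = []
for _ in range(20):
    dist = row_times(p, dist)
    d.append(tv(dist, pi))
# d decreases geometrically toward 0: p^n(x, .) -> pi.
```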

Coupled Markov Chain: Let μ, ν be two probability measures on a countable space S, and let (X_n, Y_n)_{n≥0} be a Markov chain on the product space S × S. The chain is considered coupled if:

i) its marginal processes (X_n)_{n≥0} and (Y_n)_{n≥0} are Markov chains on S with the same transition matrix p and the initial distributions μ and ν respectively, and

ii) X_n = Y_n for n ≥ T, where T := inf{n ≥ 0 : X_n = Y_n}. The next few lemmas are used in the proof of Theorem 6.70:

Lemma 6.74 (Coupling Inequality): Let (X_n, Y_n)_{n≥0} be a coupled Markov chain as above with X₀ ~ μ and Y₀ ~ ν. Let μ_n and ν_n be the marginal distributions of X_n and Y_n respectively. Then d_TV(μ_n, ν_n) ≤ P(T > n).

Lemma 6.75: Let (X_n, Y_n)_{n≥0} be a Markov chain on S × S with transition matrix q and the initial distribution (X₀, Y₀) ~ μ ⊗ ν. Then (X_n)_{n≥0} and (Y_n)_{n≥0} are Markov chains on S with the same transition matrix p and the initial distributions μ and ν respectively.

Lemma 6.76: P_{μ⊗ν}(T < ∞) = 1.

Mixing Time of Markov Chains

Exercise 6.78: Prove that E[T] = k(n − k). (Hint: use the optional stopping theorem on the martingale (D_t² − t)_{t≥0}.)

Exercise 6.80: Show that for any C > 0, P(S > d log d + C d) ≤ e^{−C} for all d ≥ 1. Hint: P(S > d log d + C d) ≤ ∑_{i=1}^d P(the ith coupon is not picked in the first d log d + C d draws) = d (1 − 1/d)^{d log d + C d}.

Random Durrett Exercises

D6.3.2: Let D = {X_n = a for some n ≥ 1} and let h(x) = P_x(D). i) Use Theorem D6.3.3 to conclude that h(X_n) → 0 a.s. on D^c. Here, a.s. means P_μ-a.s. for any initial distribution μ. ii) Obtain the result in Exercise D5.5.5 as a special case.

D6.4.5: Show that in the Ehrenfest chain (Example D6.2.5), all states are recurrent.

D6.4.6: A gambler is playing roulette and betting $1 on black each time. The probability she wins $1 is 18/38, and the probability she loses $1 is 20/38. i) Calculate the probability that, starting with $20, she reaches $40 before losing her money. ii) Use the fact that X_n + 2n/38 is a martingale to calculate E[T₄₀ ∧ T₀].
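Part i) of D6.4.6 is the classical gambler's ruin computation: for a ±$1 chain with win probability p ≠ 1/2 and ratio r = q/p, the chance of reaching N before 0 starting from x is (r^x − 1)/(r^N − 1). A quick numerical sketch with the roulette numbers:

```python
# Classical gambler's ruin formula for a +-$1 chain with win probability
# p != 1/2: starting from x, P(reach N before 0) = (r^x - 1)/(r^N - 1),
# where r = q/p.
p_win, p_lose = 18 / 38, 20 / 38
r = p_lose / p_win                      # = 10/9
x, N = 20, 40
prob_reach_N = (r**x - 1) / (r**N - 1)  # roughly 0.108: the house edge bites
```

So even with a seemingly small edge (18/38 vs 20/38), doubling $20 before ruin happens only about 11% of the time.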

D6.5.3: Use the construction in the proof of Theorem D6.5.2 to show that μ(j) = ∑_{k≥j} f_{k+1} defines a stationary measure for the renewal chain (Example D6.2.3).

D6.5.10: Suppose p is irreducible and has a stationary measure μ with ∑_x μ(x) = ∞. Then p is not positive recurrent.

Ergodic Theory

Measure Preserving Map: Let (Ω, 𝓕, μ) be a probability space. A measurable map T : Ω → Ω is called measure preserving if for each A ∈ 𝓕 we have μ(T⁻¹(A)) = μ(A); in other words, μ ∘ T⁻¹ = μ. In this case, μ is called the invariant measure with respect to T, and (Ω, 𝓕, μ, T) is called a measure preserving system.

Invariant Event: Let (Ω, 𝓕, μ, T) be a measure preserving system. Call an event A ∈ 𝓕 invariant if T⁻¹(A) = A μ-a.s., i.e., μ(A Δ T⁻¹(A)) = 0. Collection of invariant events: ℐ := {A ∈ 𝓕 : A is invariant}.

Invariant Random Variable: A random variable X : (Ω, 𝓕) → ℝ is said to be invariant if X ∘ T = X μ-a.s.

Ergodic System: A measure preserving system (Ω, 𝓕, μ, T) is ergodic if μ(A) = 0 or 1 for each A ∈ ℐ.

Birkhoff's Ergodic Theorem: Let (Ω, 𝓕, μ, T) be a measure preserving system. Then for any X ∈ L¹(Ω, 𝓕, μ), we have (1/n) ∑_{k=0}^{n−1} X ∘ T^k → E[X | ℐ] a.s. and in L¹, as n → ∞. Moreover, when T is ergodic, we have E[X | ℐ] = E[X] a.s. since ℐ is a trivial σ-field. Consequently, (1/n) ∑_{k=0}^{n−1} X ∘ T^k → E[X] a.s. and in L¹, as n → ∞.
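Birkhoff's theorem can be illustrated with the irrational rotation T(x) = x + α (mod 1) on [0, 1), a standard example of a map that is ergodic for Lebesgue measure when α is irrational (the choice of α, starting point, and n below are arbitrary):

```python
import math

# Birkhoff averages for the irrational rotation T(x) = x + alpha (mod 1),
# ergodic for Lebesgue measure since alpha is irrational.
alpha = math.sqrt(2) - 1
X = lambda x: 1.0 if x < 0.5 else 0.0   # indicator of [0, 1/2); E[X] = 1/2

x, total, n = 0.1, 0.0, 100_000         # starting point is arbitrary
for _ in range(n):
    total += X(x)
    x = (x + alpha) % 1.0
avg = total / n
# avg is close to E[X] = 1/2, as Birkhoff's theorem predicts.
```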

Brownian Motion

Standard Brownian Motion (BM): A collection of random variables (B_t)_{t≥0} defined on a common probability space such that:

1. For any n ≥ 1 and 0 = t₀ < t₁ < … < t_n, the increments B_{t_i} − B_{t_{i−1}}, 1 ≤ i ≤ n, are independent.

2. For any s < t, B_t − B_s ~ N(0, t − s). Also, B₀ = 0 a.s.

3. For a.s. every ω ∈ Ω, the path t ↦ B_t(ω) is continuous.

Properties of BM:

• If 0 < t₁ < … < t_n, then the joint distribution of (B_{t₁}, B_{t₂}, …, B_{t_n}) is n-dimensional multivariate Gaussian with mean 0 and covariance matrix given by E[B_{t_i} B_{t_j}] = min(t_i, t_j).
• For each T > 0 and ε > 0, the Brownian path t ↦ B_t(ω) is (1/2 − ε)-Hölder continuous on [0, T]: |B_{t₀} − B_{t₁}| ≤ C |t₀ − t₁|^{1/2 − ε} a.s., for a (random) constant C.

• Scale Invariance: Fix c > 0. Define W_t := B_{ct} / √c for t ∈ [0, ∞). Then (W_t)_{t≥0} is again a standard BM.

• Negative BM: (−B_t)_{t≥0} is also a standard BM.

• Time Shift: Fix s ≥ 0 and define W_t := B_{t+s} − B_s for t ≥ 0. Then (W_t)_{t≥0} is a standard BM, independent of 𝓕_s := σ(B_u : u ≤ s).

• Time Inversion: Let W_t := t B_{1/t} if t > 0, and W₀ := 0. Then (W_t)_{t≥0} is a standard BM. Note that the W_t are jointly Gaussian with mean 0 and, for 0 < s < t, covariance given by E[W_s W_t] = st · min(1/s, 1/t) = min(s, t) = s.

• Law of Large Numbers: lim_{t→∞} B_t / t = 0 a.s.
• Nowhere Differentiability: Brownian paths are nowhere differentiable, a.s.

Let 𝓕_s⁰ := σ(B_r : 0 ≤ r ≤ s) and 𝓕_s⁺ := ∩_{t>s} 𝓕_t⁰. The 𝓕_s⁺ are right continuous: ∩_{t>s} 𝓕_t⁺ = ∩_{t>s} (∩_{u>t} 𝓕_u⁰) = ∩_{u>s} 𝓕_u⁰ = 𝓕_s⁺, and therefore allow us an "infinitesimal peek into the future." In other words, A ∈ 𝓕_s⁺ if it is in 𝓕⁰_{s+ε} for all ε > 0.

Blumenthal's 0-1 Law: If A ∈ 𝓕₀⁺, then for all x ∈ ℝ^d we have P_x(A) ∈ {0, 1}.
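The defining properties suggest a direct simulation: sum independent N(0, dt) increments. A sketch (the step count, number of paths, and seed are arbitrary choices); the sample variance of many endpoints B₁ should be close to 1, matching B_t ~ N(0, t):

```python
import random

def brownian_path(T=1.0, steps=1000, rng=None):
    """Sample BM on [0, T] by summing independent N(0, dt) increments."""
    rng = rng or random.Random()
    dt = T / steps
    b, path = 0.0, [0.0]
    for _ in range(steps):
        b += rng.gauss(0.0, dt ** 0.5)
        path.append(b)
    return path

rng = random.Random(42)
endpoints = [brownian_path(rng=rng)[-1] for _ in range(2000)]
var = sum(b * b for b in endpoints) / len(endpoints)
# var is close to t = 1, consistent with B_1 ~ N(0, 1).
```

The same samples also illustrate scale invariance: rescaling a path by (t, b) ↦ (ct, √c b) leaves its distribution unchanged.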

Difference between Convergence in Probability and Convergence Almost Surely:

For simplicity, consider the case where X = 0 and X_n is the indicator function of an event E_n. "X_n converges almost surely to 0" says that with probability 1, only finitely many of the events E_n occur. "X_n converges in probability to 0" says that the probability of the event E_n goes to 0 as n → ∞. Consider a case where, for each m, you partition the sample space into m events, each of probability 1/m, and take all these events, over all m, to form your sequence E_n. Then X_n → 0 in probability because the probabilities of the individual events go to 0, but each sample point is in infinitely many E_n (one for each m), so X_n does not go to 0 almost surely.

Markov Examples

• Dice Game: A game whose moves are determined entirely by dice is a Markov chain; indeed, an absorbing Markov chain. This is in contrast to card games such as blackjack, where the cards represent a "memory" of the past moves. Consider the probability of a certain event in the game. In the above-mentioned dice games, the only thing that matters is the current state of the board. The next state of the board depends on the current state and the next roll of the dice; it doesn't depend on how things got to their current state. In a game such as blackjack, a player can gain an advantage by remembering which cards have already been shown, so the next state (or hand) of the game is not independent of the past states.

• Multivalued Markov Chain {X_n, X_{n+1}}_{n≥1}, where the X_i ∈ {T, H} are iid with P{X_i = T} = 1/2.

• Random Walk Markov Chains: Consider a random walk on the number line where, at each step, the position (call it x) may change by +1 (to the right) or −1 (to the left) with probabilities
P(move left) = 1/2 + (1/2) · x / (c + |x|), P(move right) = 1 − P(move left), with c > 0.
For example, if the constant c equals 1, the probabilities of a move to the left at positions x = −2, −1, 0, 1, 2 are 1/6, 1/4, 1/2, 3/4, 5/6 respectively. The random walk has a centering effect that weakens as c increases. Since the probabilities depend only on the current position (value of x) and not on prior positions, this biased random walk satisfies the definition of a Markov chain. If X_n = 1, then P(X_{n+1} = 2 | 𝓕_n) = P(X_{n+1} = 2 | X_n = 1).

• Gambler's Fortune: Suppose you start with $10, and wager $1 on an unending fair coin toss indefinitely, or until you lose all your money. If X_n is the number of dollars you have after n tosses, with X₀ = 10, then {X_n : n ∈ ℕ} is a Markov process. If I know that you have $12 now, then it is expected that, with even odds, you'll have either $11 or $13 after the next toss.
The fact that the guess is not improved by the knowledge of earlier tosses showcases the Markov property, the memoryless property of a stochastic process. If X_n = c + 1, then P(X_{n+1} = c | 𝓕_n) = P(X_{n+1} = c | X_n = c + 1) = 1/2.
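The move-left probabilities quoted above for c = 1 can be reproduced in one line:

```python
# Move-left probabilities for the centering walk of the example:
# P(left | x) = 1/2 + (1/2) * x / (c + |x|), here with c = 1.
c = 1
p_left = {x: 0.5 + 0.5 * x / (c + abs(x)) for x in (-2, -1, 0, 1, 2)}
# p_left matches {-2: 1/6, -1: 1/4, 0: 1/2, 1: 3/4, 2: 5/6}.
```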

Martingale Examples

• Galton-Watson process.
• Doob's martingale transform (if (H · X)_n ∈ L¹).
• An unbiased random walk (in any number of dimensions): S_n := ∑_{i=1}^n X_i, where the X_i ∈ {−1, 1} are iid and S₀ = 0, is an example of a martingale.

E[S_{n+1} | 𝓕_n] = E[X_{n+1} + S_n | 𝓕_n] = S_n + E[X_{n+1}] = S_n + (1) P(X_{n+1} = 1) + (−1) P(X_{n+1} = −1) = S_n + 1/2 − 1/2 = S_n a.s.

• A gambler's fortune is a martingale if all the betting games which the gambler plays are fair. Suppose S_n is a gambler's fortune after n tosses of a fair coin, where the gambler wins $1 if the coin comes up heads and loses $1 if it comes up tails. The gambler's conditional expected fortune after the next trial, given the history, is equal to their present fortune; this sequence is thus a martingale: E[S_{n+1} | 𝓕_n] = S_n (same computation as above).

• Gambler's Total Gain/Loss Variance: Let Y_n := X_n² − n, where X_n is the gambler's fortune from the preceding example. The sequence {Y_n : n = 1, 2, 3, …} is a martingale. It is used to show that the gambler's total gain or loss varies roughly between plus or minus the square root of the number of steps.

• Pólya's Urn contains a number of different coloured marbles; at each iteration a marble is randomly selected from the urn and replaced along with one more of that same colour (so the total count grows by one per draw). For any given colour, the fraction of marbles in the urn with that colour is a martingale. For example, if currently 95% of the marbles are red then, though the next iteration is more likely to add red marbles than another colour, this bias is exactly balanced out by the fact that adding more red marbles alters the fraction much less significantly than adding the same number of non-red marbles would. Let S_n = ∑_{i=0}^n X_i be the number of red marbles (where X₀ is the initial number of red marbles) and T_n the total number of marbles, so T_{n+1} = T_n + 1. Then
E[S_{n+1}/T_{n+1} | 𝓕_n] = (1/T_{n+1}) (S_n + E[X_{n+1} | 𝓕_n]) = (1/T_{n+1}) (S_n + P(you choose red)) = (S_n + S_n/T_n) / T_{n+1} = S_n (1 + 1/T_n) / (T_n + 1) = S_n / T_n a.s.
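The Pólya urn martingale can be checked by simulation: averaging the red fraction over many independent runs returns (approximately) the initial fraction, here 1/2. A sketch with one marble added per draw; the run length and run count are arbitrary:

```python
import random

# Polya urn: draw a marble, return it plus one more of the same colour.
# The red fraction is a martingale, so its mean stays at the start value.
def red_fraction_after(n, red=1, black=1, rng=None):
    rng = rng or random.Random()
    for _ in range(n):
        if rng.random() < red / (red + black):
            red += 1
        else:
            black += 1
    return red / (red + black)

rng = random.Random(7)
mean = sum(red_fraction_after(200, rng=rng) for _ in range(4000)) / 4000
# mean is close to the initial fraction 0.5, though any single run's
# fraction wanders (for a (1,1) start the limit fraction is Uniform(0,1)).
```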
