U.U.D.M. Project Report 2020:32

Discrete Martingales and Harmonicity

Isak Dahlqvist

Degree project in mathematics, 15 credits. Supervisor: Ingemar Kaj. Examiner: Martin Herschend. June 2020

Department of Mathematics Uppsala University

Contents

1 Introduction

2 Random walk
  2.1 Simple random walk
  2.2 Gambler's ruin

3 Measure theory
  3.1 Definitions
  3.2 Convergence theorems

4 Conditional expectations
  4.1 Definitions
  4.2 Properties

5 Martingales
  5.1 Harmonicity
  5.2 Betting strategies
  5.3 Stopping times

6 More boundary value problems
  6.1 Multidimensional boundary value problems
  6.2 Escape times
    6.2.1 One dimension
    6.2.2 Multiple dimensions

7 Optional stopping theorem

1 Introduction

Martingale theory has its origins in gambling. The name martingale often refers to a gambling strategy for a game such as coin tossing: one doubles one's bet after each loss, until the coin finally gives a winning outcome. In later sections we will take a closer look at this betting strategy. We must note, however, that the theory of martingales does not refer to this betting strategy but to a certain property of stochastic processes. As of today martingales have been found useful in various branches outside of game theory, among them the mathematics of finance, especially the theory of financial derivatives such as options, and stochastic analysis, here mostly in diffusion theory [1].

We will touch upon many of the different mathematical areas where the martingale is present. Mostly we will cover gambling examples and generalizations of such, but where appropriate we will point out connections to other areas of mathematics. We first cover some basic stochastic processes in section 2; in section 3 we very briefly cover some measure theory; in section 4 we define conditional expectations, which we need for defining martingales, which is done in section 5; in section 6 we connect back to the results of section 2 and look at the notion of being harmonic and its relation to martingales; lastly, in section 7 we cover a major theorem from martingale theory, the optional stopping theorem.

The intention of this thesis is to give a quick introduction to martingales, and also a minor priming in some ideas from measure theory, while still connecting back to the subject of Markov processes and other stochastic processes. The reader is assumed to know basic probability theory and to be at least acquainted with some theory of Markov processes. Much focus is spent on the connection between martingales and harmonicity. We will refer to harmonicity as the property of being harmonic and associated matters; this is to emphasize that this thesis does not cover the subject of harmonic functions, it only reflects on the relation between martingales and harmonic functions.

2 Random walk

2.1 Simple random walk

We start by introducing a stochastic process, using the simple symmetric random walk as an example. We will work through the example by first defining the random walk and then proving some of its properties; these results are often seen in introductory books on Markov processes. Generalized versions of this model are used in an array of problems, for example modeling stock prices and stochastic models of heat diffusion. In the next subsection 2.2 we cover bounded symmetric random walks, in particular the gambler's ruin problem, which is the one-dimensional case; this is later extended to multidimensional boundary value problems in section 6. The main purpose of this section is to get an intuitive feel for how some stochastic processes behave. The first part, down until Proposition 2.1, is based on lecture notes from the course Markov processes given at Uppsala University [2]; the part from Proposition 2.1 until the next subsection is based on Alm and Britton [3]. We now give our first definition.

Definition 2.1. Let T be a subset of ℝ. A stochastic process is then a collection of random variables {X_t}_{t∈T} defined on the sample space Ω. The set of all possible values of a stochastic process is called the state space, and it is commonly denoted S.

The variable t is called a time parameter. Thus one should note that {X_t}_{t∈T} is an ordered set; sometimes one uses the notation (X_t)_{t∈T} to stress this fact. Here we will only consider stochastic processes where t is a discrete parameter. A special family of stochastic processes is the Markov chains. These have the Markov property, which says that the next step of the process depends only on the previous step. Formally, if {X_n}_{n=0}^∞ is a Markov chain then

P[X_{n+1} = y | X_1, X_2, ..., X_n = x] = P[X_{n+1} = y | X_n = x] = p(x, y).

Remark. We use p(x, y) to denote the probability of going from state x to state y in one time step.

Now let us consider a simple symmetric random walk.

Definition 2.2. Let {X_n}_{n=1}^∞ be a sequence of independent random variables such that P[X_n = 1] = P[X_n = −1] = 1/2 for all n ∈ ℕ. Define S_n = Σ_{i=1}^n X_i, with S_0 = 0; then S_n is a simple symmetric random walk in one dimension [2].
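To make the definition concrete, here is a minimal Python sketch that simulates one path of the walk; the function name and parameters are our own choices, not notation from the sources.

    import random

    def simple_random_walk(n_steps, seed=None):
        """Return the path S_0, S_1, ..., S_n of a simple symmetric random walk."""
        rng = random.Random(seed)
        path = [0]  # S_0 = 0
        for _ in range(n_steps):
            step = 1 if rng.random() < 0.5 else -1  # P[X_i = 1] = P[X_i = -1] = 1/2
            path.append(path[-1] + step)
        return path

    print(simple_random_walk(10, seed=1))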

Note that the random walk is a Markov process. Let us consider the probability of returning to 0. We define the event

A_k := "the random walker reaches S_n = k for some step n ≥ 1".

For simplicity let us define P_k := P[A_k]. Using this notation, we are interested in finding P_0; in the search for it we will also find P_k for all k ∈ ℤ. This will give us an idea of how probable it is for the random walker to return to where it started. To find this probability, consider the following quite simple proposition.

Proposition 2.1. For any k ∈ ℤ \ {0} we have that P_k = P_1^k.

Proof. Notice that A_2 can only happen if we first reach S_n = 1, thus we get

P[A_2] = P[A_1] P[A_2 | A_1].

Notice that the event of walking from state 1 to state 2 is equivalent to the event of walking from state 0 to state 1. Thus P[A_2 | A_1] = P[A_1], hence P_2 = P_1^2. Using this recursively we see that

P_k = P_1 · P_{k−1} = P_1 · (P_1 · P_{k−2}) = ... = P_1^k.  □

Now we will use a technique usually called first step analysis, where we condition on the first step. Let us calculate the following,

P_1 = P[X_1 = 1] P[A_1 | X_1 = 1] + P[X_1 = −1] P[A_1 | X_1 = −1]

= (1/2) · 1 + (1/2) P_2 = 1/2 + (1/2) P_1^2,

using Proposition 2.1 in the last step. Thus P_1 is given by the equation

P_1^2 − 2 P_1 + 1 = 0.

The equation only has the solution P_1 = 1, therefore by Proposition 2.1 P_k = 1 for all k ∈ ℤ \ {0}. Notice that

P_0 = P[X_1 = 1] P[A_0 | X_1 = 1] + P[X_1 = −1] P[A_0 | X_1 = −1] = (1/2) · 1 + (1/2) · 1 = 1.

Thus we have shown that in fact P_k = 1 for all k ∈ ℤ. In other words, each state will certainly be reached and in particular we will return to state 0 [3]. How long will it take until we actually return? Let us also calculate the expected time to return to 0. Define T_{i,j} := "the time it takes to walk from state i to state j", and T_i := "the time it takes to return to state i starting in state i". If we assume that E[T_0] < ∞ then we can use first step analysis again, now for the expected value, which yields

E[T_0] = (1/2)(1 + E[T_{1,0}]) + (1/2)(1 + E[T_{−1,0}]) = 1 + E[T_{1,0}],

since E[T_{−1,0}] = E[T_{1,0}] by symmetry. Using first step analysis on E[T_{1,0}] gives, since E[T_{0,0}] = 0,

E[T_{1,0}] = (1/2)(1 + E[T_{0,0}]) + (1/2)(1 + E[T_{2,0}]) = 1 + (1/2) E[T_{2,0}].  (1)

To reach 0 from 2 we first have to reach 1, thus we can rewrite E[T_{2,0}] as

E[T_{2,0}] = E[T_{2,1}] + E[T_{1,0}] = 2 E[T_{1,0}],  (2)

since E[T_{2,1}] = E[T_{1,0}]. Combining equations (1) and (2) we reach the contradiction

E[T_{1,0}] = 1 + E[T_{1,0}].

Hence our assumption was wrong and E[T_0] = ∞ [3].  □
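Both results, the certain return and the infinite expected return time, can be observed empirically. The sketch below truncates each walk at an arbitrary horizon (a necessity forced by E[T_0] = ∞): the fraction of walks that return creeps towards 1 as the horizon grows, while the average observed return time keeps growing with it instead of settling down.

    import random

    def return_time(rng, max_steps):
        """Steps until a walk started at 0 first returns to 0 (None if more than max_steps)."""
        s = 0
        for n in range(1, max_steps + 1):
            s += 1 if rng.random() < 0.5 else -1
            if s == 0:
                return n
        return None

    rng = random.Random(0)
    for horizon in (10**2, 10**4, 10**6):
        times = [return_time(rng, horizon) for _ in range(200)]
        returned = [t for t in times if t is not None]
        print(horizon, len(returned) / len(times), sum(returned) / len(returned))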

2.2 Gambler's ruin

This section is mostly based on Lawler, Chapter 1 [4], with some details filled in from Alm and Britton [3]. Let us look at the gambler's ruin problem. It is a famous problem about a gambler walking into a casino. The gambler has x coins in his pocket that he is willing to lose, and if he ever holds N coins in total after some number of games he will be satisfied and leave. Thus the game either ends with the gambler going bankrupt, that is, having 0 coins, or with the gambler having N coins and deciding to stop playing. In each game he has probability p of winning and 1 − p of losing. Formally, let {X_n}_{n=1}^∞ be a sequence of random variables such that P[X_n = 1] = p and P[X_n = −1] = 1 − p for all n. Since we earlier considered a simple symmetric random walk, we will for simplicity also consider the gambler's ruin problem for p = 1/2. Let

W_n = W_0 + Σ_{i=1}^n X_i

be the number of coins the gambler has after the n:th game, where W_0 = x is the number of coins he starts with. Define

τ := min{n : W_n = 0 or W_n = N}.

Let τ ∧ n = min{τ, n}; then the number of coins at time n is given by Ŵ_n = W_{τ∧n}. If one considers the size of τ, the statement that P[τ < ∞] = 1 seems quite intuitive, since after some time we will reach either end.¹ We state this in the following proposition.

Proposition 2.2. If {W_n} is a bounded random walk as the one defined above, then P[τ < ∞ | W_0 = x] = 1.

Proof. We will consider Q(x) := P[τ = ∞ | W_0 = x], the probability that the game goes on forever starting with x coins. Thus

Q(x) = 0                                     if x ≤ 0 or N ≤ x,
Q(x) = (1/2) Q(x + 1) + (1/2) Q(x − 1)       if 0 < x < N,

since if we have already reached either 0 or N the game has stopped, and otherwise we can rewrite Q(x) with the law of total probability. Now we have a difference

¹ We mostly end up bankrupt when we gamble, don't we?

equation with boundary values Q(0) = 0 and Q(N) = 0. Note that for each x such that 0 < x < N we have

Q(x) ≤ max{Q(x − 1), Q(x + 1)}.

This is seen by letting M = max{Q(x − 1), Q(x + 1)}; then we have that

Q(x) = (1/2) Q(x + 1) + (1/2) Q(x − 1) ≤ (1/2) M + (1/2) M = M.

With this fact we see that the maximum of Q(x) is attained at either 0 or N, but Q(0) = 0 and Q(N) = 0, hence it must be the case that Q(x) ≡ 0. Therefore P[τ < ∞ | W_0 = x] = 1, that is, the game will end (almost surely) in finite time.  □

Now that we know that the playing time for a reasonable gambler is finite, let us consider the chances that the gambler actually ends up leaving the casino with N coins. For this we define a function similar to Q(x): let F : {0, 1, ..., N} → [0, 1] be given by F(x) = P[W_τ = N | W_0 = x], in words the probability that the gambler ends up with N coins when starting with x coins. Note that F(0) = 0 and F(N) = 1, and also that for each x with 0 < x < N we have

F(x) = (1/2) F(x + 1) + (1/2) F(x − 1).  (3)

So we end up with the same difference equation as in the proof we just did, but with different boundary values. Therefore let us find the general solution to this difference equation by proving the following theorem.

Theorem 2.1. Suppose a, b ∈ ℝ and N ∈ ℤ⁺. Then the only function F : {0, 1, ..., N} → ℝ satisfying (3) with F(0) = a and F(N) = b is the linear function

F(x) = a + x(b − a)/N.

Proof. Suppose F is a solution to (3), let W_n be defined as before and let W_0 = x. Also let τ be the time to reach either 0 or N, as earlier. What we will prove is that E[F(W_{τ∧n})] = F(x) for all n. This is done using induction.

Base case: Let n = 0 and note that F(W_0) = F(x), thus E[F(W_0)] = E[F(x)] = F(x), since F(x) is a constant with respect to the expected value.

Inductive assumption: Assume that for n = k the following holds:

E[F(W_{τ∧k})] = F(x).

Inductive step: Let n = k + 1. Then we rewrite

E[F(W_{τ∧(k+1)})] = Σ_{y=0}^N P[W_{τ∧k} = y] E[F(W_{τ∧(k+1)}) | W_{τ∧k} = y]  (4)

using the law of total expectation. Note that if y = 0 or y = N and W_{τ∧k} = y, then we have reached either 0 or N and thus W_{τ∧(k+1)} = W_τ = y. Hence for these values of y we have that

E[F(W_{τ∧(k+1)}) | W_{τ∧k} = y] = E[F(y)] = F(y).

If instead 0 < y < N and W_{τ∧k} = y then

E[F(W_{τ∧(k+1)}) | W_{τ∧k} = y] = E[(1/2) F(y + 1) + (1/2) F(y − 1)]
= (1/2) E[F(y + 1)] + (1/2) E[F(y − 1)]
= (1/2) F(y + 1) + (1/2) F(y − 1)
= F(y),

where the last equality holds because of (3). Thus we have shown that (4) can be rewritten as

E[F(W_{τ∧(k+1)})] = Σ_{y=0}^N P[W_{τ∧k} = y] E[F(W_{τ∧(k+1)}) | W_{τ∧k} = y]
= Σ_{y=0}^N P[W_{τ∧k} = y] F(y)
= E[F(W_{τ∧k})] = F(x),

where the third equality holds by the definition of the expected value and the fourth equality holds by the inductive assumption. Thus by induction we have proved our claim. Hence we can now calculate the following:

F(x) = lim_{n→∞} E[F(W_{τ∧n})]
= lim_{n→∞} Σ_{y=0}^N P[W_{τ∧n} = y] F(y)
= P[W_τ = 0] F(0) + P[W_τ = N] F(N)
= (1 − P[W_τ = N]) F(0) + P[W_τ = N] F(N)
= F(0) + P[W_τ = N] (F(N) − F(0))  [4].

The case F(0) = 0 and F(N) = 1 gives us that P[W_τ = N | W_0 = x] = x/N. This can be seen by noting that F(x) is in fact the mean value of F(x − 1) and F(x + 1), which means that the point (x, F(x)) must lie on a straight line; F is thus given by F(x) = b + c · x. The condition F(0) = 0 gives us b = 0 and the condition F(N) = 1 gives us c = 1/N. Hence for arbitrary boundary conditions we have shown that [3]

F(x) = F(0) + (x/N)(F(N) − F(0)) = a + x(b − a)/N.  □
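A quick Monte Carlo check, a sketch of our own rather than anything from the sources, agrees with P[W_τ = N | W_0 = x] = x/N:

    import random

    def ruin_game(x, N, rng):
        """Play the fair game until the fortune hits 0 or N; return the final fortune."""
        w = x
        while 0 < w < N:
            w += 1 if rng.random() < 0.5 else -1
        return w

    rng = random.Random(0)
    x, N, trials = 3, 10, 20000
    wins = sum(ruin_game(x, N, rng) == N for _ in range(trials))
    print(wins / trials, "vs", x / N)  # both close to 0.3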

3 Measure theory

In this section we will very briefly cover some of the definitions from measure theory that will be of use as we carry on. They will not be used rigorously later, since that would involve much technicality that we do not want to bother with in this thesis. This section can be skipped without losing much or any understanding; it is here to give us a rigorous foundation to rely upon and refer to. The first subsection is based on Brzeźniak and Zastawniak [1]. We also cover some convergence theorems, typically encountered in a first course on measure theory, that will be needed later. These are based on the lecture notes by Sagitov [5].

3.1 Definitions

Definition 3.1. A collection F of subsets of Ω is a σ-algebra on Ω if

1. Ω ∈ F,
2. if A ∈ F then Ω \ A ∈ F,
3. if A_n ∈ F for all n ∈ ℕ then ∪_n A_n ∈ F.

Here Ω in the case of probability theory is the sample space, but in general it can be an arbitrary set. The pair (Ω, F) is called a measurable space. We say that a σ-algebra F is smaller than a σ-algebra H if F ⊂ H; this leads us to the following definition.

Definition 3.2. The σ-algebra generated by C is the smallest σ-algebra that contains C; we denote it σ(C).

Example (Borel sets): Let Ω = ℝ and let

C = {(a, b) : a < b, where a, b ∈ ℝ} = "the open intervals".

Then σ(C) is the smallest σ-algebra containing the open intervals, and σ(C) = B(ℝ), which is what we call the Borel sets.

Definition 3.3. Let (Ω, F) be a measurable space. A map P : F → [0, 1] is a probability measure if

1. it is countably additive, which means that if A_i ∩ A_j = ∅ for i ≠ j, where {A_n : n ∈ ℕ} ⊂ F, then

P[∪_n A_n] = Σ_n P[A_n],

2. P(Ω) = 1.

The triple (Ω, F, P) is called a probability space. If A ∈ F then we say that A is an event. If P(A) = 1 we say that A happens almost surely.

Definition 3.4. A function X : Ω → ℝ is F-measurable if

{ω : X(ω) ∈ B} ∈ F for all B ∈ B(ℝ),

where B(ℝ) is the Borel sets mentioned earlier. We then say that X is measurable with respect to the measurable space (Ω, F). Note that if (Ω, F, P) is a probability space, then X is what we call a random variable. If X is a random variable then σ(X), the σ-algebra generated by X, consists of all sets {ω : X(ω) ∈ B} where B ∈ B(ℝ). If {X_i}_{i=0}^∞ is a sequence of random variables, then σ({X_i}_{i=0}^∞) is the σ-algebra generated by the sequence and thus consists of all the sets {ω : X_i(ω) ∈ B} where B ∈ B(ℝ).

Definition 3.5. If {F_n}_{n=0}^∞ is a sequence of σ-algebras on Ω such that F_0 ⊂ F_1 ⊂ F_2 ⊂ ..., then {F_n}_{n=0}^∞ is a filtration.

Definition 3.6. A sequence {X_n}_{n=0}^∞ of random variables is adapted to a filtration {F_n}_{n=0}^∞ if and only if X_n is F_n-measurable for all n ∈ ℕ.

We will now show that F_n := σ(X_1, X_2, ..., X_n) gives the smallest filtration such that the sequence {X_n}_{n=0}^∞ is adapted to {F_n}_{n=0}^∞. Here the filtration {F_n}_{n=0}^∞ is smaller than a filtration {H_n}_{n=0}^∞ if F_n ⊂ H_n for every n ∈ ℕ and the sequence {X_n}_{n=0}^∞ is adapted to both. Consider any filtration {H_n}_{n=0}^∞ such that the sequence {X_n}_{n=0}^∞ is adapted to it. Then

X_n is H_n-measurable for all n ∈ ℕ, and H_0 ⊂ H_1 ⊂ ...,

thus X_1, X_2, ..., X_n are all H_n-measurable. Noting that σ(X_1, X_2, ..., X_n) is the smallest σ-algebra containing X_1, X_2, ..., X_n, in other words the smallest σ-algebra such that X_1, X_2, ..., X_n are σ(X_1, X_2, ..., X_n)-measurable, we can conclude that F_n = σ(X_1, X_2, ..., X_n) ⊂ H_n for all n ∈ ℕ.

3.2 Convergence theorems

Now let us cover some of the standard convergence theorems one encounters in a course on measure theory. We will cover the bounded convergence theorem, Fatou's lemma and the monotone convergence theorem, and lastly we will state the dominated convergence theorem without proof. Let us begin with the bounded convergence theorem. This subsection is based on the notes by Sagitov [5].

Theorem 3.1 (Bounded convergence theorem). If {X_n}_{n=0}^∞ is a sequence of random variables such that |X_n| < M almost surely and X_n → X almost surely, then

E[lim_{n→∞} X_n] = lim_{n→∞} E[X_n].

Proof. Let ε > 0. Then

|E[X_n] − E[X]| ≤ E[|X_n − X|]
= E[|X_n − X| 1{|X_n − X| ≤ ε}] + E[|X_n − X| 1{|X_n − X| > ε}]
≤ ε + E[M 1{|X_n − X| > ε}]
= ε + M P[|X_n − X| > ε].

Since X_n → X almost surely we also have X_n → X in probability, so P[|X_n − X| > ε] → 0 as n → ∞. Hence lim sup_n |E[X_n] − E[X]| ≤ ε for every ε > 0, which proves the claim.  □

Lemma 3.2 (Fatou's lemma). If {X_n}_{n=0}^∞ is a sequence of random variables such that X_n ≥ 0 almost surely, then

lim inf_{n→∞} E[X_n] ≥ E[lim inf_{n→∞} X_n].

Proof. Let Y_n = inf_{m≥n} X_m. Then Y_n ≤ X_n, and we also have that Y_n ↑ Y = lim inf_{n→∞} X_n. Hence it is sufficient to show that

lim inf_{n→∞} E[Y_n] ≥ E[Y].

Note that Y_n ∧ M ≤ M, thus by the bounded convergence theorem we get that

lim inf_{n→∞} E[Y_n] ≥ lim inf_{n→∞} E[Y_n ∧ M] = E[Y ∧ M].

The convergence E[Y ∧ M] → E[Y] as M → ∞ follows from the definition of expectation.  □

Theorem 3.3 (Monotone convergence theorem). Let {X_n} be a sequence of random variables. If 0 ≤ X_n(ω) ≤ X_{n+1}(ω) for all n and ω, then for all ω the limit of X_n as n → ∞ exists (it could be infinite). We also have that

lim_{n→∞} E[X_n] = E[lim_{n→∞} X_n].

Proof. By monotonicity of the expected value we have that

E[X_n] ≤ E[lim_{n→∞} X_n].

In particular we have that

lim inf_{n→∞} E[X_n] ≤ E[lim_{n→∞} X_n].

Note that since lim_{n→∞} X_n exists we have lim_{n→∞} X_n = lim inf_{n→∞} X_n, thus by Fatou's lemma we have that

E[lim_{n→∞} X_n] = E[lim inf_{n→∞} X_n] ≤ lim inf_{n→∞} E[X_n] ≤ E[lim_{n→∞} X_n].

Hence lim inf_{n→∞} E[X_n] = E[lim_{n→∞} X_n], and since the sequence E[X_n] is nondecreasing its limit exists and equals this lim inf, which proves the claim.  □

Theorem 3.4 (Dominated convergence for random variables). Let {X_n}_{n=1}^∞ be a sequence of random variables such that |X_n| ≤ Z for each n ∈ ℕ, where E[|Z|] < ∞. Then

X_n → X almost surely as n → ∞  ⟹  E[X_n] → E[X] as n → ∞.

Proof. The proof is omitted; the statement is a consequence of Lebesgue's dominated convergence theorem, which can be found in any introductory book on measure theory.  □

For more details on the convergence theorems, consider Sagitov [5].

4 Conditional expectations

Here we will define the conditional expected value and prove some of its properties. This section is based on Lawler [4], with most of the proofs filled in.

4.1 Definitions

Suppose we have a sequence of random variables {X_i}_{i=1}^∞. If the index of X_n represents time, then F_n is the known information up until time step n. In measure-theoretic terms, {X_i}_{i=1}^∞ is adapted to the filtration {F_n}_{n=0}^∞, but for simplicity we will say that F_n is the information in X_1, X_2, ..., X_n.

Remark. "F_n-measurable" can intuitively be interpreted as the name suggests: we are able to "measure" the value of the variable using F_n, and thus we can determine its properties. Now we have everything we need to characterize the conditional expectation, so we give the following definition.

Definition 4.1. Given a random variable Y and a σ-algebra F_n, the conditional expectation E[Y | F_n] is the unique F_n-measurable random variable such that

E[1_V Y] = E[1_V E[Y | F_n]]  (5)

for every F_n-measurable event V.

First we have to show that this definition is well defined, so we need to show that there exists a unique such F_n-measurable random variable. Let Z, X be two F_n-measurable random variables such that

E[1_V Z] = E[1_V X]

for every F_n-measurable event V. Then by linearity of the expected value we get

E[(Z − X) 1_V] = 0.

Since this holds for every F_n-measurable event V we can conclude that

P[Z − X < 0] = P[Z − X > 0] = 0.

For if P[Z − X < 0] > 0 there must be some event W on which Z < X; on this set W we have 1_W ≡ 1, hence E[(Z − X) 1_W] < 0, which is a contradiction. Since we can argue in a similar manner if P[Z − X > 0] > 0, it must be the case that

P[Z = X] = 1.

Hence we have now shown that Z = X almost surely. Proving the existence of the conditional expectation generally needs some work in measure theory and is therefore left out of this thesis; for the interested reader, the proof mostly involves the measure-theoretic Radon–Nikodym theorem.
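A small numerical illustration of Definition 4.1 (our own sketch): for two fair coin flips X_1, X_2 and Y = X_1 + X_2, the conditional expectation E[Y | F_1] is the random variable X_1, and the defining identity (5) can be checked by brute force for the event V = {X_1 = 1}.

    import random

    rng = random.Random(0)
    samples = [(1 if rng.random() < 0.5 else -1, 1 if rng.random() < 0.5 else -1)
               for _ in range(100000)]

    # Y = X1 + X2; E[Y | F_1] should be the function X1 of the first flip.
    for x1 in (-1, 1):
        ys = [a + b for (a, b) in samples if a == x1]
        print(x1, sum(ys) / len(ys))  # close to x1

    # Check the defining property (5) with V = {X1 = 1}:
    lhs = sum((a + b) for (a, b) in samples if a == 1) / len(samples)  # E[1_V Y]
    rhs = sum(a for (a, b) in samples if a == 1) / len(samples)        # E[1_V E[Y|F_1]]
    print(lhs, rhs)  # approximately equal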

4.2 Properties

Now we will look at some of the properties that the conditional expectation has.

a. The random variable E[Y | F_n] is a function of X_1, X_2, ..., X_n.

Proof. A rigorous proof of this involves some measure theory, so we just give a simple intuition here. Note that E[Y] is a real number (when it exists) and is completely determined by Y. Conditioning Y on something random like F_n makes the expected value depend on the outcome of F_n, which in turn depends on X_1, X_2, ..., X_n. Hence the conditional expected value is a function of X_1, X_2, ..., X_n.  □

b. To say that Y is F_n-measurable means that if we treat X_1, X_2, ..., X_n as constants then Y is constant. Hence if the random variable Y is F_n-measurable, then E[Y | F_n] = Y.

Proof. The definition states that E[Y | F_n] is exactly the F_n-measurable random variable such that

E[1_V Y] = E[1_V E[Y | F_n]].

Since Y itself is F_n-measurable, the statement follows directly from the uniqueness in the definition and we are done.  □

c. If Y, Z are random variables and a, b are constants, then

E[aY + bZ | F_n] = a E[Y | F_n] + b E[Z | F_n].

Proof. Let V be an F_n-measurable event. Then using the definition and linearity of E we get

E[1_V (aY + bZ)] = E[1_V aY + 1_V bZ]
= a E[1_V Y] + b E[1_V Z]
= a E[1_V E[Y | F_n]] + b E[1_V E[Z | F_n]]
= E[1_V (a E[Y | F_n] + b E[Z | F_n])],

and by uniqueness of the conditional expectation (c.) therefore holds.  □

d. If Y is a random variable and Z is an F_n-measurable random variable, then

E[ZY | F_n] = Z E[Y | F_n].

Proof. Here we also need some measure theory to prove the statement rigorously, so we give an intuitive proof. We can see Z as a constant, since we have all the information about it in F_n; hence we can pull it out of the expected value using linearity. A full proof can be found in Brzeźniak and Zastawniak [1].  □

e. Often referred to as the law of total expectation, we have the property

E[E[Y | F_n]] = E[Y].

Proof. We use the fact that the defining equality (5) holds for any F_n-measurable event V; in particular it holds for V = Ω, in which case 1_V ≡ 1, and (e.) follows.  □

f. If Y is independent of X_1, X_2, ..., X_n, then

E[Y | F_n] = E[Y].

Proof. When Y is independent of X_1, X_2, ..., X_n, then Y is also independent of 1_V. Thus we have that

E[1_V Y] = E[1_V] E[Y] = E[1_V E[Y]].

Noting that E[Y] is an F_n-measurable random variable, since it is a constant, we have what is stated in the definition and thus we are done [4].  □

g. If n < m, then E[E[Y | F_m] | F_n] = E[Y | F_n]. This we will refer to as the projection property.

Proof. Note that if V is an F_n-measurable event then it must also be an F_m-measurable event, since F_m contains at least all the "information" in F_n; also note that E[E[Y | F_m] | F_n] is F_n-measurable. This gives us the following:

E[1_V E[Y | F_m]] = E[E[1_V Y | F_m]]   (1_V is F_m-measurable, so (d.) applies)
= E[1_V Y]                              (by (e.))
= E[E[1_V Y | F_n]]                     (again by (e.))
= E[1_V E[Y | F_n]]                     (1_V is F_n-measurable, so (d.) applies).

By the uniqueness of the conditional expectation we are done [4].  □

5 Martingales

With all this covered we have finally reached the main subject of the thesis: martingales. First we will look at the definition, and then we will prove some basic properties. In later sections we explore some of the situations where martingales occur and what they are used for. In section 7 we will prove one of the big theorems related to martingales, the optional stopping theorem. Lawler is the main source of inspiration for this section [4], but the two latter subsections are also based on Brzeźniak and Zastawniak [1].

Definition 5.1. If {X_i}_{i=0}^∞ and {M_i}_{i=0}^∞ are sequences of random variables and {F_n}_{n=0}^∞ is a filtration such that {X_n} is adapted to it, then {M_i}_{i=0}^∞ is a martingale with respect to {X_i}_{i=0}^∞ if E[|M_n|] < ∞ for each n ∈ ℕ, each M_n is F_n-measurable, and

E[M_{n+1} | F_n] = M_n.  (6)

Remark. (6) is what will be referred to as the martingale property.

Combining the definition and the projection property (g.) we can easily prove the following property of a martingale.

Theorem 5.1. Let {M_n} and {F_n} be given as in Definition 5.1. Then

E[M_m | F_n] = M_n

for all n, m ∈ ℕ with n < m.

Proof. We will prove this by induction over m.

Base case: Let m = n + 1. Then by the martingale property (6) we have that

E[M_{n+1} | F_n] = M_n.

Inductive assumption: Assume that the claim holds for m = n + k, that is,

E[M_{n+k} | F_n] = M_n.

Inductive step: Let m = n + k + 1. Using property (g.) we get

E[M_{n+k+1} | F_n] = E[E[M_{n+k+1} | F_{n+k}] | F_n] = E[M_{n+k} | F_n] = M_n,

where the second equality uses the martingale property E[M_{n+k+1} | F_{n+k}] = M_{n+k} and the last equality holds by the inductive assumption. Thus by induction we are done.  □

Using the theorem we just proved we see that in particular the following holds: E[M_n | F_0] = M_0. Applying property (e.) it then follows that

E[M_n] = E[E[M_n | F_0]] = E[M_0].

That is, for each n ∈ ℕ the expected value of the martingale is the same as the expected value before we started. From Theorem 5.1 we see that if we are given the information in F_n, then we expect the value at a later time step m to be the same as at time step n. We could say that we expect martingales to stay in the last state they were observed in.

5.1 Harmonicity

If {X_i}_{i=0}^∞ is a Markov chain with state space S and f is a function such that M_n = f(X_n) is a martingale, then f is said to be harmonic with respect to the Markov chain, which is equivalent to saying that for every x,

f(x) = Σ_{y∈S} p(x, y) f(y).  (7)

If (7) only holds on a subset S_0 ⊂ S, we can modify our Markov chain so that every state in S \ S_0 is absorbing; formally this means

p(y, y) = 1 for all y ∈ S \ S_0.

Then M_n = f(X_n) becomes a martingale. If we look at the gambler's ruin problem from earlier, we see that the function F(x) is in fact a harmonic function with respect to the random walk, by equation (3). The property of being harmonic is a strong link between martingales and boundary value problems; in section 6 we will analyse this further. A function f can be subharmonic if instead of (7) we have

f(x) ≤ Σ_{y∈S} p(x, y) f(y).

Correspondingly we define a submartingale by the property

M_n ≤ E[M_{n+1} | F_n].

If we view our martingale as the outcome of a certain game, then the submartingale can be viewed as a game in favor of the player: each time we play the game we expect to win more than we had in the previous step. In a completely analogous fashion, a function f is superharmonic if instead of (7)

f(x) ≥ Σ_{y∈S} p(x, y) f(y),

and accordingly a supermartingale is defined by

M_n ≥ E[M_{n+1} | F_n].

Using the example of a game, this is a game where the odds favor the dealer: each round you are expected to have less money than in the previous round. Now let us talk about betting strategies, which will lead us to another area where martingales appear, the area of stochastic integrals.

5.2 Betting strategies

This subsection is based mostly on Brzeźniak and Zastawniak [1]. Consider a game where you have to bet on the outcome of a coin. We are interested in modeling the winnings for a chosen set of betting rules. Let X_n denote the winnings per unit bet on the n:th coin flip; betting on "heads" for the n:th coin flip then means X_n = 1 if the outcome is "heads" and X_n = −1 if the outcome is "tails". Let us first consider the simple strategy where you bet 1$ on "heads" in each game. Then our total winnings would be

M_n = Σ_{i=1}^n X_i.  (8)

Let M_0 = 0, that is, our winnings before playing are 0. If the coin is fair, so that the probability of either "heads" or "tails" is 1/2, it is easy to see that {M_n}_{n=0}^∞ is a martingale with respect to {X_n}_{n=1}^∞. Now let α_n be the bet for the n:th coin flip. We need to determine α_n after the (n − 1):th flip; in other words, before we flip the coin we have to make our bet. Therefore we let α_n be a random variable that is F_{n−1}-measurable; we say that α_n is predictable. For an arbitrary betting strategy our winnings can then be written as

W_n = Σ_{i=1}^n α_i X_i = Σ_{i=1}^n α_i (M_i − M_{i−1}) = Σ_{i=1}^n α_i ΔM_i.  (9)

Again we let W_0 = 0. Note that what we have now written is the expression for a discrete stochastic integral W_n: it is the integral of the integrand α_i with respect to the differential ΔM_i = M_i − M_{i−1}. Now we have all the notation needed for the following definition.
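Equation (9) translates directly into code. Below is a sketch of the discrete stochastic integral for one hypothetical predictable rule (the bet for round i may look only at X_1, ..., X_{i−1}); averaging W_n over many runs gives a value near 0, in line with Theorem 5.2 below for a fair game.

    import random

    def winnings(strategy, n, rng):
        """W_n = sum of alpha_i * (M_i - M_{i-1}) for the coin-tossing martingale."""
        past, w = [], 0.0
        for _ in range(n):
            alpha = strategy(past)  # predictable: sees only past outcomes
            dm = 1 if rng.random() < 0.5 else -1  # Delta M_i = X_i
            w += alpha * dm
            past.append(dm)
        return w

    def bet_after_losses(past):
        """Hypothetical example rule: bet 1 only if the last two flips were losses."""
        return 1.0 if past[-2:] == [-1, -1] else 0.0

    rng = random.Random(0)
    runs = [winnings(bet_after_losses, 50, rng) for _ in range(20000)]
    print(sum(runs) / len(runs))  # close to 0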

Definition 5.2. A betting strategy {α_n}_{n=1}^∞ with respect to the information in F_n (i.e. the filtration {F_n}_{n=0}^∞) is a sequence of random variables such that α_n is F_{n−1}-measurable for all n ∈ ℕ, where F_0 = {∅, Ω}.

We will now continue to a theorem that should be considered thoroughly before ever going to the casino again.

Theorem 5.2 (The house always wins). Let {α_n}_{n=0}^∞ be a betting strategy and suppose {M_n} and {W_n} are defined as in (8) and (9) above.

1. If {α_n}_{n=0}^∞ is a bounded sequence and {M_n}_{n=0}^∞ is a martingale, then {W_n}_{n=0}^∞ is a martingale.

2. If {α_n}_{n=0}^∞ is a non-negative bounded sequence and {M_n}_{n=0}^∞ is a supermartingale, then {W_n}_{n=0}^∞ is a supermartingale.

3. If {α_n}_{n=0}^∞ is a non-negative bounded sequence and {M_n}_{n=0}^∞ is a submartingale, then {W_n}_{n=0}^∞ is a submartingale.

Proof. First note that α_n and W_{n−1} are F_{n−1}-measurable; W_{n−1} is F_{n−1}-measurable since it is a sum of F_{n−1}-measurable random variables (a fact from measure theory). Using the properties of conditional expectation we can show that

E[W_n | F_{n−1}] = E[W_{n−1} + α_n (M_n − M_{n−1}) | F_{n−1}]
= W_{n−1} + α_n E[M_n − M_{n−1} | F_{n−1}]
= W_{n−1} + α_n (E[M_n | F_{n−1}] − M_{n−1})
= W_{n−1} + α_n (M_{n−1} − M_{n−1})
= W_{n−1}.

Thus we have shown 1. For 2 we simply note that when α_n ≥ 0 and M_n is a supermartingale we have α_n (E[M_n | F_{n−1}] − M_{n−1}) ≤ 0, thus {W_n}_{n=0}^∞ is also a supermartingale. For 3 we use an analogous argument [1].  □

So what does this say about our gambling? It states that if we play a game that is unfair, then whatever betting strategy we use, the game will still be unfair. This holds under the assumptions that our credit limit is bounded and that we can only make positive bets; in other words, it applies when we sit down at the roulette table. In section 7 we will cover the optional stopping theorem, which can also be seen as a theorem that makes gambling less attractive.

5.3 Stopping times

In most games that are played you have the option to stop playing whenever you want. Let τ be the number of rounds in a game that we decide to play before we quit. In most cases τ will be decided based on the state of the game, and therefore we take it to be a random variable. Of course τ could be fixed, if we decide to play a fixed number of rounds, in the same manner that any random variable can be constant. To make the setting general we let τ ∈ ℕ ∪ {∞}. After the n:th round the "information" in F_n is available; we should thus be able to decide whether τ = n after each round n. Therefore we define the following.

Definition 5.3. Let τ ∈ ℕ ∪ {∞} be a random variable. Then τ is a stopping time with respect to the information in F_n if for all n ∈ ℕ,

{τ = n} ∈ F_n.

In other words, the decision to stop is based only on the present information. We also define the stopping of a sequence as follows.

Definition 5.4. Let τ ∧ n = min(τ, n) and let {X_n}_{n=0}^∞ be a sequence of random variables. Then X_{τ∧n} is the sequence stopped at τ. Formally one would write it as

X_{τ(ω)∧n}(ω), for all ω ∈ Ω.

Note that we have in fact already used this definition without stating it explicitly. From what we showed in the previous subsection we get the following theorem almost instantly.

Theorem 5.3. Let τ be a stopping time.

1. If M_n is a martingale, then M_{τ∧n} is a martingale.

2. If M_n is a supermartingale, then M_{τ∧n} is a supermartingale.

3. If M_n is a submartingale, then M_{τ∧n} is a submartingale.

Proof. Given a stopping time τ we can define a betting strategy

α_n = 1 if τ ≥ n,   α_n = 0 if τ < n.

We have to check that α_n truly is a betting strategy, that is, that α_n is F_{n−1}-measurable. Since the decision to stop before time step n only uses the information in F_{n−1}, α_n must be F_{n−1}-measurable and hence a betting strategy. With this strategy the winnings are W_n = M_{τ∧n} − M_0, so the result follows directly from Theorem 5.2. A full proof can be found in Brzeźniak and Zastawniak [1].  □

6 More boundary value problems

6.1 Multidimensional boundary value problems

Let us consider the higher dimensional cases of boundary value problems, which can be seen as the multidimensional equivalent of the gambler's ruin problem that we covered earlier. Instead of {1, 2, ..., N} we now have an arbitrary subset A ⊂ ℤ^d. We assume that A is a connected set, that is, for any two points x, y ∈ A a random walker can with probability greater than 0 walk from x to y. Define the following:

∂A = {z ∈ ℤ^d \ A : dist(z, A) = 1}.

∂A is then a discrete boundary of the set A. Let Ā = A ∪ ∂A be the closure of A. These definitions can be seen as discrete analogues of the continuous definitions of topological closure and topological boundary. This section is solely based on Lawler, Chapter 1, with some details filled in [4]. We now carry on with the two following definitions.

Definition 6.1. Let {X_n}_{n=1}^∞ be a sequence of d-dimensional random variables such that

P[X_n = e_i] = P[X_n = −e_i] = 1/(2d), for all i ∈ {1, 2, ..., d},

where e_i is the i:th standard basis vector in d dimensions. Now define the sum

S_n = Σ_{i=1}^n X_i.

Then the sequence {S_n}_{n=0}^∞ is a d-dimensional simple symmetric random walk. We say that {S_n} is a random walk in ℤ^d.

Definition 6.2. Let L be the operator defined by

L F(x) = (1/(2d)) Σ_{y∈ℤ^d, |x−y|=1} (F(y) − F(x)).

Let {S_n} be a random walk in ℤ^d and let x be a state of the random walk. Then note that L F(x) = 0 is equivalent to the definition of being harmonic at x from earlier. This is seen by noting that a point x in d-dimensional space has exactly 2d points y such that |x − y| = 1, and that the probability p(x, y) = 1/(2d) for each such y. Thus when L F(x) = 0 we have

0 = L F(x)
⇐⇒ 0 = (1/(2d)) Σ_{y∈ℤ^d, |x−y|=1} (F(y) − F(x))
⇐⇒ 0 = (1/(2d)) (−2d F(x) + Σ_{y∈ℤ^d, |x−y|=1} F(y))
⇐⇒ F(x) = (1/(2d)) Σ_{y∈ℤ^d, |x−y|=1} F(y)
⇐⇒ F(x) = Σ_{y∈ℤ^d} p(x, y) F(y).

Hence we say that F(x) is (discrete) harmonic on a set A with respect to the random walk if L F(x) = 0 for each x ∈ A. We can rewrite L F(x) in terms of the expected value of the random walk S_n as

L F(x) = E[F(S_{n+1}) − F(S_n) | S_n = x].

Intuitively this can be seen as a kind of expected rate of change per step of the random walk. Hence L is often called the discrete Laplacian. With this we are now able to state the so-called discrete Dirichlet problem for harmonic functions.

Dirichlet problem for harmonic functions. Let A ⊂ ℤ^d be a bounded set and let a function F : ∂A → ℝ be given. Find an extension of F onto Ā such that

L F(x) = 0, for all x ∈ A.

When d = 1 this is equivalent to the gambler's ruin problem. To see this we choose ∂A = {0, N}, with F : ∂A → ℝ given by F(0) = 0 and F(N) = 1, and A = {1, 2, ..., N − 1}. Note that the recursive relation (3) that we gave for each x ∈ A can be rewritten as

F(x) = (1/2) F(x + 1) + (1/2) F(x − 1)
⇐⇒ 0 = (1/2) F(x + 1) + (1/2) F(x − 1) − F(x)
⇐⇒ 0 = (1/2) Σ_{y∈ℤ, |x−y|=1} (F(y) − F(x))
⇐⇒ L F(x) = 0, for all x ∈ A.

We will now extend this result to the case where d > 1. But first we state a lemma that will be of use.

Lemma 6.1. Suppose A ⊂ ℤ^d is a bounded discrete set and F(x) is harmonic on A with respect to the random walk {S_n}, in other words L F(x) = 0 for all x ∈ A. Then F(x) attains its maximum on ∂A.

This lemma is in fact a generalization of the result we obtained in the proof of Proposition 2.2.

Proof. As we have noted earlier, the notion of being harmonic on A describes a recursive relation for F(x): for each x ∈ A we have

L F(x) = 0
⇐⇒ (1/(2d)) Σ_{y, |x−y|=1} F(y) − F(x) = 0
⇐⇒ F(x) = (1/(2d)) Σ_{y, |x−y|=1} F(y).

If we let M_x = max_{y : |x−y|=1} F(y) then we see that

F(x) = (1/(2d)) Σ_{y, |x−y|=1} F(y) ≤ (1/(2d)) Σ_{y, |x−y|=1} M_x = (1/(2d)) · 2d · M_x = M_x.

Therefore, for each x ∈ A there is always one of its neighbours, say y ∈ Ā, such that F(x) ≤ F(y). Since this is true for each x ∈ A, F(x) must attain its maximum on ∂A.  □

Lemma 6.1 is an analogue of what in other literature is called the maximum principle, which can be proven for every harmonic function. Now define the variable τ = min{n ≥ 0 : S_n ∉ A}. We are now ready to prove the following theorem.

Theorem 6.2. If A ⊂ ℤ^d is bounded, then for each function F : ∂A → ℝ there is a unique extension to Ā such that

L F(x) = 0, for all x ∈ A.

In other words, there is a unique extension such that F is harmonic on A. This extension is given by

F_0(x) = E[F(S_τ) | S_0 = x] = Σ_{y∈∂A} P[S_τ = y | S_0 = x] F(y).

Proof. First we show that F_0 actually is a solution to the Dirichlet problem. This is done using the definition of the discrete Laplacian. Consider the following equalities:

L F_0(x) = (1/(2d)) Σ_{y∈ℤ^d, |x−y|=1} (F_0(y) − F_0(x))
= (1/(2d)) Σ_{y∈ℤ^d, |x−y|=1} ( Σ_{z∈∂A} P[S_τ = z | S_0 = y] F(z) − Σ_{z∈∂A} P[S_τ = z | S_0 = x] F(z) )
= Σ_{z∈∂A} F(z) (1/(2d)) Σ_{y∈ℤ^d, |x−y|=1} (P[S_τ = z | S_0 = y] − P[S_τ = z | S_0 = x]).

In the last step we rearranged the terms and then switched the order of summation; switching the order can be done since both sums are finite. Now let us consider only the inner sum:

(1/(2d)) Σ_{y∈ℤ^d, |x−y|=1} (P[S_τ = z | S_0 = y] − P[S_τ = z | S_0 = x])
= (1/(2d)) (−2d P[S_τ = z | S_0 = x] + Σ_{y∈ℤ^d, |x−y|=1} P[S_τ = z | S_0 = y])
= −P[S_τ = z | S_0 = x] + (1/(2d)) Σ_{y∈ℤ^d, |x−y|=1} P[S_τ = z | S_0 = y]
= 0.

Here we used that there are exactly 2d points neighbouring x, which gives the first equality. The last step is then seen by noting that, similarly to the one-dimensional case, we have

P[S_τ = z | S_0 = x] = (1/(2d)) P[S_τ = z | S_1 = y_1] + (1/(2d)) P[S_τ = z | S_1 = y_2] + ... + (1/(2d)) P[S_τ = z | S_1 = y_{2d}]
= (1/(2d)) P[S_τ = z | S_0 = y_1] + (1/(2d)) P[S_τ = z | S_0 = y_2] + ... + (1/(2d)) P[S_τ = z | S_0 = y_{2d}],

where y_1, y_2, ..., y_{2d} are all y such that |x − y| = 1. We will now prove that the solution is unique. We define the following:

M_n = F(S_{τ∧n}).

If F is harmonic on A and S_0 = x ∈ Ā, then M_n has the property

E[M_{n+1} | S_0, S_1, ..., S_n] = M_n,

which is the martingale property that we defined earlier. The equivalence between {M_n} being a martingale with respect to {S_n} and F being harmonic on A is seen by

F harmonic on A ⇐⇒ L F(x) = 0, ∀x ∈ A
⇐⇒ E[F(S_{(n+1)∧τ}) − F(S_{n∧τ}) | S_0, S_1, ..., (S_{n∧τ} = x)] = 0, ∀x ∈ A
⇐⇒ E[F(S_{(n+1)∧τ}) | S_0, S_1, ..., (S_{n∧τ} = x)] − F(S_{n∧τ}) = 0, ∀x ∈ A
⇐⇒ E[M_{n+1} | S_0, S_1, ..., S_{n∧τ}] = M_n.

Here we use the Markov property of the random walk to rewrite the expected value so that

E[F(S_{(n+1)∧τ}) − F(S_{n∧τ}) | S_{n∧τ} = x] = E[F(S_{(n+1)∧τ}) − F(S_{n∧τ}) | S_0, S_1, ..., (S_{n∧τ} = x)].

Hence F(x) being harmonic on A with respect to {S_n} is equivalent to the process {M_n} being a martingale with respect to the random walk {S_n}, as mentioned earlier. We now use the property E[M_n] = E[M_0] = M_0. If S_0 = x we can use this property to get the following:

Σ_{y∈Ā} P[S_{τ∧n} = y] F(y) = E[M_n] = M_0 = F(x).

We already proved in the one-dimensional case that τ < ∞ with probability 1. This time we note that Q(x) = P[τ = ∞ | S_0 = x] is harmonic on A, and that Q(x) = 0 for all x ∈ ∂A; thus by Lemma 6.1, Q(x) ≡ 0 for all x ∈ Ā. Hence we can calculate that

F(x) = lim_{n→∞} Σ_{y∈Ā} P[S_{τ∧n} = y] F(y) = Σ_{y∈∂A} P[S_τ = y] F(y) = E[F(S_τ) | S_0 = x],

where the limit and sum can be interchanged since Ā is finite and thus the sum is finite. Thus we have shown that the solution is unique.  □
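Theorem 6.2 also suggests a way to compute the harmonic extension numerically: run random walks from x until they exit A and average the boundary values. A sketch for a small two-dimensional square, where the set, the boundary data and the sample size are arbitrary choices of ours:

    import random

    def harmonic_extension(x0, y0, in_A, F_boundary, rng, trials=20000):
        """Estimate F_0(x) = E[F(S_tau) | S_0 = x] by Monte Carlo."""
        total = 0.0
        for _ in range(trials):
            x, y = x0, y0
            while in_A(x, y):
                dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
                x, y = x + dx, y + dy
            total += F_boundary(x, y)
        return total / trials

    rng = random.Random(0)
    in_A = lambda x, y: 0 < x < 6 and 0 < y < 6  # A = interior of a square
    F = lambda x, y: 1.0 if x == 6 else 0.0      # boundary data: 1 on the right edge
    print(harmonic_extension(3, 3, in_A, F, rng))  # close to 1/4 by symmetry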

6.2 Escape times

Now let us consider escape times for bounded random walks. We want to find the expected value of the time it takes the random walker to reach a boundary point.

6.2.1 One dimension

Consider a one-dimensional random walk {S_n}. Define A = {1, 2, ..., N − 1} and let the boundary be ∂A = {0, N}. Let x ∈ Ā, and define τ as the first time the random walker reaches ∂A, as we did earlier. Now let us calculate the expected value of τ:

e(x) = E[τ | S_0 = x].

Note that e(0) = e(N) = 0, since then we start in a stopping state. For each x ∈ A we have

e(x) = 1 + (1/2) e(x + 1) + (1/2) e(x − 1).

Thus we can calculate L e(x); rearranging the terms we get

(1/2) e(x + 1) + (1/2) e(x − 1) − e(x) = −1
⇐⇒ (1/2) Σ_{y, |x−y|=1} (e(y) − e(x)) = −1
⇐⇒ L e(x) = −1.

To summarize, we want to find a function such that e(0) = e(N) = 0 and L e(x) = −1 for all x ∈ A. For this, let us consider the function f(x) = x² and calculate

L f(x) = (1/2) Σ_{y, |x−y|=1} (f(y) − f(x))
= (1/2) f(x + 1) + (1/2) f(x − 1) − f(x)
= (1/2)(x² + 2x + 1) + (1/2)(x² − 2x + 1) − x²
= x² + 1 − x² = 1.

Also consider g(x) = x, for which we calculate

L g(x) = (1/2) Σ_{y, |x−y|=1} (g(y) − g(x)) = (1/2)(x + 1 + x − 1) − x = 0.

Since the Laplacian is linear, we thus have that e(x) = c · g(x) − f(x) gives

L e(x) = −1, for all x ∈ A,

so one condition is satisfied. Now we have to satisfy the boundary conditions. Note that e(0) = c · g(0) − f(0) = 0, and that e(c) = c · g(c) − f(c) = c² − c² = 0; hence if we set c = N we have found the solution to our problem, which is given by

e(x) = N · g(x) − f(x) = Nx − x² = x(N − x).

In fact this is also the unique solution to the problem. This is seen by considering another solution e_1(x); then we have that

L(e(x) − e_1(x)) = L e(x) − L e_1(x) = (−1) − (−1) = 0.

Thus the function e(x) − e_1(x) is harmonic on A. Note that e(0) − e_1(0) = e(N) − e_1(N) = 0, therefore by Lemma 6.1 we have that e(x) − e_1(x) ≡ 0.
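Again this is easy to check by simulation; the parameters below are arbitrary choices of ours.

    import random

    def escape_time(x, N, rng):
        """Number of steps until the walk started at x leaves A = {1, ..., N-1}."""
        s, t = x, 0
        while 0 < s < N:
            s += 1 if rng.random() < 0.5 else -1
            t += 1
        return t

    rng = random.Random(0)
    x, N, trials = 3, 10, 20000
    mean = sum(escape_time(x, N, rng) for _ in range(trials)) / trials
    print(mean, "vs", x * (N - x))  # both close to 21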

6.2.2 Multiple dimensions

For the multidimensional case we let A be a finite subset of ℤ^d and {S_n} be a d-dimensional simple symmetric random walk. Start at x ∈ Ā and let τ be the first time that the random walker reaches ∂A. Exactly as in the one-dimensional case we want to calculate

e(x) = E[τ | S_0 = x].

Thus we also get that

e(x) = 0, for all x ∈ ∂A,  (10)
L e(x) = −1, for all x ∈ A.  (11)

There exists a unique function e(x) such that (10) and (11) are satisfied; this is seen using exactly the same arguments for uniqueness as in the one-dimensional case. Now consider the function f(x) = |x|² = x_1² + x_2² + ... + x_d²; similarly to the one-dimensional case one calculates that L f(x) = 1 for all x ∈ A. Let us define the martingale

M_n = |S_{n∧τ}|² − (n ∧ τ).

To show that M_n is a martingale, first let us calculate E[|S_n|²] for a walk started at 0:

E[|S_n|²] = E[S_n · S_n] = E[(Σ_{j=1}^n X_j) · (Σ_{k=1}^n X_k)] = Σ_{j=1}^n Σ_{k=1}^n E[X_j · X_k] = n + Σ_{j≠k} E[X_j · X_k] = n.

When j = k, the fact that X_j · X_k = 1 is trivial, since any unit vector dotted with itself equals 1. For j ≠ k we get a few different cases. First we have P[X_j = X_k] = 1/(2d), that is, they end up being steps in the same direction, and then X_j · X_k = 1. Secondly we have P[X_j = −X_k] = 1/(2d), in other words they end up being steps in opposite directions, and then X_j · X_k = −1. Lastly we have the case where they end up being steps along different coordinate directions, and then X_j · X_k = 0. Hence E[X_j · X_k] = 1/(2d) − 1/(2d) + 0 = 0 for all j and k such that j ≠ k, so the last equality above holds.

Denote by F_n the information up until time step n. We can then show that M_n is a martingale, since if n < τ,

E[M_{(n+1)∧τ} | F_n] = E[|S_{(n+1)∧τ}|² − (n + 1) ∧ τ | F_n]
= E[|S_{n+1}|² | F_n] − (n + 1)
= |S_n|² + 1 − (n + 1)
= |S_n|² − n
= M_n,

where E[|S_{n+1}|² | F_n] = |S_n|² + 1 holds because |S_{n+1}|² = |S_n|² + 2 S_n · X_{n+1} + |X_{n+1}|² with E[X_{n+1}] = 0 and |X_{n+1}|² = 1.

On the other hand, if n ≥ τ then

E[M_{(n+1)∧τ} | F_n] = E[|S_{(n+1)∧τ}|² − (n + 1) ∧ τ | F_n]
= E[|S_τ|² − τ | F_n]
= E[|S_{n∧τ}|² − n ∧ τ | F_n]
= |S_{n∧τ}|² − n ∧ τ
= M_n.

M_n is therefore a martingale, hence we can use that E[M_n] = E[M_0] = |S_0|². With this we see that

E[M_n] = E[|S_{n∧τ}|²] − E[n ∧ τ]
⇐⇒ E[n ∧ τ] = E[|S_{n∧τ}|²] − E[M_n]
⇐⇒ E[n ∧ τ] = E[|S_{n∧τ}|²] − |S_0|².

Now we split the term E[|S_{n∧τ}|²] into

E[|S_{n∧τ}|²] = E[|S_τ|² 1{τ ≤ n}] + E[|S_{n∧τ}|² 1{τ > n}].

Now we can take the limit of each part separately. For the first part we use the monotone convergence theorem to get

lim_{n→∞} E[|S_τ|² 1{τ ≤ n}] = E[lim_{n→∞} |S_τ|² 1{τ ≤ n}] = E[|S_τ|²].

The latter part we can bound as

E[|S_{n∧τ}|² 1{τ > n}] ≤ E[(max_{x∈Ā} |x|²) 1{τ > n}] = (max_{x∈Ā} |x|²) P[τ > n] → 0

as n → ∞, where the max exists since Ā is finite. Since also E[n ∧ τ] → E[τ] by monotone convergence, we thus get that

E[τ] = E[|S_τ|²] − |S_0|².

We notice that this coincides with the result in the one-dimensional case. Let d = 1 and A = {1, 2, ..., N − 1}; then

E[|S_τ|²] = 0² · P[S_τ = 0 | S_0 = x] + N² · P[S_τ = N | S_0 = x] = N² · (x/N) = Nx,
E[τ] = Nx − x² = x(N − x).
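As a numerical sanity check of E[τ] = E[|S_τ|²] − |S_0|², the sketch below uses a square in ℤ² around the origin; the set and the sample size are arbitrary choices.

    import random

    rng = random.Random(0)
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    in_A = lambda x, y: abs(x) < 5 and abs(y) < 5  # A = open square around 0

    taus, exit_sq = [], []
    for _ in range(20000):
        x, y, t = 0, 0, 0  # S_0 = 0, so |S_0|^2 = 0
        while in_A(x, y):
            dx, dy = rng.choice(steps)
            x, y, t = x + dx, y + dy, t + 1
        taus.append(t)
        exit_sq.append(x * x + y * y)

    print(sum(taus) / len(taus), "vs", sum(exit_sq) / len(exit_sq))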

7 Optional stopping theorem

Now we have reached the final part of the thesis and are ready to state the optional stopping theorem. From a game-theoretic point of view the theorem states that, when playing a fair game, one cannot expect to win more than one started with, provided the assumptions are satisfied; and in a realistic game scenario they probably are.

Theorem 7.1 (Optional stopping theorem). Let {M_n}_{n=0}^∞ be a martingale and let τ be a stopping time with respect to the information F_n, such that

1. P[τ < ∞] = 1, i.e. τ < ∞ almost surely,

2. E[|M_τ|] < ∞, i.e. M_τ is integrable,

3. E[M_n 1{τ > n}] → 0 as n → ∞.

Then we have that

E[M_τ] = E[M_0].

Proof. This proof is based on the proof given in Lawler [4]. Suppose {M_n} is a martingale and τ is a stopping time with respect to the information in F_n. Note that for each n ∈ ℕ,

M_τ = M_{τ∧n} + M_τ 1{τ > n} − M_n 1{τ > n}.

By Theorem 5.3, M_{τ∧n} is a martingale, hence as was shown earlier E[M_{τ∧n}] = E[M_0]. From assumption 3 we get that

lim_{n→∞} E[M_n 1{τ > n}] = 0.

All that is left to show is that

lim_{n→∞} E[M_τ 1{τ > n}] = 0.

Note that |M_τ 1{τ > n}| ≤ |M_τ| for each n, and we have E[|M_τ|] < ∞ from 2. Since τ < ∞ almost surely we have that

M_τ 1{τ > n} → 0 almost surely as n → ∞,

hence by the dominated convergence theorem

E[M_τ 1{τ > n}] → E[0] = 0 as n → ∞.  □

Let us consider some examples of the optional stopping theorem in use. First we consider the gambler's ruin from earlier; we will use what we just proved to find the same result as we did in Theorem 2.1.

Example 7.1 (Gambler's ruin estimate). Suppose S_n is a random walk in one dimension as before, let S_0 = x, and let 0 and N be the boundary points, where N ∈ ℕ. Define as earlier

τ = min{n : S_n = 0 or S_n = N}.

Using Lemma 6.1 it is simple to see that P[τ < ∞] = 1, as before. Now let us rewrite E[S_τ] in terms of P[S_τ = N]:

E[S_τ] = 0 · P[S_τ = 0] + N · P[S_τ = N] = N P[S_τ = N].

Since S_{τ∧n} is bounded between 0 and N, it clearly satisfies what is needed for the optional stopping theorem. Since it is a martingale we then have that

E[S_τ] = S_0 = x.

Hence we can solve for P[S_τ = N]:

x = N P[S_τ = N]  ⇐⇒  P[S_τ = N] = x/N.

This result is in accordance with the result we got in Theorem 2.1. To show that these results are actually usable in more general settings, let us alter the setting so that S_0 = 0 and the random walker is bounded between −j and k. The random walk can then be seen as the losses and winnings of a gambler: negative money is money that the gambler has lost, and vice versa.

Example 7.2. Let S_n be a one-dimensional random walk and let j, k ∈ ℕ give the boundary values −j and k for the random walk; define

τ = min{n : S_n = −j or S_n = k}.

Rewrite E[S_τ]:

E[S_τ] = −j P[S_τ = −j] + k P[S_τ = k]
= −j (1 − P[S_τ = k]) + k P[S_τ = k]
= −j + (j + k) P[S_τ = k].

As in the previous example the random walk is bounded, and thus

E[S_τ] = S_0 = 0.

To calculate P[S_τ = k] we just plug the value of E[S_τ] into the previous equation and get

0 = −j + (j + k) P[S_τ = k]  ⇐⇒  (j + k) P[S_τ = k] = j  ⇐⇒  P[S_τ = k] = j/(j + k).

Example 7.3. Let us consider the martingale betting strategy which we briefly mentioned in the introduction. Let {X_n} be a sequence of random variables where P[X_n = 1] = P[X_n = −1] = 1/2. Let us define the betting strategy where we double our bet after each loss:

α_n = 2^{n−1} if X_1 = X_2 = ... = X_{n−1} = −1,   α_n = 0 otherwise.

It is a famous strategy since it "guarantees" a win. So does it work? Sadly no, at least not in reality. Reality puts some basic constraints on us: we only have a finite amount of both time and wealth. These constraints are what make the optional stopping theorem useful in many realistic scenarios. Let us find out where the martingale betting strategy fails in reality. Let W_n be the winnings after the n:th game, as defined earlier; then we have the probabilities

P[W_n = 1 − 2^n] = 2^{−n},
P[W_n = 1] = 1 − 2^{−n}.

Hence we note that

E[W_n] = (1 − 2^n) P[W_n = 1 − 2^n] + 1 · P[W_n = 1]
= (1 − 2^n) 2^{−n} + 1 − 2^{−n}
= 2^{−n} − 1 + 1 − 2^{−n} = 0.

That is, for each finite time step the expected value is 0. But we know that with probability 1 we have

lim_{n→∞} W_n = 1,

since P[W_n = 1] = 1 − 2^{−n} → 1 as n → ∞. In this case we have that

E[lim_{n→∞} W_n] ≠ lim_{n→∞} E[W_n].

So the martingale betting strategy only works under the assumption that one has both infinite time and infinite wealth, hence it is not feasible in reality; and if one had infinite wealth it would be quite unethical to use the strategy.
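To see the failure in finite reality, the sketch below plays the doubling strategy against a wealth limit; the cap of 2^10 on the final bet is an arbitrary stand-in for a finite bankroll. Almost every session wins 1 coin, but the rare unbroken run of losses costs 2^11 − 1, and the average stays at 0, just as E[W_n] = 0 predicts.

    import random

    def doubling_session(max_bet_exponent, rng):
        """Double the bet after each loss; stop at the first win or when the bets run out."""
        bet, lost = 1, 0
        while bet <= 2 ** max_bet_exponent:
            if rng.random() < 0.5:  # a win recovers all losses plus 1
                return 1
            lost += bet
            bet *= 2
        return -lost  # ruined before the winning flip arrived

    rng = random.Random(0)
    results = [doubling_session(10, rng) for _ in range(200000)]
    print(sum(results) / len(results))  # close to 0
    print(min(results))                 # -(2**11 - 1), the rare catastrophe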

References

[1] Z. Brzeźniak and T. Zastawniak, Basic Stochastic Processes: A Course Through Exercises. London: Springer, 1999.

[2] Ö. Stenflo, "Lecture notes in Markov processes," January 2020.

[3] S. E. Alm and T. Britton, Stokastik: sannolikhetsteori och statistikteori med tillämpningar. Stockholm: Liber, 1st ed., 2008.

[4] G. F. Lawler, Random Walk and the Heat Equation, vol. 55. Providence, RI: American Mathematical Society, 2010.

[5] S. Sagitov, "Probability and random processes, lecture notes," June 2014.
