Discrete Martingales and Harmonicity
U.U.D.M. Project Report 2020:32

Discrete Martingales and Harmonicity

Isak Dahlqvist

Examensarbete i matematik, 15 hp
Handledare: Ingemar Kaj
Examinator: Martin Herschend
Juni 2020
Department of Mathematics, Uppsala University

Contents

1 Introduction
2 Random walk
  2.1 Simple random walk
  2.2 Gambler's ruin
3 Measure theory
  3.1 Definitions
  3.2 Convergence theorems
4 Conditional expectations
  4.1 Definitions
  4.2 Properties
5 Martingales
  5.1 Harmonicity
  5.2 Betting strategies
  5.3 Stopping times
6 More boundary value problems
  6.1 Multidimensional boundary value problems
  6.2 Escape times
    6.2.1 One dimension
    6.2.2 Multiple dimensions
7 Optional stopping theorem

1 Introduction

Martingale theory has its origins in gambling. The name "martingale" often refers to a betting strategy for a game such as coin tossing: the player doubles the bet after each loss until the coin finally gives a winning outcome. In later sections we will take a closer look at this betting strategy. We must note, however, that the theory of martingales does not refer to this betting strategy but to a certain property of stochastic processes. Today martingales have proved useful in various branches outside of game theory, among them mathematical finance, especially the theory of financial derivatives such as options; modern probability theory; and stochastic analysis, mostly in diffusion theory [1]. We will touch upon many of the mathematical areas where the martingale is present. Mostly we will cover gambling examples and generalizations of these, but where relevant we will point out connections to other areas of mathematics.
We first cover some basic stochastic processes in Section 2; in Section 3 we very briefly cover some measure theory; in Section 4 we define conditional expectation, which we need for defining martingales, which is done in Section 5; in Section 6 we connect back to the results of Section 2 and look at the notion of being harmonic and its relation to martingales; lastly, in Section 7 we cover a major theorem from martingale theory, the optional stopping theorem. The intention of this thesis is to give a quick introduction to martingales, together with a minor priming in some ideas from measure theory, while still connecting back to the subject of Markov processes and other stochastic processes. The reader is assumed to know basic probability theory and to be at least acquainted with some theory of Markov processes. Much focus is spent on the connection between martingales and harmonicity. We refer to harmonicity as the property of being harmonic and associated matters; this is to emphasize that the thesis does not cover the subject of harmonic functions in general, but only reflects on the relation between martingales and harmonic functions.

2 Random walk

2.1 Simple random walk

We start by introducing a stochastic process, using a simple symmetric random walk as an example. We work through the example by first defining the random walk and then proving some of its properties; these results are often found in introductory books on Markov processes. Generalized versions of this model are used in an array of problems, for example modelling stock prices, stochastic models of heat diffusion, and so forth. In the next subsection, 2.2, we cover bounded symmetric random walks, in particular the gambler's ruin problem, which is the one-dimensional case; this is later extended to multidimensional boundary value problems in Section 6. The main purpose of this section is to get an intuitive feel for how some stochastic processes behave.
The first part, down until Proposition 2.1, is based on lecture notes from the course Markov processes given at Uppsala University [2]; the part from Proposition 2.1 until the next subsection is based on Alm and Britton [3]. We now give our first definition.

Definition 2.1. Let $T$ be a subset of $\mathbb{R}$. A stochastic process is a collection of random variables $\{X_t\}_{t \in T}$ defined on a sample space $\Omega$. The set of all possible values of the process is called the state space, commonly denoted $S$. The variable $t$ is called a time parameter.

Note that $\{X_t\}_{t \in T}$ is an ordered set; sometimes one uses the notation $(X_t)_{t \in T}$ to stress this fact. Here we only consider stochastic processes where $t$ is a discrete parameter.

A special family of stochastic processes is the Markov chains. These have the Markov property, meaning that the next step of the process depends only on the previous step. Formally, if $\{X_n\}_{n=0}^{\infty}$ is a Markov chain, then
\[
  \mathbb{P}[X_{n+1} = y \mid X_1, X_2, \ldots, X_n = x] = \mathbb{P}[X_{n+1} = y \mid X_n = x] = p(x, y).
\]

Remark. We use $p(x, y)$ to denote the probability of going from state $x$ to state $y$ in one time step.

Now let us consider a simple symmetric random walk.

Definition 2.2. Let $\{X_n\}_{n=0}^{\infty}$ be a sequence of random variables such that $\mathbb{P}[X_n = 1] = \mathbb{P}[X_n = -1] = 1/2$ for all $n \in \mathbb{N}$. Define $S_n = \sum_{i=0}^{n} X_i$; then $S_n$ is a simple symmetric random walk in one dimension [2].

Note that the random walk is a Markov process. Let us consider the probability of returning to $0$. Define the event $A_k :=$ "the random walker reaches $S_n = k$ for some step $n \geq 1$", and for simplicity write $P_k := \mathbb{P}[A_k]$. With this notation we are interested in finding $P_0$, and in the search for it we will also find $P_k$ for all $k \in \mathbb{Z}$. This will give us an idea of how probable it is for the random walker to return to where it started. To find this probability, consider the following quite simple proposition.

Proposition 2.1. For any $k \in \mathbb{Z} \setminus \{0\}$ we have $P_k = P_1^{|k|}$.

Proof.
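Before proving anything, it can help to see the walk numerically. The following Python sketch (not part of the thesis; the function names are our own) simulates a simple symmetric random walk and estimates the hitting probability $P_k$ by Monte Carlo. Consistent with the result proved below, the estimate for $P_1$ should be close to $1$ once the walk is allowed enough steps.

```python
import random

def simple_random_walk(n_steps, seed=None):
    """Return the path S_0, S_1, ..., S_n of a simple symmetric random
    walk started at 0, where each step is +1 or -1 with probability 1/2."""
    rng = random.Random(seed)
    path = [0]
    for _ in range(n_steps):
        path.append(path[-1] + rng.choice((1, -1)))
    return path

def hit_probability(k, n_steps, n_trials, seed=0):
    """Monte Carlo estimate of P[the walk reaches state k within n_steps]."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        s = 0
        for _ in range(n_steps):
            s += rng.choice((1, -1))
            if s == k:
                hits += 1
                break
    return hits / n_trials
```

For instance, `hit_probability(1, 10_000, 500)` gives a value close to $1$, in line with $P_1 = 1$; the convergence is slow, which foreshadows the result on return times below.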
Notice that $A_2$ can only happen if we first reach $S_n = 1$; thus we get
\[
  \mathbb{P}[A_2] = \mathbb{P}[A_1]\,\mathbb{P}[A_2 \mid A_1].
\]
Notice also that the event of walking from state $1$ to state $2$ is equivalent to the event of walking from state $0$ to state $1$. Thus $\mathbb{P}[A_2 \mid A_1] = \mathbb{P}[A_1]$, hence $P_2 = P_1^2$. Using this recursively we see that
\[
  P_k = P_1 \cdot P_{k-1} = P_1 \cdot (P_1 \cdot P_{k-2}) = \ldots = P_1^k
\]
for $k \geq 1$; the case $k \leq -1$ follows by symmetry. $\square$

Now we use a technique usually called first step analysis, where we condition on the first step. Let us calculate
\[
  P_1 = \mathbb{P}[X_1 = 1]\,\mathbb{P}[A_1 \mid X_1 = 1] + \mathbb{P}[X_1 = -1]\,\mathbb{P}[A_1 \mid X_1 = -1]
      = \frac{1}{2} \cdot 1 + \frac{1}{2} P_2
      = \frac{1}{2} + \frac{1}{2} P_1^2,
\]
using Proposition 2.1 in the last step. Thus $P_1$ is given by the equation
\[
  P_1^2 - 2 P_1 + 1 = 0.
\]
This equation only has the solution $P_1 = 1$; therefore, by Proposition 2.1, $P_k = 1$ for all $k \in \mathbb{Z} \setminus \{0\}$. Notice that
\[
  P_0 = \mathbb{P}[X_1 = 1]\,\mathbb{P}[A_0 \mid X_1 = 1] + \mathbb{P}[X_1 = -1]\,\mathbb{P}[A_0 \mid X_1 = -1]
      = \frac{1}{2} \cdot 1 + \frac{1}{2} \cdot 1 = 1.
\]
Thus we have shown that in fact $P_k = 1$ for all $k \in \mathbb{Z}$. In other words, each state will certainly be reached, and in particular we will return to state $0$ [3].

How long will it take until we actually return? Let us also calculate the time it takes to return to $0$. Define $T_{i,j} :=$ "the time it takes to walk from state $i$ to state $j$", and $T_i :=$ "the time it takes to return to state $i$ starting in state $i$". If we assume that $\mathbb{E}[T_0] < \infty$, then we can use first step analysis again, but for the expected value, which yields
\[
  \mathbb{E}[T_0] = \frac{1}{2}\bigl(1 + \mathbb{E}[T_{1,0}]\bigr) + \frac{1}{2}\bigl(1 + \mathbb{E}[T_{-1,0}]\bigr)
                 = 1 + \mathbb{E}[T_{1,0}],
\]
since $\mathbb{E}[T_{-1,0}] = \mathbb{E}[T_{1,0}]$ by symmetry. Using first step analysis on $\mathbb{E}[T_{1,0}]$ gives
\[
  \mathbb{E}[T_{1,0}] = \frac{1}{2}\bigl(1 + \mathbb{E}[T_{0,0}]\bigr) + \frac{1}{2}\bigl(1 + \mathbb{E}[T_{2,0}]\bigr)
                      = 1 + \frac{1}{2}\,\mathbb{E}[T_{2,0}], \tag{1}
\]
where $\mathbb{E}[T_{0,0}] = 0$. To reach $0$ from $2$ we first have to reach $1$, so we can rewrite $\mathbb{E}[T_{2,0}]$ as
\[
  \mathbb{E}[T_{2,0}] = \mathbb{E}[T_{2,1}] + \mathbb{E}[T_{1,0}] = 2\,\mathbb{E}[T_{1,0}], \tag{2}
\]
since $\mathbb{E}[T_{2,1}] = \mathbb{E}[T_{1,0}]$. Combining equations (1) and (2) we reach the contradiction
\[
  \mathbb{E}[T_{1,0}] = 1 + \frac{1}{2} \cdot 2\,\mathbb{E}[T_{1,0}] = 1 + \mathbb{E}[T_{1,0}].
\]
Hence our assumption was wrong and $\mathbb{E}[T_0] = \infty$ [3].

2.2 Gambler's ruin

This section is mostly based on Lawler, Chapter 1 [4], with some details filled in from Alm and Britton [3].
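The pair of results above — return is certain, yet the expected return time is infinite — can be observed in simulation. The Python sketch below (our own illustration, not from the thesis) measures the truncated mean $\mathbb{E}[\min(T_0, n)]$: because $\mathbb{E}[T_0] = \infty$, this empirical average keeps growing as the truncation level $n$ grows instead of settling near a finite value.

```python
import random

def return_time(max_steps, rng):
    """First return time to 0 of a simple symmetric random walk,
    or None if the walk has not returned within max_steps."""
    s = 0
    for t in range(1, max_steps + 1):
        s += rng.choice((1, -1))
        if s == 0:
            return t
    return None

def truncated_mean_return(max_steps, n_trials, seed=0):
    """Average of min(T_0, max_steps) over n_trials independent walks.
    Since E[T_0] is infinite, this grows without bound in max_steps."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_trials):
        t = return_time(max_steps, rng)
        total += t if t is not None else max_steps
    return total / n_trials
```

Comparing, say, `truncated_mean_return(100, 1000)` with `truncated_mean_return(10_000, 1000)` shows the truncated mean increasing roughly like $\sqrt{n}$, the signature of a heavy-tailed return time.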
Let us now look at the gambler's ruin problem, a famous problem concerning a gambler walking into a casino. The gambler has $x$ coins in his pocket that he is willing to lose, and if he wins $N$ coins in total after some number of games he will be satisfied and leave. Thus the game ends either with the gambler going bankrupt, that is, having $0$ coins, or with the gambler having $N$ coins and deciding to stop playing. In each game he has probability $p$ of winning and $1 - p$ of losing. Formally, let $\{X_n\}_{n=0}^{\infty}$ be a sequence of random variables such that $\mathbb{P}[X_n = 1] = p$ and $\mathbb{P}[X_n = -1] = 1 - p$ for all $n$. Since we earlier considered a simple symmetric random walk, we will for simplicity also consider the gambler's ruin problem for $p = 1/2$. Let
\[
  W_n = W_0 + \sum_{i=0}^{n} X_i
\]
be the number of coins the gambler has after the $n$-th game, where $W_0 = x$ is the number of coins he starts with.
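The setup above translates directly into a short simulation. The Python sketch below (our own illustration; the function names are not from the thesis) plays the fair game $p = 1/2$ until the fortune $W_n$ hits $0$ or $N$, and estimates the success probability by Monte Carlo. The estimate can be checked against the classical answer $x/N$ for the fair game.

```python
import random

def gamblers_ruin(x, N, p=0.5, rng=None):
    """Play until the fortune hits 0 (ruin) or N (success); W_0 = x.
    Each game is won (+1 coin) with probability p, lost (-1) otherwise.
    Returns True if the gambler reaches N before going bankrupt."""
    rng = rng or random.Random()
    w = x
    while 0 < w < N:
        w += 1 if rng.random() < p else -1
    return w == N

def success_probability(x, N, p=0.5, n_trials=10_000, seed=0):
    """Monte Carlo estimate of P[reach N before 0] starting from x."""
    rng = random.Random(seed)
    return sum(gamblers_ruin(x, N, p, rng) for _ in range(n_trials)) / n_trials
```

For example, `success_probability(3, 10)` gives a value near $3/10$, matching $x/N$ for the symmetric case.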