MATH 171 Stochastic Processes Lecture Note

Hanbaek Lyu

Department of Mathematics, University of California, Los Angeles, CA 90095
Email address: [email protected]
www.hanaeklyu.com

Contents

Chapter 1. Markov chains
  1. Definition and examples
  2. Stationary distribution and examples
  3. Existence of stationary distribution
  4. Uniqueness of stationary distribution
  5. Convergence to the stationary distribution
  6. Markov chain Monte Carlo

Chapter 2. Poisson Processes
  1. Recap of exponential and Poisson random variables
  2. Poisson processes as an arrival process
  3. Memoryless property and stopping times
  4. Merging and splitting of Poisson processes
  5. M/M/1 queue
  6. Poisson process as a counting process
  7. Nonhomogeneous Poisson process

Chapter 3. Renewal Processes
  1. Definition of renewal processes
  2. Renewal reward processes
  3. Little's Law

Chapter 4. Martingales
  1. Conditional expectation
  2. Definition and examples of martingales
  3. Basic properties of martingales
  4. Gambling strategies and stopping times
  5. Applications of martingales at stopping times

Chapter 5. Introduction to mathematical finance
  1. Hedging and replication in the two-state world
  2. The fundamental theorem of asset pricing
  3. The binomial model

Bibliography

CHAPTER 1

Markov chains

Say we would like to model the USD price of bitcoin. We could observe the actual price at every hour and record it by a sequence of real numbers $x_1, x_2, \dots$. However, it is more interesting to build a 'model' that could predict the price of bitcoin at time $t$, or at least give some meaningful insight into how the actual bitcoin price behaves over time. Since there are so many factors affecting the price at every moment, it is reasonable that the price at time $t$ should be given by a certain RV, say $X_t$. Then our sequence of predictions would be a sequence of RVs, $(X_t)_{t \ge 0}$. This is an example of a stochastic process. Here 'process' means that we are not interested in just a single RV, but in their sequence as a whole; 'stochastic' means that the way the RVs evolve in time might be random.

In this note, we will be studying a very important class of stochastic processes called Markov chains. The importance of Markov chains lies in two places: 1) they are applicable to a wide range of physical, biological, social, and economic phenomena, and 2) the theory is well established, so we can actually compute and make predictions using the models.

1. Definition and examples

Roughly speaking, Markov processes are used to model temporally changing systems where the future state depends only on the current state. For instance, if the price of bitcoin tomorrow depends only on its price today, then the bitcoin price can be modeled as a Markov process. (Of course, the entire history of prices often affects the decisions of buyers/sellers, so this may not be a realistic assumption.) Even though Markov processes can be defined in vast generality, we concentrate on the simplest setting where the state and time are both discrete. Let $\Omega = \{1, 2, \dots, m\}$ be a finite set, which we call the state space. Consider a sequence $(X_t)_{t \ge 0}$ of $\Omega$-valued RVs, which we call a chain. We call the value of $X_t$ the state of the chain at time $t$. In order to narrow down the way the chain $(X_t)_{t \ge 0}$ behaves, we introduce the following properties:

(i) (Markov property) The distribution of $X_{t+1}$ given the history $X_0, X_1, \dots, X_t$ depends only on $X_t$. That is, for any values of $j_0, j_1, \dots, j_t, k \in \Omega$,
\[
P(X_{t+1} = k \mid X_t = j_t, X_{t-1} = j_{t-1}, \dots, X_1 = j_1, X_0 = j_0) = P(X_{t+1} = k \mid X_t = j_t). \tag{1}
\]

(ii) (Time-homogeneity) The transition probabilities
\[
p_{ij} = P(X_{t+1} = j \mid X_t = i), \qquad i, j \in \Omega \tag{2}
\]
do not depend on $t$.

When the chain $(X_t)_{t \ge 0}$ satisfies the above two properties, we say it is a (discrete-time and time-homogeneous) Markov chain. We define the transition matrix $P$ to be the $m \times m$ matrix of transition probabilities:
\[
P = (p_{ij})_{1 \le i, j \le m} = \begin{bmatrix} p_{11} & p_{12} & \cdots & p_{1m} \\ p_{21} & p_{22} & \cdots & p_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ p_{m1} & p_{m2} & \cdots & p_{mm} \end{bmatrix}. \tag{3}
\]

Finally, since the state $X_t$ of the chain is a RV, we represent its probability mass function (PMF) via a row vector
\[
r_t = [P(X_t = 1), P(X_t = 2), \dots, P(X_t = m)]. \tag{4}
\]
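To make the two defining properties concrete, here is a minimal Python sketch (our own illustration, not part of the original notes; the function name simulate_chain and the use of NumPy are assumptions for this example) that simulates a trajectory of a chain directly from its transition matrix: the next state is sampled using only the row of $P$ indexed by the current state.

import numpy as np

def simulate_chain(P, x0, T, seed=None):
    """Sample a path X_0, X_1, ..., X_T of a time-homogeneous Markov chain.

    P  : (m, m) array with P[i, j] = P(X_{t+1} = j | X_t = i), 0-indexed states
    x0 : initial state in {0, ..., m-1}
    """
    rng = np.random.default_rng(seed)
    m = P.shape[0]
    path = [x0]
    for _ in range(T):
        # Markov property: the draw depends only on the current state;
        # time-homogeneity: the same matrix P is used at every step.
        path.append(int(rng.choice(m, p=P[path[-1]])))
    return path

# The 2-state chain of Example 1.1 below, with states relabeled 1 -> 0, 2 -> 1
P = np.array([[0.2, 0.8],
              [0.6, 0.4]])
print(simulate_chain(P, x0=0, T=10, seed=0))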
Example 1.1. Let $\Omega = \{1, 2\}$ and let $(X_t)_{t \ge 0}$ be a Markov chain on $\Omega$ with the following transition matrix
\[
P = \begin{bmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{bmatrix}. \tag{5}
\]
We can also represent this Markov chain pictorially as in Figure 1, which is called the 'state space diagram' of the chain $(X_t)_{t \ge 0}$.

[FIGURE 1. State space diagram of a 2-state Markov chain]

For a concrete example, suppose
\[
p_{11} = 0.2, \quad p_{12} = 0.8, \quad p_{21} = 0.6, \quad p_{22} = 0.4. \tag{6}
\]
If the initial state of the chain $X_0$ is 1, then
\begin{align}
P(X_1 = 1) &= P(X_1 = 1 \mid X_0 = 1)P(X_0 = 1) + P(X_1 = 1 \mid X_0 = 2)P(X_0 = 2) \tag{7} \\
&= P(X_1 = 1 \mid X_0 = 1) = p_{11} = 0.2 \tag{8}
\end{align}
and similarly,
\begin{align}
P(X_1 = 2) &= P(X_1 = 2 \mid X_0 = 1)P(X_0 = 1) + P(X_1 = 2 \mid X_0 = 2)P(X_0 = 2) \tag{9} \\
&= P(X_1 = 2 \mid X_0 = 1) = p_{12} = 0.8. \tag{10}
\end{align}
Also we can compute the distribution of $X_2$. For example,
\begin{align}
P(X_2 = 1) &= P(X_2 = 1 \mid X_1 = 1)P(X_1 = 1) + P(X_2 = 1 \mid X_1 = 2)P(X_1 = 2) \tag{11} \\
&= p_{11} P(X_1 = 1) + p_{21} P(X_1 = 2) \tag{12} \\
&= 0.2 \cdot 0.2 + 0.6 \cdot 0.8 = 0.04 + 0.48 = 0.52. \tag{13}
\end{align}
In general, the distribution of $X_{t+1}$ can be computed from that of $X_t$ via simple linear algebra. Note that for $i = 1, 2$,
\begin{align}
P(X_{t+1} = i) &= P(X_{t+1} = i \mid X_t = 1)P(X_t = 1) + P(X_{t+1} = i \mid X_t = 2)P(X_t = 2) \tag{14} \\
&= p_{1i} P(X_t = 1) + p_{2i} P(X_t = 2). \tag{15}
\end{align}
This can be written as
\[
[P(X_{t+1} = 1),\; P(X_{t+1} = 2)] = [P(X_t = 1),\; P(X_t = 2)] \begin{bmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{bmatrix}. \tag{16}
\]
That is, if we represent the distribution of $X_t$ as a row vector $r_t$, then the distribution of $X_{t+1}$ is obtained by multiplying by the transition matrix $P$ on the right: $r_{t+1} = r_t P$.

Example 1.2 (Gambler's ruin). Suppose a gambler has a fortune of $k$ dollars initially and starts gambling. At each time he wins or loses 1 dollar independently with probability $p$ and $1 - p$, respectively. The game ends when his fortune reaches either 0 or $N$ dollars. What is the probability that he wins $N$ dollars and goes home happy?

We use Markov chains to model his fortune after betting $t$ times. Namely, let $\Omega = \{0, 1, 2, \dots, N\}$ be the state space. Let $(X_t)_{t \ge 0}$ be a sequence of RVs where $X_t$ is the gambler's fortune after betting $t$ times. We first draw the state space diagram for $N = 4$ below:

[FIGURE 2. State space diagram of a 5-state gambler's chain]

Next, we can write down its transition probabilities as
\[
\begin{cases}
P(X_{t+1} = k+1 \mid X_t = k) = p & 1 \le k < N \\
P(X_{t+1} = k-1 \mid X_t = k) = 1 - p & 1 \le k < N \\
P(X_{t+1} = 0 \mid X_t = 0) = 1 \\
P(X_{t+1} = N \mid X_t = N) = 1.
\end{cases} \tag{17}
\]
For example, the transition matrix $P$ for $N = 5$ is given by
\[
P = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 \\
1-p & 0 & p & 0 & 0 & 0 \\
0 & 1-p & 0 & p & 0 & 0 \\
0 & 0 & 1-p & 0 & p & 0 \\
0 & 0 & 0 & 1-p & 0 & p \\
0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}. \tag{18}
\]
We call the resulting Markov chain $(X_t)_{t \ge 0}$ the gambler's chain.
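The row-vector recursion (16) together with a transition matrix like (18) gives a direct way to compute the distribution of $X_t$ numerically. Below is a minimal Python sketch (our own illustration, not from the notes; the helper name gamblers_matrix is a hypothetical choice) that builds the gambler's chain for general $N$ and iterates $r_{t+1} = r_t P$:

import numpy as np

def gamblers_matrix(N, p):
    """Transition matrix of the gambler's chain on {0, 1, ..., N}, as in (18)."""
    P = np.zeros((N + 1, N + 1))
    P[0, 0] = 1.0              # fortune 0 is absorbing (ruin)
    P[N, N] = 1.0              # fortune N is absorbing (win)
    for k in range(1, N):
        P[k, k + 1] = p        # win one dollar
        P[k, k - 1] = 1 - p    # lose one dollar
    return P

N, p, k = 5, 0.5, 2
P = gamblers_matrix(N, p)
r = np.zeros(N + 1)
r[k] = 1.0                     # X_0 = k with probability 1
for _ in range(1000):
    r = r @ P                  # the recursion r_{t+1} = r_t P from (16)
print(r.round(4))              # nearly all mass sits on the absorbing states 0 and N

For a fair game ($p = 1/2$) the terminal mass at state $N$ approaches $k/N$, which previews the answer to the question posed in Example 1.2.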
Example 1.3 (Ehrenfest chain). This chain originated in the physics literature as a model for two cubical volumes of air connected by a thin tunnel. Suppose there are $N$ indistinguishable balls in total, split between two "urns" A and B.
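In the standard Ehrenfest model (the usual convention, stated here as an assumption since the excerpt above does not reach it), at each step one of the $N$ balls is chosen uniformly at random and moved to the other urn; if $X_t$ denotes the number of balls in urn A, then $P(X_{t+1} = k-1 \mid X_t = k) = k/N$ and $P(X_{t+1} = k+1 \mid X_t = k) = (N-k)/N$. A minimal Python sketch under that convention:

import numpy as np

def ehrenfest_matrix(N):
    """Transition matrix of the Ehrenfest chain on {0, 1, ..., N}.

    X_t = number of balls in urn A; a uniformly random ball switches
    urns at each step (the standard convention, assumed here).
    """
    P = np.zeros((N + 1, N + 1))
    for k in range(N + 1):
        if k > 0:
            P[k, k - 1] = k / N        # the chosen ball was in urn A
        if k < N:
            P[k, k + 1] = (N - k) / N  # the chosen ball was in urn B
    return P

print(ehrenfest_matrix(3))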
