The Master Equation
Johanne Hizanidis
November 2002

Contents

1 Stochastic Processes
  1.1 Why and how do stochastic processes enter into physics?
  1.2 Brownian Motion: a stochastic process
2 Markov processes
  2.1 The conditional probability
  2.2 Markov property
3 The Chapman-Kolmogorov (C-K) equation
  3.1 The C-K equation for stationary and homogeneous processes
4 The Master equation
  4.1 Derivation of the Master equation from the C-K equation
  4.2 Detailed balance
5 The mean-field equation
6 One-step processes: examples
  6.1 The Poisson process
  6.2 The decay process
  6.3 A chemical reaction
A Appendix

1 Stochastic Processes

A stochastic process is the time evolution of a stochastic variable. So if Y is the stochastic variable, the stochastic process is Y(t). A stochastic variable is defined by specifying the set of possible values, called the 'range' or 'set of states', and the probability distribution over this set. The set can be discrete (e.g. the number of molecules of a component in a reacting mixture), continuous (e.g. the velocity of a Brownian particle) or multidimensional. In the latter case the stochastic variable is a vector (e.g. the three velocity components of a Brownian particle).

The figure below helps to give a more intuitive idea of a (discrete) stochastic process. At successive times the most probable values of Y have been drawn as heavy dots. We may select a most probable trajectory from such a picture. Nothing excludes the existence of two or more trajectories of equal probability.

1.1 Why and how do stochastic processes enter into physics?

Why: The most classical case where stochastic processes enter is Statistical Mechanics, which studies systems of large numbers of particles. Precise calculations for all the particles cannot be made; instead, average values are measured through probability considerations.

How: The central concept of statistical mechanics is to replace the system by an ensemble, i.e. a collection of microstates of the system. This ensemble serves to visualize the probability distribution over the set of microstates.

1.2 Brownian Motion: a stochastic process

A classical example of a stochastic process is Brownian motion, i.e. the motion of a heavy colloidal particle immersed in a fluid made up of light particles. The stochastic variable Y in this case may be the position or the velocity of the Brownian particle. If Y were deterministic, we could find an expression for its time evolution giving the value of Y at each t. But Y is a stochastic variable: at each t it does not have a definite value, only a probability distribution over its possible values.

2 Markov processes

In order to understand the Markov property, the conditional probability must first be defined.

2.1 The conditional probability

The conditional probability P_{1|1}(y_2, t_2 | y_1, t_1) is defined through the following relation:

    P_2(y_1, t_1; y_2, t_2) = P_{1|1}(y_2, t_2 | y_1, t_1) P_1(y_1, t_1)    (1)

which means that the joint probability of finding y_1 at t_1 and y_2 at t_2 equals the probability of finding y_1 at t_1 times the probability of finding y_2 at t_2, given y_1 at t_1. The conditional probability must satisfy the following properties:

1. P_{1|1} \geq 0
2. \int P_{1|1}(y_2, t_2 | y_1, t_1) \, dy_2 = 1
3. P_1(y_2, t_2) = \int P_{1|1}(y_2, t_2 | y_1, t_1) P_1(y_1, t_1) \, dy_1

Property 3 follows from equation (1) when integrated over y_1: integrating the left side of (1) over y_1 gives \int P_2(y_1, t_1; y_2, t_2) \, dy_1 = P_1(y_2, t_2), i.e. P_1(y_2, t_2) is the marginal probability distribution of P_2 with respect to y_2.
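To make the definitions concrete, here is a minimal numerical sketch (not part of the original notes): it samples a two-state discrete-time Markov chain at two times, estimates the conditional probability P_{1|1} from the sampled pairs, and checks property 3. The transition matrix T and the initial distribution p0 are illustrative assumptions.

    # Minimal sketch (illustrative values): estimate P_{1|1} and check property 3
    # for a two-state process observed at two times t_1 and t_2.
    import numpy as np

    rng = np.random.default_rng(0)

    T = np.array([[0.9, 0.2],          # assumed T[b, a] = P_{1|1}(y_2 = b | y_1 = a);
                  [0.1, 0.8]])         # each column sums to one
    p0 = np.array([0.5, 0.5])          # assumed P_1(y_1, t_1)

    n = 50_000
    y1 = rng.choice(2, size=n, p=p0)                       # draw y_1 at t_1
    y2 = np.array([rng.choice(2, p=T[:, a]) for a in y1])  # draw y_2 at t_2 given y_1

    # Estimated conditional probability P_{1|1}(y_2 = b | y_1 = a)
    P_cond = np.array([[np.mean(y2[y1 == a] == b) for a in range(2)]
                       for b in range(2)])

    # Property 3: P_1(y_2, t_2) = sum_{y_1} P_{1|1}(y_2 | y_1) P_1(y_1, t_1)
    p1_t1_est = np.bincount(y1, minlength=2) / n
    p1_t2_direct = np.bincount(y2, minlength=2) / n
    p1_t2_from_property3 = P_cond @ p1_t1_est
    print(p1_t2_direct, p1_t2_from_property3)   # the two estimates agree closely

Here the sum over y_1 replaces the integral in property 3 because the range of this particular process is discrete.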
2.2 Markov property

A Markov process is defined by the following relation, which is called the Markov property:

    P_{1|n-1}(y_n, t_n | y_{n-1}, t_{n-1}; \ldots; y_1, t_1) = P_{1|1}(y_n, t_n | y_{n-1}, t_{n-1}),    t_1 < t_2 < \ldots < t_n    (2)

The Markov property expresses that, for a Markov process, the probability of a transition from a value y_{n-1} at time t_{n-1} to a value y_n at time t_n depends only on the value of y at the time t_{n-1}, and not on the previous history of the system. P_{1|1} is called the transition probability. For a Markov process the joint probabilities for n \geq 3 are all expressed in terms of P_1 and P_{1|1}. For n = 3:

    P_3(y_1, t_1; y_2, t_2; y_3, t_3) = P_2(y_1, t_1; y_2, t_2) P_{1|2}(y_3, t_3 | y_1, t_1; y_2, t_2)
                                      = P_1(y_1, t_1) P_{1|1}(y_2, t_2 | y_1, t_1) P_{1|1}(y_3, t_3 | y_2, t_2)    (3)

3 The Chapman-Kolmogorov (C-K) equation

Taking relation (3), integrating it over y_2 and dividing both sides by P_1 gives the Chapman-Kolmogorov equation:

    P_{1|1}(y_3, t_3 | y_1, t_1) = \int P_{1|1}(y_3, t_3 | y_2, t_2) P_{1|1}(y_2, t_2 | y_1, t_1) \, dy_2    (4)*

This equation states that a process starting at t_1 with value y_1 reaches y_3 at t_3 via any one of the possible values y_2 at the intermediate time t_2. (Equations marked with an asterisk are proved analytically in the Appendix.)

3.1 The C-K equation for stationary and homogeneous processes

First let us define these two types of Markov processes:

• Stationary: A process Y is stationary if it is not affected by a shift in time, i.e. Y(t) and Y(t + τ) have the same probability distribution.

• Homogeneous: A homogeneous process is a nonstationary Markov process, defined by the probability P^*(y_1) \equiv P_{1|1}(y_1 | y_0). For such processes the transition probability depends only on the time interval τ = t_2 − t_1.

For both stationary and homogeneous processes a special notation is used for the transition probability and the C-K equation:

    P_{1|1}(y_2, t_2 | y_1, t_1) = T_\tau(y_2 | y_1)    (5)

    T_{\tau+\tau'}(y_3 | y_1) = \int T_{\tau'}(y_3 | y_2) T_\tau(y_2 | y_1) \, dy_2    (6)

4 The Master equation

4.1 Derivation of the Master equation from the C-K equation

We take the transition probability T_{\tau'} and expand it in a Taylor series around zero, considering small \tau':

    T_{\tau'}(y_3 | y_2) = \delta(y_2 - y_3) + \tau' W(y_3 | y_2) + O(\tau'^2)    (7)

The delta function expresses that the probability to stay in the same state after time zero equals one, whereas the probability to change state after time zero equals zero. W(y_3 | y_2) is the time derivative of the transition probability at \tau' = 0, and is therefore called the transition probability per unit time. This expression must satisfy the normalization property, so the integral over y_3 must equal one. For that to happen, the above form must be corrected as follows:

    T_{\tau'}(y_3 | y_2) = (1 - \alpha_0 \tau') \delta(y_2 - y_3) + \tau' W(y_3 | y_2) + O(\tau'^2)    (8)

where the delta function has been corrected by the coefficient (1 - \alpha_0 \tau'), which corresponds to the probability that no transition has taken place at all. Therefore:

    \alpha_0(y_2) = \int W(y_3 | y_2) \, dy_3    (9)

Putting (8) into (6), dividing by \tau' and taking the limit \tau' \to 0 gives the differential form of the Chapman-Kolmogorov equation, which is called the Master equation:

    \frac{\partial}{\partial \tau} T_\tau(y_3 | y_1) = \int [W(y_3 | y_2) T_\tau(y_2 | y_1) - W(y_2 | y_3) T_\tau(y_3 | y_1)] \, dy_2    (10)*
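For a discrete set of states the transition probability T_\tau becomes a matrix, and relations (6), (8) and (9) can be checked numerically. The sketch below is illustrative and not from the notes; the rate matrix W is an assumed example. It builds T_\tau as the matrix exponential of the generator W − diag(α_0) and verifies the semigroup property (6) and the short-time expansion (8).

    # Minimal numerical sketch (illustrative rates): for a discrete set of states,
    # T_tau is a matrix and eqs (6) and (8) can be checked directly.
    import numpy as np
    from scipy.linalg import expm

    # Assumed transition rates W[n, n'] (probability per unit time for n' -> n)
    W = np.array([[0.0, 1.0, 0.5],
                  [2.0, 0.0, 1.0],
                  [0.5, 3.0, 0.0]])
    a0 = W.sum(axis=0)                     # eq (9): total escape rate from each state
    L = W - np.diag(a0)                    # generator: gain minus loss terms

    T = lambda tau: expm(tau * L)          # finite-time transition matrix T_tau

    # Eq (6): T_{tau+tau'} = T_{tau'} T_tau (Chapman-Kolmogorov, semigroup property)
    tau, tau_p = 0.3, 0.7
    print(np.allclose(T(tau + tau_p), T(tau_p) @ T(tau)))      # True

    # Eq (8): for small tau', T_{tau'} ≈ (1 - a0*tau') on the diagonal + tau'*W
    tau_small = 1e-4
    approx = np.diag(1.0 - a0 * tau_small) + tau_small * W
    print(np.max(np.abs(T(tau_small) - approx)))               # O(tau'^2), tiny

The matrix exponential is used here because, for discrete states, it is the solution of the Master equation (10) with the initial condition T_0 = δ.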
It is useful to cast the equation in a more intuitive form. Noting that all transition probabilities are for a given value y_1 at t_1, we may write, suppressing the redundant indices:

    \frac{\partial P(y, t)}{\partial t} = \int [W(y | y') P(y', t) - W(y' | y) P(y, t)] \, dy'    (11)

If the range of Y is a discrete set of states with labels n, the equation reduces to:

    \frac{dp_n(t)}{dt} = \sum_{n'} [W_{nn'} p_{n'}(t) - W_{n'n} p_n(t)]    (12)

This form of the Master equation makes the physical meaning clearer: the Master equation is a gain-loss equation for the probability of each state n. The first term is the gain due to transitions from other states n', and the second term is the loss due to transitions into other states n'.

4.2 Detailed balance

In the steady state the left side of the Master equation equals zero. Therefore the steady-state condition takes the form:

    \sum_{n'} W_{nn'} p_{n'} = \left( \sum_{n'} W_{n'n} \right) p_n    (13)

This relation expresses the obvious fact that in the steady state the sum of all transitions per unit time into any state n must be balanced by the sum of all transitions from n into other states n'. Detailed balance is the stronger assertion that for each pair n, n' separately the transitions must balance:

    W_{nn'} p_{n'} = W_{n'n} p_n    (14)

The following figures illustrate the difference between the steady-state condition and detailed balance. The lengths of the arrows are proportional to the transition rates. In the first figure the anticlockwise transitions proceed at twice the rate of the clockwise transitions: S (steady state) holds, but D (detailed balance) does not. In the second figure D holds, and therefore S holds as well. Note that detailed balance is a necessary but not sufficient condition for thermodynamic equilibrium.
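The difference between (13) and (14) can also be seen in a small numerical sketch (illustrative rates, not from the notes): a three-state cycle in which the anticlockwise rate is twice the clockwise rate, as in the first figure, satisfies the steady-state condition but not detailed balance, while equal rates in both directions satisfy both.

    # Minimal sketch (illustrative rates): steady state (13) versus detailed balance (14)
    # for a three-state cycle, cf. the figures above.
    import numpy as np

    def stationary(W):
        """Stationary distribution of dp/dt = L p with L = W - diag(column sums)."""
        L = W - np.diag(W.sum(axis=0))
        vals, vecs = np.linalg.eig(L)
        p = np.real(vecs[:, np.argmin(np.abs(vals))])   # eigenvector for eigenvalue 0
        return p / p.sum()

    # Cycle 0 -> 1 -> 2 -> 0: anticlockwise rate twice the clockwise rate
    k_acw, k_cw = 2.0, 1.0
    W = np.zeros((3, 3))
    for n in range(3):
        W[(n + 1) % 3, n] = k_acw            # n -> n+1 (anticlockwise)
        W[(n - 1) % 3, n] = k_cw             # n -> n-1 (clockwise)

    p = stationary(W)                        # uniform distribution [1/3, 1/3, 1/3]
    gain = W @ p                             # sum_{n'} W_{nn'} p_{n'}
    loss = W.sum(axis=0) * p                 # (sum_{n'} W_{n'n}) p_n
    print(np.allclose(gain, loss))           # True: S (steady state) holds
    print(np.allclose(W * p, (W * p).T))     # False: D (detailed balance) fails

    # With equal rates in both directions, detailed balance holds as well
    W_sym = np.ones((3, 3)) - np.eye(3)
    p_sym = stationary(W_sym)
    print(np.allclose(W_sym * p_sym, (W_sym * p_sym).T))   # True: D (and hence S) holds

The matrix W * p has entries W_{nn'} p_{n'}, so comparing it with its transpose tests condition (14) for every pair of states at once.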