
Lecture 5: Generating Functions
Course: M362K Intro to Stochastic Processes — Term: Fall 2014 — Instructor: Gordan Zitkovic
Last Updated: March 23, 2016

The path-counting method used in the previous lecture only works for computations related to the first $n$ steps of the random walk, where $n$ is given in advance. We will see later that most of the interesting questions do not fall into this category. For example, the distribution of the time it takes for the random walk to hit a level $l \neq 0$ is like that: there is no way to give an a-priori bound on the number of steps it will take to reach $l$. To deal with a wider class of properties of random walks (and other processes), we need to develop some new mathematical tools.

DEFINITION AND FIRST PROPERTIES

As we know, the distribution of an $\mathbb{N}_0$-valued random variable $X$ is completely determined by the sequence $\{p_k\}_{k \in \mathbb{N}_0}$ of numbers in $[0,1]$ given by
$$ p_k = \mathbb{P}[X = k], \quad k \in \mathbb{N}_0. $$
As a sequence of real numbers, $\{p_k\}_{k \in \mathbb{N}_0}$ can be used to construct a power series:
$$ P_X(s) = \sum_{k=0}^{\infty} p_k s^k. \tag{5.1} $$
It follows from the fact that $\sum_k |p_k| \le 1$ that the radius of convergence of this series is at least 1 (remember that the radius of convergence of a power series $\sum_{k=0}^{\infty} a_k x^k$ is the largest number $R \in [0,\infty]$ such that $\sum_{k=0}^{\infty} a_k x^k$ converges absolutely whenever $|x| < R$). Therefore, $P_X$ is well defined for $s \in [-1,1]$ and, perhaps, elsewhere, too.

Definition 5.1. The function $P_X$ given by $P_X(s) = \sum_{k=0}^{\infty} p_k s^k$ is called the generating function of the random variable $X$ (more precisely, of its pmf $\{p_k\}_{k \in \mathbb{N}_0}$).

Before we proceed, let us derive expressions for the generating functions of some of the popular $\mathbb{N}_0$-valued distributions.

Example 5.2.

1. Bernoulli ($b(p)$). Here $p_0 = q$, $p_1 = p$, and $p_k = 0$ for $k \ge 2$. Therefore, $P_X(s) = ps + q$.

2. Binomial ($b(n,p)$). Since $p_k = \binom{n}{k} p^k q^{n-k}$, $k = 0, \dots, n$, we have
$$ P_X(s) = \sum_{k=0}^{n} \binom{n}{k} p^k q^{n-k} s^k = (ps + q)^n, $$
by the binomial theorem.

3. Geometric ($g(p)$). For $k \in \mathbb{N}_0$, $p_k = q^k p$, so that
$$ P_X(s) = \sum_{k=0}^{\infty} q^k s^k p = p \sum_{k=0}^{\infty} (qs)^k = \frac{p}{1 - qs}. $$

4. Poisson ($p(\lambda)$). Given that $p_k = e^{-\lambda} \frac{\lambda^k}{k!}$, $k \in \mathbb{N}_0$, we have
$$ P_X(s) = \sum_{k=0}^{\infty} e^{-\lambda} \frac{\lambda^k}{k!} s^k = e^{-\lambda} \sum_{k=0}^{\infty} \frac{(s\lambda)^k}{k!} = e^{-\lambda} e^{s\lambda} = e^{\lambda(s-1)}. $$

Some of the most useful analytic properties of $P_X$ are listed in the following proposition.

Proposition 5.3. Let $X$ be an $\mathbb{N}_0$-valued random variable, $\{p_k\}_{k \in \mathbb{N}_0}$ its pmf, and $P_X$ its generating function. Then

1. $P_X(s) = \mathbb{E}[s^X]$, $s \in [-1,1]$,

2. $P_X(s)$ is convex and non-decreasing, with $0 \le P_X(s) \le 1$ for $s \in [0,1]$,

3. $P_X(s)$ is infinitely differentiable on $(-1,1)$, with
$$ \frac{d^n}{ds^n} P_X(s) = \sum_{k=n}^{\infty} k(k-1)\cdots(k-n+1)\, s^{k-n} p_k, \quad n \in \mathbb{N}. \tag{5.2} $$
In particular,
$$ p_n = \frac{1}{n!} \frac{d^n}{ds^n} P_X(s) \Big|_{s=0}, $$
and so $s \mapsto P_X(s)$ uniquely determines the sequence $\{p_k\}_{k \in \mathbb{N}_0}$.

Proof. Statement 1. follows directly from the formula
$$ \mathbb{E}[g(X)] = \sum_{k=0}^{\infty} g(k) p_k, $$
applied to $g(x) = s^x$. As far as 3. is concerned, we only note that the expression (5.2) is exactly what you would get if you differentiated the expression (5.1) term by term; the rigorous proof of the fact that this is allowed is beyond the scope of these notes. With 3. at our disposal, 2. follows from the fact that the first two derivatives of the function $P_X$ are non-negative on $[0,1)$ and that $P_X(1) = 1$.

Remark 5.4.

1. If you know about moment-generating functions, you will notice that $P_X(s) = M_X(\log s)$ for $s \in (0,1)$, where $M_X(\lambda) = \mathbb{E}[\exp(\lambda X)]$ is the moment-generating function of $X$.

2. Generating functions can be used with sequences $\{a_k\}_{k \in \mathbb{N}_0}$ which are not necessarily pmfs of random variables. The method is useful for any sequence $\{a_k\}_{k \in \mathbb{N}_0}$ such that the power series $\sum_{k=0}^{\infty} a_k s^k$ has a positive (non-zero) radius of convergence.

3. The name generating function comes from the last part of property 3.
The knowledge of $P_X$ implies the knowledge of the whole sequence $\{p_k\}_{k \in \mathbb{N}_0}$. Put differently, $P_X$ generates the whole distribution of $X$.

Remark 5.5. Note that the true radius of convergence varies from distribution to distribution. In Example 5.2 it is infinite in 1., 2. and 4., and equal to $1/q > 1$ in 3. For the distribution with pmf given by $p_k = \frac{C}{(k+1)^2}$, where $C = \left( \sum_{k=0}^{\infty} \frac{1}{(k+1)^2} \right)^{-1}$, the radius of convergence is exactly equal to 1. Can you see why?

CONVOLUTION AND MOMENTS

The true power of generating functions comes from the fact that they behave very well under the usual operations in probability.

Definition 5.6. Let $\{p_k\}_{k \in \mathbb{N}_0}$ and $\{q_k\}_{k \in \mathbb{N}_0}$ be two probability-mass functions. The convolution $p * q$ of $\{p_k\}_{k \in \mathbb{N}_0}$ and $\{q_k\}_{k \in \mathbb{N}_0}$ is the sequence $\{r_k\}_{k \in \mathbb{N}_0}$, where
$$ r_n = \sum_{k=0}^{n} p_k q_{n-k}, \quad n \in \mathbb{N}_0. $$

This abstractly-defined operation will become much clearer once we prove the following proposition.

Proposition 5.7. Let $X$, $Y$ be two independent $\mathbb{N}_0$-valued random variables with pmfs $\{p_k\}_{k \in \mathbb{N}_0}$ and $\{q_k\}_{k \in \mathbb{N}_0}$. Then the sum $Z = X + Y$ is also $\mathbb{N}_0$-valued, and its pmf is the convolution of $\{p_k\}_{k \in \mathbb{N}_0}$ and $\{q_k\}_{k \in \mathbb{N}_0}$ in the sense of Definition 5.6.

Proof. Clearly, $Z$ is $\mathbb{N}_0$-valued. To obtain an expression for its pmf, we use the law of total probability:
$$ \mathbb{P}[Z = n] = \sum_{k=0}^{n} \mathbb{P}[X = k]\, \mathbb{P}[Z = n \mid X = k]. $$
On the other hand,
$$ \mathbb{P}[Z = n \mid X = k] = \mathbb{P}[X + Y = n \mid X = k] = \mathbb{P}[Y = n - k \mid X = k] = \mathbb{P}[Y = n - k], $$
where the last equality follows from the independence of $X$ and $Y$. Therefore,
$$ \mathbb{P}[Z = n] = \sum_{k=0}^{n} \mathbb{P}[X = k]\, \mathbb{P}[Y = n - k] = \sum_{k=0}^{n} p_k q_{n-k}. $$

Corollary 5.8. Let $\{p_k\}_{k \in \mathbb{N}_0}$ and $\{q_k\}_{k \in \mathbb{N}_0}$ be any two pmfs.

1. Convolution is commutative, i.e., $p * q = q * p$.

2. The convolution $r = p * q$ of two pmfs is a pmf, i.e., $r_k \ge 0$ for all $k \in \mathbb{N}_0$ and $\sum_{k=0}^{\infty} r_k = 1$.

Corollary 5.9. Let $\{p_k\}_{k \in \mathbb{N}_0}$ and $\{q_k\}_{k \in \mathbb{N}_0}$ be any two pmfs, and let
$$ P(s) = \sum_{k=0}^{\infty} p_k s^k \quad \text{and} \quad Q(s) = \sum_{k=0}^{\infty} q_k s^k $$
be their generating functions.
Then the generating function $R(s) = \sum_{k=0}^{\infty} r_k s^k$ of the convolution $r = p * q$ is given by
$$ R(s) = P(s) Q(s). $$
Equivalently, the generating function $P_{X+Y}$ of the sum of two independent $\mathbb{N}_0$-valued random variables is equal to the product
$$ P_{X+Y}(s) = P_X(s) P_Y(s) $$
of the generating functions $P_X$ and $P_Y$ of $X$ and $Y$.

Example 5.10.

1. The binomial $b(n,p)$ distribution is a sum of $n$ independent Bernoullis $b(p)$. Therefore, if we apply Corollary 5.9 $n$ times to the generating function $(q + ps)$ of the Bernoulli $b(p)$ distribution, we immediately get that the generating function of the binomial is $(q + ps) \cdots (q + ps) = (q + ps)^n$.

2. More generally, we can show that the sum of $m$ independent random variables with the $b(n,p)$ distribution has a binomial distribution $b(mn, p)$. If you try to sum binomials with different values of the parameter $p$, you will not get a binomial.

3. What is even more interesting, the following statement can be shown: suppose that the sum $Z$ of two independent $\mathbb{N}_0$-valued random variables $X$ and $Y$ is binomially distributed with parameters $n$ and $p$. Then both $X$ and $Y$ are binomial, with parameters $n_X, p$ and $n_Y, p$, where $n_X + n_Y = n$. In other words, the only way to get a binomial as a sum of independent random variables is the trivial one.

Another useful thing about generating functions is that they make the computation of moments easier.

Proposition 5.11. Let $\{p_k\}_{k \in \mathbb{N}_0}$ be the pmf of an $\mathbb{N}_0$-valued random variable $X$, and let $P(s)$ be its generating function. For $n \in \mathbb{N}$, the following two statements are equivalent:

1. $\mathbb{E}[X^n] < \infty$,

2. $\frac{d^n}{ds^n} P(s) \big|_{s=1}$ exists (in the sense that the left limit $\lim_{s \nearrow 1} \frac{d^n}{ds^n} P(s)$ exists).

In either case, we have
$$ \mathbb{E}[X(X-1)(X-2)\cdots(X-n+1)] = \frac{d^n}{ds^n} P(s) \Big|_{s=1}. $$
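The closed forms derived in Example 5.2 lend themselves to a quick numerical check: evaluate $P_X(s) = \sum_k p_k s^k$ directly from the pmf and compare with the formula. A minimal Python sketch (the helper `pgf_from_pmf` is ours, not part of the notes; the geometric and Poisson series are truncated at a point where the tail is negligible):

```python
import math

def pgf_from_pmf(pmf, s):
    """Evaluate P_X(s) = sum_k p_k s^k for a pmf given as a list of weights."""
    return sum(p_k * s**k for k, p_k in enumerate(pmf))

s = 0.6

# Binomial b(n, p): p_k = C(n, k) p^k q^(n-k); closed form (ps + q)^n.
n, p = 7, 0.3
q = 1 - p
binom_pmf = [math.comb(n, k) * p**k * q**(n - k) for k in range(n + 1)]
assert abs(pgf_from_pmf(binom_pmf, s) - (p * s + q)**n) < 1e-12

# Geometric g(p): p_k = q^k p; closed form p / (1 - qs).  Truncated series.
geom_pmf = [q**k * p for k in range(200)]
assert abs(pgf_from_pmf(geom_pmf, s) - p / (1 - q * s)) < 1e-12

# Poisson p(lam): p_k = e^(-lam) lam^k / k!; closed form e^(lam (s - 1)).
lam = 2.5
pois_pmf = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(60)]
assert abs(pgf_from_pmf(pois_pmf, s) - math.exp(lam * (s - 1))) < 1e-12
```

All three assertions pass, agreeing with Example 5.2 to within floating-point error.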
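Definition 5.6 and Corollaries 5.8 and 5.9 can be illustrated the same way: convolving the pmf of a Bernoulli $b(p)$ with itself should produce the binomial $b(2,p)$ pmf, and the generating function of the convolution should factor as a product. A sketch under those assumptions (the helper `convolve` is illustrative):

```python
def convolve(p, q):
    """r_n = sum_{k=0}^n p_k q_{n-k} (Definition 5.6), for finitely supported pmfs."""
    r = [0.0] * (len(p) + len(q) - 1)
    for k, p_k in enumerate(p):
        for j, q_j in enumerate(q):
            r[k + j] += p_k * q_j
    return r

bern = [0.6, 0.4]            # Bernoulli b(0.4): p_0 = q, p_1 = p
r = convolve(bern, bern)

# Proposition 5.7: the sum of two independent b(0.4) is binomial b(2, 0.4),
# with pmf (q^2, 2pq, p^2).
assert all(abs(a - b) < 1e-12 for a, b in zip(r, [0.36, 0.48, 0.16]))

# Corollary 5.8: the convolution of two pmfs is a pmf.
assert abs(sum(r) - 1.0) < 1e-12

# Corollary 5.9: R(s) = P(s) Q(s), checked at one value of s.
s = 0.7
P = sum(c * s**k for k, c in enumerate(bern))
R = sum(c * s**k for k, c in enumerate(r))
assert abs(R - P * P) < 1e-12
```

The product identity holds for every $s$ in the common domain of convergence; checking it at a single point is only a spot check, not a proof.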
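Finally, Proposition 5.11 can be verified for the binomial distribution, whose generating function $(ps+q)^n$ is a polynomial, so its derivatives at $s = 1$ can be computed exactly from the coefficients. A sketch (the helpers `poly_eval` and `poly_deriv` are ours, not from the notes):

```python
import math

def poly_eval(coeffs, s):
    """Evaluate sum_k c_k s^k."""
    return sum(c * s**k for k, c in enumerate(coeffs))

def poly_deriv(coeffs):
    """Coefficients of the derivative of sum_k c_k s^k."""
    return [k * c for k, c in enumerate(coeffs)][1:]

n, p = 6, 0.35
q = 1 - p
pmf = [math.comb(n, k) * p**k * q**(n - k) for k in range(n + 1)]

# First factorial moment: E[X] = P'(1) = np.
d1 = poly_deriv(pmf)
assert abs(poly_eval(d1, 1.0) - n * p) < 1e-12

# Second factorial moment: E[X(X-1)] = P''(1) = n(n-1)p^2.
d2 = poly_deriv(d1)
fact2 = poly_eval(d2, 1.0)
assert abs(fact2 - n * (n - 1) * p**2) < 1e-12

# Hence Var(X) = P''(1) + P'(1) - P'(1)^2 = npq.
var = fact2 + n * p - (n * p)**2
assert abs(var - n * p * q) < 1e-12
```

The same recipe (differentiate, evaluate at $s = 1$, then convert factorial moments to ordinary ones) works for any distribution whose generating function is finite in a neighborhood of 1.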