Math 345 - Stochastic processes - Spring 2020
1 Bernoulli processes
1.1 Random processes

Definition 1.1. A random, or stochastic, process is an infinite collection of rv’s defined on a common probability model. If the process contains countably many rv’s, then they can be indexed by the positive integers, X1, X2, . . . , and the process is called a discrete-time random process. If there are continuum-many rv’s, then they can be indexed by a nonnegative real number t, {Xt; t ≥ 0}, and the process is called a continuous-time random process.
A discrete-time random process assigns a sequence of numbers to every outcome ω ∈ Ω,
ω ↦ (X1(ω), X2(ω), . . . , Xn(ω), . . . ),
while a continuous-time random process assigns a function defined on the half-line [0, +∞),
ω ↦ Xt(ω) : [0, ∞) → R.
Of course, the sequence can be thought of as a function defined on the set of natural numbers N. The value of the random process at a particular outcome ω is called a sample path of the process. When studying a random process, one may choose to abstract to a probability model in which the outcomes are the sample paths, and the events are sets of sample paths. Some examples of phenomena that can be modeled by random processes are repeated experiments, arrivals or departures (of customers, orders, signals, packets, etc.), and random walks (on a line, in the plane, or in 3D space).
1.2 Bernoulli processes

One can make a simple nontrivial random process by considering a sequence of IID binary rv’s.
Definition 1.2. A Bernoulli process is a sequence Z1, Z2, . . . of IID binary rv’s. The independence here is understood in the sense that for any n > 0, the rv’s {Z1, Z2, . . . , Zn} are independent. Let p = Pr({Zn = 1}) and q = 1 − p = Pr({Zn = 0}). One can think of the Bernoulli process as a model describing the arrival of customers at discrete times n = 1, 2, . . . . Then at a specific time n = j a customer will arrive with probability p (Zj = 1), and no customer will arrive with probability q (Zj = 0). Here we are assuming that at most one arrival occurs at each discrete time instance. Instead of tracking the arrivals of the customers, one can track the interarrival times of the process. The first interarrival time X1 will be the time it takes the first customer to arrive. So

X1 = 1 if Z1 = 1; prob = p
X1 = 2 if Z1 = 0, Z2 = 1; prob = p(1 − p)
X1 = 3 if Z1 = 0, Z2 = 0, Z3 = 1; prob = p(1 − p)^2
. . .
X1 = m if Z1 = 0, . . . , Z_{m−1} = 0, Zm = 1; prob = p(1 − p)^{m−1}
. . .
As we see, X1 has the geometric PMF

P_{X1}(m) = p(1 − p)^{m−1}, m ≥ 1.

The second interarrival time X2 is the time between the first and second arrivals, and similarly we define Xj to be the interarrival time between the (j − 1)st and jth arrivals. As the arrival rv’s are IID, clearly X2 will have the same probability distribution as X1, since the entire process can be chopped off at the first arrival time, and the second interarrival time can be computed from there similarly to the above. Using induction to generalize this argument, we see that the interarrival times X1, X2, . . . are IID geometric rv’s, and the Bernoulli process can be characterized by this sequence instead of the sequence of binary IID arrival rv’s {Zj}_{j=1}^∞. An example of a sample path of the Bernoulli process in terms of the arrival rv’s and the interarrival times is given below
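The geometric law of X1 can be checked empirically. Below is a minimal Python sketch (not part of the notes; the function names are ours): it draws Bernoulli(p) variables until the first 1, and compares the empirical frequencies of the first arrival time with the formula p(1 − p)^{m−1}.

```python
import random

def first_arrival_time(p, rng):
    """Draw Z1, Z2, ... ~ Bernoulli(p) until the first 1; return its index m."""
    m = 1
    while rng.random() >= p:  # Zm = 0 occurs with probability 1 - p
        m += 1
    return m

def geometric_pmf(m, p):
    """P_{X1}(m) = p (1 - p)^(m-1), for m >= 1."""
    return p * (1 - p) ** (m - 1)

rng = random.Random(0)
p = 0.3
samples = [first_arrival_time(p, rng) for _ in range(100_000)]
for m in range(1, 5):
    empirical = samples.count(m) / len(samples)
    print(m, round(empirical, 3), round(geometric_pmf(m, p), 3))
```

For large sample sizes the two columns agree closely, as the law of large numbers suggests.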
{Zj} : 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1,... (1) {Xj} : 2, 1, 3, 2, 4, 1, 1,...
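The passage from the arrival sequence {Zj} to the interarrival times {Xj} in (1) is mechanical, and can be sketched in Python (a hypothetical helper, not from the notes):

```python
def interarrival_times(zs):
    """Convert a sample path of arrival rv's {Zj} into interarrival times {Xj}."""
    times = []
    gap = 0
    for z in zs:
        gap += 1
        if z == 1:          # an arrival: record the time since the previous one
            times.append(gap)
            gap = 0
    return times

# The sample path from (1):
zs = [0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1]
print(interarrival_times(zs))  # [2, 1, 3, 2, 4, 1, 1]
```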
In addition to the arrival rv’s Zj and the interarrival time rv’s Xj, one may also be interested in tracking the aggregate number of arrivals up to time n. These aggregate numbers of arrivals are given by the rv’s
Sn = Z1 + Z2 + ··· + Zn, n ≥ 1.
Sn takes values in {0, 1, . . . , n}, and Sn = k means that among the first n discrete time instances arrivals occurred in exactly k of them. So the probability of Sn = k will be

Pr({Sn = k}) = P_{Sn}(k) = (n choose k) p^k (1 − p)^{n−k}, k = 0, 1, . . . , n.
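This binomial PMF of Sn can be sketched in Python (the function name is ours), checking that the probabilities sum to 1 and that the mean is np:

```python
from math import comb

def pmf_Sn(n, k, p):
    """Pr(Sn = k) = (n choose k) p^k (1 - p)^(n - k), for k = 0, 1, ..., n."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n, p = 10, 0.3
dist = [pmf_Sn(n, k, p) for k in range(n + 1)]
print(round(sum(dist), 10))                             # the PMF sums to 1
print(sum(k * pk for k, pk in enumerate(dist)), n * p)  # mean of Sn equals np
```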