Introduction to Queueing Theory: Review on Poisson Process
Contents (ELL 785 - Computer Communication Networks, Lecture 3: Introduction to Queueing Theory)
• Motivations
• Discrete-time Markov processes
• Review on Poisson process
• Continuous-time Markov processes
• Queueing systems

Circuit switching networks - I
Traffic fluctuates as calls are initiated and terminated; telephone calls come and go.
• People's activity follows patterns: mid-morning and mid-afternoon at the office, evenings at home, summer vacation, etc.
• Outlier days are extra busy (Mother's Day, Christmas, ...); disasters and other events cause surges in traffic.

Circuit switching networks - II
Fluctuation in trunk occupancy: the number of busy trunks varies over time, and when all trunks are busy, new call requests are blocked.
• Providing enough resources so that call requests are always met is too expensive.
• Meeting call requests most of the time is cost-effective.
Switches therefore concentrate traffic from many lines onto fewer shared trunks, so blocking of requests will occur from time to time. The design problem is to minimize the number of trunks subject to a target blocking probability.
[Figure: trunk occupancy over time - each of trunks 1-7 alternates between active and idle as calls arrive and terminate.]

Packet switching networks - I
Statistical multiplexing:
• Dedicated lines involve no waiting for other users, but lines are used inefficiently when user traffic is bursty.
• A shared line concentrates packets onto one output line; packets are buffered (delayed) when the line is not immediately available.
[Figure: (a) dedicated lines carry the packet streams A1 A2, B1 B2, C1 C2 separately; (b) a shared line with a buffer interleaves them as A1 C1 B1 A2 B2 C2 on a single output line.]

Packet switching networks - II
Fluctuations in the number of packets in the system: with a shared line, the number of packets in the system (queued plus in transmission) rises and falls as bursts arrive and are served.

Packet switching networks - III
Delay = waiting time + service time.
• Packet arrival process: each packet arrives at the queue, waits, begins transmission, and completes transmission.
[Figure: packets P1-P5 arriving over time; each packet's delay is its waiting time in the queue plus its transmission time.]
• Packet service time: with a transmission rate of R bps and a packet L bits long, the service time is L/R (the transmission time of the packet). Packet lengths can be constant or random.
• Similar queueing situations: the number of people in a Café Coffee Day, the number of rickshaws at the IIT main gate.

Random (or stochastic) processes
General notion: suppose a random experiment is specified by outcomes ζ from some sample space S, with ζ ∈ S.
A random (or stochastic) process is a mapping from ζ to a function of time t, written X(t, ζ):
– For fixed t (e.g., t1, t2, ...), X(ti, ζ) is a random variable.
– For fixed ζ, X(t, ζi) is a sample path or realization.

Discrete-time Markov process I
A sequence of integer-valued random variables Xn, n = 0, 1, ..., is called a discrete-time Markov process if the following Markov property holds:
Pr[Xn+1 = j | Xn = i, Xn−1 = i_{n−1}, ..., X0 = i0] = Pr[Xn+1 = j | Xn = i]
• State: the value of Xn at time n, taken from the set S.
• State space: the set S = {n | n = 0, 1, ...}.
– An integer-valued Markov process is called a Markov chain (MC).
– Exercise: with an independent Bernoulli sequence Xi (each value taken with probability 1/2), is Yn = 0.5(Xn + Xn−1) a Markov process?
– Exercise: is the vector process Yn = (Xn, Xn−1) a Markov process?

Discrete-time Markov process II
The chain is time-homogeneous if, for any n,
pij = Pr[Xn+1 = j | Xn = i]   (independent of time n),
which is called the one-step (state) transition probability.
State transition probability matrix:
            | p00 p01 p02 ··· |
            | p10 p11 p12 ··· |
P = [pij] = |  ⋮   ⋮   ⋮      |
            | pi0 pi1 pi2 ··· |
            |  ⋮   ⋮   ⋮      |
which is a stochastic matrix: pij ≥ 0 and Σ_{j=0}^{∞} pij = 1 for every row i.

Discrete-time Markov process III
n-step transition probability:
p_ij^(n) = Pr[X_{l+n} = j | X_l = i]   for n ≥ 0, i, j ≥ 0.

Discrete-time Markov process IV: a mouse in a maze
A mouse in a 3×3 maze (cells numbered 1-9) chooses the next cell to visit with probability 1/k, where k is the number of adjacent cells. The mouse does not move any more once it is caught by the cat or once it has the cheese.
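The maze chain's transition matrix can be built directly from the movement rule. Below is a minimal numpy sketch; the row-major cell numbering and the placement of the two absorbing cells (cat in cell 7, cheese in cell 9, consistent with the absorbing rows of the matrix in these notes) are assumptions of this sketch.

```python
import numpy as np

# Adjacency of a 3x3 grid, cells numbered 1..9 row-major.
# Cells 7 (cat) and 9 (cheese) are absorbing, so they have no neighbours here.
ADJ = {
    1: [2, 4],    2: [1, 3, 5],    3: [2, 6],
    4: [1, 5, 7], 5: [2, 4, 6, 8], 6: [3, 5, 9],
    7: [],        8: [5, 7, 9],    9: [],
}

def maze_matrix():
    """Transition matrix: prob. 1/k to each of k adjacent cells; absorbing cells stay put."""
    P = np.zeros((9, 9))
    for cell, neighbours in ADJ.items():
        if not neighbours:                 # absorbing state
            P[cell - 1, cell - 1] = 1.0
        else:
            for n in neighbours:
                P[cell - 1, n - 1] = 1.0 / len(neighbours)
    return P

P = maze_matrix()
assert np.allclose(P.sum(axis=1), 1.0)     # every row is a probability distribution
```

Each row sums to 1, as required of a stochastic matrix; rows 7 and 9 have a single 1 on the diagonal.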
The resulting transition probability matrix (cat in cell 7, cheese in cell 9, both absorbing):

         1    2    3    4    5    6    7    8    9
    1 |  0   1/2   0   1/2   0    0    0    0    0  |
    2 | 1/3   0   1/3   0   1/3   0    0    0    0  |
    3 |  0   1/2   0    0    0   1/2   0    0    0  |
    4 | 1/3   0    0    0   1/3   0   1/3   0    0  |
P = 5 |  0   1/4   0   1/4   0   1/4   0   1/4   0  |
    6 |  0    0   1/3   0   1/3   0    0    0   1/3 |
    7 |  0    0    0    0    0    0    1    0    0  |
    8 |  0    0    0    0   1/3   0   1/3   0   1/3 |
    9 |  0    0    0    0    0    0    0    0    1  |

– Consider a two-step transition probability:
Pr[X2 = j, X1 = k | X0 = i]
  = Pr[X2 = j, X1 = k, X0 = i] / Pr[X0 = i]
  = Pr[X2 = j | X1 = k] Pr[X1 = k | X0 = i] Pr[X0 = i] / Pr[X0 = i]
  = p_ik p_kj
– Summing over k, we have p_ij^(2) = Σ_k p_ik p_kj.

Discrete-time Markov process IV: a weather example
In a place, the weather each day is classified as sunny, cloudy or rainy. The next day's weather depends only on the weather of the present day and not on the weather of the previous days. If the present day is sunny, the next day will be sunny, cloudy or rainy with respective probabilities 0.70, 0.10 and 0.20. The transition probabilities are 0.50, 0.25 and 0.25 when the present day is cloudy, and 0.40, 0.30 and 0.30 when the present day is rainy.

         S     C     R
    S | 0.7   0.1   0.2  |
P = C | 0.5   0.25  0.25 |
    R | 0.4   0.3   0.3  |

Discrete-time Markov process V
The Chapman-Kolmogorov equations:
p_ij^(n+m) = Σ_{k=0}^{∞} p_ik^(n) p_kj^(m)   for n, m ≥ 0, i, j ∈ S.
Proof:
Pr[X_{n+m} = j | X0 = i] = Σ_{k∈S} Pr[X_{n+m} = j | X0 = i, Xn = k] Pr[Xn = k | X0 = i]
(Markov property)        = Σ_{k∈S} Pr[X_{n+m} = j | Xn = k] Pr[Xn = k | X0 = i]
(time-homogeneous)       = Σ_{k∈S} Pr[Xm = j | X0 = k] Pr[Xn = k | X0 = i]
– In terms of the n-step transition probability matrix: P^(n+m) = P^n P^m, so in particular P^(n+1) = P^n P.
– For the weather example (entries truncated to three decimals):

      | 0.601 0.168 0.230 |             | 0.596 0.172 0.231 |
P³ =  | 0.591 0.175 0.233 |   and P¹² = | 0.596 0.172 0.231 | = P¹³
      | 0.585 0.179 0.234 |             | 0.596 0.172 0.231 |
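The Chapman-Kolmogorov identity and the convergence of P^n for the weather chain are easy to check numerically. A minimal sketch, assuming numpy is available:

```python
import numpy as np

# Weather chain from the example: states ordered (Sunny, Cloudy, Rainy).
P = np.array([[0.70, 0.10, 0.20],
              [0.50, 0.25, 0.25],
              [0.40, 0.30, 0.30]])

# Chapman-Kolmogorov in matrix form: P^(n+m) = P^n P^m.
P3 = np.linalg.matrix_power(P, 3)
P12 = np.linalg.matrix_power(P, 12)
assert np.allclose(np.linalg.matrix_power(P, 15), P3 @ P12)

# By n = 12 all rows have (numerically) converged to a common row,
# so one more step changes nothing: P^13 = P^12.
assert np.allclose(np.linalg.matrix_power(P, 13), P12, atol=1e-6)
print(np.round(P12[0], 4))   # each row ≈ (0.596, 0.1722, 0.2318)
```

The identical rows of P¹² are exactly the limiting distribution discussed next: once every row of P^n equals the same vector, the starting state no longer matters.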
Discrete-time Markov process VI
State probabilities at time n:
– π_j^(n) = Pr[Xn = j], with π^(n) = [π_0^(n), ..., π_i^(n), ...] (a row vector).
– π_i^(0): the initial state probability.
Then
Pr[Xn = j] = Σ_{i∈S} Pr[Xn = j | X0 = i] Pr[X0 = i],
that is, π_j^(n) = Σ_{i∈S} p_ij^(n) π_i^(0).
– In matrix notation: π^(n) = π^(0) P^n.

Discrete-time Markov process VII
If P^(n) has identical rows, then so does P^(n+1). Suppose every row of P^(n) equals the same row vector r. Then row j of P P^(n) is
p_j1 r + p_j2 r + ··· = (p_j1 + p_j2 + ···) r = r,
since each row of P sums to 1; hence P^(n+1) = P P^(n) also has every row equal to r.

Limiting distribution: given an initial probability distribution π^(0),
~π = lim_{n→∞} π^(n) = lim_{n→∞} π^(0) P^n = π^(0) lim_{n→∞} P^{n+1} = (π^(0) lim_{n→∞} P^n) P = ~π P.

Discrete-time Markov process VIII
Stationary distribution:
– z_j and z = [z_j] denote the probability of being in state j and its row vector, with
z = z·P  and  z·1 = 1.
– If z is chosen as the initial distribution, i.e., π_j^(0) = z_j for all j, then π_j^(n) = z_j for all n, since z = z·P = z·P² = z·P³ = ···
• A limiting distribution, when it exists, is always a stationary distribution, but the converse is not true. For example,
P = | 0 1 |,  P² = | 1 0 |,  P³ = | 0 1 | = P,
    | 1 0 |        | 0 1 |        | 1 0 |
so P^n alternates forever and no limiting distribution exists.

Discrete-time Markov process IX
Note that
~π = lim_{n→∞} π^(n),  i.e.,  π_j^(∞) = lim_{n→∞} p_ij^(n).
– The system reaches "equilibrium" or "steady state".
– ~π is independent of π^(0).
Global balance equations:
~π = ~π P  ⇒  (for each row)  π_j Σ_i p_ji = Σ_i π_i p_ij
– The LHS represents the total flow from state j into the states other than j.
– The RHS represents the total flow from the other states into state j.

Discrete-time Markov process X
Back to the weather example. Using ~π P = ~π, we have
π0 = 0.7π0 + 0.5π1 + 0.4π2
π1 = 0.1π0 + 0.25π1 + 0.3π2
π2 = 0.2π0 + 0.25π1 + 0.3π2
– Note that one equation is always redundant.
Using 1 = π0 + π1 + π2 in place of one of them, we have
|  0.3  −0.5  −0.4 | |π0|   |0|
| −0.1  0.75  −0.3 | |π1| = |0|
|  1     1     1   | |π2|   |1|
which gives π0 = 0.596, π1 = 0.1722, π2 = 0.2318.

Discrete-time Markov process XI
Classes of states:
• State j is accessible from state i if p_ij^(n) > 0 for some n.
• States i and j communicate if they are accessible from each other.
• Two states belong to the same class if they communicate with each other.
• An MC having a single class is said to be irreducible.
Recurrence property:
• State j is recurrent if Σ_{n=1}^{∞} p_jj^(n) = ∞.
– Positive recurrent if π_j > 0.
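The 3×3 linear system for the weather chain, with one balance equation replaced by the normalization constraint, can be solved and checked against the global balance condition. A minimal sketch, assuming numpy:

```python
import numpy as np

# Balance equations rearranged (one redundant equation dropped) plus
# the normalization pi0 + pi1 + pi2 = 1, as in the weather example.
A = np.array([[ 0.3, -0.50, -0.4],
              [-0.1,  0.75, -0.3],
              [ 1.0,  1.00,  1.0]])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print(np.round(pi, 4))          # ≈ (0.596, 0.1722, 0.2318)

# Verify stationarity directly: ~pi P = ~pi.
P = np.array([[0.70, 0.10, 0.20],
              [0.50, 0.25, 0.25],
              [0.40, 0.30, 0.30]])
assert np.allclose(pi @ P, pi)
```

The solution matches the limiting row of P¹² computed earlier, as it must for this chain, where the limiting and stationary distributions coincide.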
– Null recurrent if π_j = 0.
• State j is transient if Σ_{n=1}^{∞} p_jj^(n) < ∞.

Discrete-time Markov process XII
Periodicity and aperiodicity:

Discrete-time Markov process XIII
In a place, a mosquito is produced every hour with prob.
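The two-state flip chain used earlier to show that a stationary distribution need not be a limiting distribution is also the standard example of a periodic chain. A short numpy check:

```python
import numpy as np

# Two-state chain that deterministically flips state each step.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# z = (1/2, 1/2) is stationary: z = zP and z·1 = 1 ...
z = np.array([0.5, 0.5])
assert np.allclose(z @ P, z)

# ... yet no limiting distribution exists: P^n alternates between P and I,
assert np.allclose(np.linalg.matrix_power(P, 2), np.eye(2))
assert np.allclose(np.linalg.matrix_power(P, 3), P)

# so from pi(0) = (1, 0) the state distribution cycles (1,0), (0,1), (1,0), ...
# This chain returns to each state only at even times: it has period 2.
```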