ELL 785 - Computer Communication Networks

Lecture 3

Contents
• Motivations
• Discrete-time Markov processes
• Review on Poisson process
• Continuous-time Markov processes
• Queueing systems


Circuit switching networks - I

Traffic fluctuates as calls are initiated & terminated
• Telephone calls come and go
• People's activity follows patterns: mid-morning & mid-afternoon at the office, evening at home, summer vacation, etc.
• Outlier days are extra busy (Mother's Day, Christmas, ...); disasters & other events cause surges in traffic
• Providing enough resources so that call requests are always met is too expensive
• Meeting call requests most of the time is cost-effective

Circuit switching networks - II

Fluctuation in trunk occupancy
• The number of busy trunks varies as telephone calls come and go (figure: occupancy of trunks 1-7 over time)
• When all trunks are busy, new call requests are blocked
• Switches concentrate traffic onto shared trunks: blocking of requests will occur from time to time
• Many lines share fewer trunks - minimize the number of trunks subject to a blocking probability constraint

Packet switching networks - I

Statistical multiplexing
• Dedicated lines involve no waiting for other users, but lines are used inefficiently when user traffic is bursty
• A shared line concentrates packets from many inputs; packets are buffered (delayed) when the line is not immediately available
• (Figure: (a) dedicated lines carrying A1 A2, B1 B2, C1 C2 separately; (b) a shared line carrying A1 C1 B1 A2 B2 C2)

Packet switching networks - II

Fluctuations in packets in the system
• (Figure: input lines A, B, C feed a buffer and a shared output line; the number of packets in the system fluctuates over time)


Packet switching networks - III

Delay = waiting time + service time
• (Figure: packets P1-P5 arrive at the queue, wait, begin transmission, and complete transmission)
• Packet arrival process: e.g., # of people in Cafe Coffee Day, # of rickshaws at the IIT main gate
• Packet service time:
  - R bps transmission rate and a packet of L bits long
  - Service time: L/R (transmission time for a packet)
  - Packet length can be a constant, or a random variable

Random (or Stochastic) Processes

General notion
• Suppose a random experiment is specified by the outcomes ζ from some sample space S, and ζ ∈ S
• A random (or stochastic) process is a mapping from ζ to a function of t: X(t, ζ)
  - For fixed t, e.g., t1, t2, ...: X(ti, ζ) is a random variable
  - For fixed ζ: X(t, ζi) is a sample path or realization

Discrete-time Markov process I

A sequence of integer-valued random variables, Xn, n = 0, 1, ..., is called a discrete-time Markov process if the following Markov property holds:

  Pr[X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0] = Pr[X_{n+1} = j | X_n = i]

• State: the value of Xn at time n in the set S
• State space: the set S = {n | n = 0, 1, ...}
  - An integer-valued Markov process is called a Markov chain (MC)
  - With an independent Bernoulli sequence Xi with prob. 1/2, is Yn = 0.5(Xn + X_{n-1}) a Markov process?
  - Is the vector process Yn = (Xn, X_{n-1}) a Markov process?

Discrete-time Markov process II

Time-homogeneous, if for any n,

  pij = Pr[X_{n+1} = j | X_n = i]  (independent of time n)

which is called the one-step (state) transition probability.

State transition probability matrix:

  P = [pij] = [ p00 p01 p02 ... ]
              [ p10 p11 p12 ... ]
              [ p20 p21 p22 ... ]
              [ ...             ]

which is a stochastic matrix with pij ≥ 0 and Σ_j pij = 1.
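As a quick illustration (a sketch added here, not part of the original slides), a time-homogeneous chain can be simulated directly from its transition matrix; the weather chain used later in this lecture serves as the example.

```python
import random

def simulate_dtmc(P, x0, steps, rng):
    """Simulate a time-homogeneous discrete-time Markov chain given row-stochastic P."""
    visits = [0] * len(P)
    x = x0
    for _ in range(steps):
        u = rng.random()
        cum = 0.0
        for j, p in enumerate(P[x]):   # sample the next state from row x of P
            cum += p
            if u < cum:
                x = j
                break
        visits[x] += 1
    return [v / steps for v in visits]

# Weather chain: states 0=Sunny, 1=Cloudy, 2=Rainy
P = [[0.70, 0.10, 0.20],
     [0.50, 0.25, 0.25],
     [0.40, 0.30, 0.30]]
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)  # stochastic matrix

freq = simulate_dtmc(P, 0, 100_000, random.Random(1))
# Long-run visit fractions should be near the stationary probabilities
# (about 0.596, 0.172, 0.232) computed later in this lecture.
```

The function names and the seed are illustrative choices; any row-stochastic matrix can be plugged in.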


Discrete-time Markov process III

A mouse in a maze
• The maze is a 3×3 grid of cells numbered 1-9; the mouse chooses the next cell to visit with probability 1/k, where k is the number of adjacent cells
• The mouse does not move any more once it is caught by the cat or it has the cheese (cells 7 and 9 are absorbing)

         1     2     3     4     5     6     7     8     9
    1 [  0    1/2    0    1/2    0     0     0     0     0  ]
    2 [ 1/3    0    1/3    0    1/3    0     0     0     0  ]
    3 [  0    1/2    0     0     0    1/2    0     0     0  ]
    4 [ 1/3    0     0     0    1/3    0    1/3    0     0  ]
P = 5 [  0    1/4    0    1/4    0    1/4    0    1/4    0  ]
    6 [  0     0    1/3    0    1/3    0     0     0    1/3 ]
    7 [  0     0     0     0     0     0     1     0     0  ]
    8 [  0     0     0     0    1/3    0    1/3    0    1/3 ]
    9 [  0     0     0     0     0     0     0     0     1  ]

Discrete-time Markov process IV

n-step transition probability matrix:

  pij^(n) = Pr[X_{l+n} = j | X_l = i]  for n ≥ 0, i, j ≥ 0

• Consider a two-step transition probability:

  Pr[X2 = j, X1 = k | X0 = i] = Pr[X2 = j, X1 = k, X0 = i] / Pr[X0 = i]
    = Pr[X2 = j | X1 = k] Pr[X1 = k | X0 = i] Pr[X0 = i] / Pr[X0 = i]
    = pik pkj

• Summing over k, we have

  pij^(2) = Σ_k pik pkj
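To make the two-step identity concrete (an illustrative sketch, not from the slides), the maze matrix can be built exactly with rational arithmetic and pij^(2) checked against the matrix product:

```python
from fractions import Fraction as F

# Maze transition matrix (cells 1..9 -> indices 0..8); cells 7 and 9 absorb.
adj = {1: [2, 4], 2: [1, 3, 5], 3: [2, 6], 4: [1, 5, 7],
       5: [2, 4, 6, 8], 6: [3, 5, 9], 7: [7], 8: [5, 7, 9], 9: [9]}
P = [[F(0)] * 9 for _ in range(9)]
for i, nbrs in adj.items():
    for j in nbrs:
        P[i - 1][j - 1] = F(1, len(nbrs))

# Two-step transition probabilities: p_ij^(2) = sum_k p_ik * p_kj
P2 = [[sum(P[i][k] * P[k][j] for k in range(9)) for j in range(9)] for i in range(9)]
# e.g. p_11^(2) = p_12 p_21 + p_14 p_41 = (1/2)(1/3) + (1/2)(1/3) = 1/3
```

Because a product of stochastic matrices is stochastic, every row of P2 still sums to 1.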

Discrete-time Markov process IV

In a place, the weather each day is classified as sunny, cloudy or rainy. The next day's weather depends only on the weather of the present day and not on the weather of the previous days. If the present day is sunny, the next day will be sunny, cloudy or rainy with respective probabilities 0.70, 0.10 and 0.20. The transition probabilities are 0.50, 0.25 and 0.25 when the present day is cloudy; 0.40, 0.30 and 0.30 when the present day is rainy.

           S      C      R
    S [ 0.7    0.1    0.2  ]
P = C [ 0.5    0.25   0.25 ]
    R [ 0.4    0.3    0.3  ]

(State transition diagram: Sunny, Cloudy and Rainy with the probabilities above.)

Discrete-time Markov process V

The Chapman-Kolmogorov equations:

  pij^(n+m) = Σ_{k=0}^∞ pik^(n) pkj^(m)  for n, m ≥ 0, i, j ∈ S

Proof:

  Pr[X_{n+m} = j | X0 = i] = Σ_{k∈S} Pr[X_{n+m} = j | X0 = i, Xn = k] Pr[Xn = k | X0 = i]
  (Markov property)    = Σ_{k∈S} Pr[X_{n+m} = j | Xn = k] Pr[Xn = k | X0 = i]
  (Time-homogeneous)   = Σ_{k∈S} Pr[Xm = j | X0 = k] Pr[Xn = k | X0 = i]

• Using the n-step transition probability matrix,

  P^(n+m) = P^n P^m  ⇒  P^(n+1) = P^n P

For the weather example,

        [ 0.601 0.168 0.230 ]              [ 0.596 0.172 0.231 ]
  P^3 = [ 0.596 0.175 0.233 ]   and P^12 = [ 0.596 0.172 0.231 ] = P^13
        [ 0.585 0.179 0.234 ]              [ 0.596 0.172 0.231 ]
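The convergence of the matrix powers can be reproduced numerically (an illustrative sketch, not part of the slides):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(P, n):
    """n-th power of P by repeated multiplication (n >= 1)."""
    R = P
    for _ in range(n - 1):
        R = matmul(R, P)
    return R

P = [[0.70, 0.10, 0.20], [0.50, 0.25, 0.25], [0.40, 0.30, 0.30]]
P12 = matpow(P, 12)
P13 = matmul(P12, P)
# By n = 12 the rows are (numerically) identical and a further step changes nothing,
# matching the slide's P^12 = P^13 observation.
```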


Discrete-time Markov process VI

If P^(n) has identical rows, then P^(n+1) does also. Suppose every row of P^(n) equals the same row vector r. Then row j of

  P P^(n) = [ pj0 pj1 pj2 ... ] applied to the rows r, r, r, ...
          = pj0 r + pj1 r + pj2 r + ... = r   (since Σ_k pjk = 1)

so P^(n+1) = P P^(n) = P^(n).

Discrete-time Markov process VII

State probabilities at time n
• πi^(n) = Pr[Xn = i] and π^(n) = [π0^(n), ..., πi^(n), ...] (row vector)
• πi^(0): the initial state probability

  Pr[Xn = j] = Σ_{i∈S} Pr[Xn = j | X0 = i] Pr[X0 = i], i.e., πj^(n) = Σ_{i∈S} pij^(n) πi^(0)

• In matrix notation: π^(n) = π^(0) P^n

Limiting distribution: given an initial prob. distribution π^(0),

  π⃗ = lim_{n→∞} π^(n) = lim_{n→∞} π^(0) P^n = π^(0) lim_{n→∞} P^{n+1}
     = (π^(0) lim_{n→∞} P^n) P = π⃗ P

Discrete-time Markov process VIII

Stationary distribution:
• zj and z = [zj] denote the prob. of being in state j and its vector:

  z = z·P and z·1⃗ = 1

• If z is chosen as the initial distribution, i.e., πj^(0) = zj for all j, we have πj^(n) = zj for all n:

  z = z·P = z·P² = z·P³ = ...

• A limiting distribution, when it exists, is always a stationary distribution, but the converse is not true:

      [ 0 1 ]        [ 1 0 ]        [ 0 1 ]
  P = [ 1 0 ],  P² = [ 0 1 ],  P³ = [ 1 0 ], ...

  (z = [1/2, 1/2] is stationary here, but lim_{n→∞} P^n does not exist)

Discrete-time Markov process IX

Global balance equation:

  π⃗ = π⃗P  ⇒  (for each state j)  πj Σ_{i≠j} pji = Σ_{i≠j} πi pij

• LHS represents the total flow from state j into the states except j
• RHS shows the total flow from other states to state j


Discrete-time Markov process X

Back to the weather example
• Using π⃗P = π⃗, we have

  π0 = 0.7π0 + 0.5π1 + 0.4π2
  π1 = 0.1π0 + 0.25π1 + 0.3π2
  π2 = 0.2π0 + 0.25π1 + 0.3π2

  - Note that one equation is always redundant
• Using 1 = π0 + π1 + π2, we have

  [  0.3  -0.5  -0.4 ] [π0]   [0]
  [ -0.1  0.75  -0.3 ] [π1] = [0]
  [   1     1     1  ] [π2]   [1]

  π0 = 0.596, π1 = 0.1722, π2 = 0.2318

Discrete-time Markov process XI

Classes of states:
• State j is accessible from state i if pij^(n) > 0 for some n
• States i and j communicate if they are accessible to each other
• Two states belong to the same class if they communicate with each other
• An MC having a single class is said to be irreducible

Recurrence property:
• State j is recurrent if Σ_{n=1}^∞ pjj^(n) = ∞
  - Positive recurrent if πj > 0
  - Null recurrent if πj = 0
• State j is transient if Σ_{n=1}^∞ pjj^(n) < ∞

Discrete-time Markov process XII

Periodicity and aperiodicity:
• State i has period d if pii^(n) = 0 when n is not a multiple of d, where d is the largest integer with this property
• State i is aperiodic if it has period d = 1
• All states in a class have the same period
  - An irreducible Markov chain is said to be aperiodic if the states in its single class have period one
• (State classification: a state is recurrent or transient; a recurrent state is positive or null recurrent; a positive recurrent, aperiodic state is ergodic)

Discrete-time Markov process XIII

In a place, a mosquito is produced every hour with prob. p, and dies with prob. 1 − p.
• Show the state transition diagram (a birth-death chain on states 0, 1, 2, 3, ...)
• Using global balance eqns, find the (stationary) state prob.:

  p πi = (1 − p) π_{i+1}  →  π_{i+1} = (p/(1 − p)) πi  →  πi = (p/(1 − p))^i π0

• All states are positive recurrent if p < 1/2, null recurrent if p = 1/2, and transient if p > 1/2 (see Σ_{i=0}^∞ πi = 1)
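For the positive recurrent case the stationary probabilities form a geometric distribution; a small sketch (illustrative, with p = 0.3 as an assumed example value) checks global balance and normalization numerically:

```python
p = 0.3                        # birth probability; p < 1/2 gives positive recurrence
rho = p / (1 - p)              # pi_{i+1} = rho * pi_i
pi0 = 1 - rho                  # from sum_{i>=0} pi0 * rho^i = 1
pi = [pi0 * rho**i for i in range(200)]   # truncated geometric tail is negligible

# Global balance across each cut: p*pi_i = (1-p)*pi_{i+1}
balance_ok = all(abs(p * pi[i] - (1 - p) * pi[i + 1]) < 1e-12 for i in range(199))
total = sum(pi)                # should be ~1
```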


Discrete-time Markov process XIV

An autorickshaw driver provides service in two zones of New Delhi. Fares picked up in zone A will have destinations in zone A with probability 0.6 or in zone B with probability 0.4. Fares picked up in zone B will have destinations in zone A with probability 0.3 or in zone B with probability 0.7. The driver's expected profit for a trip entirely in zone A is 40 Rupees (Rps); for a trip entirely in zone B, 80 Rps; and for a trip that involves both zones, 110 Rps.
• Find the stationary prob. that the driver is in each zone
• What is the expected profit of the driver?

  (40 × 0.6 + 110 × 0.4)πA + (80 × 0.7 + 110 × 0.3)πB
    = 68πA + 89πB = 68πA + 89(1 − πA) = 89 − 21πA

Discrete-time Markov process XV

Diksha possesses 5 umbrellas which she employs in going from her home to office, and vice versa. If she is at home (the office) at the beginning (end) of a day and it is raining, then she will take an umbrella with her to the office (home), provided there is one to be taken. If it is not raining, then she never takes an umbrella. Assume that, independent of the past, it rains at the beginning (end) of a day with probability p.
• By defining a Markov chain with 6 states which enables us to determine the proportion of time that our TA gets wet, draw its state transition diagram by specifying all state transition probabilities (Note: she gets wet if it is raining and all umbrellas are at her other location.)
• Find the probability that our TA gets wet
• At what value of p can the chance for our TA to get wet be highest?
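A short sketch (illustrative, not in the slides) solves the two-zone chain and evaluates the profit formula above:

```python
# Zone chain: A -> A w.p. 0.6, B -> A w.p. 0.3.
# Stationarity across the A/B cut: 0.4*piA = 0.3*piB, with piA + piB = 1.
piA = 0.3 / (0.4 + 0.3)        # = 3/7
piB = 1 - piA                  # = 4/7

# Check pi = pi * P for the first component
assert abs(piA - (0.6 * piA + 0.3 * piB)) < 1e-12

profit = 68 * piA + 89 * piB   # = 89 - 21*piA Rupees per trip
```

With piA = 3/7 the expected profit works out to 89 − 21·(3/7) = 80 Rps per trip.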

Drift and Stability I

Suppose an irreducible, aperiodic, discrete-time MC
• The chain is 'stable' if πj > 0 for all j
• Drift is defined as

  Di = E[X_{n+1} − Xn | Xn = i] = Σ_{k=−i}^∞ k p_{i(i+k)}

  - If Di > 0, the process tends to move up to higher states from state i
  - If Di < 0, the process tends to visit lower states from state i
  - In the previous slide (the mosquito chain), Di = 1·p − 1·(1 − p) = 2p − 1

Pakes' lemma: suppose
  1) Di < ∞ for all i
  2) For some scalar δ > 0 and integer ī ≥ 0, Di ≤ −δ for all i > ī
Then, the MC has a stationary distribution.

Drift and Stability II

Proof: let β = max_{i ≤ ī} Di (see page 264 in the textbook). Then

  E[Xn | X0 = i] − i = E[Xn − X_{n−1} + X_{n−1} − X_{n−2} + ... + X1 − X0 | X0 = i]
    = Σ_{k=1}^n E[Xk − X_{k−1} | X0 = i]
    = Σ_{k=1}^n Σ_{j=0}^∞ E[Xk − X_{k−1} | X_{k−1} = j] Pr[X_{k−1} = j | X0 = i]
    ≤ Σ_{k=1}^n ( β Σ_{j=0}^{ī} Pr[X_{k−1} = j | X0 = i]
                  + Σ_{j=ī+1}^∞ E[Xk − X_{k−1} | X_{k−1} = j] Pr[X_{k−1} = j | X0 = i] )

where each conditional expectation in the last sum is ≤ −δ.

Drift and Stability III

(Continued)

  E[Xn | X0 = i] − i ≤ Σ_{k=1}^n ( β Σ_{j=0}^{ī} Pr[X_{k−1} = j | X0 = i]
                                   − δ (1 − Σ_{j=0}^{ī} Pr[X_{k−1} = j | X0 = i]) )
    = (β + δ) Σ_{k=1}^n Σ_{j=0}^{ī} Pr[X_{k−1} = j | X0 = i] − nδ

from which we can get

  0 ≤ E[Xn | X0 = i] ≤ i + n ( (β + δ) Σ_{j=0}^{ī} (1/n) Σ_{k=1}^n pij^(k) − δ )

Drift and Stability IV

Dividing by n and letting n → ∞ yields

  0 ≤ (β + δ) Σ_{j=0}^{ī} πj − δ

  - πj = lim_{n→∞} (1/n) Σ_{k=1}^n pij^(k) (Cesàro limit) = lim_{n→∞} pij^(n)
  - This implies Σ_{j=0}^{ī} πj ≥ δ/(β + δ) > 0, so πj > 0 for some j ∈ {0, ..., ī} and the MC has a stationary distribution

Kaplan's instability lemma (the converse of the stability lemma):
• There exist integers ī > 0 and k such that

  Di > 0 for all i > ī, and pij = 0 for all i and j such that 0 ≤ j ≤ i − k

Then, the Markov chain does not have a stationary distribution.
Review on Poisson process I

Properties of a Poisson process, Λ(t), for some finite rate λ (arrivals/sec):

P1) Independent increments: the numbers of arrivals in disjoint intervals, e.g., [t1, t2] and [t3, t4], are independent random variables. The counting distribution is

  Pr[Λ(t) = k] = (λt)^k/k! · e^{−λt}  for k = 0, 1, ...

P2) Stationary increments: the number of events (or arrivals) in (t, t + h] is independent of t.

Review on Poisson process II

P3) Interarrival (or inter-occurrence) times between Poisson arrivals are exponentially distributed.
Suppose τ1, τ2, τ3, ... are the epochs of the first, second and third arrivals; then the interarrival times t1, t2 and t3 are given by t1 = τ1, t2 = τ2 − τ1 and t3 = τ3 − τ2; generally, tn = τn − τ_{n−1} with τ0 = 0.

Using the PGF of the distribution of Λ(t), i.e., E[z^Λ(t)] = Σ_{k=0}^∞ z^k Pr[Λ(t) = k] = e^{λt(z−1)},

  E[z^Λ(t+h)] = E[z^Λ(t) · z^{Λ(t+h)−Λ(t)}] = E[z^Λ(t)] · E[z^{Λ(t+h)−Λ(t)}], due to P1
  ⇒ E[z^{Λ(t+h)−Λ(t)}] = e^{λ(t+h)(z−1)} / e^{λt(z−1)} = e^{λh(z−1)}

1. For t1, we have Pr[Λ(t) = 0] = e^{−λt} = Pr[t1 ≥ t] for t ≥ 0, which means that t1 is exponentially distributed with mean 1/λ.
2. For t2, we get Pr[t2 > t | t1 = x] = Pr[Λ(t + x) − Λ(x) = 0] = Pr[Λ(t) = 0] = e^{−λt}, which also means that t2 is independent of t1 and has the same distribution as t1. Similarly t3, t4, ... are iid.


Review on Poisson process III

P4) The converse of P3 is true:
If the sequence of interarrival times {ti} is iid rv's with exp. density fun. λe^{−λt}, t ≥ 0, then the number of arrivals in the interval [0, t], Λ(t), is a Poisson process.

Let Yj denote the sum of j independent rv's with exp. density fun.; then Yj is Erlang-j distributed, f_Y(y) = λ(λy)^{j−1} e^{−λy}/(j − 1)!:

  Pr[Λ(t) = j] = ∫_0^t Pr[0 arrivals in (y, t] | Yj = y] f_Y(y) dy
               = ∫_0^t e^{−λ(t−y)} f_Y(y) dy = (λt)^j e^{−λt}/j!

Review on Poisson process IV

P5) For a short interval, the probability that an arrival occurs in the interval is proportional to the interval size, i.e.,

  lim_{h→0} Pr[Λ(h) = 1]/h = lim_{h→0} e^{−λh}(λh)/h = λ

Or, we have Pr[Λ(h) = 1] = λh + o(h), where lim_{h→0} o(h)/h = 0.

P6) The probability of two or more arrivals in an interval of length h gets small as h → 0. For every t ≥ 0,

  lim_{h→0} Pr[Λ(h) ≥ 2]/h = lim_{h→0} (1 − e^{−λh} − λh e^{−λh})/h = 0  (L'Hôpital's rule)
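The integral in P4 can be verified numerically (an illustrative sketch; λ = 2, t = 1.5 and j = 3 are assumed example values): integrating e^{−λ(t−y)} f_Y(y) over [0, t] should reproduce the Poisson pmf.

```python
import math

lam, t, j = 2.0, 1.5, 3          # assumed example values

def f_Y(y):
    """Erlang-j density: sum of j iid exponential(lam) variables."""
    return lam * (lam * y) ** (j - 1) * math.exp(-lam * y) / math.factorial(j - 1)

# Trapezoidal integration of exp(-lam*(t-y)) * f_Y(y) over [0, t]
n = 20_000
h = t / n
vals = [math.exp(-lam * (t - k * h)) * f_Y(k * h) for k in range(n + 1)]
integral = h * (vals[0] / 2 + sum(vals[1:-1]) + vals[-1] / 2)

poisson_pmf = (lam * t) ** j * math.exp(-lam * t) / math.factorial(j)
```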
Review on Poisson process V

P7) Merging: if the Λi(t)'s are mutually independent Poisson processes with rates λi, the superposition process Λ(t) = Σ_{i=1}^k Λi(t) is a Poisson process with rate λ = Σ_{i=1}^k λi.
Note: if the interarrival times of the ith stream are a sequence of iid rv's but not necessarily exponentially distributed, then Λ(t) tends to a Poisson process as k → ∞. [D. Cox]

P8) Splitting: if an arrival randomly chooses the ith branch with probability πi, the arrival process at the ith branch, Λi(t), is Poisson with rate λi (= πiλ). Moreover, Λi(t) is independent of Λj(t) for any pair of i and j (i ≠ j).

Continuous-time Markov process I

A stochastic process is called a continuous-time MC if it satisfies

  Pr[X(t_{k+1}) = x_{k+1} | X(tk) = xk, X(t_{k−1}) = x_{k−1}, ..., X(t1) = x1]
    = Pr[X(t_{k+1}) = x_{k+1} | X(tk) = xk]

X(t) is a time-homogeneous continuous-time MC if

  Pr[X(t + s) = j | X(s) = i] = pij(t)  (independent of s)

which is analogous to pij in a discrete-time MC.

(Figure: a sample path of a continuous-time MC; the state stays constant during each sojourn time and jumps at the times of state changes.)

Continuous-time Markov process II

State transition rate
• qii(δ) = Pr[the process remains in state i during δ sec]

  qii(δ) = Pr[Ti > δ] = e^{−vi δ} = 1 − vi δ + (vi δ)²/2! − ... = 1 − vi δ + o(δ)

• Or, let vi be the rate at which the process moves out of state i:

  lim_{δ→0} (1 − qii(δ))/δ = lim_{δ→0} (vi δ + o(δ))/δ = vi

(State changes occur as a Poisson process with mean rate vi.)

Continuous-time Markov process III

State occupancy time follows an exponential dist.
• Let Ti be the sojourn (or occupancy) time of X(t) in state i before making a transition to any other state
  - Ti is assumed to be exponentially distributed with mean 1/vi
• For all s ≥ 0 and t ≥ 0, due to the Markovian property of this process,

  Pr[Ti > s + t | Ti > s] = Pr[Ti > t] = e^{−vi t}

  - Only the exponential dist. satisfies this (memoryless) property
• Semi-Markov process:
  - The process jumps to state j; such a jump depends only on the previous state
  - Tj for all j follows a general (independent) distribution
Continuous-time Markov process IV

State probabilities πj(t) = Pr[X(t) = j]. For δ > 0,

  πj(t + δ) = Pr[X(t + δ) = j]
            = Σ_i Pr[X(t + δ) = j | X(t) = i] Pr[X(t) = i]
            = Σ_i qij(δ) πi(t)   ( ⇔ πi^(n+1) = Σ_j pji πj^(n) in a DTMC )

(Figure: transitions into state j from any other state.)

Continuous-time Markov process V

Subtracting πj(t) from both sides,

  πj(t + δ) − πj(t) = Σ_i qij(δ) πi(t) − πj(t)
                    = Σ_{i≠j} qij(δ) πi(t) + (qjj(δ) − 1) πj(t)

Dividing both sides by δ and letting δ → 0,

  dπj(t)/dt = lim_{δ→0} (1/δ) [ Σ_{i≠j} qij(δ) πi(t) + (qjj(δ) − 1) πj(t) ]
            = Σ_i γij πi(t),  with γii = −vi,

which is a form of the Chapman-Kolmogorov equations:

  dπj(t)/dt = Σ_i γij πi(t)

Continuous-time Markov process VI

As t → ∞, the system reaches 'equilibrium' or 'steady-state':

  dπj(t)/dt → 0 and πj(∞) = πj

  0 = Σ_i γij πi  or  vj πj = Σ_{i≠j} γij πi   ( γjj = −vj = −Σ_{i≠j} γji )

which is called the global balance equation, and Σ_j πj = 1.

(State transition rate diagram.)

Continuous-time Markov process VII

As a matrix form,

  dπ⃗(t)/dt = π⃗(t) Q and π⃗(t)·1⃗ = 1

whose solution is given by

  π⃗(t) = π⃗(0) e^{Qt}

As t → ∞, π⃗(∞) ≜ π⃗ = [πi], and

                      [ −v0  γ01  γ02  γ03 ... ]
  π⃗Q = 0 with   Q =  [ γ10  −v1  γ12  γ13 ... ]   and π⃗·1⃗ = 1,
                      [ γ20  γ21  −v2  γ23 ... ]
                      [  ...                   ]

where Q is called the infinitesimal generator or rate matrix.
Continuous-time Markov Process VIII

Comparison between discrete- and continuous-time MC
• Discrete-time Markov process: state changes occur at the integer time steps n = 0, 1, 2, ...
• Continuous-time Markov process: the state stays constant during an exponentially distributed sojourn time and changes at random instants
(Figure: sample paths of a discrete-time and a continuous-time Markov process, marking the sojourn time in a state and the times of state changes.)

Two-state CTMC I

A queueing system alternates between two states. In state 0, the system is idle and waiting for a customer to arrive. This idle time is an exponential random variable with mean 1/α. In state 1, the system is busy servicing a customer. The time in the busy state is an exponential random variable with mean 1/β.
• Find the state transition rate matrix:

  Q = [ γ00 γ01 ] = [ −α   α ]
      [ γ10 γ11 ]   [  β  −β ]

• Draw the state transition rate diagram (states 0 and 1, with rate α from 0 to 1 and rate β from 1 to 0)

Two-state CTMC II

Find the state probabilities with initial state probabilities π0(0) and π1(0): use dπj(t)/dt = Σ_i γij πi(t)

  π0'(t) = −απ0(t) + βπ1(t) and π1'(t) = απ0(t) − βπ1(t)

• Using π0(t) + π1(t) = 1, we have

  π0'(t) = −απ0(t) + β(1 − π0(t)) = −(α + β)π0(t) + β,  with π0(0) = p0

• Assume π0(t) = C1 e^{−at} + C2:
  (a) Find the homogeneous part: π0'(t) + (α + β)π0(t) = 0
  (b) Find a particular solution of π0'(t) + (α + β)π0(t) = β
  (c) Using the solutions in (a) and (b), determine the coefficients from the initial condition:

  π0(t) = β/(α + β) + C e^{−(α+β)t}  with  C = p0 − β/(α + β)

Cartridge Inventory I

An office orders laser printer cartridges in batches of four cartridges. Suppose that each cartridge lasts for an exponentially distributed time with mean 1 month. Assume that a new batch of four cartridges becomes available as soon as the last cartridge in a batch runs out.
• Find the state transition rate matrix (states: number of cartridges available, 1 to 4):

      [ −1   0   0   1 ]
  Q = [  1  −1   0   0 ]
      [  0   1  −1   0 ]
      [  0   0   1  −1 ]

• Find the stationary pmf for N(t), the number of cartridges available at time t
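The closed-form transient solution can be cross-checked by integrating the ODE numerically (an illustrative sketch; α = 2, β = 1 and p0 = 1 are assumed example values):

```python
import math

alpha, beta, p0 = 2.0, 1.0, 1.0    # assumed example values

def pi0_exact(t):
    """Closed-form solution pi0(t) = beta/(alpha+beta) + C*exp(-(alpha+beta)t)."""
    c = p0 - beta / (alpha + beta)
    return beta / (alpha + beta) + c * math.exp(-(alpha + beta) * t)

# Forward-Euler integration of pi0'(t) = -(alpha+beta)*pi0(t) + beta
dt, t_end = 1e-4, 1.0
pi0 = p0
for _ in range(int(t_end / dt)):
    pi0 += dt * (-(alpha + beta) * pi0 + beta)
# pi0 now approximates pi0_exact(1.0)
```

As t grows, both approach the steady-state value β/(α + β).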
Cartridge Inventory II

Transient behavior of πi(t): π⃗(t) = π⃗(0) e^{Qt} = π⃗(0) E e^{Λt} E^{−1}
• E and Λ are given by the eigendecomposition of Q:

            [ 1   1    1    1 ]        [ 0    0     0    0 ]
  E = (1/2) [ 1   i   −i   −1 ]    Λ = [ 0  −1−i    0    0 ]
            [ 1  −1   −1    1 ]        [ 0    0   −1+i   0 ]
            [ 1  −i    i   −1 ]        [ 0    0     0   −2 ]

  - note that i = √(−1); use 'expm' in Matlab

(Figure: transient state probabilities π1(t), π2(t), π3(t), π4(t) for 0 ≤ t ≤ 6.)

Barber shop I

Customers arrive at a barber shop according to a Poisson process with rate λ. One barber serves those customers on a first-come first-served basis. The service time Si is exponentially distributed with mean 1/µ (sec). The number of customers in the system, N(t) for t ≥ 0, forms a Markov chain:

  N(t + τ) = max(N(t) − B(τ) + A(τ), 0)

State transition probabilities (see the properties of the Poisson process):

  Pr[0 arrival (or departure) in (t, t + δ)] = 1 − λδ + o(δ)  (or 1 − µδ + o(δ))
  Pr[1 arrival (or departure) in (t, t + δ)] = λδ + o(δ)  (or µδ + o(δ))
  Pr[more than 1 arrival (or departure) in (t, t + δ)] = o(δ)

Barber shop II

Find Pn(t) ≜ Pr[N(t) = n]. For n ≥ 1,

  Pn(t + δ) = Pn(t) Pr[0 arrival & 0 departure in (t, t + δ)]
            + P_{n−1}(t) Pr[1 arrival & 0 departure in (t, t + δ)]
            + P_{n+1}(t) Pr[0 arrival & 1 departure in (t, t + δ)] + o(δ)
            = Pn(t)(1 − λδ)(1 − µδ) + P_{n−1}(t)(λδ)(1 − µδ)
            + P_{n+1}(t)(1 − λδ)(µδ) + o(δ)

Rearranging and dividing it by δ,

  (Pn(t + δ) − Pn(t))/δ = −(λ + µ)Pn(t) + λP_{n−1}(t) + µP_{n+1}(t) + o(δ)/δ

As δ → 0, for n > 0 we have

  dPn(t)/dt = −(λ + µ)Pn(t) + λP_{n−1}(t) + µP_{n+1}(t)
              (rate out of state n; rate from n−1 to n; rate from n+1 to n)

Barber shop III

For n = 0, we have

  dP0(t)/dt = −λP0(t) + µP1(t)

As t → ∞, i.e., in steady state, we have Pn(∞) = πn with dPn(t)/dt = 0:

  λπ0 = µπ1
  (λ + µ)πn = λπ_{n−1} + µπ_{n+1}  for n ≥ 1

(State transition rate diagram: a birth-death chain with rate λ upward and µ downward.)

The solution of the above equations is (ρ = λ/µ)

  πn = ρ^n π0  and  1 = π0 (1 + Σ_{i=1}^∞ ρ^i)  ⇒  π0 = 1 − ρ

Barber shop IV

ρ: the server's utilization (< 1, i.e., λ < µ)
Mean number of customers in the system:

  E[N] = Σ_{n=0}^∞ n πn = ρ/(1 − ρ) = ρ (in server) + ρ²/(1 − ρ) (in queue)

(Figures: for an M/M/1 system with 1/µ = 1, simulation agrees with analysis for the mean number of customers in the system and the mean system response time over 0.1 ≤ ρ ≤ 0.9.)

Barbershop V

Recall the state transition rate matrix Q with π⃗Q = 0 and π⃗·1⃗ = 1 (slide "Continuous-time Markov process VII")
• What are γij and vi in the M/M/1 queue?

        λ,        if j = i + 1,
  γij = µ,        if j = i − 1,
        −(λ + µ), if j = i,
        0,        otherwise

• If a and b denote the interarrival and service time, respectively, then the sojourn time in state i ≥ 1 is min(a, b), which is exponentially distributed with rate vi = λ + µ (and v0 = λ)
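The geometric stationary distribution πn = (1 − ρ)ρ^n and the mean E[N] = ρ/(1 − ρ) can be sanity-checked numerically (an illustrative sketch; ρ = 0.5 is an assumed example value):

```python
rho = 0.5                                  # utilization lambda/mu < 1
pi = [(1 - rho) * rho**n for n in range(2000)]   # truncated; tail is negligible

total = sum(pi)                            # should be ~1
mean_N = sum(n * p for n, p in enumerate(pi))    # should approach rho/(1-rho)

# Balance check: (lam+mu)*pi_n = lam*pi_{n-1} + mu*pi_{n+1}, taking mu = 1, lam = rho
lam, mu = rho, 1.0
balance_ok = all(abs((lam + mu) * pi[n] - lam * pi[n - 1] - mu * pi[n + 1]) < 1e-12
                 for n in range(1, 1000))
```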

Barbershop VI

Distribution of the sojourn time, T:

  T_N = S1 + S2 + ... + S_N (customers ahead) + S_{N+1}

An arriving customer finds N customers in the system (including the customer in the server).
• By the memoryless property of the exponential distribution, the remaining service time of the customer in service is also exponentially distributed:

  f_T(t) = Σ_{i=0}^∞ µ (µt)^i/i! e^{−µt} πi
         = Σ_{i=0}^∞ µ (µt)^i/i! e^{−µt} ρ^i (1 − ρ) = µ(1 − ρ) e^{−µ(1−ρ)t}

which can also be obtained via the Laplace transform of the distribution of Si.

Barbershop VII

The barbershop at the student activity center opens up for business at t = 0. Customers arrive at random based on a Poisson process with mean rate λ (customers/sec). Assume that there is one barber and that each haircut takes X sec, which is exponentially distributed with mean 1/µ, i.e., b(x) = µe^{−µx}.

(a) Find the probability that the second arriving customer will not have to wait
(b) Find the mean waiting time of the second arriving customer
Sum of independent & identical exponential R.V.s

Use the Laplace transform of an exponentially distributed R.V.:

  L*(s) = ∫_0^∞ e^{−ts} µe^{−µt} dt = µ/(s + µ)

Laplace transform of the sum of N exponentially distributed R.V.s:

  L_N*(s) = ( µ/(s + µ) )^N

Use the inverse transform pair:

  t^n e^{−at}/n!  ⇔  1/(s + a)^{n+1}

Barbershop simulation I

Discrete event simulation (flowchart):
• Initialize sim_time = 0; generate an arrival: sim_time = sim_time + interarrival time, Queue = Queue + 1
• At each event, compare the next interarrival time with the service time:
  - if next interarrival time < service time, the next event is an arrival
  - if service time < next interarrival time, the next event is a departure: sim_time = sim_time + service time, Queue = Queue − 1
• If the queue is empty, schedule the next arrival; otherwise schedule the next event
Barbershop simulation II

clear
% Define variables
global arrival departure mservice_time
arrival = 1; departure = -1; mservice_time = 1;
% Set simulation parameters
sim_length = 30000; max_queue = 1000;
% To get delay statistics
system_queue = zeros(1,max_queue);
k = 0;
for arrival_rate = 0.1:0.025:0.97
    k = k + 1;
    % x(k) denotes utilization
    x(k) = arrival_rate*mservice_time;
    % initialize
    sim_time = 0; num_arrivals = 0; num_system = 0;
    upon_arrival = 0; total_delay = 0; num_served = 0;
    % Assuming that the queue is empty, schedule the first arrival
    event = arrival; event_time = exprnd(1/arrival_rate);
    sim_time = sim_time + event_time;
    while (sim_time < sim_length),
        % If an arrival occurs,
        if event == arrival
            num_arrivals = num_arrivals + 1;
            num_system = num_system + 1;
            % Record arrival time of the customer
            system_queue(num_system) = sim_time;
            upon_arrival = upon_arrival + num_system;
            % To see whether a new arrival comes or a departure occurs next
            [event, event_time] = schedule_next_event(arrival_rate);

Barbershop simulation III

        % If a departure occurs,
        elseif event == departure
            delay_per_arrival = sim_time - system_queue(1);
            system_queue(1:max_queue-1) = system_queue(2:max_queue);
            total_delay = total_delay + delay_per_arrival;
            num_system = num_system - 1;
            num_served = num_served + 1;
            if num_system == 0
                % nothing to serve, schedule an arrival
                event = arrival;
                event_time = exprnd(1/arrival_rate);
            elseif num_system > 0
                % still the system has customers to serve
                [event, event_time] = schedule_next_event(arrival_rate);
            end
        end
        sim_time = sim_time + event_time;
    end
    ana_queue_length(k) = (x(k)/(1-x(k)));
    ana_response_time(k) = 1/(1/mservice_time-arrival_rate);
    % Queue length seen by arrivals
    sim_queue_length(k) = upon_arrival/num_arrivals;
    sim_response_time(k) = total_delay/num_served;
end

Barbershop simulation IV

function [event, event_time] = schedule_next_event(arrival_rate)
global arrival departure mservice_time
minter_arrival = 1/arrival_rate;
inter_arrival = exprnd(minter_arrival);
service_time = exprnd(mservice_time);
if inter_arrival < service_time
    event = arrival;
    event_time = inter_arrival;
else
    event = departure;
    event_time = service_time;
end

Queueing systems I

The arrival times, the size of demand for service, the service capacity and the size of the waiting room may be (random) variables.

Queueing discipline: specifies which customer to pick next for service.
• First come first served (FCFS, or FIFO)
• Last come first served (LCFS, LIFO)
• Random order, processor sharing (PS), round robin (RR)
• Priority (preemptive: resume, non-resume; non-preemptive)
• Shortest job first (SJF) and longest job first (LJF)

Queueing systems II

Kendall's notation: A/B/m/K/N
• A: arrival (interarrival) time distribution
• B: service time distribution
• m: # of servers
• K: queue size (default ∞)
• N: population size (default ∞)

For A and B:
• M: Markovian, exponential dist.
• D: deterministic
• GI: general independent
• Ek: Erlang-k
• Hk: mixture of k exponentials
• PH: phase-type distribution

E.g.: M/D/2, M/M/c, G/G/1, etc.; the barbershop is an M/M/1 queue.

Queueing systems III

Customer behavior: jockeying, reneging, balking, etc.

Performance measures:
• N(t) = Nq(t) + Ns(t): number in system; Nq(t): number in queue; Ns(t): number in service
• W: waiting time in queue; T: total time (or response time) in the system; τ: service time
• Throughput: γ ≜ mean # of customers served per unit time
  1. γ for a non-blocking system = min(λ, mµ)
  2. γ for a blocking system = (1 − PB)λ, PB = blocking probability
• Utilization: ρ ≜ fraction of time the server is busy

  ρ = load/capacity = lim_{T→∞} λT/(µT) = λ/µ for a single-server queue
                    = lim_{T→∞} λT/(mµT) = λ/(mµ) for an m-server queue

Little's theorem I

Any queueing system in steady state: N = λT
• N: average number of customers in the system
• λ: steady-state arrival rate; the arrivals need not be Poisson
• T: average delay per customer

(Figure: number of arrivals α(t) and number of departures β(t) versus time; the area between the two curves is the total time spent in the system.)

Proof: for a system with N(0) = 0 and N(t) = 0 as t → ∞,

  Nt = (1/t) ∫_0^t N(τ) dτ = (α(t)/t) · (Σ_{i=1}^{α(t)} Ti / α(t)) = λt · Tt

If N(t) ≠ 0, we have (1/t) Σ_{i=1}^{β(t)} Ti ≤ Nt ≤ λt Tt.

Little's theorem II

A data communication line delivers a block of information every 10 µsec. A decoder checks each block for errors and corrects the errors if necessary. It takes 1 µsec to determine whether a block has any errors. In addition to the time taken to check the block, if the block has one error, it takes 5 µsec to correct it, and if it has more than one error it takes 20 µsec to correct the errors. Blocks wait in a queue when the decoder falls behind. Suppose that the decoder is initially empty and that the numbers of errors in the first ten blocks are 0, 1, 3, 1, 0, 4, 0, 1, 1, 0, 2.

(a) Using Kendall's notation, classify this queueing system.
(b) Plot the number of blocks, N(t), in the decoder as a function of time. Specify the time instances on the x-axis at which each event occurs.
(c) Find the mean number of blocks in the decoder.
(d) What percentage of the time is the decoder empty?


Little's theorem III

As an alternative, for the cumulative processes,

  N(t) = α(t) − β(t) ≜ γ(t)  →  Nt = (1/t) ∫_0^t γ(τ) dτ

  λt = α(t)/t,  Tt = (∫_0^t γ(τ) dτ)/α(t) = ((1/t) ∫_0^t γ(τ) dτ) · (t/α(t)) = Nt/λt

As t → ∞, we have N = λT, valid for any queue (even with any service order) as long as the limits of λt and Tt exist as t → ∞.

• In the previous Matlab code: N(t) is the variable 'num_system'; α(t) is 'num_arrivals' (t corresponds to 'sim_length'); the response time per customer comes from 'total_delay'

Little's theorem IV

Little's theorem also applies to subsystems (figures):
• A finite queue
• A network of queues
• The queue plus the server:

  λT = λ(W + x̄) = Nq + ρ

where x̄ is the mean service time.
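Little's theorem holds exactly on any sample path that starts and ends empty; a small sketch (illustrative, with hypothetical arrival and departure times) verifies N̄ = λ·T̄:

```python
# Hypothetical sample path: customer i arrives at a[i] and departs at d[i].
a = [0.0, 2.0, 4.0]
d = [3.0, 5.0, 9.0]
t_end = 9.0                      # the system is empty again at t_end

# Time-average number in system: integrate N(t) by sweeping its jump events
events = sorted([(t, +1) for t in a] + [(t, -1) for t in d])
area, n, last_t = 0.0, 0, 0.0
for t, step in events:
    area += n * (t - last_t)     # N(t) was constant on [last_t, t)
    n += step
    last_t = t

N_bar = area / t_end                                    # time-average number in system
lam = len(a) / t_end                                    # arrival rate
T_bar = sum(di - ai for ai, di in zip(a, d)) / len(a)   # mean sojourn time
# N_bar equals lam * T_bar exactly (both are area / t_end)
```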

Increasing the arrival and transmission rates by the same factor

In a packet transmission system,
• the arrival rate (packets/sec) is increased from λ to Kλ for K > 1
• the packet length distribution remains the same (exponential), with mean 1/µ bits
• the transmission capacity (C bps) is increased by a factor of K

Performance:
• The average number of packets in the system remains the same:

  N = ρ/(1 − ρ)  with  ρ = λ/(µC)

• λW = N → W = N/(Kλ): the average delay per packet becomes K times smaller

Aggregation is better: increasing a transmission line by K times can allow K times as many packets/sec with a K times smaller average delay per packet.

Statistical multiplexing vs TDMA or FDMA

Multiplexing: m Poisson packet streams, each with rate λ/m (packets/sec), are transmitted over a communication link with exponentially distributed packet transmission time with mean 1/µ.

a) Statistical multiplexing:  T = 1/(µ − λ)
b) TDMA or FDMA (each stream gets 1/m of the capacity):  T = m/(µ − λ) > 1/(µ − λ)

When do we need TDMA or FDMA?
• In a multiplexer, packet generation times overlap, so it must buffer and delay some of the packets
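Plugging in assumed example numbers (λ = 0.5, µ = 1, m = 4) makes the comparison concrete (an illustrative sketch):

```python
lam, mu, m = 0.5, 1.0, 4       # assumed example values, with lam < mu

T_statmux = 1.0 / (mu - lam)   # one fast shared line
T_tdma = m / (mu - lam)        # m slow sub-channels (TDMA/FDMA)

# Statistical multiplexing is m times faster in average delay here
ratio = T_tdma / T_statmux
```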

Little's theorem: example I

Estimating throughput in a time-sharing system (figure: N terminals connected to a time-sharing computer)
• A user logs into the system through a terminal and, after an initial reflection period of average length R, submits a job that requires an average processing time P at the computer
• Jobs queue up inside the computer and are served by a single CPU according to some unspecified priority or time-sharing rule
• What is the maximum sustainable throughput of the system?
  - To estimate it, assume that a departing user immediately reenters the system or, equivalently, is immediately replaced by a new user (the system is fully loaded)
• D: the average delay between the time a job is submitted to the computer and the time its execution is completed; D lies in [P, NP]

Little's theorem: example II

The average time a user spends in the system:

  T = R + D  →  R + P ≤ T ≤ R + NP

Combining this with λ = N/T, we obtain

  N/(R + NP) ≤ λ ≤ min{ 1/P, N/(R + P) }

• The throughput is also bounded above by 1/P, the maximum job execution rate: since the execution time of a job is P units on the average, the computer cannot process more than 1/P jobs per unit time in the long run
• Bounds on the average user delay when the system is fully loaded:

  max{NP, R + P} ≤ T ≤ R + NP

• As the number of terminals N increases, the throughput approaches the maximum 1/P, while the average user delay rises essentially in direct proportion to N; the terminals are the bottleneck when N < 1 + R/P, and the limited processing power of the computer is the bottleneck when N > 1 + R/P
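A small sketch (illustrative; R = 5, P = 1 and N = 10 are assumed example values) evaluates the throughput and delay bounds:

```python
R, P, N = 5.0, 1.0, 10         # reflection time, processing time, # of terminals

lam_lo = N / (R + N * P)               # lower bound on throughput
lam_hi = min(1.0 / P, N / (R + P))     # upper bound on throughput
T_lo = max(N * P, R + P)               # delay bounds for a fully loaded system
T_hi = R + N * P
# Here N = 10 > 1 + R/P = 6, so the CPU (1/P) is the binding upper bound.
```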

It can be seen that as the number of terminals N increases, the throughput approaches the maximum 1/P, while the average user delay rises essentially in direct proportion to N. The limited number of terminals is the bottleneck when N < 1 + R/P, in which case the CPU stays idle much of the time while all users are engaged in reflection; the limited processing power of the computer is the bottleneck when N > 1 + R/P. While the exact maximum attainable throughput depends on system parameters (the statistics of the reflection and processing times and the manner in which jobs are served by the CPU), the bounds obtained are independent of these parameters, thanks to the generality of Little's theorem.
[Figure 3.5: Bounds on throughput and average user delay in a time-sharing system. (a) Bounds on attainable throughput. (b) Bounds on average user time in a fully loaded system; the time increases essentially in proportion to the number of terminals N.]

3.3 THE M/M/1 QUEUEING SYSTEM
The M/M/1 queueing system consists of a single queueing station with a single server (in a communication context, a single transmission line). Customers arrive according to a Poisson process with rate λ, and the probability distribution of the service time is exponential with mean 1/μ sec. The name M/M/1 reflects standard queueing-theory nomenclature: the first letter indicates the nature of the arrival process (M stands for memoryless, which here means a Poisson process, i.e., exponentially distributed interarrival times).

Little's theorem: example III
Using T = N/λ, we can rewrite the throughput bounds as delay bounds:
max{NP, R + P} ≤ T ≤ R + NP

Poisson Arrivals See Time Averages (PASTA) theorem I
Suppose a random process spends its time in different states E_j. In equilibrium, we can associate with each state E_j two different probabilities:
• The probability of the state as seen by an outside random observer
– π_j: prob. that the system is in state E_j at a random instant
• The probability of the state as seen by an arriving customer
– π*_j: prob. that the system is in state E_j just before a (randomly chosen) arrival
In general, we have π_j ≠ π*_j. When the arrival process is Poisson, we have
π*_j = π_j

3-69 3-70


PASTA theorem II
For a stochastic process N ≡ {N(t), t ≥ 0} and an arbitrary set of states B ⊆ ℕ, define
U(t) = 1 if N(t) ∈ B, and 0 otherwise  ⇒  V(t) = (1/t) ∫_0^t U(τ) dτ
For a Poisson arrival process A(t),
Y(t) = ∫_0^t U(τ) dA(τ)  ⇒  Z(t) = Y(t)/A(t)
Lack of Anticipation Assumption (LAA): for each t ≥ 0, {A(t + u) − A(t), u ≥ 0} and {U(s), 0 ≤ s ≤ t} are independent:
future interarrival times and the service times of previously arrived customers are independent.
Under LAA, as t → ∞, PASTA ensures:
Z(t) → V(∞) w.p. 1 if and only if V(t) → V(∞) w.p. 1

PASTA theorem III
Proof: for sufficiently large n, Y(t) is approximated as
Y_n(t) = Σ_{k=0}^{n−1} U(kt/n) [A((k+1)t/n) − A(kt/n)]
where each Poisson increment has mean (λ(k+1)t − λkt)/n = λt/n. LAA decouples the expectation of the product:
E[Y_n(t)] = λt E[ Σ_{k=0}^{n−1} U(kt/n)/n ]
As n → ∞, if |Y_n(t)| is bounded,
lim_{n→∞} E[Y_n(t)] = E[Y(t)] = λt E[V(t)] = λ E[ ∫_0^t U(τ) dτ ]
PASTA: the expected number of arrivals who find the system in state B equals the arrival rate times the expected length of time the system spends there.

3-71 3-72

Systems where PASTA does not hold
Ex1) D/D/1 queue
• Deterministic arrivals every 10 msec
• Deterministic service times of 9 msec
A sample path of the D/D/1 queue (service intervals [0, 9], [10, 19], [20, 29], ...):
• An arrival always finds the system empty, so π*_0 = 1
• Yet the system is occupied on average with probability 0.9, so π_0 = 0.1
Ex2) LAA violated: the service time of the current customer depends on an interarrival time of a future customer
• Your own PC (one customer, one server)
• Your own PC is always free when you need it: π*_0 = 1
• π_0 = proportion of time the PC is free (< 1)

M/M/1/K I
M/M/1/K: the system can accommodate K customers (including the one in service); further arrivals are blocked
State balance equations:
λπ_0 = μπ_1
(λ + μ)π_i = λπ_{i−1} + μπ_{i+1} for 1 ≤ i ≤ K − 1
μπ_K = λπ_{K−1}
After rearranging, we have
λπ_{i−1} = μπ_i for 1 ≤ i ≤ K
For i ∈ {0, 1, ..., K}, with ρ = λ/μ, the steady-state probabilities are
π_n = ρ^n π_0 and Σ_{n=0}^K π_n = 1  ⇒  π_0 = (1 − ρ)/(1 − ρ^{K+1})
3-73 3-74
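A small numerical sketch of these steady-state formulas (function name is mine):

```python
def mm1k_dist(rho, K):
    """Steady-state distribution of M/M/1/K: pi_n = rho^n * pi_0,
    normalized over n = 0..K (rho = lambda/mu, rho != 1)."""
    pi0 = (1 - rho) / (1 - rho ** (K + 1))
    return [pi0 * rho ** n for n in range(K + 1)]

# The distribution sums to 1, and each adjacent pair of states
# satisfies the balance relation lambda*pi_{i-1} = mu*pi_i.
pis = mm1k_dist(0.8, 10)
```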

M/M/1/K II
• π_K: the probability that an arriving customer finds the system full. Due to PASTA, this is also the blocking probability:
π_K = ρ^K (1 − ρ)/(1 − ρ^{K+1})
• Blocking probability in simulation:
P_B = (total # of blocked arrivals at arrival instants) / (total # of arrivals at the system)
[Plot: P_B versus ρ, analysis vs. simulation, for K = 5 and K = 10]

M/M/1/K Simulation I
clear
% Define variables
global arrival departure mservice_time
arrival = 1; departure = -1; mservice_time = 1;
% Define simulation parameters
sim_length = 30000; K = 10; system_queue = zeros(1,K);
k = 0; max_iter = 5;
for arrival_rate = 0.1:0.025:0.97
  k = k + 1;
  x(k) = arrival_rate*mservice_time;
  % initialize
  sim_time = 0; num_arrivals = 0; num_system = 0; upon_arrival = 0;
  total_delay = 0; num_served = 0; dropped = 0;
  % Assuming that the queue is empty
  event = arrival; event_time = exprnd(1/arrival_rate);
  sim_time = sim_time + event_time;
  for iter = 1:max_iter
    while (sim_time < sim_length)
      % If an arrival occurs,
      if event == arrival
        num_arrivals = num_arrivals + 1;
        if num_system == K
          dropped = dropped + 1;
        else
          num_system = num_system + 1;
          system_queue(num_system) = sim_time;
          upon_arrival = upon_arrival + num_system;
        end
        % To see whether a new arrival comes or a departure occurs next
        [event, event_time] = schedule_next_event(arrival_rate);
3-75 3-76
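For readers without MATLAB, here is a compact Python sketch of the same experiment: an event-driven M/M/1/K simulation (all names are mine). With the RNG seeded, the empirical blocking probability lands close to the analytical π_K:

```python
import random

def simulate_mm1k(lam, mu, K, num_events=200_000, seed=1):
    """Event-driven M/M/1/K simulation; returns the fraction of
    arrivals that find the system full (blocking probability)."""
    rng = random.Random(seed)
    n = 0                      # customers currently in the system
    arrivals = blocked = 0
    t_arr = rng.expovariate(lam)
    t_dep = float("inf")       # no departure scheduled while empty
    for _ in range(num_events):
        if t_arr < t_dep:      # next event is an arrival
            arrivals += 1
            if n == K:
                blocked += 1
            else:
                n += 1
                if n == 1:     # server was idle: start a service
                    t_dep = t_arr + rng.expovariate(mu)
            t_arr += rng.expovariate(lam)
        else:                  # next event is a departure
            n -= 1
            t_dep = t_dep + rng.expovariate(mu) if n > 0 else float("inf")
    return blocked / arrivals

def mm1k_block(rho, K):
    """Analytical blocking probability of M/M/1/K."""
    return rho ** K * (1 - rho) / (1 - rho ** (K + 1))
```

For λ = 0.5, μ = 1, K = 5 the analytical blocking probability is 1/63 ≈ 0.0159, and the seeded simulation agrees closely.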

M/M/1/K Simulation II
      % If a departure occurs,
      elseif event == departure
        delay_per_arrival = sim_time - system_queue(1);
        system_queue(1:K-1) = system_queue(2:K);
        total_delay = total_delay + delay_per_arrival;
        num_system = num_system - 1;
        num_served = num_served + 1;
        if num_system == 0
          % nothing to serve, schedule an arrival
          event = arrival; event_time = exprnd(1/arrival_rate);
        elseif num_system > 0
          % the system still has customers to serve
          [event, event_time] = schedule_next_event(arrival_rate);
        end
      end
      sim_time = sim_time + event_time;
    end
    Pd_iter(iter) = dropped/num_arrivals;
  end
  piK(k) = x(k)^K*(1-x(k))./(1-x(k)^(K+1));
  Pd(k) = mean(Pd_iter);
end
%%%%%%%%%%%
%% use the previous schedule_next_event function

M/M/m queue I
M/M/m: there are m parallel servers, whose service times are exponentially distributed with mean 1/μ.
[State transition rate diagram of M/M/m]
When all m servers are busy, the time until the next departure, X, is
X = min(τ_1, τ_2, ..., τ_m)  ⇒  Pr[X > t] = Pr[min(τ_1, τ_2, ..., τ_m) > t] = Π_{i=1}^m Pr[τ_i > t] = e^{−mμt} (i.i.d.)
Global balance equations:
λπ_0 = μπ_1
(λ + min(n, m)μ)π_n = λπ_{n−1} + min(n + 1, m)μπ_{n+1} for n ≥ 1
3-77 3-78

M/M/m queue II
The previous global balance equations can be rewritten as
λπ_{n−1} = min(n, m)μπ_n for n ≥ 1
Using a = λ/μ and ρ = λ/(mμ),
π_n = (a^n/n!)π_0 for n ≤ m, and π_n = (a^m/m!)ρ^{n−m}π_0 for n ≥ m
From the normalization condition, π_0 is obtained:
1 = Σ_i π_i = π_0 [ Σ_{i=0}^{m−1} a^i/i! + (a^m/m!) Σ_{i=m}^∞ ρ^{i−m} ]
Erlang C formula, C(m, a), the probability that an arriving customer must wait:
C(m, a) = Pr[W > 0] = Pr[N ≥ m] = Σ_{i=m}^∞ π_i = ((mρ)^m/m!) · π_0/(1 − ρ)

M/M/c/c I
c servers, and only c customers can be accommodated; arrivals that find all c servers busy are lost
Balance equations (a = λ/μ, called the offered load in Erlangs):
λπ_{n−1} = nμπ_n  ⇒  π_n = (a/n)π_{n−1} = (a^n/n!)π_0
Using Σ_{n=0}^c π_n = 1, we have
π_n = (a^n/n!) / [ Σ_{i=0}^c a^i/i! ]
Erlang B formula: B(c, a) = π_c
– valid for the M/G/c/c system: it depends only on the mean of the service-time distribution
3-79 3-80
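Both formulas can be evaluated directly for moderate c (a sketch; names are mine). `erlang_b` normalizes the truncated Poisson terms; `erlang_c` uses π_0 of the M/M/m queue:

```python
from math import factorial

def erlang_b(c, a):
    """Blocking probability B(c, a) of M/M/c/c, offered load a = lam/mu."""
    terms = [a ** i / factorial(i) for i in range(c + 1)]
    return terms[c] / sum(terms)

def erlang_c(m, a):
    """Waiting probability C(m, a) of M/M/m (requires rho = a/m < 1)."""
    rho = a / m
    pi0 = 1.0 / (sum(a ** i / factorial(i) for i in range(m))
                 + a ** m / (factorial(m) * (1 - rho)))
    return (a ** m / factorial(m)) * pi0 / (1 - rho)
```

For m = 2 and a = 1 (ρ = 0.5), π_0 = 1/3 and C(2, 1) = 1/3, which also illustrates the general fact C(m, a) ≥ B(m, a).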

M/M/c/c II
Erlang capacity: telephone systems with c channels
[Plots: B(c, a) versus offered traffic intensity a, for c = 1, 2, ..., 10 and for c = 10, 20, ..., 100; and P_B versus a, analysis vs. simulation, for c = 3 and c = 5]

Example: a system with blocking I
In the Select-city shopping mall, customers arrive at its underground parking lot according to a Poisson process with a rate of 60 cars per hour. Parking time follows a Weibull distribution with mean 2.5 hours, and the parking lot can accommodate 150 cars. When the parking lot is full, an arriving customer has to park his car somewhere else. Find the fraction of customers finding all places occupied upon arrival.
[Figure: two different distributions with the same mean]
– Weibull (α = 2.7228, k = 5): f(x) = (k/α)(x/α)^{k−1} e^{−(x/α)^k}
– Exponential: f(x) = (1/α) e^{−x/α}
– Mean of the Weibull distribution: αΓ(1 + 1/k), where Γ(x) = ∫_0^∞ t^{x−1} e^{−t} dt is the gamma function
3-81 3-82

Example: a system with blocking II
c = 150 and a = λ/μ = 60 × 2.5 = 150
B(c, a) = (a^c/c!) / Σ_{i=0}^c a^i/i!
Divide the numerator and denominator by Σ_{n=0}^{c−1} a^n/n!:
B(c, a) = (a^c/c!) / [ Σ_{i=0}^{c−1} a^i/i! + a^c/c! ]
        = [(a^c/c!)/Σ_{n=0}^{c−1} a^n/n!] / [1 + (a^c/c!)/Σ_{n=0}^{c−1} a^n/n!]
        = (a/c)B(c − 1, a) / [1 + (a/c)B(c − 1, a)] = aB(c − 1, a) / [c + aB(c − 1, a)]
with B(0, a) = 1

Finite source population: M/M/C/C/K system I
Consider the loss system (no waiting places) in the case where the arrivals originate from a finite population of sources: the total number of customers is K.
• The time to the next call attempt by a customer, the so-called thinking time (idle time) of the customer, obeys an exponential distribution with mean 1/λ (sec)
• Blocked calls are lost
– a blocked call does not lead to reattempts; the customer starts a new thinking time, and the time to the next attempt has the same exponential distribution with mean 1/λ
– the call holding time is exponentially distributed with mean 1/μ
3-83 3-84
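The recursion is numerically stable even for c = a = 150, where the factorials in the direct formula overflow. A sketch (name is mine):

```python
def erlang_b_rec(c, a):
    """Erlang B via the stable recursion
    B(c, a) = a*B(c-1, a) / (c + a*B(c-1, a)), starting from B(0, a) = 1."""
    b = 1.0
    for n in range(1, c + 1):
        b = a * b / (n + a * b)
    return b

# Blocking seen by parking-lot customers with c = 150 places and
# offered load a = 60 * 2.5 = 150 Erlangs:
blocking = erlang_b_rec(150, 150.0)
```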

M/M/C/C/K system II
If C ≥ K, each customer has its own server, i.e., there is no blocking.
• Each user alternates between two states: active with mean 1/μ and idle (thinking) with mean 1/λ
• The probability for a user to be idle or active is
π_0 = (1/λ)/(1/λ + 1/μ) and π_1 = (1/μ)/(1/λ + 1/μ)
• Call arrival rate: π_0 λ; offered load (or carried load per source):
π_1 = a/(1 + a), with a = λ/μ
If C < K, this system can be described as a birth-death chain with state-dependent arrival rates:
((K − i)λ + iμ)π_i = (K − i + 1)λπ_{i−1} + (i + 1)μπ_{i+1}

M/M/C/C/K system III
For j = 1, 2, ..., C, the detailed balance equations give
(K − j + 1)λπ_{j−1} = jμπ_j  ⇒  π_j = (K choose j) a^j π_0
Applying Σ_{j=0}^C π_j = 1,
π_j = (K choose j) a^j / Σ_{k=0}^C (K choose k) a^k
Time blocking (or congestion): the proportion of time the system spends in state C; the equilibrium probability of state C is
P_B = π_C
– The probability of all resources being busy in a given observation period
– Insensitivity: like the Erlang B formula, this result is insensitive to the form of the holding-time distribution (though the derivation above was explicitly based on the assumption of exponential holding times)
3-85 3-86
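A sketch of the truncated-binomial state distribution and the time congestion P_B (names are mine):

```python
from math import comb

def engset_time_dist(K, C, a):
    """Stationary distribution pi_j = binom(K,j) a^j / sum_k binom(K,k) a^k
    for the finite-source loss system, j = 0..C (C < K)."""
    w = [comb(K, j) * a ** j for j in range(C + 1)]
    s = sum(w)
    return [x / s for x in w]

def time_congestion(K, C, a):
    """Time blocking P_B = pi_C."""
    return engset_time_dist(K, C, a)[C]
```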

M/M/C/C/K system IV
Call blocking: the probability that an arriving call is blocked, P_L
• The arrival rate is state-dependent, i.e., (K − N(t))λ: the arrival process is not Poisson. PASTA does not hold, so the time blocking P_B cannot represent P_L:
P_L λ_T = P_B λ_C  →  P_L = (λ_C/λ_T) P_B ≤ P_B
• λ_T: call arrival rate on average,
λ_T ∝ Σ_{i=0}^C (K − i)λπ_i
– P_L: the probability that a call finds the system blocked
– If λ_T = 10000 and P_L = 0.01, then λ_T P_L = 100 calls are lost
• λ_C: call arrival rate when the system is in the blocking state,
λ_C ∝ (K − C)λ
– P_B λ_C: rate of blocked calls at arrival instants, so P_L λ_T = P_B λ_C
– Among the total arrivals, those that find the system blocked must equal the call arrivals that see the busy system

M/M/C/C/K system V
Call blocking P_L can be obtained by
P_L = (K − C)λπ_C / Σ_{i=0}^C (K − i)λπ_i
Engset formula:
P_L(K) = (K − C)·[K!/(C!(K − C)!)]·a^C / Σ_{i=0}^C (K − i)·[K!/(i!(K − i)!)]·a^i
       = [(K − 1)!/(C!(K − 1 − C)!)]·a^C / Σ_{i=0}^C [(K − 1)!/(i!(K − 1 − i)!)]·a^i
       = (K − 1 choose C) a^C / Σ_{i=0}^C (K − 1 choose i) a^i
– The state distribution seen by an arriving customer is the same as the equilibrium distribution in a system with one less customer. It is as if the arriving customer were an "outside observer"
– P_L(K) = P_B(K − 1); as K → ∞, P_L → P_B
3-87 3-88
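The identity P_L(K) = P_B(K − 1) can be checked numerically against the flow-based definition (sketch; names are mine):

```python
from math import comb

def _dist(K, C, a):
    """Truncated binomial state distribution of the finite-source system."""
    w = [comb(K, j) * a ** j for j in range(C + 1)]
    s = sum(w)
    return [x / s for x in w]

def call_blocking(K, C, a):
    """P_L = (K-C)*pi_C / sum_i (K-i)*pi_i (the factor lambda cancels)."""
    pis = _dist(K, C, a)
    return (K - C) * pis[C] / sum((K - i) * pis[i] for i in range(C + 1))

def time_blocking(K, C, a):
    return _dist(K, C, a)[C]
```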

Where are we?
Elementary queueing models
– M/M/1, M/M/C, M/M/C/C/K: product-form solutions
– Bulk queues (not discussed here)
Intermediate queueing models (product-form solutions)
– Time-reversibility of Markov processes
– Detailed balance equations of time-reversible MCs
– Multidimensional birth-death processes
– Networks of queues: open and closed networks
Advanced queueing models
– M/G/1-type queues: embedded MC and mean-value analysis
– M/G/1 with vacations and priority queues
– G/M/m queue
More advanced queueing models (omitted)
– Algorithmic approaches to obtain steady-state solutions

Time Reversibility of discrete-time MC I
For an irreducible, aperiodic, discrete-time MC (X_n, X_{n+1}, ...) with transition probabilities p_ij and stationary distribution π_i for all i:
The time-reversed MC is defined as X*_n = X_{τ−n} for an arbitrary τ > 0
[Diagram: forward process vs. time-reversed process]
1) Transition probabilities of X*_n:
p*_ij = π_j p_ji / π_i
2) X_n and X*_n have the same stationary distribution π_i:
Σ_{i=0}^∞ π_i p*_ij = Σ_{i=0}^∞ π_j p_ji = π_j
3-89 3-90

Time Reversibility of discrete-time MC II
• Proof for 1) p*_ij = π_j p_ji/π_i:
p*_ij = Pr[X_m = j | X_{m+1} = i, X_{m+2} = i_2, ..., X_{m+k} = i_k]
      = Pr[X_m = j, X_{m+1} = i, X_{m+2} = i_2, ..., X_{m+k} = i_k] / Pr[X_{m+1} = i, X_{m+2} = i_2, ..., X_{m+k} = i_k]
      = Pr[X_m = j, X_{m+1} = i] Pr[X_{m+2} = i_2, ..., X_{m+k} = i_k | X_m = j, X_{m+1} = i] / (Pr[X_{m+1} = i] Pr[X_{m+2} = i_2, ..., X_{m+k} = i_k | X_{m+1} = i])
      = Pr[X_m = j, X_{m+1} = i] / Pr[X_{m+1} = i]  (by the Markov property the future factors cancel)
      = Pr[X_{m+1} = i | X_m = j] Pr[X_m = j] / Pr[X_{m+1} = i]
      = p_ji π_j / π_i
• Proof for 2) Using the above result,
Σ_{i∈S} π_i p*_ij = Σ_{i∈S} π_i (π_j p_ji/π_i) = π_j Σ_{i∈S} p_ji = π_j

Time Reversibility of discrete-time MC III
• A Markov process X_n is said to be reversible if the transition probabilities of the forward and reversed chains are the same:
p*_ij = Pr[X_m = j | X_{m+1} = i] = p_ij = Pr[X_{m+1} = j | X_m = i]
• Time reversibility ⇔ detailed balance equations (DBEs) hold:
π_i p*_ij = π_j p_ji  →  π_i p_ij = π_j p_ji (detailed balance eq.)
What types of Markov processes satisfy the detailed balance equations? The discrete-time birth-death (BD) process:
• Transitions occur only between neighboring states: p_ij = 0 for |i − j| > 1
[State diagram: 0 ↔ 1 ↔ 2 ↔ ⋯]
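These two facts are easy to check numerically for any small chain (sketch; names are mine):

```python
def stationary(P, iters=10_000):
    """Stationary distribution by power iteration (rows of P sum to 1)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def reversed_chain(P):
    """Reversed-chain transition probabilities p*_ij = pi_j * p_ji / pi_i."""
    pi = stationary(P)
    n = len(P)
    return [[pi[j] * P[j][i] / pi[i] for j in range(n)] for i in range(n)]

P = [[0.0, 0.6, 0.4],
     [0.1, 0.8, 0.1],
     [0.5, 0.0, 0.5]]
Pr = reversed_chain(P)
```

The rows of the reversed chain sum to 1, and the same π is stationary for both chains, exactly as claimed on the slide.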

3-91 3-92

Time Reversibility of discrete-time MC IV
A transmitter's queue with stop-and-wait ARQ (θ = qr) in Mid-term I: is this process reversible?
[State diagram: a BD chain 0 ↔ 1 ↔ 2 ↔ ⋯ with arrival probability p and departure probability θ]
• Global balance equations (GBEs):
π_0 = (1 − p)π_0 + (1 − p)θπ_1
π_1 = pπ_0 + (pθ + (1 − p)(1 − θ))π_1 + (1 − p)θπ_2
For i = 2, 3, ..., we have
π_i = p(1 − θ)π_{i−1} + (pθ + (1 − p)(1 − θ))π_i + (1 − p)θπ_{i+1}
• Instead, we can use DBEs, or simplify the GBEs using cut equations, e.g.,
p(1 − θ)π_i = (1 − p)θπ_{i+1}  ↔  Σ_{j=0}^n Σ_{i=n+1}^∞ π_j p_ji = Σ_{j=0}^n Σ_{i=n+1}^∞ π_i p_ij

Time Reversibility of discrete-time MC V
Kolmogorov criteria
• A discrete-time Markov chain is reversible if and only if
p_{i1,i2} p_{i2,i3} ··· p_{in−1,in} p_{in,i1} = p_{i1,in} p_{in,in−1} ··· p_{i3,i2} p_{i2,i1}
for any finite sequence of states i_1, i_2, ..., i_n and any n
Proof:
• For a reversible chain, the detailed balance equations hold; multiplying them around any loop of states gives the criterion
• Fixing two states, i_1 = i and i_n = j, and multiplying over all intermediate states,
p_{i,i2} p_{i2,i3} ··· p_{in−1,j} p_{ji} = p_{ij} p_{j,in−1} ··· p_{i3,i2} p_{i2,i}
3-93 3-94

Time Reversibility of discrete-time MC VI
From the Kolmogorov criteria, we can get
p_{i,i2} p_{i2,i3} ··· p_{in−1,j} p_{ji} = p_{ij} p_{j,in−1} ··· p_{i3,i2} p_{i2,i}
Summing over all intermediate states,
p_ij^{(n−1)} p_ji = p_ij p_ji^{(n−1)}
As n → ∞, we have
lim_{n→∞} p_ij^{(n−1)} p_ji = lim_{n→∞} p_ij p_ji^{(n−1)}  →  π_j p_ji = π_i p_ij
If the state transition diagram of a Markov process is a tree, then the process is time reversible
– A generalization of BD processes: at each cut boundary, the DBE is satisfied

Time Reversibility of discrete-time MC VII
Inspect whether the following three-state MC is reversible:
P = [ 0 0.6 0.4 ; 0.1 0.8 0.1 ; 0.5 0 0.5 ]
• Using the Kolmogorov criteria,
p_12 p_23 p_31 = 0.6 × 0.1 × 0.5 ≠ p_13 p_32 p_21 = 0.4 × 0 × 0.1 = 0
• Inspecting the state transition diagram, it is not a BD process
Inspect whether the following two-state MC is reversible:
P = [ 0 1 ; 0.5 0.5 ]
– It is a small BD process
– Using the state probabilities π_0 = 1/3 and π_1 = 2/3,
π_0 p_01 = (1/3) · 1 = π_1 p_10 = (2/3) · (1/2)
3-95 3-96

receiver always arrives at this transmitter just before the time-out, tout, if it is 4 not lost. During each tout, at the transmitter’s queue, one frame is generated 3 from the upper layer with probability p and passed down to this queue. Note 2 that we assume 0 < p, q, r < 1. 1 (a) Show that the stochastic process of describing the transmitter’s queue is a Markov process. time N (n): the number of times state i occurs in the first n transitions (b) Find the global balance equations of the Markov chain. • i T (j): the occupancy time the jth time state i occurs. (c) Is the Markov chain is periodic or aperiodic? Why? • i (d) Under what condition can this Markov chain be positive recurrent, transient, or The proportion of time spent by X(t) in state i after the first n transitions null recurrent? PNi (n) time spent in state i j=1 Ti (j) (e) Determine the state probabilities πi for i ≥ 0, the probability that i frames in the = time spent in all states P PNi (n) queue, at steady-state. i j=1 Ti (j) (f) What is the mean number of frames in the transmitter’s queue? This includes one frame in service. 3-97 3-98

Relation between DTMC and CTMC II
As n → ∞, using π_i = lim_{n→∞} N_i(n)/n, we have
[ (N_i(n)/n)·(1/N_i(n)) Σ_{j=1}^{N_i(n)} T_i(j) ] / [ Σ_i (N_i(n)/n)·(1/N_i(n)) Σ_{j=1}^{N_i(n)} T_i(j) ]  →  π_i E[T_i] / Σ_i π_i E[T_i] = φ_i, with E[T_i] = 1/v_i
where π_i is the unique pmf solution to
π_j = Σ_i π_i p_ij and Σ_j π_j = 1   (*)
The long-term proportion of time spent in state i approaches
φ_i = (π_i/v_i) / Σ_i (π_i/v_i) = (π_i/v_i)/c  →  π_i = v_i φ_i / c
Substituting π_i = (v_i φ_i)/c into (*) yields
v_j φ_j / c = (1/c) Σ_i v_i φ_i p_ij  →  v_j φ_j = Σ_i φ_i v_i p_ij = Σ_i φ_i γ_ij

Relation between DTMC and CTMC III
Recall the M/M/1 queue:
a) CTMC: birth-death chain 0 ↔ 1 ↔ 2 ↔ ⋯ with rates λ (up) and μ (down)
b) Embedded MC: from state 0 the chain moves to 1 w.p. 1; from state i ≥ 1 it moves up w.p. p = λ/(λ + μ) and down w.p. q = μ/(λ + μ)
In the embedded MC, we have the following global balance equations:
π_0 = qπ_1
π_1 = π_0 + qπ_2
π_i = pπ_{i−1} + qπ_{i+1} for i ≥ 2

3-99 3-100

Relation between DTMC and CTMC IV
Using the normalization condition Σ_{i=0}^∞ π_i = 1,
π_i = (p/q)^{i−1} (1/q) π_0 for i ≥ 1, and π_0 = (1 − 2p)/(2(1 − p))
Converting the embedded MC into the CTMC (v_0 = λ, v_i = λ + μ for i ≥ 1),
φ_0 = (c/v_0)π_0 = (c/λ)π_0 and φ_i = cπ_i/v_i = (c/(λ + μ))π_i
Determine c:
Σ_{i=0}^∞ φ_i = 1  →  c( π_0/λ + (1/(λ + μ)) Σ_{i=1}^∞ π_i ) = 1  →  c = 2λ
Finally, we get φ_i = ρ^i (1 − ρ) for i = 0, 1, 2, ...

Continuous-time reversible MC I
For a continuous-time MC X(t), we have a discrete-time embedded Markov chain whose stationary pmf and state transition probabilities are π_i and p̃_ij.
[Diagrams: forward process vs. reverse process; the embedded Markov process; the CTMC 0 ↔ 1 ↔ 2 ↔ ⋯ and its embedded MC (a BD process)]
There is a reversed embedded MC with π_i p̃*_ij = π_j p̃_ji for all i ≠ j.

3-101 3-102
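The conversion can be checked numerically on a truncated M/M/1 chain (sketch; all names are mine). We build the embedded jump chain, solve for its pmf π by (lazy) power iteration, weight by the mean occupancy times 1/v_i, and recover the geometric CTMC distribution ρ^i(1 − ρ):

```python
def mm1_time_fractions(lam, mu, N=40, iters=5000):
    """Long-run fractions of time phi_i for an M/M/1 chain truncated at a
    large level N, computed from its embedded jump chain via
    phi_i = (pi_i / v_i) / sum_k (pi_k / v_k).  Truncation error is
    negligible for rho = lam/mu well below 1."""
    p, q = lam / (lam + mu), mu / (lam + mu)
    pi = [1.0 / (N + 1)] * (N + 1)
    for _ in range(iters):
        new = [0.0] * (N + 1)
        new[1] += pi[0]            # state 0 always jumps to 1
        new[N - 1] += pi[N]        # state N always jumps down
        for i in range(1, N):
            new[i - 1] += q * pi[i]
            new[i + 1] += p * pi[i]
        # the lazy step keeps the periodic jump chain converging
        pi = [0.5 * pi[i] + 0.5 * new[i] for i in range(N + 1)]
    v = [lam] + [lam + mu] * (N - 1) + [mu]   # total exit rates
    w = [pi[i] / v[i] for i in range(N + 1)]
    s = sum(w)
    return [x / s for x in w]
```

For λ = 1, μ = 2 (ρ = 0.5) this reproduces φ_0 ≈ 1 − ρ = 0.5 and φ_1 ≈ ρ(1 − ρ) = 0.25, matching the slide's closed form.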

Continuous-time reversible MC II
Recall the state occupancy time of the forward process:
Pr[T_i > t + s | T_i > t] = Pr[T_i > s] = e^{−v_i s}
If X(t) = i, the probability that the reversed process has remained in state i for an additional s seconds is
Pr[X(t') = i, t − s ≤ t' ≤ t | X(t) = i] = e^{−v_i s}
; after staying for time t, the probability that it stays s sec more
[Diagrams: forward process vs. reverse process; the embedded Markov process]

Continuous-time reversible MC III
A continuous-time MC whose stationary probability of state i is θ_i, and whose state transition rate from j to i is γ_ji, has a reversed MC whose state transition rate is γ*_ij, if we find γ*_ij satisfying
γ*_ij = v_i p̃*_ij = v_i (π_j p̃_ji / π_i) = θ_j γ_ji / θ_i
using, from the embedded MC, π_i ∝ θ_i v_i and p̃_ji = γ_ji / v_j
– p̃*_ij: the state transition probability of the reversed embedded MC
– A continuous-time MC whose state occupancy times are exponentially distributed is reversible if its embedded MC is reversible
Additionally, we have v*_j = v_j: since γ*_ij = θ_j γ_ji / θ_i,
Σ_{j≠i} γ*_ij = (1/θ_i) Σ_{j≠i} θ_j γ_ji = (1/θ_i)(θ_i v_i) = v_i
so the reversed chain leaves each state at the same total rate.

3-103 3-104

Continuous-time reversible MC IV
Detailed balance equations hold for continuous-time reversible MCs:
θ_j γ_ji (input rate to i) = θ_i γ_ij (output rate from i), e.g., for j = i + 1
– Birth-death systems, with γ_ij = 0 for |i − j| > 1
– Since the embedded MC is reversible,
π_i p̃_ij = π_j p̃_ji  →  (v_i θ_i/c) p̃_ij = (v_j θ_j/c) p̃_ji  →  θ_i γ_ij = θ_j γ_ji
If there exists a set of positive numbers θ_i that sum up to 1 and satisfy
θ_i γ_ij = θ_j γ_ji for i ≠ j
then the MC is reversible and θ_i is the unique stationary distribution
– Birth-death processes, e.g., M/M/1, M/M/c, M/M/∞
Kolmogorov criterion for continuous-time MCs
– A continuous-time Markov chain is reversible if and only if
γ_{i1,i2} γ_{i2,i3} ··· γ_{in,i1} = γ_{i1,in} γ_{in,in−1} ··· γ_{i3,i2} γ_{i2,i1}
– The proof is the same as in the discrete-time reversible MC

M/M/2 queue with heterogeneous servers I
Servers A and B with service rates μ_A and μ_B. When the system is empty, arrivals go to A with probability p and to B with probability 1 − p. Otherwise, the head of the queue takes the first free server.
[State diagram: 0 → {1A, 1B} → 2 → 3 → ⋯]
Under what condition is this system time-reversible?
For n = 2, 3, ..., we have
π_n = π_2 (λ/(μ_A + μ_B))^{n−2}
Global balance equations along the cuts:
λπ_0 = μ_A π_{1A} + μ_B π_{1B}
(μ_A + μ_B)π_2 = λ(π_{1A} + π_{1B})
(μ_A + λ)π_{1A} = pλπ_0 + μ_B π_2
3-105 3-106

M/M/2 queue with heterogeneous servers II
After some manipulations,
π_{1A} = (λ/μ_A) · (λ + p(μ_A + μ_B))/(2λ + μ_A + μ_B) · π_0
π_{1B} = (λ/μ_B) · (λ + (1 − p)(μ_A + μ_B))/(2λ + μ_A + μ_B) · π_0
π_2 = (λ²/(μ_A μ_B)) · (λ + (1 − p)μ_A + pμ_B)/(2λ + μ_A + μ_B) · π_0
π_0 can be determined from π_0 + π_{1A} + π_{1B} + Σ_{n=2}^∞ π_n = 1
• If it is reversible (take p = 1/2), use the detailed balance equations:
(1/2)λπ_0 = μ_A π_{1A}  →  π_{1A} = 0.5(λ/μ_A)π_0
(1/2)λπ_0 = μ_B π_{1B}  →  π_{1B} = 0.5(λ/μ_B)π_0
π_2 = (0.5λ²/(μ_A μ_B)) π_0
– Verify with the Kolmogorov criterion: is this a reversible MC?

Multidimensional Markov chains I
Suppose that X_1(t) and X_2(t) are independent reversible MCs. Then X(t) = (X_1(t), X_2(t)) is a reversible MC.
• Two independent M/M/1 queues, where the arrival and service rates at queue i are λ_i and μ_i
– (N_1(t), N_2(t)) forms an MC
[2-D state transition diagram with rates λ_1, μ_1 (horizontal) and λ_2, μ_2 (vertical)]
Stationary distribution:
p(n_1, n_2) = (1 − ρ_1)ρ_1^{n_1} (1 − ρ_2)ρ_2^{n_2}
Detailed balance equations:
μ_1 p(n_1 + 1, n_2) = λ_1 p(n_1, n_2)
μ_2 p(n_1, n_2 + 1) = λ_2 p(n_1, n_2)
Verify that the Markov chain is reversible – Kolmogorov criterion
3-107 3-108

Multidimensional Markov chains II
– Owing to time-reversibility, detailed balance equations hold:
μ_1 π(n_1 + 1, n_2) = λ_1 π(n_1, n_2)
μ_2 π(n_1, n_2 + 1) = λ_2 π(n_1, n_2)
– Stationary state distribution:
π(n_1, n_2) = (1 − λ_1/μ_1)(λ_1/μ_1)^{n_1} (1 − λ_2/μ_2)(λ_2/μ_2)^{n_2}
• Can be generalized to any number of independent queues, e.g., M/M/1, M/M/c or M/M/∞:
π(n_1, n_2, ..., n_K) = π_1(n_1)π_2(n_2) ··· π_K(n_K)
– 'Product form' distribution

Truncation of a Reversible Markov chain I
X(t) is a reversible Markov process with state space S and stationary distribution π_j for j ∈ S.
– Truncated to a set E ⊂ S such that the resulting chain Y(t) is irreducible. Then Y(t) is reversible and has the stationary distribution
π̂_j = π_j / Σ_{k∈E} π_k for j ∈ E
– This is the conditional probability that, in steady state, the original process is at state j, given that it is somewhere in E
Proof:
π̂_j q_ji = π̂_i q_ij  ⇒  (π_j / Σ_{k∈E} π_k) q_ji = (π_i / Σ_{k∈E} π_k) q_ij  ⇒  π_j q_ji = π_i q_ij
Σ_{j∈E} π̂_j = Σ_{j∈E} π_j / Σ_{k∈E} π_k = 1
3-109 3-110

Truncation of a Reversible Markov chain II
Markov processes for M/M/1 and M/M/C are reversible.
• State probabilities of the M/M/1/K queue:
π_i = (1 − ρ)ρ^i / Σ_{i=0}^K (1 − ρ)ρ^i = (1 − ρ)ρ^i / (1 − ρ^{K+1}) for ρ = λ/μ
– Truncated version of the M/M/1/∞ queue
• State probabilities of the M/M/c/c queue:
– the M/M/c/∞ queue has ρ = λ/(cμ), a = λ/μ and π_n = ρ^{max(0,n−c)} (a^{min(n,c)}/min(n,c)!) π_0
– Truncated version of the M/M/c/∞ queue:
π̂_n = π_n / Σ_{n=0}^c π_n = (a^n/n!) / Σ_{i=0}^c (a^i/i!)
The theorem specifies the joint distribution up to the normalization constant; calculation of the normalization constant is often tedious.

Truncation of a Reversible Markov chain III
Two independent M/M/1 queues of the previous example share a common buffer of size B (= 2)
• An arriving customer who finds B customers waiting is blocked
• State space restricted to E = {(n_1, n_2) : (n_1 − 1)^+ + (n_2 − 1)^+ ≤ B}
[State diagram for B = 2]
• Distribution of the truncated chain:
π(n_1, n_2) = π(0, 0) ρ_1^{n_1} ρ_2^{n_2} for (n_1, n_2) ∈ E
• Normalizing:
π(0, 0) = 1 / Σ_{(n_1,n_2)∈E} ρ_1^{n_1} ρ_2^{n_2}
3-111 3-112
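The truncated joint distribution is easy to tabulate (sketch; names are mine):

```python
def shared_buffer_dist(rho1, rho2, B):
    """Stationary distribution of two M/M/1 queues sharing a buffer of
    size B: states E = {(n1, n2): (n1-1)^+ + (n2-1)^+ <= B}, with
    pi(n1, n2) = pi(0, 0) * rho1^n1 * rho2^n2."""
    states = [(n1, n2)
              for n1 in range(B + 2)       # (n_i - 1)^+ <= B, so n_i <= B + 1
              for n2 in range(B + 2)
              if max(n1 - 1, 0) + max(n2 - 1, 0) <= B]
    z = sum(rho1 ** n1 * rho2 ** n2 for n1, n2 in states)
    return {(n1, n2): rho1 ** n1 * rho2 ** n2 / z for n1, n2 in states}

dist = shared_buffer_dist(0.5, 0.4, 2)
```

Inside E the detailed balance equations of the original product-form chain continue to hold, which is exactly the content of the truncation theorem.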

Truncation of a Reversible Markov chain IV
Two session classes in a circuit switching system with preferential treatment for one class, for a total of C channels:
• Type 1: Poisson arrivals with rate λ_1 require exponentially distributed service with rate μ_1 – admissible only up to K channels
• Type 2: Poisson arrivals with rate λ_2 require exponentially distributed service with rate μ_2 – can be accepted until all C channels are used up
S = {(n_1, n_2) | 0 ≤ n_1 ≤ K, n_1 + n_2 ≤ C}
[Figure: transition diagram for the new call bounding scheme]
The state probabilities can be obtained as (ρ_i = λ_i/μ_i):
P(n_1, n_2) = P(0, 0) (ρ_1^{n_1}/n_1!)(ρ_2^{n_2}/n_2!) for 0 ≤ n_1 ≤ K, n_1 + n_2 ≤ C, n_2 ≥ 0
– P(0, 0) can be determined by Σ_{n_1,n_2} P(n_1, n_2) = 1

Truncation of a Reversible Markov chain V
Blocking probability of type 1 (blocked when n_1 = K or n_1 + n_2 = C):
P_b1 = [ Σ_{n_2=0}^{C−K} (ρ_1^K/K!)(ρ_2^{n_2}/n_2!) + Σ_{n_1=0}^{K−1} (ρ_1^{n_1}/n_1!)(ρ_2^{C−n_1}/(C−n_1)!) ] / [ Σ_{n_1=0}^K (ρ_1^{n_1}/n_1!) Σ_{n_2=0}^{C−n_1} (ρ_2^{n_2}/n_2!) ]
Blocking probability of type 2 (blocked when n_1 + n_2 = C):
P_b2 = [ Σ_{n_1=0}^K (ρ_1^{n_1}/n_1!)(ρ_2^{C−n_1}/(C−n_1)!) ] / [ Σ_{n_1=0}^K (ρ_1^{n_1}/n_1!) Σ_{n_2=0}^{C−n_1} (ρ_2^{n_2}/n_2!) ]
For this kind of system, the blocking probabilities are valid for a broad class of holding-time distributions.

3-113 3-114
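A direct evaluation of these truncated product-form formulas (sketch; names are mine):

```python
from math import factorial

def two_class_blocking(rho1, rho2, C, K):
    """P_b1 and P_b2 for a two-class circuit-switching system in which
    class 1 may occupy at most K of the C channels."""
    def term(n1, n2):
        return (rho1 ** n1 / factorial(n1)) * (rho2 ** n2 / factorial(n2))
    norm = sum(term(n1, n2)
               for n1 in range(K + 1) for n2 in range(C - n1 + 1))
    pb1 = (sum(term(K, n2) for n2 in range(C - K + 1))
           + sum(term(n1, C - n1) for n1 in range(K))) / norm
    pb2 = sum(term(n1, C - n1) for n1 in range(K + 1)) / norm
    return pb1, pb2
```

As a sanity check, with K = C the scheme degenerates to the nonprioritized case and both classes see the same blocking probability.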


Network of queues
Open queueing networks
• Two queues in tandem (BG, p. 210)
• Assume that the service time is proportional to the packet length
[Diagram: arrivals → Queue 1 → Queue 2 → departures]
– Interarrival times at the second queue are strongly correlated with the packet lengths at the first queue, i.e., with its service times!
– Arrivals at queue 2 come in bursts: the first queue is an M/M/1 queue, but the second queue cannot be considered as an M/M/1 queue
Closed queueing networks

Networks of queues
Three things are needed:
• Kleinrock's independence assumption
– each link works as an M/M/1 queue
• Burke's theorem
– the output process of an M/M/1 queue is a Poisson process with mean rate λ
• Time-reversibility
3-115 3-116

Kleinrock's Independence Approximation I
In real networks, many queues interact with each other:
– a traffic stream departing from one or more queues enters one or more other queues, possibly after merging with other streams departing from yet other queues
• Service times at the various queues are not independent, e.g., under state-dependent flow control
• Packet interarrival times are correlated with packet lengths
Kleinrock's independence approximation:
• the M/M/1 queueing model works for each link
– sufficient mixing of several packet streams on a transmission line makes interarrival times and packet lengths independent
Good approximation when:
* Poisson arrivals at the entry points of the network
* Packet transmission times are 'nearly' exponential
* Several packet streams are merged on each link
* Densely connected network and moderate to heavy traffic load

Kleinrock's Independence Approximation II
Suppose several packet streams, each following a unique path through the network: appropriate for a virtual circuit network, e.g., ATM
• x_s: arrival rate of packet stream s
• f_ij(s): the fraction of the packets of stream s that cross link (i, j)
Total arrival rate at link (i, j):
λ_ij = Σ_{all packet streams s crossing link (i,j)} f_ij(s) x_s

3-117 all packet streams s 3-118 crossing link (i, j)
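As a concrete illustration of the traffic equation λij = Σs fij(s)xs, the sketch below accumulates each stream's rate over the links on its route (the stream names, routes, and rates are made-up examples, not from the slides; each stream sends all of its packets over its whole route, i.e., fij(s) = 1 on the route):

```python
# Sketch: total arrival rate on each link from per-stream rates and routes.

def link_rates(streams):
    """streams: list of (rate x_s, route as a list of links (i, j))."""
    lam = {}
    for rate, route in streams:
        for link in route:
            lam[link] = lam.get(link, 0.0) + rate  # f_ij(s) = 1 on the route
    return lam

streams = [
    (5.0, [("A", "B"), ("B", "C")]),  # stream 1: A -> B -> C
    (3.0, [("A", "B")]),              # stream 2: A -> B
    (2.0, [("B", "C")]),              # stream 3: B -> C
]
lam = link_rates(streams)
print(lam)  # {('A', 'B'): 8.0, ('B', 'C'): 7.0}
```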

Kleinrock’s Independence Approximation III Kleinrock’s Independence Approximation IV

Based on M/M/1 (with Kleinrock’s independence approximation), the number of packets in queue or service at link (i, j) on average is

N̄ij = λij / (µij − λij)

– 1/µij is the average packet transmission time on link (i, j)
• The average number of packets over all queues and the average delay per packet are

N̄ = Σ(i,j) N̄ij and T = (1/γ) Σ(i,j) N̄ij

– γ = Σs xs: the total arrival rate in the system
• If the average processing and propagation delay dij at link (i, j) is not negligible,

T = (1/γ) Σ(i,j) [ λij/(µij − λij) + λij dij ]   (3.103)

• The average delay per packet of a traffic stream traversing a path p is

Tp = Σ over all (i, j) on path p [ λij/(µij(µij − λij)) + 1/µij + dij ]   (3.104)

where the three terms represent the average waiting time in queue, the average transmission time, and the processing and propagation delay, respectively.
– If packet lengths were somehow redrawn independently from an exponential distribution at each outgoing link, the M/M/1 formula would in fact be exact — a consequence of Jackson’s theorem (BG Section 3.8)
– With a non-exponential packet length distribution, one may keep the independence approximation between queues but use the P-K formula for the queueing delay in place of the M/M/1 formula

In datagram networks including multiple-path routing for some origin–destination pairs, the M/M/1 approximation often fails
• Node A sends traffic to node B along two links, each with service rate µ; packets arrive at A according to a Poisson process with rate λ and are divided equally between the two links
– Random splitting: each link may behave like an M/M/1 queue with arrival rate λ/2,

TR = 1/(µ − λ/2) = 2/(2µ − λ)

– Metering: arriving packets are assigned to the queue with the smallest backlog → approximated as an M/M/2 system with a common queue,

TM = 2/((2µ − λ)(1 + ρ)) < TR

∗ Metering destroys the M/M/1 approximation
3-119 3-120
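The two division policies can be compared numerically; the sketch below just evaluates the two delay formulas above (the values of λ and µ are illustrative):

```python
# Sketch comparing the two splitting policies for the two-link example
# (Poisson rate lam divided over two links of service rate mu each).

def t_random(lam, mu):
    """Random splitting: each link is M/M/1 with arrival rate lam/2."""
    return 1.0 / (mu - lam / 2.0)

def t_metering(lam, mu):
    """Metering: the system behaves like M/M/2; rho = lam / (2 mu)."""
    rho = lam / (2.0 * mu)
    return 2.0 / ((2.0 * mu - lam) * (1.0 + rho))

lam, mu = 1.5, 1.0
print(t_random(lam, mu), t_metering(lam, mu))  # 4.0 vs. about 2.286
```

Metering always gives the smaller delay, consistent with TM < TR.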

Burke’s theorem I Burke’s theorem II

For M/M/1, M/M/c, M/M/∞ with arrival rate λ (without bulk arrivals and service):
– B1. The departure process is Poisson with rate λ.
– B2. At each time t, the number of jobs in the system at time t is independent of the sequence of departure times prior to time t.

• Because M/M/1 is time-reversible, the reverse process is statistically identical to the forward process.
– The arrival process in the forward process corresponds to the departure process in the reverse process
– The departures in forward time form a Poisson process, which is the arrivals in backward time
• The sequence of departure times prior to time t in the forward process is exactly the sequence of arrival times after time t in the reverse process.
• Since the arrival process in the reverse process is an independent Poisson process, the future arrival process does not depend on the current number in the system.
• Hence the past departure process in the forward process (which is the future arrival process in the reverse process) does not depend on the current number in the system.
3-121 3-122
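Claim B1 can be checked by a quick simulation; the sketch below (an assumed M/M/1 with λ = 1, µ = 2, not a case from the slides) verifies that the long-run departure rate equals the arrival rate λ:

```python
import random

# Sketch: simulate an M/M/1 queue and check that the long-run mean
# interdeparture time is 1/lam, as Burke's theorem predicts.

def mm1_departures(lam, mu, n, seed=1):
    rng = random.Random(seed)
    t_arr, free_at, dep = 0.0, 0.0, []
    for _ in range(n):
        t_arr += rng.expovariate(lam)      # Poisson arrivals
        start = max(t_arr, free_at)        # FIFO single server
        free_at = start + rng.expovariate(mu)
        dep.append(free_at)
    return dep

dep = mm1_departures(1.0, 2.0, 20000)
gaps = [b - a for a, b in zip(dep, dep[1:])]
mean_gap = sum(gaps) / len(gaps)
print(mean_gap)  # close to 1/lam = 1.0
```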

Two M/M/1 Queues in Tandem Open queueing networks

The service times of a customer at the first and the second queues are mutually independent, as well as independent of the arrival process.

[Figure: Queue 1 → Queue 2]

• Based on Burke’s theorem B1, queue 2 in isolation is an M/M/1 queue
– Pr[m at queue 2] = ρ2^m (1 − ρ2)
• B2: the number of customers presently in queue 1 is independent of the sequence of departure times prior to t (the earlier arrivals at queue 2)
– hence independent of the number of customers presently in queue 2

Pr[n at queue 1 and m at queue 2] = Pr[n at queue 1] Pr[m at queue 2] = ρ1^n (1 − ρ1) ρ2^m (1 − ρ2)

Consider a network of K first-come first-served, single-server queues, each of which has unlimited queue size and exponentially distributed service times with rate µk; external arrivals enter the network and customers follow routing paths through it.
• Traffic equation with routing probabilities pij (matrix P = [pij]):

λi = αi + Σ(j=1..K) λj pji, with Σ(i=0..K) pji = 1

– pi0: the fraction of the flow from queue i going to the outside
– the λi can be uniquely determined by solving

λ = α + λP ⇒ λ = α(I − P)^(−1)

3-123 3-124
Open queueing networks II Jackson’s theorem I

Let n = (n1, ..., nK) denote a state (row) vector of the network.
• The limiting queue length distribution is

π(n) = lim(t→∞) Pr[X1(t) = n1, ..., XK(t) = nK]

– ei = (0, ..., 1, ..., 0), i.e., the 1 is in the ith position
– π(n − ei) denotes π(n1, n2, ..., ni − 1, ..., nK)

Using time-reversibility, guess detailed balance equations (DBEs) as

λi π(n − ei) = µi π(n),  λi π(n) = µi π(n + ei),  and  λj π(n − ei) = µj π(n + ej − ei)

• Global balance equation (GBE): total rate out of n = total rate into n

(α + Σi µi) π(n) = Σi αi π(n − ei) [external arrivals] + Σi pi0 µi π(n + ei) [go outside from i] + Σi Σj pji µj π(n + ej − ei) [from j to i]

• Substituting the DBEs into the GBE gives

RHS = π(n) [ Σi αi µi/λi + Σi pi0 λi + Σi (µi/λi) Σj pji λj ]
    = π(n) [ Σi pi0 λi + Σi µi (αi + Σj pji λj)/λi ]
    = π(n) (α + Σi µi)

– in the numerator: λi = αi + Σj pji λj, and Σi pi0 λi = α
3-125 3-126
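The traffic equations λ = α + λP can be solved by fixed-point iteration (or directly as λ = α(I − P)^(−1)); the sketch below uses the CPU/secondary-storage example that appears later in these notes, with an illustrative value of p:

```python
# Sketch: solve the traffic equations lambda = alpha + lambda P by
# fixed-point iteration (converges when the spectral radius of P is < 1).

def solve_traffic(alpha, P, iters=200):
    K = len(alpha)
    lam = list(alpha)
    for _ in range(iters):
        lam = [alpha[i] + sum(lam[j] * P[j][i] for j in range(K))
               for i in range(K)]
    return lam

p = 0.25                 # completion probability at the CPU (illustrative)
alpha = [1.0, 0.0]       # external arrivals only at queue 1
P = [[0.0, 1 - p],       # queue 1 -> queue 2 w.p. 1-p, else leaves
     [1.0, 0.0]]         # queue 2 always returns to queue 1
lam = solve_traffic(alpha, P)
print(lam)  # expect [alpha/p, (1-p) alpha/p] = [4.0, 3.0]
```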

Jackson’s theorem II Summary of time-reversibility: CTMC

From the DBEs, we have

π(n1, ..., ni, ..., nK) = (λi/µi) π(n1, ..., ni − 1, ..., nK)

and

π(n1, ..., ni − 1, ..., nK) = (λi/µi) π(n1, ..., ni − 2, ..., nK)

which is finally rearranged as

π(n1, ..., ni, ..., nK) = (λi/µi)^ni π(n1, ..., 0, ..., nK)

Repeating for i = 1, 2, ..., K,

π(n) = π(0) Π(i=1..K) (λi/µi)^ni

– π(0) = [ Π(i=1..K) Σ(ni=0..∞) ρi^ni ]^(−1) and ρi = λi/µi

The forward CTMC has transition rates γij.
The reversed chain is a CTMC with transition rates γ*ij = (φj/φi) γji.
If we find positive numbers φi, summing to unity and such that the scalars γ*ij satisfy Σj γij = Σj γ*ij for all i ≥ 0, then φi is the stationary distribution of both the forward and reverse chains.
– Proof:

Σ(j≠i) φj γji = Σ(j≠i) φi γ*ij = φi Σ(j≠i) γ*ij = φi Σ(j≠i) γij : Global balance equation

3-127 3-128 Jackson’s theorem: proof of DBEs I Jackson’s theorem: proof of DBEs II

Proving the DBEs based on time-reversibility
• Construct a routing matrix P* = [p*ij] of the reversed process
• The rate from node i to j must be the same in the forward and reverse directions,

λi pij = λj p*ji

– λj p*ji: the output rate from server j is λj, and p*ji is the fraction moving from j to i; α*i = λi pi0; p*i0 = αi/λi
• We need to show (recall θi γij = θj γ*ji)

π(n) vn,m = π(m) v*m,n and Σm vn,m = Σm v*n,m

– vn,m and v*n,m denote the state transition rates of the forward and reversed processes

We need to consider the following three cases
• An arrival to server i from outside the network in the forward process corresponds to a departure out of the network from server i in the reversed process,

π(n) vn,n+ei = π(n + ei) v*n+ei,n

• A departure to the outside in the forward process corresponds to an arrival from the outside in the reversed process,

π(n) vn,n−ei = π(n − ei) v*n−ei,n

• Leaving queue i and joining queue j in the forward process (vn,n−ei+ej = µi pij) corresponds to leaving queue j and joining queue i in the reversed process (v*n−ei+ej,n = µj p*ji = λi pij µj/λj),

π(n) vn,n−ei+ej = π(n − ei + ej) v*n−ei+ej,n

3-129 3-130

Jackson’s theorem: proof of DBEs III Jackson’s theorem: proof of DBEs IV

1) π(n) vn,n+ei = π(n + ei) v*n+ei,n : an arrival to server i from outside the network in the forward process corresponds to a departure out of the network from server i in the reversed process, i.e.,

v*n+ei,n = µi (1 − Σj p*ij) = µi (1 − Σj λj pji/λi)   [use p*ij = λj pji/λi]
        = (µi/λi)(λi − Σj λj pji) = αi µi/λi = αi/ρi   [traffic eqn.: λi = αi + Σj λj pji]

Substituting this into 1) (vn,n+ei = αi : arrival to server i from outside),

Π(i=1..K) πi(ni) αi = πi(ni + 1) (αi/ρi) Π(j≠i) πj(nj)

After canceling, we have

πi(ni + 1) = ρi πi(ni) ⇒ πi(ni) = ρi^ni (1 − ρi)

2) π(n) vn,n−ei = π(n − ei) v*n−ei,n : a departure to the outside in the forward process corresponds to an arrival from the outside in the reversed process,

v*n−ei,n = α*i = λi − Σj λj p*ji = λi − Σj λj (λi pij/λj) = λi (1 − Σj pij) = λi pi0
[traffic equation for the reversed process]

3-131 3-132 Jackson’s theorem: proof of DBEs V Jackson’s theorem: proof of DBEs VI

Substituting this with vn,n−ei = µi pi0 (departure to the outside),

(1 − ρi) ρi^ni Π(k≠i) πk(nk) µi pi0 = (1 − ρi) ρi^(ni−1) Π(k≠i) πk(nk) λi pi0

which again gives πi(ni) = ρi^ni (1 − ρi).

3) π(n) vn,n−ei+ej = π(n − ei + ej) v*n−ei+ej,n : leaving queue i and joining queue j in the forward process (vn,n−ei+ej = µi pij) corresponds to leaving queue j and joining queue i in the reversed process, i.e.,

v*n−ei+ej,n = µj p*ji = λi pij µj/λj,

(1 − ρi) ρi^ni (1 − ρj) ρj^nj Π(k≠i,j) πk(nk) µi pij
= (1 − ρi) ρi^(ni−1) (1 − ρj) ρj^(nj+1) Π(k≠i,j) πk(nk) µj p*ji   [use p*ji = λi pij/λj]

Summary of the transition rates of the forward and reverse processes

Transition        Forward vn,m       Reverse v*n,m       Comment
n → n + ei        αi                 λi (1 − Σj pij)     all i
n → n − ei        µi (1 − Σj pij)    αi µi/λi            all i: ni > 0
n → n − ei + ej   µi pij             λj pji µi/λi        all i: ni > 0, all j

4) Finally, we verify the total rate equation, Σm vn,m = Σm v*n,m:

Σm v*n,m = Σi λi (1 − Σj pij) + Σ(i: ni>0) ( αi µi/λi + Σj λj pji µi/λi )
         = Σi λi − Σj (λj − αj) + Σ(i: ni>0) µi   [use λj = αj + Σi λi pij and λi = αi + Σj λj pji]
         = Σi αi + Σ(i: ni>0) µi = Σm vn,m.  □
3-133 3-134

Open queueing networks: Extension I Open queueing networks: Extension II

The product-form solution of Jackson’s theorem remains valid for the following networks of queues:
• State-dependent service rates
– 1/µi(ni): the mean of queue i’s exponentially distributed service time when ni is the number of customers in the ith queue just before the customer’s departure

ρi(ni) = λi/µi(ni), i = 1, ..., K, ni = 1, 2, ...

– λi: the total arrival rate at queue i, determined by the traffic equation
– Define P̂j(nj) as

P̂j(nj) = 1 if nj = 0, and ρj(1) ρj(2) ··· ρj(nj) if nj > 0

– For all states n = (n1, ..., nK),

P(n) = P̂1(n1) P̂2(n2) ··· P̂K(nK) / G, where G = Σ(n1=0..∞) ··· Σ(nK=0..∞) P̂1(n1) ··· P̂K(nK)

• Multiple classes of customers
– Provided that the service time distribution at each queue is the same for all customer classes, the product-form solution is valid for the system with different classes of customers, i.e.,

λj(c) = αj(c) + Σ(i=1..K) λi(c) pij(c)

– αj(c): the rate of external arrivals of class c at queue j; pij(c): the routing probabilities of class c
– See pp. 230–231 in the textbook for more details

3-135 3-136 Open queueing networks: Performance measure Open queueing networks: example A-I

• The state probability distribution has been derived
• Mean number of hops traversed:

h̄ = λ/α = Σ(i=1..K) λi / Σ(i=1..K) αi

• Throughput of queue i: λi
• Total throughput of the queueing network: α
• Mean number of customers at queue i (ρi = λi/µi):

N̄i = ρi/(1 − ρi)

• System response time T:

T = N̄/α = (1/α) Σ(i=1..K) N̄i = (1/α) Σ(i=1..K) λi Ti = (1/α) Σ(i=1..K) λi/(µi − λi)

New programs arrive at a CPU according to a Poisson process of rate α. A program spends an exponentially distributed execution time of mean 1/µ1 in the CPU. At the end of this service time, the program execution is complete with probability p, or it requires retrieving additional information from secondary storage with probability 1 − p. Suppose that the retrieval of information from secondary storage requires an exponentially distributed amount of time with mean 1/µ2. Find the mean time that each program spends in the system.

3-137 3-138
Open queueing networks: example A-II Open queueing networks: example B-I

• Arrival rate into each queue: λ1 = α + λ2 and λ2 = (1 − p)λ1, so

λ1 = α/p and λ2 = (1 − p)α/p

• Each queue behaves like an M/M/1 system, so

E[N1] = ρ1/(1 − ρ1) and E[N2] = ρ2/(1 − ρ2)

where ρ1 = λ1/µ1 and ρ2 = λ2/µ2
• Using Little’s result, the total time spent in the system is

E[T] = E[N1 + N2]/α = (1/α) ( ρ1/(1 − ρ1) + ρ2/(1 − ρ2) )

E.g., consider the following network with three routers
• External packet arrivals: Poisson processes with γA = 350, γB = 150, and γC = 150 packets/sec
• Packet lengths: exponentially distributed with mean 50 kbits/packet
• Links: L1 (A→B), L2 (A→C), L3 (B→C), L4 (C→A)
Assumptions:
(a) Packets moving along a path from source to destination have their lengths selected independently at each outgoing link → Kleinrock’s independence assumption
(b) Channel capacity of link i: Ci = 17.5 Mbps for i = 1, 2, 3, 4 → service rate at link i: exponentially distributed with rate µi = Ci/50000 = 350 packets/sec

3-139 3-140 Open queueing networks: example B-II Open queueing networks: example B-II

• Traffic matrix (packets per second):

from → to   A      B      C
A           –      150    200 (50% through B, 50% directly to C)
B           50     –      100
C           100    50     –

• First, we need to know the link traffic:

traffic type   L1       L2       L3       L4
A → B          150
A → C          100      100      100
B → A                            50       50
B → C                            100
C → A                                     100
C → B          50                         50
total          λ1=300   λ2=100   λ3=250   λ4=200

• Since α = 650 and λ = Σ λi = 850, the mean number of hops is

h̄ = 850/650 = 1.3077

• We get link utilization, mean number, and response time as

        L1              L2              L3              L4
ρi      300/350=0.857   100/350=0.286   250/350=0.714   200/350=0.571
N̄i      300/50=6        100/250=0.4     250/100=2.5     200/150=1.33
Ti      1/50=0.02       1/250=0.004     1/100=0.01      1/150=0.0067

– N̄i = λi/(µi − λi) = ρi/(1 − ρi) and Ti = N̄i/λi = 1/(µi − λi)
• Mean delay from A to C:

T̄AC = (T1 + T3) × 0.5 + T2 × 0.5 = 0.017 (sec)

– T1 + T3: the path A → B → C; T2: the direct link A → C
– propagation delay is ignored

3-141 3-142
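The example-B table can be reproduced directly from the M/M/1 formulas (all numbers are taken from the example above):

```python
# Sketch: per-link metrics of example B under the M/M/1 approximation
# (mu_i = 350 packets/sec on every link).

mu = 350.0
lam = {"L1": 300.0, "L2": 100.0, "L3": 250.0, "L4": 200.0}

rho = {k: v / mu for k, v in lam.items()}        # utilization
N = {k: v / (mu - v) for k, v in lam.items()}    # mean packets on link
T = {k: 1.0 / (mu - v) for k, v in lam.items()}  # per-link delay

# A -> C: half the traffic goes via B (links L1, L3), half direct (L2).
T_AC = 0.5 * (T["L1"] + T["L3"]) + 0.5 * T["L2"]
print(round(T_AC, 4))  # 0.017
```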

Closed queueing networks I Closed queueing networks II

Consider a network of K first-come first-served, single-server queues, each of which has unlimited queue size and exponentially distributed service times with rate µk. There is also a fixed number of customers, say M, circulating endlessly in the closed network of queues.
• Traffic equation: no external arrivals!

λi = Σ(j=1..K) λj pji with Σ(i=1..K) pji = 1

• Using π⃗ = π⃗ · P and π⃗ · 1⃗ = 1, we have

λi = λ(M) πi

– λ(M): a constant of proportionality — the sum of the arrival rates over all the queues in the network, since Σi πi = 1; note that Σ(i=1..K) λi need not equal 1
– G(M) (the normalization constant) will take care of λ(M)
• Assuming ρi(ni) = λi/µi(ni) < 1 for i = 1, ..., K, we have for all ni ≥ 0,
– Define P̂j(nj) as

P̂j(nj) = 1 if nj = 0, and ρj(1) ρj(2) ··· ρj(nj) if nj > 0

• The joint state probability is expressed as

π(n) = (1/G(M)) Π(i=1..K) P̂i(ni), and G(M) = Σ(n1+···+nK=M) Π(i=1..K) P̂i(ni)

3-143 3-144 Closed queueing networks III Closed queueing networks IV

• ρi: no longer the actual utilization, due to the free choice of λ(M); it is a relative utilization
• Setting λ(M) to any particular value does not change the results
• The maximum queue size of each queue is M
• Proof: as in Jackson’s theorem for open queueing networks; with αi = 0 and pi0 = 0, the GBE of open queueing networks reduces to

Σ(i: ni>0) µi π(n) = Σ(i=1..K) Σ(j=1..K) pji µj π(n − ei + ej)

• Use time-reversibility:
– routing matrix of the reversed process: p*ji = λi pij/λj
• For a state transition between n and n′ = n − ei + ej,

π(n′) v*n′,n = π(n) vn,n′   (∗)

• As in open queueing networks, we have for ni > 0

v*n−ei+ej,n = µj p*ji = µj (λi pij/λj)   (1)
vn,n−ei+ej = µi pij   (2)

[leaving queue i and joining queue j in the forward process corresponds to leaving queue j and joining queue i in the reversed process]
• Substituting (1) and (2) into (∗), we have

ρi π(n1, ..., ni − 1, ..., nj + 1, ..., nK) = ρj π(n1, ..., nK)

• The proof of Σn′ vn,n′ = Σn′ v*n,n′ is given on page 235 (BG)
3-145 3-146

Summary of time-reversibility: CTMC Closed queueing networks V

The forward CTMC has transition rates γij.
The reversed chain is a CTMC with transition rates γ*ij = (φj/φi) γji.
If we find positive numbers φi, summing to unity and such that the scalars γ*ij satisfy Σj γij = Σj γ*ij for all i ≥ 0, then φi is the stationary distribution of both the forward and reverse chains.
– Proof:

Σ(j≠i) φj γji = Σ(j≠i) φi γ*ij = φi Σ(j≠i) γ*ij = φi Σ(j≠i) γij : Global balance equation

• µi(n): the service rate when queue i has n customers
• vn,n−ei+ej = µi(ni) pij (leaving queue i and joining queue j)

Σ((j,i): ni>0) µi(ni) pij = Σ(i: ni>0) µi(ni) Σ(j=1..K) pij = Σ(i: ni>0) µi(ni)

• v*n,n−ei+ej = µi(ni) p*ij = µi(ni) λj pji/λi, using p*ij = λj pji/λi

Σ((j,i): ni>0) µi(ni) λj pji/λi = Σ(i: ni>0) (µi(ni)/λi) Σ(j=1..K) λj pji = Σ(i: ni>0) µi(ni)

[the inner sum equals λi]
3-147 3-148
Closed queueing networks VI Closed queueing networks VII

Dealing with G(M, K) = Σ(n1+···+nK=M) Π(i=1..K) P̂i(ni)
• Computing G(M, K) with M customers and K queues iteratively:

G(m, k) = G(m, k − 1) + ρk G(m − 1, k)

with boundary conditions G(m, 1) = ρ1^m for m = 0, 1, ..., M, and G(0, k) = 1 for k = 1, 2, ..., K
• For m > 0 and k > 1, split the sum into two disjoint sums:

G(m, k) = Σ(n1+···+nk=m) ρ1^n1 ρ2^n2 ··· ρk^nk
        = Σ(n1+···+nk=m, nk=0) ρ1^n1 ··· ρk^nk + Σ(n1+···+nk=m, nk>0) ρ1^n1 ··· ρk^nk
        = Σ(n1+···+n(k−1)=m) ρ1^n1 ··· ρ(k−1)^n(k−1) [= G(m, k−1)] + Σ(n1+···+nk=m, nk>0) ρ1^n1 ··· ρk^nk

• Since nk > 0 in the second sum, change nk = n′k + 1 for n′k ≥ 0:

Σ(n1+···+nk=m, nk>0) ρ1^n1 ··· ρk^nk = Σ(n1+···+n′k+1=m, n′k≥0) ρ1^n1 ··· ρk^(n′k+1)
  = ρk Σ(n1+···+n′k=m−1, n′k≥0) ρ1^n1 ··· ρk^n′k = ρk G(m − 1, k)

• In a closed Jackson network with M customers, the steady-state probability that the number of customers in station j is greater than or equal to m is

Pr[xj ≥ m] = ρj^m G(M − m)/G(M) for 0 ≤ m ≤ M

3-149 3-150
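The recursion G(m, k) = G(m, k − 1) + ρk G(m − 1, k) is easy to implement and to check against the defining sum by brute-force enumeration (the ρ values below are illustrative):

```python
from itertools import product

# Sketch of the convolution recursion for the normalization constant,
# checked against a direct sum over all states with n1+...+nK = M.

def buzen(rho, M):
    """Return g[m][k] = G(m, k) for 0 <= m <= M, 1 <= k <= K."""
    K = len(rho)
    g = [[0.0] * (K + 1) for _ in range(M + 1)]
    for k in range(1, K + 1):
        g[0][k] = 1.0                      # G(0, k) = 1
    for m in range(M + 1):
        g[m][1] = rho[0] ** m              # G(m, 1) = rho_1^m
    for m in range(1, M + 1):
        for k in range(2, K + 1):
            g[m][k] = g[m][k - 1] + rho[k - 1] * g[m - 1][k]
    return g

def prod_pow(rho, n):
    p = 1.0
    for r, ni in zip(rho, n):
        p *= r ** ni
    return p

def brute_force_G(rho, M):
    K = len(rho)
    return sum(prod_pow(rho, n)
               for n in product(range(M + 1), repeat=K) if sum(n) == M)

rho = [0.8, 0.75, 0.5]   # illustrative relative utilizations
M = 4
G = buzen(rho, M)[M][len(rho)]
print(G, brute_force_G(rho, M))  # the two values agree
```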

Closed queueing networks VIII Closed queueing networks IX

Implementation of G(M, K) (single-server case: µi is not a function of m):

m\k   1       2                                ···   K
0     1       1                                      1
1     ρ1      G(1, 2) = G(1, 1) + ρ2 G(0, 2)
2     ρ1^2    G(2, 2) = ρ1^2 + ρ2 G(1, 2)
⋮     ⋮
M     ρ1^M                                           G(M, K)

• If µi(m) = m µi (multiple servers), we generalize

G(m, k) = Σ(i=0..m) fk(i) G(m − i, k − 1), where fk(i) = (λ(M) πk)^i / Π(j=1..i) µk(j)

with fk(0) = 1 for all k.

• Proof of Pr[xj ≥ m]: put nj = n′j + m for n′j ≥ 0,

Pr[xj ≥ m] = (1/G(M)) Σ(n1+···+nj+···+nK=M, nj≥m) ρ1^n1 ··· ρj^nj ··· ρK^nK
  = (1/G(M)) Σ(n1+···+(n′j+m)+···+nK=M, n′j≥0) ρ1^n1 ··· ρj^(n′j+m) ··· ρK^nK
  = (ρj^m/G(M)) Σ(n1+···+n′j+···+nK=M−m, n′j≥0) ρ1^n1 ··· ρj^n′j ··· ρK^nK
  = ρj^m G(M − m)/G(M)

• Pr[xj = m] = Pr[xj ≥ m] − Pr[xj ≥ m + 1] = ρj^m (G(M − m) − ρj G(M − m − 1))/G(M)
3-151 3-152
Closed queueing networks X Closed queueing networks: example A-I

• In a closed Jackson network with M customers, the average number of customers at queue j is

N̄j(M) = Σ(m=1..M) Pr[xj ≥ m] = Σ(m=1..M) ρj^m G(M − m)/G(M)

• In a closed Jackson network with M customers, the average throughput of queue j is

γj(M) = µj Pr[xj ≥ 1] = µj ρj G(M − 1)/G(M) = λj G(M − 1)/G(M)

– Average throughput is the average rate at which customers are serviced in the queue. For a single-server queue, the service rate is µj when there are one or more customers in the queue, and 0 when the queue is empty.

Suppose that the computer system given in the open queueing network example is now operated so that there are always I programs in the system. Note that the feedback loop around the CPU signifies the completion of one job and its instantaneous replacement by another one. Find the steady-state pmf of the system. Find the rate at which programs are completed.
• Using λi = λ(I) πi with π⃗ = π⃗P,

π1 = p π1 + π2, π2 = (1 − p) π1, and π1 + π2 = 1

we have

λ1 = λ(I) π1 = λ(I)/(2 − p) and λ2 = λ(I) π2 = λ(I)(1 − p)/(2 − p)
3-153 3-154

Closed queueing networks: example A-II Arrival theorem for closed networks I

• For 0 ≤ i ≤ I, with ρ1 = λ1/µ1 and ρ2 = λ2/µ2,

Pr[N1 = i, N2 = I − i] = (1 − ρ1) ρ1^i (1 − ρ2) ρ2^(I−i) / S(I)

• The normalization constant S(I) is obtained by

S(I) = (1 − ρ1)(1 − ρ2) Σ(i=0..I) ρ1^i ρ2^(I−i) = (1 − ρ1)(1 − ρ2) ρ2^I (1 − (ρ1/ρ2)^(I+1))/(1 − ρ1/ρ2)

• We then have, for 0 ≤ i ≤ I,

Pr[N1 = i, N2 = I − i] = β^i (1 − β)/(1 − β^(I+1)), where β = ρ1/ρ2 = µ2/((1 − p)µ1)

• Program completion rate: pλ1, with CPU utilization

λ1/µ1 = 1 − Pr[N1 = 0] = β(1 − β^I)/(1 − β^(I+1))

Theorem: In a closed Jackson network with M customers, the occupancy distribution seen by a customer upon arrival at queue j is the same as the occupancy distribution in a closed network with the arriving customer removed, i.e., the system with M − 1 customers.
• In a closed network with M customers, the expected number of customers found upon arrival by a customer at queue j is equal to the average number of customers at queue j when the total number of customers in the closed network is M − 1
• An arriving customer sees the system at a state that does not include itself
Proof (setup):
• X(t) = [X1(t), X2(t), ..., XK(t)]: state of the network at time t
• Tij(t): the event that a customer moves from queue i to j at time t
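A numerical sketch of the closed-system pmf above (the values of p, µ1, µ2, and I are illustrative, not from the slides):

```python
# Sketch: truncated-geometric pmf of the closed CPU/disk system and the
# resulting CPU utilization and program completion rate.

p, mu1, mu2, I = 0.25, 4.0, 2.0, 5
beta = mu2 / ((1 - p) * mu1)       # beta = rho1 / rho2

pmf = [beta**i * (1 - beta) / (1 - beta**(I + 1)) for i in range(I + 1)]
util = beta * (1 - beta**I) / (1 - beta**(I + 1))   # = 1 - Pr[N1 = 0]
completion_rate = p * mu1 * util                    # = p * lambda1

print(sum(pmf), util)  # pmf sums to 1; util = 1 - pmf[0]
```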

3-155 3-156
Arrival theorem for closed networks II Mean Value Analysis I

• For any state n with ni > 0, the conditional probability that a customer moving from queue i to j finds the network at state n is

αij(n) = Pr[X(t) = n | Tij(t)] = Pr[X(t) = n, Tij(t)] / Pr[Tij(t)]
       = Pr[Tij(t) | X(t) = n] Pr[X(t) = n] / Σ(m: mi>0) Pr[Tij(t) | X(t) = m] Pr[X(t) = m]
       = π(n) µi pij / Σ(m: mi>0) π(m) µi pij
       = ρ1^n1 ··· ρi^ni ··· ρK^nK / Σ(m: mi>0) ρ1^m1 ··· ρi^mi ··· ρK^mK

– Changing mi = m′i + 1, m′i ≥ 0,

αij(n) = ρ1^n1 ··· ρi^(ni−1) ··· ρK^nK / Σ(m1+···+m′i+···+mK=M−1, m′i≥0) ρ1^m1 ··· ρi^m′i ··· ρK^mK
       = ρ1^n1 ··· ρi^(ni−1) ··· ρK^nK / G(M − 1)

Performance measures for closed networks with M customers:
• N̄j(M): average number of customers in queue j
• Tj(M): average time a customer spends (per visit) in queue j
• γj(M): average throughput of queue j
Mean-Value Analysis: calculates N̄j(M) and Tj(M) directly, without first computing G(M) or deriving the stationary distribution of the network
a) The queue length observed by an arriving customer is the same as the queue length in a closed network with one less customer
b) Little’s result is applicable throughout the network
1. Based on a),

Tj(s) = (1/µj)(1 + N̄j(s − 1)) for j = 1, ..., K, s = 1, ..., M

– Tj(0) = N̄j(0) = 0 for j = 1, ..., K
3-157 3-158
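Combining step 1 above with step 2 (Little's result, on the next slide) gives the usual MVA iteration. The sketch below cross-checks it against the exact product form for a two-queue closed network (the visit ratios and service rates are illustrative):

```python
# Sketch: Mean Value Analysis for a closed network of single-server
# queues, verified against direct enumeration of the product form.

def mva(pi_vec, mu, M):
    K = len(mu)
    N = [0.0] * K
    for s in range(1, M + 1):
        T = [(1.0 + N[j]) / mu[j] for j in range(K)]       # step 1 (arrival thm.)
        lam = s / sum(pi_vec[j] * T[j] for j in range(K))  # Little, network-wide
        N = [lam * pi_vec[j] * T[j] for j in range(K)]     # Little, per queue
    return N, T, lam

pi_vec = [0.5, 0.5]       # visit ratios solving pi = pi P (illustrative)
mu = [4.0, 2.0]
M = 5
N, T, lam = mva(pi_vec, mu, M)

# Exact check via product form: pi(n1) ~ rho1^n1 * rho2^(M - n1)
rho = [pi_vec[0] / mu[0], pi_vec[1] / mu[1]]
w = [rho[0] ** n1 * rho[1] ** (M - n1) for n1 in range(M + 1)]
N1_exact = sum(n1 * wi for n1, wi in enumerate(w)) / sum(w)
print(N[0], N1_exact)  # the two agree
```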

Mean Value Analysis II Closed queueing networks: example B

2. Based on b), we first have, when there are s customers in the network,

N̄j(s) = λj(s) Tj(s) = λ(s) πj Tj(s)   (1)   [step 2-b]

and

s = Σ(j=1..K) N̄j(s) = λ(s) Σ(j=1..K) πj Tj(s) → λ(s) = s / Σ(j=1..K) πj Tj(s)   (2)   [step 2-a]

Combining (1) and (2) yields

N̄j(s) = s πj Tj(s) / Σ(k=1..K) πk Tk(s)

This is done iteratively for s = 0, 1, ..., M

Gupta’s truck company owns m trucks: Gupta is interested in the probability that 90% of his trucks are in operation
• Set a routing matrix P over the states Op (operation), LM (local maintenance), and M (manufacturer):

       Op    LM    M
Op     0     0.85  0.15
LM     0.9   0     0.1
M      1     0     0

• With π0 = 0.4796, π1 = 0.4077, and π2 = 0.1127, we have ρ0 = λ(m)π0/µ0, ρ1 = λ(m)π1/µ1, and ρ2 = λ(m)π2/µ2
• We have

Pr[O = i, L = j, M = k] = (1/i!) ρ0^i ρ1^j ρ2^k / G(m), with k = m − i − j

3-159 3-160
Where are we? M/G/1 queue: Embedded MC I

Elementary queueing models
– M/M/1, M/M/C, M/M/C/C/K, ... and bulk queues
– either product-form solutions or use of the PGF
Intermediate queueing models (product-form solutions)
– Time-reversibility of Markov processes
– Detailed balance equations of time-reversible MCs
– Multidimensional birth–death processes
– Networks of queues: open and closed networks
Advanced queueing models
– M/G/1-type queues: embedded MC and mean-value analysis
– M/G/1 with vacations and priority queues
More materials on queueing models (omitted)
– G/M/m queue, G/G/1, etc.
– Algorithmic approaches to obtain steady-state solutions

Recall that a continuous-time MC description of M/G/1 is (n, r):
• n: number of customers in the system
• r: attained or remaining service time of the customer in service
Due to r, (n, r) is not a countable state space. How can we get rid of r? What if we observe the system at the end of each service?

– Network of queues: open- and closed networks Xn+1 = max(Xn − 1, 0) + Yn+1 Advanced queueing models Xn: number of customers in the system left behind by a departure. – M/G/1 type queue: Embedded MC and Mean-value analysis Yn: number of arrivals that occur during the service time of the – M/G/1 with vacations and Priority queues departing customer. More materials on queueing models (omitted) Question: Xn is equal to the queue length seen by an arriving – G/M/m queue, G/G/1, etc. customer (queue length just before arrival)? Recall PASTA. – Algorithmic approaches to get steady-state solutions

3-161 3-162

Distribution Upon Arrival or Departure M/G/1 queue: Embedded MC II

α(t), β(t): number of arrivals and departures (respectively) in (0, t)
• Un(t): number of times the system goes from n to n + 1 in (0, t); the number of times an arriving customer finds n customers in the system
• Vn(t): number of times the system goes from n + 1 to n in (0, t); the number of times a departing customer leaves n behind
• The transition n to n + 1 cannot reoccur until after the number in the system drops to n once more (i.e., until after the transition n + 1 to n reoccurs), so Un(t) and Vn(t) differ by at most one: |Un(t) − Vn(t)| ≤ 1
• Hence

lim(t→∞) Un(t)/t = lim(t→∞) Vn(t)/t ⇒ lim(t→∞) (Un(t)/α(t))(α(t)/t) = lim(t→∞) (Vn(t)/β(t))(β(t)/t)

so the distribution seen by arriving customers equals the distribution left behind by departing customers.

Defining the probability generating function of the distribution of Xn+1,

Qn+1(z) ≜ E[z^Xn+1] = E[z^(max(Xn−1,0)+Yn+1)] = E[z^max(Xn−1,0)] E[z^Yn+1]

Let Un+1(z) = E[z^Yn+1]; as n → ∞, Un+1(z) = U(z) (independent of n). Then we have

Qn+1(z) = U(z) Σ(k=0..∞) z^k Pr[max(Xn − 1, 0) = k]
        = U(z) [ z^0 Pr[Xn = 0] + Σ(k=1..∞) z^(k−1) Pr[Xn = k] ]
        = U(z) [ Pr[Xn = 0] + z^(−1) (Qn(z) − Pr[Xn = 0]) ]

As n → ∞, we have Qn+1(z) = Qn(z) = Q(z) and Pr[Xn = 0] = q0, so

Q(z) = q0 U(z)(z − 1) / (z − U(z))

3-163 3-164 M/G/1 queue: Embedded MC III M/G/1 Queue: Embedded MC IV

We need to find U(z) and q0. Using U(z | xi = x) = e^(λx(z−1)),

U(z) = ∫(0..∞) U(z | xi = x) b(x) dx = B*(λ(1 − z)).

Since Q(1) = 1, we have q0 = 1 − U′(1) = 1 − λX̄ = 1 − ρ.
The transform version of the Pollaczek–Khinchin (P-K) formula is

Q(z) = (1 − ρ) B*(λ(1 − z))(z − 1) / (z − B*(λ(1 − z)))

Letting q̄ = Q′(1), one gets W̄ = q̄/λ − X̄.

Sojourn time distribution of an M/G/1 system with FIFO service:
If a customer spends T seconds in the system, the number of customers it leaves behind is the number of customers that arrive during these T seconds, due to FIFO. Let fT(t) be the probability density function of T, the total delay. Then

Q(z) = ∫(0..∞) Σ(k=0..∞) z^k ((λt)^k/k!) e^(−λt) fT(t) dt = T*(λ(1 − z))

where T*(s) is the Laplace transform of fT(t). We have

T*(λ(1 − z)) = (1 − ρ) B*(λ(1 − z))(z − 1) / (z − B*(λ(1 − z)))

Letting s = λ(1 − z), one gets

T*(s) = (1 − ρ) s B*(s) / (s − λ + λB*(s)) = W*(s) B*(s) ⇒ W*(s) = (1 − ρ)s / (s − λ + λB*(s))

In an M/M/1 system, we have B*(s) = µ/(s + µ):

W*(s) = (1 − ρ) [ 1 + λ/(s + µ − λ) ]

3-165 3-166

M/G/1 Queue: Embedded MC V Residual life time∗ I

Taking the inverse transform of W*(s) (using L{A e^(−at)} ↔ A/(s + a)),

L^(−1){W*(s)} = L^(−1){ (1 − ρ)[1 + λ/(s + µ − λ)] } = (1 − ρ)δ(t) + λ(1 − ρ) e^(−µ(1−ρ)t), t > 0

We can write W*(s) in terms of R′*(s):

W*(s) = (1 − ρ)s / (s − λ + λB*(s)) = (1 − ρ) / (1 − λ(1 − B*(s))/s)
      = (1 − ρ) / (1 − ρ (1 − B*(s))/(sX̄)) = (1 − ρ)/(1 − ρ R′*(s))
      = (1 − ρ) Σ(k=0..∞) (ρ R′*(s))^k

Hitchhiker’s paradox:
• Cars pass a point of a road according to a Poisson process with rate λ = 1/10 per minute, i.e., a mean interarrival time of 10 min.
• A hitchhiker arrives at the roadside point at a random instant of time.

[Figure: Previous car — Hitchhiker arrives — Next car, on a time axis]

What is his mean waiting time for the next car?
1. Since he arrives randomly in an interval, it would be 5 min.
2. Due to the memoryless property of the exponential distribution, it would be another 10 min.
∗ L. Kleinrock, Queueing Systems, vol. 1: Theory
3-167 3-168
Residual life II Residual life III

The distribution of the interval X′ that the hitchhiker captures depends on both x and fX(x):

fX′(x) = C x fX(x), where C is a proportionality constant

Since ∫(0..∞) fX′(x) dx = 1, we have C = 1/E[X] = 1/X̄:

fX′(x) = x fX(x)/X̄

Since Pr[R′ < y | X′ = x] = y/x for 0 ≤ y ≤ x, the joint pdf of X′ and R′ is

Pr[y < R′ < y + dy, x < X′ < x + dx] = (dy/x)(x fX(x) dx/X̄) = fX(x) dy dx / X̄

Unconditioning over X′,

fR′(y) dy = (dy/X̄) ∫(y..∞) fX(x) dx = ((1 − FX(y))/X̄) dy ⇒ fR′(y) = (1 − FX(y))/X̄

If we take the Laplace transform of the pdf of R′ for 0 ≤ R′ ≤ x,

E[e^(−sR′) | X′ = x] = ∫(0..x) (e^(−sy)/x) dy = (1 − e^(−sx))/(sx)

Unconditioning over X′, we have R′*(s) and its moments as

R′*(s) = (1 − F*X(s))/(sX̄) ⇒ E[R′^n] = E[X^(n+1)] / ((n + 1)X̄)

where F*X(s) = ∫(0..∞) e^(−sx) fX(x) dx. The mean residual time is

R̄ = E[R′] = E[X²]/(2X̄) = (1/2)(X̄ + σX²/X̄)

Surprisingly, the distribution of the elapsed waiting time, X′ − R′, is identical to that of the remaining waiting time.

3-169 3-170
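The paradox is easy to reproduce by simulation: an observer arriving at a uniformly random time sees a length-biased interval (mean E[X²]/E[X] = 20 min for exponential gaps with mean 10) and still waits 10 min on average:

```python
import bisect
import random

# Sketch: the hitchhiker's paradox for exponential gaps with mean 10 min.

rng = random.Random(7)
gaps, times, t = [], [], 0.0
while t < 1_000_000.0:
    g = rng.expovariate(1 / 10.0)
    t += g
    gaps.append(g)
    times.append(t)          # car passing instants

waits, seen = [], []
for _ in range(20000):
    u = rng.uniform(0.0, times[-2])        # random observer instant
    i = bisect.bisect_right(times, u)      # index of the next car
    waits.append(times[i] - u)             # residual time until next car
    seen.append(gaps[i])                   # length of the captured gap

mean_wait = sum(waits) / len(waits)
mean_seen = sum(seen) / len(seen)
print(mean_wait, mean_seen)  # near 10 and 20, respectively
```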

M/G/1 Queue: Embedded MC VI M/G/1 Queue: Embedded MC VII

State transition diagram of M/G/1 [figure omitted]; the GBE can be expressed as

πn = Σ(k=0..n) πn+1−k αk + αn π0 for n = 0, 1, 2, ···

– Q(z) can also be obtained using Q(z) = Σ(n=0..∞) πn z^n

with the state transition probability matrix

P = [ α0  α1  α2  α3  ...
      α0  α1  α2  α3  ...
      0   α0  α1  α2  ...
      0   0   α0  α1  ...
      ⋮                    ]

and αk = ∫(0..∞) ((λx)^k/k!) e^(−λx) b(x) dx

As an alternative, define ν0 = 1 and νi = πi/π0:

ν1 = (1 − α0)/α0
ν2 = ((1 − α1)/α0) ν1 − α1/α0
⋮
νi = ((1 − α1)/α0) νi−1 − (α2/α0) νi−2 − ··· − (αi−1/α0) ν1 − αi−1/α0

– Σ(i=0..∞) νi = 1 + Σ(i=1..∞) πi/π0 = 1/π0, and πi = π0 νi
3-171 3-172
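The νi recursion can be checked on the M/M/1 special case, where αk = (1 − σ)σ^k with σ = λ/(λ + µ), and the embedded-chain distribution must come out geometric, πi = (1 − ρ)ρ^i (the rates below are illustrative):

```python
# Sketch: solve the embedded-chain balance equations
#   pi_n = sum_{k=0}^{n} pi_{n+1-k} alpha_k + alpha_n pi_0
# via nu_{n+1} = (nu_n - sum_{k=1}^{n} alpha_k nu_{n+1-k} - alpha_n)/alpha_0.

lam, mu = 1.0, 2.0
sigma = lam / (lam + mu)
alpha = [(1 - sigma) * sigma**k for k in range(60)]   # M/M/1 case

nu = [1.0]                                            # nu_0 = 1
for n in range(30):
    nxt = (nu[n]
           - sum(alpha[k] * nu[n + 1 - k] for k in range(1, n + 1))
           - alpha[n]) / alpha[0]
    nu.append(nxt)

rho = lam / mu
pi0 = 1.0 / sum(nu)        # truncated normalization (rho = 0.5 decays fast)
pi = [pi0 * v for v in nu]
print(pi[0], pi[1])        # close to (1 - rho) = 0.5 and (1 - rho) rho = 0.25
```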

P∞ P∞ πi – νi = 1 + = 1/π0 and πi = π0νi i=0 i=1 π0 3-171 3-172 M/G/1 queue: Mean value analysis I M/G/1 queue: Mean value analysis II

We are interested in E[Wi] = −dW*(s)/ds |s=0 (see BG p. 186)
• Wi: waiting time in queue of customer i
• Ri: residual service time seen by customer i
• Xi: service time of customer i
• Ni: number of customers in queue found by customer i

Wi = Ri + Σ(j=i−Ni .. i−1) Xj

Taking expectations and using the independence among the Xj,

E[Wi] ≜ W = E[Ri] + E[ Σ(j=i−Ni .. i−1) E[Xj | Ni] ] = R̄ + (1/µ) N̄q

Since N̄q = λW and R̄i = R̄ for all i, we have

W = R̄ / (1 − ρ)

[Figure: a sample path of an M/G/1 queue — the number of customers in the system and the virtual workload versus time]
3-173 3-174

M/G/1 queue: Mean value analysis III M/G/1 queue: Mean value analysis IV

Time-averaged residual time r(τ) in the interval [0, t]:

    R(t) = (1/t) ∫₀ᵗ r(τ) dτ = (1/t) Σ_{i=1}^{M(t)} Xi²/2 = (1/2) · (M(t)/t) · (Σ_{i=1}^{M(t)} Xi² / M(t))

– M(t) is the number of service completions within [0, t]
– e.g., upon a new service of duration X1, r(τ) starts at X1 and decays linearly for X1 time units

As t → ∞, lim_{t→∞} R(t) = R = λ E[X²]/2

From the hitchhiker's paradox, we have E[R′] = E[X²]/(2E[X]), so that

    R = 0 · Pr[N(t) = 0] + E[R′] · Pr[N(t) > 0] = (E[X²]/(2E[X])) · λE[X] = λ E[X²]/2

P-K formula for mean waiting time in queue:

    W = −dW*(s)/ds |_{s=0} = λ E[X²]/(2(1 − ρ)) = λ(σX² + X̄²)/(2(1 − ρ))   (using E[X²] = σX² + X̄²)
      = ((1 + Cx²)/2) · (ρ/(1 − ρ)) X̄ = ((1 + Cx²)/2) · W_{M/M/1}

– Cx² = σX²/X̄² is the squared coefficient of variation of the service time
– the average time in the system is T = W + X̄

E.g., since Cx² = 1 in an M/M/1 queue and Cx² = 0 in an M/D/1 queue,

    W_{M/M/1} = ρX̄/(1 − ρ)  >  W_{M/D/1} = λ E[X²]/(2(1 − ρ)) = ρX̄/(2(1 − ρ))

3-175 3-176
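The P-K comparison is worth a quick numeric check: for the same mean service time, the waiting time scales with (1 + Cx²)/2. A minimal sketch (λ and μ are illustrative):

```python
# Pollaczek-Khinchine mean waiting time W = lam * E[X^2] / (2 * (1 - rho)),
# evaluated for two service distributions with the same mean 1/mu.
lam, mu = 0.8, 1.0
rho = lam / mu
Xbar = 1.0 / mu

def pk_wait(second_moment):
    return lam * second_moment / (2 * (1 - rho))

W_md1 = pk_wait(Xbar ** 2)        # deterministic: E[X^2] = Xbar^2   (Cx = 0)
W_mm1 = pk_wait(2 * Xbar ** 2)    # exponential:   E[X^2] = 2*Xbar^2 (Cx = 1)

print(W_mm1, W_md1)               # M/M/1 waits exactly twice as long as M/D/1
print(W_mm1 == rho * Xbar / (1 - rho) or abs(W_mm1 - rho * Xbar / (1 - rho)) < 1e-12)
```

At ρ = 0.8 this gives W = 4 for M/M/1 versus W = 2 for M/D/1, matching the ρX̄/(1 − ρ) and ρX̄/(2(1 − ρ)) formulas above.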


Delay analysis of an ARQ system M/G/1 Queue with vacations I

Delay analysis of an ARQ system
• Suppose a Go-Back-N ARQ system, where a packet is successfully transmitted with probability 1 − p; the ACK arrives within the transmission time of N − 1 frames when there is no error
• Packet arrivals to the transmitter's queue follow a Poisson process with mean λ (packets/slot)
• The effective service time of a packet is 1 + kN slots with probability (1 − p)p^k, corresponding to k retransmissions following the last transmission of the previous packet
(Figure 3.17: effective service times of packets in the ARQ system; e.g., packet 2 has an effective service time of N + 1 because there was an error in the first attempt to transmit it and no error in the second.)
• We need the first two moments of the service time to use the P-K formula:

    X̄ = Σ_{k=0}^∞ (1 + kN)(1 − p)p^k = 1 + Np/(1 − p)
    E[X²] = Σ_{k=0}^∞ (1 + kN)²(1 − p)p^k = 1 + 2Np/(1 − p) + N²(p + p²)/(1 − p)²

M/G/1 Queue with vacations I
• The server takes a vacation at the end of each busy period; the durations V1, V2, ... of successive vacations are i.i.d. with first and second moments V̄ and E[V²]
• The server takes an additional vacation if no customers are found at the end of each vacation
• A customer who finds the system idle (on vacation) waits for the end of the current vacation period to get service
(Figure 3.12: an M/G/1 system with vacations — busy periods alternate with vacation periods.)
• Residual service time including vacation periods:

    (1/t) ∫₀ᵗ r(τ) dτ = (1/t) Σ_{i=1}^{M(t)} Xi²/2 + (1/t) Σ_{i=1}^{L(t)} Vi²/2

– M(t): # of services completed by time t
– L(t): # of vacations completed by time t
(Figure 3.13: residual service times for an M/G/1 system with vacations.)

3-177 3-178
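The two closed-form moments of the ARQ effective service time can be verified by summing the series directly, and then fed into the P-K formula. A minimal sketch (p, N and λ are illustrative values, not from the slides):

```python
# First two moments of the Go-Back-N effective service time
# X = 1 + kN slots w.p. (1-p) p^k, checked against the closed forms,
# then plugged into the P-K mean waiting time.
p, N = 0.1, 8
lam = 0.02                     # packets/slot, chosen so that rho < 1

# closed forms from the slide
X1 = 1 + N * p / (1 - p)
X2 = 1 + 2 * N * p / (1 - p) + N ** 2 * (p + p ** 2) / (1 - p) ** 2

# direct summation of the series as a cross-check (tail beyond k=200 is negligible)
s1 = sum((1 + k * N) * (1 - p) * p ** k for k in range(200))
s2 = sum((1 + k * N) ** 2 * (1 - p) * p ** k for k in range(200))

rho = lam * X1
W = lam * X2 / (2 * (1 - rho))  # mean wait in the transmitter's M/G/1 queue (slots)
print(X1, s1)                   # both ≈ 1.889
print(X2, s2)
print(W)
```

Note how the second moment grows like N²/(1 − p)²: even a modest error rate inflates the queueing delay far more than it inflates the mean service time.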

M/G/1 Queue with vacations II FDM and TDM on a Slot Basis I

Residual service time including vacation periods is rewritten as

    (1/t) ∫₀ᵗ r(τ) dτ = (M(t)/t) · (Σ_{i=1}^{M(t)} Xi²/2) / M(t) + (L(t)/t) · (Σ_{i=1}^{L(t)} Vi²/2) / L(t)
                      → λ · E[X²]/2 + ((1 − ρ)/V̄) · E[V²]/2 = R   as t → ∞

– M(t)/t → λ, and L(t)/t → (1 − ρ)/V̄, since a fraction 1 − ρ of the time is spent in vacations of mean length V̄

Using W = R/(1 − ρ), we have

    W = λ E[X²]/(2(1 − ρ)) + E[V²]/(2V̄)

– the sum of the waiting time in an M/G/1 queue and the mean residual vacation time

Suppose m traffic streams of equal-length packets, each arriving according to a Poisson process with rate λ/m
• If the traffic streams are frequency-division multiplexed on m subchannels, the transmission time of each packet is m time units
– Using the P-K formula λE[X²]/(2(1 − ρ)) with ρ = λ and μ = 1/m,

    W_FDM = λm / (2(1 − λ))

• Consider the same FDM, but packet transmissions can start only at times m, 2m, 3m, ...: slotted FDM
– This system gives stations a vacation of m slots (deterministic V = m), so

    W_SFDM = W_FDM + E[V²]/(2V̄) = W_FDM + m/2
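The FDM and slotted-FDM formulas above reduce to simple arithmetic once λ and m are fixed. A minimal sketch (parameter values are illustrative):

```python
# Mean waiting times for m multiplexed streams of total rate lam
# (packets take one time unit on the full-rate line, m units on a subchannel).
lam, m = 0.5, 4

# FDM: each subchannel is an M/D/1 queue with arrival rate lam/m and
# deterministic service time m, so P-K gives (lam/m) * m**2 / (2*(1 - lam))
W_fdm = lam * m / (2 * (1 - lam))

# Slotted FDM: a deterministic "vacation" of V = m slots adds the mean
# residual vacation time E[V^2] / (2 E[V]) = m / 2
W_sfdm = W_fdm + m / 2

print(W_fdm, W_sfdm)   # 2.0 and 4.0 for these parameters
```

The extra m/2 is exactly the average wait for the next slot boundary, i.e. the residual of a deterministic m-slot vacation.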

3-179 3-180

FDM and TDM on a Slot Basis II M/G/1 Queue with Non-Preemptive Priorities I

m traffic streams are time-division multiplexed, where one slot per frame is dedicated to each traffic stream (one time unit per slot)
(Figure 3.20: TDM with m = 4 traffic streams.)
– Service time seen by each (logical) queue, X: m slots → E[X²] = m²
– Frame synchronization delay: m/2
– Using the P-K formula, we have

    W_TDM = λm/(2(1 − λ)) + m/2 = W_SFDM

– System response time: T_TDM = 1 + W_TDM, since the packet transmission itself takes only one slot
– Thus the average total delay is more favorable in TDM than in FDM (assuming m > 2): the longer average waiting time in queue for TDM is more than compensated by the faster service time

Customers are divided into K priority classes, k = 1, ..., K.

Non-preemptive priority
• Service of a customer completes uninterrupted, even if customers of higher priority arrive in the meantime
• A separate (logical) queue is maintained for each class; each time the server becomes free, the first customer in the highest-priority queue (that is not empty) enters service
• Due to the non-preemptive policy, the mean residual service time R′ seen by an arriving customer is the same for all priority classes if all customers have the same service time distribution

Notations
• Nq^(k): mean number of waiting customers belonging to class k in the queue
• Wk: mean waiting time of class-k customers
• ρk: utilization, or load, of class k, ρk = λk X̄k
• R: mean residual service time in the server upon arrival

3-181 3-182

M/G/1 Queue with Non-Preemptive Priorities II M/G/1 Queue with Non-Preemptive Priorities III
easily accomplished; in other cases, however, someTDMform of reservation or • k k k k is required. R0: mean residual service time in the server upon arrival Situations of this type arise often in multiaccess channels, which will be treated 3-181 • 3-182 extensively in Chapter 4. For a typical example, consider a communication channel that can be accessed by several spatially separated users; however, only one user can transmit successfully on the channel at anyone time. The communication resource of the channel can be divided over time into a portion used for packet transmissions and another portion used for reservation or polling messages that coordinate the packet transmissions. In other words, the time axis is divided into data intervals, where actual data are transmitted, and reservation intervals, used for scheduling future data. For uniform presentation, we use M/G/1the term Queue"reservation" witheven though Non-Preemptive"polling" may be more appropriate Prioritiesto the practical II M/G/1 Queue with Non-Preemptive Priorities III situation. We will consider Tn traffic streams (also called users) and assume that each data Stabilityinterval condition:contains packetsρ + ofρ a sinfile+ ···user.+Reservationsρ < 1.for these packets are made in the immediately preceding1 reservation2 interval. KAll users are taken up in cyclic order (see From W2 = R/((1 − ρ1)(1 − ρ1 − ρ2)), we can generalize Priority 1:Fig. similar3.21). There toare P-Kseveral formula,versions of this system differing in the rule for deciding which packets are transmitted during the data interval of each user. 
Stability condition: ρ1 + ρ2 + ··· + ρK < 1.

Priority 1:

    W1 = R + (1/μ1) Nq^(1),  and  Nq^(1) = λ1 W1  ⇒  W1 = R/(1 − ρ1)

Priority 2:

    W2 = R + (1/μ1) Nq^(1) + (1/μ2) Nq^(2) + (1/μ1) λ1 W2

– the two middle terms are the time needed to serve the class-1 and class-2 customers already ahead in the queue; the last term is the time needed to serve the higher-class (class-1) customers that arrive during the waiting time of the class-2 customer

From Nq^(2) = λ2 W2,

    W2 = R + ρ1 W1 + ρ2 W2 + ρ1 W2  ⇒  W2 = (R + ρ1 W1)/(1 − ρ1 − ρ2)

– Using W1 = R/(1 − ρ1), we have

    W2 = R / ((1 − ρ1)(1 − ρ1 − ρ2))

From W2 = R/((1 − ρ1)(1 − ρ1 − ρ2)), we can generalize:

    Wk = R / ((1 − ρ1 − ··· − ρ_{k−1})(1 − ρ1 − ··· − ρk))

As before, the mean residual service time R is

    R = (1/2) Σ_{i=1}^K λi E[Xi²],   with λ = Σ_{i=1}^K λi and E[X²] = (1/λ) Σ_{i=1}^K λi E[Xi²]

Mean waiting time for class-k customers:

    Wk = Σ_{i=1}^K λi E[Xi²] / (2 (1 − ρ1 − ··· − ρ_{k−1})(1 − ρ1 − ··· − ρk))

Note that the average queueing time of a customer also depends on the arrival rates of lower-priority customers (through R).

3-183 3-184

M/G/1 Queue with Preemptive Resume Priorities I M/G/1 Queue with Preemptive Priorities II

Preemptive resume priority
• Service of a customer is interrupted when a higher-priority customer arrives
• It resumes from the point of interruption when all higher-priority customers have been served
• In this case the lower-priority classes are completely "invisible" and do not affect in any way the queues of the higher classes

The time in system of a class-k customer consists of
(i) the customer's own mean service time X̄k
(ii) the mean time to serve the customers in classes 1, ..., k found ahead in the queue,

    Rk / (1 − ρ1 − ··· − ρk),   where Rk = (1/2) Σ_{i=1}^k λi E[Xi²]

– this is equal to the average waiting time in an M/G/1 system where customers of priority lower than k are neglected
(iii) the average time required to serve customers of priority higher than k that arrive while the customer is in the system,

    Σ_{i=1}^{k−1} λi X̄i Tk = Tk Σ_{i=1}^{k−1} ρi   for k > 1, and 0 if k = 1

Combining these three terms,

    Tk = X̄k + Rk/(1 − ρ1 − ··· − ρk) + Tk Σ_{i=1}^{k−1} ρi

    ⇒  Tk = ((1 − ρ1 − ··· − ρk) X̄k + Rk) / ((1 − ρ1 − ··· − ρ_{k−1})(1 − ρ1 − ··· − ρk))

– the factor (1 − ρ1 − ··· − ρ_{k−1}) becomes 1 if k = 1

3-185 3-186
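Both priority formulas — non-preemptive Wk and preemptive-resume Tk — can be evaluated with the same loop over cumulative loads. A minimal sketch for two classes with exponential service (all parameter values are illustrative):

```python
# Mean delays in an M/G/1 priority queue with K classes (class 1 highest):
# non-preemptive W_k = R / ((1 - s_{k-1}) (1 - s_k)) and preemptive-resume
# T_k = ((1 - s_k) Xbar_k + R_k) / ((1 - s_{k-1}) (1 - s_k)).
lams = [0.3, 0.4]              # arrival rates of classes 1 and 2
Xbar = [1.0, 1.0]              # mean service times
X2 = [2.0, 2.0]                # second moments (exponential: 2 * mean**2)
K = len(lams)

R = 0.5 * sum(lams[i] * X2[i] for i in range(K))   # residual time over ALL classes
W_np, T_pr = [], []
s = Rk = 0.0
for k in range(K):
    Rk += 0.5 * lams[k] * X2[k]                    # residual over classes <= k only
    s_prev, s = s, s + lams[k] * Xbar[k]           # cumulative loads s_{k-1}, s_k
    W_np.append(R / ((1 - s_prev) * (1 - s)))
    T_pr.append(((1 - s) * Xbar[k] + Rk) / ((1 - s_prev) * (1 - s)))

print(W_np)    # non-preemptive: class 1 still waits behind class-2 residuals
print(T_pr)    # preemptive: class 1 sees a pure M/M/1, T = 1/(mu - lam1)
```

The contrast is visible in class 1: non-preemptive W1 uses the full R (lower classes contribute), while preemptive-resume T1 = X̄1/(1 − ρ1) is untouched by class 2.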

Upper Bound for G/G/1 System I Upper Bound for G/G/1 System II

Waiting time of the k-th customer
• Wk: waiting time of the k-th customer
• Xk: service time of the k-th customer
• τk: interarrival time between the k-th and the (k+1)-st customer
• Ik: idle period between the k-th and the (k+1)-st customer
• Wk+1 = max{0, Wk + Xk − τk} and Ik = − min{0, Wk + Xk − τk}

The average waiting time in queue satisfies W ≤ λ(σa² + σb²)/(2(1 − ρ))
• σa²: variance of the interarrival times
• σb²: variance of the service times
• λ: average arrival rate (so 1/λ is the average interarrival time)

Notations for any random variable Y:
• Y⁺ = max{0, Y} and Y⁻ = − min{0, Y}
• Ȳ = E[Y] and σY² = E[Y²] − Ȳ²
• Y = Y⁺ − Y⁻ and Y⁺ · Y⁻ = 0
• E[Y] = Ȳ = Ȳ⁺ − Ȳ⁻
• σY² = σ²_{Y⁺} + σ²_{Y⁻} + 2 Ȳ⁺ · Ȳ⁻

Using the above, with Vk = Xk − τk, we can express

    Wk+1 = max{0, Wk + Vk} = (Wk + Vk)⁺
    Ik = − min{0, Wk + Vk} = (Wk + Vk)⁻

    σ²_{(Wk+Vk)} = σ²_{(Wk+Vk)⁺} + σ²_{(Wk+Vk)⁻} + 2 (Wk+Vk)⁺̄ · (Wk+Vk)⁻̄
                 = σ²_{Wk+1} + σ²_{Ik} + 2 W̄k+1 · Īk

while, by the independence of Wk and Vk,

    σ²_{(Wk+Vk)} = σ²_{Wk} + σ²_{Vk} = σ²_{Wk} + σa² + σb²

As k → ∞, we can see that

    σ²_{Wk+1} + σ²_{Ik} + 2 W̄k+1 · Īk = σ²_{Wk} + σa² + σb²

becomes

    σW² + σI² + 2 W̄ · Ī = σW² + σa² + σb²

We get W as

    W = (σa² + σb²)/(2Ī) − σI²/(2Ī) ≤ (σa² + σb²)/(2Ī) = λ(σa² + σb²)/(2(1 − ρ))

– The average idle time Ī between two successive arrivals is (1/λ)(1 − ρ)
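The Lindley recursion Wk+1 = max{0, Wk + Xk − τk} is also a one-line simulator, so the bound can be tested empirically. A minimal sketch (the uniform/exponential distributions and all parameters are illustrative choices):

```python
import random

# G/G/1 via the Lindley recursion, checked against the upper bound
# W <= lam * (sig_a^2 + sig_b^2) / (2 * (1 - rho)).
# Illustrative: uniform interarrivals on [0.5, 2.0], exponential service mean 1.
random.seed(7)

n = 200_000
W = total = 0.0
for _ in range(n):
    total += W                            # accumulate waiting times
    tau = random.uniform(0.5, 2.0)        # interarrival time, mean 1.25
    X = random.expovariate(1.0)           # service time, mean 1.0
    W = max(0.0, W + X - tau)             # Lindley recursion
W_avg = total / n

lam = 1 / 1.25                            # arrival rate; rho = lam * 1.0 = 0.8
sig_a2 = (2.0 - 0.5) ** 2 / 12            # variance of the uniform interarrival
sig_b2 = 1.0                              # variance of the exponential service
bound = lam * (sig_a2 + sig_b2) / (2 * (1 - 0.8))

print(W_avg, bound)                       # simulated mean wait stays below the bound
```

At ρ = 0.8 the bound (2.375 here) is fairly tight, which matches its role as a heavy-traffic approximation.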

3-189