ELL 785 – Computer Communication Networks

Lecture 3

Contents

Motivations

Discrete-time Markov processes

Review on Poisson process

Continuous-time Markov processes

Queueing systems


Circuit switching networks - I

Fluctuation in trunk occupancy
• Telephone calls come and go; traffic fluctuates as calls are initiated and terminated
• People's activity follows patterns: mid-morning and mid-afternoon at the office, evening at home, summer vacation, etc.
• Outlier days are extra busy (Mother's Day, Christmas, ...); disasters and other events cause surges in traffic
• Providing resources so that call requests are always met is too expensive; meeting call requests most of the time is cost-effective

[Figure: number of busy trunks over time; trunks 1-7 become active and idle; when all trunks are busy, new call requests are blocked]

Circuit switching networks - II

• Switches concentrate traffic onto shared trunks: blocking of requests will occur from time to time
• Many lines share fewer trunks – minimize the number of trunks subject to a blocking probability constraint

Packet switching networks - I

Statistical multiplexing
• Dedicated lines involve no waiting for other users, but lines are used inefficiently when user traffic is bursty
• A shared line concentrates packets from all input lines; packets are buffered (delayed) when the line is not immediately available

[Figure: (a) dedicated lines carrying A1, A2; B1, B2; C1, C2 separately; (b) a shared line carrying A1 C1 B1 A2 B2 C2 from input lines A, B, C through a buffer onto one output line]

Packet switching networks - II

Fluctuations in packets in the system
[Figure: number of packets in the system over time as the inputs A, B, C are multiplexed onto the shared output line]

Packet switching networks - III

Delay = waiting time + service time
• A packet arrives at the queue, waits (waiting time), begins transmission, and completes transmission (service time)
• Packet arrival process: e.g., packets P1, P2, ..., P5 arriving over time
• Packet service time:
  – With a transmission rate of R bps and a packet of L bits, the service time is L/R (the transmission time of a packet)
  – Packet lengths can be constant, or random variables

Random (or Stochastic) Processes

General notion
• Suppose a random experiment is specified by the outcomes ζ from some sample space S, with ζ ∈ S
• A random (or stochastic) process is a mapping from ζ to a function of t: X(t, ζ)
  – For fixed t, e.g., t1, t2, ...: X(ti, ζ) is a random variable
  – For fixed ζ: X(t, ζi) is a sample path or realization
• Examples: the number of people in Café Coffee Day, the number of rickshaws at the IIT main gate

Discrete-time Markov process I

A sequence of integer-valued random variables, Xn, n = 0, 1, ..., is called a discrete-time Markov process if the following Markov property holds:

  Pr[X_{n+1} = j | X_n = i, X_{n−1} = i_{n−1}, ..., X_0 = i_0] = Pr[X_{n+1} = j | X_n = i]

• State: the value of Xn at time n in the set S
• State space: the set S = {n | n = 0, 1, ...}
  – An integer-valued Markov process is called a Markov chain (MC)
  – With an independent Bernoulli sequence Xi with probability 1/2, is Yn = 0.5(Xn + Xn−1) a Markov process?
  – Is the vector process Yn = (Xn, Xn−1) a Markov process?

Discrete-time Markov process II

Time-homogeneous, if for any n,

  p_ij = Pr[X_{n+1} = j | X_n = i]   (independent of time n)

which is called the one-step (state) transition probability.

State transition probability matrix:

               | p00  p01  p02  ··· |
               | p10  p11  p12  ··· |
  P = [p_ij] = | ···                |
               | pi0  pi1  pi2  ··· |
               | ···                |

which is called a stochastic matrix, with p_ij ≥ 0 and Σ_j p_ij = 1.
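To make the definitions concrete, here is a minimal MATLAB sketch (the values are illustrative) that samples a path of a discrete-time MC from a given stochastic matrix and estimates the state occupancy frequencies; the matrix used happens to be the weather chain of the example further below.

% Minimal sketch: simulate a discrete-time Markov chain from a stochastic
% matrix P and estimate the long-run fraction of time in each state.
P = [0.70 0.10 0.20;
     0.50 0.25 0.25;
     0.40 0.30 0.30];                  % rows sum to 1
n_steps = 10000;
x = 1;                                 % initial state (MATLAB indexing from 1)
counts = zeros(1, size(P,1));
for n = 1:n_steps
    counts(x) = counts(x) + 1;
    x = find(rand < cumsum(P(x,:)), 1);  % sample the next state from row x
end
counts / n_steps                       % empirical state occupancy frequencies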


Discrete-time Markov process III

A mouse in a maze
• A mouse in a 3×3 maze (cells numbered 1-9) chooses the next cell to visit with probability 1/k, where k is the number of adjacent cells.
• The mouse does not move any more once it is caught by the cat or it has the cheese (states 7 and 9 are absorbing).

          1    2    3    4    5    6    7    8    9
    1  |  0   1/2   0   1/2   0    0    0    0    0  |
    2  | 1/3   0   1/3   0   1/3   0    0    0    0  |
    3  |  0   1/2   0    0    0   1/2   0    0    0  |
    4  | 1/3   0    0    0   1/3   0   1/3   0    0  |
P = 5  |  0   1/4   0   1/4   0   1/4   0   1/4   0  |
    6  |  0    0   1/3   0   1/3   0    0    0   1/3 |
    7  |  0    0    0    0    0    0    1    0    0  |
    8  |  0    0    0    0   1/3   0   1/3   0   1/3 |
    9  |  0    0    0    0    0    0    0    0    1  |

Discrete-time Markov process IV

n-step transition probability:

  p^(n)_ij = Pr[X_{l+n} = j | X_l = i]  for n ≥ 0, i, j ≥ 0.

– Consider a two-step transition probability:

  Pr[X_2 = j, X_1 = k | X_0 = i] = Pr[X_2 = j, X_1 = k, X_0 = i] / Pr[X_0 = i]
    = Pr[X_2 = j | X_1 = k] Pr[X_1 = k | X_0 = i] Pr[X_0 = i] / Pr[X_0 = i]
    = p_ik p_kj

– Summing over k, we have

  p^(2)_ij = Σ_k p_ik p_kj
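A minimal MATLAB sketch of the two-step computation for the maze; the adjacency lists simply encode the matrix above, and p^(2)_ij is read off as the (i, j) entry of P^2.

% Minimal sketch: build the maze transition matrix and compute two-step
% transition probabilities as P^2.
P = zeros(9);
adj = {[2 4], [1 3 5], [2 6], [1 5 7], [2 4 6 8], [3 5 9], [], [5 7 9], []};
for i = 1:9
    if isempty(adj{i})
        P(i,i) = 1;                      % absorbing states (cat/cheese cells)
    else
        P(i, adj{i}) = 1/numel(adj{i});  % uniform over adjacent cells
    end
end
P2 = P^2;                                % two-step transition matrix
disp(P2(1,:))                            % distribution after two moves from cell 1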

Discrete-time Markov process IV

In a place, the weather each day is classified as sunny, cloudy, or rainy. The next day's weather depends only on the weather of the present day and not on the weather of the previous days. If the present day is sunny, the next day will be sunny, cloudy, or rainy with respective probabilities 0.70, 0.10, and 0.20. The transition probabilities are 0.50, 0.25, and 0.25 when the present day is cloudy; 0.40, 0.30, and 0.30 when the present day is rainy.

[State transition diagram over Sunny (S), Cloudy (C), Rainy (R) with the probabilities above]

        S     C     R
  S  | 0.7   0.1   0.2  |
P = C | 0.5   0.25  0.25 |
  R  | 0.4   0.3   0.3  |

Discrete-time Markov process V

The Chapman-Kolmogorov equations:

  p^(n+m)_ij = Σ_{k=0}^{∞} p^(n)_ik p^(m)_kj  for n, m ≥ 0, i, j ∈ S

Proof:

  Pr[X_{n+m} = j | X_0 = i] = Σ_{k∈S} Pr[X_{n+m} = j | X_0 = i, X_n = k] Pr[X_n = k | X_0 = i]
  (Markov property)       = Σ_{k∈S} Pr[X_{n+m} = j | X_n = k] Pr[X_n = k | X_0 = i]
  (time homogeneity)      = Σ_{k∈S} Pr[X_m = j | X_0 = k] Pr[X_n = k | X_0 = i]

– Using the n-step transition probability matrix,

  P^{n+m} = P^n P^m  ⇒  P^{n+1} = P^n P

– For the weather example,

        | 0.601  0.168  0.230 |            | 0.596  0.172  0.231 |
  P^3 = | 0.596  0.175  0.233 |  and  P^12 = | 0.596  0.172  0.231 | = P^13
        | 0.585  0.179  0.234 |            | 0.596  0.172  0.231 |
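A quick numerical check of the Chapman-Kolmogorov relation for the weather chain; repeated multiplication shows P^n converging to a matrix with identical rows, matching the matrices above.

% Minimal sketch: Chapman-Kolmogorov in action for the weather chain.
P = [0.70 0.10 0.20;
     0.50 0.25 0.25;
     0.40 0.30 0.30];
disp(P^3)     % rows still differ slightly
disp(P^12)    % all rows ~ [0.596 0.172 0.231], the limiting distribution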


Discrete-time Markov process VI

State probabilities at time n
– π^(n)_i = Pr[X_n = i] and π^(n) = [π^(n)_0, ..., π^(n)_i, ...] (row vector)
– π^(0)_i: the initial state probability

  Pr[X_n = j] = Σ_{i∈S} Pr[X_n = j | X_0 = i] Pr[X_0 = i]
  π^(n)_j = Σ_{i∈S} p^(n)_ij π^(0)_i

– In matrix notation: π^(n) = π^(0) P^n

Limiting distribution: given an initial probability distribution π^(0),

  π̃_j = lim_{n→∞} π^(n)_j = lim_{n→∞} p^(n)_ij

– As n → ∞: π^(n) = π^(n−1) P → π̃ = π̃ P with π̃ · 1 = 1
– The system reaches "equilibrium" or "steady state"

Discrete-time Markov process VII

Consider a Markov model for a packetized speech source: if the nth packet contains silence, then the probability of silence in the next packet is 1 − α and the probability of speech activity is α. Similarly, if the nth packet contains speech activity, then the probability of speech activity in the next packet is 1 − β and the probability of silence is β.

(a) Find the state transition probability matrix P.
(b) Find an expression for P^n.

(a)
      | 1 − α    α   |
  P = |   β    1 − β |

(b) We can write P^n via the eigendecomposition, P^n = N^{−1} Λ^n N.

Discrete-time Markov process VIII

Using the spectral decomposition of P: from

  |P − λI| = (1 − α − λ)(1 − β − λ) − αβ = 0

we have λ1 = 1 and λ2 = 1 − α − β. Collecting the corresponding eigenvectors in N, we can write P^n as

  P^n = N^{−1} | 1        0        | N = 1/(α + β) | β + αθ^n   α − αθ^n |
               | 0  (1 − α − β)^n  |               | β − βθ^n   α + βθ^n |

where θ = 1 − α − β. For 0 < α + β < 2, θ^n → 0 and every row of P^n tends to [β/(α+β), α/(α+β)].

Discrete-time Markov process IX

If P^(n) has identical rows, then so has P^(n+1). Suppose

  P^(n) = | r |
          | r |
          | ⋮ |
          | r |

Then row j of P^(n+1) = P P^(n) is

  p_j1 r + p_j2 r + ··· = (Σ_k p_jk) r = r,

so P P^(n) = P^(n), i.e., the rows stay identical.


Discrete-time Markov process X

Stationary distribution:
• z_j and z = [z_j] denote the probability of being in state j and its vector:

  z = z · P  and  z · 1 = 1

• If z_j is chosen as the initial distribution, i.e., π^(0)_j = z_j for all j, we have π^(n)_j = z_j for all n.
• A limiting distribution, when it exists, is always a stationary distribution, but the converse is not true:

  P = | 0 1 |,  P^2 = | 1 0 |,  P^3 = | 0 1 |
      | 1 0 |         | 0 1 |         | 1 0 |

Global balance equations:

  π̃ = π̃ P  ⇒  (each row)  π_j Σ_i p_ji = Σ_i π_i p_ij

Discrete-time Markov process XI

Back to the weather example above. Using π̃ P = π̃, we have

  π_0 = 0.7π_0 + 0.5π_1 + 0.4π_2
  π_1 = 0.1π_0 + 0.25π_1 + 0.3π_2
  π_2 = 0.2π_0 + 0.25π_1 + 0.3π_2

– Note that one equation is always redundant. Using 1 = π_0 + π_1 + π_2, we have

  |  0.3   −0.5   −0.4 | | π_0 |   | 0 |
  | −0.1    0.75  −0.3 | | π_1 | = | 0 |
  |  1      1      1   | | π_2 |   | 1 |

  π_0 = 0.596, π_1 = 0.1722, π_2 = 0.2318

(see the numerical sketch below)
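The same stationary distribution can be obtained numerically; this sketch replaces the redundant balance equation by the normalization condition, exactly as above.

% Minimal sketch: solve pi = pi*P with sum(pi) = 1 for the weather chain.
P = [0.70 0.10 0.20;
     0.50 0.25 0.25;
     0.40 0.30 0.30];
A = (eye(3) - P)';        % rows of A*pi' = 0 are the balance equations
A(3,:) = ones(1,3);       % replace the redundant equation by normalization
b = [0; 0; 1];
pi_stat = (A \ b)'        % ~ [0.596 0.172 0.232]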

Discrete-time Markov process XII

Classes of states:
• State j is accessible from state i if p^(n)_ij > 0 for some n
• States i and j communicate if they are accessible from each other
• Two states belong to the same class if they communicate with each other
• An MC having a single class is said to be irreducible

Recurrence property
• State j is recurrent if Σ_{n=1}^{∞} p^(n)_jj = ∞
  – Positive recurrent if π_j > 0
  – Null recurrent if π_j = 0
• State j is transient if Σ_{n=1}^{∞} p^(n)_jj < ∞

Discrete-time Markov process XIII

Periodicity and aperiodicity:
• State i has period d if p^(n)_ii = 0 when n is not a multiple of d, where d is the largest integer with this property.
• State i is aperiodic if it has period d = 1.
• All states in a class have the same period.
  – An irreducible Markov chain is said to be aperiodic if the states in its single class have period one.
• An irreducible, aperiodic, positive recurrent chain is called ergodic.

[State classification diagram: recurrent (positive or null), transient, periodic or aperiodic]

Discrete-time Markov process XIV

In a place, a mosquito is produced every hour with probability p, and one dies with probability 1 − p.
• Show the state transition diagram.

[Birth-death chain on states 0, 1, 2, 3, ...]

• Using the global balance equations, find the (stationary) state probabilities:

  pπ_i = (1 − p)π_{i+1}  →  π_{i+1} = (p/(1 − p)) π_i,  so  π_i = (p/(1 − p))^i π_0

• All states are positive recurrent if p < 1/2, null recurrent if p = 1/2 (see Σ_{i=0}^{∞} π_i = 1), and transient if p > 1/2.

Discrete-time Markov process XV

An autorickshaw driver provides service in two zones of New Delhi. Fares picked up in zone A will have destinations in zone A with probability 0.6 or in zone B with probability 0.4. Fares picked up in zone B will have destinations in zone A with probability 0.3 or in zone B with probability 0.7. The driver's expected profit for a trip entirely in zone A is 40 Rupees (Rps); for a trip entirely in zone B it is 80 Rps; and for a trip that involves both zones it is 110 Rps.
• Find the stationary probability that the driver is in each zone.
• What is the expected profit of the driver?

  (40 × 0.6 + 110 × 0.4)π_A + (80 × 0.7 + 110 × 0.3)π_B
    = 68π_A + 89π_B = 68π_A + 89(1 − π_A) = 89 − 21π_A

From the balance equation 0.4π_A = 0.3π_B with π_A + π_B = 1, we get π_A = 3/7 and π_B = 4/7, so the expected profit per trip is 89 − 21 · (3/7) = 80 Rps.
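A short numerical version of the autorickshaw example; the profit vector encodes the expected profit per trip starting from each zone.

% Minimal sketch: stationary zone probabilities and expected profit per trip.
P = [0.6 0.4; 0.3 0.7];
A = (eye(2) - P)'; A(2,:) = 1;        % balance + normalization
pi_zone = (A \ [0; 1])';              % [piA piB] = [3/7 4/7]
profit = [40*0.6 + 110*0.4, 80*0.7 + 110*0.3] * pi_zone'   % = 80 Rps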

Discrete-time Markov process XVI

Diksha possesses 5 umbrellas which she employs in going from her home to office, and vice versa. If she is at home (the office) at the beginning (end) of a day and it is raining, then she will take an umbrella with her to the office (home), provided there is one to be taken. If it is not raining, then she never takes an umbrella. Assume that, independent of the past, it rains at the beginning (end) of a day with probability p.
• By defining a Markov chain with 6 states which enables us to determine the proportion of time that our TA gets wet, draw its state transition diagram by specifying all state transition probabilities. (Note: she gets wet if it is raining and all umbrellas are at her other location.)
• Find the probability that our TA gets wet.
• At what value of p is the chance for our TA to get wet highest?

Review on Poisson process I

Properties of a Poisson process, Λ(t):

P1) Independent increments for some finite λ (arrivals/sec): the numbers of arrivals in disjoint intervals, e.g., [t1, t2] and [t3, t4], are independent random variables. The number of arrivals in an interval of length t has probability mass function

  Pr[Λ(t) = k] = ((λt)^k / k!) e^{−λt}  for k = 0, 1, ...

P2) Stationary increments: the number of events (or arrivals) in (t, t + h] is independent of t. Using the PGF of the distribution of Λ(t), i.e., E[z^{Λ(t)}] = Σ_{k=0}^{∞} z^k Pr[Λ(t) = k] = e^{λt(z−1)},

  E[z^{Λ(t+h)}] = E[z^{Λ(t)} · z^{Λ(t+h)−Λ(t)}] = E[z^{Λ(t)}] · E[z^{Λ(t+h)−Λ(t)}]  (due to P1)

  ⇒ E[z^{Λ(t+h)−Λ(t)}] = e^{λ(t+h)(z−1)} / e^{λt(z−1)} = e^{λh(z−1)}.


Review on Poisson process II

P3) Interarrival (or inter-occurrence) times between Poisson arrivals are exponentially distributed:
Suppose τ1, τ2, τ3, ... are the epochs of the first, second, and third arrivals; then the interarrival times t1, t2, and t3 are given by t1 = τ1, t2 = τ2 − τ1 and t3 = τ3 − τ2; generally, t_n = τ_n − τ_{n−1} with τ_0 = 0.

1. For t1, we have Pr[Λ(t) = 0] = e^{−λt} = Pr[t1 > t] for t ≥ 0, which means that t1 is exponentially distributed with mean 1/λ.
2. For t2, we get

  Pr[t2 > t | t1 = x] = Pr[Λ(t + x) − Λ(x) = 0] = Pr[Λ(t) = 0] = e^{−λt},

which also means that t2 is independent of t1 and has the same distribution as t1. Similarly t3, t4, ... are iid.

Review on Poisson process III

P4) The converse of P3 is true:
If the sequence of interarrival times {t_i} are iid rv's with exponential density function λe^{−λt}, t ≥ 0, then the number of arrivals in the interval [0, t], Λ(t), is a Poisson process.

Let Y denote the sum of j independent rv's with this exponential density; then Y is Erlang-j distributed,

  f_Y(y) = (λ(λy)^{j−1} / (j−1)!) e^{−λy}.

Then

  Pr[Λ(t) = j] = ∫_0^t Pr[0 arrivals in (y, t] | Y = y] f_Y(y) dy
              = ∫_0^t e^{−λ(t−y)} f_Y(y) dy = ((λt)^j e^{−λt}) / j!.
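A simulation sketch of property P4: exponential interarrivals produce Poisson counts. The values of λ and t are illustrative; exprnd is the Statistics Toolbox sampler already used in the simulation code later in these notes.

% Minimal sketch: counts over [0, t] from exponential interarrivals are
% approximately Poisson(lambda*t).
lambda = 2; t = 5; runs = 10000;
counts = zeros(runs, 1);
for r = 1:runs
    s = 0; n = 0;
    while true
        s = s + exprnd(1/lambda);     % next interarrival time
        if s > t, break; end
        n = n + 1;
    end
    counts(r) = n;
end
[mean(counts) var(counts)]            % both should be close to lambda*t = 10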

Review on Poisson process IV

P5) For a short interval, the probability that an arrival occurs in an interval is proportional to the interval size, i.e.,

  lim_{h→0} Pr[Λ(h) = 1]/h = lim_{h→0} e^{−λh}(λh)/h = λ.

Or, we have Pr[Λ(h) = 1] = λh + o(h), where lim_{h→0} o(h)/h = 0.

P6) The probability of two or more arrivals in an interval of length h gets small as h → 0. For every t ≥ 0,

  lim_{h→0} Pr[Λ(h) ≥ 2]/h = lim_{h→0} (1 − e^{−λh} − λh e^{−λh})/h = 0   (L'Hôpital's rule)

Review on Poisson process V

P7) Merging: If the Λ_i(t)'s are mutually independent Poisson processes with rates λ_i, the superposition process Λ(t) = Σ_{i=1}^{k} Λ_i(t) is a Poisson process with rate λ = Σ_{i=1}^{k} λ_i.
Note: if the interarrival times of the ith stream are a sequence of iid rv's but not necessarily exponentially distributed, then Λ(t) tends to a Poisson process as k → ∞. [D. Cox]

P8) Splitting: If an arrival randomly chooses the ith branch with probability π_i, the arrival process at the ith branch, Λ_i(t), is Poisson with rate λ_i (= π_i λ). Moreover, Λ_i(t) is independent of Λ_j(t) for any pair i ≠ j.

[Diagram: merging of independent Poisson streams into one stream; splitting of one stream into branches]


Continuous-time Markov process I

A process X(t) is called a continuous-time MC if it satisfies

  Pr[X(t_{k+1}) = x_{k+1} | X(t_k) = x_k, X(t_{k−1}) = x_{k−1}, ..., X(t_1) = x_1]
    = Pr[X(t_{k+1}) = x_{k+1} | X(t_k) = x_k]

X(t) is a time-homogeneous continuous-time MC if

  Pr[X(t + s) = j | X(s) = i] = p_ij(t)   (independent of s)

which is analogous to p_ij in a discrete-time MC.

Continuous-time Markov process II

State occupancy time follows an exponential distribution.
• Let T_i be the sojourn (or occupancy) time of X(t) in state i before making a transition to any other state.
  – T_i is assumed to be exponentially distributed with mean 1/v_i.
• For all s ≥ 0 and t ≥ 0, due to the Markov property of this process,

  Pr[T_i > s + t | T_i > s] = Pr[T_i > t] = e^{−v_i t}.

  Only the exponential distribution satisfies this memoryless property.
• Semi-Markov process:
  – The process jumps to state j; such a jump depends only on the previous state.
  – T_j for all j follows a general (independent) distribution.

[Figure: a sample path of a continuous-time MC through states 1-4, marking the sojourn time in each state and the times of state changes]

Continuous-time Markov process III

State transition rate:

  q_ii(δ) = Pr[the process remains in state i during δ sec]
          = Pr[T_i > δ] = e^{−v_i δ} = 1 − (v_i δ)/1 + (v_i δ)^2/2! − ··· = 1 − v_i δ + o(δ)

Or, letting v_i be the rate at which the process moves out of state i,

  lim_{δ→0} (1 − q_ii(δ))/δ = lim_{δ→0} (v_i δ + o(δ))/δ = v_i

Continuous-time Markov Process IV

Comparison between discrete- and continuous-time MC
[Figure: sample paths of a discrete-time Markov process (state changes at unit time steps), a Poisson process with mean rate v (jumps at exponentially spaced epochs), and a continuous-time Markov process (exponential sojourn times, state changes at random times)]

Continuous-time Markov Process V

A discrete-time MC is embedded in a continuous-time MC.

[Figure: a continuous-time sample path through states 1-4; the embedded chain records the state entered at each jump]

Each time a state, say i, is entered, an exponentially distributed state occupancy time is selected. When the time is up, the next state j is selected according to the transition probabilities p_ij.

Continuous-time Markov process VI

State probabilities π_j(t) = Pr[X(t) = j]. For δ > 0,

  π_j(t + δ) = Pr[X(t + δ) = j]
             = Σ_i Pr[X(t + δ) = j | X(t) = i] Pr[X(t) = i]   (the conditional probability is q_ij(δ))
             = Σ_i q_ij(δ) π_i(t)   ⟺   π^(n+1)_i = Σ_j p_ji π^(n)_j  (DTMC analogue)

Transition into state j from any other state: when the process enters state j from state i,

  q_ij(δ) = (1 − q_ii(δ)) p_ij = v_i p_ij δ + o(δ) = γ_ij δ + o(δ),

where γ_ij = lim_{δ→0} q_ij(δ)/δ = v_i p_ij, i.e., the transition rate from state i to j.

Continuous-time Markov process VII

Subtracting π_j(t) from both sides,

  π_j(t + δ) − π_j(t) = Σ_i q_ij(δ) π_i(t) − π_j(t)
                      = Σ_{i≠j} q_ij(δ) π_i(t) + (q_jj(δ) − 1) π_j(t)

Dividing both sides by δ and letting δ → 0,

  lim_{δ→0} (π_j(t + δ) − π_j(t))/δ = dπ_j(t)/dt
    = lim_{δ→0} (1/δ)[ Σ_{i≠j} q_ij(δ) π_i(t) + (q_jj(δ) − 1) π_j(t) ] = Σ_i γ_ij π_i(t),

with γ_jj = −v_j, which is a form of the Chapman-Kolmogorov equations:

  dπ_j(t)/dt = Σ_i γ_ij π_i(t)

Continuous-time Markov process VIII

As t → ∞, the system reaches "equilibrium" or "steady state":

  dπ_j(t)/dt → 0  and  π_j(∞) = π_j

  0 = Σ_i γ_ij π_i   or, equivalently,   v_j π_j = Σ_{i≠j} γ_ij π_i   (using γ_jj = −v_j = −Σ_{i≠j} γ_ji)

which is called the global balance equation, together with Σ_j π_j = 1.

[State transition rate diagram: states connected by arrows labeled with the rates γ_ij]


Continuous-time Markov process IX

In matrix form,

  dπ(t)/dt = π(t) Q  and  π(t) · 1 = 1

whose solution is given by

  π(t) = π(0) e^{Qt}

As t → ∞, π(∞) = π = [π_i] satisfies

                        | −v_0  γ_01  γ_02  γ_03  ... |
  π Q = 0  with  Q =    | γ_10  −v_1  γ_12  γ_13  ... |   and  π · 1 = 1,
                        | γ_20  γ_21  −v_2  γ_23  ... |
                        |  ⋮     ⋮     ⋮     ⋮        |

where Q is called the infinitesimal generator or rate matrix.

Two-state CTMC I

A queueing system alternates between two states. In state 0, the system is idle and waiting for a customer to arrive. This idle time is an exponential random variable with mean 1/α. In state 1, the system is busy servicing a customer. The time in the busy state is an exponential random variable with mean 1/β.

• Find the state transition rate matrix:

  Q = | γ_00  γ_01 | = | −α   α |
      | γ_10  γ_11 |   |  β  −β |

• Draw the state transition rate diagram:
[Two states 0 and 1, with rate α from 0 to 1 and rate β from 1 to 0]

Two-state CTMC II

Find the state probabilities with initial state probabilities π_0(0) and π_1(0): use dπ_j(t)/dt = Σ_i γ_ij π_i(t):

  π_0'(t) = −απ_0(t) + βπ_1(t)  and  π_1'(t) = απ_0(t) − βπ_1(t)

• Using π_0(t) + π_1(t) = 1, we have

  π_0'(t) = −απ_0(t) + β(1 − π_0(t)) = −(α + β)π_0(t) + β,  with π_0(0) = p_0

• Assume π_0(t) = C_1 e^{−at} + C_2:
  (a) Find the homogeneous part: π_0'(t) + (α + β)π_0(t) = 0
  (b) Find a particular solution of π_0'(t) + (α + β)π_0(t) = β
  (c) Using the solution in (a), determine the coefficients; the solution is

  π_0(t) = β/(α + β) + C e^{−(α+β)t},  with C = p_0 − β/(α + β)

Cartridge Inventory I

An office orders laser printer cartridges in batches of four cartridges. Suppose that each cartridge lasts for an exponentially distributed time with mean 1 month. Assume that a new batch of four cartridges becomes available as soon as the last cartridge in a batch runs out.

• Find the state transition rate matrix (states = 1, 2, 3, 4 cartridges available):

      | −1   0   0   1 |
  Q = |  1  −1   0   0 |
      |  0   1  −1   0 |
      |  0   0   1  −1 |

• Find the stationary pmf for N(t), the number of cartridges available at time t.
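A sketch comparing the closed-form π_0(t) of the two-state CTMC with the matrix-exponential solution π(t) = π(0)e^{Qt} ('expm' is also suggested on the next slide); the parameter values are illustrative.

% Minimal sketch: transient state probabilities of the two-state CTMC.
alpha = 2; beta = 1; p0 = 1; t = 0.7;          % illustrative values
Q = [-alpha alpha; beta -beta];
pi_t = [p0 1-p0] * expm(Q*t);                  % numerical solution
C = p0 - beta/(alpha+beta);
pi0_closed = beta/(alpha+beta) + C*exp(-(alpha+beta)*t);
[pi_t(1) pi0_closed]                           % the two values should agree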


Cartridge Inventory II

Transient behavior of π_i(t): π(t) = π(0) e^{Qt} = π(0) E e^{Λt} E^{−1}
– E and Λ are given by

            | 1   1    1    1 |            | 0    0      0     0 |
  E = (1/2) | 1   i   −i   −1 |  and  Λ =  | 0  −1−i     0     0 |
            | 1  −1   −1    1 |            | 0    0    −1+i    0 |
            | 1  −i    i   −1 |            | 0    0      0    −2 |

– note that i = √(−1); use 'expm' in MATLAB.

[Plot: transient probabilities π_1(t), π_2(t), π_3(t), π_4(t) versus time t from 0 to 6]

Barber shop I

Customers arrive at a barber shop according to a Poisson process with rate λ. One barber serves those customers on a first-come first-served basis. The service time S_i is exponentially distributed with mean 1/µ (sec). The number of customers in the system, N(t) for t ≥ 0, forms a Markov chain:

  N(t + τ) = max(N(t) − B(τ), 0) + A(τ)

State transition probabilities (see the properties of the Poisson process):

  Pr[0 arrival (or departure) in (t, t + δ)] = 1 − λδ + o(δ)  (or 1 − µδ + o(δ))
  Pr[1 arrival (or departure) in (t, t + δ)] = λδ + o(δ)  (or µδ + o(δ))
  Pr[more than 1 arrival (or departure) in (t, t + δ)] = o(δ)

Barber shop II

Find P_n(t) ≜ Pr[N(t) = n]. For n ≥ 1,

  P_n(t + δ) = P_n(t) Pr[0 arrival & 0 departure in (t, t + δ)]
             + P_{n−1}(t) Pr[1 arrival & 0 departure in (t, t + δ)]
             + P_{n+1}(t) Pr[0 arrival & 1 departure in (t, t + δ)] + o(δ)
             = P_n(t)(1 − λδ)(1 − µδ) + P_{n−1}(t)(λδ)(1 − µδ) + P_{n+1}(t)(1 − λδ)(µδ) + o(δ).

Rearranging and dividing by δ,

  (P_n(t + δ) − P_n(t))/δ = −(λ + µ)P_n(t) + λP_{n−1}(t) + µP_{n+1}(t) + o(δ)/δ

As δ → 0, for n > 0 we have

  dP_n(t)/dt = −(λ + µ)P_n(t) + λP_{n−1}(t) + µP_{n+1}(t)
               [rate out of state n]  [rate from state n−1 to n]  [rate from state n+1 to n]

Barber shop III

For n = 0, we have

  dP_0(t)/dt = −λP_0(t) + µP_1(t).

As t → ∞, i.e., in steady state, we have P_n(∞) = π_n with dP_n(t)/dt = 0:

  λπ_0 = µπ_1
  (λ + µ)π_n = λπ_{n−1} + µπ_{n+1}  for n ≥ 1.

State transition rate diagram:
[Birth-death chain on states 0, 1, 2, ... with rate λ between n and n+1 and rate µ between n+1 and n]

The solution of the above equations is (with ρ = λ/µ)

  π_n = ρ^n π_0  and  1 = π_0 (1 + Σ_{i=1}^{∞} ρ^i)  ⇒  π_0 = 1 − ρ

Barber shop IV

ρ: the server's utilization (ρ < 1, i.e., λ < µ)

Mean number of customers in the system:

  E[N] = Σ_{n=0}^{∞} n π_n = ρ/(1 − ρ) = ρ (in server) + ρ²/(1 − ρ) (in queue)

[Plots for an M/M/1 system with 1/µ = 1: number of customers in the system, and mean system response time (sec), versus ρ; simulation and analysis agree]

Barbershop V

Recall the general state transition rate matrix Q from "Continuous-time Markov process IX":

                     | −v_0  γ_01  γ_02  γ_03  ... |
  π Q = 0 with Q =   | γ_10  −v_1  γ_12  γ_13  ... |   and  π · 1 = 1,
                     | γ_20  γ_21  −v_2  γ_23  ... |
                     |  ⋮     ⋮     ⋮     ⋮        |

– What are γ_ij and v_i in the M/M/1 queue?

  γ_ij = λ         if j = i + 1,
         µ         if j = i − 1,
         −(λ + µ)  if j = i (for i ≥ 1; −λ for i = 0),
         0         otherwise

If a and b denote the interarrival and service time, respectively, then v_i is the rate of the exponential distribution of min(a, b).

What are p_{i,i+1} and p_{i+1,i}?

  p_{i,i+1} = Pr[a < b] = λ/(λ + µ)  and  p_{i+1,i} = Pr[b < a] = µ/(λ + µ).

Barbershop VI

Distribution of the sojourn time T:

  T_N = S_1 + S_2 + ··· + S_N + S_{N+1}   (the first N terms are the customers ahead)

An arriving customer finds N customers in the system (including the customer in the server).
– By the memoryless property of the exponential distribution, the remaining service time of the customer in service is exponentially distributed:

  f_T(t) = Σ_{i=0}^{∞} µ ((µt)^i / i!) e^{−µt} π_i
         = Σ_{i=0}^{∞} µ ((µt)^i / i!) e^{−µt} ρ^i (1 − ρ) = µ(1 − ρ) e^{−µ(1−ρ)t}

which can also be obtained via the Laplace transform of the distribution of S_i.

Barbershop simulation I

Discrete event simulation
[Flowchart: set sim_time = 0; generate an arrival and advance sim_time by the interarrival time; on an arrival, Queue = Queue + 1; if the next interarrival time is shorter than the service time, schedule the next arrival as the next event, otherwise advance sim_time by the service time and set Queue = Queue − 1; if the queue is empty, schedule a new arrival, else keep serving]


Barbershop simulation II-III

clear
% Define variables
global arrival departure mservice_time
arrival = 1; departure = -1; mservice_time = 1;
% Set simulation parameters
sim_length = 30000; max_queue = 1000;
% To get delay statistics
system_queue = zeros(1,max_queue);
k = 0;
for arrival_rate = 0.1:0.025:0.97
    k = k + 1;
    % x(k) denotes utilization
    x(k) = arrival_rate*mservice_time;
    % initialize
    sim_time = 0; num_arrivals = 0; num_system = 0;
    upon_arrival = 0; total_delay = 0; num_served = 0;
    % Assuming that queue is empty
    event = arrival; event_time = exprnd(1/arrival_rate);
    sim_time = sim_time + event_time;
    while (sim_time < sim_length)
        % If an arrival occurs,
        if event == arrival
            num_arrivals = num_arrivals + 1;
            num_system = num_system + 1;
            % Record arrival time of the customer
            system_queue(num_system) = sim_time;
            upon_arrival = upon_arrival + num_system;
            % To see whether one new arrival comes or a new departure occurs
            [event, event_time] = schedule_next_event(arrival_rate);
        % If a departure occurs,
        elseif event == departure
            delay_per_arrival = sim_time - system_queue(1);
            system_queue(1:max_queue-1) = system_queue(2:max_queue);
            total_delay = total_delay + delay_per_arrival;
            num_system = num_system - 1;
            num_served = num_served + 1;
            if num_system == 0
                % nothing to serve, schedule an arrival
                event = arrival;
                event_time = exprnd(1/arrival_rate);
            else
                % still the system has customers to serve
                [event, event_time] = schedule_next_event(arrival_rate);
            end
        end
        sim_time = sim_time + event_time;
    end
    ana_queue_length(k) = x(k)/(1-x(k));
    ana_response_time(k) = 1/(1/mservice_time-arrival_rate);
    % Queue length seen by arrivals
    sim_queue_length(k) = upon_arrival/num_arrivals;
    sim_response_time(k) = total_delay/num_served;
end

Barbershop simulation IV

function [event, event_time] = schedule_next_event(arrival_rate)
global arrival departure mservice_time
minter_arrival = 1/arrival_rate;
inter_arrival = exprnd(minter_arrival);
service_time = exprnd(mservice_time);
if inter_arrival < service_time
    event = arrival;
    event_time = inter_arrival;
else
    event = departure;
    event_time = service_time;
end

Relation between DTMC and CTMC I

Recall the embedded MC: each time a state, say i, is entered, an exponentially distributed state occupancy time is selected. When the time is up, the next state j is selected according to the transition probabilities p_ij.

[Figure: a continuous-time Markov process sample path through states 1-4]

• N_i(n): the number of times state i occurs in the first n transitions
• T_i(j): the occupancy time the jth time state i occurs

The proportion of time spent by X(t) in state i after the first n transitions:

  (time spent in state i)/(time spent in all states) = Σ_{j=1}^{N_i(n)} T_i(j) / Σ_i Σ_{j=1}^{N_i(n)} T_i(j)


Relation between DTMC and CTMC II

As n → ∞, using π_i = lim N_i(n)/n, we have

  [ (N_i(n)/n) · (1/N_i(n)) Σ_{j=1}^{N_i(n)} T_i(j) ] / [ Σ_i (N_i(n)/n) · (1/N_i(n)) Σ_{j=1}^{N_i(n)} T_i(j) ]
    → π_i E[T_i] / Σ_i π_i E[T_i] = φ_i,

where π_i is the unique pmf solution to

  π_j = Σ_i π_i p_ij  and  Σ_j π_j = 1   (∗)

With E[T_i] = 1/v_i, the long-term proportion of time spent in state i approaches

  φ_i = (π_i/v_i) / Σ_i (π_i/v_i) = c π_i / v_i  →  π_i = v_i φ_i / c

Substituting π_i = (v_i φ_i)/c into (∗) yields

  (v_j φ_j)/c = (1/c) Σ_i v_i φ_i p_ij  →  v_j φ_j = Σ_i φ_i v_i p_ij = Σ_i φ_i γ_ij

Relation between DTMC and CTMC III

Recall the M/M/1 queue:

a) CTMC: [birth-death chain on 0, 1, 2, ... with rate λ up and µ down; E[T_i] = 1/v_i]
b) Embedded MC: [chain on 0, 1, 2, ... with probability p = λ/(λ + µ) up and q = µ/(λ + µ) down; from state 0 the next transition is to state 1 with probability 1]

In the embedded MC, we have the following global balance equations:

  π_0 = qπ_1
  π_1 = π_0 + qπ_2
  π_i = pπ_{i−1} + qπ_{i+1}  for i ≥ 2,  so  π_i = (p/q)π_{i−1}

Relation between DTMC and CTMC IV

Using the normalization condition Σ_{i=0}^{∞} π_i = 1,

  π_i = (p/q)^{i−1} (1/q) π_0  and  π_0 = (1 − 2p) / (2(1 − p))

Converting the embedded MC into the CTMC,

  φ_0 = (c/v_0) π_0 = (c/λ) π_0  and  φ_i = c π_i / v_i = (c/(λ + µ)) π_i

Determine c:

  Σ_{i=0}^{∞} φ_i = 1  →  c ( π_0/λ + (1/(λ + µ)) Σ_{i=1}^{∞} π_i ) = 1  →  c = 2λ

Finally, we get φ_i = ρ^i (1 − ρ) for i = 0, 1, 2, ...

Queueing systems I

[Diagram: customers arriving at a queue served by one or more servers]

The arrival times, the size of demand for service, the service capacity, and the size of the waiting room may be (random) variables.

Queueing discipline: specifies which customer to pick next for service.
• First come first served (FCFS, or FIFO)
• Last come first served (LCFS, LIFO)
• Random order, processor sharing (PS), round robin (RR)
• Priority (preemptive: resume, non-resume; non-preemptive)
• Shortest job first (SJF) and longest job first (LJF)

Queueing systems II

Customer behavior: jockeying, reneging, balking, etc.

Kendall's notation A/B/m (with optional queue-size and population-size fields):
• Population size (default ∞)
• Queue size (default ∞)
• m: number of servers
• B: service time distribution
• A: arrival (interarrival) time distribution
For A and B:
• M: Markovian, exponential distribution
• D: deterministic
• GI: general independent
• E_k: Erlang-k
• H_k: mixture of k exponentials
• PH: phase-type distribution
E.g.: M/D/2, M/M/c, G/G/1, etc.; the barbershop is an M/M/1 queue.

Queueing system III

Performance measures:
• N(t) = N_q(t) + N_S(t): number in system; N_q(t): number in queue; N_S(t): number in service
• W: waiting time in queue
• T: total time (or response time) in the system
• τ: service time
• Throughput γ: mean number of customers served per unit time
  1. γ for a non-blocking system = min(λ, mµ)
  2. γ for a blocking system = (1 − P_B)λ, where P_B is the blocking probability
• Utilization ρ ≜ fraction of time the server is busy:

  ρ = load/capacity = lim_{T→∞} λT/(µT) = λ/µ   for a single-server queue
                    = lim_{T→∞} λT/(mµT) = λ/(mµ)   for an m-server queue

Little's theorem I

Any queueing system in steady state: N = λT
• N: average number of customers in the system (the variable 'num_system' in the previous MATLAB code)
• λ: steady-state arrival rate; the arrivals need not be Poisson ('num_arrivals' in the code; t corresponds to 'sim_length')
• T: average delay per customer (the response time per customer, from 'total_delay')

[Figure: cumulative numbers of arrivals α(t) and departures β(t) versus time; the area between the curves is the total customer-time spent in the system]

Proof: For a system with N(0) = 0 and N(t) = 0, as t → ∞,

  N_t = (1/t) ∫_0^t N(τ) dτ = (1/t) Σ_{i=1}^{α(t)} T_i = (α(t)/t) · (Σ_{i=1}^{α(t)} T_i / α(t)) = λ_t · T_t.

If N(t) ≠ 0, we have  (1/t) Σ_{i=0}^{β(t)} T_i ≤ N_t ≤ λ_t T_t.

Little's theorem II

As an alternative, for the cumulative processes: N(t) = α(t) − β(t), and with γ(t) denoting the area between the arrival and departure curves (the accumulated customer-time),

  N_t = γ(t)/t,   λ_t = α(t)/t,   T_t = γ(t)/α(t) = (γ(t)/t) · (t/α(t)) = N_t / λ_t

As t → ∞, we have

  λT = λ(W + x̄) = N_q + ρ

where x̄ is the mean service time. This is valid for any queue (even with any service order) as long as the limits of λ_t and T_t exist as t → ∞.
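A self-contained event-driven sketch that checks N = λT on a simulated M/M/1 queue; it re-implements the bookkeeping of the earlier script in compact form, with illustrative λ and µ.

% Minimal sketch: verify Little's law on an M/M/1 queue.
lambda = 0.5; mu = 1; T_end = 1e5;
t = 0; n = 0; area = 0; arrivals = 0; total_T = 0; q = [];
next_arr = exprnd(1/lambda); next_dep = inf;
while t < T_end
    [t_next, is_arr] = min([next_arr, next_dep]);
    area = area + n*(t_next - t); t = t_next;   % accumulate integral of N(t)
    if is_arr == 1                               % arrival event
        arrivals = arrivals + 1; n = n + 1; q(end+1) = t;
        next_arr = t + exprnd(1/lambda);
        if n == 1, next_dep = t + exprnd(1/mu); end
    else                                         % departure event
        total_T = total_T + (t - q(1)); q(1) = []; n = n - 1;
        if n > 0, next_dep = t + exprnd(1/mu); else, next_dep = inf; end
    end
end
N_avg = area/t; lam_hat = arrivals/t; T_avg = total_T/(arrivals - n);
[N_avg, lam_hat*T_avg]     % the two values should be close (Little's law)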


Little’s theorem III Increasing the arrival and transmission rates by the same fator Finite queue In a packet transmission system, Arrival rate (packets/sec) is increased from λ to Kλ for K > 1 • The packet length distribution remains the same (exponential), … … • with mean 1/µ bits The transmission capacity (C bps) is increased by a factor of K • Performance The average number of packets in the system remain the same Network of queues • ρ N = with ρ = λ/(µC) 1 − ρ Average delay per packet • λW = N → W = N /(Kλ) Aggregation is better: increasing a transmission line by K times can allow K times as many packets/sec with K times smaller average delay per packet 3-63 3-64 Statistical multiplexing vs TDMA or FDMA Little’s theorem: example I

Statistical multiplexing vs TDMA or FDMA

Multiplexing: m Poisson packet streams, each with rate λ/m (packets/sec), are transmitted over a communication link with exponentially distributed packet transmission time of mean 1/µ.

a) Statistical multiplexing:  T = 1/(µ − λ)
b) TDMA or FDMA:              T = m/(µ − λ)

When do we need TDMA or FDMA?
– In a multiplexer, packet generation times overlap, so it must buffer and delay some of the packets.

Little's theorem: example I

Estimating throughput in a time-sharing system
[Figure: N terminals connected to a time-sharing computer]

A user logs into the system through a terminal and, after an initial reflection period of average length R, submits a job that requires an average processing time P at the computer. Jobs queue up inside the computer and are served by a single CPU according to some unspecified priority or time-sharing rule.

What is the maximum throughput sustainable by the system?
– Assume that there is always a user ready to take the place of a departing user, so the number of users in the system is always N.

Little's theorem: example II

The average time a user spends in the system: using T = N_t/λ, we can rewrite T = R + D
– D: the average delay between the time a job is submitted to the computer and the time its execution is completed, with D ∈ [P, NP], so

  R + P ≤ T ≤ R + NP

Combining this with λ = N/T,

  N/(R + NP) ≤ λ ≤ min{ 1/P, N/(R + P) }

– the throughput is bounded by 1/P, the maximum job execution rate.

[Figure 3.5(a): bounds on attainable throughput versus the number of terminals N — a bound induced by the limited number of terminals, a bound 1/P induced by the CPU processing capacity, and the guaranteed throughput curve, crossing near N = 1 + R/P]

Little's theorem: example III

Using T = N/λ, we also obtain bounds for the average user delay when the system is fully loaded:

  max{NP, R + P} ≤ T ≤ R + NP

[Figure 3.5(b): upper bound R + NP, lower bound due to the limited CPU processing capacity, and the delay R + P assuming no waiting in queue; the delay rises essentially in proportion to N]

As the number of terminals N increases, the throughput approaches the maximum 1/P, while the average user delay rises essentially in direct proportion with N. The number of terminals is the throughput bottleneck when N < 1 + R/P, in which case the computer stays idle for a substantial portion of the time while all users are engaged in reflection; the limited processing power of the computer becomes the bottleneck when N > 1 + R/P.

The bounds obtained are independent of system parameters such as the statistics of the reflection and processing times and the manner in which jobs are served by the CPU; we owe this convenient situation to the generality of Little's theorem.


For a stochastic process, N ≡ {N (t), t ≥ 0} for t ≥ 0 and an Suppose a random process which spends its time in different states E j arbitrary set B ∈ N : In equilibrium, we can associate with each state Ej two different  1, if N (t) ∈ B, 1 Z t probabilities U (t) = ⇒ V (t) = U (τ)dτ. 0, otherwise. t 0 The probability of the state as seen by an outside random observer • For a Poisson arrival process A(t), – π : prob. that the system is in the state E at a random instant j j Z t The probability of the state seen by an arriving customer • Y (t) = U (τ)dA(τ) ⇒ Z(t) = Y (t)/A(t) ∗ 0 – πj : prob. that the system is in the state Ej just before (a randomly chosen) arrival Lack of Anticipation Assumption (LAA): For each t ≥ 0, In general, we have π 6= π∗ j j {A(t + u) − A(t), u ≥ 0} and {U (s), 0 ≤ s ≤ t} are independent: When the arrival process is Poisson, we have Future inter-arrival times and service times of previously arrived customers are independent. π = π∗ j j Under LAA, as t → ∞, PASTA ensures V (t) → V (∞) w.p. 1 if Z(t) → V (∞) w.p.1


PASTA theorem

Proof:
• For sufficiently large n, Y(t) is approximated as

  Y_n(t) = Σ_{k=0}^{n−1} U(k t/n) [A((k + 1)t/n) − A(k t/n)]

• LAA decouples the expectation (each increment has mean λt/n):

  E[Y_n(t)] = λt E[ Σ_{k=0}^{n−1} U(k t/n)/n ]

• As n → ∞, if |Y_n(t)| is bounded,

  lim_{n→∞} E[Y_n(t)] = E[Y(t)] = λt E[V(t)] = λ E[ ∫_0^t U(τ) dτ ].

That is, the expected number of arrivals who find the system in state B equals the arrival rate times the expected length of time it is there.

Systems where PASTA does not hold

Ex1) D/D/1 queue
• Deterministic arrivals every 10 msec
• Deterministic service times of 9 msec

[Sample path of the D/D/1 queue: busy periods of 9 msec starting at 0, 10, 20, ...]

• Arrivals always find the system empty.
• The system is occupied on average with probability 0.9.

Ex2) LAA violated: service times of a current customer depend on an interarrival time of a future customer
• Your own PC (one customer, one server)
• Your own PC is always free when you need it: π*_0 = 1
• π_0 = proportion of time the PC is free (< 1)

M/M/1/K I

M/M/1/K: the system can accommodate K customers (including the one in service).

[Birth-death chain on states 0, ..., K; arrivals finding K customers in the system are blocked]

State balance equations:

  λπ_0 = µπ_1
  (λ + µ)π_i = λπ_{i−1} + µπ_{i+1}  for 1 ≤ i ≤ K − 1
  µπ_K = λπ_{K−1}

After rearranging, we have

  λπ_{i−1} = µπ_i  for 1 ≤ i ≤ K

For i ∈ {0, 1, ..., K}, the steady-state probabilities are

  π_n = ρ^n π_0  and  Σ_{n=0}^{K} π_n = 1  ⇒  π_0 = (1 − ρ)/(1 − ρ^{K+1})

M/M/1/K II

π_K: the probability that an arriving customer finds the system full. Due to PASTA, this is the blocking probability:

  π_K = ((1 − ρ)/(1 − ρ^{K+1})) ρ^K

Blocking probability in simulation:

  P_B = (total # of blocked arrivals at arrival instants) / (total # of arrivals at the system)

[Plot: P_B versus ρ for K = 5 and K = 10; analysis and simulation agree]

M/M/1/K Simulation I-II

clear
% Define variables
global arrival departure mservice_time
arrival = 1; departure = -1; mservice_time = 1;
% Define simulation parameters
sim_length = 30000; K = 10;
system_queue = zeros(1,K);
k = 0; max_iter = 5;
for arrival_rate = 0.1:0.025:0.97
    k = k + 1;
    x(k) = arrival_rate*mservice_time;
    for iter = 1:max_iter
        % initialize
        sim_time=0; num_arrivals=0; num_system=0; upon_arrival=0;
        total_delay=0; num_served=0; dropped=0;
        % Assuming that queue is empty
        event = arrival; event_time = exprnd(1/arrival_rate);
        sim_time = sim_time + event_time;
        while (sim_time < sim_length)
            % If an arrival occurs,
            if event == arrival
                num_arrivals = num_arrivals + 1;
                if num_system == K
                    dropped = dropped + 1;
                else
                    num_system = num_system + 1;
                    system_queue(num_system) = sim_time;
                    upon_arrival = upon_arrival + num_system;
                end
                % To see whether one new arrival comes or a new departure occurs
                [event, event_time] = schedule_next_event(arrival_rate);
            % If a departure occurs,
            elseif event == departure
                delay_per_arrival = sim_time - system_queue(1);
                system_queue(1:K-1) = system_queue(2:K);
                total_delay = total_delay + delay_per_arrival;
                num_system = num_system - 1;
                num_served = num_served + 1;
                if num_system == 0
                    % nothing to serve, schedule an arrival
                    event = arrival;
                    event_time = exprnd(1/arrival_rate);
                else
                    % still the system has customers to serve
                    [event, event_time] = schedule_next_event(arrival_rate);
                end
            end
            sim_time = sim_time + event_time;
        end
        Pd_iter(iter) = dropped/num_arrivals;
    end
    piK(k) = x(k)^K*(1-x(k))./(1-x(k)^(K+1));
    Pd(k) = mean(Pd_iter);
end
% use the schedule_next_event function defined earlier

M/M/m queue I

M/M/m: there are m parallel servers, whose service times are exponentially distributed with mean 1/µ.

[State transition rate diagram of M/M/m: birth rate λ in every state; death rate nµ in state n for n ≤ m and mµ for n ≥ m]

When m servers are busy, the time until the next departure, X, is

  X = min(τ_1, τ_2, ..., τ_m)  ⇒  Pr[X > t] = Pr[min(τ_1, τ_2, ..., τ_m) > t]
    = Π_{i=1}^{m} Pr[τ_i > t] = e^{−mµt}   (i.i.d.)

Global balance equations:

  λπ_0 = µπ_1
  (λ + min(n, m)µ)π_n = λπ_{n−1} + min(n + 1, m)µπ_{n+1}  for n ≥ 1

M/M/m queue II

The previous global balance equations can be rewritten as

  λπ_{n−1} = min(n, m)µπ_n  for n ≥ 1

Using a = λ/µ and ρ = λ/(mµ),

  π_n = (a^n/n!) π_0  for n < m,  and  π_n = (a^m/m!) ρ^{n−m} π_0  for n ≥ m

From the normalization condition, π_0 is obtained:

  1 = Σ_{i=0}^{∞} π_i = π_0 { Σ_{i=0}^{m−1} a^i/i! + (a^m/m!) Σ_{i=m}^{∞} ρ^{i−m} }

Erlang C formula, C(m, a):

  C(m, a) = Pr[W > 0] = Pr[N ≥ m] = Σ_{i=m}^{∞} π_i = ((mρ)^m / m!) · π_0/(1 − ρ)
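A direct MATLAB transcription of the Erlang C formula above, as a small function file (the name erlang_c.m is mine); it requires a < m so that ρ < 1.

% Minimal sketch: Erlang C, the probability an M/M/m arrival must wait.
% Save as erlang_c.m; call e.g. erlang_c(5, 3).
function C = erlang_c(m, a)
rho = a/m;
p0 = 1 / (sum(a.^(0:m-1) ./ factorial(0:m-1)) + a^m/factorial(m)/(1-rho));
C = a^m/factorial(m) * p0/(1-rho);
end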

M/M/c/c I

c servers, and only c customers can be accommodated.

[Birth-death chain on states 0, ..., c with birth rate λ and death rate nµ in state n]

The balance equations are (a = λ/µ, measured in Erlangs):

  λπ_{n−1} = nµπ_n  ⇒  π_n = (a/n) π_{n−1} = (a^n/n!) π_0

Using Σ_{n=0}^{c} π_n = 1, we have

  π_n = (a^n/n!) / Σ_{i=0}^{c} (a^i/i!)

Erlang B formula: B(c, a) = π_c
– valid for the M/G/c/c system: note that it depends only on the mean of the service time distribution.

M/M/c/c II

Erlang capacity: telephone systems with c channels.

[Plots: blocking probability B(c, a) versus offered traffic intensity a for c = 1 to 100; and P_B versus a for c = 3 and c = 5, analysis and simulation agreeing]

Example: a system with blocking I

In the Select City shopping mall, customers arrive at its underground parking lot according to a Poisson process with a rate of 60 cars per hour. Parking time follows a Weibull distribution with mean 2.5 hours, and the parking lot can accommodate 150 cars. When the parking lot is full, an arriving customer has to park his car somewhere else.

Find the fraction of customers finding all places occupied upon arrival.

[Figure: two different distributions with the same mean — a Weibull density f(x) = (k/α)(x/α)^{k−1} e^{−(x/α)^k} with α = 2.7228, k = 5, and an exponential density f(x) = (1/α)e^{−x/α}]
– Mean of the Weibull distribution: αΓ(1 + 1/k), where Γ(x) = ∫_0^∞ t^{x−1} e^{−t} dt is the gamma function.

Example: a system with blocking II

c = 150 and a = λ/µ = 60 × 2.5 = 150:

  B(c, a) = (a^c/c!) / Σ_{i=0}^{c} (a^i/i!)  at  c = 150, a = 150

Dividing the numerator and denominator by Σ_{n=0}^{c−1} a^n/n!,

  B(c, a) = (a^c/c!) / ( Σ_{i=0}^{c−1} a^i/i! + a^c/c! )
          = [ (a^c/c!)/Σ_{n=0}^{c−1} a^n/n! ] / [ 1 + (a^c/c!)/Σ_{n=0}^{c−1} a^n/n! ]
          = (a/c)B(c − 1, a) / (1 + (a/c)B(c − 1, a))
          = aB(c − 1, a) / (c + aB(c − 1, a)),  with B(0, a) = 1,

a recursion that is useful numerically, since it avoids the huge factorials.
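The recursion implemented in a few lines for the parking-lot numbers; only the formula above is used.

% Minimal sketch: Erlang B by recursion, applied to the parking lot example
% (c = 150 servers, offered load a = 60*2.5 = 150 Erlangs).
a = 150; c = 150;
B = 1;                           % B(0, a) = 1
for n = 1:c
    B = a*B / (n + a*B);         % B(n, a) from B(n-1, a)
end
B                                % fraction of arrivals finding the lot full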

Finite source population: M/M/C/C/K system I

Consider a loss system (no waiting places) in which the arrivals originate from a finite population of sources: the total number of customers is K.
• Each user alternates between two states: active with mean 1/µ and idle with mean 1/λ.
• The probability for a user to be idle or active is

  π_0 = (1/λ)/(1/λ + 1/µ)  and  π_1 = (1/µ)/(1/λ + 1/µ)

• Call arrival rate: π_0 λ; offered load: π_1 = a/(1 + a), with a = λ/µ.
• The time to the next call attempt by a customer, the so-called thinking time (idle time), obeys an exponential distribution with mean 1/λ (sec).
• Blocked calls are lost:
  – a blocked call does not lead to reattempts; the customer starts a new thinking time. The time to the next attempt has the same exponential distribution with mean 1/λ.
  – the call holding time is exponentially distributed with mean 1/µ.

M/M/C/C/K system II

If C ≥ K, each customer has its own server, i.e., no blocking.

If C < K, the system can be described as a birth-death chain on states 0, 1, ..., C:

  ((K − i)λ + iµ)π_i = (K − i + 1)λπ_{i−1} + (i + 1)µπ_{i+1}

M/M/C/C/K system III

For j = 1, 2, ..., C, we have

  (K − j + 1)λπ_{j−1} = jµπ_j  ⇒  π_j = C(K, j) a^j π_0,

where C(K, j) = K!/(j!(K−j)!). Applying Σ_{j=0}^{C} π_j = 1,

  π_j = C(K, j) a^j / Σ_{k=0}^{C} C(K, k) a^k

Time blocking (or congestion): the proportion of time the system spends in state C; the equilibrium probability of state C is

  P_B = π_C

– The probability of all resources being busy in a given observational period.
– Insensitivity: like the Erlang B formula, this result is insensitive to the form of the holding time distribution (though the derivation above was explicitly based on the assumption of an exponential holding time distribution).

M/M/C/C/K system IV

Call blocking: the probability that an arriving call is blocked, i.e., P_L.
• The arrival rate is state-dependent, i.e., (K − N(t))λ: not Poisson.
• PASTA does not hold: the time blocking P_B cannot represent P_L.
• λ_T: call arrivals on average,

  λ_T ∝ Σ_{i=0}^{C} (K − i)λπ_i

  – P_L: the probability that a call finds the system blocked.
  – If λ_T = 10000 and P_L = 0.01, then λ_T P_L = 100 calls are lost.
• λ_C: call arrivals when the system is in the blocking state,

  λ_C ∝ (K − C)λ

  – P_B λ_C: blocked calls at arrival instants.

  P_L λ_T = P_B λ_C

  – Among all arrivals, those that find the system blocked must equal the call arrivals that see the busy system.

M/M/C/C/K system V

Call blocking P_L can be obtained by

  P_L λ_T = P_B λ_C  →  P_L = (λ_C/λ_T) P_B ≤ P_B

Engset formula:

  P_L(K) = (K − C)λπ_C / Σ_{i=0}^{C} (K − i)λπ_i
         = (K − C) (K!/(C!(K−C)!)) a^C / Σ_{i=0}^{C} (K − i) (K!/(i!(K−i)!)) a^i
         = ((K−1)!/(C!(K−1−C)!)) a^C / Σ_{i=0}^{C} ((K−1)!/(i!(K−1−i)!)) a^i
         = C(K−1, C) a^C / Σ_{i=0}^{C} C(K−1, i) a^i

– The state distribution seen by an arriving customer is the same as the equilibrium distribution in a system with one less customer. It is as if the arriving customer were an "outside observer".
– P_L(K) = P_B(K − 1); as K → ∞, P_L → P_B (see the numerical sketch below).
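A numerical sketch of the Engset relation P_L(K) = P_B(K − 1); the values of K, C and a are illustrative.

% Minimal sketch: Engset call blocking via binomial state weights.
K = 20; C = 5; a = 0.1;
w  = @(n, j) nchoosek(n, j) * a^j;                      % unnormalized weights
PB = w(K, C)   / sum(arrayfun(@(j) w(K, j),   0:C));    % time blocking
PL = w(K-1, C) / sum(arrayfun(@(j) w(K-1, j), 0:C));    % call blocking (Engset)
[PB PL]                                                 % note PL <= PB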

Where are we?

Elementary queueing models
– M/M/1, M/M/C, M/M/C/C/K: product-form solutions
– Bulk queues (not discussed here)
Intermediate queueing models (product-form solutions)
– Time reversibility of Markov processes
– Detailed balance equations of time-reversible MCs
– Multidimensional birth-death processes
– Networks of queues: open and closed networks
Advanced queueing models
– M/G/1-type queues: embedded MC and mean-value analysis
– M/G/1 with vacations and priority queues
– G/M/m queue
More advanced queueing models (omitted)
– Algorithmic approaches to get steady-state solutions

Time Reversibility of discrete-time MC I

For an irreducible, aperiodic, discrete-time MC (X_n, X_{n+1}, ...) having transition probabilities p_ij and stationary distribution π_i for all i:

The time-reversed MC is defined as X*_n = X_{τ−n} for an arbitrary τ > 0.

[Figure: the forward process and the time-reversed process]

1) Transition probabilities of X*_n:

  p*_ij = π_j p_ji / π_i

2) X_n and X*_n have the same stationary distribution π_i:

  Σ_{j=0}^{∞} π_i p*_ij = Σ_{j=0}^{∞} π_j p_ji = π_i

Time Reversibility of discrete-time MC II

Proof for 1) p*_ij = π_j p_ji / π_i:

  p*_ij = Pr[X_m = j | X_{m+1} = i, X_{m+2} = i_2, ..., X_{m+k} = i_k]
        = Pr[X_m = j, X_{m+1} = i, X_{m+2} = i_2, ..., X_{m+k} = i_k]
          / Pr[X_{m+1} = i, X_{m+2} = i_2, ..., X_{m+k} = i_k]
        = Pr[X_m = j, X_{m+1} = i] Pr[X_{m+2} = i_2, ..., X_{m+k} = i_k | X_m = j, X_{m+1} = i]
          / (Pr[X_{m+1} = i] Pr[X_{m+2} = i_2, ..., X_{m+k} = i_k | X_{m+1} = i])
        = Pr[X_m = j, X_{m+1} = i] / Pr[X_{m+1} = i]
        = Pr[X_{m+1} = i | X_m = j] Pr[X_m = j] / Pr[X_{m+1} = i]
        = p_ji π_j / π_i

Proof for 2) Using the above result,

  Σ_{i∈S} π_i p*_ij = Σ_{i∈S} π_i (π_j p_ji / π_i) = π_j

Time Reversibility of discrete-time MC III

A Markov process X_n is said to be reversible if
– the transition probabilities of the forward and reversed chains are the same:

  p*_ij = Pr[X_m = j | X_{m+1} = i] = p_ij = Pr[X_{m+1} = j | X_m = i]

• Time reversibility ⇔ the detailed balance equations (DBEs) hold:

  π_i p*_ij = π_j p_ji  →  π_i p_ij = π_j p_ji   (detailed balance equation)

What types of Markov processes satisfy this detailed balance equation? Discrete-time birth-death (BD) processes:
• transitions occur only between neighboring states: p_ij = 0 for |i − j| > 1.

[BD chain diagram on states 0, 1, 2, ...]

Time Reversibility of discrete-time MC IV

A transmitter's queue with stop-and-wait ARQ (θ = q_r) from Mid-term I.
• Is this process reversible?

[Chain diagram on states 0, 1, 2, ...]

• Global balance equations (GBEs):

  π_0 = (1 − p)π_0 + (1 − p)θπ_1
  π_1 = pπ_0 + (pθ + (1 − p)(1 − θ))π_1 + (1 − p)θπ_2

For i = 2, 3, ..., we have

  π_i = p(1 − θ)π_{i−1} + (pθ + (1 − p)(1 − θ))π_i + (1 − p)θπ_{i+1}

• Instead, we can use DBEs, or simplify the GBEs using a cut between states n and n + 1, e.g.,

  p(1 − θ)π_n = (1 − p)θπ_{n+1}  ↔  Σ_{j=0}^{n} Σ_{i=n+1}^{∞} π_j p_ji = Σ_{j=0}^{n} Σ_{i=n+1}^{∞} π_i p_ij

Time Reversibility of discrete-time MC V

Kolmogorov criterion
• A discrete-time Markov chain is reversible if and only if

  p_{i_1 i_2} p_{i_2 i_3} ··· p_{i_{n−1} i_n} p_{i_n i_1} = p_{i_1 i_n} p_{i_n i_{n−1}} ··· p_{i_3 i_2} p_{i_2 i_1}

for any finite sequence of states i_1, i_2, ..., i_n and any n.

Proof:
• For a reversible chain, the detailed balance equations hold; multiplying them around any loop of states gives the criterion.

[Loop diagram over states 0-1-2-3]

• Fixing two states, i_1 = i and i_n = j, and multiplying over all intermediate states,

  p_{i i_2} p_{i_2 i_3} ··· p_{i_{n−1} j} p_{ji} = p_{ij} p_{j i_{n−1}} ··· p_{i_3 i_2} p_{i_2 i}

Time Reversibility of discrete-time MC VI

From the Kolmogorov criterion, summing over the intermediate states, we get

  p^(n−1)_ij p_ji = p_ij p^(n−1)_ji

As n → ∞, we have

  lim_{n→∞} p^(n−1)_ij p_ji = lim_{n→∞} p_ij p^(n−1)_ji  →  π_j p_ji = π_i p_ij

Inspect whether the following two-state MC is reversible:

  P = | 0    1   |
      | 0.5  0.5 |

– It is a small BD process.
– Using the state probabilities π_0 = 1/3 and π_1 = 2/3,

  π_0 p_01 = (1/3) · 1 = π_1 p_10 = (2/3) · (1/2)

Time Reversibility of discrete-time MC VII

Inspect whether the following three-state MC is reversible:

      | 0    0.6  0.4 |
  P = | 0.1  0.8  0.1 |
      | 0.5  0    0.5 |

• Using the Kolmogorov criterion,

  p_12 p_23 p_31 = 0.6 × 0.1 × 0.5 ≠ p_13 p_32 p_21 = 0.4 × 0 × 0.1 = 0,

so the chain is not reversible.
• Inspecting the state transition diagram, it is not a BD process.
• If the state transition diagram of a Markov process is a tree, then the process is time reversible.

Continuous-time reversible MC I

For a continuous-time MC X(t), we have a discrete-time embedded Markov chain whose stationary pmf and state transition probabilities are π_i and p̃_ij.

[Figure: forward process and reverse process sample paths; the embedded Markov process records the state at each jump]

There is a reversed embedded MC with π_i p̃*_ij = π_j p̃_ji for all i ≠ j.

[Diagram: a CTMC on states 0, 1, 2, 3, 4, ... and its embedded MC (a BD process)]

– A generalization of BD processes: at a cut boundary, the detailed balance equation is satisfied.

Recall the state occupancy time of the forward process A continuous-time MC whose stationary probability of state i is θi,

−vi s and state transition rate from j to i is γji has a reversible MC whose Pr[Ti > t + s|Ti > t] = Pr[Ti > s] = e ∗ ∗ state transition rate is γij, if we find γij of satisfying

π p˜ π γ If X(t) = i, the probability that the reversed process remains in state ∗ ∗ j ji j ji γij = vip˜ij = vi = vi = θjγji/θi i for an additional s seconds is πi p˜ji =γji /vj πivj | {z } from embedded MC Pr[X(t0) = i, t − s ≤ t0 ≤ t|X(t) = i] = e−vi s ∗ ; after staying t, probability that it shall stay s sec more – p˜ij(=p ˜ij): state transition probability of the reversed embedded MC – Continuous-time MC whose state occupancy times are exponentially

Forward Process Reverse Process distributed is reversible if its embedded MC is reversible ∗ Additionally, we have vj = vj

X ∗ X ∗ X X ∗ θiγij ∗ = θj γji = θjvj = θjvj ⇒ γij = γij Embedded Markov process time γij =θj γji /θi i6=j i6=j j6=i j6=i

3-97 3-98

Continuous-time reversible MC IV

Detailed balance equations hold for continuous-time reversible MCs:

  θ_j γ_ji (input rate to i) = θ_i γ_ij (output rate from i)  for j = i + 1

– Birth-death systems with γ_ij = 0 for |i − j| > 1.
– Since the embedded MC is reversible,

  π_i p̃_ij = π_j p̃_ji  →  (v_i θ_i/c) p̃_ij = (v_j θ_j/c) p̃_ji  →  θ_i γ_ij = θ_j γ_ji

If there exists a set of positive numbers θ_i that sum up to 1 and satisfy

  θ_i γ_ij = θ_j γ_ji  for i ≠ j,

then the MC is reversible and θ_i is its unique stationary distribution.
– Birth-death processes, e.g., M/M/1, M/M/c, M/M/∞.

Kolmogorov criterion for continuous-time MCs:
– A continuous-time Markov chain is reversible if and only if

  γ_{i_1 i_2} γ_{i_2 i_3} ··· γ_{i_n i_1} = γ_{i_1 i_n} γ_{i_n i_{n−1}} ··· γ_{i_3 i_2} γ_{i_2 i_1}

– The proof is the same as in the discrete-time case.

M/M/2 queue with heterogeneous servers I

Servers A and B have service rates µ_A and µ_B. When the system is empty, arrivals go to A with probability p and to B with probability 1 − p. Otherwise, the head of the queue takes the first free server.

[State diagram: state 0, states 1A and 1B (only A or only B busy), then states 2, 3, ...]

Under what condition is this system time-reversible?

For n = 2, 3, ...,

  π_n = π_2 (λ/(µ_A + µ_B))^{n−2}

Global balance equations along the cuts:

  λπ_0 = µ_A π_{1,A} + µ_B π_{1,B}
  (µ_A + µ_B)π_2 = λ(π_{1,A} + π_{1,B})
  (µ_A + λ)π_{1,A} = pλπ_0 + µ_B π_2

M/M/2 queue with heterogeneous servers II

After some manipulations,

  π_{1,A} = (λ/µ_A) · (λ + p(µ_A + µ_B))/(2λ + µ_A + µ_B) · π_0
  π_{1,B} = (λ/µ_B) · (λ + (1 − p)(µ_A + µ_B))/(2λ + µ_A + µ_B) · π_0
  π_2 = (λ²/(µ_A µ_B)) · (λ + (1 − p)µ_A + pµ_B)/(2λ + µ_A + µ_B) · π_0

π_0 can be determined by π_0 + π_{1,A} + π_{1,B} + Σ_{n=2}^{∞} π_n = 1.

If it is reversible, use the detailed balance equations:

  (1/2)λπ_0 = µ_A π_{1,A}  →  π_{1,A} = 0.5(λ/µ_A)π_0
  (1/2)λπ_0 = µ_B π_{1,B}  →  π_{1,B} = 0.5(λ/µ_B)π_0
  π_2 = (0.5λ²/(µ_A µ_B)) π_0

– With p = 1/2 the general solution reduces to exactly this form; the Kolmogorov criterion around the loop 0 → 1A → 2 → 1B → 0 also gives p = 1/2 as the reversibility condition.

Multidimensional Markov chains I

Suppose that X_1(t) and X_2(t) are independent reversible MCs.
• Then X(t) = (X_1(t), X_2(t)) is a reversible MC.
• Two independent M/M/1 queues, where the arrival and service rates at queue i are λ_i and µ_i:
  – (N_1(t), N_2(t)) forms an MC.
  – Stationary distribution (ρ_i = λ_i/µ_i):

    p(n_1, n_2) = (1 − ρ_1)ρ_1^{n_1} (1 − ρ_2)ρ_2^{n_2}

  – Detailed balance equations:

    µ_1 p(n_1 + 1, n_2) = λ_1 p(n_1, n_2)
    µ_2 p(n_1, n_2 + 1) = λ_2 p(n_1, n_2)

  – Verify that the Markov chain is reversible (Kolmogorov criterion).

[Two-dimensional state transition diagram over (n_1, n_2)]

Multidimensional Markov chains II

– Owing to time reversibility, detailed balance equations hold:

  µ_1 π(n_1 + 1, n_2) = λ_1 π(n_1, n_2)
  µ_2 π(n_1, n_2 + 1) = λ_2 π(n_1, n_2)

– Stationary state distribution:

  π(n_1, n_2) = (1 − λ_1/µ_1)(λ_1/µ_1)^{n_1} (1 − λ_2/µ_2)(λ_2/µ_2)^{n_2}

– This can be generalized to any number of independent queues, e.g., M/M/1, M/M/c or M/M/∞:

  π(n_1, n_2, ..., n_K) = π_1(n_1) π_2(n_2) ··· π_K(n_K)

– a "product-form" distribution.

Truncation of a Reversible Markov chain I

X(t) is a reversible Markov process with state space S and stationary distribution π_j for j ∈ S.
– Truncated to a set E ⊂ S such that the resulting chain Y(t) is irreducible. Then Y(t) is reversible and has the stationary distribution

  π̂_j = π_j / Σ_{k∈E} π_k,  j ∈ E

– This is the conditional probability that, in steady state, the original process is in state j, given that it is somewhere in E.

Proof:

  π̂_j q_ji = π̂_i q_ij  ⇒  (π_j / Σ_{k∈E} π_k) q_ji = (π_i / Σ_{k∈E} π_k) q_ij  ⇒  π_j q_ji = π_i q_ij

  Σ_{j∈E} π̂_j = Σ_{j∈E} π_j / Σ_{k∈E} π_k = 1

Truncation of a Reversible Markov chain II

The Markov processes for M/M/1 and M/M/c are reversible.

• State probabilities of the M/M/1/K queue:

  π_i = (1 − ρ)ρ^i / Σ_{i=0}^{K} (1 − ρ)ρ^i = (1 − ρ)ρ^i / (1 − ρ^{K+1})  for ρ = λ/µ

  – the truncated version of the M/M/1(/∞) queue.

• State probabilities of the M/M/c/c queue, truncating the M/M/c(/∞) queue (whose probabilities for n ≤ c are π_n = (a^n/n!)π_0, with a = λ/µ; see the M/M/m slides):

  π̂_n = π_n / Σ_{n=0}^{c} π_n = (a^n/n!) / Σ_{i=0}^{c} (a^i/i!)

Truncation of a Reversible Markov chain III

The two independent M/M/1 queues of the previous example share a common buffer of size B (= 2).
• An arriving customer who finds B customers waiting is blocked.
• The state space is restricted to

  E = {(n_1, n_2) : (n_1 − 1)^+ + (n_2 − 1)^+ ≤ B}

• Distribution of the truncated chain:

  π(n_1, n_2) = π(0, 0) ρ_1^{n_1} ρ_2^{n_2},  (n_1, n_2) ∈ E

• Normalizing:

  π(0, 0) = [ Σ_{(n_1,n_2)∈E} ρ_1^{n_1} ρ_2^{n_2} ]^{−1}

[State diagram of the truncated chain for B = 2]

The theorem specifies the joint distribution up to the normalization constant; calculation of the constant is often tedious (see the numerical sketch below).

Truncation of a Reversible Markov chain IV

Two session classes in a circuit switching system with preferential
treatment for one class, for a total of C channels
• Type 1: Poisson arrivals with λ_1 require exponentially distributed
  service with rate μ_1 – admissible only up to K
• Type 2: Poisson arrivals with λ_2 require exponentially distributed
  service with rate μ_2 – can be accepted until C channels are used up
• State space: S = {(n_1, n_2) | 0 ≤ n_1 ≤ K, n_1 + n_2 ≤ C}
• The state probabilities can be obtained as
  P(n_1, n_2) = P(0, 0) (ρ_1^{n_1}/n_1!)(ρ_2^{n_2}/n_2!)
  for 0 ≤ n_1 ≤ K, n_1 + n_2 ≤ C, n_2 ≥ 0
  – P(0, 0) can be determined by Σ_{n_1,n_2} P(n_1, n_2) = 1

Truncation of a Reversible Markov chain V

• Blocking probability of type 1 (blocked when n_1 = K or n_1 + n_2 = C):
  P_b1 = [Σ_{n_2=0}^{C−K} (ρ_1^K/K!)(ρ_2^{n_2}/n_2!)
          + Σ_{n_1=0}^{K−1} (ρ_1^{n_1}/n_1!)(ρ_2^{C−n_1}/(C−n_1)!)]
         / [Σ_{n_1=0}^{K} (ρ_1^{n_1}/n_1!) Σ_{n_2=0}^{C−n_1} (ρ_2^{n_2}/n_2!)]
• Blocking probability of type 2 (blocked when n_1 + n_2 = C):
  P_b2 = [Σ_{n_1=0}^{K} (ρ_1^{n_1}/n_1!)(ρ_2^{C−n_1}/(C−n_1)!)]
         / [Σ_{n_1=0}^{K} (ρ_1^{n_1}/n_1!) Σ_{n_2=0}^{C−n_1} (ρ_2^{n_2}/n_2!)]
• For this kind of system, the blocking probabilities are valid for a
  broad class of holding time distributions
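The two blocking formulas follow directly by summing the product-form
probabilities over the blocking states. A minimal Python sketch (the values
of ρ_1, ρ_2, C, K are illustrative assumptions):

  # Sketch: blocking probabilities in the two-class system with C channels,
  # class 1 admitted only while n1 < K.
  from math import factorial

  rho1, rho2, C, K = 5.0, 3.0, 10, 4      # assumed parameters

  def term(n1, n2):
      return rho1 ** n1 / factorial(n1) * rho2 ** n2 / factorial(n2)

  # State space S = {(n1, n2): 0 <= n1 <= K, n1 + n2 <= C}
  S = [(n1, n2) for n1 in range(K + 1) for n2 in range(C - n1 + 1)]
  G = sum(term(n1, n2) for (n1, n2) in S)

  # Class 1 blocked when n1 = K or n1 + n2 = C; class 2 only when n1 + n2 = C.
  Pb1 = sum(term(n1, n2) for (n1, n2) in S if n1 == K or n1 + n2 == C) / G
  Pb2 = sum(term(n1, n2) for (n1, n2) in S if n1 + n2 == C) / G
  print(f"Pb1 = {Pb1:.4f}, Pb2 = {Pb2:.4f}")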

3-107 3-108

[Embedded excerpt (page residue): a page from an IEEE Transactions on
Vehicular Technology paper (vol. 51, no. 2, March 2002) on call admission
control in wireless mobile networks. Fig. 2 gives the transition diagram of
the "new call bounding scheme"; detailed balance and a normalization
equation yield the new-call and handoff-call blocking probabilities, and the
text argues the scheme handles bursty new-call arrivals well. The following
section introduces the "cutoff priority scheme", which accepts a new call
only while the total number of busy channels is below a threshold.]

Network of queues

• Open queueing networks
• Closed queueing networks
Three things are needed:
• Kleinrock's independence assumption
  – Each link works as an M/M/1 queue
• Burke's theorem
  – The output process of an M/M/1 queue is a Poisson process with mean
    rate λ
• Time-reversibility

Networks of queues: Two queues in tandem (BG, p.210)

• Assume that service time is proportional to the packet length
  [Figure: Queue 1 feeding Queue 2; Queue 2 is empty while a long packet
  is in service at Queue 1, and arrivals at Queue 2 get bursty over time]
• Interarrival times at the second queue are strongly correlated with the
  packet length at the first queue, i.e., with its service time!
• The first queue is an M/M/1, but the second queue cannot be considered
  as an M/M/1

3-109 3-110

Kleinrock's Independence Approximation I

In real networks, many queues interact with each other
– a traffic stream departing from one or more queues enters one or more
  other queues, even after merging with other streams departing from yet
  other queues
• Packet interarrival times are correlated with packet lengths.
• Service times at various queues are not independent, e.g.,
  state-dependent flow control.
• Kleinrock's independence approximation: the M/M/1 queueing model works
  for each link:
  – sufficient mixing of several packet streams on a transmission line
    makes interarrival times and packet lengths independent
• Good approximation when:
  * Poisson arrivals at entry points of the network
  * Packet transmission times 'nearly' exponential
  * Several packet streams merged on each link
  * Densely connected network and moderate to heavy traffic load

Kleinrock's Independence Approximation II

Suppose several packet streams, each following a unique path through the
network: appropriate for virtual circuit networks, e.g., ATM
• x_s: arrival rate of packet stream s
• f_ij(s): the fraction of the packets of stream s through link (i, j)
• Total arrival rate at link (i, j):
  λ_ij = Σ_{all packet streams s crossing link (i,j)} f_ij(s) x_s

3-111 3-112

Kleinrock's Independence Approximation III

Based on M/M/1 (with Kleinrock's independence approximation), the number
of packets in queue or service at (i, j) on average is
  N_ij = λ_ij / (μ_ij − λ_ij)
– 1/μ_ij is the average packet transmission time on link (i, j)
• The average number of packets over all queues and the average delay per
  packet are
  N = Σ_{(i,j)} N_ij  and  T = (1/γ) Σ_{(i,j)} N_ij
  – γ = Σ_s x_s: total arrival rate in the system
• As a generalization with processing & propagation delay d_ij at (i, j):
  T = (1/γ) Σ_{(i,j)} [λ_ij/(μ_ij − λ_ij) + λ_ij d_ij]
• The average delay per packet of a traffic stream traversing a path p:
  T_p = Σ_{(i,j) on path p} [λ_ij/(μ_ij(μ_ij − λ_ij)) + 1/μ_ij + d_ij]
  where the three terms represent the average waiting time in queue, the
  average transmission time, and the processing and propagation delay.
• If packet lengths are not nearly exponential, one may keep the
  independence approximation between queues but use the P-K formula for
  the average number in system in place of the M/M/1 formula.

Kleinrock's Independence Approximation IV

In datagram networks including multiple-path routing for some
origin-destination pairs, the M/M/1 approximation often fails
• Node A sends Poisson traffic of rate λ to node B along two links, each
  with service rate μ (BG, Fig. 3.29)
  – Random splitting: each link behaves like an M/M/1 queue with arrival
    rate λ/2:
    T_R = 1/(μ − λ/2)
  – Metering: arriving packets are assigned to the queue with the
    smallest backlog → the whole system is approximated as an M/M/2 with
    a common queue:
    T_M = 2/[(2μ − λ)(1 + ρ)] < T_R,  with ρ = λ/(2μ)
  * Metering destroys the M/M/1 approximation

3-113 3-114
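A small sketch of the network-wide delay formula above. The link flows,
service rates and delays are illustrative assumptions, not values from the
slides:

  # Sketch: average delay under the M/M/1 (Kleinrock) approximation,
  # T = (1/gamma) * sum over links of [lam/(mu - lam) + lam * d].
  links = {                       # (i, j): (lambda_ij, mu_ij, d_ij)
      ("A", "B"): (300.0, 350.0, 0.0),
      ("A", "C"): (100.0, 350.0, 0.0),
      ("B", "C"): (250.0, 350.0, 0.0),
  }
  gamma = 650.0                   # total external arrival rate (assumed)

  T = sum(lam / (mu - lam) + lam * d for (lam, mu, d) in links.values()) / gamma
  print(f"average delay per packet T = {T:.4f} s")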

Burke's theorem I

For M/M/1, M/M/c, M/M/∞ with arrival rate λ (without bulk arrivals and
service):
B1. The departure process is Poisson with rate λ.
B2. At each time t, the number of jobs in the system at time t is
    independent of the sequence of departure times prior to time t.
[Figure: forward process and reverse process; arrivals of the forward
process are the departures of the reverse process, and vice versa]
• Because M/M/1 is time-reversible, the reverse process is statistically
  identical to the forward process.
  – The departure process in the forward process is the arrival process
    in the backward process
  – The departures in forward time form a Poisson process, which is the
    arrivals in backward time

Burke's theorem II

• The sequence of departure times prior to time t in the forward process
  is exactly the sequence of arrival times after time t in the reverse
  process.
• Since the arrival process in the reverse process is an independent
  Poisson process, the future arrival process does not depend on the
  current number in the system.
• The past departure process in the forward process (which is the future
  arrival process in the reverse process) therefore does not depend on
  the current number in the system.

3-115 3-116

Two M/M/1 Queues in Tandem

The service times of a customer at the first and the second queues are
mutually independent as well as independent of the arrival process.
[Figure: Queue 1 feeding Queue 2]
• Based on Burke's theorem B1, queue 2 in isolation is an M/M/1:
  Pr[m at queue 2] = ρ_2^m (1 − ρ_2)
• B2: the number of customers presently in queue 1 is independent of the
  sequence of departure times prior to t (the earlier arrivals at
  queue 2), hence independent of the number presently in queue 2:
  Pr[n at queue 1 and m at queue 2] = Pr[n at queue 1] Pr[m at queue 2]
                                    = ρ_1^n (1 − ρ_1) ρ_2^m (1 − ρ_2)

Open queueing networks

Consider a network of K first-come first-served, single-server queues,
each of which has unlimited queue size and exponentially distributed
service with rate μ_k.
[Figure: external arrivals entering the network and internal routing paths]
• Traffic equation with routing probabilities P = [p_ij]:
  λ_i = α_i + Σ_{j=1}^K λ_j p_ji,  with Σ_{i=0}^K p_ji = 1 for each j
  – p_i0: flow going to the outside
  – λ_i can be uniquely determined by solving
    λ = α + λP ⇒ λ = α(I − P)^{-1}

3-117 3-118
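The traffic equation is a small linear system. A minimal sketch with numpy
(the routing matrix P and external rates α below are illustrative
assumptions):

  # Sketch: solving the traffic equations lambda = alpha (I - P)^{-1}.
  import numpy as np

  P = np.array([[0.0, 0.7],      # p_ij: internal routing probabilities;
                [0.3, 0.0]])     # the row deficit from 1 is p_i0 (exit)
  alpha = np.array([1.0, 0.5])   # external arrival rates (assumed)

  lam = alpha @ np.linalg.inv(np.eye(2) - P)   # row-vector convention
  print(lam)                                   # total arrival rate per queue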

Open queueing networks II

Let n = (n_1, ..., n_K) denote a state (row) vector of the network.
• The limiting queue length distribution π(n):
  π(n) = lim_{t→∞} Pr[X_1(t) = n_1, ..., X_K(t) = n_K]
• Global balance equation (GBE): total rate out of n = total rate into n:
  (α + Σ_{i=1}^K μ_i) π(n) = Σ_{i=1}^K α_i π(n − e_i)            (external arrivals)
                           + Σ_{i=1}^K p_i0 μ_i π(n + e_i)       (go outside from i)
                           + Σ_{j=1}^K Σ_{i=1}^K p_ji μ_j π(n + e_j − e_i)   (from j to i)
  – e_i = (0, ..., 1, ..., 0), i.e., the 1 is in the i-th position
  – π(n − e_i) denotes π(n_1, n_2, ..., n_i − 1, ..., n_K)

Jackson's theorem I

Using time-reversibility, guess detailed balance equations (DBEs) as
  λ_i π(n − e_i) = μ_i π(n),   λ_i π(n) = μ_i π(n + e_i)
and
  λ_j π(n − e_i) = μ_j π(n + e_j − e_i)
Substituting the DBEs into the GBE gives us
  RHS = π(n) [Σ_{i=1}^K α_i μ_i/λ_i + Σ_{i=1}^K p_i0 λ_i + Σ_{i=1}^K (μ_i/λ_i) Σ_{j=1}^K p_ji λ_j]
      = π(n) [Σ_{i=1}^K p_i0 λ_i + Σ_{i=1}^K μ_i (α_i + Σ_{j=1}^K p_ji λ_j)/λ_i]
      = π(n) (α + Σ_{i=1}^K μ_i)
  – in the numerator: λ_i = α_i + Σ_{j=1}^K p_ji λ_j (the traffic
    equation), and α = Σ_{i=1}^K p_i0 λ_i

3-119 3-120

Jackson's theorem II

From the DBEs, we have
  π(n_1, ..., n_i, ..., n_K) = (λ_i/μ_i) π(n_1, ..., n_i − 1, ..., n_K)
and
  π(n_1, ..., n_i − 1, ..., n_K) = (λ_i/μ_i) π(n_1, ..., n_i − 2, ..., n_K)
which is finally rearranged as
  π(n_1, ..., n_i, ..., n_K) = (λ_i/μ_i)^{n_i} π(n_1, ..., 0, ..., n_K)
Repeating for i = 1, 2, ..., K,
  π(n) = π(0) Π_{i=1}^K (λ_i/μ_i)^{n_i}
  – π(0) = [Π_{i=1}^K Σ_{n_i=0}^∞ ρ_i^{n_i}]^{-1} and ρ_i = λ_i/μ_i

Summary of time-reversibility: CTMC

The forward CTMC has transition rates γ_ij.
The reversed chain is a CTMC with transition rates γ*_ij = φ_j γ_ji / φ_i.
If we find positive numbers φ_i, summing to unity and such that the scalars
γ*_ij satisfy Σ_{j=0}^∞ γ_ij = Σ_{j=0}^∞ γ*_ij for all i ≥ 0, then φ_i is
the stationary distribution of both the forward and reverse chains.
– Proof:
  Σ_{j≠i} φ_j γ_ji = Σ_{j≠i} φ_i γ*_ij = φ_i Σ_{j≠i} γ*_ij = φ_i Σ_{j≠i} γ_ij :
  global balance equation

3-121 3-122

Jackson's theorem: proof of DBEs I

Proving the DBEs based on time-reversibility
• Construct a routing matrix, P* = [p*_ij], of the reversed process
• The rate from node i to j must be the same in the forward and reverse
  direction:
  λ_i p_ij = λ_j p*_ji
  – λ_j p*_ji: the output rate from server j is λ_j, and p*_ji is the
    probability of moving from j to i; α_i = λ_i p*_i0, i.e., p*_i0 = α_i/λ_i
• We need to show (recall θ_i γ_ij = θ_j γ*_ji)
  π(n) v_{n,m} = π(m) v*_{m,n}  and  Σ_m v_{n,m} = Σ_m v*_{n,m}
  – v_{n,m} and v*_{n,m} denote the state transition rates of the forward
    and reversed process

Jackson's theorem: proof of DBEs II

We need to consider the following three cases:
• Arrival to server i from outside the network in the forward process
  corresponds to a departure out of the network from server i in the
  reversed process:
  π(n) v_{n,n+e_i} = π(n + e_i) v*_{n+e_i,n}
• Departure to the outside in the forward process corresponds to arrival
  from the outside in the reversed process:
  π(n) v_{n,n−e_i} = π(n − e_i) v*_{n−e_i,n}
• Leaving queue i and joining queue j in the forward process
  (v_{n,n−e_i+e_j} = μ_i p_ij) corresponds to leaving queue j and joining
  queue i in the reversed process (v*_{n−e_i+e_j,n} = μ_j p*_ji = λ_i p_ij μ_j/λ_j):
  π(n) v_{n,n−e_i+e_j} = π(n − e_i + e_j) v*_{n−e_i+e_j,n}

3-123 3-124

Jackson's theorem: proof of DBEs III

1) π(n) v_{n,n+e_i} = π(n + e_i) v*_{n+e_i,n}:
  v*_{n+e_i,n} = μ_i (1 − Σ_{j=1}^K p*_ij)
               = μ_i (1 − Σ_{j=1}^K λ_j p_ji/λ_i)      (use p*_ij = λ_j p_ji/λ_i)
               = (μ_i/λ_i)(λ_i − Σ_{j=1}^K λ_j p_ji)
               = α_i μ_i/λ_i = α_i/ρ_i                 (use λ_i = α_i + Σ_j λ_j p_ji)
Substituting this into 1), with v_{n,n+e_i} = α_i (arrival to server i
from outside),
  π_i(n_i) α_i Π_{j≠i} π_j(n_j) = π_i(n_i + 1)(α_i/ρ_i) Π_{j≠i} π_j(n_j)

Jackson's theorem: proof of DBEs IV

Rearranging the previous eqn. and canceling, we have
  π_i(n_i + 1) = ρ_i π_i(n_i) ⇒ π_i(n) = ρ_i^n (1 − ρ_i)
2) π(n) v_{n,n−e_i} = π(n − e_i) v*_{n−e_i,n}: departure to the outside in
the forward process corresponds to arrival from the outside in the
reversed process,
  v*_{n−e_i,n} = α*_i = λ_i − Σ_{j=1}^K λ_j p*_ji = λ_i − Σ_{j=1}^K λ_j (λ_i p_ij/λ_j)
               = λ_i (1 − Σ_{j=1}^K p_ij) = λ_i p_i0
  (traffic equation for the reversed process)

3-125 3-126

Jackson's theorem: proof of DBEs V

Substituting this with v_{n,n−e_i} = μ_i p_i0 (departure to the outside),
  (1 − ρ_i)ρ_i^{n_i} Π_{k≠i} π_k(n_k) μ_i p_i0
    = (1 − ρ_i)ρ_i^{n_i−1} Π_{k≠i} π_k(n_k) λ_i p_i0
which holds since ρ_i = λ_i/μ_i.
3) π(n) v_{n,n−e_i+e_j} = π(n − e_i + e_j) v*_{n−e_i+e_j,n}: leaving queue i
and joining queue j in the forward process (v_{n,n−e_i+e_j} = μ_i p_ij)
corresponds to leaving queue j and joining queue i in the reversed
process, i.e., v*_{n−e_i+e_j,n} = μ_j p*_ji = λ_i p_ij μ_j/λ_j:
  (1 − ρ_i)ρ_i^{n_i} (1 − ρ_j)ρ_j^{n_j} Π_{k≠i,j} π_k(n_k) μ_i p_ij
    = (1 − ρ_i)ρ_i^{n_i−1} (1 − ρ_j)ρ_j^{n_j+1} Π_{k≠i,j} π_k(n_k) μ_j p*_ji

Jackson's theorem: proof of DBEs VI

Summary of transition rates of the forward and reverse processes:

  Transition        | Forward v_{n,m} | Reverse v*_{n,m}         | Comment
  n → n + e_i       | α_i             | λ_i(1 − Σ_{j=1}^K p_ij)  | all i
  n → n − e_i       | μ_i(1 − Σ_{j=1}^K p_ij) | α_i μ_i/λ_i      | all i: n_i > 0
  n → n − e_i + e_j | μ_i p_ij        | λ_j p_ji μ_i/λ_i         | all i: n_i > 0, all j

4) Finally, we verify the total rate equation, Σ_m v_{n,m} = Σ_m v*_{n,m}:
  Σ_m v*_{n,m} = Σ_i λ_i (1 − Σ_j p_ij) + Σ_{i:n_i>0} [α_i μ_i/λ_i + Σ_j λ_j p_ji μ_i/λ_i]
               = Σ_i λ_i − Σ_i Σ_j λ_i p_ij + Σ_{i:n_i>0} (μ_i/λ_i)(α_i + λ_i − α_i)
               = Σ_i α_i + Σ_{i:n_i>0} μ_i = Σ_m v_{n,m}.  ∎
  (use λ_j = α_j + Σ_i λ_i p_ij)

3-127 3-128

Open queueing networks: Extension I

The product-form solution of Jackson's theorem is valid for the following
networks of queues
• State-dependent service rate
  – 1/μ_i(n_i): the mean of queue i's exponentially distributed service
    time, when n_i is the number of customers in the i-th queue just
    before the customer's departure
    ρ_i(n_i) = λ_i / μ_i(n_i),  i = 1, ..., K,  n_i = 1, 2, ...
  – λ_i: total arrival rate at queue i determined by the traffic eqn.
  – Define P̂_j(n_j) as
    P̂_j(n_j) = 1  if n_j = 0,  and  P̂_j(n_j) = ρ_j(1)ρ_j(2) ··· ρ_j(n_j)  if n_j > 0
  – For all states n = (n_1, ..., n_K),
    P(n) = P̂_1(n_1) P̂_2(n_2) ··· P̂_K(n_K) / G,
    where G = Σ_{n_1=0}^∞ ··· Σ_{n_K=0}^∞ P̂_1(n_1) ··· P̂_K(n_K)

Open queueing networks: Extension II

• Multiple classes of customers
  – Provided that the service time distribution at each queue is the same
    for all customer classes, the product-form solution is valid for the
    system with different classes of customers, i.e.,
    λ_j(c) = α_j(c) + Σ_{i=1}^K λ_i(c) p_ij(c)
  – α_j(c): rate of the external arrivals of class c at queue j; p_ij(c):
    the routing probabilities of class c
  – See pp. 230-231 in the textbook for more details

3-129 3-130

Open queueing networks: Performance measure

• The state probability distribution has been derived
• Mean number of hops traversed:
  h̄ = λ/α = Σ_{i=1}^K λ_i / Σ_{i=1}^K α_i
• Throughput of queue i: λ_i
• Total throughput of the queueing network: α
• Mean number of customers at queue i (ρ_i = λ_i/μ_i):
  N̄_i = ρ_i / (1 − ρ_i)
• System response time T:
  T = N̄/α = (1/α) Σ_{i=1}^K N̄_i = (1/α) Σ_{i=1}^K λ_i T_i
    = (1/α) Σ_{i=1}^K λ_i/(μ_i − λ_i)

Open queueing networks: example A-I

New programs arrive at a CPU according to a Poisson process of rate α. A
program spends an exponentially distributed execution time of mean 1/μ_1
in the CPU. At the end of this service time, the program execution is
complete with probability p, or it requires retrieving additional
information from secondary storage with probability 1 − p. Suppose that
the retrieval of information from secondary storage requires an
exponentially distributed amount of time with mean 1/μ_2. Find the mean
time that each program spends in the system.

3-131 3-132


Open queueing networks: example A-II

• Arrival rates into each queue (traffic equations):
  λ_1 = α + λ_2 and λ_2 = (1 − p)λ_1 ⇒ λ_1 = α/p and λ_2 = (1 − p)α/p
• Each queue behaves like an M/M/1 system, so
  E[N_1] = ρ_1/(1 − ρ_1) and E[N_2] = ρ_2/(1 − ρ_2),
  where ρ_1 = λ_1/μ_1 and ρ_2 = λ_2/μ_2
• Using Little's result, the total time spent in the system:
  E[T] = E[N_1 + N_2]/α = (1/α) [ρ_1/(1 − ρ_1) + ρ_2/(1 − ρ_2)]

Open queueing networks: example B-I

Consider the following network with three routers
[Figure: routers A, B, C with links L1: A→B, L2: A→C, L3: B→C, L4: C→A]
• External packet arrivals: Poisson processes with γ_A = 350, γ_B = 150,
  γ_C = 150 (packets/sec)
• Packet length: exponentially distributed with mean 50 kbits/packet
Assumptions:
(a) Packets moving along a path from source to destination have their
    lengths selected independently at each outgoing link
    → Kleinrock's independence assumption
(b) Channel capacity of link i: C_i = 17.5 Mbps for i = 1, 2, 3, 4
    → Service rate at link i: exponentially distributed with rate
      μ_i = C_i/50000 = 350 packets/sec

3-133 3-134
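A worked sketch of example A-II in Python; α, p, μ_1, μ_2 are illustrative
assumptions chosen only to make the computation concrete:

  # Sketch: CPU (queue 1) + secondary storage (queue 2) open network.
  alpha, p, mu1, mu2 = 1.0, 0.5, 4.0, 3.0   # assumed parameters

  lam1 = alpha / p                  # lambda_1 = alpha + lambda_2
  lam2 = (1 - p) * alpha / p        # lambda_2 = (1 - p) * lambda_1
  rho1, rho2 = lam1 / mu1, lam2 / mu2
  assert rho1 < 1 and rho2 < 1      # stability

  # Little's result on the whole network: E[T] = E[N1 + N2] / alpha
  ET = (rho1 / (1 - rho1) + rho2 / (1 - rho2)) / alpha
  print(f"mean time in system E[T] = {ET:.4f}")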

Open queueing networks: example B-II

• Traffic matrix (packets per second):

  from → to |  A  |  B  |  C
  A         |  –  | 150 | 200 (50% through B, 50% directly to C)
  B         |  50 |  –  | 100
  C         | 100 |  50 |  –

• Find the mean delay from A to C. First, we need to know the link traffic:

  traffic type |    L1     |    L2     |    L3     |    L4
  A → B        |    150    |           |           |
  A → C        |    100    |    100    |    100    |
  B → A        |           |           |     50    |     50
  B → C        |           |           |    100    |
  C → A        |           |           |           |    100
  C → B        |     50    |           |           |     50
  total        | λ_1 = 300 | λ_2 = 100 | λ_3 = 250 | λ_4 = 200

Open queueing networks: example B-II (cont'd)

• Since α = 650 and λ = 850, the mean number of hops is
  h̄ = 850/650 = 1.3077
• We get the link utilization, mean number and response time as

           |       L1        |       L2        |       L3        |       L4
  ρ_i      | 300/350 = 0.857 | 100/350 = 0.286 | 250/350 = 0.714 | 200/350 = 0.571
  N̄_i      | 300/50 = 6      | 100/250 = 0.4   | 250/100 = 2.5   | 200/150 = 1.33
  T_i (s)  | 1/50 = 0.02     | 1/250 = 0.004   | 1/100 = 0.01    | 1/150 = 0.0067

  – N̄_i = ρ_i/(1 − ρ_i) = λ_i/(μ_i − λ_i) and T_i = N̄_i/λ_i = 1/(μ_i − λ_i)
• Mean delay from A to C (half via B over L1 and L3, half direct over L2):
  T̄_AC = (T_1 + T_3) × 0.5 + T_2 × 0.5 = 0.017 (sec)
  – propagation delay is ignored
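The per-link table and the A-to-C delay can be reproduced with a few lines
of Python (link flows follow the traffic-matrix bookkeeping above):

  # Sketch of example B: per-link M/M/1 measures and the A-to-C delay.
  mu = 350.0                                        # packets/sec per link
  lam = {"L1": 300.0, "L2": 100.0, "L3": 250.0, "L4": 200.0}

  T = {k: 1.0 / (mu - v) for k, v in lam.items()}   # T_i = 1/(mu - lambda_i)
  for k in sorted(lam):
      print(f"{k}: rho = {lam[k]/mu:.3f}, "
            f"N = {lam[k]/(mu - lam[k]):.2f}, T = {T[k]:.4f} s")

  # Half of the A->C traffic goes A->B->C (L1, L3), half directly via L2.
  T_AC = 0.5 * (T["L1"] + T["L3"]) + 0.5 * T["L2"]
  print(f"T_AC = {T_AC:.4f} s")                     # = 0.017 s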

3-135 3-136

Closed queueing networks I

Consider a network of K first-come first-served, single-server queues,
each of which has unlimited queue size and exponentially distributed
service with rate μ_k. There is also a fixed number of customers, say M,
circulating endlessly in the closed network of queues.
• Traffic eqn.: no external arrivals!
  λ_i = Σ_{j=1}^K λ_j p_ji,  with Σ_{i=1}^K p_ji = 1 for each j

Closed queueing networks II

Using π = πP and π · 1 = 1, we have
  λ_i = λ(M) π_i
– λ(M): a constant of proportionality, the sum of the arrival rates in
  all the queues in the network; note that Σ_{i=1}^K λ_i ≠ 1 in general
– G(M) (the normalization constant) will take care of λ(M)
With ρ_i(n_i) = λ_i/μ_i(n_i) for i = 1, ..., K and all n_i ≥ 0, define
  P̂_j(n_j) = 1  if n_j = 0,  and  P̂_j(n_j) = ρ_j(1)ρ_j(2) ··· ρ_j(n_j)  if n_j > 0
The joint state probability is expressed as
  π(n) = (1/G(M)) Π_{i=1}^K P̂_i(n_i),  and
  G(M) = Σ_{n_1+···+n_K=M} Π_{i=1}^K P̂_i(n_i)

3-137 3-138

Closed queueing networks III

• ρ_i: no longer the actual utilization, due to λ(M); it is a relative
  utilization
• Setting λ(M) to any particular value does not change the results
• The maximum queue size of each queue is M
Proof: as in Jackson's theorem for open queueing networks
• Use time-reversibility:
  – routing matrix of the reversed process: p*_ji = λ_i p_ij / λ_j
• For a state transition between n and n' = n − e_i + e_j:
  π(n') v*_{n',n} = π(n) v_{n,n'}   (*)
• As in open queueing networks, we have for n_i > 0
  v*_{n−e_i+e_j,n} = μ_j p*_ji = μ_j (λ_i p_ij / λ_j)   (1)
  v_{n,n−e_i+e_j} = μ_i p_ij                            (2)
  (leaving queue i and joining queue j in the forward process corresponds
  to leaving queue j and joining queue i in the reversed process)

Closed queueing networks IV

The GBE of open queueing networks is reduced to
  Σ_{i:n_i>0} μ_i π(n) = Σ_{i:n_i>0} Σ_{j=1}^K p_ji μ_j π(n − e_i + e_j)
• Substituting (1) and (2) into (*), we have
  ρ_i π(n_1, ..., n_i − 1, ..., n_j + 1, ..., n_K) = ρ_j π(n_1, ..., n_K)
• The proof of the total-rate equation below is given on page 235 (BG):
  Σ_{n'} v_{n,n'} = Σ_{n'} v*_{n,n'}

3-139 3-140

Summary of time-reversibility: CTMC (recap of slide 3-122)

The forward CTMC has transition rates γ_ij; the reversed chain is a CTMC
with transition rates γ*_ij = φ_j γ_ji / φ_i. If positive numbers φ_i,
summing to unity, satisfy Σ_j γ_ij = Σ_j γ*_ij for all i, then φ_i is the
stationary distribution of both chains (global balance equation).

Closed queueing networks V

• μ_i(n): the service rate when queue i has n customers
• v_{n,n−e_i+e_j} = μ_i(n_i) p_ij (leaving queue i and joining queue j);
  total forward rate out of n:
  Σ_{(j,i): n_i>0} μ_i(n_i) p_ij = Σ_{i:n_i>0} μ_i(n_i) Σ_{j=1}^K p_ij
                                 = Σ_{i:n_i>0} μ_i(n_i)
• v*_{n,n−e_i+e_j} = μ_i(n) p*_ij = μ_i(n) λ_j p_ji / λ_i, using
  p*_ij = λ_j p_ji/λ_i; total reverse rate out of n:
  Σ_{(j,i): n_i>0} μ_i(n_i) λ_j p_ji / λ_i = Σ_{i:n_i>0} (μ_i(n_i)/λ_i) Σ_{j=1}^K λ_j p_ji
                                           = Σ_{i:n_i>0} μ_i(n_i)
  (use λ_i = Σ_j λ_j p_ji): the total rates agree, giving the global
  balance equation

3-141 3-142

Closed queueing networks VI

Dealing with G(M) = Σ_{n_1+···+n_K=M} Π_{i=1}^K P̂_i(n_i), with
P̂_i(n_i) = ρ_i^{n_i}: compute G(M, K), with M customers and K queues,
iteratively:
  G(m, k) = G(m, k − 1) + ρ_k G(m − 1, k)
with boundary conditions G(m, 1) = ρ_1^m for m = 0, 1, ..., M, and
G(0, k) = 1 for k = 1, 2, ..., K.
• For m > 0 and k > 1, split the sum into two disjoint sums:
  G(m, k) = Σ_{n_1+···+n_k=m} ρ_1^{n_1} ρ_2^{n_2} ··· ρ_k^{n_k}
          = Σ_{n_1+···+n_k=m, n_k=0} ρ_1^{n_1} ··· ρ_{k−1}^{n_{k−1}}
            + Σ_{n_1+···+n_k=m, n_k>0} ρ_1^{n_1} ··· ρ_k^{n_k}
  where the first sum is G(m, k − 1).
• Since n_k > 0 in the second sum, change n_k = n'_k + 1 for n'_k ≥ 0:
  Σ_{n_1+···+n'_k+1=m, n'_k≥0} ρ_1^{n_1} ··· ρ_k^{n'_k+1}
    = ρ_k Σ_{n_1+···+n'_k=m−1, n'_k≥0} ρ_1^{n_1} ··· ρ_k^{n'_k}
    = ρ_k G(m − 1, k)

Closed queueing networks VII

In a closed Jackson network with M customers, the probability that, at
steady state, the number of customers in station j is greater than or
equal to m is
  Pr[x_j ≥ m] = ρ_j^m G(M − m)/G(M)  for 0 ≤ m ≤ M
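The recursion above is the classic convolution (Buzen) algorithm. A minimal
Python sketch for single-server queues; the ρ values and M are illustrative
assumptions:

  # Sketch: G(m, k) = G(m, k-1) + rho_k * G(m-1, k), computed in place.
  def buzen(rho, M):
      """Return [G(0,K), ..., G(M,K)] for single-server queues."""
      G = [rho[0] ** m for m in range(M + 1)]    # G(m, 1) = rho_1^m; G(0,1) = 1
      for r in rho[1:]:
          for m in range(1, M + 1):              # G(0, k) = 1 is preserved
              G[m] += r * G[m - 1]               # G(m-1) already holds G(m-1, k)
      return G

  rho, M = [0.5, 0.8, 0.6], 4                    # relative utilizations, customers
  G = buzen(rho, M)

  # Tail probabilities and mean queue lengths from the slide's formulas:
  for j, rj in enumerate(rho):
      Nj = sum(rj ** m * G[M - m] / G[M] for m in range(1, M + 1))
      print(f"queue {j}: Pr[x >= 1] = {rj * G[M - 1] / G[M]:.4f}, mean N = {Nj:.4f}")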

3-143 3-144

Closed queueing networks VIII

Implementation of G(M, K) when μ_i is not a function of m:

  m\k |   1    |              2               | ··· |    K
   0  |   1    |              1               | ··· |    1
   1  |  ρ_1   | G(1,2) = G(1,1) + ρ_2 G(0,2) | ··· |
   2  |  ρ_1²  | G(2,2) = ρ_1² + ρ_2 G(1,2)   | ··· |
   ⋮  |   ⋮    |              ⋮               |     |    ⋮
   M  |  ρ_1^M |                              | ··· | G(M, K)

If μ_i(m) = mμ_i (multiple servers), we generalize:
  G(m, k) = Σ_{i=0}^m f_k(i) G(m − i, k − 1),
  f_k(i) = (λ(M)π_k)^i / Π_{j=1}^i μ_k(j),  with f_k(0) = 1 for all k

Closed queueing networks IX

Proof (of the tail formula): substitute n_j = n'_j + m for n'_j ≥ 0:
  Pr[x_j ≥ m] = Σ_{n_1+···+n_j+···+n_K=M, n_j≥m} ρ_1^{n_1} ··· ρ_j^{n_j} ··· ρ_K^{n_K} / G(M)
              = (ρ_j^m / G(M)) Σ_{n_1+···+n'_j+···+n_K=M−m, n'_j≥0} ρ_1^{n_1} ··· ρ_j^{n'_j} ··· ρ_K^{n_K}
              = ρ_j^m G(M − m)/G(M)
• Pr[x_j = m] = Pr[x_j ≥ m] − Pr[x_j ≥ m + 1]
             = ρ_j^m (G(M − m) − ρ_j G(M − m − 1))/G(M)

3-145 3-146

Closed queueing networks X

• In a closed Jackson network with M customers, the average number of
  customers at queue j:
  N̄_j(M) = Σ_{m=1}^M Pr[x_j ≥ m] = Σ_{m=1}^M ρ_j^m G(M − m)/G(M)
• In a closed Jackson network with M customers, the average throughput of
  queue j:
  γ_j(M) = μ_j Pr[x_j ≥ 1] = μ_j ρ_j G(M − 1)/G(M) = λ_j G(M − 1)/G(M)
  – Average throughput is the average rate at which customers are
    serviced in the queue. For a single-server queue, the service rate is
    μ_j when there are one or more customers in the queue, and 0 when the
    queue is empty.

Closed queueing networks: example A-I

Suppose that the computer system given in the open queueing network
example is now operated so that there are always I programs in the
system. Note that the feedback loop around the CPU signifies the
completion of one job and its instantaneous replacement by another one.
Find the steady-state pmf of the system. Find the rate at which programs
are completed.
• Using λ_i = λ(I)π_i with π = πP:
  π_1 = pπ_1 + π_2,  π_2 = (1 − p)π_1  and  π_1 + π_2 = 1
we have
  λ_1 = λ(I)π_1 = λ(I)/(2 − p)  and  λ_2 = λ(I)π_2 = λ(I)(1 − p)/(2 − p)

3-147 3-148

For 0 ≤ i ≤ I , ρ = λ /µ and ρ = λ /µ • 1 1 1 2 2 2 Theorem: In a closed Jackson network with M customers, the (1 − ρ )ρi (1 − ρ )ρI−i occupancy distribution seen by a customer upon arrival at queue j is Pr[N = i, N = I − i] = 1 1 2 2 1 2 S(I ) the same as the occupancy distribution in a closed network with the arriving customer removed, i.e., the system with M − 1 customers The normalization constant, S(I ), is obtained by • In a closed network with M customers, the expected number of I I+1 • X 1 − (ρ1/ρ2) customers found upon arrival by a customer at queue j is equal to S(I ) = (1−ρ )(1−ρ ) ρi ρI−i = (1−ρ )(1−ρ )ρI 1 2 1 2 1 2 2 1 − (ρ /ρ ) the average number of customers at queue j, when the total i=0 1 2 number of customers in the closed network is M − 1 We then have for 0 ≤ i ≤ I • An arriving customer sees the system at a state that does not 1 − β • Pr[N = i, N = I − i] = βi include itself 1 2 1 − βI+1 Proof: where β = ρ1/ρ2 = µ2/((1 − p)µ1) X(t) = [X1(t), X2(t),..., XK (t))]: state of the network at time t Program completion rate: pλ1 • • Tij(t): probability that a customer moves from queue i to j at I I+1 • + λ1/µ1 = 1 − Pr[N1 = 0] = β(1 − β )/(1 − β ) time t

3-149 3-150

Arrival theorem for closed networks II I

For any state n with n > 0, the conditional probability that a • i Performance measure for closed networks with M customers customer moving from node i to j finds the network at state n N (M ): average number of customers in queue j • j Pr[X(t) = n, Tij(t)] T (M ): average time a customer spends (per visit) in queue j αij(n) = Pr[X(t) = n|Tij(t)] = • j Pr[Tij(t)] γ (M ): average throughput of queue j • j Pr[Tij(t)|X(t) = n] Pr[X(t) = n] = P Mean-Value Analysis: Calculates Nj(M ) and Tj(M ) directly, without Pr[Tij(t)|X(t) = m] Pr[X(t) = m] m,mi >0 first computing G(M ) or deriving the stationary distribution of the π(n)µ p ρn1 ··· ρni ··· ρnK = i ij = 1 i K network P P m1 mi mK π(m)µipij ρ ··· ρ ··· ρ m,mi >0 m,mi >0 1 i K a) The queue length observed by an arriving customer is the same as 0 0 – Changing mi = mi + 1, mi ≥ 0, the queue length in a closed network with one less customer n1 ni nK b) Little’s result is applicable throughout the network ρ1 ··· ρi ··· ρK αij(n) = 0 P m1 mi +1 mK 1. Based on a) 0 ρ ··· ρ ··· ρ m1+···+mi +1+···+mK =M, 1 i K 0 mi +1>0 1 n1 ni −1 nK n1 ni −1 nK Tj(s) = (1 + Nj(s − 1)) for j = 1,..., K, s = 1,..., M ρ1 ··· ρi ··· ρK ρ1 ··· ρi ··· ρK µj = 0 = P m1 mi mK 0 ρ ··· ρ ··· ρ G(M − 1) m1+···+mi 1 i K – Tj(0) = Nj(0) = 0 for j = 1,..., K 0 +···+mK =M−1,mi ≥0 3-151 3-152 Mean Value Analysis II Closed queueing networks: example B

2. Based on b), we first have when there are s customers in the Gupta’s truck company owns m trucks: Gupta is interested in the network probability that 90% of his trucks are in operation N (s) = λ (s)T (s) = λ(s)π T (s) (1) Set a routing matrix P: j j j j j • | {z } step 2-b Op LM M and Op  0 0.85 0.15 P = K K LM 0.9 0 0.1  X X s   s = N (s) = λ(s) π T (s) → λ(s) = (2) Local maintenance j j j PK M 1 0 0 j=1 j=1 j=1 πjTj(s) | {z } With π0 = 0.4796, step 2-a • π1 = 0.4077, and Combining (1) and (2) yields Manufacturer π2 = 0.1127, we have ρ = λ(m)π /λ , λ(s)π T (s) 0 0 0 N (s) = s j j j PK ρ1 = λ(m)π1/µ1, and j=1 πjTj(s) ρ2 = λ(m)π2/µ2 This will be iteratively done for s = 0, 1,..., M We have Pr[O = i, L = j, M = k] = 1 ρi ρj ρk/G(m) and • i! 0 1 2 3-153 k = m − i − j 3-154

Where are we? M/G/1 queue: Embedded MC I

Elementary queueing models Recall that a continuous-time MC is described by (n, r): – M/M/1, M/M/C, M/M/C/C/K, ... and bulk queues – either product-form solutions or use PGF n: number of customers in the system. • r: attained or remaining service time of the customer in service. Intermediate queueing models (product-form solution) • – Time-reversibility of Markov process Due to x, (n, x) is not a countable state space. How can we get rid of x? – Detailed balance equations of time-reversible MCs What if we observe the system at the end of each service? – Multidimensional Birth-death processes

– Network of queues: open- and closed networks Xn+1 = max(Xn − 1, 0) + Yn+1 Advanced queueing models Xn: number of customers in the system left behind by a departure. – M/G/1 type queue: Embedded MC and Mean-value analysis Yn: number of arrivals that occur during the service time of the – M/G/1 with vacations and Priority queues departing customer. More materials on queueing models (omitted) Question: Xn is equal to the queue length seen by an arriving – G/M/m queue, G/G/1, etc. customer (queue length just before arrival)? Recall PASTA. – Algorithmic approaches to get steady-state solutions

3-155 3-156 Distribution Upon Arrival or Departure M/G/1 queue: Embedded MC II

α(t), β(t): number of arrivals and departures (respectively) in (0, t) Defining probability generating function of distribution Xn+1,

U (t): number of times the system goes from n to n + 1 in (0, t); Xn+1 max(Xn −1,0)+Yn+1 max(Xn −1,0) Yn+1 n Qn+1(z) , E[z ] = E[z ] = E[z ]E[z ] number of times an arriving customer finds n customers in the system Let U (z) = E[zYn+1 ], as n → ∞, U (z) = U (z) (independent V (t): number of times that the system goes from n + 1 to n; n+1 n+1 n of n). Then, we have number of times a departing customer leaves n. ∞ the transition n to n+1 cannot reoccur until after the number in the system drops to n once more X k (i.e., until after the transition n +1 to n reoccurs) Qn+1(z) =U (z) z Pr[max(Xn − 1, 0) = k] k=0 ∞ h 0 X k−1 i =U (z) z Pr[Xn = 0] + z Pr[Xn = k] k=1 h −1 i =U (z) Pr[Xn = 0] + z (Qn(z) − Pr[Xn = 0])

Un(t) and Vn(t) differ by at most one: |Un(t) − Vn(t)| ≤ 1. As n → ∞, we have Qn+1(z) = Qn(z) = Q(z), and Pr[Xn = 0] = q0,

Un(t) Vn(t) Un(t) α(t) Vn(t) β(t) U (z)(z − 1) lim = lim ⇒ lim = lim Q(z) = q0. t→∞ t t→∞ t t→∞ α(t) t t→∞ β(t) t z − U (z)

3-157 3-158

M/G/1 queue: Embedded MC III M/G/1 Queue: Embedded MC IV

λx(z−1) We need to find U (z) and q0. Using U (z|xi = x) = e , Let fT (t) be probability density function of Tj, i.e., total delay. Z ∞ ∞ Z ∞ k ∗ X k (λt) −λt ∗ U (z) = U (z|xi = x)b(x)dx = B (λ(1 − z)). Q(z) = z e fT (t)dt = T (λ(1 − z)) 0 k! k=0 0 0 ∗ Since Q(1) = 1, we have q0 = 1 − U (1) = 1 − λ · X = 1 − ρ. where T (s) is the Laplace transform of fT (t). We have Transform version of Pollaczek-Khinchin (P-K) formula is B∗(λ(1 − z))(z − 1) T ∗(λ(1 − z)) = (1 − ρ) z − B∗(λ(1 − z)) B∗(λ(1 − z))(z − 1) Q(z) = (1 − ρ) z − B∗(λ(1 − z)) Let s = λ(1 − z), one gets (1 − ρ)sB∗(s) (1 − ρ)s 0 T ∗(s) = = W ∗(s)B∗(s) ⇒ W ∗(s) = Letting q = Q (1), one gets W = q/λ − X. s − λ + λB∗(s) s − λ + λB∗(s) Sojourn time distribution of an M/G/1 system with FIFO service: In an M/M/1 system, we have B∗(s) = µ/(s + µ): If a customer spends Tj sec in the system, the number of customers it  λ  leaves behind in the system is the number of customers that arrive W ∗(s) = (1 − ρ) 1 + during these Tj sec, due to FIFO. s + µ − λ

3-159 3-160 M/G/1 Queue: Embedded MC V Residual life time∗ I

Taking the inverse transform of W ∗(s) (L{Ae−at} ↔ A/(s + a)), Hitchhiker’s paradox: •   λ  Cars are passing at a point of a road according to a Poisson process L−1{W ∗(s)} = L−1 (1 − ρ) 1 + s + µ − λ with rate λ = 1/10, i.e., 10 min. = (1 − ρ)δ(t) + λ(1 − ρ)e−µ(1−ρ)x , x > 0 A hitchhiker arrives to the roadside point at random instant of time.

We can write W ∗(s) in terms of R0∗(s) Previous car Next car

(1 − ρ)s (1 − ρ)s time W ∗(s) = = Hitchhiker arrives s − λ + λB∗(s) s − λ(1 − B∗(s)) 1 − ρ 1 − ρ What is his mean waiting time for the next car? = = 1 − B∗(s) 1 − ρR0∗(s) 1 − λX 1. Since he arrives randomly in an interval, it would be 5 min. sX 2. Due to memoryless property of exponential distribution, it would be ∞ X another 10 min. = (1 − ρ) (ρR0∗(s))k k=0 ∗L. Kleinrock, Queueing systems, vol.1: theory 3-161 3-162

Residual life time II Residual life time III

The distribution of an interval that the hitchhiker captures depends If we take the Laplace transform of the pdf of R0 for 0 ≤ R0 ≤ x, on both X and fX (x): x −sx −sx 0 Z e 1 − e E[e−R s|X 0 = x] = dy = fX0 (x) = CxfX (x) and C : proportional constant 0 x sx R ∞ Unconditioning over X 0, we have R0∗(s) and its moments as Since 0 fX0 (x)dx = 1, we have C = 1/E[X] = 1/X: ∗ (n+1) xfX (x) 0∗ 1 − FX (s) 0n X fX0 (x) = R (s) = ⇒ E[R ] = X sX (n + 1)X Since Pr[R0 < y|X 0 = x] = y/x for 0 ≤ y ≤ x, joint pdf of X and R0: ∗ R ∞ −sx where FX (s) = 0 e fX (t)dt dy xf (x)dx f (x)dydx Mean residual time is rewritten as Pr[y < R0 < y + dy, x < X 0 < x + dx] = X = X x X X  σ2  R = E[R0] = 0.5 X + X Unconditioning over X 0, X Z ∞ dy 1 − FX (y) 1 − FX (y) 0 0 fR0 (y)dy = fX (x)dx = dy ⇒ fR0 (y) = Surprisingly, the distribution of the elapsed waiting time, X − R , is X y X X identical to that of the remaining waiting time.

3-163 3-164 M/G/1 Queue: Embedded MC VI M/G/1 Queue: Embedded MC VII

State transition diagram of M/G/1 GBE can be expressed as

… n X

πn = πn+1−kαk + αnπ0 for n = 0, 1, 2, ··· … k=0 – Q(z) can be also obtained using Q(z) = P∞ π zn … … … n=0 n

As an alternative, we define ν0 = 1 and νi = πi/π0

1 − α0 and its state transition probability matrix ν1 = α0   α0 α1 α2 α3 ... 1 − α1 α1 ν2 = ν1 − α0 α1 α2 α3 ... α0 α0   Z ∞ k  0 α α α ... (λx) −λx . P =  0 1 2  and αk = e b(x)dx .   k! .  0 0 α0 α1 ... 0  . . . . .  1 − α1 α2 αi−1 αi−1 ...... νi = νi−1 − νi−2 − · · · ν1 − . . . . α0 α0 α0 α0

P∞ P∞ πi – νi = 1 + = 1/π0 and πi = π0νi i=0 i=1 π0 3-165 3-166

M/G/1 queue: Mean value analysis I M/G/1 queue: Mean value analysis II

∗ dW (s) We are interested in E[Wi] = ds (See BG p.186) A sample path of M/G/1 queue s=0 W : Waiting time in queue of customer i • i R : Residual service time seen by customer i 3 • i 2 1 Xi: Service time of customer i time • customers of N : Number of customers in queue found by customer i # 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 • i i−1 X Wi = Ri + Xj 8 7 j=i−Ni 6 5 4 Taking expectations and using the independence among Xj, 3

i−1 Virtual workload 2 h X i 1 1 E[Wi] , W = E[Ri] + E E[Xj|Ni] = Ri + Nq time µ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 j=i−Ni

Since Nq = λW , Ri = R for all i, we have 4 3 R 2 W = 1 1 − ρ time 0 1 2 3 4 5 6 7 3-167 8 9 10 11 12 13 14 15 16 3-168 M/G/1 queue: Mean value analysis III M/G/1 queue: Mean value analysis IV

Time averaged residual time of r(τ) in the interval [0, t] is From the hitchhiker’s paradox, we have E[R0] = E[X 2]/(2E[X])

M(t) M(t) 0 1 Z t 1 X 1 1 M (t) P X 2 R =0 · Pr[N (t) = 0] + E[R ] × Pr[N (t) > 0] R(t) = r(τ)dτ = X 2 = i=1 i t t 2 i 2 t M (t) E[X 2] λX 2 0 i=1 = × λE[X] = 2E[X] 2 – M (t) is the number of service completion within [0, t]. P-K formula for mean waiting time in queue 2 2 2 ∗0 λX λ(σX + X ) W = −W (s)|s=0 = = 2(1 − ρ) 2 2 2 2(1 − ρ) X =σX +X 2 2 1 + Cx ρ 1 + Cx Sec. 3.5 The MIG11 System 191 = X = WM/M/1 Residual service service Residual time time 2 1 − ρ 2 2. A packet transmitted in frame i might be accepted at the receiver, but the correspond- ing acknowledgment (in the form of the receive number) might not have arrived at 2 2 2 i – Cx = σX /X is the coefficient of variation of the service time – e.g., upon a newthe servicetransmitter ofby the durationtime the transmissionX1, r(ofτpacket) starts+ n - atI isXcompleted.1 andThis decays can happen due to errors in the return channel, large propagation delays, long return – The average time in the system T = W + X linearly for X1 timeframes unitsrelative to the size of the goback number n, or a combination thereof. Eg.: since C = 1 in an M/M/1 and C = 0 in an M/D/1, As t → ∞, lim We willRassume(t) =(somewhatR = unrealistically)λX 2/2 that retransmissions occur only due to x x reasont→∞I, and that a packet is rejected at the receiver with probability p independently of other packets. 2 ∗ ρ ρ Upon a new serviceConsider of durationthe case λ where X, packets startsarrive at dW at theandtransmitter (decayss) linearlyaccording forto a Poisson time units. WM/M/1 = X > WM/D/1 = X process withWrate=.\. It follows that the←→time interval between start of the first transmission 1 − ρ 2(1 − ρ) 3-169 3-170 of a given packet after2(1the−lastρtransmission) of thedsprevious packets=0 and end of the last transmission of the given packet is I + kn time units with probability (I - p)pk. (This corresponds to k retransmissions following the last transmission of the previous packet; see Delay Models in Data Networks Chap. 3 Fig. 3.17.) Thus, the transmitter's queue behaves like an MIGII queue with service time distribution given by Packet arrivals P{X=I+kn}=(l-p)p,k k=O.I. ...

The first two moments of the service time are ::::::::'':'::':'':'::'Vx, x x v2 , V3 X5 V4 X5 Delay analysisx of an ARQ(X systemX) M/G/1 Queue with vacations I Busy period

Vacations

Suppose Go-Back-N ARQ system, where a packet is successfully Server takes a vacation at the end of each busy periodTime - x 2 k (00 k 00 k 2 00)2k =(l-p) +n transmitted with probability 1 − p; ACK arrives in tx time of N − 1 Figure 3.12 An M/G/I1 system with vacations. At the end of a busy Take an additionalperiod, vacation the server goes onif vacation no forcustomers time V with first and second are moments found at the end of • Y frames withoutWe now annote errorthat V and V , respectively. If the system is empty at the end of a vacation, the each vacation: V1server, V takes2, a ...new vacation. the An durations arriving customer to an of empty the system successive must vacations X X wait until the end of the current vacation to get service. ,",pk = _1_, '"'kk __P_ Packet arrivals to a transmitter’sI-p P - (I queue_ )2' follows Poisson with mean A customer finds the system idle (vacation), waits for the end of • k=O k=O P • λ (packets/slot) the vacation period Start of effective service time Effective service time Effective service time of packet 4 of packet 1 of packet 2 ,. II .. 'II

X, X3

X, Error Final transmission Error Final transmission Correct Error Error of packet 1 of packet 2

Packets-Transmitted Residual service time including vacation periods Figure 3.17 Illustration of the effective service times of packets in the ARQ Figure 3.13 Residual service times for an M/G/1 system with vacations. We need the first two moments of the service time to use P-K • Busy periods alternate with vacation periods. • system of Example 3.15. For example, packet 2 has an effective service time of M(t) L(t) n + 1 because there was an error in the first attempt to transmit it following the Z t formula ∞ last transmission of packet 1. but no error in the second attempt. 1 1 X 1 1 X 1 X Np r(τ)dτ = X 2 + V 2 X = (1 + kN )(1 − p)pk = 1 + t t 2 i t 2 i 1 − p 0 i=1 i=1 k=0 ∞ – M (t): # of services completed by time t X 2Np N 2(p + p2) X 2 = (1 + kN )2(1 − p)pk = 1 + + – L(t): # of vacations completed by time t 1 − p (1 − p)2 3-171 3-172

k=0 II M/G/1 Queue with vacations II FDM and TDM on a Slot Basis I

Residual service time including vacation periods is rewritten as Suppose m traffic streams of equal-length packets according to • Poisson process with rate λ/m each Z t PM(t) 1 X 2 PL(t) 1 V 2 1 M (t) i=1 2 i L(t) i=1 2 i If the traffic streams are frequency-division multiplexed on m r(τ)dτ = · + · • t 0 t M (t) t L(t) subchannels, the transmission time of each packet is m time units | {z } | {z } | {z } R as t→∞ λ as t→∞ 1−ρ as t→∞ 2 V – Using P-K formula, λX /(2(1 − ρ)), with ρ = λ and µ = 1/m, 2 2 λX (1 − ρ)V λm = + = R W = 2 2V FDM 2(1 − λ)

Using W = R/(1 − ρ), we have Consider the same FDM, but packet transmissions can start only at • • times, m, 2m, 3m,...: slotted FDM λX 2 V 2 W = + – This system gives stations a vacation of m slots 2(1 − ρ) 2V ! V 2 – The sum of waiting time in M/G/1 queue and mean residual WSFDM = WFDM + 0.5m = vacation times 2V

3-173 3-174

FDM and TDM on a Slot Basis II

• m traffic streams are time-division multiplexed, where one slot is
  dedicated to each traffic stream (BG Fig. 3.20: TDM with m = 4 traffic
  streams, one time unit per slot)
  – Service time to each queue, X̄ = m slots → E[X²] = m²
  – Frame synchronization delay: m/2
  – Using the P-K formula, we have
    W_TDM = λm/(2(1 − λ)) + m/2 = W_SFDM
  – System response time: T_TDM = 1 + W_TDM, since the transmission time
    of a packet is one slot; the customer's average total delay is more
    favorable in TDM than in FDM (assuming m > 2)

M/G/1 Queue with Non-Preemptive Priorities I

Customers are divided into K priority classes, k = 1, ..., K (class 1
highest).
• Non-preemptive priority: service of a customer completes uninterrupted,
  even if customers of higher priority arrive in the meantime
• A separate (logical) queue is maintained for each class; each time the
  server becomes free, the first customer in the highest-priority
  non-empty queue enters service
• Due to the non-preemptive policy, the mean residual service time R̄ seen
  by an arriving customer is the same for all priority classes if all
  customers have the same service time distribution
Notations:
• N̄_q^(k): mean number of waiting customers of class k in the queue
• W_k: mean waiting time of class-k customers
• ρ_k: utilization, or load, of class k: ρ_k = λ_k X̄_k
• R̄: mean residual service time in the server upon arrival

3-175 3-176

Stability condition: ρ + ρ + ··· + ρ < 1. 1 2 K From W2 = R/((1 − ρ1)(1 − ρ1 − ρ2)), we can generalize Priority 1: similar to P-K formula, R Wk = 1 (1) (1) R (1 − ρ − · · · − ρ )(1 − ρ − · · · − ρ ) W1 = R + Nq and Nq = λ1W1 ⇒ W1 = 1 k−1 1 k µ 1 − ρ1 Priority 2: As before, the mean residual service time R is

1 (1) 1 (2) 1 K K W2 = R + Nq + Nq + λ1W2 1 2 X 2 1 X 2 µ µ µ R = λX , with λ = λi and X = λiX 1 2 1 2 λ i | {z } | {z } i=1 i=1 time needed to serve time needed to serve those customers class-1 and class-2 customers in higher classes that arrive ahead in the queue during the waiting time of class-2 customer Mean waiting time for class-k customers: (2) From Nq = λ2W2, PK 2 i=1 λiXi R + ρ1W1 Wk = W2 = R + ρ1W1 + ρ2W2 + ρ1W2 ⇒ W2 = 2(1 − ρ1 − · · · − ρk−1)(1 − ρ1 − · · · − ρk) 1 − ρ1 − ρ2 Note that average queueing time of a customer depends on the arrival – Using W1 = R/(1 − ρ1), we have rate of lower priority customers. R W2 = (1 − ρ1)(1 − ρ1 − ρ2) 3-177 3-178

M/G/1 Queue with Preemptive Resume Priorities I M/G/1 Queue with Preemptive Priorities II

Preemptive resume priority (iii) Average time required to serve customers of priority higher than k Service of a customer is interrupted when a higher priority • that arrive while the customer is in the system for Tk customer arrives. k−1 k−1 It resumes from the point of interruption when all higher priority X X • λ X T = ρ T for k > 1 and 0 if k = 1 customers have been served. i i k i k i=1 i=1 In this case the lower priority class customers are completely • "invisible" and do not affect in any way the queues of the higher Combining these three terms, classes • k−1 Waiting time of class-k customer consists of Rk X T = X + + T ρ k k 1 − ρ − · · · − ρ k k (i) The customer’s own mean service time Xk. 1 k i=1 (ii) The mean time to serve the customers in classes 1,..., k, ahead in | {z } this is zero for k=1 the queue, Pk 2 (1 − ρ1 − · · · − ρk)X k + i=1 λiXi k ⇒Tk = . Rk 1 X (1 − ρ − · · · − ρ )(1 − ρ − · · · − ρ ) W = and R = λ X 2. 1 k−1 1 k k 1 − ρ − · · · − ρ k 2 i i | {z } 1 k i=1 becomes 1 if k=1 This is equal to the average waiting time in an M/G/1 system where customers of priority lower than k are neglected 3-179 3-180 Upper Bound for G/G/1 System I Upper Bound for G/G/1 System II

th Waiting time of the k customer Notations for any random variable Y : Y + = max{0, Y } and Y − = − min{0, Y } • 2 Y = E[Y ] and σ2 = EY 2 − Y  • Y Y = Y + − Y − and Y + · Y − = 0 • E[Y ] = Y = Y + − Y − th • Wk: Waiting time of the k customer 2 2 2 + − • σ = σ + + σ − + 2Y · Y th • Y Y Y Xk: Service time of the k customer • th st Using the above, we can express τk: Interarrival time between the k and the (k + 1) customer • th st Ik: Idle period between the k and the (k + 1) customer W = max{0, W +X −τ } = max{0, W +V } = (W +V )+ • • k+1 k k k k k k k I = − min{0, W +X −τ } = − min{0, W +V } = (W +V )− Wk+1 = max{0, Wk + Xk − τk} and Ik = − min{0, Wk + Xk − τk} • k k k k k k k k 2 2 λ(σa +σb ) 2 2 2 + − The average waiting time in queue: W ≤ σ =σ + + σ − + 2(W + V ) · (W + V ) 2(1−ρ) (Wk +Vk ) (Wk +Vk ) (Wk +Vk ) k k k k σ2: variance of the interarrival times =σ2 + σ2 + 2W · I a Wk+1 Ik k+1 k • 2 σ : variance of the service times 2 2 2 2 2 • b =σ + σ = σ + σ + σ λ: average interarrival time Wk Vk Wk a b • 3-181 3-182

Upper Bound for G/G/1 System III

As k → ∞, we can see

σ2 + σ2 + 2W · I = σ2 + σ2 + σ2 Wk+1 Ik k+1 k Wk a b becomes σ2 + σ2 + 2W · I = σ2 + σ2 + σ2 W I W a b We get W as

σ2 + σ2 σ2 σ2 + σ2 λ(σ2 + σ2) W = a b − I ≤ a b = a b 2I 2I 2I 2(1 − ρ)

1 – The average idle time I between two successive arrival is λ (1 − ρ)

3-183