Introduction to Queueing Theory: Review on Poisson Process


ELL 785 Computer Communication Networks
Lecture 3

Contents

  • Motivations
  • Discrete-time Markov processes
  • Continuous-time Markov processes
  • Queueing systems

Motivations: circuit switching networks

Traffic fluctuates as calls are initiated and terminated:
  • Telephone calls come and go. People's activity follows patterns: mid-morning and mid-afternoon at the office, evening at home, summer vacation, etc.
  • Outlier days are extra busy (Mother's Day, Christmas, ...); disasters and other events cause surges in traffic.

Fluctuation in trunk occupancy:
  • The number of busy trunks varies over time; when all trunks are busy, new call requests are blocked.
  (Figure: occupancy of trunks 1-7 over time, each trunk alternating between active and idle; many input lines are concentrated onto fewer trunks.)
  • Providing enough resources so that call requests are always met is too expensive; meeting call requests most of the time is cost-effective.
  • Switches concentrate traffic onto shared trunks, so blocking of requests will occur from time to time.
  • Design problem: minimize the number of trunks subject to a constraint on the blocking probability.

Motivations: packet switching networks

Statistical multiplexing:
  • Dedicated lines involve no waiting for other users, but lines are used inefficiently when user traffic is bursty.
  • A shared line concentrates packets from several input lines into one output line through a buffer; packets are delayed when the line is not immediately available, and the number of packets in the system fluctuates.
  (Figure: (a) dedicated lines carry A1, A2; B1, B2; C1, C2 separately; (b) a shared line carries the interleaved sequence A1, C1, B1, A2, B2, C2.)

Delay = waiting time + service time:
  (Figure: packets P1, ..., P5 arrive at the queue, wait, begin transmission, and complete transmission along a time axis 0, 1, 2, ..., n, n+1, n+2.)
  • Packet arrival process: e.g., the number of people in Cafe Coffee Day, or the number of rickshaws at the IIT main gate.
  • Packet service time: with a transmission rate of R bps and a packet L bits long, the service time is L/R (the transmission time of the packet). The packet length can be a constant or a random variable.

Random (or stochastic) processes

General notion:
  • Suppose a random experiment is specified by the outcomes ζ from some sample space S, with ζ ∈ S.
  • A random (or stochastic) process is a mapping from ζ to a function of time t: X(t, ζ).
    – For fixed t, e.g., t1, t2, ...: X(ti, ζ) is a random variable.
    – For fixed ζ = ζi: X(t, ζi) is a sample path or realization.

Discrete-time Markov processes

A sequence of integer-valued random variables Xn, n = 0, 1, ..., is called a discrete-time Markov process if the following Markov property holds:

  Pr[Xn+1 = j | Xn = i, Xn−1 = in−1, ..., X0 = i0] = Pr[Xn+1 = j | Xn = i]

  • State: the value of Xn at time n
  • State space: the set S = {n | n = 0, 1, ...}
    – An integer-valued Markov process is called a Markov chain (MC).
    – With an independent Bernoulli sequence Xn with probability 1/2, is Yn = 0.5(Xn + Xn−1) a Markov process?
    – Is the vector process Yn = (Xn, Xn−1) a Markov process?

The chain is time-homogeneous if, for any n,

  pij = Pr[Xn+1 = j | Xn = i]  (independent of the time n),

which is called the one-step (state) transition probability. The state transition probability matrix

              | p00 p01 p02 ··· |
  P = [pij] = | p10 p11 p12 ··· |
              | p20 p21 p22 ··· |
              |  ···            |

is a stochastic matrix: pij ≥ 0 and Σj pij = 1 for every row i.

The n-step transition probability is

  p(n)ij = Pr[Xl+n = j | Xl = i]  for n ≥ 0 and i, j ≥ 0.

Example (a mouse in a maze): the maze is a 3 × 3 grid of cells numbered 1 to 9. The mouse chooses the next cell to visit with probability 1/k, where k is the number of adjacent cells. The mouse does not move any more once it is caught by the cat or it has the cheese.

  – Consider the two-step transition probability:
    Pr[X2 = j, X1 = k | X0 = i] = Pr[X2 = j, X1 = k, X0 = i] / Pr[X0 = i]
                                = Pr[X2 = j | X1 = k] Pr[X1 = k | X0 = i] Pr[X0 = i] / Pr[X0 = i]
                                = pik pkj

  – Summing over k, we have

    p(2)ij = Σk pik pkj

For the mouse in the maze (cells 7 and 9 are absorbing), the one-step transition probability matrix is

          1    2    3    4    5    6    7    8    9
     1 |  0   1/2   0   1/2   0    0    0    0    0  |
     2 | 1/3   0   1/3   0   1/3   0    0    0    0  |
     3 |  0   1/2   0    0    0   1/2   0    0    0  |
     4 | 1/3   0    0    0   1/3   0   1/3   0    0  |
 P = 5 |  0   1/4   0   1/4   0   1/4   0   1/4   0  |
     6 |  0    0   1/3   0   1/3   0    0    0   1/3 |
     7 |  0    0    0    0    0    0    1    0    0  |
     8 |  0    0    0    0   1/3   0   1/3   0   1/3 |
     9 |  0    0    0    0    0    0    0    0    1  |

A weather example: in a place, the weather each day is classified as sunny, cloudy or rainy. The next day's weather depends only on the weather of the present day and not on the weather of the previous days. If the present day is sunny, the next day will be sunny, cloudy or rainy with respective probabilities 0.70, 0.10 and 0.20. The transition probabilities are 0.50, 0.25 and 0.25 when the present day is cloudy, and 0.40, 0.30 and 0.30 when the present day is rainy.

          S     C     R
     S | 0.70  0.10  0.20 |
 P = C | 0.50  0.25  0.25 |
     R | 0.40  0.30  0.30 |

The Chapman-Kolmogorov equations:

  p(n+m)ij = Σk∈S p(n)ik p(m)kj  for n, m ≥ 0 and i, j ∈ S

Proof:

  Pr[Xn+m = j | X0 = i] = Σk∈S Pr[Xn+m = j | X0 = i, Xn = k] Pr[Xn = k | X0 = i]
  (Markov property)     = Σk∈S Pr[Xn+m = j | Xn = k] Pr[Xn = k | X0 = i]
  (time-homogeneity)    = Σk∈S Pr[Xm = j | X0 = k] Pr[Xn = k | X0 = i]

  – In terms of the n-step transition probability matrix, P(n+m) = P(n) P(m), and in particular P(n+1) = P(n) P. For the weather example,

        | 0.601 0.168 0.230 |         | 0.596 0.172 0.232 |
  P3 ≈  | 0.591 0.176 0.233 |,  P12 ≈ | 0.596 0.172 0.232 | ≈ P13,
        | 0.585 0.180 0.235 |         | 0.596 0.172 0.232 |

  so the rows of Pn become identical as n grows.
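The two-step formula p(2)ij = Σk pik pkj is just the matrix product P·P, which can be checked numerically for the maze chain. A minimal NumPy sketch (the float matrix is our transcription of P above):

```python
import numpy as np

# One-step transition matrix of the mouse-in-the-maze chain (cells 1..9;
# cells 7 and 9 are absorbing). Row i lists the probabilities of moving
# from cell i+1 to each cell.
P = np.array([
    [0,   1/2, 0,   1/2, 0,   0,   0,   0,   0  ],
    [1/3, 0,   1/3, 0,   1/3, 0,   0,   0,   0  ],
    [0,   1/2, 0,   0,   0,   1/2, 0,   0,   0  ],
    [1/3, 0,   0,   0,   1/3, 0,   1/3, 0,   0  ],
    [0,   1/4, 0,   1/4, 0,   1/4, 0,   1/4, 0  ],
    [0,   0,   1/3, 0,   1/3, 0,   0,   0,   1/3],
    [0,   0,   0,   0,   0,   0,   1,   0,   0  ],
    [0,   0,   0,   0,   1/3, 0,   1/3, 0,   1/3],
    [0,   0,   0,   0,   0,   0,   0,   0,   1  ],
])

# P is stochastic: nonnegative entries and every row sums to 1
assert np.allclose(P.sum(axis=1), 1.0)

# Two-step transition probabilities p(2)_ij = sum_k p_ik p_kj
P2 = P @ P

# e.g. from cell 1 the mouse returns to cell 1 in two steps via cell 2 or 4:
print(P2[0, 0])   # 1/2 * 1/3 + 1/2 * 1/3 = 1/3
```

Since P2 is again stochastic, its rows also sum to one, as the summation step in the derivation requires.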
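For the weather chain, the Chapman-Kolmogorov relation P(n+m) = P(n) P(m) and the convergence of Pn can be checked with a short NumPy sketch:

```python
import numpy as np

# Weather chain: rows/columns ordered sunny, cloudy, rainy
P = np.array([[0.70, 0.10, 0.20],
              [0.50, 0.25, 0.25],
              [0.40, 0.30, 0.30]])

P3  = np.linalg.matrix_power(P, 3)
P12 = np.linalg.matrix_power(P, 12)
P13 = P12 @ P                      # Chapman-Kolmogorov: P^(n+1) = P^n P

print(np.round(P3, 3))
print(np.round(P12, 3))            # rows are (almost) identical

# P^13 is numerically indistinguishable from P^12
assert np.allclose(P12, P13, atol=1e-5)
```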
State probabilities at time n:
  • π(n)i = Pr[Xn = i], collected in the row vector π(n) = [π(n)0, ..., π(n)i, ...]
  • π(0)i: the initial state probability
  • By total probability,

    Pr[Xn = j] = Σi∈S Pr[Xn = j | X0 = i] Pr[X0 = i],  i.e.,  π(n)j = Σi∈S p(n)ij π(0)i

  • In matrix notation: π(n) = π(0) Pn

Limiting distribution: given an initial probability distribution π(0),

  πj = limn→∞ π(n)j = limn→∞ p(n)ij

  – As n → ∞, π(n) = π(n−1) P becomes π = π P with π · 1 = 1.
  – The system reaches "equilibrium" or "steady state".

A Markov model for packetized speech: if the nth packet contains silence, then the probability of silence in the next packet is 1 − α and the probability of speech activity is α. Similarly, if the nth packet contains speech activity, then the probability of speech activity in the next packet is 1 − β and the probability of silence is β.

(a) Find the state transition probability matrix P.
(b) Find an expression for Pn.

(a) With state 0 for silence and state 1 for speech activity,

      | 1 − α    α   |
  P = |  β     1 − β |

(b) We can write Pn as Pn = N−1 Λn N, using the spectral decomposition of P. From

  |P − λI| = (1 − α − λ)(1 − β − λ) − αβ = 0

we get λ1 = 1 and λ2 = 1 − α − β. The corresponding left (row) eigenvectors are e1 = [1, α/β] and e2 = [1, −1]. Thus

      | e1 |   | 1  α/β |          1    | β   α  |
  N = |    | = |        |,  N−1 = ————— |        |
      | e2 |   | 1  −1  |         α + β | β  −β  |

and, with Λn = diag(1, θn) and θ = 1 − α − β,

                    1    | β + αθn   α − αθn |
  Pn = N−1 Λn N = ————— |                   |
                  α + β | β − βθn   α + βθn |

For 0 < α + β < 2 we have |θ| < 1, so θn → 0 and both rows of Pn tend to [β, α]/(α + β).

If P(n) has identical rows, then P(n+1) does as well. Suppose every row of P(n) equals the row vector r. Then row j of P(n+1) = P P(n) is

  Σk pjk r = (Σk pjk) r = r,

so every row of P(n+1) also equals r.
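The closed form for Pn above can be sanity-checked against direct matrix powers. A sketch with illustrative parameter values α = 0.3, β = 0.1 (our choice; any 0 < α + β < 2 works):

```python
import numpy as np

# Illustrative (assumed) parameters for the silence/speech chain
alpha, beta = 0.3, 0.1
theta = 1.0 - alpha - beta          # the second eigenvalue of P

P = np.array([[1 - alpha, alpha],
              [beta, 1 - beta]])

def P_n(n):
    """Closed-form n-step transition matrix from the spectral decomposition."""
    return (np.array([[beta + alpha * theta**n, alpha - alpha * theta**n],
                      [beta - beta * theta**n,  alpha + beta * theta**n]])
            / (alpha + beta))

# The closed form agrees with direct matrix powers for several n
for n in range(1, 10):
    assert np.allclose(P_n(n), np.linalg.matrix_power(P, n))

# As n grows, both rows approach [beta, alpha] / (alpha + beta)
print(np.round(P_n(50), 6))   # -> [[0.25 0.75] [0.25 0.75]]
```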
Stationary distribution:
  • zj denotes the probability of being in state j, collected in the row vector z = [zj], with

    z = z P and z · 1 = 1

  • If z is chosen as the initial distribution, i.e., π(0)j = zj for all j, then π(n)j = zj for all n.
  • A limiting distribution, when it exists, is always a stationary distribution, but the converse is not true. For example,

        | 0 1 |       | 1 0 |       | 0 1 |
    P = |     |, P2 = |     |, P3 = |     |, ...
        | 1 0 |       | 0 1 |       | 1 0 |

    has the stationary distribution [1/2, 1/2], but Pn alternates forever, so no limiting distribution exists.

Global balance equations: π = π P means, for each state j,

  πj Σi pji = Σi πi pij

Back to the weather example: using π P = π, we have

  π0 = 0.70 π0 + 0.50 π1 + 0.40 π2
  π1 = 0.10 π0 + 0.25 π1 + 0.30 π2
  π2 = 0.20 π0 + 0.25 π1 + 0.30 π2

  – Note that one equation is always redundant. Replacing one of them with the normalization 1 = π0 + π1 + π2 gives

    |  0.3  −0.5  −0.4 | | π0 |   | 0 |
    | −0.1  0.75  −0.3 | | π1 | = | 0 |
    |   1     1     1  | | π2 |   | 1 |

  with solution π0 = 0.596, π1 = 0.1722, π2 = 0.2318.

Classes of states:
  • State j is accessible from state i if p(n)ij > 0 for some n.
  • States i and j communicate if they are accessible from each other.
  • Two states belong to the same class if they communicate with each other.

Periodicity and aperiodicity:
  • State i has period d if p(n)ii = 0 whenever n is not a multiple of d, where d is the largest integer with this property. A state with period d = 1 is called aperiodic.
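The 3 × 3 linear system above can be solved directly. A NumPy sketch (the matrix A mirrors replacing the redundant balance equation with the normalization):

```python
import numpy as np

# Weather chain (rows/columns: sunny, cloudy, rainy)
P = np.array([[0.70, 0.10, 0.20],
              [0.50, 0.25, 0.25],
              [0.40, 0.30, 0.30]])

# pi = pi P is equivalent to (I - P^T) pi^T = 0; keep two of those
# (one is redundant) and append the normalization sum(pi) = 1
A = np.vstack([(np.eye(3) - P.T)[:2], np.ones((1, 3))])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)

print(np.round(pi, 4))            # -> approx [0.596, 0.1722, 0.2318]
assert np.allclose(pi @ P, pi)    # pi satisfies global balance
```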
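The two-state example above, where a stationary distribution exists but a limiting distribution does not, can also be seen numerically:

```python
import numpy as np

# Periodic two-state chain: P^2 = I, so P^n alternates and never converges
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

z = np.array([0.5, 0.5])
assert np.allclose(z @ P, z)       # z is a stationary distribution

# But starting from state 0 the state distribution oscillates forever
# (period d = 2), so no limiting distribution exists
pi = np.array([1.0, 0.0])
for n in range(1, 5):
    pi = pi @ P
    print(n, pi)                   # alternates [0. 1.], [1. 0.], ...
```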
