Analysis of structured multi-dimensional Markov processes
Citation for published version (APA): Selen, J. (2017). Analysis of structured multi-dimensional Markov processes. Technische Universiteit Eindhoven.
Document status and date: Published: 27/09/2017
Document Version: Publisher’s PDF, also known as Version of Record (includes final page, issue and volume numbers)
This work was financially supported by The Netherlands Organisation for Scientific Research (NWO) through the Free Competition grant QueQuaR.
© Jori Selen, 2017
A catalogue record is available from the Eindhoven University of Technology Library.
ISBN: 978-90-386-4322-9
Painting on cover: You and me (in the centre of the universe), Frank Selen
Printed by EPS Amsterdam
DISSERTATION

to obtain the degree of doctor at the Eindhoven University of Technology, on the authority of the rector magnificus, prof.dr.ir. F.P.T. Baaijens, to be defended in public before a committee appointed by the Doctorate Board on Wednesday 27 September 2017 at 16:00

by

Jori Selen

born in Tegelen

This dissertation has been approved by the supervisors, and the composition of the doctoral committee is as follows:
chair: prof.dr.ir. B. Koren
1st supervisor: prof.dr. J.S.H. van Leeuwaarden
2nd supervisor: prof.dr.ir. I.J.B.F. Adan
members: dr. B.H. Fralix (Clemson University)
         dr. F.M. Spieksma (Universiteit Leiden)
         prof.dr.ir. O.J. Boxma
         prof.dr.ir. G.J.J.A.N. van Houtum
         prof.dr. R.J. Boucherie (Universiteit Twente)
The research described in this dissertation was carried out in accordance with the TU/e Code of Scientific Conduct ("TU/e Gedragscode Wetenschapsbeoefening").

Acknowledgements
As a mechanical engineering student I craved more mathematical rigour. Ivo Adan and Johan van Leeuwaarden allowed me to satisfy this hunger by offering me a Ph.D. position in an area that is close to my heart. The main result of that position is this thesis. Along the path towards the completion of the thesis I have experienced support and encouragement from many. I would like to take this opportunity to thank them.

I am incredibly grateful for the supervision that I have received. Johan, your guidance, ambition, and love for analysis have shaped me not only mathematically, but also as a person. I thoroughly enjoyed our close collaboration on the book and the off-topic discussions that we had during those meetings. Ivo, the enthusiasm, intuition and rigour with which you tackle problems is phenomenal. I have been in the fortunate position to witness these qualities first-hand and I hope that I succeeded in making some of them my own.

I would not have been able to do all the work in this thesis by myself. In addition to my supervisors, I wish to thank Stella Kapodistria for the joint effort that resulted in Chapter 6 and the intensive period of six weeks filled with daily meetings that led to Chapter 7. I am also indebted to Guido Janssen for his help with a particularly nasty proof in Chapter 6. Brian Fralix, thank you for teaching me about so many new techniques, for hosting me at Clemson, and for the Friday afternoon Skype calls that ultimately led to Chapter 5. Vidyadhar Kulkarni, Andrei Sleptchenko, Geert-Jan van Houtum, your contributions to Chapters 2 and 4 are also highly appreciated. To Brian Fralix, Floske Spieksma, Onno Boxma, Geert-Jan van Houtum and Richard Boucherie I would like to extend my thanks for being part of my defense committee, reading my dissertation and providing valuable feedback.

For four years the Stochastics group has been my second home. I have thoroughly enjoyed its open and pleasant atmosphere and the bright and
motivated people inside of it. I am also thankful for all the help that I received from the support staff. A special mention goes to my current and former office mates Alessandro, Debankur, Gianmarco, Kay and Marta for creating the best work environment that I could have ever hoped for.

Outside work, I have had the pleasure to spend time with some great friends. Daan, Don, Etienne, Guus, Jos, Kay H, Kay P, Koen H, Koen J, and Rob, thank you for the great time at the Vestdijk, the boardgames, weekends away, festivals, and all the other things we did throughout the years.

Mom, dad, Janne, Dennis, Imo, Karlijn, Lina, thank you for all the interest that you showed in my work, for helping me in whatever way possible and for always being there. I am also very appreciative of the support that I received from Sjors and Margriet.

The final paragraph is dedicated to you, Simone. Thank you for your unconditional support, the comfort at home, and all the fun we have together. I am truly happy that I got to share this experience with you and all the ones that are to come.

Contents
Notation index

1  Introduction
   1.1  Machine with setup times
   1.2  Single-server priority system
   1.3  Join-the-shortest-queue routing
   1.4  Outline of this thesis

2  The snowball effect of customer slowdown
   2.1  Introduction
   2.2  Key features of threshold slowdown systems
   2.3  Model description
   2.4  Obtaining the equilibrium distribution
   2.5  A QED-type regime
   2.6  Conclusion
   2.A  Proof of Lemma 2.2
   2.B  Proof of Lemma 2.3

3  Product-form solutions for a class of Markov processes
   3.1  Introduction
   3.2  Modeling assumptions
   3.3  Analysis of the balance equations
   3.4  Symmetric processes
   3.5  Aggregated state concept
   3.6  First passage times
   3.7  Conclusion

4  Single-server priority systems
   4.1  Introduction
   4.2  Matrix-analytic method
   4.3  Application in spare parts logistics
   4.4  Conclusion

5  Multi-server priority systems
   5.1  Introduction
   5.2  Model description and outline of approach
   5.3  A slight modification of the CAP method
   5.4  Deriving an explicit expression for v_{i,j}
   5.5  Laplace transforms for states in the horizontal boundary
   5.6  The single-server case
   5.7  Conclusion
   5.A  Combinatorial identities
   5.B  Random-product representation
   5.C  Results for M/M/1 queues

6  Shortest-expected-delay routing
   6.1  Introduction
   6.2  Balance equations
   6.3  Evolution of the compensation approach and our contribution
   6.4  Numerical results
   6.5  Applying the compensation approach
   6.6  Conclusion
   6.A  Proof of step (c) of Lemma 6.8(i)
   6.B  Proof of Lemma 6.18(iii)

7  Generalized join-the-shortest-queue routing
   7.1  Introduction
   7.2  Model description
   7.3  Near-insensitivity
   7.4  Single queue approximation
   7.5  Evaluating the approximation
   7.6  Conclusion
   7.A  Proof of Theorem 7.5

Bibliography

Summary

About the author

Notation index
Vectors are denoted by bold lowercase or uppercase letters or numbers. Matrices are denoted by uppercase letters. Unless mentioned otherwise, indexing of vectors and matrices starts at 0. We sometimes abuse notation by using the bold vector notation for a (change in) state of a Markov process that has more than two dimensions. Aside from the number sets, all sets are denoted by calligraphic letters such as 𝒜. The argument of a (probability) generating function is z and the argument of a Laplace(-Stieltjes) transform is ω.
0            vector of zeros of appropriate dimension
1{A}         indicator function of the event A
1            vector of ones of appropriate dimension
α, β         parameters of a product-form solution
(A)[i,j]     matrix A with row i and column j removed
(A)_{i,j}    element (i, j) of matrix A
A⁻¹ or (A)⁻¹  inverse of a matrix A
𝒜ᶜ           complement of a set 𝒜
ℂ            set of complex numbers
ℂ⁺           set of complex numbers with positive real part
det(A)       determinant of a matrix A
E_x[f(X(t))]  expectation of a functional of X(t) given X(0) = x
e_i          vector of zeros of appropriate dimension with a 1 at position i
G            auxiliary matrix of the matrix-analytic method
Im(z)        imaginary part of z ∈ ℂ
i            complex unit
Λ_n          transition rate submatrices in a QBD process from level i to level i + n, independent of i
Λ_n^{(i)}    transition rate submatrices in a QBD process from level i to level i + n
ℕ, ℕ₀        ℕ = {1, 2, 3, …}, ℕ₀ = ℕ ∪ {0}
P_x(f(X(t)))  probability of a functional of X(t) given X(0) = x
Q            transition rate matrix of a Markov process
R            rate matrix of the matrix-geometric method
ℝ            set of real numbers
Re(z)        real part of z ∈ ℂ
𝒮            state space of a Markov process
𝒰, ∂𝒰        closed unit disc and unit circle
v^⊤          transpose of a vector v
ℤ            set of integers
:=           defined as
=ᵈ           equal in distribution
→ᵈ           convergence in distribution

1  Introduction
Markov processes provide essential instruments for modeling and analyzing a large variety of systems, including manufacturing systems, communication networks, traffic networks and service systems such as clinics or hospitals. Markov processes often prove detailed enough to capture the essential system dynamics, while remaining simple enough in terms of mathematical structure to be amenable to theoretical analysis. Markov processes fall under the umbrella of Stochastics, the branch of mathematics that aims to establish rigorous statements about systems that are inherently uncertain, and therefore subject to some degree of randomness.

A classical example is a queue, in which jobs need to wait for service. The queue grows when new jobs arrive and shrinks when jobs complete service. Queues occur virtually everywhere and can be seen as stochastic systems subject to variability in arrivals and services. Under certain assumptions, a queueing system can be modeled as a Markov process.

In this thesis we are primarily interested in determining the equilibrium distributions of Markov processes. The equilibrium distribution p characterizes the long-term fractions of time that a Markov process spends in each of the possible states. Without using any further structural properties, finding the equilibrium distribution requires solving the balance equations
$$
pQ = 0, \tag{1.1}
$$

where Q is the transition rate matrix. The linear system of balance equations (1.1) is generally difficult to solve, especially when Q is an infinite matrix. In this thesis we develop and extend methods for determining the equilibrium distribution by exploiting structural properties that arise from the interaction between states. These methods are not brute-force solvers of the large system of equations (1.1); instead, they exploit recurring structure in (1.1) to develop elegant solutions.
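For a finite state space, (1.1) can be solved directly. The following sketch (my own illustration, using an arbitrary hypothetical 3-state generator, not taken from the thesis) replaces one redundant balance equation with the normalization condition:

```python
import numpy as np

# Hypothetical 3-state transition rate matrix Q:
# off-diagonal entries are transition rates, rows sum to zero.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 2.0,  2.0, -4.0]])

# p Q = 0 determines p only up to a constant, so replace the last
# (redundant) balance equation, i.e. the last column of Q, by the
# normalization condition sum(p) = 1.
A = Q.copy()
A[:, -1] = 1.0
p = np.linalg.solve(A.T, np.array([0.0, 0.0, 1.0]))

print(p)        # equilibrium distribution
print(p @ Q)    # ≈ 0: the balance equations hold
```

The same idea (solve a rank-deficient system plus normalization) underlies every brute-force computation of an equilibrium distribution; the methods in this thesis avoid it by exploiting structure instead.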
[Figure 1.1: Transition rate diagram of the Markov process associated with the simple queue.]
A famous example where this structure plays a role is the Markov process associated with the simplest possible queue, which serves jobs at an exponential rate µ and to which new jobs arrive at exponential rate λ. Because of the exponential rates, at any moment in time only one event can happen: a new arrival or a service completion. An arrival is sometimes called a 'birth' and a service completion a 'death', and for that reason this Markov process is a special case of a birth–and–death (BD) process. The Markov process that describes the queue size evolves on the state space {0, 1, 2, …} according to the rates in Figure 1.1. Figure 1.1 is called a transition rate diagram and displays the states of the Markov process, with the arrows depicting the rates at which the process transitions from one state to the other. The rate λ should be smaller than µ, otherwise the queue will grow to infinity, and under this assumption the balance equations are given by

$$
\lambda p(0) = \mu p(1), \tag{1.2a}
$$
$$
(\lambda + \mu) p(i) = \lambda p(i-1) + \mu p(i+1), \quad i \ge 1. \tag{1.2b}
$$

Together, these balance equations can be written as the system of linear equations (1.1) with $p = \begin{pmatrix} p(0) & p(1) & p(2) & \cdots \end{pmatrix}$ and Q the transition rate matrix given by

$$
Q = \begin{pmatrix}
-\lambda & \lambda & & & \\
\mu & -(\lambda+\mu) & \lambda & & \\
 & \mu & -(\lambda+\mu) & \lambda & \\
 & & \ddots & \ddots & \ddots
\end{pmatrix}. \tag{1.3}
$$

We shall exploit additional structure that is hidden in the general matrix equation (1.1). For the simple queue we know that Q is extremely sparse and contains elements on only three diagonals. Moreover, the form of (1.2) allows an iterative solution. Indeed, using (1.2a) and setting ρ := λ/µ we find that p(1) = ρp(0), and using (1.2b) that p(2) = ρp(1), and more generally,

$$
p(i) = \rho p(i-1) = \rho^2 p(i-2) = \cdots = \rho^i p(0), \quad i \ge 0. \tag{1.4}
$$
Since $\sum_{i \ge 0} p(i) = 1$, we conclude that p(0) = 1 − ρ, arriving at an elegant solution in terms of powers of ρ:

$$
p(i) = (1 - \rho)\rho^i, \quad i \ge 0. \tag{1.5}
$$

Most of the Markov processes in this thesis are multi-dimensional, in which case we encounter product forms of powers, for instance of the types α^i β^j, or matrix powers R^i (instead of scalar powers). While finding these product forms will typically be less straightforward than in the case of (1.5), we will often find ways to exploit recursive structures. Alternatively, when these structures do not allow for an explicit product-form solution, we develop exact recursive solution methods for determining the equilibrium distribution.

In the remainder of this chapter we introduce three Markov processes that are intended to familiarize the reader with the Markov processes and solution methods that are introduced and developed in later chapters. These illustrative examples identify the structures that we exploit and show what types of solutions and techniques to expect. At designated points, marked by a symbol, we explain how a later chapter relates to the process currently under consideration and we summarize the contribution of that chapter. Since the next chapters each provide a comprehensive literature overview, references to the literature in this chapter are scarce.
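The geometric solution (1.5) is easy to verify numerically. The sketch below (my own illustration, with arbitrarily chosen rates λ = 1 and µ = 2, not part of the thesis) solves the balance equations of a truncated version of the birth–and–death chain and compares the result with (1.5):

```python
import numpy as np

lam, mu = 1.0, 2.0           # arrival and service rates, lam < mu
rho = lam / mu
N = 200                      # truncation level, chosen so that p(N) ~ 0

# Tridiagonal generator of the truncated birth-and-death process.
Q = np.zeros((N + 1, N + 1))
for i in range(N):
    Q[i, i + 1] = lam        # birth: i -> i+1
    Q[i + 1, i] = mu         # death: i+1 -> i
Q -= np.diag(Q.sum(axis=1))  # diagonal makes rows sum to zero

# Solve p Q = 0 with normalization (replace last column by ones).
A = Q.copy()
A[:, -1] = 1.0
b = np.zeros(N + 1); b[-1] = 1.0
p = np.linalg.solve(A.T, b)

# Compare with the closed form (1.5): p(i) = (1 - rho) * rho**i.
closed = (1 - rho) * rho ** np.arange(N + 1)
print(np.max(np.abs(p - closed)))   # tiny truncation/rounding error
```

The truncated chain reflects at state N, so its equilibrium distribution differs from (1.5) only by a factor of order ρ^{N+1}, which is negligible here.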
1.1 Machine with setup times
Let us consider a machine processing jobs in order of arrival. Jobs arrive according to a Poisson process with rate λ and the processing times are exponential with mean 1/µ. For stability we assume that ρ := λ/µ < 1. The machine is turned off when the system is empty and it is turned on again when a new job arrives. The setup time is exponentially distributed with mean 1/θ. Turning off the machine takes no time.

The state of the system at time t is X(t) := (X₁(t), X₂(t)), with X₁(t) representing the number of jobs in the system at time t and X₂(t) describing whether the machine is turned off (0) or on (1) at time t. The process {X(t)}_{t≥0} is a Markov process with state space 𝒮 := {(i, j) ∈ ℕ₀ × {0, 1}}. The transition rate diagram is displayed in Figure 1.2. It looks similar to the one for the simple queueing process of Figure 1.1, except that each state i has been replaced by a set of states {(i, 0), (i, 1)}. This set of states is called level i. A process with a similar transition structure as a BD process, but with levels instead of single states, is called a quasi-birth–and–death (QBD) process. In BD processes transitions are restricted to neighbouring states,
while in QBD processes the transitions are restricted to neighbouring levels (but larger transitions may be possible within levels).

[Figure 1.2: Transition rate diagram of the Markov process associated with the machine with setup times.]

Let p(i, j) denote the equilibrium probability of state (i, j). Clearly, p(0, 1) = 0, since the process cannot transition to (0, 1). State (0, 1) is included in the state space for notational convenience: all levels consist of two states. From the transition rate diagram we obtain, by equating the flow out of a state to the flow into that state, the balance equations
$$
\lambda p(0,0) = \mu p(1,1), \tag{1.6a}
$$
$$
(\lambda + \theta) p(i,0) = \lambda p(i-1,0), \quad i \ge 1, \tag{1.6b}
$$
$$
(\lambda + \mu) p(i,1) = \lambda p(i-1,1) + \theta p(i,0) + \mu p(i+1,1), \quad i \ge 1. \tag{1.6c}
$$

The structure of the equations (1.6) is closely related to the balance equations (1.2) of the simple queueing system. This becomes more striking by introducing vectors of equilibrium probabilities $p_i = \begin{pmatrix} p(i,0) & p(i,1) \end{pmatrix}$ and writing (1.6) in vector-matrix notation:
$$
p_0 \Lambda_0^{(0)} + p_1 \Lambda_{-1}^{(1)} = 0, \tag{1.7a}
$$
$$
p_{i-1} \Lambda_1 + p_i \Lambda_0 + p_{i+1} \Lambda_{-1} = 0, \quad i \ge 1, \tag{1.7b}
$$

where

$$
\Lambda_{-1} = \begin{pmatrix} 0 & 0 \\ 0 & \mu \end{pmatrix}, \quad
\Lambda_0 = \begin{pmatrix} -(\lambda + \theta) & \theta \\ 0 & -(\lambda + \mu) \end{pmatrix}, \quad
\Lambda_1 = \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix},
$$
$$
\Lambda_0^{(0)} = -\Lambda_1, \quad
\Lambda_{-1}^{(1)} = \begin{pmatrix} 0 & 0 \\ \mu & 0 \end{pmatrix}.
$$

We now present two methods for determining the equilibrium probabilities of this Markov process, known as the matrix-geometric method [116] and the spectral expansion method [112].
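Before turning to the two methods, it can be instructive to compute the equilibrium distribution by brute force. The sketch below (my own illustration, with arbitrary rates λ = 1, µ = 3, θ = 1, not part of the thesis) assembles a truncated block-tridiagonal generator Q from the blocks in (1.7) and solves (1.1) directly:

```python
import numpy as np

lam, mu, theta = 1.0, 3.0, 1.0
K = 100   # truncation level (2 states per level)

Lm1 = np.array([[0.0, 0.0], [0.0, mu]])       # Lambda_{-1}
L0  = np.array([[-(lam + theta), theta],
                [0.0, -(lam + mu)]])          # Lambda_0
L1  = np.array([[lam, 0.0], [0.0, lam]])      # Lambda_1
L00 = -L1                                     # Lambda_0^{(0)}
Lm1_1 = np.array([[0.0, 0.0], [mu, 0.0]])     # Lambda_{-1}^{(1)}

# Block-tridiagonal generator on levels 0..K.
n = 2 * (K + 1)
Q = np.zeros((n, n))
Q[0:2, 0:2] = L00
Q[2:4, 0:2] = Lm1_1
for i in range(1, K + 1):
    Q[2*i:2*i+2, 2*i:2*i+2] = L0              # within level i
    Q[2*i-2:2*i, 2*i:2*i+2] = L1              # level i-1 -> i
    if i < K:
        Q[2*i+2:2*i+4, 2*i:2*i+2] = Lm1       # level i+1 -> i

# Make the truncated last level reflecting (rows must sum to zero).
Q[-2:, -2:] -= np.diag(Q[-2:, :].sum(axis=1))

# Solve p Q = 0 with normalization.
A = Q.copy(); A[:, -1] = 1.0
b = np.zeros(n); b[-1] = 1.0
p = np.linalg.solve(A.T, b)

print(p[1])    # p(0, 1) ≈ 0: state (0, 1) is unreachable
print(p[:4])   # equilibrium probabilities of levels 0 and 1
```

Such a truncation works here because ρ < 1 makes the tail probabilities decay geometrically; the two methods that follow avoid truncation altogether.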
We first simplify the balance equations (1.7b) by eliminating the vector p_{i+1}. By equating the flow from level i to level i + 1 to the flow from level i + 1 to level i, we obtain
$$
\lambda \bigl( p(i,0) + p(i,1) \bigr) = \mu p(i+1,1), \tag{1.8}
$$

or, in vector-matrix notation,
$$
p_i \Lambda^* = p_{i+1} \Lambda_{-1}, \tag{1.9}
$$

where

$$
\Lambda^* = \begin{pmatrix} 0 & \lambda \\ 0 & \lambda \end{pmatrix}. \tag{1.10}
$$

Substituting relation (1.9) into (1.7b) produces

$$
p_{i-1} \Lambda_1 + p_i \bigl( \Lambda_0 + \Lambda^* \bigr) = 0, \quad i \ge 1, \tag{1.11}
$$
which allows us to express p_i in terms of p_{i-1}:

$$
p_i = -p_{i-1} \Lambda_1 \bigl( \Lambda_0 + \Lambda^* \bigr)^{-1} = p_{i-1} R, \tag{1.12}
$$

where

$$
R := -\Lambda_1 \bigl( \Lambda_0 + \Lambda^* \bigr)^{-1}
   = \begin{pmatrix} \frac{\lambda}{\lambda+\theta} & \frac{\lambda}{\mu} \\[2pt] 0 & \frac{\lambda}{\mu} \end{pmatrix}. \tag{1.13}
$$

Iterating (1.12) leads to the matrix-geometric solution
$$
p_i = p_0 R^i, \quad i \ge 0. \tag{1.14}
$$

Notice that this is very similar to the solution for the simple queueing model, which is p(i) = p(0)ρ^i, i ≥ 0. Finally, p_0 follows from the equations (1.7a) and the normalization condition
$$
\sum_{(i,j) \in \mathcal{S}} p(i,j) = p_0 (I - R)^{-1} \mathbf{1} = 1. \tag{1.15}
$$

The matrix R is critical in the matrix-geometric approach. It is called the rate matrix and has an interesting (and useful) probabilistic interpretation. Element (R)_{j,k} is the expected time spent in state (i + 1, k), multiplied by −(Λ_0)_{j,j}, before the first transition to a state in level i, given the initial state (i, j). This immediately implies that rows of zeros in Λ_1 lead to rows of zeros in R.

In Chapter 2 we study a queueing system with multiple servers and jobs that require a longer service when they have to wait. To determine the equilibrium distribution of the Markov process associated with this queueing system, we use the matrix-geometric approach. The interpretation of the elements of R and the structure present in the transition rate diagram simplifies the analysis considerably and allows for an exact determination of R. We take the analysis a step further by developing a novel method for determining the equilibrium probabilities for the boundary levels, i.e., the levels close to the origin that exhibit different transition behavior.

We now demonstrate the spectral expansion method. This method first seeks solutions of the equations (1.7b) of the simple form
$$
p_i = x^i y, \quad i \ge 0, \tag{1.16}
$$

where $y = \begin{pmatrix} y(0) & y(1) \end{pmatrix}$ is a non-zero vector and |x| < 1. The latter is required, since we want to be able to normalize the solution afterwards. Substituting this form into (1.7b) and dividing by common powers of x gives

$$
y \bigl( \Lambda_1 + x \Lambda_0 + x^2 \Lambda_{-1} \bigr) = 0. \tag{1.17}
$$

So, the desired values of x are the roots inside the unit circle of the determinant equation
$$
\det \bigl( \Lambda_1 + x \Lambda_0 + x^2 \Lambda_{-1} \bigr) = 0. \tag{1.18}
$$

In this case we have
$$
\det \bigl( \Lambda_1 + x \Lambda_0 + x^2 \Lambda_{-1} \bigr) = \bigl( \lambda - (\lambda + \theta) x \bigr)(\mu x - \lambda)(x - 1) = 0. \tag{1.19}
$$

From this determinant equation we see that the roots with |x| < 1 are

$$
x_1 = \frac{\lambda}{\lambda + \theta}, \quad x_2 = \frac{\lambda}{\mu}. \tag{1.20}
$$
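Both methods can be checked numerically in a few lines. The sketch below (my own illustration, with arbitrary rates λ = 1, µ = 3, θ = 1, not part of the thesis) computes R from (1.13), confirms that the matrix-geometric solution (1.14) satisfies the interior balance equations (1.7b) (equivalently, that Λ₁ + RΛ₀ + R²Λ₋₁ = 0), and verifies that the determinant (1.19) vanishes at the roots (1.20):

```python
import numpy as np

lam, mu, theta = 1.0, 3.0, 1.0   # arbitrary rates with lam < mu

Lm1 = np.array([[0.0, 0.0], [0.0, mu]])       # Lambda_{-1}
L0  = np.array([[-(lam + theta), theta],
                [0.0, -(lam + mu)]])          # Lambda_0
L1  = np.array([[lam, 0.0], [0.0, lam]])      # Lambda_1
Lst = np.array([[0.0, lam], [0.0, lam]])      # Lambda^*

# Rate matrix, cf. (1.13): R = -Lambda_1 (Lambda_0 + Lambda^*)^{-1}.
R = -L1 @ np.linalg.inv(L0 + Lst)
print(R)   # equals [[lam/(lam+theta), lam/mu], [0, lam/mu]]

# p_i = p_0 R^i satisfies (1.7b) for every p_0 precisely when
# Lambda_1 + R Lambda_0 + R^2 Lambda_{-1} = 0.
print(np.max(np.abs(L1 + R @ L0 + R @ R @ Lm1)))   # ≈ 0

# The determinant (1.19) vanishes at the roots (1.20).
def det_poly(x):
    return np.linalg.det(L1 + x * L0 + x**2 * Lm1)

for x in (lam / (lam + theta), lam / mu):
    print(det_poly(x))   # ≈ 0
```

With these rates the roots are x₁ = 1/2 and x₂ = 1/3, which coincide with the diagonal entries of the triangular matrix R computed above, illustrating how the spectral expansion and matrix-geometric solutions describe the same geometric decay.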
For k = 1, 2, let yk be the non-zero solution of