Controlled Queueing Systems with Heterogeneous Servers

Dmitri Efrosinin
University of Trier, Germany
2004

Contents

1 Introduction
  1.1 Queues and dynamic control
  1.2 Queueing system with heterogeneous servers
  1.3 Monotonicity properties of optimal solutions
  1.4 Numerical methods
  1.5 Outline of the thesis
2 Controlled M/M/K/B-K queue
  2.1 Literature overview and chapter organization
  2.2 Model description
  2.3 Problem formulation
  2.4 Number of jobs minimization (NJM-problem)
    2.4.1 Quantity functional
    2.4.2 Optimality equation
    2.4.3 Transformation of optimality equation
    2.4.4 Monotonicity properties of optimal policies
    2.4.5 Assignment to the fastest available server
    2.4.6 Submodularity of the value function
    2.4.7 Threshold phenomenon of optimal policy. Threshold function for NJM-problem
  2.5 Processing cost minimization (PCM-problem)
    2.5.1 Quantity functional
    2.5.2 Optimality equation
    2.5.3 Monotonicity properties of optimal policies. Two types of optimal policy structure for PCM-problem
    2.5.4 Assignment to the server with the lowest mean usage cost
    2.5.5 Submodularity of the value function
    2.5.6 Threshold function for PCM-problem
    2.5.7 Optimality of threshold function
    2.5.8 Two-level threshold function for PCM-problem
  2.6 Algorithm
    2.6.1 Policy-iteration algorithm description
    2.6.2 Value-iteration algorithm description
    2.6.3 Routine
  2.7 Numerical analysis
    2.7.1 Dependence of thresholds on the state of slow servers
    2.7.2 Numerical examples. NJM-problem. Threshold function
    2.7.3 Numerical examples. PCM-problem. Threshold function
    2.7.4 Numerical examples. PCM-problem. Two-level function
  2.8 Conclusions
  2.9 Appendices
    2.9.1 Threshold functions for NJM-problem
    2.9.2 3-dimensional diagrams of threshold function for PCM-problem
    2.9.3 2-dimensional diagrams of threshold function for PCM-problem
    2.9.4 Two-level threshold function for PCM-problem
3 Arrivals with modulating phases
  3.1 Introduction
  3.2 Problem description
  3.3 Controlled E/M/K/B-K queue
    3.3.1 Optimality equation
    3.3.2 Monotonicity properties. Dependence on modulating phases
    3.3.3 Algorithm
    3.3.4 Numerical analysis
  3.4 Controlled PH/M/K/B-K queue
    3.4.1 Optimality equation
    3.4.2 Monotonicity properties. Dependence on modulating phases
    3.4.3 Algorithm
    3.4.4 Numerical analysis
  3.5 Controlled MAP/M/K/B-K queue
    3.5.1 Optimality equation
    3.5.2 Monotonicity properties. Dependence on modulating phases
    3.5.3 Algorithm
    3.5.4 Numerical analysis
  3.6 Conclusions
  3.7 Appendices
    3.7.1 E_m/M/K/B-K
    3.7.2 PH/M/K/B-K
    3.7.3 MMPP/M/K/B-K
    3.7.4 MAP/M/K/B-K
4 Service with phases
  4.1 Introduction
  4.2 Problem description
  4.3 Controlled M/E/K/B-K queue
    4.3.1 Optimality equation
    4.3.2 Monotonicity properties. Dependence on service phases
    4.3.3 Algorithm
    4.3.4 Numerical analysis
  4.4 Controlled M/PH/K/B-K queue
    4.4.1 Optimality equation
    4.4.2 Monotonicity properties. Dependence on service phases
    4.4.3 Algorithm
    4.4.4 Numerical analysis
  4.5 Conclusions
  4.6 Appendices
    4.6.1 M/E_{m_k}/K/B-K
    4.6.2 M/PH_het/K/B-K
5 Controlled MAP/PH/K/B-K queue
  5.1 Introduction
  5.2 Problem description
  5.3 Optimality equation
  5.4 Monotonicity properties
  5.5 Algorithm
  5.6 Numerical analysis
    5.6.1 E_{m_0}/E_{m_k}/K/B-K queue
    5.6.2 MAP/E_{m_k}/K/B-K queue
    5.6.3 PH/PH_het/K/B-K queue
    5.6.4 MAP/PH_het/K/B-K queue
  5.7 Conclusions
  5.8 Appendices
    5.8.1 E_{m_0}/E_{m_k}/K/B-K
    5.8.2 MAP/E_{m_k}/K/B-K
    5.8.3 PH/PH_het/K/B-K
    5.8.4 MAP/PH_het/K/B-K
Bibliography

Chapter 1

Introduction

1.1 Queues and dynamic control

Many real-life phenomena, such as computer systems, communication networks, manufacturing systems, supermarket checkout lines, as well as structural military systems, can be represented by means of queueing models. Queueing systems have their origin in the study of design problems for automatic telephone exchanges and were first analyzed by A.K. Erlang in the early 1900s. At that time, a main performance question in designing telephone systems was to determine how many lines had to be supplied in order to guarantee a certain grade of service. Similar questions arise in many other settings, e.g. when asking for the maximum number of terminals in a computer system that keeps the message loss probability below a prespecified level, or when looking at the impact of service speed on waiting lines, and so on. Due to economic constraints, it is impossible to satisfy people's demands for quality of service (QoS) by simply increasing resources without limit. In essence, the question that queueing theory faces in practice is how to balance improvement of QoS against acceptable economic overhead, i.e. to find the right trade-off between a gain in quality, achievable by supplying more resources, and the corresponding economic cost.

By studying mathematical models that reveal the probabilistic nature of real systems, queueing theory has succeeded in obtaining many analytical and numerical results for the key quantities which characterize the performance behavior of systems. Such key quantities are, for example, queue length and waiting time distributions, loss probabilities, mean values of sojourn times and throughput, etc.
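The grade-of-service question mentioned above has a classical closed-form answer: in a loss system with c lines and offered load a Erlangs, the blocking probability is given by the Erlang B formula, which can be evaluated with a numerically stable recurrence. A minimal sketch (the 1% target and the load of 5 Erlangs are illustrative assumptions, not values from this thesis):

```python
def erlang_b(lines: int, offered_load: float) -> float:
    """Erlang B blocking probability via the stable recurrence
    B(0, a) = 1,  B(c, a) = a*B(c-1, a) / (c + a*B(c-1, a))."""
    b = 1.0
    for c in range(1, lines + 1):
        b = offered_load * b / (c + offered_load * b)
    return b

# How many lines are needed to keep blocking below 1% at 5 Erlangs?
a, n = 5.0, 1
while erlang_b(n, a) > 0.01:
    n += 1
print(n, round(erlang_b(n, a), 4))  # → 11 0.0083
```

Dimensioning then amounts to the smallest c with B(c, a) below the target grade of service, exactly the trade-off between resources and QoS described above.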
In classical queueing theory, the corresponding models do not incorporate facilities (controllers) that allow one to pursue different strategies, possibly based upon state-dependent decisions. This is in spite of the fact that man-made real systems usually have to be dynamically controlled during operation. Looking at queueing models, a controller may considerably improve the system's performance by reducing queue lengths, increasing the throughput, or diminishing the overhead, whereas in the absence of a controller the system behavior may get quite erratic, exhibiting periods of high load and long queues followed by periods during which the servers remain idle. Control is performed by adequate actions that can be described in mathematical terms and subjected to the determination of optimal control strategies. Thus, by incorporating such aspects into queueing theory, its field is broadened and may be classified as a branch of optimization theory. As such it forms the area of interest and the subject of the present thesis. The theoretical foundations of controlled queueing systems are laid in the theory of Markov, semi-Markov and semi-regenerative decision processes [31, 39, 49, 57, 58, 61, 64, 70].

[Figure 1.1: schematic of a sequential decision-making model]

Bellman, in the early 1950s, was one of the first to investigate decision processes; he developed computational methods for analyzing sequential decision processes with finite planning horizon. His ideas were generalized by Howard (1960), who used elements from Markov chain theory and dynamic programming to develop a policy-iteration algorithm that provides the solution of sequential probabilistic decision processes with infinite planning horizon. A typical sequential decision-making model can symbolically be represented as in Figure 1.1. At a specified point of time, which we refer to as the present decision epoch, a decision maker or controller observes the state of a system.
Based on this state, the controller chooses a control (performs a decision). The control choice produces two results: the controller incurs an immediate cost, and the system, according to a probability distribution determined by the chosen control, evolves to a new state at the subsequent point of time, termed the next decision epoch. At the next decision epoch, the controller faces a similar situation. Different states, in general, determine different sets of possible controls. As such a process evolves in time, the controller incurs a sequence of costs or, as is sometimes said, a sequence of negative rewards. The key components of such a sequential decision model are the following:

  • A set of decision epochs
  • A set E of system states
  • A set A of available controls
  • A set of state- and control-dependent immediate costs
  • A set of state- and control-dependent transition probabilities

All these sets are assumed to be known to the controller at any decision epoch.

Definition 1.2 A strategy is a prescription telling the controller out of which (state-dependent) set of controls a choice has to be made at any future time epoch.

Definition 1.3 A policy specifies the control to be chosen at a particular point in time. A policy may depend on the present state alone, or on that state and all previous states and controls.
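These components can be made concrete in a small computational sketch. The fragment below is illustrative only and not the thesis model: it routes Poisson arrivals to one of two heterogeneous exponential servers, each with its own truncated queue, and runs relative value iteration under uniformization to minimize the long-run average number of jobs in the system. The rates, the capacity, and the two-queue routing variant are all assumptions made for the example.

```python
# Illustrative sketch, not the thesis model: arrivals at rate LAM are
# routed on arrival to server 1 or server 2 (service rates MU1 > MU2),
# each with its own queue truncated at CAP jobs. States are (i, j);
# the cost per unit time is the number of jobs in the system.

LAM, MU1, MU2, CAP = 1.0, 1.2, 0.4, 12
GAMMA = LAM + MU1 + MU2                      # uniformization constant

def value_iteration(iters=3000):
    V = {(i, j): 0.0 for i in range(CAP + 1) for j in range(CAP + 1)}
    for _ in range(iters):
        W = {}
        for i, j in V:
            # control at an arrival epoch: join queue 1 or queue 2
            arrive = min(V[min(i + 1, CAP), j], V[i, min(j + 1, CAP)])
            # departures; max(., 0) realizes the fictitious self-loop
            # of uniformization when the corresponding server is idle
            d1 = V[max(i - 1, 0), j]
            d2 = V[i, max(j - 1, 0)]
            W[i, j] = i + j + (LAM * arrive + MU1 * d1 + MU2 * d2) / GAMMA
        ref = W[0, 0]                        # relative VI: keep values bounded
        V = {s: w - ref for s, w in W.items()}
    return V

V = value_iteration()

def route(i, j):
    """Stationary policy read off from the value function."""
    return 1 if V[min(i + 1, CAP), j] <= V[i, min(j + 1, CAP)] else 2
```

For these rates the computed policy sends a job to the faster server when both are lightly loaded and diverts to the slower one only once queue 1 grows long, i.e. it exhibits the threshold structure whose rigorous analysis, for the common-queue M/M/K/B-K model, is the subject of Chapter 2.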
