Resource Allocation Solutions for Reducing Delay in Distributed Computing Systems
(Thesis Proposal)

Takayuki Osogami
Department of Computer Science, Carnegie Mellon University
[email protected]
April, 2004

Committee members:
Mor Harchol-Balter (Chair)
Hui Zhang
Bruce Maggs
Alan Scheller-Wolf (Tepper School of Business)
Mark Squillante (IBM Research)

1 Introduction

Waiting time (delay) is a source of frustration for users who receive service via computer or communication systems. This frustration can result in lost revenue, e.g., when a customer leaves a commercial web site to shop at a competitor's site. One obvious way to decrease delay is simply to buy (more expensive) faster machines. However, we can also decrease delay at no extra cost by making more efficient use of the given resources and by better scheduling of jobs (i.e., by changing the order in which jobs are processed).

For single-server systems, it is well understood how to minimize mean delay, namely by the shortest remaining processing time first (SRPT) scheduling policy. SRPT can provide mean delay an order of magnitude smaller than a naive first-come-first-served (FCFS) scheduling policy. Also, the mean delay under various scheduling policies, including SRPT and FCFS, can be easily analyzed for a relatively broad class of single-server systems (M/GI/1 queues). However, utilizing the full potential computing power of multiserver systems and analyzing their performance are much harder problems than in the single-server case. Despite the ubiquity of multiserver systems, it is not known how we should assign jobs to servers, or how we should schedule jobs within each server, to minimize the mean delay. Nor is it well understood how to evaluate various assignment and scheduling policies for multiserver systems. In this thesis, we provide partial answers to these questions.

1.1 Multiserver architectures

In this thesis, we seek to minimize delay in distributed computing systems (multiserver architectures). Figure 1 shows four common models of multiserver architectures that we consider in this thesis.

Figure 1: Four models of distributed computing systems that we consider in this thesis: (a) NOW, (b) server farm with distributed queues, (c) server farm with a central queue, (d) servers with affinities.

• (a) Network of workstations (NOW): There are n workstations, and each workstation owns a queue of jobs. Each workstation usually processes its own jobs, but we also allow some workstations to help others (i.e., some workstations can process jobs from other workstations' queues). Examples of NOWs include local area networks in universities and companies.

• (b) Server farm with distributed queues: There are n servers, and jobs arriving from outside the server farm are immediately dispatched to one of the servers. Examples of server farms with distributed queues include high-volume web servers.

• (c) Server farm with a central queue: There are n servers and one central queue. Here, jobs arriving from outside the server farm wait in the central queue, and when one of the servers becomes available, a job is dispatched from the central queue to that server. Examples of server farms with a central queue include supercomputing centers.

• (d) Servers with affinities: There are n servers and m classes of jobs. Here, jobs typically have different affinities with different servers, i.e., a job may be processed more quickly on one server than on another. Examples of servers with affinities include multiprocessor systems, where cache affinity can significantly affect processing speed, and call centers, where people with different abilities serve different types of requests. For simplicity, we set n = 2 and m = 2 in the figure.

These models are not exhaustive, but they cover a wide range of distributed computing systems.
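To make the contrast between models (b) and (c) concrete, the following sketch compares the two server-farm designs under strong simplifying assumptions: Poisson arrivals, exponential service times, and uninformed random dispatch for the distributed-queue farm. It uses only textbook formulas (the Erlang-C formula for an M/M/k central queue and the M/M/1 formula for each distributed queue); the function names and parameter values are illustrative only.

```python
import math

def central_queue_response(lam, mu, k):
    """Mean response time for a server farm with a central queue (model (c)),
    treated as an M/M/k queue and evaluated with the Erlang-C formula.
    Assumes lam < k * mu."""
    rho = lam / (k * mu)
    a = lam / mu                                # offered load
    tail = (a ** k / math.factorial(k)) / (1.0 - rho)
    erlang_c = tail / (sum(a ** n / math.factorial(n) for n in range(k)) + tail)
    mean_wait = erlang_c / (k * mu - lam)       # mean time spent queueing
    return mean_wait + 1.0 / mu                 # plus mean service time

def distributed_queues_response(lam, mu, k):
    """Mean response time for a server farm with distributed queues (model (b))
    when each arrival is dispatched to one of k servers uniformly at random,
    so each server behaves as an independent M/M/1 queue."""
    return 1.0 / (mu - lam / k)

if __name__ == "__main__":
    mu, k = 1.0, 4                              # illustrative parameters
    for rho in (0.5, 0.8, 0.95):
        lam = rho * k * mu
        print(f"rho = {rho:.2f}   central queue: {central_queue_response(lam, mu, k):6.2f}   "
              f"distributed queues (random dispatch): {distributed_queues_response(lam, mu, k):6.2f}")
```

Under these assumptions the central queue always yields the lower mean response time, since no server can remain idle while jobs wait at another server; with more informed dispatching or non-exponential service times, the comparison becomes far less clear-cut.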
1.2 Where does delay come from

We start by asking where delay comes from. Long waiting times are experienced when the system load is high, i.e., when the average arrival rate (jobs per second), λ, is high relative to the average service rate (jobs per second), µ (see Figure 2).

Figure 2: (a) An M/M/1 queue, with jobs waiting in the queue and a job in service; (b) the mean waiting time in an M/M/1/FCFS queue as a function of system load, ρ = λ/µ, where λ is the average arrival rate (jobs per second) and µ = 1.0 is the average service rate (jobs per second).

Long waiting times are also experienced when utilization of system resources is poor. When system resources cannot be fully utilized, the effective average service rate (jobs per second), µ′, becomes smaller than the potential average service rate, µ. (For example, potential service capacity is eaten up by context switching time when switching from one type of job to another and by migration time when transferring jobs from one server to another.) As a result, λ can be high relative to µ′, which has the same effect as high load (λ high relative to µ), causing long waiting times. In fact, maximizing utilization does not necessarily minimize delay in distributed computing systems, and this makes the design of resource allocation mechanisms for such systems difficult. We will see later that there are situations where we want to keep some servers idle even in the presence of jobs in the queue, so that more important (e.g., short processing time) future arrivals can receive service immediately upon arrival.

High load and poor utilization are not the only causes of long waiting times; we can experience long waiting times even when the average system load is low (see Figure 3). The long waiting time at low load is primarily due to variability in the service demand and/or interarrival time, but other factors such as higher moments and correlation of the service demand and interarrival time can also increase the waiting time. Even if the long-run average load is not too high, fluctuation in the load can cause substantial delay, i.e., high instantaneous load can be problematic. In particular, variability and autocorrelation in interarrival times often cause fluctuations in load, and peak load and average load can differ by an order of magnitude.

Figure 3: The mean waiting time in (a) an M/G/1/FCFS queue and (b) a G/M/1/FCFS queue as a function of system load, ρ = λ/µ, where λ is the average arrival rate (jobs per second) and µ = 1.0 is the average service rate (jobs per second). The variability of the service demand is represented by the coefficient of variation, C_S = σ(S)/E[S], where E[S] denotes the mean service demand and σ(S) denotes its standard deviation; the variability of the interarrival time is represented by C_A, defined analogously. Curves are shown for squared coefficients of variation C_S^2 (in (a)) and C_A^2 (in (b)) equal to 1, 8, and 64. (In (b), the arrival process is assumed to be a batch Poisson process with geometric batch sizes.)
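The trends in Figures 2 and 3(a) can be reproduced directly from textbook formulas. The sketch below (illustrative only; the function names and parameter values are arbitrary) evaluates the M/M/1/FCFS mean waiting time, ρ/(µ(1 − ρ)), and the M/G/1/FCFS mean waiting time given by the Pollaczek-Khinchine formula, λE[S^2]/(2(1 − ρ)), rewritten in terms of the squared coefficient of variation C_S^2.

```python
def mm1_wait(rho, mu=1.0):
    """Mean waiting time in an M/M/1/FCFS queue (cf. Figure 2(b))."""
    return rho / (mu * (1.0 - rho))

def mg1_wait(rho, cs2, mu=1.0):
    """Mean waiting time in an M/G/1/FCFS queue via the Pollaczek-Khinchine
    formula E[W] = lam * E[S^2] / (2 * (1 - rho)), expressed through the
    squared coefficient of variation cs2 of the service demand (cf. Figure 3(a))."""
    mean_s = 1.0 / mu                            # E[S]
    second_moment = (cs2 + 1.0) * mean_s ** 2    # E[S^2] = (C_S^2 + 1) * E[S]^2
    lam = rho * mu
    return lam * second_moment / (2.0 * (1.0 - rho))

if __name__ == "__main__":
    for rho in (0.5, 0.8, 0.9, 0.95):
        parts = [f"rho = {rho:.2f}", f"M/M/1: {mm1_wait(rho):6.2f}"]
        for cs2 in (1, 8, 64):                   # the curves shown in Figure 3(a)
            parts.append(f"M/G/1 (C_S^2 = {cs2:2d}): {mg1_wait(rho, cs2):7.2f}")
        print("   ".join(parts))
```

At ρ = 0.9, for example, raising C_S^2 from 1 to 64 multiplies the mean waiting time by (64 + 1)/2 = 32.5, which is the kind of order-of-magnitude gap visible between the curves of Figure 3(a); the waiting time also blows up as ρ approaches 1, as in Figure 2(b).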
1.3 Brief summary of prior work on minimizing delay

We briefly summarize prior work on minimizing delay by reducing the impact of high load and service demand variability, which are the primary sources of delay. A more detailed literature review will be provided in later sections as needed.

1.3.1 Minimizing delay by combating high load

When a system is overloaded, that is, when the arrival rate is higher than the service rate, we need to either decrease the arrival rate, degrade the service quality, or increase the service rate to keep the mean waiting time low. Below we classify common approaches to combating high load into these three types: decreasing the arrival rate, degrading the service quality, and increasing the service rate.

One way to decrease the arrival rate is to reject some arrivals into the system; this approach is known as admission control and has been applied to various computer systems such as web servers [29, 30, 31, 103, 155, 158] and packet networks [81, 18, 19]. Admission control may be combined with a scheduling policy so that, rather than dropping arrivals at random, the scheduling policy determines which arrivals to drop [28]. Degrading the service quality during overload periods (e.g., by omitting pictures from web pages) is also popular at web servers and has been studied as content adaptation [1, 25]. An advantage of admission control and content adaptation is that they can be exercised within a single system.

When multiple systems are available, we can increase the service rate of an overloaded system by utilizing the resources of other systems. Load balancing and cycle stealing are two popular approaches that make use of multiple systems to mitigate the impact of overload. Load balancing mitigates the impact of overload in a system by sharing the load among many systems. It has been popular in networks of workstations (NOW) [21, 60] and implemented in systems such as MOSIX [11] and Utopia [173]. A disadvantage of load balancing is that load from an overloaded system can slow down other, lightly loaded systems.
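As a concrete illustration of the first approach, admission control can be modeled crudely as an M/M/1/K queue, in which an arrival that finds K jobs in the system is rejected. The sketch below uses the standard M/M/1/K formulas; the buffer sizes and load are illustrative values only, not taken from any of the systems cited above.

```python
def mm1k_metrics(lam, mu, K):
    """Admission control modeled as an M/M/1/K/FCFS queue: an arrival that
    finds K jobs already in the system is rejected. Returns the rejection
    probability and the mean waiting time of the admitted jobs."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        probs = [1.0 / (K + 1)] * (K + 1)          # uniform when rho = 1
    else:
        norm = (1.0 - rho ** (K + 1)) / (1.0 - rho)
        probs = [rho ** n / norm for n in range(K + 1)]
    p_reject = probs[K]
    mean_jobs = sum(n * p for n, p in enumerate(probs))
    lam_admitted = lam * (1.0 - p_reject)
    mean_response = mean_jobs / lam_admitted       # Little's law over admitted jobs
    return p_reject, mean_response - 1.0 / mu      # subtract mean service time

if __name__ == "__main__":
    lam, mu = 0.95, 1.0                            # illustrative heavy load (rho = 0.95)
    print(f"no admission control: mean wait = {lam / (mu * (mu - lam)):.2f}")
    for K in (5, 10, 20, 50):
        p_reject, mean_wait = mm1k_metrics(lam, mu, K)
        print(f"K = {K:2d}   P(reject) = {p_reject:.4f}   mean wait of admitted jobs = {mean_wait:5.2f}")
```

Capping the number of admitted jobs keeps the waiting time of admitted jobs modest even at ρ = 0.95, at the price of rejecting a fraction of arrivals; combining admission control with a scheduling policy, as in [28], lets the system choose which arrivals to drop rather than simply rejecting whichever job arrives when the buffer is full.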