Simulating Strict Priority Queueing, Weighted Round Robin, And
Total Pages: 16
File Type: PDF, Size: 1020 KB

Recommended publications
Queue Scheduling Disciplines
White Paper. Supporting Differentiated Service Classes: Queue Scheduling Disciplines. Chuck Semeria, Marketing Engineer, Juniper Networks, Inc., 1194 North Mathilda Avenue, Sunnyvale, CA 94089 USA, 408 745 2000 or 888 JUNIPER, www.juniper.net. Part Number: 200020-001, 12/01.
Contents: Executive Summary; Perspective; First-in, First-out (FIFO) Queuing (FIFO Benefits and Limitations; FIFO Implementations and Applications); Priority Queuing (PQ) (PQ Benefits and Limitations; PQ Implementations and Applications); Fair Queuing (FQ) (FQ Benefits and Limitations; FQ Implementations and Applications); Weighted Fair Queuing (WFQ) (WFQ Algorithm; WFQ Benefits and Limitations; Enhancements to WFQ; WFQ Implementations and Applications); Weighted Round Robin (WRR) or Class-based Queuing (CBQ) (WRR Queuing Algorithm; WRR Queuing Benefits and Limitations; WRR Implementations and Applications); Deficit Weighted Round Robin (DWRR) (DWRR Algorithm; DWRR Pseudo Code; DWRR Example; DWRR Benefits and Limitations; DWRR Implementations and Applications); Conclusion; References (Textbooks; Technical Papers; Seminars; Web Sites). Copyright © 2001, Juniper Networks, Inc.
List of Figures: Figure 1: First-in, First-out (FIFO) Queuing; Figure 2: Priority Queuing; Figure 3: Fair Queuing (FQ); Figure 4: Class-based Fair Queuing; Figure 5: A Weighted Bit-by-bit Round-robin Scheduler with a Packet Reassembler; Figure 6: Weighted Fair Queuing (WFQ)—Service According to Packet Finish Time; Figure 7: Weighted Round Robin (WRR) Queuing; Figure 8: WRR Queuing Is Fair with Fixed-length Packets; Figure 9: WRR Queuing Is Unfair with Variable-length Packets.
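The contents above list DWRR's algorithm and pseudo code without reproducing them, so the following Python fragment sketches a generic Deficit Weighted Round Robin dequeue loop (a per-queue quantum is added to a deficit counter each round, and packets are sent while they fit within the deficit). It is an illustration of the general technique, not the white paper's own pseudo code, and the queue names and quanta are invented for the example.

    from collections import deque

    class DWRRScheduler:
        """Generic Deficit Weighted Round Robin sketch (illustrative only)."""
        def __init__(self, quanta):
            # quanta: bytes of credit added to each queue's deficit counter per round
            self.quanta = quanta
            self.queues = {name: deque() for name in quanta}
            self.deficit = {name: 0 for name in quanta}

        def enqueue(self, queue_name, packet_len):
            self.queues[queue_name].append(packet_len)

        def next_round(self):
            """Visit each queue once; return the list of (queue, packet_len) sent."""
            sent = []
            for name, q in self.queues.items():
                if not q:
                    self.deficit[name] = 0          # an idle queue does not bank credit
                    continue
                self.deficit[name] += self.quanta[name]
                while q and q[0] <= self.deficit[name]:
                    pkt = q.popleft()
                    self.deficit[name] -= pkt
                    sent.append((name, pkt))
            return sent

    # Hypothetical classes with a 3:1 byte share
    sched = DWRRScheduler({"gold": 1500, "bronze": 500})
    for size in (1500, 1200, 60):
        sched.enqueue("gold", size)
        sched.enqueue("bronze", size)
    print(sched.next_round())

Run over several rounds, the byte counts served from the two queues approach the 3:1 ratio of their quanta even though packet lengths vary.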
Scheduling Algorithms
Scheduling in OQ architectures. Scheduling and QoS scheduling. Andrea Bianco, Telecommunication Network Group, [email protected], http://www.telematica.polito.it/. Politecnico di Torino, Computer Networks Design and Management.
Scheduling algorithms • Scheduling: choose a packet to transmit over a link among all packets stored in a given buffer (a multiplexing point: N inputs feeding one output buffer) • Mainly look at QoS scheduling algorithms – Choose the packet according to QoS needs.
Output buffered architecture • Advantage of OQ (Output Queued) architectures – All data immediately transferred to output buffers according to data destination – It is possible to run QoS scheduling algorithms independently for each output link • In other architectures, like IQ or CIOQ switches, problems become more complex – Scheduling to satisfy QoS requirements and scheduling to maximize the transfer of data from inputs to outputs have conflicting requirements.
QoS scheduling algorithms • Operate over multiplexing points • Micro or nanosecond scale • Easy enough to be implemented in hardware at high speed • Regulate interactions among flows • Single traffic relation (1 VP/1 VC) • Group of traffic relations (more VCs per VP, or more VCs with similar QoS needs) • QoS classes • Strictly related to and dependent on buffer management.
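As a minimal illustration of such a QoS scheduling decision at a multiplexing point, the sketch below implements strict priority dequeueing (always serve the highest-priority non-empty queue). The class names are invented for the example, and the code is not taken from these slides.

    from collections import deque

    # Queues listed from highest to lowest priority; the names are illustrative only.
    priority_queues = {"voice": deque(), "video": deque(), "best_effort": deque()}

    def enqueue(klass, packet):
        priority_queues[klass].append(packet)

    def dequeue_strict_priority():
        """Return the next packet to transmit, or None if all queues are empty."""
        for klass in ("voice", "video", "best_effort"):   # strict priority order
            if priority_queues[klass]:
                return priority_queues[klass].popleft()
        return None

    enqueue("best_effort", "pkt-A")
    enqueue("voice", "pkt-B")
    print(dequeue_strict_priority())   # pkt-B: the voice queue is always served first

Strict priority is the simplest of the disciplines covered in this collection; its well-known drawback is that lower-priority queues can starve when higher-priority traffic is persistent.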
Scheduling for Fairness: Fair Queueing and CSFQ
Copyright Hari Balakrishnan, 1998-2005, all rights reserved. Please do not redistribute without permission. LECTURE 8: Scheduling for Fairness: Fair Queueing and CSFQ. 8.1 Introduction. In the last lecture, we discussed two ways in which routers can help in congestion control: by signaling congestion, using either packet drops or marks (e.g., ECN), and by cleverly managing their buffers (e.g., using an active queue management scheme like RED). Today, we turn to the third way in which routers can assist in congestion control: scheduling. Specifically, we will first study a particular form of scheduling, called fair queueing (FQ), that achieves (weighted) fair allocation of bandwidth between flows (or between whatever traffic aggregates one chooses to distinguish between). While FQ achieves weighted-fair bandwidth allocations, it requires per-flow queueing and per-flow state in each router in the network. This is often prohibitively expensive. We will also study another scheme, called core-stateless fair queueing (CSFQ), which displays an interesting design where “edge” routers maintain per-flow state whereas “core” routers do not. They use their own non-per-flow state complemented with some information reaching them in packet headers (placed by the edge routers) to implement fair allocations. Strictly speaking, CSFQ is not a scheduling mechanism, but a way to achieve fair bandwidth allocation using differential packet dropping. But because it attempts to emulate fair queueing, it’s best to discuss the scheme in conjunction with fair queueing. The notes for this lecture are based on two papers: 1. A. Demers, S. Keshav, and S. Shenker, “Analysis and Simulation of a Fair Queueing Algorithm”, Internetworking: Research and Experience, Vol. …
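To make the differential-dropping idea concrete, here is a small Python sketch of the core-router decision in a CSFQ-like scheme: each packet carries a flow arrival-rate label stamped by an edge router, and the core drops it with probability max(0, 1 - fair_rate / label). The fair-rate estimation and label rewriting are omitted, and the numbers are invented, so this is a sketch of the principle rather than the paper's full algorithm.

    import random

    def csfq_core_accept(labeled_rate, fair_rate):
        """CSFQ-style core decision: drop with probability max(0, 1 - fair_rate / labeled_rate).

        labeled_rate: the flow's arrival rate (bits/s) carried in the packet header.
        fair_rate:    the core link's current estimate of the per-flow fair share (bits/s).
        """
        if labeled_rate <= fair_rate:
            return True                          # flows at or below the fair share are never dropped
        drop_prob = 1.0 - fair_rate / labeled_rate
        return random.random() >= drop_prob

    # Illustrative numbers: a 10 Mb/s flow crossing a core link whose fair share is 4 Mb/s
    accepted = sum(csfq_core_accept(10e6, 4e6) for _ in range(100_000))
    print(f"accepted about {accepted / 1000:.0f}% of packets (expect roughly 40%)")

The effect is that each flow's forwarded rate converges to roughly min(arrival rate, fair rate), which is what per-flow fair queueing would have delivered, without the core keeping per-flow state.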
University of Bradford eThesis
Performance Modelling and Analysis of Weighted Fair Queueing for Scheduling in Communication Networks: An Investigation into the Development of New Scheduling Algorithms for a Weighted Fair Queueing System with Finite Buffer. Item Type: Thesis. Author: Alsawaai, Amina S.M. Rights: The University of Bradford theses are licensed under a Creative Commons Licence (http://creativecommons.org/licenses/by-nc-nd/3.0/). Download date: 26/09/2021 15:01:09. Link to item: http://hdl.handle.net/10454/4877. This thesis is hosted in Bradford Scholars, the University of Bradford Open Access repository; visit the repository for full metadata or to contact the repository team. © University of Bradford. Amina Said Mohammed Alsawaai, submitted for the Degree of Doctor of Philosophy, Department of Computing, School of Computing, Informatics and Media, University of Bradford, 2010.
Abstract: Analytical modelling and characterization of Weighted Fair Queueing (WFQ) have recently received considerable attention from several researchers, since WFQ offers the minimum delay and an optimal fairness guarantee. However, all previous work on WFQ has focused on developing approximations of the scheduler with an infinite buffer because of supposed scalability problems in the WFQ computation.
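Since the thesis centres on WFQ, a compact sketch of the scheduler's core bookkeeping may help: each arriving packet is assigned a virtual finish time F = max(V, F_prev) + length / weight, and the packet with the smallest finish time is sent next. The sketch below uses a simplified virtual clock (it advances V to the finish time of each departing packet instead of tracking GPS virtual time exactly), so it only approximates WFQ for illustration and is not the model analysed in the thesis.

    import heapq

    class SimpleWFQ:
        """Simplified Weighted Fair Queueing sketch: smallest virtual finish time first."""
        def __init__(self, weights):
            self.weights = weights                     # e.g. {"flow_a": 2.0, "flow_b": 1.0}
            self.last_finish = {f: 0.0 for f in weights}
            self.virtual_time = 0.0
            self.heap = []                             # (finish_time, flow, packet_len)

        def arrive(self, flow, packet_len):
            start = max(self.virtual_time, self.last_finish[flow])
            finish = start + packet_len / self.weights[flow]
            self.last_finish[flow] = finish
            heapq.heappush(self.heap, (finish, flow, packet_len))

        def send(self):
            if not self.heap:
                return None
            finish, flow, packet_len = heapq.heappop(self.heap)
            self.virtual_time = finish                 # coarse stand-in for GPS virtual time
            return flow, packet_len

    wfq = SimpleWFQ({"flow_a": 2.0, "flow_b": 1.0})
    for _ in range(3):
        wfq.arrive("flow_a", 1000)
        wfq.arrive("flow_b", 1000)
    print([wfq.send()[0] for _ in range(6)])   # flow_a, with twice the weight, drains faster

A finite-buffer variant, which is the thesis's subject, would additionally need a drop or pushout decision in arrive() when the shared buffer is full.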
QoS in IEEE 802.11-Based Wireless Networks: A Contemporary Survey. Aqsa Malik, Junaid Qadir, Basharat Ahmad, Kok-Lim Alvin Yau, Ubaid Ullah
QoS in IEEE 802.11-based Wireless Networks: A Contemporary Survey. Aqsa Malik, Junaid Qadir, Basharat Ahmad, Kok-Lim Alvin Yau, Ubaid Ullah.
Abstract—Apart from mobile cellular networks, IEEE 802.11-based wireless local area networks (WLANs) represent the most widely deployed wireless networking technology. With the migration of critical applications onto data networks, and the emergence of multimedia applications such as digital audio/video and multimedia games, the success of IEEE 802.11 depends critically on its ability to provide quality of service (QoS). A lot of research has focused on equipping IEEE 802.11 WLANs with features to support QoS. In this survey, we provide an overview of these techniques. We discuss the QoS features incorporated by the IEEE 802.11 standard at both physical (PHY) and media access control (MAC) layers, as well as other higher-layer proposals. We also focus on how the new architectural developments of software-defined networking (SDN) and cloud networking can be used to facilitate QoS provisioning in IEEE 802.11-based networks.
… requirements at a lower cost. In IEEE 802.11 WLANs, the error and interference prone nature of wireless medium—due to fading and multipath effects [2]—makes QoS provisioning even more challenging. The combination of best-effort routing, datagram routing, and an unreliable wireless medium, makes the task of QoS provisioning in IEEE 802.11 WLANs very challenging. In this survey, we provide a focused overview of work done to ensure QoS in the IEEE 802.11 standard. We have the following three goals: (i) to provide a self-contained introduction to the QoS features embedded in the IEEE 802.11 standard; (ii) to provide a layer-wise description and …
Packet Scheduling in Wireless Systems by Token Bank Fair Queueing Algorithm
Packet Scheduling in Wireless Systems by Token Bank Fair Queueing Algorithm, by William King Wong. B.A.Sc., The University of Western Ontario, 1991; M.A.Sc., The University of Western Ontario, 1993; M.Sc., The University of Western Ontario, 1994. A thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy in the Faculty of Graduate Studies (Electrical and Computer Engineering), The University of British Columbia, July 2005. © William King Wong, 2005.
ABSTRACT: The amalgamation of wireless systems is a vision for the next generation wireless systems where multimedia applications will dominate. One of the challenges for the operators of next generation networks is the delivery of quality-of-service (QoS) to the end users. Wireless scheduling algorithms play a critical and significant role in the user perception of QoS. However, the scarcity of wireless resources and the instability of the wireless channel present a challenge to providing QoS. To that end, this thesis proposes a novel scheduling algorithm, Token Bank Fair Queuing (TBFQ), that is simple to implement and has a soft tolerance characteristic which is suitable under wireless constraints. TBFQ combines a token-based policing mechanism with a fair throughput prioritization rule in scheduling packets over packet-switched wireless systems. This results in a simple design that nevertheless achieves better performance and fairness than existing wireless scheduling strategies. The characterization of the algorithm, which leads to its tight performance bounds, is conducted through numerical analysis that recursively determines the steady-state probability distribution of the bandwidth allocation process and ultimately the probability distribution of the input queue.
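The abstract does not reproduce the TBFQ algorithm itself, but the token-based policing idea it builds on can be sketched briefly. The fragment below is a plain token-bucket policer (tokens accrue at rate r up to a burst size b, and a packet conforms if enough tokens are available); it is offered only as an illustration of that building block, not as the thesis's TBFQ, and the parameter values are invented.

    import time

    class TokenBucket:
        """Plain token-bucket policer: a generic building block, not the thesis's TBFQ."""
        def __init__(self, rate_bytes_per_s, burst_bytes):
            self.rate = rate_bytes_per_s
            self.burst = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def conforms(self, packet_len):
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_len <= self.tokens:
                self.tokens -= packet_len     # in-profile packet: spend tokens
                return True
            return False                      # out-of-profile: mark, delay, or drop

    # Hypothetical flow policed at 125 kB/s with a 10 kB burst allowance
    bucket = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=10_000)
    print(bucket.conforms(1500))

Per the abstract, TBFQ combines this kind of per-flow token accounting with a fair throughput prioritization rule; the sketch shows only the policing half.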
Lottery Scheduling and Dominant Resource Fairness (Lecture 24, Cs262a)
Lottery Scheduling and Dominant Resource Fairness (Lecture 24, cs262a). Ion Stoica, UC Berkeley, November 7, 2016.
Today's Papers: Lottery Scheduling: Flexible Proportional-Share Resource Management, Carl Waldspurger and William Weihl, OSDI'94 (www.usenix.org/publications/library/proceedings/osdi/full_papers/waldspurger.pdf); Dominant Resource Fairness: Fair Allocation of Multiple Resource Types, Ali Ghodsi, Matei Zaharia, Benjamin Hindman, Andy Konwinski, Scott Shenker, Ion Stoica, NSDI'11 (https://www.cs.berkeley.edu/~alig/papers/drf.pdf).
What do we want from a scheduler? Isolation: have some sort of guarantee that misbehaved processes cannot affect me “too much”. Efficient resource usage: the resource is not idle while there is a process whose demand is not fully satisfied. Flexibility: can express some sort of priorities, e.g., strict or time-based.
Single Resource: Fair Sharing. n users want to share a resource (e.g. CPU) • Solution: give each 1/n of the shared resource. Generalized by max-min fairness • Handles the case where a user wants less than its fair share • E.g. user 1 wants no more than 20%. Generalized by weighted max-min fairness • Give weights to users according to importance • User 1 gets weight 1, user 2 weight 2.
Why Max-Min Fairness? Weighted Fair Sharing / Proportional Shares • User 1 gets weight 2, user 2 weight 1. Priorities • Give user 1 weight 1000, user 2 weight 1. Reservations • Ensure user 1 gets 10% of a resource • Give user 1 weight 10, sum weights ≤ 100. Deadline-based scheduling • Given a user job's demand and deadline, compute the user's reservation/weight. Isolation • Users cannot affect others beyond their share.
Widely Used. OS: proportional sharing, lottery, Linux's cfs, … Networking: wfq, wf2q, sfq, drr, csfq, …
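For the first of the two papers, the core mechanism is easy to sketch: each client holds some number of tickets, and each scheduling decision draws a ticket uniformly at random, so a client's expected share of the CPU equals its ticket fraction. The snippet below is a minimal illustration of that idea with made-up client names and ticket counts, not the full system from the OSDI '94 paper.

    import random

    def lottery_pick(tickets):
        """Pick a client with probability proportional to its ticket count."""
        total = sum(tickets.values())
        winner = random.randrange(total)          # draw the winning ticket number
        for client, count in tickets.items():
            if winner < count:
                return client
            winner -= count

    # Hypothetical clients: A holds 75% of the tickets, B holds 25%
    tickets = {"A": 750, "B": 250}
    draws = [lottery_pick(tickets) for _ in range(10_000)]
    print(draws.count("A") / len(draws))          # roughly 0.75

Because each draw is memoryless, lottery scheduling is probabilistically fair over many quanta, and priorities, reservations, and isolation (the properties listed above) reduce to choices of ticket counts.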
Multi-Queue Fair Queuing Mohammad Hedayati, University of Rochester; Kai Shen, Google; Michael L
Multi-Queue Fair Queuing. Mohammad Hedayati, University of Rochester; Kai Shen, Google; Michael L. Scott, University of Rochester; Mike Marty, Google. https://www.usenix.org/conference/atc19/presentation/hedayati-queue. This paper is included in the Proceedings of the 2019 USENIX Annual Technical Conference, July 10–12, 2019, Renton, WA, USA. ISBN 978-1-939133-03-8. Open access to the Proceedings of the 2019 USENIX Annual Technical Conference is sponsored by USENIX.
Abstract: Modern high-speed devices (e.g., network adapters, storage, accelerators) use new host interfaces, which expose multiple software queues directly to the device. These multi-queue interfaces allow mutually distrusting applications to access the device without any cross-core interaction, enabling throughput in the order of millions of IOP/s on multicore systems. Unfortunately, while independent device access is scalable, it also introduces a new problem: unfairness. Mechanisms that were used to provide fairness for older devices are no longer tenable in the wake of multi-queue design, and straightforward attempts to re-introduce it would require cross-core …
… ric within a few microseconds. GPUs and machine learning accelerators may offload computations that run just a few microseconds at a time [30]. At the same time, the proliferation of multicore processors has necessitated architectures tuned for independent I/O across multiple hardware threads [4, 36]. These technological changes have shifted performance bottlenecks from hardware resources to the software stacks that manage them. In response, it is now common to adopt a multi-queue architecture in which each hardware thread owns a dedicated I/O queue, directly exposed to the device, giving it an independent path over which to send and receive requests.
Improved Waiting Time of Tasks Scheduled Under Preemptive Round Robin
Improving Waiting Time of Tasks Scheduled Under Preemptive Round Robin Using Changeable Time Quantum. Saad Zagloul Rida*, Safwat Helmy Hamad**, Samih Mohemmed Mostafa***. *Faculty of Science, Mathematics Department, South Valley University, Egypt, Quena. **Faculty of Computer and Information Sciences, Ain Shams University, Egypt, Cairo. ***Faculty of Science, Computer Science Department, South Valley University, Egypt, Quena.
ABSTRACT: Minimizing the waiting time of tasks queued for execution is one of the important scheduling criteria and has received wide attention in the scheduling of preemptive tasks. In this paper we present the Changeable Time Quantum (CTQ) approach combined with the round-robin algorithm: we adjust the time quantum according to the burst times of the tasks in the ready queue. There are two important benefits of using the CTQ approach: minimizing the average waiting time of the tasks, and consequently the average turnaround time, and keeping the number of context switches as low as possible, consequently minimizing the scheduling overhead. In this paper, we consider the scheduling problem for preemptive tasks, where the time costs of these tasks are known a priori. Our experimental results demonstrate that CTQ can provide much lower scheduling overhead and better scheduling criteria. General Terms: Algorithm, round-robin, CPU length. Keywords: Processor sharing, proportional sharing, residual time, survived tasks, cyclic queue.
1. INTRODUCTION: Modern operating systems (OS) have become more complex than ever before. They have evolved from a single-task, single-user architecture to a multitasking environment in which tasks run in a concurrent manner. Allocating the CPU to a task requires careful attention to assure fairness and avoid starving tasks of the CPU.
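The abstract states only that the quantum is adjusted from the burst times of the tasks in the ready queue, without giving the exact rule. The sketch below therefore assumes one plausible rule (quantum = mean remaining burst time of the queued tasks) purely for illustration: it is a round-robin simulation with a changeable quantum, not the paper's actual CTQ algorithm.

    from collections import deque

    def round_robin_changeable_quantum(burst_times):
        """Simulate round robin where the quantum is recomputed every cycle.

        Assumed rule (illustrative only): quantum = mean remaining burst time
        of the tasks currently in the ready queue. All tasks arrive at time 0.
        Returns each task's waiting time, indexed like burst_times.
        """
        remaining = list(burst_times)
        ready = deque(range(len(burst_times)))
        clock = 0
        finish = [0] * len(burst_times)
        while ready:
            quantum = max(1, round(sum(remaining[i] for i in ready) / len(ready)))
            for _ in range(len(ready)):            # one full cycle over the current queue
                i = ready.popleft()
                run = min(quantum, remaining[i])
                clock += run
                remaining[i] -= run
                if remaining[i] == 0:
                    finish[i] = clock
                else:
                    ready.append(i)
        return [finish[i] - burst_times[i] for i in range(len(burst_times))]

    # Hypothetical workload: CPU bursts of 24, 3 and 3 time units
    print(round_robin_changeable_quantum([24, 3, 3]))   # per-task waiting times

With a fixed small quantum the same workload would incur many more context switches; recomputing the quantum from the queue contents is the trade-off the paper studies.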
Weighted Fair Scheduling Algorithm for QoS of Input-Queued Switches
Weighted Fair Scheduling Algorithm for QoS of Input-Queued Switches. Sang-Ho Lee (1), Dong-Ryeol Shin (2), and Hee Yong Youn (2). (1) Samsung Electronics, System LSI Division, Korea, [email protected], http://nova.skku.ac.kr. (2) Sungkyunkwan University, Information and Communication Engineering, Korea, {drshin, youn}@ece.skku.ac.kr.
Abstract. High-speed networks usually deal with two main issues. The first is fast switching to obtain good throughput; at present, state-of-the-art switches employ an input-queued architecture to achieve high throughput. The second is providing QoS guarantees for a wide range of applications, which is generally addressed in output-queued switches. For these two requirements, many scheduling mechanisms have been proposed to support both better throughput and QoS guarantees in high-speed switching networks. In this paper, we present a scheduling algorithm for providing QoS guarantees and higher throughput in an input-queued switch. The proposed algorithm, called Weighted Fair Matching (WFM), provides QoS guarantees without output queues: WFM is a flow-based algorithm that achieves asymptotically 100% throughput with no speedup while providing QoS. Keywords: scheduling algorithm, input queued switch, QoS.
1 Introduction. The input-queued switch overcomes the scalability problem that occurs in output-queued switches. However, it is well known that an input-queued switch with a FIFO queue at each input port suffers from the Head-of-Line (HOL) blocking problem, which limits the throughput to 58% [1]. Many algorithms have been suggested to improve the throughput. To overcome the performance reduction due to HOL blocking, most proposed input-queued switches maintain separate queues, called Virtual Output Queues (VOQs), for the different output ports at each input port.
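As a small illustration of the VOQ idea mentioned at the end of this excerpt (one queue per output port at every input port, so a blocked head-of-line packet for one output cannot stall traffic headed to another), here is a minimal data-structure sketch. It only shows the queue layout and the set of eligible (input, output) pairs that a matching algorithm such as WFM would choose among; the matching itself, which is the paper's contribution, is not shown.

    from collections import deque

    class VOQSwitch:
        """N x N input-queued switch with Virtual Output Queues (one per output at each input)."""
        def __init__(self, n_ports):
            self.n = n_ports
            self.voq = [[deque() for _ in range(n_ports)] for _ in range(n_ports)]

        def enqueue(self, in_port, out_port, packet):
            self.voq[in_port][out_port].append(packet)

        def requests(self):
            """(input, output) pairs with a packet waiting; a matcher picks a conflict-free subset."""
            return [(i, o) for i in range(self.n)
                           for o in range(self.n) if self.voq[i][o]]

    sw = VOQSwitch(3)
    sw.enqueue(0, 2, "pkt-A")   # input 0 -> output 2
    sw.enqueue(0, 1, "pkt-B")   # input 0 -> output 1: not blocked behind pkt-A
    sw.enqueue(1, 2, "pkt-C")
    print(sw.requests())        # [(0, 1), (0, 2), (1, 2)]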
Episode 5. Scheduling and Traffic Management, Part 2 (Baochun Li, Department of Electrical and Computer Engineering, University of Toronto; Keshav Chapter 9.4, 9.5.1, 13.3.4)
Episode 5. Scheduling and Traffic Management, Part 2. Baochun Li, Department of Electrical and Computer Engineering, University of Toronto. Keshav Chapter 9.4, 9.5.1, 13.3.4. ECE 1771: Quality of Service.
Outline: What is scheduling? Why do we need it? Requirements of a scheduling discipline. Fundamental choices. Scheduling best effort connections. Scheduling guaranteed-service connections.
Scheduling best effort connections: Main requirement is fairness. Achievable using Generalized Processor Sharing (GPS): visit each non-empty queue in turn, and serve an infinitesimal from each queue in a finite time interval. Why is this max-min fair? How can we give weights to connections?
More on Generalized Processor Sharing: GPS is not implementable! We cannot serve infinitesimals, only packets. No packet discipline can be as fair as GPS, but how closely can we come? While a packet is being served, we are unfair to others. Define work(i, a, b) = number of bits transmitted for connection i in time [a, b]. Absolute fairness bound for scheduling discipline S: max (work_GPS(i, a, b) - work_S(i, a, b)). Relative fairness bound for scheduling discipline S: max (work_S(i, a, b) - work_S(j, a, b)).
What is next? We can't implement GPS, so let's see how to emulate it. We want to be as fair as possible, but also have an efficient implementation.
Weighted Round Robin: Round Robin serves a packet from each non-empty queue in turn. It is unfair if packets are of different length or weights are not equal, hence Weighted Round Robin. Different weights, fixed-size packets: serve more than one packet per visit, after normalizing to obtain integer weights. Different weights, variable-size packets: normalize weights by mean packet size, e.g. …
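The weighted round robin recipe at the end of this excerpt (normalize the weights to integers, then serve that many packets per visit) can be written out directly. The sketch below assumes fixed-size packets and invented class names; it follows the normalization idea from the slides rather than any particular implementation.

    from collections import deque
    from functools import reduce
    from math import gcd

    def wrr_schedule(queues, weights, rounds=1):
        """Weighted round robin for fixed-size packets.

        Weights are reduced to the smallest equivalent integers, and each
        non-empty queue is served that many packets per round.
        """
        g = reduce(gcd, weights.values())
        normalized = {name: w // g for name, w in weights.items()}
        sent = []
        for _ in range(rounds):
            for name, credit in normalized.items():
                q = queues[name]
                for _ in range(credit):
                    if not q:
                        break
                    sent.append((name, q.popleft()))
        return sent

    # Hypothetical classes with a 3:1:1 weight ratio
    queues = {c: deque(range(5)) for c in ("premium", "standard", "bulk")}
    print(wrr_schedule(queues, {"premium": 6, "standard": 2, "bulk": 2}, rounds=1))

For variable-size packets the slides' suggestion applies: divide each weight by the class's mean packet size before normalizing, so the shares approximate bandwidth rather than packet counts.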
Proportional Share Scheduling for Uniprocessor and Multiprocessor Systems
Group Ratio Round-Robin: O(1) Proportional Share Scheduling for Uniprocessor and Multiprocessor Systems. Bogdan Caprita, Wong Chun Chan, Jason Nieh, Clifford Stein, and Haoqiang Zheng. Department of Computer Science, Columbia University. Email: {bc2008, wc164, nieh, cliff, hzheng}@cs.columbia.edu.
Abstract: We present Group Ratio Round-Robin (GR3), the first proportional share scheduler that combines accurate proportional fairness scheduling behavior with O(1) scheduling overhead on both uniprocessor and multiprocessor systems. GR3 uses a simple grouping strategy to organize clients into groups of similar processor allocations which can be more easily scheduled. Using this strategy, GR3 combines the benefits of low overhead round-robin execution with a novel ratio-based scheduling algorithm. GR3 introduces a novel frontlog mechanism and weight readjustment algorithm to operate effectively on multiprocessors. GR3 provides fairness within a constant factor of the ideal generalized processor sharing model for client weights with a fixed upper bound and preserves its fairness properties on multiprocessor systems.
… work has been done to provide proportional share scheduling on multiprocessor systems, which are increasingly common especially in small-scale configurations with two or four processors. Over the years, a number of scheduling mechanisms have been proposed, and much progress has been made. However, previous mechanisms have either superconstant overhead or less-than-ideal fairness properties. We introduce Group Ratio Round-Robin (GR3), the first proportional share scheduler that provides constant fairness bounds on proportional sharing accuracy with O(1) scheduling overhead for both uniprocessor and small-scale multiprocessor systems. In designing GR3, we observed that accurate, low-overhead proportional sharing is easy to achieve when scheduling a set of clients with equal pro…